A hospital algorithm designed to predict a deadly condition misses most cases

The biggest electronic health record company in the United States, Epic Systems, claims it can solve a major problem for hospitals: identifying signs of sepsis, an often deadly complication from infections that can lead to organ failure. It’s a leading cause of death in hospitals.

But the algorithm doesn’t work as well as advertised, according to a new study published in JAMA Internal Medicine on Monday. Epic says its alert system can correctly differentiate patients who do and don’t have sepsis 76 percent of the time. The new study found it could make that distinction only 63 percent of the time.
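That “percent of the time” framing is the standard plain-language reading of the area under the ROC curve (AUC): a score of 0.76 means that if the model is handed one septic and one non-septic patient, it ranks the septic patient as higher risk 76 percent of the time. As a minimal sketch of how that statistic is computed (toy labels and scores, not Epic’s model or the study’s data):

```python
# Illustrative only: hypothetical risk scores, not Epic's model or data.
from sklearn.metrics import roc_auc_score

# 1 = patient developed sepsis, 0 = did not (toy labels)
y_true = [1, 0, 0, 1, 0, 1, 0, 0, 0, 1]
# Hypothetical risk scores an alert system might assign to each patient
y_score = [0.80, 0.30, 0.55, 0.50, 0.20, 0.25, 0.70, 0.10, 0.35, 0.90]

# AUC = fraction of (septic, non-septic) pairs the model ranks correctly
auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")  # prints 0.75 for this toy data
```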

An Epic spokesperson disputed the findings in a statement to Stat News, saying that other research showed the algorithm was accurate.

Sepsis is hard to spot early, but starting treatment as soon as possible can improve patients’ chances of survival. The Epic system, like other automated warning tools, scans patient test results for signals that someone could be developing the condition. Around a quarter of US hospitals use Epic’s electronic medical records, and hundreds of hospitals use its sepsis prediction tool, including the health center at the University of Michigan, where study author Karandeep Singh is an assistant professor.

The study examined data from nearly 40,000 hospitalizations at Michigan Medicine in 2018 and 2019. Patients developed sepsis in 2,552 of those hospitalizations. Epic’s sepsis tool missed 1,709 of those cases, around two-thirds of the total, and it identified only 7 percent of the sepsis cases that physicians had missed. The analysis also found a high rate of false positives: when an alert went off for a patient, there was only a 12 percent chance that the patient would actually develop sepsis.
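Those headline numbers follow directly from the counts reported in the study. A quick back-of-the-envelope calculation recovers the tool’s sensitivity and the false-alarm burden implied by the 12 percent positive predictive value:

```python
# Arithmetic on figures reported in the JAMA Internal Medicine study.
total_sepsis = 2_552  # hospitalizations in which patients developed sepsis
missed = 1_709        # sepsis cases the Epic tool failed to flag

caught = total_sepsis - missed
sensitivity = caught / total_sepsis
print(f"Cases caught: {caught} ({sensitivity:.0%} sensitivity)")  # 843 (33%)

ppv = 0.12  # chance that an alerted patient actually developed sepsis
false_alarms_per_true_alert = (1 - ppv) / ppv
print(f"False alarms per true alert: {false_alarms_per_true_alert:.1f}")  # ~7.3
```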

Part of the problem, Singh told Stat News, seemed to be in the way the Epic algorithm was developed. It defined sepsis based on when a doctor would submit a bill for treatment, not necessarily when a patient first developed symptoms. That means it’s catching cases where the doctor already thinks there’s an issue. “It’s essentially trying to predict what physicians are already doing,” Singh said. It’s also not the measure of sepsis that researchers would ordinarily use.
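To see why that labeling choice undermines a prediction tool, consider a deliberately contrived simulation (hypothetical throughout, not Epic’s actual pipeline): if the training label is “a clinician billed for sepsis treatment,” then any feature recording that the clinician has already acted, such as an antibiotic order, becomes a near-copy of the label, and a model trained on it looks impressive while offering no early warning.

```python
# Hypothetical illustration of label leakage; not Epic's model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
# Billing-based label: sepsis is "present" once the clinician bills for it.
billed_for_sepsis = rng.random(n) < 0.10
# By billing time the clinician has usually already ordered antibiotics,
# so this feature agrees with the label 95% of the time.
antibiotics_ordered = np.where(rng.random(n) < 0.95,
                               billed_for_sepsis, ~billed_for_sepsis)

X = antibiotics_ordered.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, billed_for_sepsis)
scores = model.predict_proba(X)[:, 1]
print(f"Apparent AUC: {roc_auc_score(billed_for_sepsis, scores):.2f}")  # ~0.95
# High score, but the model has only learned what the physician already did.
```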

Tools that mine patient data to predict how a patient’s health could change are common and can be useful for doctors. But they’re only as good as the data they’re built on, and they should be subject to outside evaluation. When researchers scrutinize tools like this one, they sometimes find holes: one algorithm used by major health systems to flag patients who need special attention, for example, was biased against Black patients, a 2019 study found.

Epic rolled out another predictive tool, called the Deterioration Index, during the early days of the COVID-19 pandemic. It was designed to help doctors decide which patients should move into intensive care and which could be fine without it. The pandemic was an emergency, so hospitals around the country started using it before it was subject to any sort of independent evaluation. Even now, there has been limited research on the tool. One small study showed it could identify high- and low-risk patients but might not be useful to doctors. There could be unforeseen problems or biases in the system that are going unnoticed, Brown University researchers warned in Undark.

If digital tools are going to live up to their potential in healthcare, companies like Epic should be transparent about how the tools are made, and the tools should be regularly monitored to make sure they’re working well, Singh said on Twitter. These tools are becoming more and more common, so these kinds of issues aren’t going away, Roy Adams, an assistant professor at Johns Hopkins School of Medicine, told Wired. “We need more independent evaluations of these proprietary systems,” he said.
