The Once and Future Oracle


An old approach to artificial intelligence never really left us, and may rise again

by Nicholas Genes, MD, PhD

 

Before smartphones and apps, before the World Wide Web or graphical user interfaces, scientists and engineers tried to build computers that could out-diagnose doctors. These “Expert Systems” were an early approach to artificial intelligence, and their successes and failures have particular relevance today.

For MYCIN, the best known of these systems, the premise was simple: teach a machine the necessary “rules” about infectious disease so that, if a user answered enough questions about a sick patient, the machine would ultimately arrive at the correct diagnosis and recommend appropriate antibiotics.

MYCIN was developed in the 1970s by one of the founders of clinical informatics, Ted Shortliffe, while he was a computer science student at Stanford. Its predictions for likely causes of bacteremia and meningitis were based on history, exam and lab findings, and incorporated unknowns and degrees of certainty, well before culture results were available. And MYCIN worked better than doctors: blinded ID faculty evaluators rated MYCIN’s choices for antibiotics as correct 65% of the time, beating the human specialists’ ratings of 42.5–62.5%.
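
For a flavor of how this worked, here is a minimal sketch, in Python rather than MYCIN’s original Lisp, of rule firing with MYCIN-style certainty factors. The combining function follows the published MYCIN formulas, but the two rules and their numbers are invented for illustration and are not from MYCIN’s actual knowledge base.

    # Sketch of MYCIN-style reasoning with certainty factors (CFs).
    # The rules and CF values below are invented for illustration; MYCIN's
    # real knowledge base held hundreds of expert-authored rules.

    def combine_cf(a, b):
        """Merge two CFs (each in [-1, 1]) for the same hypothesis,
        using MYCIN's combining formulas: positive evidence reinforces,
        negative evidence detracts."""
        if a >= 0 and b >= 0:
            return a + b * (1 - a)
        if a < 0 and b < 0:
            return a + b * (1 + a)
        return (a + b) / (1 - min(abs(a), abs(b)))

    # Each rule: (findings required to fire, hypothesis, CF contributed).
    RULES = [
        ({"gram_negative", "rod", "anaerobic"}, "bacteroides", 0.6),
        ({"gram_negative", "rod"}, "e_coli", 0.4),
    ]

    def diagnose(findings):
        belief = {}
        for premises, hypothesis, cf in RULES:
            if premises <= findings:  # all premises were observed
                belief[hypothesis] = combine_cf(belief.get(hypothesis, 0.0), cf)
        return belief

    print(diagnose({"gram_negative", "rod", "anaerobic"}))
    # {'bacteroides': 0.6, 'e_coli': 0.4}

In the real system, the findings themselves came from the long interactive question-and-answer sessions described below, and each carried its own degree of certainty.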

MYCIN’s edge was its memory and its methodical nature: it never forgot a detail or overlooked a disease, and it never jumped to conclusions before systematically evaluating all the facts. But these advantages were also shortcomings. The knowledge base of hundreds of rules took a long time to develop and was hard to maintain, and each diagnosis required the user to answer many dozens of questions.

Other systems, like INTERNIST-I (modeled after Dr. Jack Myers’ ranking algorithms for generating differential diagnoses in internal medicine), tried to expand the scope of what expert systems could approach, with mixed results. These were time-consuming solutions that didn’t scale well. Expert systems remain useful for constrained domains like EKG interpretation, but they never proved practical for routine diagnosis. Critics lamented the “Greek oracle” proclamations of these systems, which relegated the physician to the role of passive data-entry clerk instead of leveraging their knowledge and judgment.

Looking back, the name “Expert System” expresses a naive confidence in the power of computers, and an appeal to authority, that seems dated. Today’s Clinical Decision Support tools, baked into our electronic health records, sound meek and apologetic by comparison. But these CDS tools are built on rules engines similar to those of the old expert systems, and even though popups about core measures or irrelevant drug interactions can be annoying, they still represent progress. The system is integrated into workflows and monitors physician activity in real time, trying to steer us in a direction adherent to guidelines and best practices.
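
To see the family resemblance, consider a toy drug-interaction check of the kind a CDS rules engine might run at order entry. This is a hypothetical sketch: the alert message and function names are invented and do not reflect any vendor’s implementation, though the warfarin–TMP/SMX interaction itself is real.

    # Hypothetical CDS-style rule, in the spirit of the older expert
    # systems: an if-then check that fires an alert during order entry.

    INTERACTIONS = {
        frozenset({"warfarin", "trimethoprim_sulfamethoxazole"}):
            "Interaction: increased bleeding risk; consider checking INR.",
    }

    def check_order(new_drug, active_meds):
        """Return alert messages triggered by adding new_drug to active_meds."""
        alerts = []
        for med in active_meds:
            message = INTERACTIONS.get(frozenset({new_drug, med}))
            if message:
                alerts.append(message)
        return alerts

    print(check_order("trimethoprim_sulfamethoxazole", ["warfarin", "metoprolol"]))

The if-then logic is the same as MYCIN’s; what changed is that the rule now watches the physician’s own orders in the background rather than interrogating a user.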

With the arrival of “big data” and personalized medicine, it’s likely CDS algorithms will get more sophisticated, with tailored recommendations based on a patient’s demographic, historical and genetic information. IBM has touted Watson’s ability to absorb rules from the medical literature, as well as a hospital network’s own logs of data and outcomes, to guide physician decision-making. Ubiquitous advertising for Watson is raising public expectations of what physicians and their tools can accomplish.

It’s not clear that Watson, or other insights gleaned from big data, will ever live up to expectations. But prominent institutions are already investing in systems to synthesize health data and guide physician behavior, and the role of clinical decision support tools will only continue to expand into more aspects of care. Ultimately we may end up with CDS algorithms so complex that physicians won’t be able to piece together the specific reasons why a recommendation was made. If that day comes, it would represent the unexpected triumph of the expert systems of old: we’ll have our “Greek oracle” issuing pronouncements, after all.
