
AI in Healthcare: Keep in Mind the Applications

By Proscia | June 22, 2017

It can be hard to get through the week without seeing something written about the potential of artificial intelligence (AI) in healthcare. A Google search for “AI+healthcare” yields nearly 36 million results. While this growing attention is a good thing, those of us in industry and academia driving new AI developments need to be mindful of our words. There is a time and place for broad, sweeping language, but we can’t let buzzwords distract from or distort the meaningful progress being made in very specific applications across healthcare.

Bruce Liang, chief information officer of Singapore’s Ministry of Health, was quoted in a CNBC article as saying “A.I. could play a big role in supporting prevention, diagnosis, treatment plans, medication management, precision medicine and drug creation.” Most people with some familiarity with artificial intelligence agree with Mr. Liang—myself included. However, the anthropomorphization of artificial intelligence (“A.I. could play…”) is something to watch out for.

Not to pick on Mr. Liang in particular, but “AI” is a term with both positive and negative cultural connotations. Countless fictional works tell tales of runaway, evil, or otherwise uncaring artificially intelligent beings. Elon Musk, one of the most prolific and well-recognized innovators of this century, has cautioned us that AI “could wipe out humanity.” When “AI” becomes a sort of mantra, it resembles the black monolith of Kubrick’s 2001, ominous and full of alien power. Instead of bundling AI into a singular, ever-growing juggernaut of unknown capability, we ought to focus on the tangible, quantifiable improvements AI applications can make in patient care. There’s no doubt it’s convenient to refer to an extremely diverse set of “smart” technologies by the collective term “AI,” but I think we are better served by talking about specific applications.

A recent Accenture report breaks AI applications in healthcare into ten domains, ranging from “robot-assisted surgery” to “fraud detection” to “automated image diagnosis.” These high-level groupings are a good start at pulling apart the AI monolith that old and new media have crafted for us, inadvertently or not.

A closer look at just one of those domains, “automated image diagnosis” (which I think is more usefully described as “image-based analytics”), yields a dizzying array of applications and technologies at various stages of maturity.

Enlitic and IBM Watson Health are two ventures applying deep learning to radiology, where the technology promises to assist in patient triage, screening, and clinical decision support. Enlitic claims to improve diagnostic efficiency and accuracy using advanced image analysis.

In the realm of pathology, a fast-growing international challenge called CAMELYON has attracted the attention of both academic researchers and tech companies (including Google) eager to tackle the difficult problem of detecting and staging metastatic breast cancer. In the most recent phase of the competition, Proscia and other leading participants demonstrated either substantial or almost perfect agreement with a panel of expert physicians.
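
For readers unfamiliar with the terminology, “substantial” and “almost perfect” are the standard Landis and Koch descriptors for ranges of the kappa statistic, a measure of inter-rater agreement corrected for chance. Here is a minimal sketch of how such agreement can be quantified; the labels below are hypothetical placeholders standing in for algorithm and expert-panel calls, not competition data:

```python
# Illustrative sketch: quantifying rater agreement with Cohen's kappa.
# The labels below are hypothetical (e.g., metastasis present/absent calls);
# they do not come from CAMELYON or any real study.
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

def describe(kappa):
    """Landis & Koch descriptors commonly applied to kappa values."""
    if kappa > 0.80:
        return "almost perfect"
    if kappa > 0.60:
        return "substantial"
    if kappa > 0.40:
        return "moderate"
    return "fair or worse"

algorithm = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg"]
panel     = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg"]
k = cohen_kappa(algorithm, panel)
print(f"kappa = {k:.2f} ({describe(k)} agreement)")
```

With the made-up labels above, the script prints kappa = 0.62 (substantial agreement); the higher the kappa, the closer an algorithm’s calls track the expert panel beyond what chance alone would produce.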

While such results highlight the potential for AI to augment medical practice, further validation is required to draw firm conclusions on clinical efficacy. However, some technologies are more mature. Pittsburgh-based Cernostics has performed multi-institutional, multi-year studies on its image analysis-driven assay for Barrett’s esophagus, a potential precursor to esophageal cancer. Evidence from studies shows that Cernostics’ technology “has stronger prognostic power than current clinical variables, including the pathologic diagnosis.”

As builders of these exciting new technologies, it is our responsibility to communicate about them clearly. We can indulge in buzzwords occasionally, but we cannot let them displace the quantitative results we see in hyper-specific domains.
