Paging Dr. AI

Why AI Doesn’t Threaten Doctors

Neel V. Patel
12.11.17 9:00 AM ET

Photo Illustration by The Daily Beast

Here’s one reason why artificial intelligence is definitely in our medical future: “A.I. doesn’t get tired,” Roberto Novoa, a dermatologist based at Stanford, said. “You can show them literally thousands or millions of images, and there’s little additional cost to each one that’s analyzed.”

That’s huge for a profession that has long been criticized for its physicians, residents, nurses, and other support staff running on next to no sleep. Even with sleep, a human radiologist may misread an image: They might get fatigued after several hours, or be inclined to identify something as, say, a particular melanoma after having identified it several hundred times over.

The algorithm, on the other hand, will give you the same answer it would have given at any other time, whether it’s 2:00 a.m. on Saturday or 3:00 p.m. on a Wednesday.

But while the cold consistency of A.I. makes for a more reliable medical diagnosis, it can’t replace a human. After all, we love and embrace physicians who approach their work with empathy for patients and a warm understanding of what their fellow human beings are going through. But humans are flawed. No amount of empathy will offset the pitfalls of human error, and in the medical profession that can lead to literal life-or-death consequences.

That’s especially true in diagnosing cancer. To accurately identify a tumor, radiologists have to spend quite a bit of time poring over various images of a patient’s tissues and organs, looking for lesions or other signs of cancer that are often incredibly subtle. Like any job, the task gets easier with experience, but it can be undermined by the demands of a single day or week.

Handing off some duties of medicine to artificial intelligence might make some people nervous, but for many physicians around the world, the role of A.I. in medicine has already moved from a question of if to a question of when. Novoa is one of them.

A few years ago, Novoa was inspired by how well algorithms were being used to classify dog breeds. “I thought, if they could do this so well for dog breeds, what could they do for [diagnosing] skin cancer?” he told The Daily Beast. He reached out to folks at Stanford’s computer science department, and soon they were training an A.I. system to identify the presence of skin cancer based on a database of 129,000 images of benign and malignant lesions.

Novoa says an algorithm can learn to more sharply pick up on subtle patterns it sees over large datasets — which doesn’t just make diagnosing more effective, but could also contribute to the larger body of knowledge we have about tumors.

The algorithm, a pretty representative model for how A.I. in medical diagnostics can work, was tested against a group of board-certified dermatologists, using another set of biopsy images that already had a confirmed positive or negative diagnosis of cancer. Novoa and his group compared the performance of the two, and “the algorithm performed as well as the dermatologists,” he said. Although the findings, published in Nature in February, are just a proof-of-concept study using retrospective data, the next step is to train and augment the system so that it’s in a position to actively diagnose skin cancer in new patients.
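The head-to-head test Novoa describes amounts to scoring both the algorithm and the dermatologists against the biopsy-confirmed ground truth. A minimal sketch of that kind of comparison — using made-up labels for illustration, not the study’s data — might compute sensitivity and specificity for each:

```python
# Toy sketch of scoring diagnoses against biopsy-confirmed labels.
# All data below is invented; 1 = malignant, 0 = benign.

def sensitivity_specificity(predictions, labels):
    """Sensitivity: share of true malignancies caught.
    Specificity: share of benign lesions correctly cleared."""
    tp = sum(1 for p, l in zip(predictions, labels) if p == 1 and l == 1)
    tn = sum(1 for p, l in zip(predictions, labels) if p == 0 and l == 0)
    positives = sum(labels)
    negatives = len(labels) - positives
    return tp / positives, tn / negatives

biopsy = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]          # ground truth from pathology
algorithm = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]        # hypothetical A.I. calls
dermatologist = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]    # hypothetical human calls

print(sensitivity_specificity(algorithm, biopsy))      # → (0.75, 0.8333333333333334)
print(sensitivity_specificity(dermatologist, biopsy))  # → (0.75, 0.8333333333333334)
```

In this toy example the two score identically — the “performed as well as the dermatologists” result is a statement about exactly these kinds of metrics.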

Certainly the algorithm, like any machine-based tool, carries its own set of problems which need rectifying. “Any algorithm can learn to do its job better, but like humans, they might also learn biases of their own,” Novoa admitted.

He and his colleagues had one such problem in their study with rulers. When dermatologists are looking at a lesion that they think might be a tumor, they’ll break out a ruler—the type you might have used in grade school—to take an accurate measurement of its size. Dermatologists tend to do this only for lesions that are a cause for concern. So in the set of biopsy images, if an image had a ruler in it, the algorithm was more likely to call a tumor malignant, because the presence of a ruler correlated with an increased likelihood a lesion was cancerous. Unfortunately, as Novoa emphasizes, the algorithm doesn’t know why that correlation makes sense, so it could easily misinterpret a random ruler sighting as grounds to diagnose cancer.
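This kind of confounding is easy to reproduce. Here is a toy sketch — entirely invented numbers, not the Stanford data or model — in which a simple logistic-regression classifier is trained on lesions where a “ruler present” flag correlates with malignancy. The model ends up leaning far harder on the ruler than on the genuinely diagnostic feature:

```python
import math
import random

random.seed(0)

def make_example():
    # Invented data: half the lesions are malignant.
    malignant = random.random() < 0.5
    # Dermatologists mostly photograph a ruler next to worrying lesions,
    # so the ruler flag correlates strongly with the label.
    ruler = 1.0 if random.random() < (0.9 if malignant else 0.1) else 0.0
    # A weak genuinely diagnostic signal, e.g. border irregularity.
    irregularity = random.gauss(1.0 if malignant else 0.0, 2.0)
    return [1.0, ruler, irregularity], 1.0 if malignant else 0.0

data = [make_example() for _ in range(2000)]

# Plain logistic regression trained by stochastic gradient descent.
# Weights: [bias, ruler, irregularity].
w = [0.0, 0.0, 0.0]
for _ in range(100):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        for i in range(len(w)):
            w[i] += 0.05 * (y - p) * x[i]

# The spurious ruler feature ends up dominating the real one.
print(f"ruler weight: {w[1]:.2f}, irregularity weight: {w[2]:.2f}")
```

Nothing in the training step knows which feature is medically meaningful; the model simply rewards whatever correlates with the labels — which is why Novoa’s team had to spot the ruler bias themselves.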

That bias, and others like it, will need to be culled in order for A.I. to truly be a popular approach in medical diagnostics. “These technologies are a bit like the driverless car, in that they have to perform extremely well in order to be available to the general public,” Novoa said. “People’s lives are tied to something that will diagnose cancer.”

One way to offset those biases is to ensure an A.I. is working with more than just images to make diagnoses. Manisha Bahl, a physician at Massachusetts General Hospital, is the lead author of a recent study published in Radiology that used an A.I. system to predict whether a high-risk lesion identified through a breast cancer biopsy after a mammogram is truly malignant. Currently, 90 percent of such lesions that lead to surgery turn out to be benign at the time of surgery. Bahl and her team developed a machine learning model that accurately identified 97 percent of malignant breast cancers, and decreased the number of benign surgeries by more than 30 percent.

Their model — which was designed to consider about 20,000 data elements simultaneously when assessing a lesion — was actually not trained on any images at all, but rather textual information about such images. “For a model, that’s pretty powerful anyways,” she said. Bahl hopes that a subsequent iteration of the platform trained through actual imaging data could prove even more beneficial.

Another major advantage to A.I. is the technology’s portability. Novoa and his team, for instance, are seeking to develop their algorithm as a smartphone app which could be used by practically any physician around the world. George Shih, a physician and professor at Weill Cornell Graduate School of Medical Sciences, is the co-founder of the A.I. diagnostics company MD.ai. The company, unique for being led by physicians rather than computer scientists, recently finished in the top 10 in a data science contest to develop machine learning platforms that could diagnose lung cancer. “Our vision is to be able to do all this collaboration and A.I.-building on the web, so all our tools are web-based,” Shih said. He likens it to something like Google Docs, in which multiple groups can work at the same time to advance the system and refine it. This is especially useful in places around the world where a radiologist or other imaging specialist isn’t available.

What does this mean for the future of human doctors? Novoa, for one, isn’t fretting too much. “These technologies are going to shape the way we make diagnoses in the future, but I don’t expect them to replace humans,” he said. “A hundred years ago, the focus of neurology was on the localization of a lesion. So neurologists were focused on finding the problem in the patient, but they couldn’t do a whole lot for the patient. And now we have CAT scans and MRIs, and this technology has dramatically improved the ability to localize a lesion. But it hasn’t eliminated the need for a neurologist. Technologies haven’t eliminated the need for doctors themselves.” Certain physician duties might shift, but human doctors won’t go away — their skillsets will simply expand in areas that cannot be taken over by machines.

“There’s a lot of skepticism by physicians in my field,” Shih said. “And it requires us to find ways to allow physicians to become comfortable and validate these tools. We should allow these tools to become our assistants, to make us become more accurate and efficient and confident in our diagnoses.”

It never hurts to have two pairs of eyes instead of one, even if one of those pairs belongs to a machine.