Ever since algorithms began distinguishing patterns faster and better than humans, computers have been making doctors’ lives easier and diagnoses more accurate. But widely used tools like automated cell counters, which can quickly point to illnesses like malaria and leukemia by getting a head count of the different kinds of blood cells, are beginning to look quaint next to the deep learning and neural networks coming online. Today, hospitals can outfit their existing computer systems with a $1,000 graphics processor and boost their capacity to as many as 260 million images per day. That’s roughly equivalent to all the MRIs, CT scans, and other images that all the radiologists in America look at each day.
Unleashing that kind of AI on the medical world’s mountains of patient data could speed up diagnosis and get patients on the path to recovery much sooner. But it also promises to drastically change the job description for doctors who work as information specialists–those whose primary task is deciphering diagnoses from images. Doctors who specialize in image interpretation–namely pathologists, radiologists, and dermatologists–are the most vulnerable. “These three areas will be hit first,” says Eric Topol, director of the Scripps Translational Science Institute and a leader in the NIH’s Precision Health Initiative. “Then we’ll start to see it across the board for medicine.”
Take skin cancer. Each year, five million moles, freckles, and skin spots on Americans turn out to be malignant, costing the healthcare system $8 billion. Catching deadly cancers like melanoma early makes a huge difference–survival rates drop from 98 percent to as low as 16 percent if the disease progresses to the lymph nodes.
Dermatologists use a variety of magnifying tools to identify potentially dangerous blemishes, and because the outcomes can be so disastrous, they tend to be a cautious bunch. For every 10 lesions surgically biopsied, only one melanoma gets discovered. That’s a lot of unnecessary knifing.
So physicians are now turning to artificial intelligence to tell the difference between innocuous and potentially life-threatening blotches. The hope is that computer vision, with its ability to make thousands of tiny measurements, will catch cancers early enough and with enough specificity to cut down on the amount of cutting doctors do. And by initial measures, it’s well on its way. Computer scientists and physicians at Stanford University recently teamed up to train a deep learning algorithm on 130,000 images of 2,000 skin disorders. The result, the subject of a paper out today in Nature, performed as well as 21 board-certified dermatologists at picking out deadly skin lesions.
The researchers started with a Google-developed algorithm primed to distinguish cats from dogs. Then they fed it images from medical databases and the web and taught it to differentiate between a malignant squamous cell carcinoma and a patch of scratchy dry scalp. Like an outstanding dermatology resident, the more images it saw, the better it got. “It was definitely an incremental process, but it was exciting to see it slowly be able to actually do better than us at classifying these lesions,” says Roberto Novoa, the Stanford dermatologist who first contacted the school’s AI group about collaborating on skin cancer.
Stanford’s robo-derm may be pure research at this point, but there are plenty of AI startups (more than 100) and software giants (Google, Microsoft, IBM) working to get deep learning into hospitals, clinics, and even smartphones. Last year, a team of Harvard and Beth Israel Deaconess researchers won an international imaging competition with a neural network that could spot metastatic breast cancer simply by looking at pathology slide images from lymph nodes. The researchers are now commercializing the technology through a spinoff called PathAI. IBM’s artificial intelligence engine, Watson, has also been working on identifying skin cancers, when it’s not investigating CT scans for blood clots or watching for abnormal heart wall motion in echocardiograms. With 30 billion images and counting, Watson will soon have specialized knowledge in all the big imaging fields–radiology, pathology, and now dermatology–setting it up to be either a doctor’s best friend or biggest nemesis.
The key to avoiding being replaced by computers, Topol says, is for doctors to allow themselves to be displaced instead. “Most physicians in these fields are overtrained to do things like screen images for lung and breast cancers,” he says. “Those tasks are ideal for delegating to artificial intelligence.” When a computer can do the job of a single radiologist, the job of the radiologist expands–perhaps to monitoring multiple AI systems and using research results to build more comprehensive treatment plans. Less time reporting on X-rays, more time talking patients through options.
That’s exactly what cloud-based medical imaging company Arterys is doing for cardiologists, with an application that uses AI to quantify blood flow through the heart. The algorithm, which is based on about 10 million rules, uses MRI images to draw contours of each of the heart’s four chambers, precisely measuring how much blood they move with each contraction. Today, cardiologists have to draw these contours by hand–especially tricky with the peanut-shaped right ventricle. Doctors usually need 30 to 60 minutes to calculate the volume of blood transported with each pump, but the Arterys AI comes up with the answer in 15 seconds.
Earlier this month the FDA cleared the company to market its product, and with a partnership with GE Healthcare to get the Arterys system into GE MRI machines, doctors could be using it as soon as this year. The decision opens up the path for more applications of deep learning AI to get into physicians’ hands as fast as companies can train them. Whether or not doctors use them will be the first true test of the technology’s potential to improve patient care.