How Computer Vision & Natural Language Processing are Revolutionising Healthcare
26th October 2018
By John Stephenson
Machine Learning & Deep Learning
Artificial intelligence is transforming healthcare. AI companies are using machine learning algorithms to understand drug chemistry and genetic markers. They’re offering online consultations using predictive analytics, and they’re incorporating test results and sensor data to give real-time patient status updates to medical practitioners.
Among the most exciting areas for AI in healthcare, however, are computer vision and natural language processing (NLP). At the end of September, we had the opportunity to attend and exhibit at the ReWork Deep Learning Summit and Deep Learning in Healthcare in London, where we heard presentations from Mark Gooding, Mirada Medical; Sarah Culkin, NHS; Ahmed Serag, Philips; and Trevor Back, DeepMind Health.
We were impressed by the applications of computer vision and natural language processing already in use in healthcare today. What the presenters shared made us even more excited for a near future in which computer vision and NLP play an increasingly important role in helping doctors, patients, and researchers alike discover and fight disease and injury. In this article, we'll share the top current healthcare applications of computer vision and NLP, and what you can expect in the near future.
Computer Vision & Natural Language Processing are changing the face of healthcare.
Healthcare & AI: A Great Match
AI is good at identifying patterns, making predictions, and analysing complex situations, and healthcare demands all three. Well-implemented AI algorithms can literally save lives when they help a doctor notice something, point out a mistake, improve drug delivery, or help train medical experts. Natural language processing and computer vision are the areas of cutting-edge AI with the greatest potential in healthcare.
NLP helps computers interpret and respond to human language. Aside from visual observation, one of the key inputs a doctor relies on to make a diagnosis or narrow down the possibilities is the patient's description of their symptoms, which is why NLP can deliver major benefits in healthcare. If NLP algorithms handle the initial screening questions, doctors can spend less time triaging and gathering background information and instead get straight to ordering tests and investigating specific concerns.
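As a toy illustration of that first screening pass (not how Babylon or any real clinical system works), a chatbot might scan a patient's free-text answer for known symptom phrases and surface the corresponding clinical terms to the doctor. The lexicon and function below are entirely hypothetical; production systems use trained language models rather than keyword lookup.

```python
# Toy sketch: extract known symptom phrases from a patient's free-text answer.
# Real clinical NLP uses trained models; this keyword lookup is illustrative only.

SYMPTOM_LEXICON = {
    "headache": "cephalalgia",
    "shortness of breath": "dyspnoea",
    "chest pain": "chest pain",
    "fever": "pyrexia",
}

def extract_symptoms(patient_text: str) -> list[str]:
    """Return the clinical terms for any lexicon phrases found in the text."""
    text = patient_text.lower()
    return [term for phrase, term in SYMPTOM_LEXICON.items() if phrase in text]

print(extract_symptoms("I've had a fever and shortness of breath since Monday"))
# → ['dyspnoea', 'pyrexia']
```

The doctor then sees a structured list of flagged symptoms instead of starting the triage conversation from scratch.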
Healthcare also relies heavily on various types of images and scans for everything from diagnosis to new drug discovery, and this is where computer vision comes into its own. Often, these images are grainy, hard to distinguish, or require recognising very small, specific patterns. Computers can match, and often exceed, human performance on these kinds of image analysis tasks, helping doctors and researchers get faster, more accurate results from tests, scans, and screenings.
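The pattern recognition behind these systems is built from convolutions: small filters slid across a pixel grid that respond strongly wherever a particular local pattern appears. The sketch below applies a single hand-written edge-detecting kernel to a made-up "scan"; real medical imaging models stack thousands of learned kernels, but the building block is the same.

```python
# Toy sketch: a single 3x3 convolution over a grayscale "scan", the basic
# building block of the convolutional networks used in medical imaging.
# The image and kernel here are illustrative, not real scan data.

def convolve(image, kernel):
    """Valid (no-padding) 2D convolution of a 3x3 kernel over a pixel grid."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(3) for dj in range(3))
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector: responds strongly where pixel intensity jumps,
# e.g. at the boundary of a bright region in a scan.
EDGE_KERNEL = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

scan = [[0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 9, 9]]

print(convolve(scan, EDGE_KERNEL))
# → [[27, 27], [27, 27]]  (large values mark the bright/dark boundary)
```

A trained network learns which kernels to use, so that "very small, specific patterns" such as lesion boundaries light up the same way this edge does.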
Examples of Computer Vision and Natural Language Processing in Healthcare
IBM Watson uses Deep Learning to diagnose patients
Computer vision has shown major promise in identifying cancerous cells and tumours from images and biopsy results.
So far, the biggest breakthroughs have come in dermatology, where a computer can analyse an image of a person’s skin much more quickly and thoroughly than a dermatologist doing an in-person exam. Recently, computer vision algorithms have proven themselves more effective at identifying potential skin cancer tumours than doctors.
Similar breakthroughs have come in breast cancer screening, where computer vision applied to mammogram images can accurately identify tumours. Lung CT scans processed through computer vision algorithms have shown similar promise at identifying lung cancer.
One of the presenters we saw at ReWork, from DeepMind Health, shared some of the success they’ve had identifying head and neck cancer in collaboration with the Radiotherapy Department at University College London Hospitals. Initial testing shows DeepMind’s algorithm can identify head and neck cancer with the same accuracy as a trained doctor in a fraction of the time.
From Testing to Treatment, Faster
Another promising application of NLP and computer vision in healthcare is remote diagnosis and faster test results. If patients can be seen and tested more quickly, preventative medicine becomes more effective at mitigating the consequences of disease.
Babylon Health is one British startup working on rapid diagnosis. They’ve developed an app and NLP algorithms that let a chatbot ask you the same questions a doctor would ask at an in-person examination. The app does not provide an official diagnosis; instead, it uses speech and language processing to pull out your symptoms and forwards your profile information to a doctor. The doctor uses the processed information from the app to provide a fast diagnosis and can even chat with the patient via video call in the app.
Another company, Medopad, has been working on similar issues but with a focus on providers. Their intelligent apps provide doctors with supplemental information during the diagnosis process. Recently, both Babylon Health and Medopad have partnered with the Chinese company Tencent, combining their machine learning algorithms with Tencent’s computer vision applications that can identify symptoms from user photos. The major promise of computer vision here is triage: automatically weeding out obviously non-symptomatic cases so that doctors can focus on reviewing the images, and ultimately seeing the patients, that are symptomatic.
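That triage pattern, clearing obvious negatives automatically and queueing everything else for a clinician, can be sketched as a simple confidence threshold over a model's scores. The case IDs, scores, and threshold below are hypothetical; in practice the clear-below threshold would be set very conservatively and validated clinically.

```python
# Toy sketch of confidence-threshold triage: cases a model scores as clearly
# non-symptomatic are auto-cleared; everything else goes to a clinician.
# Scores are hypothetical model outputs in [0, 1].

def triage(cases, clear_below=0.1):
    """Split (case_id, symptom_score) pairs into auto-cleared and for-review lists."""
    cleared = [cid for cid, score in cases if score < clear_below]
    review = [cid for cid, score in cases if score >= clear_below]
    return cleared, review

cases = [("A", 0.02), ("B", 0.85), ("C", 0.07), ("D", 0.40)]
cleared, review = triage(cases)
print(cleared)  # ['A', 'C'] — never shown to the doctor
print(review)   # ['B', 'D'] — prioritised for human review
```

The value of the approach is that the doctor's limited time is spent entirely on the cases most likely to need it.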
Even after a visit to the doctor, NLP can help patients understand their diagnosis and their options for treatment and prevention of future problems. NLP algorithms can provide research-backed advice tailored to the patient’s education level in much greater depth than a doctor ever could at the bedside.
Another highly-promising application of computer vision in healthcare is for research. Using anonymised scans of past patients, researchers, medical device manufacturers, and drug companies can identify trends and save time and money in the clinical trials phases of research. Computer vision promises to accelerate the identification of trends in patient images, making connections that would be time-consuming, if not impossible, for human researchers to discover on their own.
Identifying patterns in injuries and disease progression is key to discovering treatments and learning how to prevent diseases in the first place. On this front, Benevolent AI is one company leading the charge into a new AI-powered world of medical research. Their natural language processing algorithms analyse the world’s research papers and link related papers together for researchers, with a reach and depth that wasn’t feasible before AI. The next step is applying this linking philosophy to research images, drug molecules, and other visual models to accelerate and contextualise healthcare research even further.
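The simplest version of "linking related papers" is measuring how similar their text is. The sketch below uses cosine similarity over bag-of-words vectors with invented paper snippets; Benevolent AI's production pipeline is far more sophisticated (knowledge graphs, trained embeddings), but this conveys the core idea of connecting documents a human would never have time to cross-read.

```python
# Toy sketch of linking related papers by text similarity. Production systems
# use trained embeddings and knowledge graphs; bag-of-words cosine similarity
# is the simplest version of the idea. Paper snippets are invented.
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts' word-count vectors (0.0–1.0)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

papers = {
    "p1": "kinase inhibitor reduces tumour growth in mice",
    "p2": "novel kinase inhibitor trial shows tumour regression",
    "p3": "survey of hospital appointment scheduling software",
}

# Link paper p1 to its most textually similar neighbour.
best = max((pid for pid in papers if pid != "p1"),
           key=lambda pid: cosine_similarity(papers["p1"], papers[pid]))
print(best)  # 'p2' — the other kinase-inhibitor paper, not the software survey
```

Run across millions of papers, the same idea surfaces connections that would be time-consuming, if not impossible, for a human researcher to find alone.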
3D visualisation of patients enables surgeons to be better prepared for surgery
At the intersection of computer vision and augmented reality is surgical simulation and surgical assistance technology. This rapidly developing field helps surgeons train for and make decisions during complicated surgeries, including laparoscopic surgeries where surgeons only have camera images to rely on.
One company paving the way in this space is Touch Surgery. Their mobile app allows anyone in the world to learn and prepare for surgery based on cutting-edge best practices with more than 100 surgical simulations across fourteen specialities. Touch Surgery’s app already has over 1.5 million users, and new hires and partnerships in computer vision and augmented reality will allow Touch Surgery’s training to become even more immersive.
As computer vision improves in its recognition capacity, surgeons might be able to use augmented reality in real-life surgeries. They could receive guidance, warnings, and updates in real time based on what the computer vision algorithm sees in the operating room.
Computer vision and natural language processing clearly hold great potential for improving healthcare. Doctors rely on images, scans, in-person vision, the patient’s responses, and medical research to make their diagnoses. With the help of computer vision and NLP, those diagnoses can come more quickly and comprehensively, leading to faster, higher quality healthcare for everyone.
Feel free to reach out to me at [email protected] if you would like to discuss anything from this article. At Logikk, we engage the exceptional humans who build these applications of AI in healthcare.