The future of voice AI in healthcare


Famous social media influencer Gary Vaynerchuk says, “The future belongs to voice.” Period. Look at all the AI (artificial intelligence)-driven assistants around us, from Alexa to Google Assistant: there is an inherent convenience in simply saying it out loud and having a voice-based conversation with your ‘virtual’ assistant, rather than typing commands or selecting from a drop-down menu. Right?

We all know that healthcare needs to be digitised to reach the next level of patient care. Technologies like AI and blockchain need to be integrated into existing healthcare systems to make them more efficient. But these technologies can only work if all our processes are digitised first.

Digitisation of healthcare processes will surface new insights and speed up research and innovation in the field. It will make the sector more efficient for all players, especially and most importantly for the three Ps of healthcare: the patient, the provider (doctors), and the payer (insurance companies). Data is the fuel needed to power all this innovation.

However, manual digitisation is a herculean task, to say the least. On top of that, medical vocabulary is not something any data entry operator can understand and work with. Errors and mistakes of all sorts will creep in if a person from a non-medical background is assigned to digitise healthcare records, from medical notes to doctors’ orders to ICD-10 codes for insurance companies.

Voice recognition, more technically known as ‘speech recognition’ or ‘speech-to-text’ (and its counterpart, ‘text-to-speech’), refers to AI-powered engines that do exactly what the name suggests.

They can turn speech into text, and text into speech, with ease. Once speech is converted into digital text, it can be used for many other purposes.

Potential use cases of voice AI and its benefits

On the consumer (patient) side:

Bots and virtual health guides

Many healthcare bots are already prevalent in the market today; they can evaluate symptoms, provide guidance, and connect patients to relevant healthcare services. Additionally, other AI-powered bots work specifically to help people deal with emotional stress or mental health issues.
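At its core, the symptom-evaluation step of such a bot maps a patient's message to a triage level. A minimal sketch, assuming a hypothetical keyword rule set (real bots use trained NLP models and clinical protocols, not keyword lists):

```python
# Illustrative rule table: phrases mapped to coarse triage levels.
# The phrases and levels here are hypothetical, not clinical guidance.
TRIAGE_RULES = {
    "emergency": ["chest pain", "difficulty breathing", "severe bleeding"],
    "see_doctor": ["fever", "persistent cough", "rash"],
    "self_care": ["mild headache", "runny nose"],
}

def triage(message: str) -> str:
    """Return a coarse triage level for a patient's free-text message."""
    text = message.lower()
    # Check the most urgent level first so serious cues win.
    for level in ("emergency", "see_doctor", "self_care"):
        if any(symptom in text for symptom in TRIAGE_RULES[level]):
            return level
    return "unknown"
```

For example, `triage("I woke up with chest pain")` returns `"emergency"`, while a message matching no rule falls through to `"unknown"` so a human can review it.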

These ‘virtual beings’ can hold text-based conversations with patients on an array of health issues and give them timely guidance. However, a large majority of people in developing countries like India cannot hold these text-based conversations, due to a lack of language fluency or the inability to read and write. In this scenario, voice AI has the potential to revolutionise patient care.

Patients can speak directly to the virtual assistant and share their distress, and the agent can use ‘speech synthesis’ to reply with voice-based responses. This brings ‘AI to everyone’, in a sense. AI systems can also learn the meaning of different tones or moods in a user’s voice.

Similar to how humans can figure out if a person is happy or sad just through the tone of their voice, AI systems can also be trained to do so. In fact, in some situations they can be more efficient than humans.
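To give a flavour of what such training starts from, here is a minimal sketch of two classic low-level acoustic features that tone- and emotion-recognition pipelines commonly compute over a waveform. This is purely illustrative; production systems extract far richer features and feed them to trained models:

```python
import math

def rms_energy(samples: list) -> float:
    """Root-mean-square energy of a waveform: a simple loudness proxy."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples: list) -> float:
    """Fraction of adjacent samples that change sign: a rough
    noisiness/pitch cue (voiced speech crosses zero less often)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings / (len(samples) - 1)
```

A classifier would then learn how such features (and many others, such as pitch contours) correlate with labelled moods.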

Voice can speak volumes about a patient’s health, well-being, and emotional and mental state, and AI systems can identify, triage, analyse, or guide patients very efficiently.

Many people could be prevented from slipping into a depressive state by identifying cues and changes in their voice early on. How many suicides might be prevented? The possibilities are endless.

Appointment bookings and telemedicine

Appointment-booking apps are a common sight now, especially after the COVID-19 pandemic. Yet many of them still face the same challenges of language and readability in developing countries.

Voice based assistants can guide patients on what they need to do and how to operate these solutions better.

Health assistants

Typically, health assistants refer to software and solutions that help patients better manage their treatment regimens. From medicine reminders to logging daily vitals like blood pressure and blood sugar, all kinds of health assistants operate in the healthcare domain, and voice-based systems can make them better guides.

A healthcare assistant that knows a patient’s treatment regimen and schedule can remind them more effectively, help them stay compliant, and help them better understand their prescriptions.
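The reminder logic at the heart of such an assistant is essentially dose-schedule arithmetic. A minimal sketch, assuming a fixed dosing interval (real regimens involve meal timing, tapering schedules, and drug interactions):

```python
from datetime import datetime, timedelta

def next_dose(last_dose: datetime, interval_hours: int, now: datetime) -> datetime:
    """Return the next scheduled dose time at or after `now`,
    given a fixed dosing interval in hours."""
    nxt = last_dose + timedelta(hours=interval_hours)
    # Skip past any doses that were missed before `now`.
    while nxt < now:
        nxt += timedelta(hours=interval_hours)
    return nxt
```

A voice assistant would then announce this time aloud and log whether the patient confirmed taking the dose.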

On the Provider’s Side

Taking notes on rounds

Every time a doctor visits a patient in the ward, the doctor’s orders are noted by the staff. These manual notes are then typed up to create a digital version, which medical coders label manually for use by different departments of the hospital.

Speech-to-text systems can fully automate this process, saving time and money as well as promoting research. Not only that, such AI systems can analyse the voice patterns of the doctor on duty and gauge their alertness or mood while they work with patients.
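Once dictation has been transcribed, the coding step amounts to mapping recognised clinical terms to billing codes. A toy sketch of that step, where `ICD10_LOOKUP` is a tiny hypothetical stand-in for a full terminology service (the three codes shown are genuine ICD-10 category codes):

```python
# Hypothetical term-to-code table: a real coding system would sit on a
# complete terminology service, not a small dictionary.
ICD10_LOOKUP = {
    "type 2 diabetes": "E11",
    "essential hypertension": "I10",
    "asthma": "J45",
}

def code_transcript(transcript: str) -> list:
    """Return ICD-10 codes for every known term found in a dictated note."""
    text = transcript.lower()
    return [code for term, code in ICD10_LOOKUP.items() if term in text]
```

In practice the output would be reviewed by a human coder; the automation removes the transcription and first-pass lookup, not the accountability.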

Virtual assistants for doctors

General interactions with bots and assistants will eventually increase for doctors as well. One doctor has to manage many patients, and a ‘Jarvis’-like assistant, a virtual PA of sorts, can help to a large extent. These technologies will rely largely on speech recognition systems.

Some interesting clinical evaluations based on voice AI systems:

Many health disorders can be assessed through voice analysis. For example, many laryngeal disorders can be triaged into low-risk or high-risk profiles based on various features of the patient’s voice.

There are AI systems currently being developed that can listen to a patient’s voice and identify if the patient may have a benign or malignant lesion in their larynx (voice box) based on various parameters. Such systems can be scaled easily and increase the reach of primary care services to a large extent.
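Conceptually, such a system reduces a recording to a few measures of vocal irregularity and maps them to a risk level. The sketch below is purely illustrative: jitter and shimmer are standard acoustic measures used in laryngeal assessment, but the thresholds here are placeholders, not clinically validated values:

```python
def triage_voice(jitter_pct: float, shimmer_db: float) -> str:
    """Flag a voice sample as low or high risk from two irregularity measures.

    Jitter (cycle-to-cycle pitch variation, %) and shimmer (cycle-to-cycle
    amplitude variation, dB) both rise when the vocal folds vibrate
    irregularly. Threshold values below are illustrative placeholders.
    """
    if jitter_pct > 1.0 or shimmer_db > 0.5:
        return "high_risk"
    return "low_risk"
```

A deployed system would learn such decision boundaries from labelled recordings rather than hand-set thresholds, and would route high-risk cases to a laryngologist.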

Risks and challenges

Despite the potential, there are some major challenges and risks involved. Gathering the datasets required to train AI models for such tasks is a big challenge, and AI is very sensitive to the kind of data used. Early systems may be limited in the intonations, accents, and languages they can understand.

Availability in regional languages will be slow to develop and will require a lot of persistent effort in data collection and digitisation.

However, all these problems can be solved; as we gather more data over time, the systems will only get better.

Edited by Affirunisa Kankudti

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YS.)
