A recent news article highlights just how scary advances in technology are becoming, as a new service is being tested in the hope that it can eventually ease the pressure on doctors and medics.
Babylon Health are now hoping to roll out the concept of chatbots for NHS 111 helplines nationwide. The concept is essentially an app that you “chat” with by typing in your symptoms. This automated piece of tech is pre-programmed with numerous diagnoses, questions, services and suggestions. It responds to your symptoms with further questions to probe the nature of your problem, and from there it tries to advise you what to do next. It can also direct you to a specific service to visit, as close to your home as possible.
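To make the flow concrete, here is a minimal, purely hypothetical sketch of how a rule-based triage chatbot like this might work. The symptoms, follow-up questions and advice below are invented for illustration and are not Babylon Health's actual knowledge base or real medical guidance.

```python
# Hypothetical rule-based triage: each symptom maps to one follow-up
# question and advice keyed on the user's answer. Illustration only --
# not real medical guidance.
TRIAGE_RULES = {
    "chest pain": {
        "follow_up": "Is the pain severe or spreading to your arm or jaw? (yes/no)",
        "yes": "Call 999 or go to A&E immediately.",
        "no": "Book an urgent GP appointment.",
    },
    "headache": {
        "follow_up": "Have you had the headache for more than three days? (yes/no)",
        "yes": "Book a routine GP appointment.",
        "no": "Rest, drink fluids, and consider over-the-counter pain relief.",
    },
}

def triage(symptom: str, answer: str = "") -> str:
    """Match a reported symptom to a rule; return advice for the answer,
    or the follow-up question if no answer has been given yet."""
    rule = TRIAGE_RULES.get(symptom.lower().strip())
    if rule is None:
        # Anything outside the pre-programmed rules falls back to a human.
        return "Symptom not recognised; please call 111 to speak to an adviser."
    return rule.get(answer.lower().strip(), rule["follow_up"])

# Example exchange: the app first asks a follow-up, then routes on the answer.
print(triage("chest pain"))         # -> asks the follow-up question
print(triage("chest pain", "yes"))  # -> "Call 999 or go to A&E immediately."
```

The sketch also shows the limitation discussed below: anything the rules don't anticipate can only fall through to a generic fallback.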
“There are 300 million pieces of knowledge that we have collected. No human brain can do that… I think we will soon be able to diagnose more accurately and faster than a doctor in most cases. That leaves the doctor to focus on the management of the diseases” – Dr Parsa, the app’s creator.
A pilot has already been completed, and further testing will be undertaken in the hope of refining the app’s accuracy and usefulness.
If this pilot is successful, there’s every chance the app will be pushed and promoted to ease the pressure on GPs, A&E and the human-operated 111 service. But in reality, what would it mean to move the 111 service to such a technologically advanced footing, such as through this app? Aside from the obvious concern of a computer using pre-programmed data to analyse something as purely human as feelings and responses, there’s another huge issue here: not everyone uses apps! And let’s not even get into the potential issues with privacy, technical glitches, lack of WiFi or power outages…
Calls to the 111 helpline have risen from 2 million to 15 million a year over a four-year period, which is quite a shocking statistic. I can understand the desire to use technology and automated processing to ease that burden.
I do like that an app can’t give you a condescending, judgemental or sceptical look. It can’t rush you out of the doctor’s office and make you feel as though you’ve wasted its time. Instead, it’s me that would give it the sceptical look. How can an app possibly give adequate responses to purely human feelings? How can an app draw lines between the multi-faceted nature of our lives and experiences, between the symptoms and nuances and things both said and unsaid? How do we possibly get our point across to a system that is pre-programmed to respond in certain ways?
Of course, there is the issue of human error with a person at the end of the 111 helpline or with the doctor sitting across from you. Cases of missed illnesses, patients being fobbed off and symptoms being misdiagnosed appear all too often in the media. There will always be mistakes, overlooked symptoms and incorrect assumptions. But an app isn’t going to give us the chance to make our case to be taken seriously; it isn’t going to order any tests that may be needed or send us for referrals. It’s simply going to tell us where to go (where we’ll then probably either wait months for an appointment, or get an appointment only to be fobbed off eventually anyway) or give us suggestions to help ourselves. Perhaps for low-level or more straightforward health concerns this could be helpful. But I’m certainly dubious about “chatting” to an app about anything more complex than a common cold.