One of the best parts of my job as a Quality Assurance Consultant at ADAMAS is getting out on-site to perform vendor audits. I love visiting new companies to discover the myriad ways they’re using computers and tech. It’s always an education, and I especially like chatting to clients about our constantly evolving business. A topic that crops up again and again is the role of artificial intelligence (AI) in healthcare.
There are many good reasons to believe AI has the potential to change healthcare forever. Within the field of Pharmacovigilance alone, AI’s already playing a key role in auto-narrative generation; narrative analysis (including case extraction and creation); QC assessment; causality assessment; and ‘touchless’ case processing, where non-serious cases are received, verified, coded, processed and submitted without any human intervention.
Already it’s delivering substantial, real-world benefits. When it comes to identifying patterns and trends, we’ve never witnessed anything as quick or efficient. It can sift through gigantic datasets and data libraries in a fraction of the time it would take an entire team of humans. That speed has a beneficial effect on the bottom line, too.
Naturally, that excites ‘big pharma’, and it’s easy to understand why. We’ve already seen AI identify new target molecules, the lifeblood of the new drug development process. It can cost around $1 billion to take a drug from compound identification to manufacture, and for every Viagra there are 99 others that don’t make it. Drug development is a game of long odds, and it’s easy to see why the pharma giants would like more control over that roulette wheel. And let’s not forget that patents expire after 20 years, at which point intellectual property protection is lost. AI could be the difference between keeping hold of a billion-dollar idea… or losing it.
So it’s clear AI’s got a role to play across our sector. But a recent conference I attended highlighted a few flaws.
We saw a collection of animal pictures. The AI had been programmed to ‘flag’ each time it saw a panda, and it did that faultlessly. But then it got interesting. The presenter replaced the panda picture with an almost identical one in which a handful of pixels had been altered, a change barely visible to the human eye.
He ran the test again. And this time, the AI couldn’t find the panda. We were clearly still looking at a panda, but the program no longer recognised it. The demonstration showed that AI can be powerful, yet easily fooled, whether intentionally or by accident.
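This kind of trick is what researchers call an adversarial example. As a rough sketch of the idea, and not the actual system from the conference, the toy below uses an invented logistic ‘panda detector’ over 10,000 made-up pixel values; every number in it (weights, input, the 1% nudge) is an assumption chosen purely to illustrate how a barely-visible change can flip a confident prediction:

```python
import numpy as np

# A toy 'panda detector': a logistic classifier over 10,000 input 'pixels'.
# All values here are invented for illustration only.
rng = np.random.default_rng(42)
d = 10_000

w = rng.normal(size=d)    # fixed, pre-'trained' weights
x = rng.uniform(size=d)   # the 'panda' image, pixel values in [0, 1]
b = 5.0 - w @ x           # bias chosen so the clean score is confident

def score(img):
    """Probability the detector assigns to 'panda'."""
    return 1.0 / (1.0 + np.exp(-(w @ img + b)))

# Adversarial nudge: shift every pixel by at most 1% of its range,
# in the direction that most lowers the score (the sign of the gradient).
eps = 0.01
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(f"clean score:     {score(x):.3f}")      # confidently 'panda'
print(f"perturbed score: {score(x_adv):.3f}")  # no longer a 'panda'
print(f"largest pixel change: {np.max(np.abs(x_adv - x)):.3f}")
```

No single pixel moves by more than 0.01, so the two ‘images’ look identical to a human, yet the many tiny shifts add up across thousands of inputs and the classifier’s confidence collapses. That accumulation across high-dimensional inputs is exactly why the panda demonstration works on real image models.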
Of course, no-one’s using AI for the big life-or-death decisions just yet. But the demonstration raises a red flag: a tiny tweak had a huge impact on the outcome. That’s a concern for an industry founded on patient safety and the need to get the big calls spot on. The tools we use have to be safe, and this experiment suggests AI in its current form doesn’t have a place in, say, making unsupervised decisions on patient safety.
Current computer systems (certainly within my field of Computer Systems Compliance) are thoroughly validated, so they behave as intended and can be trusted. If we’re asking how to validate AI to make sure it is performing correctly, well, we don’t quite have the answer yet.
Ultimately, we’re right to feel excited about what AI can do for our business, but that excitement needs to be tempered with caution. It’s already delivering big wins in a number of areas. But at present, the job of AI should be to help inform decisions on patient safety, not to make the final call. Checks and balances need to be in place, and in reality that means some human input.