🇬🇧 AI has no place in the NHS if patient privacy isn’t assured


“While doctors and other medical staff are bound by confidentiality and ethics, we haven’t yet figured out what it means when a digital third party — the apps and algorithms — are allowed in the room, too.”



Hmm, maybe it’s because I’ve always understood machines better than humans, but I kind of feel safer with the AI than with the doctor, provided the AI is open source and/or audited in some way.



I like the idea of things being open to audit, and it certainly would not hurt for medical technology. See Karen Sandler’s keynote at the GNOME GUADEC 2017 conference [keynote on YouTube, keynote for download]. Spoiler: she realizes that she is very dependent on a medical device with an open wireless connection.

However, I’m uncertain how you would realistically review or audit an AI system. The math behind it is generic, and the output seems incomprehensible; xkcd has a humorous take on this. I’m sure AI has its place, but I still want someone to be accountable for a diagnosis and suggested treatment.

Finally, and more importantly, any AI system has to be fed huge amounts of data to be trained, and the standard procedure is to have some big, nebulous company do it. I have huge privacy concerns over that. The idea of handing over sensitive data to an entity whose business model is surveilling every single individual on the planet is appalling. There simply can be no ethical way to do that.


This was a good article on AI and how it could possibly be audited (previously on the Radar): https://www.bostonglobe.com/ideas/2017/07/07/why-artificial-intelligence-far-too-human/jvG77QR5xPbpwBL2ApAFAN/story.html

I’d say the bigger issue here is that the corporations entrusted with analysing medical data have a vested interest in personal information.


Good to see that efforts are being made to put algorithmic decisions under scrutiny! With respect to AI and health data, I was thinking more of some kind of due-diligence audit before starting the process. That still seems difficult, as does checking the training data for bias.
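To make "checking the training data for bias" slightly less abstract: one very simple pre-training check is to compare how often the target label occurs in each patient group versus overall. A large gap can signal skewed sampling before any model is trained. A minimal sketch in plain Python (the field names `sex`/`diagnosed`, the toy data, and the 0.1 tolerance are all invented for illustration):

```python
def rate_by_group(records, group_key, label_key):
    """Return {group: fraction of records in that group with a truthy label}."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[label_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

def flag_imbalance(records, group_key, label_key, tolerance=0.1):
    """Return groups whose label rate differs from the overall
    label rate by more than `tolerance` (absolute difference)."""
    overall = sum(1 for r in records if r[label_key]) / len(records)
    rates = rate_by_group(records, group_key, label_key)
    return {g: r for g, r in rates.items() if abs(r - overall) > tolerance}

# Toy dataset: diagnosis rate is 10% for one group, 40% for the other,
# against an overall rate of 25% -- both diverge by more than 0.1.
data = (
    [{"sex": "F", "diagnosed": True}] * 10
    + [{"sex": "F", "diagnosed": False}] * 90
    + [{"sex": "M", "diagnosed": True}] * 40
    + [{"sex": "M", "diagnosed": False}] * 60
)
print(flag_imbalance(data, "sex", "diagnosed"))  # {'F': 0.1, 'M': 0.4}
```

Of course, a real audit would go far beyond this (representativeness, label quality, proxy variables), but even a check this crude has to happen *before* the data is handed over, which is exactly the due-diligence point.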

And I agree, here the AI part is a lesser problem than leaking health data to people farmers.