Dignity over data – where medical AI fails to impress

2 minute read

Computer says yes, patient says no to AI medical decisions.

AI in healthcare is only going to get bigger, and new Macquarie University research reveals how to do it better.

In this super short podcast, we hear from Associate Professor Paul Formosa from Macquarie University. He’s been researching how patients respond to AI making their medical decisions compared to how they respond if a human is involved.  

Professor Formosa says that patients see humans as appropriate decision makers and that AI is perceived as dehumanizing even when the decision outcome is identical. 

“There’s this dual aspect to people’s relationship with data. They want decisions based on data and they don’t like it when data is missing. However, they also don’t like themselves to be reduced merely to a number,” Professor Formosa says. 

Study scenarios included the use of imaging AI to diagnose skin cancers and the use of AI to allocate donated organs. Professor Formosa poses a key question for both technology designers and specialists to ask.

“If AI technology is used, is it being used in ways that promote good health care interactions between patients and healthcare providers? Or is it just automatically relied on in a way that interferes with that relationship?” he says. 
