AI regulations are up to you, futurist tells AMA

Bertalan Mesko says the peak body should be proactive, like medical associations in Canada and the US.


Leading healthcare bodies have weighed in on how AI should be regulated, but Australia may be too late to the party. 

The recent government consultation on responsible use of AI was the second in two years and received submissions from a cross-section of the healthcare sector including medtech, private hospitals and digital health research.  

However, it was the Australian Medical Association’s submission that prompted a response from leading healthcare futurist Bertalan Mesko, who suggested that Australia is falling behind.

Dr Mesko (PhD) said that medical associations in Canada and the US took it upon themselves to design regulations for “advanced medical technologies”, giving policymakers sufficient time to bring those regulations into action.

“I always smile out of disappointment when a medical association ‘calls for stricter regulations on healthcare AI’, just like the Australian Medical Association did,” Dr Mesko wrote in a LinkedIn post.

“It is your job to provide regulations. This is why some other medical associations have been working with professional futurist researchers like me to make it happen.”  

Leading Australian experts also say that Australia has lagged most developed nations in its engagement with AI in healthcare for many years.

Professor Enrico Coiera from Macquarie University, Professor Karin Verspoor from RMIT University, and Dr David Hansen (PhD) from the Australian e-Health Research Centre at the CSIRO wrote in the MJA that it was “a national imperative” to respond to the challenges of AI.

“With AI’s many opportunities and risks, one would think the national gaze would be firmly fixed on it. [However] … there is currently no national framework for an AI ready workforce, overall regulation of safety, industry development, or targeted research investment. 

“The policy space is embryonic, with focus mostly on limited safety regulation of AI embedded in clinical devices and avoidance of general purpose technologies such as ChatGPT,” the authors said. 

Meanwhile, other countries with more advanced AI regulations and digital health infrastructure are leaping ahead. Israel’s largest acute care facility announced this week that it will be using a ChatGPT chatbot to help triage admissions.

The Tel Aviv Sourasky Medical Center has 1500 beds and nearly two million patient visits each year. The hospital will be using the clinical-intake tool created by Israeli startup Kahun to “free up medical staff” and prevent burnout by providing pre-visit summaries, diagnostic suggestions and advice on next steps of care.

The Australian Department of Industry, Science and Resources called for submissions in June “to inform consideration across government on any appropriate regulatory and policy responses” for the safe and responsible use of AI.

Emma Hossack, chief executive of the Medical Software Industry Association (MSIA), said that any regulation needs to take a “Goldilocks approach”: not so permissive that AI risks go unchecked, but not too heavy-handed either. Ms Hossack said that regulation was a good thing as long as it was done promptly and in full consultation with industry.

“An unregulated market, in an area where there’s risk, creates uncertainty of investment and development paths and uncertainty for a business case. This is because any time after an adverse event there’ll be new regulation applied which creates risk for business,” she said. 

Ms Hossack said the TGA’s regulation of software-based medical devices could be a template for new regulations on generative AI.

“The principles applied, the legislation and then the guidelines were exemplary,” she said. 

The MSIA said in its submission that transparency about what AI was doing in any healthcare application was essential to building trust, “so that the provenance of AI outputs is appropriately managed in a risk-based framework”.

It also called for thorough co-design and education, and an underpinning taxonomy for all medical software providers. 

Private Healthcare Australia said in its submission that regulation of AI “should not intrude on a fund’s ability to make commercially confidential decisions or engage in product development or service automation”. The submission said that health funds which already used automated decision-making processes “may increasingly use AI” to assess and process insurance claims. 

The Digital Health CRC put in a strong plug for a risk-based approach to regulation that “establishes rules to govern the deployment of AI in specific use-cases but does not regulate the technology itself”. 

Dr Stefan Harrer, the DHCRC’s chief innovation officer, said in a statement that new AI technologies “evolve at lightning speed, making it near impossible to generate the evidence base for risk mitigation at the same pace”. 

“Only regulation that focuses on outcomes rather than technology will be able to keep up and adapt to changing conditions quickly and efficiently,” he said. 

Dr Michael Bonning, president of the Australian Medical Association (AMA) NSW, confirmed the AMA’s position as a supporter of technological advancement in healthcare as long as it served doctors and patients and did not widen the inequity gap. He told TMR that good regulation was required because it builds trust in a system, but that upholding it might slow access to AI-enabled solutions.

“The general AI space is quite poorly regulated and, like most new technologies, there’s the [development model] of breaking things along the way and fixing it as you go. This is not something that we can tolerate on behalf of patients or practitioners in the Australian healthcare context,” he said.

The Medical Technology Association of Australia and the Asia Pacific Medical Technology Association wrote in a joint submission that there was a need “for regulation to evolve to address sector risk”. They named the TGA, and other existing medical device regulators, as best placed to incorporate emerging AI regulation within existing frameworks. They also endorsed co-design of any new codes.

“It is important that substantial consultation with the medical technology industry occurs regarding any proposed regulation. Any broad regulation of AI, even regulation of AI aimed at the medical industry generally, could have unintended consequences for patient outcomes and the medical technology industry,” the submission said. 

The Australian Alliance for AI in Healthcare met today with the specific goal of working out what policy is needed to ensure that Australia is positioned to take advantage of all the benefits of this technology.

Professor Coiera, director of the Australian Institute of Health Innovation at Macquarie University, is a founding member of the AAAiH. He said today’s meeting was about bringing research, industry, government, clinical and consumer voices together to develop a national strategy. 

“It is imperative that we develop a national strategy for AI in healthcare in order to not only realise the enormous potential benefits, but to also identify and manage the risks.  

“No one part of the sector can do this on their own; we need a whole-of-system approach to be effective and sustainable. It is vitally important and exactly what AAAiH is leading.”

Professor Coiera said the AAAiH was strongly supportive of the Government’s active interest in developing industry-appropriate policy and governance for AI.

The Department of Industry, Science and Resources has been asked to comment on what will be done with submissions to the current consultation on AI, and on what was actioned from the 2022 consultation. It did not respond by publication deadline.
