A bioethicist and professor of medicine on regulating AI in health care

Artificial intelligence (AI) sensation ChatGPT, and rivals such as Bloom and Stable Diffusion, are consumer-facing generative-AI models. ChatGPT has caused particular delight since it first appeared in November. But more specialized AI is already widely used in medical settings, including radiology, cardiology and ophthalmology. Major developments are in the pipeline. Med-PaLM, developed by Alphabet-owned AI firm DeepMind, is a large language model whose 540 billion parameters have been trained on datasets drawn from professional medical examinations, medical research and consumer health-care questions. Technology like this means that our societies now need to consider how doctors and AI can best work together, and how medical roles will change as a result.

The benefits of health AI could be vast. Examples include more accurate diagnosis using imaging technology, automated early diagnosis of diseases through analysis of health and non-health data (such as a person’s online-search history or phone-handling data), and the rapid generation of clinical plans for patients. AI could make care cheaper because it enables new ways to assess the risk of diabetes or heart disease, for example by scanning the retina instead of administering multiple blood tests. AI also has the potential to mitigate some of the challenges left by COVID-19, including falling productivity in health services and backlogs in testing and care, as well as many other problems affecting health systems around the world.

For all the promise of AI in medicine, a clear regime for regulating it and the liabilities it presents is badly needed. Patients must be protected from the risks of misdiagnosis, unacceptable uses of personal data, and biased algorithms. They must also be prepared for the potential depersonalization of health care if machines cannot offer the empathy and compassion at the core of good medical practice. At the same time, regulators everywhere face difficult issues. The law has to keep pace with continuing technological development, which is not happening at present. It also needs to take into account the dynamic nature of algorithms, which learn and change over time. To help, regulators should keep in mind three principles: coordination, adaptation and accountability.

First, there is an urgent need to coordinate expertise internationally to fill the governance vacuum. AI tools will be used in more and more countries, so regulators should start cooperating with each other now. They proved during the pandemic that they can move together and at speed. This form of cooperation should become standard and build on existing global architectures, such as the International Coalition of Medicines Regulatory Authorities, which supports regulators working on scientific issues.

Second, the approach to governance needs to be adaptive. In the pre-licensing phase, regulatory sandboxes (in which companies test products or services under the supervision of a regulator) will help develop the necessary agility. For example, they can be used to determine what can and should be done to ensure product safety. But a variety of concerns, including uncertainty about the legal responsibilities of businesses participating in sandboxes, means that this approach is not used as often as it should be. The first step would therefore be to clarify the rights and obligations of sandbox participants. For reassurance, sandboxes should be used in conjunction with the “rolling-review” market-authorization process that was pioneered for vaccines during the pandemic. This involves reviewing packages of data as they become available, so that the evaluation of a promising therapy can be completed in the shortest possible time.

The performance of an AI system must also be continuously evaluated after a product has gone to market. This will prevent health services from being locked into flawed patterns and unfair outcomes that harm particular groups of people. The US Food and Drug Administration (FDA) has made a head start by formulating specific regulations that take into account the ability of algorithms to keep learning after approval. These would allow AI products to be updated automatically over time, provided manufacturers set out a well-understood protocol for how the product’s algorithm may change, and then test those changes to ensure that the product maintains an acceptable level of safety and effectiveness. This would ensure transparency for users and drive real-world performance-monitoring pilots.

Third, collaboration between technology providers and health-care systems requires new business and investment models. The former seek to develop products; the latter manage and analyze troves of high-resolution data. Partnership is inevitable, and it has been tried in the past with some notable failures. IBM Watson, a computing system touted to great fanfare as a “moonshot” to improve medical care and help doctors make more accurate diagnoses, has come and gone. Multiple barriers, including an inability to integrate with electronic health-record data, poor clinical utility, and a misalignment of expectations between doctors and technologists, proved fatal. A partnership between DeepMind and the Royal Free Hospital in London has caused controversy. The company gained access to 1.6m NHS patient records without patients’ knowledge, and the matter ended up in court.

What we have learned from these examples is that the success of such partnerships will depend on clear commitments to transparency and public accountability. This will require not only clarity about what different business models mean for consumers and companies, but also sustained engagement with doctors, patients, hospitals and many other groups. Regulators need openness about the deals tech companies make with health-care systems, and about how profits and responsibilities will be shared. The trick will be aligning the incentives of all involved.

Good AI governance should benefit businesses and keep patients safe, but it will require flexibility and agility. It took decades to turn awareness of climate change into real action, and we are still not doing enough. Given the pace of innovation, we cannot accept the same slow pace on AI.

Effy Vayena is a founding professor of the Health Ethics and Policy Lab at ETH Zurich, a Swiss university. Andrew Morris is director of Health Data Research UK, a scientific institute.

© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com
