AI is coming to healthcare and medical research. How will these technologies challenge long-standing principles of medical ethics? And what new issues will healthcare practitioners, medical researchers, patients, and regulators need to confront?
We interviewed, and organised roundtable discussions with, more than 70 experts from around the world, including clinicians, patients, computer scientists, policymakers, ethicists, and regulators.
We identify 5 key use cases for AI in health and medical research, separating hype and hope from what's actually happening:
- Process optimisation e.g. procurement, logistics, and staff scheduling
- Preclinical research e.g. drug discovery and genomic science
- Clinical pathways e.g. diagnostics and prognostication
- Patient-facing applications e.g. delivery of therapies or the provision of information
- Population-level applications e.g. identifying epidemics and understanding non-communicable chronic diseases
We also identify 10 ethical, social, and political challenges that urgently need to be addressed. It is essential that research focusing on these challenges is multidisciplinary, and that the voices of patients and their relatives are heard. Only by developing tools that address real-world medical needs can the opportunities of artificial intelligence in health be maximised while its risks are minimised.
Across these use cases, the 10 challenges that most urgently demand attention are:
- What effect will AI have on human relationships in health and care?
- How does AI affect the use, storage, and sharing of medical data?
- What are the implications of algorithmic transparency and explainability for healthcare?
- Will these technologies help eradicate or exacerbate existing health inequalities?
- What is the difference between an algorithmic decision and a human decision?
- What do patients and members of the public want from AI and related technologies?
- How should these technologies be regulated?
- Just because these technologies could give us access to new information, should we always use it?
- What makes algorithms, and the entities that create them, trustworthy?
- What are the implications of collaboration between public and private sector organisations in the development of these tools?
In this report, we explore these and the other challenges raised by our research, and make recommendations for further study in this complex and sensitive field. We also find that there are overarching ethical themes, namely consent, fairness, and rights, that cut across the challenges we identify. We ask how users can give meaningful consent to an AI when there is an element of autonomy in the algorithm’s decisions, or when we do not fully understand those decisions. Ensuring fairness, both by preventing and eliminating health inequality and by providing value to all stakeholders, is another critical issue. Finally, the right to health may well expand to encompass questions such as “do people have a right to know how much AI is used in their care?” and “do people have a right not to have AI involved in their care at all?”
We recommend a multidisciplinary approach to these issues. This means not only galvanising a broad range of experts, many of whom will use and be affected by these tools, but also securing the active participation of patients, their relatives, and the public in the tools’ development. It is equally important that these technologies are developed with a view to sharing their benefits as widely as possible. This is the best way to ensure that real-world challenges are addressed, that the needs of patients beyond their clinical care are considered, and that these technologies are accepted by patients and practitioners alike.