

A great deal of effort is currently being expended on developing risk prediction models for individuals and patient groups, using a variety of approaches ranging from genomics and metabonomics through to socioeconomic phenotyping. In the domain of healthcare, this expansion in predictive modelling research is paired with rapidly emerging concerns about the ethical use of such methods, particularly artificial intelligence (AI). Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient–clinician interactions, as with any complex human interaction, has potential pitfalls. These include concerns around data privacy, algorithmic fairness, bias, safety, informed consent, and transparency, which the medical profession may be unprepared to navigate. While physicians have always had to consider carefully the ethical background and implications of their actions, detailed deliberation may not have kept pace with fast-moving technological progress. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the 'Felicific Calculus', a philosophical framework developed in the eighteenth century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.
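Bentham's original calculus weighs an action along seven dimensions: intensity, duration, certainty, propinquity, fecundity, purity, and extent. Purely as an illustration of what a quasi-quantitative tally over seven such domains could look like (this is a hypothetical sketch, not the scoring method the abstract describes; the rating scale and example values are invented), one might write:

```python
# Hypothetical sketch of a felicific tally: each of Bentham's seven
# dimensions is rated on a -1.0..+1.0 scale, where positive values
# favour taking the action. The scale and ratings are assumptions.
BENTHAM_DIMENSIONS = (
    "intensity", "duration", "certainty", "propinquity",
    "fecundity", "purity", "extent",
)

def felicific_score(ratings: dict) -> float:
    """Sum the seven dimension ratings; a total > 0 would lean towards
    the action being justified under this toy scheme."""
    missing = set(BENTHAM_DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    for name, value in ratings.items():
        if not -1.0 <= value <= 1.0:
            raise ValueError(f"{name} rating {value} outside [-1, 1]")
    return sum(ratings[d] for d in BENTHAM_DIMENSIONS)

# Invented ratings for disclosing a poor prognosis with AI support:
example = {
    "intensity": -0.4,   # immediate distress caused by the news
    "duration": 0.3,     # longer-term benefit of informed planning
    "certainty": 0.6,    # confidence in the AI prediction
    "propinquity": 0.5,  # how soon the predicted outcome applies
    "fecundity": 0.4,    # chance the disclosure enables further goods
    "purity": -0.2,      # risk of follow-on harms, e.g. loss of hope
    "extent": 0.3,       # breadth of those affected (patient, family)
}
print(f"{felicific_score(example):+.1f}")  # prints +1.5
```

The point of the sketch is structural: every domain must be rated before a verdict is produced, mirroring the claim that the seven domains are jointly exhaustive.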


