
Ethical and Legal Challenges of Artificial Intelligence-Driven Health Care


While recent legal developments in the US and Europe will hopefully promote the safety of AI-based healthcare products, services, and processes, cyberattacks are frequently a global problem: data sharing and data breaches often do not stop at the borders of the US or Europe but occur all over the world [53]. A large-scale, internationally enforceable cybersecurity legal framework is therefore needed to ensure a high level of cybersecurity and resilience across borders [53]. Creating such a framework will not be easy, as it requires an appropriate balance between the differing interests of all relevant stakeholders [53].

We will first briefly clarify what AI is and provide an overview of trends and strategies regarding AI ethics and law in healthcare in the US and Europe. This is followed by an analysis of the ethical challenges of AI in healthcare. We discuss four main challenges: (1) informed consent to use, (2) safety and transparency, (3) algorithmic fairness and biases, and (4) data privacy. We then address five legal challenges in the US and Europe: (1) safety and effectiveness, (2) liability, (3) data protection and privacy, (4) cybersecurity, and (5) intellectual property law. To harness the enormous potential of AI to transform healthcare for the better, AI stakeholders, including AI makers, clinicians, patients, ethicists, and legislators, need to engage in the ethical and legal debate about how AI can be successfully implemented in practice (Table 12.1).

Keywords: artificial intelligence, machine learning, ethical issues, legal issues, social issues

Citation: Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, Aggarwal K, Ibrahim S, Patil V, Smriti K, Shetty S, Rai BP, Chlosta P and Somani BK (2022) Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front. Surg. 9:862322. doi: 10.3389/fsurg.2022.862322

From clinical applications in areas such as imaging and diagnostics, to optimizing workflow in hospitals, to using health apps to assess an individual's symptoms, many believe artificial intelligence (AI) will revolutionize healthcare. Economic forecasters have predicted explosive growth in the AI healthcare market in the coming years; according to one analysis, the market will increase in size more than tenfold between 2014 and 2021 [1]. With this growth come many challenges, and it is crucial that AI is implemented ethically and legally in the healthcare system. In this chapter, the ethical and legal challenges of AI in healthcare are presented and solutions are proposed. Several practical examples have shown that algorithms can exhibit biases that can lead to injustices with respect to ethnicity, skin color, or gender [59], [60], [61], [62], [63]. Biases may also occur in relation to other characteristics, such as age or disability. The causes of such biases vary and can be multiple.

They can result, for example, from the datasets themselves (which may not be representative), from the way data scientists and ML systems select and analyze data, from the context in which AI is used [64], and so on. In the healthcare sector, where phenotype and sometimes genotype information is at stake, biased AI could lead to misdiagnosis and render treatments ineffective for certain subpopulations, jeopardizing their safety. For example, imagine AI-based clinical decision support (CDS) software that helps doctors find the best treatment for skin cancer patients, but whose algorithm was trained mainly on Caucasian patients. As a result, the AI software is likely to provide less specific, or even inaccurate, recommendations for subpopulations for which the training data were not inclusive, such as African Americans (see the sketch below).
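To make this failure mode concrete, the following is a minimal, purely illustrative Python sketch on synthetic data (the feature construction, the 95%/5% split, and all names and numbers are assumptions for illustration, not taken from the studies cited above). It shows how a model trained on a non-representative sample can score well on the majority subgroup while performing markedly worse on an underrepresented one:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic "lesion features"; the true decision rule differs by subgroup.
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Non-representative training set: 95% subgroup A, 5% subgroup B.
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Per-subgroup audit: evaluate on balanced held-out sets, one per subgroup.
for name, shift in [("subgroup A", 0.0), ("subgroup B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")

The model's aggregate accuracy is dominated by subgroup A, so the disparity only becomes visible when performance is reported separately per subgroup; this is the basic argument for subgroup-level validation of clinical AI rather than a single overall accuracy figure.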

There are many other important FDA initiatives that we cannot discuss here, including the guidance on Software as a Medical Device (SaMD): Clinical Evaluation [79] and the launch of the Software Pre-Cert Pilot Program. The latter will allow certain digital health developers to pre-certify based on excellence in identified criteria (e.g., patient safety, clinical responsibility, and product quality) and to market their low-risk software medical devices with a streamlined FDA review or no review at all ([80], pp. 5-7; [81]). The FDA has also published a working model that includes proposals for the main components of the Pre-Cert Pilot Program [81], [82]. While many questions remain open, the program is an innovative regulatory experiment that may hold lessons for peer countries and should be closely monitored.