|
|
GUEST EDITORIAL

Year: 2019 | Volume: 67 | Issue: 1 | Page: 3-6
Artificial intelligence (AI) in healthcare and biomedical research: Why a strong computational/AI bioethics framework is required?
Jatinder Bali1, Rohit Garg2, Renu T Bali3
1 Department of Ophthalmology, Hindu Rao Hospital and NDMC Medical College, Delhi, India; 2 Department of Information Technology and HIS, North Delhi Municipal Corporation, Delhi, India; 3 Department of Medicine, Deep Chand Bandhu Hospital, Govt. of National Capital Territory of Delhi, Delhi, India
Date of Web Publication: 21-Dec-2018
Correspondence Address: Jatinder Bali, Department of Ophthalmology, Hindu Rao Hospital and NDMC Medical College, Delhi, India
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/ijo.IJO_1292_18
How to cite this article: Bali J, Garg R, Bali RT. Artificial intelligence (AI) in healthcare and biomedical research: Why a strong computational/AI bioethics framework is required?. Indian J Ophthalmol 2019;67:3-6.
What is Artificial Intelligence?
Artificial intelligence (AI) refers to a computer mimicking “intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience” to achieve goals without being explicitly programmed for a specific action. There is no consensus on what constitutes AI. None of the proposed criteria for intelligence has satisfied everyone, leading to the famous aphorism, “AI is whatever hasn't been done yet.” For example, optical character recognition and machine translation have now been relegated from “artificial intelligence” because their use has become routine.[1],[2]
The broad agreement is that any device that uses reason, devises strategy, solves puzzles, and makes “judgments under uncertainty representing knowledge, including commonsense knowledge, plans, learns, communicates in natural language and integrates all these skills towards common goals” demonstrates intelligence. Currently, in 2018, we can safely place activities such as understanding human speech, competing at the highest level in strategic games (such as chess and Go), driving autonomous cars, planning intelligent routing in content delivery networks, and running military simulations in the realm of AI.[3]
Some famous authors have proposed different benchmarks. The most famous is the “Turing Test” (1950) of Alan Turing, in which a human evaluator converses with an unseen machine and an unseen human and must guess which of the two is the machine. The machine passes the test if it fools the evaluator 30% of the time. In 2014, a program called Eugene Goostman was reported to have passed this test.[4]
Edward Feigenbaum in 2003 tweaked the Turing Test to create the “Subject-Matter-Expert Turing Test,” or “Feigenbaum Test,” in which a machine's responses cannot be distinguished from those of an expert in a given field. These are examples of “narrow artificial intelligence.” The next, higher goal is “artificial general intelligence,” where a model trained on one task can be repurposed for a second, related task, a capability called transfer learning.[3] We are some distance from it today.
Artificial Intelligence and Strategy Games
In 1997, IBM's Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, in a full series. In 2011, IBM's Watson beat the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings. In 2016, DeepMind's AlphaGo defeated Go champion Lee Sedol in the ancient strategy game played on a 19 × 19 board, winning 4 of 5 games and becoming the first computer to beat a professional Go player without handicaps. In 2017, AlphaGo won a three-game match against the world No. 1 ranked player, Ke Jie. Until then, AlphaGo had been trained on the games of human experts.[5]
DeepMind then developed a completely self-taught program, trained without any human intervention, and called it AlphaGo Zero (AGZ). This program acquired tremendous knowledge of the game of Go in just 72 hours and was accordingly called AlphaGo Zero (3 days). It beat the version of the original AlphaGo that had defeated human champion Lee Sedol by a score of 100 to 0. It did so without any human data, which usually provides the baseline for training such an AI; instead, it generated its own data through repeated self-play, constrained only by the rules of the game. In December 2017, another program, AlphaZero, trained within 24 hours to demonstrate superhuman capabilities in chess, Go, and shogi together. After a mere 34 hours of self-learning at Go, AlphaZero defeated its predecessor AlphaGo Zero with 60 wins to 40 losses. In chess, it recorded 28 wins, 72 draws, and no losses. In shogi, it recorded 90 wins, 8 losses, and 2 draws. Thus, we now had “a bot that defeated the bot that defeated the world champion human.”[5]
Artificial Intelligence in Medicine
AI applications have become commonplace, e.g., Siri, Alexa, and Cortana. In medicine, IBM Watson for Oncology has selected drugs for the treatment of cancer patients with efficiency equal to or better than that of human experts. Microsoft's Hanover Project in Oregon has analyzed medical research to tailor personalized cancer treatment options.[6] The United Kingdom's National Health Service (NHS) used Google's DeepMind platform to detect health risks by analyzing mobile app data and medical images collected from NHS patients.[7] Stanford's radiology algorithm detected pneumonia better than human radiologists,[8] while in a diabetic retinopathy challenge, the computer was as good as expert ophthalmologists at making a referral decision.[9]
In 2018, Krause et al. trained an automated algorithm for diabetic retinopathy (DR) grading while quantifying the errors in DR grading made by individual graders and by majority decision using adjudication. They retrospectively analyzed images, de-identified under the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor provisions, that had been labeled by American board-certified ophthalmologists and retinal specialists, and used them to develop and tune the algorithm. The retinal fundus images were contributed by EyePACS-affiliated clinics, Aravind Eye Hospital, Sankara Nethralaya, and Narayana Nethralaya, together with the Messidor-2 dataset from the Laboratory of Medical Information Processing supported by Brest University Hospital, and the original images from Gulshan et al. Ethics review and institutional review board exemption were granted to the project by the Quorum Review Institutional Review Board. The algorithm's results were rated against the consensus of the retinal specialists as the reference standard. Three ophthalmologists graded using the commonly used International Clinical Diabetic Retinopathy (ICDR) disease severity scale, a five-point grade for DR: no, mild, moderate, severe, and proliferative. Three grading types were used in development: grading by EyePACS graders, grading by ophthalmologists, and adjudicated consensus grading by retinal specialists.
The quadratic-weighted kappa score used to assess agreement between graders demonstrated a high degree of agreement on moderate or worse DR, with the following scores: individual retinal specialists (kappa = 0.82 to 0.91), ophthalmologists (kappa = 0.80 to 0.84), and the algorithm (kappa = 0.84).
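The quadratic-weighted kappa used here weights disagreements by the squared distance between grades, so confusing “no DR” with “proliferative” costs far more than confusing adjacent grades. A minimal sketch with invented five-point ICDR-style grades (not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ICDR grades (0 = no DR ... 4 = proliferative DR)
# assigned to the same ten fundus images by two graders.
grader_a = [0, 1, 2, 2, 3, 4, 0, 1, 3, 2]
grader_b = [0, 1, 2, 3, 3, 4, 0, 2, 3, 2]

# "quadratic" weights penalise a disagreement of d grades by d**2.
kappa = cohen_kappa_score(grader_a, grader_b, weights="quadratic")
print(kappa)  # 0.9375: only two adjacent-grade disagreements
```

A kappa of 1.0 means perfect agreement and 0 means chance-level agreement, which is why values above 0.8, as reported in the study, indicate strong concordance.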
A small number of adjudicated consensus grades in the tuning dataset, together with higher-resolution input images, improved the algorithm's area under the curve (AUC) from 0.934 to 0.986 for moderate or worse DR. The algorithm thus performed on par with the recommendations of American board-certified ophthalmologists and retinal specialists.[10]
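The AUC values reported above have a simple operational meaning: the probability that the model scores a randomly chosen diseased image higher than a randomly chosen healthy one. A toy illustration with invented labels and scores:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical referral labels (1 = moderate-or-worse DR) and the
# algorithm's predicted probabilities for six images.
y_true = [0, 0, 0, 1, 1, 1]
y_score = [0.2, 0.3, 0.6, 0.4, 0.8, 0.9]

# 8 of the 9 (positive, negative) pairs are ranked correctly.
auc = roc_auc_score(y_true, y_score)
print(auc)  # 0.888..., i.e. 8/9
```

An AUC of 0.986, as in the tuned algorithm, means almost every diseased image is ranked above almost every healthy one.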
Why Should India Be Concerned?
India has no explicit laws covering the transfer of data for processing. A humongous amount of data has been processed by indirect third parties engaged by service providers, recoding the data according to US laws. A similar deal between Google DeepMind and the Royal Free London NHS Foundation Trust led to much debate in 2017.[7] That agreement was criticized on the grounds of violating the Caldicott Principles, by transferring more data than necessary and blurring the line between data controllers and data processors, each of which carries its own legal obligations and liabilities. The UK has oversight mechanisms such as the Information Commissioner's Office (responsible for enforcing the Data Protection Act), the Health Research Authority (responsible for the governance framework for health research), and the Confidentiality Advisory Group (which advises on the use of confidential health information in the absence of explicit consent). None of these was consulted before the data transfer began; instead, a “self-assessment information governance toolkit” was used to validate the security of the technical infrastructure handling NHS data.[7]
What Can Be Done?
Direct care providers need to be careful when sharing data with a third party that is not in a direct care relationship with the patient in question. Direct care is defined as “activity concerned with the prevention, investigation and treatment of illness and the alleviation of suffering of an identified individual.” A notice of the use of such data must be given to the patient/subject in care. If explicit consent and notice have not been given, then all de-identified data (labelled or unlabelled) should come into the public domain and be published by a statutory body. This will keep a check on illegal proprietary exploitation of the data and force data processors to seek only limited amounts of data for exchange. Publicly available, peer-scrutinized datasets such as Messidor aid the independent development of algorithms and processes. Because, in the absence of consent, such de-identified datasets should be considered a community resource, there is logic in placing them in the hands of the community, thereby enabling policing of these data exchanges at a level not possible for any government. In India, we need statutory regulation, such as Section 251 in the UK, which brings such transfers under government and statutory control.[7] In fact, the ownership of and custodial responsibility for such data are often never discussed in our country.[11] When AI tools are used on such datasets, there is no specific brief on how the data will be manipulated by the machine. With transfer learning and artificial general intelligence, humans may not even be able to understand how the machine handled the data, just as people found some of AlphaZero's lines of play “alien” but effective. The Justice BN Srikrishna Committee has made welcome progress by empowering patients.[11] However, it would be foolhardy to expect general data-protection provisions to address the extremely convoluted bioethical concerns in the development of medical AI.
The medical fraternity needs to insist on specific “dos and don'ts” which, if followed, will keep it safe from litigation in case of any data breach, because India is not only the largest producer and the cheapest source of such data (by the admission of the authors of the Google Diabetic Retinopathy Project),[10] but will also be the largest market for the algorithms derived from it in future.
There is a need for a strong bioethical and computational ethics framework, to ensure that ethics is hardwired into the rules we give to machines for recursive self-improvement. The clichéd aphorism “First, do no harm” needs to be carried from medical ethics into the domain of computational bioethics. We may be like parents here: beyond a certain stage of development, we may no longer control these algorithms, or even understand them at all.
AlphaZero mastered Go, chess, and shogi without any human guidance except the game rules. Within 24 hours, it was able to defeat state-of-the-art AI programs such as Stockfish, Elmo, and AlphaGo Zero (3 days).[5] AI development is now shifting focus from “supervised” learning (which requires large numbers of labeled examples to train the machine to recognize similar patterns) to “unsupervised” learning (a form of learning in which the machine trains without labeled data). Clearly, AI is becoming more powerful, and will continue to do so on the back of growing computational power, raising legitimate concerns about this power finding its way into the wrong hands, whether human or artificial. The former will evolve at our human pace, allowing us a window of opportunity to reclaim our lives; not so the latter, as the AlphaGo experience has shown us. More caution is necessary in medicine and medical research because the person affected by each decision is a sentient human being.
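The supervised/unsupervised distinction above can be shown in a few lines. In this hedged sketch (synthetic two-group data and illustrative model choices, not tied to any system named in the text), a supervised classifier learns from labels we provide, while a clustering algorithm recovers the same two groups from the data's structure alone:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Two well-separated synthetic "patient" groups in a 2-D feature space.
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
               rng.normal(3.0, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised learning: the labels y drive the training.
supervised = LogisticRegression().fit(X, y)

# Unsupervised learning: no labels; only the data's structure is used.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Unsupervised methods remove the labelling bottleneck, which is exactly why the field is shifting toward them, and also why overseeing what a machine extracts from unlabelled data becomes harder.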
Why Is Teaching Machines What Is Right Important for the Human Race?
Humans sit at the top of the food chain because of their intelligence: they could put dangerous snakes and tigers in cages. Today we are training machines to be smarter than us. Do we need to protect humans and make these machines slaves to humans? Do we want them to be like the friendly Siri, Cortana, and Alexa, or like the rogue heuristically programmed algorithmic computer (HAL) of “2001: A Space Odyssey,” which killed the crew of its spaceship for the sake of its program?[12] To prevent the latter, we must ensure that human physicians are informed of all reasons and decisions taken by the machine. The human operator must also possess a veto power or a manual override.
The futuristic dystopian extreme of machines displacing human knowledge workers appears unlikely. Throughout recorded history, technological advances have consistently made the majority of workers richer and given them more leisure time. When Kelman invented phacoemulsification, and when we started using computers to sculpt corneas, everyone gained: the patient, the practitioner, and the industry. History is replete with examples of “man-with-machine” progress. In medical AI applications, however, the healthcare domain experts have acted only as raters and dataset providers for number crunching; they have not become integral to guiding the development of AI algorithms in healthcare. Failure of public institutions and oversight mechanisms to protect the vulnerable would be an irrevocable mistake. We may be teaching these machines disdain for human-ordained rules. That may prove to be the costliest failure of mankind.
The “big red switch” needs to remain firmly in the hands of human operators or agencies even if the machine becomes artificially superintelligent, surpassing humans in all cognitive domains. AI, like fire, is a great slave but a poor master. Beyond a certain stage of development, we may not be able to control it, so we need to inculcate in these machines the rules, respect for beneficence, and regard for human lives. The need for a strong computational/AI bioethics framework, developed in consultation with the medical fraternity, cannot be overemphasized.
References
1.
2.
3.
4.
5.
6.
7. Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol 2017;7:351-67.
8.
9. Abràmoff MD, Garvin MK, Sonka M. Retinal imaging and image analysis. IEEE Rev Biomed Eng 2010;3:169-208.
10. Krause J, Gulshan V, Rahimy E, Karth P, Widner K, Corrado G, et al. Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Ophthalmology 2018;125:1264-72.
11.
12. Clarke A, Kubrick S. 2001: A Space Odyssey. New York: Orbit; 2012.
Authors
The author is an Ophthalmologist by training. He has been working on computers and allied areas since 1984. His instruction courses on “Use of Computers for Managing Patient Data and Conducting Research” and “Do-It-Yourself Statistics for the Practitioner” have been well received by the community at national and international levels. He has an MBA in Operations Research with specialization in Information Systems and Technology Management. He was formerly the Member Secretary of the Institutional Ethics Committee and the Nodal Officer (Information Technology) of a thousand-bedded medical college hospital with an integrated healthcare information system in the national capital. He has been Assistant DNB Co-ordinator and Member Secretary of the Hospital Scientific Committee. He has been Chief Instructor at the World Ophthalmology Congress and invited international faculty for the Asia Pacific Academy of Ophthalmology and the American Academy of Ophthalmology. He has been faculty for the Delhi Ophthalmological Society and has chaired sessions in seven All India Ophthalmological Society conferences on research, statistics and computer-related issues. He has been a peer reviewer for the British Journal of Ophthalmology, Journal of Venomous Animals and Toxins, Canadian Medical Journal, Singapore Medical Journal, Indian Journal of Ophthalmology, Annals of Health and Medical Research, Journal of Clinical Research and Ophthalmology, Saudi Journal of Ophthalmology and a host of other publications. He has been on the editorial board of different journals. He has authored two books, including “Basics of Biostatistics: A Manual for Medical Practitioners”. He is currently the Chairman of the DOS Subcommittee for Information Technology, Practice Automation and Clinical Informatics set up by the Delhi Ophthalmological Society.