Artificial intelligence in healthcare is an overarching term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease.   Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.
The primary aim of health-related AI applications is to analyze relationships between clinical data and patient outcomes. AI programs are applied to practices such as diagnostics, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. What differentiates AI technology from traditional technologies in healthcare is the ability to gather larger and more diverse data, process it, and produce a well-defined output for the end-user. AI does this through machine learning algorithms and deep learning; these processes can recognize patterns in behavior and create their own logic. To gain useful insights and predictions, machine learning models must be trained on extensive amounts of input data. Because radiographs are the most common imaging tests conducted in most radiology departments, the potential for AI to help with triage and interpretation of traditional radiographs (X-ray pictures) is particularly noteworthy. AI algorithms behave differently from humans in two ways: (1) algorithms are literal: once a goal is set, an algorithm learns exclusively from the input data and can only understand what it has been programmed to do; and (2) some deep learning algorithms are black boxes: they can predict with extreme precision but offer little to no comprehensible explanation for the logic behind their decisions, aside from the data and type of algorithm used.
As widespread use of AI in healthcare is relatively new, research is ongoing into its application in various fields of medicine and industry. Additionally, greater consideration is being given to the unprecedented ethical concerns related to its practice such as data privacy, automation of jobs, and representation biases.  Furthermore, new technologies brought about by AI in healthcare are often resisted by healthcare leaders, leading to slow and erratic adoption. 
Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system, MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however.
The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks have been applied to intelligent computing systems in healthcare.
Several medical and technological advancements occurring over this half-century period have enabled the growth of healthcare-related applications of AI.
AI algorithms can also be used to analyze large amounts of data through electronic health records for disease prevention and diagnosis. Medical institutions such as the Mayo Clinic, Memorial Sloan Kettering Cancer Center, and the British National Health Service have developed AI algorithms for their departments. Large technology companies such as IBM and Google have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI software to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs. Currently, the United States government is investing billions of dollars to progress the development of AI in healthcare. Companies are developing technologies that help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing length of stay, and optimizing staffing levels.
Artificial intelligence algorithms have shown promising results in accurately diagnosing and risk stratifying patients with concern for coronary artery disease, showing potential as an initial triage tool,   though few studies have directly compared the accuracy of machine learning models to clinician diagnostic ability.  Other algorithms have been used in predicting patient mortality, medication effects, and adverse events following treatment for acute coronary syndrome.  Wearables, smartphones, and internet-based technologies have also shown the ability to monitor patients' cardiac data points, expanding the amount of data and the various settings AI models can use and potentially enabling earlier detection of cardiac events occurring outside of the hospital.  Another growing area of research is the utility of AI in classifying heart sounds and diagnosing valvular disease.  Challenges of AI in cardiovascular medicine have included the limited data available to train machine learning models, such as limited data on social determinants of health as they pertain to cardiovascular disease. 
A key limitation of studies to date is that they have not compared algorithmic performance to humans. Two exceptions include showing AI is noninferior to humans in interpretation of cardiac echocardiograms  and that AI can diagnose heart attack better than human physicians in the emergency setting, reducing both low-value testing and missed diagnoses. 
Dermatology is an image-rich specialty, and the development of deep learning has been strongly tied to image processing; there is therefore a natural fit between dermatology and deep learning. There are three main imaging types in dermatology: contextual images, macro images, and micro images. Deep learning has shown great progress in each modality. Han et al. demonstrated keratinocytic skin cancer detection from face photographs. Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images. Noyan et al. demonstrated a convolutional neural network that achieved 94% accuracy at identifying skin cells from microscopic Tzanck smear images. A concern raised with this work is that it has not engaged with disparities related to skin color or differential treatment of patients with non-white skin tones.
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.  
In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN machine. 
AI can play a role in various facets of the field of gastroenterology. Endoscopic exams such as esophagogastroduodenoscopies (EGD) and colonoscopies rely on rapid detection of abnormal tissue. By enhancing these endoscopic procedures with AI, clinicians can more rapidly identify diseases, determine their severity, and visualize blind spots. Early trials in using AI detection systems of early gastric cancer have shown sensitivity close to expert endoscopists. 
AI has shown potential in both the laboratory and clinical spheres of infectious disease medicine. With the global spread of the novel coronavirus, the United States is estimated to invest more than $2 billion in AI-related healthcare research by 2025, more than four times the amount spent in 2019 ($463 million). While neural networks have been developed to rapidly and accurately detect a host response to COVID-19 from mass spectrometry samples, a scoping review of the literature found few examples of AI being used directly in clinical practice during the COVID-19 pandemic itself. Other applications include support-vector machines identifying antimicrobial resistance, machine learning analysis of blood smears to detect malaria, and improved point-of-care testing for Lyme disease based on antigen detection. Additionally, AI has been investigated for improving the diagnosis of meningitis, sepsis, and tuberculosis, as well as predicting treatment complications in hepatitis B and hepatitis C patients. Recently, AI has also been applied to predicting pharmacokinetic parameters in different clinical situations.
AI has been used to identify causes of knee pain that doctors miss and that disproportionately affect Black patients. Underserved populations experience higher levels of pain. These disparities persist even after controlling for the objective severity of diseases like osteoarthritis, as graded by human physicians using medical images, raising the possibility that underserved patients' pain stems from factors external to the knee, such as stress. Researchers conducted a study using a machine-learning algorithm to show that standard radiographic measures of severity overlook objective but undiagnosed features that disproportionately affect the diagnosis and management of underserved populations with knee pain. They proposed that their new algorithmic measure, ALG-P, could potentially enable expanded access to treatments for underserved patients.
AI has been explored for use in cancer diagnosis, risk stratification, molecular characterization of tumors, and cancer drug discovery. A particular challenge in oncologic care that AI is being developed to address is the ability to accurately predict which treatment protocols will be best suited for each patient based on their individual genetic, molecular, and tumor-based characteristics. Through its ability to translate images to mathematical sequences, AI has been trialed in cancer diagnostics with the reading of imaging studies and pathology slides. In January 2020, researchers demonstrated an AI system, based on a Google DeepMind algorithm, capable of surpassing human experts in breast cancer detection. In July 2020, it was reported that an AI algorithm developed by the University of Pittsburgh achieved the highest accuracy to date in identifying prostate cancer, with 98% sensitivity and 97% specificity.
Eye care has also seen the use of artificial intelligence-enhanced technology to aid in the screening of eye disease. In 2018, the U.S. Food and Drug Administration authorized the marketing of the first medical device to diagnose a specific type of eye disease, diabetic retinopathy, using an artificial intelligence algorithm. Moreover, AI technology may be utilized to further improve "diagnosis rates" because of its potential to decrease detection time.
For many diseases, pathological analysis of cells and tissues is considered to be the gold standard of disease diagnosis. Methods of digital pathology allow microscopy slides to be scanned and digitally analyzed. AI-assisted pathology tools have been developed to assist with the diagnosis of a number of diseases, including breast cancer, hepatitis B, gastric cancer, and colorectal cancer. AI has also been used to predict genetic mutations and prognosticate disease outcomes. AI is well-suited for use in low-complexity pathological analysis of large-scale screening samples, such as colorectal or breast cancer screening, thus lessening the burden on pathologists and allowing for faster turnaround of sample analysis. Several deep learning and artificial neural network models have shown accuracy similar to that of human pathologists, and a study of deep learning assistance in diagnosing metastatic breast cancer in lymph nodes showed that the accuracy of humans with the assistance of a deep learning program was higher than either the humans alone or the AI program alone. Additionally, implementation of digital pathology is predicted to save over $12 million for a university center over the course of five years, though savings attributed to AI specifically have not yet been widely researched. The use of augmented and virtual reality could prove to be a stepping stone to wider implementation of AI-assisted pathology, as they can highlight areas of concern on a pathology sample and present them in real-time to a pathologist for more efficient review. AI also has the potential to identify histological findings at levels beyond what the human eye can see, and has shown the ability to utilize genotypic and phenotypic data to more accurately detect the tumor of origin for metastatic cancer.
One of the major current barriers to widespread implementation of AI-assisted pathology tools is the lack of prospective, randomized, multi-center controlled trials in determining the true clinical utility of AI for pathologists and patients, highlighting a current area of need in AI and healthcare research. 
Primary care has become one key development area for AI technologies. AI in primary care has been used for supporting decision making, predictive modelling, and business analytics. There are only a few examples of AI decision support systems that were prospectively assessed for clinical efficacy when used in practice by physicians, but there are cases where the use of these systems yielded a positive effect on physicians' treatment choices.
In psychiatry, AI applications are still in a proof-of-concept phase. Areas where the evidence is widening quickly include predictive modelling of diagnosis and treatment outcomes, and chatbots: conversational agents that imitate human behaviour and which have been studied for anxiety and depression.
Challenges include the fact that many applications in the field are developed and proposed by private corporations, such as the screening for suicidal ideation implemented by Facebook in 2017. Such applications outside the healthcare system raise various professional, ethical, and regulatory questions. Another recurring issue is the validity and interpretability of the models. Small training datasets contain bias that is inherited by the models, compromising their generalizability and stability. Such models may also have the potential to be discriminatory against minority groups that are underrepresented in samples.
AI is being studied within the field of radiology to detect and diagnose diseases through Computerized Tomography (CT) and Magnetic Resonance (MR) Imaging.  It may be particularly useful in settings where demand for human expertise exceeds supply, or where data is too complex to be efficiently interpreted by human readers.  Several deep learning models have shown the capability to be roughly as accurate as healthcare professionals in identifying diseases through medical imaging, though few of the studies reporting these findings have been externally validated.  AI can also provide non-interpretive benefit to radiologists, such as reducing noise in images, creating high-quality images from lower doses of radiation, enhancing MR image quality,  and automatically assessing image quality.  Further research investigating the use of AI in nuclear medicine focuses on image reconstruction, anatomical landmarking, and the enablement of lower doses in imaging studies. 
An article by Jiang et al. (2017) demonstrated that several types of AI techniques have been used for a variety of different diseases, such as support vector machines, neural networks, and decision trees. Each of these techniques is described as having a "training goal" so that "classifications agree with the outcomes as much as possible…".
As a specific example of disease diagnosis and classification, two different techniques, artificial neural networks (ANN) and Bayesian networks (BN), have been compared; ANN was found to perform better and could more accurately classify diabetes and cardiovascular disease (CVD).
Through the use of medical learning classifiers (MLCs), artificial intelligence has been able to substantially aid doctors in patient diagnosis through the analysis of mass electronic health records (EHRs). Medical conditions have grown more complex, and with a vast history of electronic medical records accumulating, the likelihood of case duplication is high. Although someone today with a rare illness is unlikely to be the only person to have had a given disease, the inability to access cases of similarly symptomatic origin is a major roadblock for physicians. AI can not only help find similar cases and treatments, such as through early predictors of Alzheimer's disease and dementias, but also factor in chief symptoms and help physicians ask the most appropriate questions, so that the patient receives the most accurate diagnosis and treatment possible.
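As an illustration of how similar-case retrieval from an EHR archive might work, the sketch below scores past records by symptom overlap using Jaccard similarity. The archive, case identifiers, and symptom sets are invented for demonstration; real systems draw on far richer EHR features than plain symptom lists.

```python
# Hypothetical sketch: rank archived cases by symptom overlap with a new
# patient. All case data below is invented for illustration.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union| of two symptom sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def most_similar_case(new_symptoms: set, archive: dict) -> str:
    """Return the archived case ID whose symptoms best match the new patient."""
    return max(archive, key=lambda cid: jaccard(new_symptoms, archive[cid]))

archive = {
    "case-001": {"fever", "cough", "fatigue"},
    "case-002": {"memory loss", "confusion", "fatigue"},
    "case-003": {"rash", "joint pain"},
}

print(most_similar_case({"confusion", "memory loss"}, archive))  # → case-002
```

A production system would weight symptoms by specificity and incorporate labs, imaging, and history, but the retrieval principle is the same.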
Recent developments in statistical physics, machine learning, and inference algorithms are being explored for their potential in improving medical diagnostic approaches. Combining the skills of medical professionals and machines can help overcome decision-making weaknesses in medical practice. To do so, one needs precise disease definitions and a probabilistic analysis of symptoms and molecular profiles. Physicists have been studying similar problems for years, utilizing microscopic elements and their interactions to extract macroscopic states of various physical systems. Physics-inspired machine learning approaches can thus be applied to study disease processes and to perform biomarker analysis.
The rise of telemedicine, the remote treatment of patients, has opened up a range of possible AI applications. AI can assist in caring for patients remotely by monitoring their information through sensors. A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans. The information can be compared to other data that has already been collected using artificial intelligence algorithms, which alert physicians if there are any issues to be aware of.
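A minimal sketch of the kind of monitoring described above, assuming a simple statistical baseline: readings that deviate sharply from a patient's recent mean are flagged for review. The heart-rate values and z-score threshold here are illustrative only; deployed systems use considerably more sophisticated models.

```python
# Illustrative sketch: flag sensor readings far from the patient's baseline.
# Values and the threshold of 2 standard deviations are invented examples.
from statistics import mean, stdev

def flag_anomalies(readings: list, threshold: float = 2.0) -> list:
    """Return readings more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [r for r in readings if sigma and abs(r - mu) / sigma > threshold]

heart_rate = [72, 74, 71, 73, 75, 70, 72, 118]  # one abnormal spike
print(flag_anomalies(heart_rate))  # → [118]
```

In practice, the baseline would be computed over a rolling window so that the model adapts to a patient's normal daily variation.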
Another application of artificial intelligence is chatbot therapy. Some researchers charge, however, that reliance on chatbots for mental healthcare does not offer the reciprocity and accountability of care that should exist in the relationship between the consumer of mental healthcare and the care provider (be it a chatbot or psychologist).
Since the average age has risen due to a longer life expectancy, artificial intelligence could be useful in helping take care of older populations.  Tools such as environment and personal sensors can identify a person's regular activities and alert a caretaker if a behavior or a measured vital is abnormal.  Although the technology is useful, there are also discussions about limitations of monitoring in order to respect a person's privacy since there are technologies that are designed to map out home layouts and detect human interactions. 
Electronic health records (EHR) are crucial to the digitalization and information spread of the healthcare industry. Now that around 80% of medical practices use EHR, the next step is to use artificial intelligence to interpret the records and provide new information to physicians. 
One application uses natural language processing (NLP) to make reports more succinct and to limit the variation between medical terms by matching similar terms. For example, "heart attack" and "myocardial infarction" mean the same thing, but physicians may use one over the other based on personal preference. NLP algorithms consolidate these differences so that larger datasets can be analyzed. Another use of NLP identifies phrases that are redundant due to repetition in a physician's notes and keeps only the relevant information, making the notes easier to read. Other applications use concept processing to analyze the information entered by the current patient's doctor to present similar cases and help the physician remember to include all relevant details.
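A toy sketch of the consolidation step, assuming a hand-written synonym map. A real system would draw on a clinical ontology or standardized vocabulary rather than this invented dictionary:

```python
# Hypothetical sketch: normalize synonymous clinical terms so that notes
# written with different vocabularies can be pooled into one dataset.
# The synonym map is illustrative, not a real medical terminology resource.
import re

SYNONYMS = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
    "stroke": "cerebrovascular accident",
}

def normalize_note(text: str) -> str:
    """Replace known synonyms with a canonical term (case-insensitive)."""
    for variant, canonical in SYNONYMS.items():
        text = re.sub(re.escape(variant), canonical, text, flags=re.IGNORECASE)
    return text

note = "Patient admitted after a heart attack; history of high blood pressure."
print(normalize_note(note))
# → Patient admitted after a myocardial infarction; history of hypertension.
```

Once terms are mapped to one canonical form, counts and cohorts computed over the dataset no longer fragment across a physician's personal word choices.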
Beyond making content edits to an EHR, there are AI algorithms that evaluate an individual patient's record and predict a risk for a disease based on their previous information and family history. One general approach is a rule-based system that makes decisions similarly to how humans use flow charts. This system takes in large amounts of data and creates a set of rules that connect specific observations to concluded diagnoses. Thus, the algorithm can take in a new patient's data and try to predict the likelihood that they will have a certain condition or disease. Since the algorithms can evaluate a patient's information against collective data, they can find outstanding issues to bring to a physician's attention and save time. One study conducted by the Centerstone research institute found that predictive modeling of EHR data achieved 70–72% accuracy in predicting individualized treatment response. These methods are helpful because the amount of online health records doubles every five years. Physicians do not have the bandwidth to process all this data manually, and AI can leverage it to assist physicians in treating their patients.
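The flow-chart style of a rule-based system can be sketched as follows. The observations, thresholds, and risk labels are invented for illustration only and do not reflect clinical guidelines:

```python
# Illustrative sketch of a rule-based risk system: each rule links
# observations to a concluded risk level, much like a flow chart.
# All conditions and thresholds below are invented for demonstration.

def assess_risk(patient: dict) -> str:
    """Walk an ordered rule list; the first matching rule decides the risk."""
    rules = [
        (lambda p: p["systolic_bp"] >= 180, "high"),
        (lambda p: p["systolic_bp"] >= 140 and p["smoker"], "high"),
        (lambda p: p["systolic_bp"] >= 140 or p["family_history"], "moderate"),
    ]
    for condition, risk in rules:
        if condition(patient):
            return risk
    return "low"

print(assess_risk({"systolic_bp": 150, "smoker": True, "family_history": False}))
# → high
```

The appeal of this design is transparency: each conclusion can be traced to the exact rule that fired, which is one reason rule-based systems remain common in decision support despite the rise of opaque deep learning models.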
Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature.     Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken.  To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature. Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms.  Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were.  Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.   
Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and/or adverse event reports.   Organizations such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization's VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions. 
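A simplified sketch of the pattern-detection idea: counting how often drug pairs co-occur across adverse event reports, a precursor to the disproportionality statistics used in pharmacovigilance. The reports and drug names are invented examples, not real FAERS or VigiBase data:

```python
# Illustrative sketch: tally drug pairs that co-occur in adverse event
# reports; unusually frequent pairs are candidates for interaction review.
# All report contents below are invented for demonstration.
from collections import Counter
from itertools import combinations

reports = [
    {"warfarin", "aspirin"},
    {"warfarin", "aspirin", "metformin"},
    {"metformin", "lisinopril"},
]

pair_counts = Counter()
for drugs in reports:
    # Sort so each unordered pair is counted under one canonical key.
    for pair in combinations(sorted(drugs), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))  # → [(('aspirin', 'warfarin'), 2)]
```

Real systems go further, comparing each pair's reporting frequency against an expected baseline before flagging it, but the co-occurrence tally is the starting point.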
The trend of large health companies merging allows for greater health data accessibility. Greater health data lays the groundwork for the implementation of AI algorithms.
A large part of the industry's focus on implementing AI in the healthcare sector is on clinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions. Numerous companies are exploring the possibilities of incorporating big data in the healthcare industry. Many companies investigate the market opportunities through the realms of "data assessment, storage, management, and analysis technologies", which are all crucial parts of the healthcare industry.
The following are examples of large companies that have contributed to AI algorithms for use in healthcare:
Digital consultant apps use AI to give medical consultations based on personal medical history and common medical knowledge. Users report their symptoms to the app, which uses speech recognition to compare against a database of illnesses. Babylon, one such app, then offers a recommended action, taking into account the user's medical history. Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and on value-capturing mechanisms (e.g. providing information or connecting stakeholders).
IFlytek launched a service robot "Xiao Man", which integrated artificial intelligence technology to identify the registered customer and provide personalized recommendations in medical areas. It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and Softbank Robotics ("Pepper").
With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for the acquisition of smaller AI-based companies. Many automobile manufacturers are also beginning to apply machine learning to health monitoring in their cars. Companies such as BMW, GE, Tesla, Toyota, and Volvo all have new research campaigns to find ways of monitoring a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances or in emotional distress.
Artificial intelligence continues to expand in its abilities to diagnose more people accurately in nations where fewer doctors are accessible to the public. Many new technology companies such as SpaceX and the Raspberry Pi Foundation have enabled more developing countries to have access to computers and the internet than ever before.  With the increasing capabilities of AI over the internet, advanced machine learning algorithms can allow patients to get accurately diagnosed when they would previously have no way of knowing if they had a life-threatening disease or not. 
Using AI in developing nations that lack resources can diminish the need for outsourcing and improve patient care. AI can allow not only for the diagnosis of patients in areas where healthcare is scarce, but also for a good patient experience by drawing on patient files to find the best treatment. The ability of AI to adjust course as it goes also allows a patient's treatment to be modified based on what works for them, a level of individualized care that is nearly non-existent in developing countries.
While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before its broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, do-not-resuscitate implications, and other machine morality issues. AI may also compromise the protection of patients' rights, such as the right to informed consent and the right to medical data protection. These challenges of the clinical use of AI have brought about a potential need for regulations.
Currently, there are regulations pertaining to the collection of patient data. These include policies such as the Health Insurance Portability and Accountability Act (HIPAA) and the European General Data Protection Regulation (GDPR). The GDPR pertains to patients within the EU and details the consent requirements for patient data use when entities collect patient healthcare data. Similarly, HIPAA protects healthcare data from patient records in the United States. In May 2016, the White House announced its plan to host a series of workshops and the formation of the National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence. In October 2016, the group published The National Artificial Intelligence Research and Development Strategic Plan, outlining its proposed priorities for federally funded AI research and development (within government and academia). The report noted that a strategic R&D plan for the subfield of health information technology was still in development.
To date, the FDA is the only agency to have expressed concern. Bakul Patel, Associate Center Director for Digital Health at the FDA, said in May 2017:
"We're trying to get people who have hands-on development experience with a product's full life cycle. We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve."
The joint ITU-WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) has built a platform, known as the ITU-WHO AI for Health Framework, for the testing and benchmarking of AI applications in the health domain. As of November 2018, eight use cases were being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.
In January 2021, the US FDA published a new Action Plan, entitled Artificial Intelligence (AI)/Machine Learning (ML)-Based Software as a Medical Device (SaMD) Action Plan. This plan lays out the FDA's future plans for the regulation of medical devices that include artificial intelligence in their software. The FDA plans to take five main actions to increase regulation: (1) a tailored regulatory framework for AI/ML-based SaMD, (2) good machine learning practice (GMLP), (3) a patient-centered approach incorporating transparency to users, (4) regulatory science methods related to algorithm bias and robustness, and (5) real-world performance (RWP). This plan was in direct response to stakeholders' feedback on a 2019 discussion paper also published by the FDA.
According to the U.S. Department of Health and Human Services, the Office for Civil Rights (OCR) has issued guidance on the ethical use of AI in healthcare. The guidance outlines four core ethical principles that must be followed: respect for autonomy, beneficence, non-maleficence, and justice. Respect for autonomy requires that individuals have control over their own data and decisions. Beneficence requires that AI be used to do good, such as improving the quality of care and reducing health disparities. Non-maleficence requires that AI do no harm, such as by avoiding discrimination in decisions. Finally, justice requires that AI be used fairly, such as applying the same standards to decisions regardless of a person's race, gender, or income level. Moreover, as of March 2021, the OCR hired a Chief Artificial Intelligence Officer (OCAIO) to pursue the "implementation of the HHS AI strategy". The OCR has also issued rules and regulations to protect the privacy of individuals' health information. These regulations require healthcare providers to follow certain privacy rules when using AI. The OCR also requires healthcare providers to keep a record of how they use AI and to ensure that their AI systems are secure. Overall, the U.S. has taken steps to protect individuals' privacy and address ethical issues related to AI in healthcare.
The U.S. is not the only country to develop or initiate regulation of data privacy with respect to AI. Other countries have implemented data protection regulations as well, particularly in response to privacy invasions by companies. In Denmark, the Danish Expert Group on Data Ethics has adopted recommendations on 'Data for the Benefit of the People'. These recommendations are intended to encourage the responsible use of data in the business sector, with a focus on data processing. The recommendations include a focus on equality and non-discrimination with regard to bias in AI, as well as on human dignity; the importance of human dignity is stressed, as it is said to outweigh profit and must be respected in all data processes.
The European Union has implemented the General Data Protection Regulation (GDPR) to protect citizens' personal data, which applies to the use of AI in healthcare. In addition, the European Commission has established guidelines to ensure the ethical development of AI, including the use of algorithms to ensure fairness and transparency. With the GDPR, the European Union was the first to regulate AI through data protection legislation. The Union regards privacy as a fundamental human right and seeks to prevent unconsented and secondary uses of data by private or public health facilities, while streamlining access to personal data for health research and affirming the right to and importance of patient privacy. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires organizations to protect the privacy and security of patient information. The Centers for Medicare and Medicaid Services have also released guidelines for the development of AI-based medical applications.
In order to effectively train machine-learning models and use AI in healthcare, massive amounts of data must be gathered. Acquiring this data, however, often comes at the cost of patient privacy and is not well received publicly. For example, a survey conducted in the UK estimated that 63% of the population is uncomfortable with sharing their personal data in order to improve artificial intelligence technology. The scarcity of real, accessible patient data is a hindrance that slows the development and deployment of artificial intelligence in healthcare.
According to a 2019 study, AI could replace up to 35% of jobs in the UK within the next 10 to 20 years. So far, however, AI has not eliminated any healthcare jobs. If AI were to automate healthcare-related work, the jobs most susceptible to automation would be those dealing with digital information, radiology, and pathology, rather than those centered on doctor-patient interaction.
Automation can also benefit doctors directly. Doctors who take advantage of AI in healthcare are expected to provide higher-quality care than doctors and medical establishments who do not. AI will likely not completely replace healthcare workers, but rather give them more time to attend to their patients, and may avert healthcare worker burnout and cognitive overload.
AI may ultimately contribute to societal goals such as better communication, improved quality of healthcare, and greater autonomy.
Recently, there have been many discussions among healthcare experts about AI and elder care. AI bots have been used in assisted-living facilities to provide older residents with entertainment and company, freeing staff to spend more one-on-one time with each resident. The bots are also programmed with broader capabilities, such as speaking different languages and adapting care to each patient's condition. Like any other machine-learning system, these bots are trained with algorithms that parse the given data, learn from it, and predict an outcome for the situation at hand.
Since AI makes decisions solely on the data it receives as input, it is important that this data accurately represents patient demographics. In a hospital setting, patients do not have full knowledge of how predictive algorithms are created or calibrated; medical establishments could therefore unfairly code their algorithms to discriminate against minorities and prioritize profit over optimal care. A recent scoping review identified 18 equity challenges, along with 15 strategies that can be implemented to help address them when AI applications are developed, using many-to-many mapping.
There can also be unintended bias in these algorithms that can exacerbate social and healthcare inequities. Since AI's decisions are a direct reflection of its input data, that data must accurately represent patient demographics. White males are overrepresented in medical data sets. Having minimal patient data on minorities can therefore lead AI to make more accurate predictions for majority populations, resulting in unintended worse medical outcomes for minority populations. Collecting data from minority communities can also lead to medical discrimination. For instance, HIV is a prevalent virus among minority communities, and HIV status can be used to discriminate against patients. In addition to biases that may arise from sample selection, the different clinical systems used to collect data may also affect AI functionality. For example, radiographic systems and their outcomes (e.g., resolution) vary by provider. Moreover, clinician work practices, such as the positioning of the patient for radiography, can also greatly influence the data and make comparability difficult. However, these biases can be mitigated through careful implementation and a methodical collection of representative data.
A final source of bias, which has been called “label choice bias,” arises when the proxy measures used to train algorithms build in bias against certain groups. For example, a widely used algorithm predicted healthcare costs as a proxy for healthcare needs, and used those predictions to allocate resources to patients with complex health needs. This introduced bias because Black patients incur lower healthcare costs, even when they are just as unhealthy as White patients. Solutions to label choice bias aim to match the actual target (what the algorithm is predicting) more closely to the ideal target (what researchers want the algorithm to predict); in the prior example, instead of predicting cost, researchers would predict healthcare needs directly. Adjusting the target led to almost double the number of Black patients being selected for the program.
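The effect of swapping the proxy target for the ideal target can be illustrated with a toy ranking, under assumed numbers (the patient records, group labels, need scores, and costs below are hypothetical, not figures from the study): ranking patients by the cost proxy selects fewer group-B patients than ranking by need, even though need is identical across groups.

```python
# Toy illustration of label choice bias (hypothetical numbers).
# Each patient: (group, chronic_conditions, annual_cost). Because of unequal
# access to care, group "B" patients incur lower cost at the same level of need.
patients = [
    ("A", 5, 50_000), ("A", 4, 40_000), ("A", 3, 30_000), ("A", 2, 20_000),
    ("B", 5, 30_000), ("B", 4, 24_000), ("B", 3, 18_000), ("B", 2, 12_000),
]

def top_k(key, k=4):
    # Enroll the k patients ranked highest by the chosen target.
    return sorted(patients, key=key, reverse=True)[:k]

# Actual target = cost (the biased proxy): high-cost patients get the program.
by_cost = top_k(lambda p: p[2])
# Ideal target = need (chronic conditions).
by_need = top_k(lambda p: p[1])

print("selected by cost:", sum(p[0] == "B" for p in by_cost), "group-B patients")  # → 1
print("selected by need:", sum(p[0] == "B" for p in by_need), "group-B patients")  # → 2
```

Here changing only the prediction target, with the same data and the same ranking rule, doubles group-B enrollment, mirroring the adjustment described above.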