AI Series 1: Beyond the Buzz and the Hype: What is AI?

Many of you have likely heard of or even used artificial intelligence (AI) tools such as ChatGPT. While chatbots and “natural language processing” are not new concepts, the introduction of the GPT (Generative Pre-trained Transformer) models by OpenAI marked a significant milestone in AI, profoundly influencing how we interact with technology in everyday life. Although ChatGPT is a powerful example of AI, it is not the only model capable of performing tasks intelligently. Numerous other algorithms, both predating and succeeding ChatGPT, have been designed to emulate human intelligence and address challenges across diverse domains.

AI often serves as a catch-all term to describe any form of machine intelligence that mimics human cognitive abilities. These systems include explicitly programmed decision support tools as well as adaptive algorithms capable of learning and improving over time. However, contemporary discussions about AI often focus on a subset known as machine learning (ML). ML refers to systems that demonstrate human-like intelligence without being explicitly programmed for specific tasks. To understand this distinction, let’s consider the following two examples:

  1. A decision support system for diagnosing anemia might function as follows: If hemoglobin (Hb) levels fall below a defined threshold, the system identifies anemia. Depending on mean corpuscular volume (MCV) values, it further categorizes the anemia as microcytic, macrocytic, or normocytic (Figure 1). This type of system operates based on fixed, explicitly programmed rules applied uniformly to all input data. It is non-adaptive, meaning it neither learns from the data it processes nor evolves over time.

Figure 1: Python code for a program that can determine the existence and type of anemia (left) and a flowchart showing the steps (right). In this case, the system is explicitly programmed using “if…then” statements and is thus non-adaptive. While a skilled coder might refine this code for efficiency, its essence lies in its explicit programming – a hallmark of traditional decision support systems.
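The figure's code is not reproduced here, but a rule-based classifier along these lines might look as follows. This is a minimal sketch: the threshold values (Hb in g/L, MCV in fL) are illustrative only, since real cut-offs vary by sex, age, and laboratory, and this is not clinical guidance.

```python
def classify_anemia(hb: float, mcv: float) -> str:
    """Non-adaptive, rule-based anemia classifier.

    Every input is run through the same fixed "if...then" rules;
    nothing here learns or changes with the data it sees.
    Thresholds are illustrative, not clinical reference values.
    """
    if hb >= 130:  # Hb at or above threshold: no anemia
        return "No anemia"
    # Anemia present; subtype by red-cell size (MCV)
    if mcv < 80:
        return "Microcytic anemia"
    elif mcv > 100:
        return "Macrocytic anemia"
    else:
        return "Normocytic anemia"

print(classify_anemia(100, 70))  # a low-Hb, low-MCV sample
```

However cleverly the rules are written, the logic never changes: the program applies the same branches to every patient, which is exactly what makes it non-adaptive.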

  2. A system for detecting Parkinson’s Disease from voice samples could be designed as a neural network trained to analyze voice samples and predict whether a person has Parkinson’s Disease. Unlike the anemia example, this system learns from patterns in the data. Over time, it becomes increasingly accurate at identifying the subtle features indicative of the disease (Figure 2). This adaptability is the defining characteristic of machine learning.

Figure 2: An adaptive algorithm for Parkinson’s Disease diagnosis: an adaptive neural network analyzes voice samples, identifying unique features to categorize input as “Parkinson’s” or “Not Parkinson’s.” Some algorithms may even provide a confidence percentage, indicating how certain they are about the diagnosis and could thus make the output a bit more fuzzy, just like medicine!
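Figure 2 describes a full neural network; the toy sketch below shows the same idea at the smallest possible scale. A single artificial "neuron" (a logistic unit) starts out guessing at chance and adapts its weights to synthetic "voice feature" data by gradient descent. The two features, the data distributions, and all numbers are invented purely for illustration and bear no relation to real Parkinson's datasets.

```python
import math
import random

random.seed(0)

# Synthetic "voice features" (e.g., jitter, shimmer) -- invented numbers.
# Label 1 = "Parkinson's", 0 = "Not Parkinson's".
data = [([random.gauss(0.6, 0.1), random.gauss(0.5, 0.1)], 1) for _ in range(50)] + \
       [([random.gauss(0.3, 0.1), random.gauss(0.2, 0.1)], 0) for _ in range(50)]

w, b = [0.0, 0.0], 0.0  # the "knowledge" the system will learn

def predict(x):
    """Confidence (0 to 1) that the sample is 'Parkinson's'."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

def accuracy():
    return sum((predict(x) >= 0.5) == bool(y) for x, y in data) / len(data)

before = accuracy()  # untrained: every prediction is 0.5, so chance level

lr = 0.5
for _ in range(200):            # the learning loop: weights adapt to the data
    for x, y in data:
        err = predict(x) - y    # gradient of the log-loss w.r.t. the neuron's output
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

after = accuracy()
print(f"accuracy before training: {before:.2f}, after: {after:.2f}")
```

Note that `predict` returns a confidence rather than a hard label, which is the "fuzzy" output the caption mentions; the hard "Parkinson's"/"Not Parkinson's" call only appears when we threshold it at 0.5.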

Both these examples fall under the broad umbrella of AI, but only the latter exemplifies ML. ChatGPT is an ML-based tool; more specifically, a generative AI (Gen AI) algorithm and a large language model (LLM). Recent upgrades to its Pro versions have made it multimodal, allowing inputs and outputs across text, voice, and image modalities. However, it is essential to acknowledge the limitations of Gen AI. These systems can “hallucinate” or “confabulate” (i.e., fabricate information) because they generate statistically plausible text rather than retrieve verified facts, often presenting errors with unwarranted confidence; we need to make sure we don’t mistake their confidence for competence.

AI tools are already transforming daily life, aiding in tasks like brainstorming ideas, editing text, and even assisting clinicians with diagnoses. The implications for healthcare specifically are almost limitless. Medical knowledge is growing exponentially: whereas synthesizing all the available information in a specific medical specialty may once have been feasible, it is now nearly impossible for any one clinician to keep up with the sheer volume of data and new insights. AI algorithms designed to assist in synthesizing this knowledge could help bridge this gap. For instance, imagine an algorithm capable of sifting through vast volumes of medical literature to provide relevant, evidence-based insights in seconds. Such tools could save hours of research, but they still require human expertise for interpretation and contextualization. Think of it as a more advanced version of “Dr. Google” – extremely helpful, but not a substitute for trained judgment.

I would be remiss not to address the fear that AI might replace doctors (well, every human, for that matter). Currently, and at least for the foreseeable future, AI serves as an augmentative tool, excelling in tasks like data analysis, pattern recognition, and predictive modelling. What AI lacks is the empathy, judgment, and nuanced understanding inherent to the clinical practice of medicine.

AI has already made significant strides in various domains of medicine. For instance, in radiology, advanced algorithms analyze imaging data to identify abnormalities with remarkable speed and precision. This not only helps prioritize urgent cases but also reduces diagnostic errors. In pathology, machine learning models assist in detecting features in tissue samples that may indicate cancer or other diseases and significantly enhance diagnostic accuracy. Predictive analytics has enabled AI systems to forecast patient outcomes, such as the likelihood of hospital readmission or complications, allowing for more proactive care planning. Robotic systems powered by AI provide precision and guidance during complex surgical procedures while minimizing risks and improving outcomes. In drug discovery, AI accelerates the identification of potential drug candidates, which is revolutionizing the research and development process by making it faster and more cost-effective.

While the potential of AI in healthcare is immense, there are significant challenges to address as well. Bias in algorithms is one of the most pressing issues. AI systems can perpetuate or amplify biases present in the training data, leading to unequal outcomes for different patient populations. For example, if training data disproportionately represents certain demographics, predictions or recommendations could unfairly favour these groups and/or disadvantage others. Addressing bias requires careful curation of training datasets and ongoing monitoring of AI outputs.
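This effect is easy to reproduce in simulation. In the sketch below (every number is invented for illustration), a single biomarker threshold is tuned on a dataset in which group A makes up 90% of patients. Because group B's healthy baseline differs, the threshold that works well for group A systematically misclassifies group B.

```python
import random

random.seed(1)

def sample(group, diseased, n):
    """Draw n synthetic biomarker values for one group and disease status."""
    base = 10.0 if group == "A" else 13.0  # group B's healthy baseline is higher
    shift = 4.0 if diseased else 0.0       # disease raises the biomarker
    return [(random.gauss(base + shift, 1.0), diseased) for _ in range(n)]

# Training data: 90% group A, 10% group B
train = sample("A", False, 450) + sample("A", True, 450) + \
        sample("B", False, 50) + sample("B", True, 50)

# "Train" the model: place the threshold midway between the mean healthy
# and mean diseased values -- a mean dominated by group A.
healthy = [v for v, d in train if not d]
diseased = [v for v, d in train if d]
threshold = (sum(healthy) / len(healthy) + sum(diseased) / len(diseased)) / 2

def error_rate(group):
    """Fraction of a fresh test set for this group that is misclassified."""
    test = sample(group, False, 500) + sample(group, True, 500)
    wrong = sum((v > threshold) != d for v, d in test)
    return wrong / len(test)

print(f"group A error: {error_rate('A'):.2f}, group B error: {error_rate('B'):.2f}")
```

The model is not malicious; it simply optimized for the data it was shown, which is why dataset curation and per-group monitoring of outputs matter.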

Data privacy is another critical concern. The use of patient data to train AI models raises questions about confidentiality and security. Ensuring that sensitive health information remains protected while enabling its use for algorithm development is a delicate balance, necessitating robust encryption methods, de-identification protocols, and stringent data governance practices.

Overreliance on AI, especially given how easily accessible these algorithms now are for day-to-day use, can be a problem as well. Everyone using AI, including clinicians, must retain and exercise their own critical thinking and ensure AI remains a supplement to, not a substitute for, human judgment. Ensuring clinicians retain ownership of decisions is key to maintaining trust in the physician-patient relationship.

Regulation and oversight are also paramount. Ensuring the safety and efficacy of AI tools requires comprehensive regulatory frameworks. As AI becomes more integrated into healthcare, clear guidelines must address issues like accountability, liability, and validation to ensure these systems benefit patients without causing harm. We are already behind in this respect, and regulatory bodies need to catch up quickly before we encounter bigger problems in this arena.

AI-integrated healthcare is not distant anymore; it is happening now. For medical professionals, understanding the fundamentals of AI and its applications is no longer optional. As these tools become increasingly prevalent, it is essential to remain informed, critical, and adaptable. By leveraging AI responsibly and ethically, we can enhance patient care, improve outcomes, and ensure that technology serves as a partner, not a replacement, in the practice of medicine. Oh, and don’t forget to say “thank you” when ChatGPT does a good job at something; it’s never too early to get on the good side of our (very distant) future robot overlords!

Ehsan Misaghi

Ehsan Misaghi is an MD/PhD student at the University of Alberta and a clinician-scientist-innovator-in-training. Ehsan is passionate about combining scientific research, novel technologies, and advocacy to improve patient-oriented and patient-centred care. Ehsan's work spans a wide range of areas, from understanding basic genetic mechanisms to the application of artificial intelligence in healthcare. As part of the website team at the AMA advocacy committee, he is contributing to our understanding of the applications of disruptive technologies in healthcare.
