
AI Terms Physicians Should Know

6 January 2026 By SwissMed AI


This glossary summarises key AI-related terms that are frequently used in clinical practice, research, and healthcare software descriptions. The aim is a concise, technically sound orientation without unnecessary technical detail.


LLMs (Large Language Models)

  • Generate text by predicting statistically likely word (token) sequences, not by understanding meaning.

  • Models such as ChatGPT, Claude, or Gemini are general-purpose language models, not medical systems.
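The "statistically likely sequences" point can be made concrete with a toy sketch. This is not how a real LLM works internally (those use neural networks over billions of parameters), but it shows the core idea: the next word is chosen by frequency, not by comprehension. The corpus and function names are illustrative.

```python
from collections import Counter, defaultdict

# Toy illustration only: predict the next word purely from how often
# it followed the previous word in the "training" text.
corpus = "the patient reports chest pain . the patient reports dizziness .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Pick the statistically most frequent continuation - no understanding involved.
    return following[word].most_common(1)[0][0]

print(most_likely_next("patient"))  # -> "reports"
```

The model emits "reports" after "patient" simply because that pairing was frequent in its data; a real LLM does the same at vastly larger scale.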


ML (Machine Learning)

  • Learns patterns from existing data and therefore inherits errors and biases present in those data.

  • Performs only as well as the data on which it was trained.


NLP (Natural Language Processing)

  • Enables software to recognise, analyse, and structure unstructured medical text.

  • Forms the basis for allowing computers to “read” clinical notes and extract relevant information, for example for coding, quality analysis, or searching large text corpora.
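A minimal sketch of what "extracting relevant information" can mean in practice. The example note and the pattern are illustrative; production NLP pipelines use far more robust methods than a single regular expression.

```python
import re

# Hypothetical free-text note; pattern pulls a drug name and dose from it.
note = "Patient started on Metoprolol 50 mg twice daily for hypertension."

match = re.search(r"(?P<drug>[A-Z][a-z]+)\s+(?P<dose>\d+\s*mg)", note)
if match:
    # Structured output suitable for coding or quality analysis.
    print(match.group("drug"), match.group("dose"))  # Metoprolol 50 mg
```

The point is the transformation: unstructured prose in, structured fields (drug, dose) out.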


ACI (Ambient Clinical Intelligence)

  • Captures conversations between clinicians and patients and automatically generates draft documentation.

  • The generated text often appears highly plausible but may contain subtle inaccuracies.

  • Commonly used to pre-draft anamnesis and progress notes.


CDSS (Clinical Decision Support Systems)

  • Flag potential risks with alerts or warnings; they support clinical decisions but do not make them.

  • Typical use cases include drug-drug interaction warnings, risk alerts, or reminder systems.
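A drug-drug interaction check can be sketched as a lookup over medication pairs. The interaction table and warnings below are illustrative, not clinical guidance; real CDSS draw on curated pharmacological databases.

```python
# Hypothetical interaction table: unordered pairs mapped to a warning.
INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "Increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "Risk of myopathy",
}

def check_interactions(medication_list):
    """Return a warning for every known interacting pair on the list."""
    alerts = []
    meds = [m.lower() for m in medication_list]
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            warning = INTERACTIONS.get(frozenset({a, b}))
            if warning:
                alerts.append(f"{a} + {b}: {warning}")
    return alerts

print(check_interactions(["Warfarin", "Ibuprofen", "Amlodipine"]))
# -> ['warfarin + ibuprofen: Increased bleeding risk']
```

Note that the function only returns alerts; acting on them remains the clinician's decision, which is exactly the CDSS division of labour described above.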


Hallucination

  • Refers to the generation of factually incorrect content that is linguistically convincing and therefore difficult to detect.

  • Example:

“According to the ESC Guideline 2023, ivabradine is recommended as first-line therapy in stable angina.”

It sounds credible, but it is entirely fabricated.


Algorithmic Bias

  • Occurs when training data underrepresent or distort certain patient groups.

  • Leads to systematically poorer model performance for specific populations.
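Why overall performance figures can hide bias: evaluating per subgroup makes the disparity visible. The numbers below are fabricated purely for illustration.

```python
from collections import defaultdict

# Fabricated evaluation records: (patient_group, prediction_correct)
results = [
    ("group_A", True), ("group_A", True), ("group_A", True), ("group_A", False),
    ("group_B", True), ("group_B", False), ("group_B", False), ("group_B", False),
]

per_group = defaultdict(list)
for group, correct in results:
    per_group[group].append(correct)

for group, outcomes in sorted(per_group.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(group, accuracy)
# Overall accuracy is 50%, yet group_A gets 75% and group_B only 25%.
```

A single headline metric would report 50% and conceal that the model systematically fails one population, which is the practical face of algorithmic bias.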


Model Drift / Dataset Shift

  • Describes declining model performance as data or clinical contexts change over time.

  • Causes include new guidelines, shifting patient populations, or changes in clinical workflows.


Human-in-the-Loop

  • Means that AI outputs are reviewed by a human before being used.

  • Considered a minimum standard for clinically relevant AI applications.
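The minimum standard can be sketched as a hard gate in software: an AI draft cannot become part of the record without explicit clinician approval. Function and parameter names are hypothetical.

```python
# Minimal human-in-the-loop sketch: finalising an AI draft without
# clinician approval is impossible by construction.
def finalise_note(ai_draft: str, clinician_approved: bool) -> str:
    if not clinician_approved:
        raise ValueError("Draft requires clinician review before use.")
    return ai_draft

try:
    finalise_note("Draft progress note ...", clinician_approved=False)
except ValueError as err:
    print(err)  # Draft requires clinician review before use.
```

The design choice matters: review is enforced by the code path itself, not left to a workflow convention that can be skipped.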


Data Protection and Regulation (Switzerland)

  • revDSG (Revised Federal Act on Data Protection): Governs how patient data may be collected, processed, stored, and shared.

  • DSFA (Data Protection Impact Assessment): Required when introducing systems that pose increased risks to patients’ rights, particularly AI tools processing sensitive health data.

  • GDPR (General Data Protection Regulation): Relevant when EU-based providers, servers, or cross-border data processing are involved.