What Your Patient Will Not Tell You Anymore
Ambient AI scribes reduce documentation time — but change what patients are willing to say. What clinicians need to know about consent, self-censorship, and clinical risk.
Ambient AI scribes are tools that listen to the clinical conversation in real time and automatically generate a draft note, with no typing required. Their adoption in clinical settings is accelerating rapidly, and the case for reducing physician burnout and documentation time is strong [1,2]. What’s less discussed is what they change on the other side of the room: the patient side, and not always for the better.
⏱️ Reading time: ~3 minutes
🔧 What Scribes Actually Deliver
The evidence is real but more modest than the commercial narrative suggests:
The largest RCT to date (238 outpatient physicians, 14 specialties) tested two scribes, Microsoft Dragon Ambient eXperience (DAX) Copilot and Nabla; one reduced documentation time by ~10%, while the other showed no significant effect [1]
Both reduced burnout-related metrics
A pre-post study (46 clinicians, 5 weeks) found documentation time dropped from 10.3 to 8.2 minutes per appointment, and after-hours documentation fell by 30%, though without a control group [2]
One study found only 34 seconds saved per note on average — with significant variability between physicians [6]
Note accuracy: “correct 80–85% of the time” for subjective sections — meaning edits are still needed before every sign-off [2]. AI hallucination rates are estimated at 1–3% [6]
Consistently reported benefit: less divided attention, more eye contact with patients [2]
What hasn’t been shown yet: any improvement in patient outcomes.
🧠 What Patients Know (At the moment, not much)
Fewer than 1 in 3 patients are aware AI scribes already exist in clinical settings [3]
When informed, ~60% express reluctance
74.9% report initial comfort — but this drops to 55.3% once they’re told about AI features, data storage, and corporate involvement [4]
Privacy concerns are the strongest predictor of rejection
In practice, consent often means a notice in the waiting room or a brief verbal mention. Whether patients understand that the entire conversation is being transcribed is a different question.
🚨 The Self-Censorship Problem
A 2025 study found that when patients know an AI is recording, a significant share would withhold information on [4]:
Mental health — 35%
Sexual health — 40.8%
Illicit activity — 51.5%
The clinical risk is direct. Ambient scribes are adopted partly to free up physician attention for the patient. But if patients self-censor on exactly the topics that require openness, the consultation becomes more documented and less informative at the same time.
💡 Key Takeaways for Your Practice
1. A notice on the door is not enough. A brief verbal opt-in at the start (“There’s a tool recording today to help with my notes. Is that okay?”) is recommended. Ideally, written consent should also be obtained, for example as part of the data privacy statement required at the start of treatment.
2. Don’t wait for patients to volunteer sensitive information. When something is listening, patients are less likely to bring up mental health, sexual health, or substance use unprompted. Ask directly.
3. Reconsider ambient recording for sensitive consultations. In psychiatry, addiction medicine, and sexual health, the trade-off between documentation efficiency and patient disclosure deserves a deliberate decision, not a default.
4. Language matters. Tools trained predominantly on English perform less reliably in other languages [5], so in the Swiss context, where the patient population is linguistically diverse, accuracy is not a given.
5. Marginalised patients may pull back more. Patients who already approach healthcare with caution, such as migrants, asylum seekers, and undocumented patients, may self-censor more than average. This is worth keeping in mind before applying ambient recording across the board.
❓ What We Don’t Know Yet
The evidence on ambient scribes is still measuring the wrong endpoint. Reduced documentation time matters — but it’s a proxy. No study has yet linked ambient scribes to better patient outcomes. And the documentation gap runs deeper than self-censorship: one analysis found that approximately 50% of patient problems discussed verbally were never captured in the EHR at all [6] — regardless of whether AI was involved. The self-censorship data is largely self-reported and comes from small samples. We also don’t know whether patients become more comfortable over time as the technology becomes familiar, or whether the withholding effect persists — and quietly compounds.
📚 Sources
Lukac PJ, Turner W, Vangala S, et al. Ambient AI Scribes in Clinical Practice: A Randomized Trial. NEJM AI. 2025;2(12). https://pmc.ncbi.nlm.nih.gov/articles/pmid/41497288/
Duggan MJ, Gervase J, Schoenbaum A, et al. Clinician Experiences With Ambient Scribe Technology to Assist With Clinical Documentation. JAMA Netw Open. 2025;8(2):e2460637. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2830383
Chandrasekaran R, Moustakas E. Patient Attitudes Toward Ambient Artificial Intelligence Scribes in Clinical Care. J Am Med Inform Assoc. 2026;33(2):263–272. https://doi.org/10.1093/jamia/ocaf218
Lawrence K, Kuram VS, Levine DL, et al. Informed Consent for Ambient Documentation Using Generative AI in Ambulatory Care. JAMA Netw Open. 2025;8(7):e2522400. https://doi.org/10.1001/jamanetworkopen.2025.22400
Kakani P, Kilaru AS, Buntin MB. Implications of Artificial Intelligence–Powered Ambient Scribes. JAMA Health Forum. 2026;7(1):e256150. https://doi.org/10.1001/jamahealthforum.2025.6150
Topaz M, Peltonen LM, Zhang Z. Beyond human ears: navigating the uncharted risks of AI scribes in clinical practice. NPJ Digit Med. 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12460601/
Disclosure: No conflict of interest. Sources independently identified by SwissMedAI from peer-reviewed literature and published editorials.