
Why AI Scribes Hallucinate Medication Dosages (And What We Do Differently)

Canybec Sulayman, PMHNP-BC, MBA March 1, 2026 8 min read

If you've used an AI scribe in psychiatric practice, you've probably caught it. Lamictal 200mg becomes 20mg. Seroquel 25mg becomes 250mg. Adderall XR 30mg gets documented as Adderall IR 30mg. These aren't minor formatting errors — they're patient safety events waiting to happen.

The Problem Is Bigger Than You Think

Browse r/PMHNP or r/Psychiatry and you'll find thread after thread of prescribing clinicians reporting the same issue: general AI scribes flip medication dosages, confuse formulations, and occasionally invent medications that don't exist.

This isn't a bug — it's a fundamental limitation of how general-purpose AI scribes work. They're trained on broad medical datasets covering 96+ specialties. Psychiatric pharmacology is a small fraction of their training data, so the model has weak priors on psychiatric medication names, typical dosage ranges, and formulation differences.

Real Examples from the Field

  • Dosage flipping: Lamictal 200mg transcribed as 20mg (10x underdose)
  • Formulation confusion: Adderall XR 30mg documented as Adderall IR 30mg (completely different pharmacokinetics)
  • Dosage inflation: Seroquel 25mg PRN transcribed as 250mg (10x overdose)
  • Generic/brand confusion: Bupropion XL 300mg documented as Wellbutrin SR 300mg (different release mechanism)
  • Fabrication: AI "corrects" an unusual but intentional off-label dose to a standard dose it's seen more frequently

Why This Happens: The Technical Explanation

Large language models generate text by predicting the most likely next token. When a general scribe encounters "Lamictal" followed by a number, it assigns probability weights to various dosages. If its training data contains more instances of "Lamictal 25mg" (a common starting dose) than "Lamictal 200mg" (a therapeutic dose), it may "correct" the dosage toward what it sees as more statistically likely.

This is the core problem: statistical likelihood is not clinical accuracy. A general AI doesn't understand that this specific patient has been titrated to 200mg over six months. It just sees a number that looks less common and nudges it toward the mean.
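A toy decoder makes this failure mode concrete. This is not any real scribe's architecture; the scores below are invented for illustration. It shows how even weak statistical priors can outvote what the clinician actually said:

```python
# Toy decoder (not any real model): weak acoustic evidence combined with a
# strong language-model prior "corrects" a rare-but-intentional dose.

# Hypothetical scores: what the audio sounded like...
acoustic = {"200mg": 0.6, "20mg": 0.3, "25mg": 0.1}
# ...and how often each dose string followed "Lamictal" in training text.
prior = {"25mg": 0.70, "100mg": 0.15, "200mg": 0.08, "20mg": 0.05}

def decode(acoustic_scores: dict, prior_scores: dict) -> str:
    # Score = acoustic likelihood x statistical prior; argmax wins.
    return max(acoustic_scores,
               key=lambda d: acoustic_scores[d] * prior_scores.get(d, 0.0))

decode(acoustic, prior)  # returns "25mg" -- the common starting dose
                         # overrides the 200mg the clinician dictated
```

The audio clearly favored "200mg", but the prior for "25mg" is large enough to flip the output. Statistical likelihood won; clinical accuracy lost.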

The problem compounds in psychiatry because:

  • Dose ranges are wide — Seroquel is prescribed from 25mg (sleep) to 800mg (psychosis). A 10x error in either direction is clinically plausible to an AI.
  • Formulations matter enormously — XR vs IR, SR vs XL. These aren't interchangeable, but AI treats them as variants of the same word.
  • Off-label dosing is common — Psychiatry uses off-label doses more than almost any other specialty. AI trained on standard dosing "corrects" these.
  • Polypharmacy increases error surface — A patient on 5+ medications means 5+ opportunities for dosage hallucination per note.

What Psynopsis Does Differently

Psynopsis was built by a practicing PMHNP who has caught these errors in his own documentation. The solution isn't to "try harder" with a general model — it's to build a system that understands psychiatric pharmacology natively.

1. Psychiatric-Specific AI Training

Our AI is trained with deep emphasis on psychiatric medications, their dosage ranges, formulations, and titration protocols. It knows that Lamictal 200mg is a standard therapeutic dose, not an error. It knows the difference between Adderall XR and IR. It won't "correct" your intentional off-label prescribing.

2. Dosage Validation Against Known Ranges

When the AI generates a medication dosage, it cross-references against known therapeutic ranges for psychiatric medications. If the transcribed dose falls outside expected parameters, it flags it for review rather than silently generating an error.

3. Formulation Awareness

XR, IR, SR, XL, ER, DR, ODT — these matter. Our system maintains awareness of available formulations for each medication and preserves the specific formulation you documented, not the most statistically common one.
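A minimal sketch of what "preserve the dictated formulation" can mean in practice. The formulation lists and function name are assumptions for illustration, not a real product database:

```python
import re

# Illustrative formulation lists -- not a real drug database.
KNOWN_FORMULATIONS = {
    "bupropion": {"IR", "SR", "XL"},
    "quetiapine": {"IR", "XR"},
}

def extract_formulation(drug: str, dictated: str) -> str:
    """Keep the formulation the clinician actually said; never substitute
    a more common one, and flag the note when none was dictated."""
    match = re.search(r"\b(IR|SR|XL|XR|ER|DR|ODT)\b", dictated, re.IGNORECASE)
    if not match:
        return "REVIEW: no formulation dictated"
    form = match.group(1).upper()
    if form not in KNOWN_FORMULATIONS.get(drug.lower(), set()):
        return f"REVIEW: {form} not a known {drug} formulation"
    return form

extract_formulation("bupropion", "bupropion XL 300mg daily")  # "XL"
extract_formulation("bupropion", "bupropion 300mg daily")     # flagged
```

Note the two escape hatches: a missing or unrecognized formulation produces a review flag rather than a silent default to the statistically common variant.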

4. Post-Session Dictation Mode

Many medication errors in ambient scribes come from mishearing during fast-paced sessions. Psynopsis offers post-session dictation — you dictate a clear 2-minute summary after the patient leaves. No ambient noise, no crosstalk, no mishearing "twenty" as "two hundred."

What You Should Do Right Now

Whether you use Psynopsis, another scribe, or no AI at all:

  1. Always verify medication sections. Every time. No exceptions. AI-generated medication lists should be treated as drafts, not final documentation.
  2. Check formulations, not just drug names. "Bupropion 300mg" is incomplete. Is it XL? SR? IR? The AI may have dropped the formulation.
  3. Compare against last visit. If the patient was on Lamictal 200mg last visit and the AI scribe says 20mg today, something went wrong.
  4. Consider the tool's training. A scribe built for 96+ specialties will always be weaker in psychiatric pharmacology than one built exclusively for it.
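Check #3 above is easy to automate for yourself regardless of which scribe you use. A rough heuristic, with the threshold chosen arbitrarily for illustration:

```python
# Sketch of a last-visit comparison check (threshold is an assumption,
# not a validated clinical rule).
def dose_changed_suspiciously(last_visit_mg: float, today_mg: float,
                              factor: float = 5.0) -> bool:
    """Flag dose jumps of >= `factor`x in either direction, which usually
    signal a transcription error (a dropped or added zero), not a titration."""
    hi, lo = max(last_visit_mg, today_mg), min(last_visit_mg, today_mg)
    return lo > 0 and hi / lo >= factor

dose_changed_suspiciously(200, 20)   # True: classic dropped-zero error
dose_changed_suspiciously(100, 150)  # False: plausible titration step
```

A symmetric ratio test catches both directions of the error: the 200mg-to-20mg underdose and the 25mg-to-250mg overdose.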

The Stakes Are Too High for "Close Enough"

In dermatology, if an AI scribe gets a topical cream concentration wrong, the consequence is mild. In psychiatry, a 10x medication dosage error in a progress note can lead to an incorrect prescription fill, a patient safety event, and a malpractice claim.

That's why we built Psynopsis to go deep in one specialty rather than wide across many. When documentation accuracy directly affects patient safety, "close enough" isn't enough.

Canybec Sulayman, PMHNP-BC, MBA

Founder of Psynopsis. Practicing psychiatric NP and creator of the Diagnostic Psychiatry methodology. Licensed in AZ, CA, and NM.

Ready for an AI scribe that gets psychiatry right?

Medication accuracy isn't optional. Try Psynopsis free.

