AI treatment plans aren’t the bottleneck — decision consistency is

Why faster AI and prettier plans don’t solve inconsistency, risk, or scale in dental clinics

Over the past two years, artificial intelligence has moved rapidly into dental clinics. Treatment plans can now be generated in seconds. Clinical findings are converted into polished patient-facing PDFs. Documentation that once consumed chairside or after-hours time has become dramatically faster.

From a speed and presentation perspective, this is real progress.

Yet many clinic owners, operators, and senior clinicians are quietly reporting the same frustration: despite faster planning and better presentation, the underlying problems inside clinics haven’t disappeared.

Treatment plans are still inconsistent. Decisions still vary between clinicians. Cases still stall before treatment begins. And scaling beyond individual expertise remains difficult.

The issue is not that AI tools don’t work. The issue is that plan generation and clinical decision-making are not the same problem.

AI solved generation — not decisions

Most current AI systems in dentistry are excellent at generation. They summarize findings, propose options, and structure plans based on input data. They reduce manual effort and improve clarity compared to handwritten notes or fragmented documentation.

But generation answers a different question than the one clinics actually struggle with.

AI answers: “What could be done?”

Clinics struggle with: “Which option should we choose, why, and how do we remain consistent across cases and clinicians?”

That distinction matters more than it seems.

Generation ≠ decision-making

A treatment plan is not just a list of procedures. It is a decision embedded in a broader context of:

  • Risk tolerance
  • Clinical philosophy
  • Patient expectations
  • Long-term maintenance
  • Operational constraints
  • Legal and reputational exposure

Two clinicians can receive the same AI-generated plan and make different decisions about what to present, prioritize, or defer. Neither is necessarily wrong — but the clinic now carries variation that is rarely visible until something goes wrong.

AI tools do not resolve this variation. They often amplify it by producing plausible options without enforcing decision logic.
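
What “enforcing decision logic” could look like is easiest to see in a small sketch. The Python fragment below is purely illustrative: the field names, the example rule, and the data shape are assumptions, not any vendor’s API. It shows an explicit, clinic-owned rule layer applied on top of whatever options a generation tool proposes.

    from typing import Callable, Dict, List

    Rule = Callable[[Dict], bool]

    def review_options(options: List[Dict], clinic_rules: List[Rule]) -> List[Dict]:
        """Annotate each generated option with the clinic rules it fails,
        so exclusions are explicit rather than left to individual judgment."""
        reviewed = []
        for option in options:
            failed = [rule.__name__ for rule in clinic_rules if not rule(option)]
            reviewed.append({**option, "excluded_by": failed})
        return reviewed

    def perio_stabilisation_first(option: Dict) -> bool:
        # Hypothetical rule: elective restorative phases require documented
        # periodontal stability first. Field names are illustrative only.
        return option.get("phase") != "restorative" or option.get("perio_stable", False)

    plans = review_options(
        [{"label": "Crown 36 now", "phase": "restorative", "perio_stable": False}],
        [perio_stabilisation_first],
    )
    # plans[0]["excluded_by"] -> ["perio_stabilisation_first"]

The point is not the code itself but where the rules live: in the clinic’s hands, versioned and visible, rather than implied by whichever clinician happens to be reviewing the output.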

A clinic vignette: when AI makes inconsistency visible

Consider a multi-chair general practice that recently adopted an AI-assisted planning and presentation tool across all clinicians. Within weeks, management noticed something unexpected.

Two patients with nearly identical profiles — moderate periodontal findings, early carious lesions, and signs of erosive wear — were seen by different clinicians. Both plans were generated using the same AI system. The layouts were clean. The language was professional.

The documentation looked standardized. Yet the substance of the plans differed markedly.

One plan emphasized immediate periodontal stabilization and conservative monitoring. The other prioritized restorative treatment with a more aggressive intervention sequence. Case acceptance, chairtime estimates, and projected costs varied significantly. No clear clinical error was identified. Each plan could be defended.

What the AI exposed was not a software flaw — but the absence of a shared decision framework behind those choices.

Where inconsistency appears — before treatment even begins

Most treatment failures are not technical failures. They occur before treatment starts. Clinic operators recognize these patterns immediately:

  • Cases accepted but never scheduled
  • Patients pausing due to unclear priorities
  • Replanning the same case multiple times
  • Internal disagreement on the “best” approach
  • Clinicians second-guessing their own recommendations

These are not problems of skill. They are problems of decision coherence. When decision-making remains implicit and experience-driven, clinics depend on personal authority rather than shared structure. That works at small scale — and quietly breaks as complexity increases.

Why operators and DSOs feel this first

Individual clinicians can often function comfortably with implicit reasoning. Operators cannot. As clinics grow, operators face uncomfortable questions:

  • Why do similar cases produce different plans?
  • Why do some clinicians escalate risk faster than others?
  • Why does standardization feel restrictive rather than enabling?
  • Why does adding talent sometimes increase friction instead of performance?

These are not software problems. They are decision architecture problems. AI makes planning faster, but it does not make reasoning visible, comparable, or repeatable.

The missing layer: decision consistency

Decision consistency does not mean uniform treatment. It means that differences are intentional, explainable, and defensible. A consistent clinic can answer:

  • Why one option was chosen over alternatives
  • What risks were accepted or deferred
  • How a case aligns with clinic strategy
  • Where clinical judgment overrides automation

Without this structure, clinics rely on reassurance — not reasoning. Polished PDFs may calm patients. AI-generated plans may look confident. But none of this guarantees that decisions are aligned, scalable, or safe over time.
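
One way to picture this structure, as an illustration only (a sketch, not a product, and every field name below is an assumption), is a decision record that captures the answers to those four questions alongside the plan itself. Records like this are what make decisions comparable across clinicians rather than merely presentable.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class DecisionRecord:
        case_id: str
        chosen_option: str              # e.g. "periodontal stabilisation first"
        alternatives: List[str]         # options considered and set aside
        rationale: str                  # why this option over the alternatives
        accepted_risks: List[str]       # risks knowingly accepted or deferred
        strategy_tag: str               # link to the clinic's stated philosophy
        overrides_ai_suggestion: bool = False
        override_reason: Optional[str] = None

    def unexplained_variation(a: DecisionRecord, b: DecisionRecord) -> List[str]:
        """Compare records for two clinically similar cases and list the
        differences that lack an explicit, stated rationale."""
        issues: List[str] = []
        if a.chosen_option != b.chosen_option and a.strategy_tag == b.strategy_tag:
            issues.append("Different options chosen under the same clinic strategy.")
        for record in (a, b):
            if record.overrides_ai_suggestion and not record.override_reason:
                issues.append(record.case_id + ": AI suggestion overridden without a stated reason.")
        return issues

Whether this lives in software or on paper matters less than the fact that the reasoning is written down in a form that can be compared.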

From automation to accountability

Dentistry is entering a post-AI phase faster than many realize. In this phase, plan generation is assumed, speed is expected, and presentation is table stakes.

The differentiator becomes how decisions are evaluated, compared, and repeated. AI can generate options. Only structured reasoning creates accountability. As regulatory scrutiny increases and clinics scale, the ability to explain why a decision was made will matter as much as what was done.

Decision consistency is not a luxury. It is the infrastructure that allows AI to be used safely, intelligently, and at scale.

Closing thought

The question dentistry now faces is not: “How do we generate better treatment plans?”

But rather: “How do we make better decisions — consistently, defensibly, and at scale?”

AI solved one layer. The next bottleneck is already here.


About the author

Dr. Sami Savolainen

Dr. Sami Savolainen is a dentist and healthcare systems thinker working at the intersection of clinical decision-making, documentation, and risk management. With experience in clinical practice and system design, he focuses on how planning structures determine safety, trust, and scalability in modern healthcare.

Augmented intelligence vs. artificial intelligence: Redefining AI in dentistry

Artificial intelligence is everywhere today — woven into nearly every aspect of life and conversation. But what is it exactly, what can it do, and more importantly, how can it serve you and your patients?

To understand its role, we should start with definitions. “Artificial” refers to something made by humans rather than occurring naturally, often designed to simulate the real thing. “Intelligence” describes the capacity to learn, reason, adapt, and solve problems — abilities central to understanding and navigating the world. The term “artificial” sometimes carries negative undertones, suggesting something insincere or unnatural, while the word “augmented” implies enhancement — making something more complete, effective, or capable.

“Artificial intelligence” (AI) is a branch of computer science devoted to developing systems that perform cognitive tasks typically requiring human intellect, such as learning, reasoning, and decision-making. These systems process data, identify patterns, and make predictions that often emulate human thought.

“Augmented intelligence” (AI), by contrast, is human-centric, focusing on collaboration between humans and machines. Instead of replacing human expertise, it enhances it — using advanced analytics and vast computational power to support better judgments and outcomes.

In dentistry, this distinction is essential. Our profession thrives on human empathy, intuition, and ethical care. Therefore, dentistry should embrace an augmented, rather than purely artificial, approach — one that integrates machine learning and diagnostic data with the clinician’s judgment and compassion.

Yet this integration raises vital ethical questions. Borrowing inspiration from Isaac Asimov’s “Three Laws of Robotics,” we might propose similar principles for dental AI:

  1. It must never harm a patient;
  2. it must follow human direction within that constraint; and
  3. it must protect its data integrity without violating the first two rules.

Augmented intelligence is a powerful addition to modern dentistry — one that, used ethically and wisely, will amplify human intelligence, empowering both practitioner and patient.


Sincerely,
George Freedman BSc, DDS, FIADFE, DiplABD, FAACD, FASDA, FPFA on behalf of the Artificial Intelligence Journal of Medicine and Dentistry (AIMEDENT Journal)
freedman6469@rogers.com

AI and diagnostic safety: Anchoring bias in a case of lip melanoma

Clinical experience has limits, even in skilled hands. Clinical expertise remains a cornerstone of medical and dental practice. Years of training refine pattern recognition, inform diagnostic reasoning, and enable clinicians to navigate uncertainty in complex clinical environments. However, experience alone does not render clinicians immune to diagnostic error, particularly when disease presents with atypical features that fall outside classical descriptions. Diagnostic reasoning is also shaped by cognitive biases that can unconsciously influence clinical interpretation over time, potentially delaying more definitive investigation.

A Routine Referral That Wasn’t Routine

A 60-year-old male patient with a long-standing history of smoking was referred for what appeared to be a routine dental implant consultation. The referral did not raise immediate concern. However, clinical examination revealed a lesion on the lower lip that the patient reported had been appearing and resolving intermittently for nearly two years.

During that period, the patient had been assessed by both a medical doctor and a dentist. Because of its fluctuating presentation, the lesion was diagnosed as herpes simplex and managed conservatively. No biopsy was undertaken. Over time, the lesion persisted and increased in size, and the patient became increasingly self-conscious, even wearing a mask in public to conceal its appearance.

When a Familiar Diagnosis Becomes a Blind Spot

On clinical assessment, the lesion’s characteristics were inconsistent with a benign viral condition. Its location, persistence, and the patient’s risk profile prompted urgent referral for biopsy. Histopathological analysis confirmed the diagnosis of lip melanoma, a rare but aggressive malignancy. The head and neck surgeon later indicated that had the diagnosis been further delayed by six to twelve months, the prognosis could have been significantly worse.

This case provides a clear example of anchoring bias, in which the initial diagnosis of herpes simplex influenced all subsequent clinical interpretations despite evolving evidence to the contrary. Anchoring bias is among the most frequently discussed cognitive biases in healthcare decision-making, affecting clinicians’ ability to revisit or revise diagnostic hypotheses when faced with new or discordant information.

Fig. 1 – Pre-treatment

Fig. 2 – Post-treatment

Where AI Could Have Changed the Timeline

AI has the potential to intervene precisely at vulnerable points in the diagnostic process by providing objective pattern recognition that is independent of prior clinical assumptions. In dermatology and related domains, AI-based image analysis systems have demonstrated performance levels comparable to or exceeding those of experienced clinicians when trained on large, well-curated datasets.

In this case, while AI would not replace histopathological diagnosis—the gold standard—it could have flagged the lesion as atypical and prompted earlier biopsy referral. This earlier warning might have reoriented clinical reasoning sooner and reduced diagnostic delay.
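
As a purely hypothetical sketch (the function, the score threshold, and the persistence trigger are assumptions for illustration, not a validated clinical tool), such a safety net could combine an image model’s atypicality score with simple contextual triggers, so that a persistent lesion in a high-risk patient escalates toward biopsy even when the working diagnosis feels familiar.

    def biopsy_referral_flag(model_atypicality: float,
                             weeks_persistent: int,
                             smoker: bool,
                             score_threshold: float = 0.3) -> bool:
        """Flag a lip lesion for biopsy referral when an image model scores it
        as atypical, or when context alone (persistence plus risk factors)
        warrants re-evaluation regardless of the working diagnosis."""
        contextual_concern = weeks_persistent >= 3 and smoker
        return model_atypicality >= score_threshold or contextual_concern

    # In the case described, a lesion persisting for roughly two years in a
    # long-term smoker trips the contextual check even with a borderline score.
    biopsy_referral_flag(model_atypicality=0.2, weeks_persistent=104, smoker=True)  # True

The value of such a check is not its sophistication but its indifference to the anchor: it keeps asking the question the initial diagnosis has stopped asking.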

Importantly, recent research shows that diversity and dataset quality are critical to AI performance. Models trained predominantly on lighter skin tones may underperform on other populations, underscoring the need for equitable data representation.

AI as a Clinical Safety Net

AI does not undermine clinical autonomy; instead, it serves as a safeguard against diagnostic inertia and cognitive blind spots. By introducing an objective analytical perspective, AI supports clinicians in identifying patterns that may be subtle or atypical, especially in early disease presentations or high-risk patient profiles.

AI functions as a “second set of eyes,” complementing human judgment and prompting re-evaluation when visual or contextual features do not align with benign expectations. This aligns with broader evidence that AI systems can enhance lesion classification and risk stratification when integrated into clinical workflows.

Seeing Risk Before It Becomes Obvious

This case raises important questions for contemporary clinical practice. How many serious conditions are delayed because they resemble common, low-risk presentations? How often does initial diagnostic familiarity reduce ongoing vigilance?

While early detection remains crucial for improving outcomes, it is often early diagnostic doubt, supported by objective tools such as AI, that makes timely intervention possible.

The future of healthcare will not be defined by clinicians or algorithms working in isolation. Human clinical reasoning, grounded in experience, context, and ethical judgment, must be augmented by AI’s capacity for large-scale pattern recognition and resistance to cognitive bias. Together, these strengths create a more resilient diagnostic framework.

In the case described, human clinical judgment ultimately altered the patient’s outcome. With AI integrated earlier into the diagnostic pathway, that judgment could have been supported much sooner.

References

  1. Karimzadhagh S. et al. (2026). Performance of Artificial Intelligence in Skin Cancer Detection. International Journal of Dermatology, 65(1), 69–85.
  2. Elumalai K. (2024). Improving oral cancer diagnosis with artificial intelligence. Oral Oncology Reports, 11, 100624.
    https://doi.org/10.1016/j.oor.2024.100624
  3. Górecki S., Tatka A., & Brusey J. (2025). Artificial Intelligence in Melanoma Diagnosis. Cancers, 17(24), 3896.
  4. Ly D.P., Shekelle P.G., & Song Z. (2023). Evidence for anchoring bias during physician decision-making. JAMA Internal Medicine, 183(8), 818–823.
  5. Semerci Z.M. et al. (2024). The role of AI in early diagnosis of head and neck skin cancers. Diagnostics, 14(14), 1477.
    https://doi.org/10.3390/diagnostics14141477
  6. Papachristou P. et al. (2024). AI decision support for melanoma detection in primary care. British Journal of Dermatology, 191(1), 125–133.
    https://doi.org/10.1093/bjd/ljae021

About the Author

Dr. Shervin Molayem

Dr. Shervin Molayem is a California-based periodontist and co-founder of Trust AI, the first AI-native patient management system in dentistry. His work focuses on the oral-systemic connection, salivary diagnostics, and multimodal AI treatment planning. Dr. Molayem also serves as a board member, advisor, and investor in dental technology companies, helping accelerate innovation and modernize clinical care.
