Clinical experience has limits, even in skilled hands. Clinical expertise remains a cornerstone of medical and dental practice. Years of training refine pattern recognition, inform diagnostic reasoning, and enable clinicians to navigate uncertainty in complex clinical environments. However, experience alone does not render clinicians immune to diagnostic error, particularly when disease presents with atypical features that fall outside classical descriptions. Diagnostic reasoning is also shaped by cognitive biases that can unconsciously influence clinical interpretation over time, potentially delaying more definitive investigation.

A Routine Referral That Wasn’t Routine

A 60-year-old male patient with a long-standing history of smoking was referred for what appeared to be a routine dental implant consultation. The referral did not raise immediate concern. However, clinical examination revealed a lesion on the lower lip that the patient reported had been appearing and resolving intermittently for nearly two years.

During that period, the patient had been assessed by both a medical doctor and a dentist. Because of its fluctuating presentation, the lesion was diagnosed as herpes simplex and managed conservatively. No biopsy was undertaken. Over time, the lesion persisted and increased in size, and the patient became increasingly self-conscious, even wearing a mask in public to conceal its appearance.

When a Familiar Diagnosis Becomes a Blind Spot

On clinical assessment, the lesion’s characteristics were inconsistent with a benign viral condition. Its location, persistence, and the patient’s risk profile prompted urgent referral for biopsy. Histopathological analysis confirmed the diagnosis of lip melanoma, a rare but aggressive malignancy. The head and neck surgeon later indicated that had the diagnosis been further delayed by six to twelve months, the prognosis could have been significantly worse.

This case provides a clear example of anchoring bias, in which the initial diagnosis of herpes simplex influenced all subsequent clinical interpretations despite evolving evidence to the contrary. Anchoring bias is among the most frequently discussed cognitive biases in healthcare decision-making, affecting clinicians’ ability to revisit or revise diagnostic hypotheses when faced with new or discordant information.

Fig. 1 – Lip melanoma, pre-treatment

Fig. 2 – Lip melanoma, post-treatment

Where AI Could Have Changed the Timeline

AI has the potential to intervene precisely at vulnerable points in the diagnostic process by providing objective pattern recognition that is independent of prior clinical assumptions. In dermatology and related domains, AI-based image analysis systems have demonstrated performance levels comparable to or exceeding those of experienced clinicians when trained on large, well-curated datasets.

In this case, while AI would not replace histopathological diagnosis—the gold standard—it could have flagged the lesion as atypical and prompted earlier biopsy referral. This earlier warning might have reoriented clinical reasoning sooner and reduced diagnostic delay.

Importantly, recent research shows that diversity and dataset quality are critical to AI performance. Models trained predominantly on lighter skin tones may underperform on other populations, underscoring the need for equitable data representation.

AI as a Clinical Safety Net

AI does not undermine clinical autonomy; instead, it serves as a safeguard against diagnostic inertia and cognitive blind spots. By introducing an objective analytical perspective, AI supports clinicians in identifying patterns that may be subtle or atypical, especially in early disease presentations or high-risk patient profiles.

AI functions as a “second set of eyes,” complementing human judgment and prompting re-evaluation when visual or contextual features do not align with benign expectations. This aligns with broader evidence that AI systems can enhance lesion classification and risk stratification when integrated into clinical workflows.

Seeing Risk Before It Becomes Obvious

This case raises important questions for contemporary clinical practice. How many serious conditions are delayed because they resemble common, low-risk presentations? How often does initial diagnostic familiarity reduce ongoing vigilance?

Early detection remains crucial for improving outcomes, and early diagnostic doubt, supported by objective tools such as AI, is often what makes timely intervention possible.

The future of healthcare will not be defined by clinicians or algorithms working in isolation. Human clinical reasoning, grounded in experience, context, and ethical judgment, must be augmented by AI’s capacity for large-scale pattern recognition and resistance to cognitive bias. Together, these strengths create a more resilient diagnostic framework.

In the case described, human clinical judgment ultimately altered the patient’s outcome. With AI integrated earlier into the diagnostic pathway, that judgment could have been supported much sooner.

References

  1. Karimzadhagh S. et al. (2026). Performance of Artificial Intelligence in Skin Cancer Detection. International Journal of Dermatology, 65(1), 69–85.
  2. Elumalai K. (2024). Improving oral cancer diagnosis with artificial intelligence. Oral Oncology Reports, 11, 100624. https://doi.org/10.1016/j.oor.2024.100624
  3. Górecki S., Tatka A., & Brusey J. (2025). Artificial Intelligence in Melanoma Diagnosis. Cancers, 17(24), 3896.
  4. Ly D.P., Shekelle P.G., & Song Z. (2023). Evidence for anchoring bias during physician decision-making. JAMA Internal Medicine, 183(8), 818–823.
  5. Semerci Z.M. et al. (2024). The role of AI in early diagnosis of head and neck skin cancers. Diagnostics, 14(14), 1477. https://doi.org/10.3390/diagnostics14141477
  6. Papachristou P. et al. (2024). AI decision support for melanoma detection in primary care. British Journal of Dermatology, 191(1), 125–133. https://doi.org/10.1093/bjd/ljae021

About the Author

Dr. Shervin Molayem is a California-based periodontist and co-founder of Trust AI, the first AI-native patient management system in dentistry. His work focuses on the oral-systemic connection, salivary diagnostics, and multimodal AI treatment planning. Dr. Molayem also serves as a board member, advisor, and investor in dental technology companies, helping accelerate innovation and modernize clinical care.