When artificial intelligence should remain silent: Algorithmic silence in oral disease diagnosis and the ethics of clinical judgment

Artificial intelligence has entered dentistry with remarkable speed. Machine learning systems now assist clinicians in detecting caries, interpreting radiographs, identifying periodontal changes, and screening oral lesions with levels of consistency approaching expert performance. Yet ethical discussion surrounding dental AI has largely focused on improving prediction accuracy, even though clinical medicine has long recognized uncertainty as an essential component of responsible diagnosis. This paper argues that responsible AI-assisted dentistry requires recognition of a critical ethical boundary: the moment at which automated diagnostic expression should be intentionally withheld.

Defining algorithmic silence

Algorithmic silence may be defined as the intentional withholding of automated diagnostic recommendation when model confidence, contextual validity, or ethical safety thresholds are not sufficiently satisfied. Silence functions as a designed safeguard embedded within clinical decision-support architecture, allowing uncertainty to become an actionable clinical signal.

Algorithmic silence in oral disease diagnosis

Oral disease diagnosis represents one of the most uncertainty-rich domains of healthcare. AI systems trained on image datasets may perform effectively in advanced disease detection while remaining vulnerable in early or atypical presentations. Calibrated abstention signals prompting referral or further investigation may better protect patients than forced classification.

Responsibility and clinical authority

Continuous algorithmic prediction introduces the risk of responsibility diffusion. Healthcare ethics maintains that the individual who decides bears responsibility. Algorithmic silence helps re-establish accountability by explicitly returning decision authority to the practitioner whenever uncertainty exceeds safe limits.

Designing ethical dental AI systems

Operationalizing algorithmic silence requires intentional design strategies including uncertainty quantification models, calibrated confidence thresholds, out-of-distribution detection, abstention-enabled neural networks, and clinician override prioritization.
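The strategies above can be combined into a simple abstention gate. The following is a minimal sketch, assuming a classifier that exposes a calibrated confidence score and an out-of-distribution score; every name and threshold here (DiagnosticResult, CONFIDENCE_THRESHOLD, OOD_THRESHOLD) is illustrative, not drawn from any deployed dental AI system:

```python
# Minimal sketch of an abstention gate implementing "algorithmic silence".
# All names and threshold values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.90  # calibrated confidence required before speaking
OOD_THRESHOLD = 0.05         # maximum tolerated out-of-distribution score

@dataclass
class DiagnosticResult:
    label: Optional[str]  # None signals intentional silence
    rationale: str

def diagnose(label: str, confidence: float, ood_score: float) -> DiagnosticResult:
    """Return a diagnosis only when confidence and context checks both pass;
    otherwise remain silent and return decision authority to the clinician."""
    if ood_score > OOD_THRESHOLD:
        return DiagnosticResult(None, "input outside training distribution; refer to clinician")
    if confidence < CONFIDENCE_THRESHOLD:
        return DiagnosticResult(None, "confidence below calibrated threshold; refer to clinician")
    return DiagnosticResult(label, f"confidence {confidence:.2f} meets threshold")
```

Silence here is not a failure state but an explicit output, which is what allows it to function as an actionable clinical signal prompting referral or further investigation.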

Fig. 1: The Algorithmic Silence Model in AI-Assisted Oral Diagnosis

Conclusion

Algorithmic silence represents an essential evolution in AI-assisted oral healthcare. By embedding uncertainty awareness within clinical systems, dentistry can preserve professional judgment while benefiting from computational intelligence.

About the author
Dr. Ameed Khalid Abdul-Hamid is an Iraqi–British dental surgeon and academic researcher, internationally recognized for his contributions to artificial intelligence in dentistry and healthcare. He serves as Chairman of the Arab Organisation for Artificial Intelligence in Healthcare and Chairman of the Saudi-British Medical Forum (London). His research focuses on AI-enabled diagnostics, digital health systems, and the ethical, responsible integration of artificial intelligence in clinical care.

Defensive medicine in the age of AI

Documentation, attribution, and skepticism as clinical safety tools

Defensive medicine used to mean ordering one more test, writing one more line in the note, or documenting that risks were discussed “in detail.” That definition no longer holds. In the AI era, defensive medicine is less about doing more, and more about how decisions were made, who contributed to them, and how uncertainty was handled once artificial intelligence entered the workflow.

The medical record is no longer written by a single clinician at the end of a long shift. It is increasingly co-authored by ambient scribes, summarization engines, clinical decision support tools, and large language models that can sound confident even when they are wrong.

Here is the legal reality clinicians need to absorb early: When AI enters the chart, responsibility does not shift. It concentrates.

A shared workspace, personal liability

Most clinicians already use AI, whether they call it that or not. Ambient documentation, auto-generated assessments, triage tools, record summarization, and literature synthesis are now routine. These tools reduce friction and save time. But they also introduce a new medico-legal problem:

If an AI-generated statement is wrong and it appears in the chart, who owns it?

The law has been consistent so far. The signer owns the note. Courts do not meaningfully distinguish between human-authored and AI-assisted documentation. The medical record remains a clinician’s representation of reality, regardless of how the text was produced. That alone should change how we document.

Hallucinations are a feature, not a bug

One of the most dangerous myths in healthcare AI is the belief that hallucinations are technical glitches that will be fixed with better models. They will not. Hallucinations are a structural feature of large language models. These systems do not retrieve truth. They generate statistically plausible language based on patterns in prior data.

This behaviour is not accidental. Models are rewarded for producing answers, not for saying “I don’t know.” In fact, any AI model that claims over 75 percent accuracy in a complex system shouldn’t be trusted. It’s probably learning the wrong thing very well.

Language models amplify this risk. They sound authoritative. They write cleanly. They can fabricate references, guidelines, or reasoning unless the reader already knows the answer. In healthcare, fluency without grounding is not neutral. It is dangerous.

Accuracy is not trustworthiness

We are repeatedly told to trust AI because it is “95 percent accurate.” However:

  • Accuracy is not safety.
  • Accuracy can hide bias.
  • Accuracy can ignore uncertainty.
  • Accuracy can collapse under distribution shifts.

In medicine, what matters is not how often a model is right in aggregate, but how it fails, when it fails, and whether humans can detect that failure in time. Clean metrics do not equal resilient performance.
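The gap between aggregate accuracy and safety can be made concrete with invented numbers: a model that is "95 percent accurate" overall can still miss most atypical presentations. A toy sketch, with all figures hypothetical:

```python
# Toy illustration: high aggregate accuracy can hide a badly failing
# subgroup. All case counts and accuracies below are invented.
def aggregate_accuracy(groups):
    """Weighted average accuracy across subgroups."""
    correct = sum(g["n"] * g["acc"] for g in groups)
    total = sum(g["n"] for g in groups)
    return correct / total

groups = [
    {"name": "typical presentations", "n": 950, "acc": 0.98},
    {"name": "atypical presentations", "n": 50, "acc": 0.40},
]

print(aggregate_accuracy(groups))  # 0.951: "95 percent accurate" overall,
# yet the model misses 60 percent of the atypical cases that matter most
```

The headline metric is driven entirely by the majority subgroup; the failure mode lives in the tail, which is exactly where a distribution shift will strike first.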

“Who (or what) said what” now matters

When AI contributes to clinical reasoning or documentation, attribution becomes a safety tool. Attribution documents judgment. It shows that AI assisted but did not replace reasoning. Years later, that distinction can be very important.

The standard of care is already shifting

Clinicians may be exposed when they use AI and override it, especially after adverse outcomes. This tension has been described as the negative outcome penalty paradox: clinicians can be punished whether they follow or reject AI recommendations once AI becomes normalized. Defensive medicine now requires reasoned positioning, not blind adoption or avoidance.

Defensive documentation needs a major redesign

Traditional defensive documentation emphasized thoroughness; AI-era defensive documentation emphasizes provenance. Practical shifts clinicians should adopt now:

  • Label AI-assisted content explicitly.
  • Avoid pasting AI output without review.
  • Document when AI recommendations were rejected and why.
  • Be cautious with AI-generated citations and guidelines.
  • Preserve clinician reasoning, not just summaries.
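One way to make these practices auditable is to attach a structured attribution record to each AI-assisted note section. The sketch below is hypothetical: the field names and the ambient-scribe tool are illustrative, and no EHR standard or vendor schema is implied:

```python
# Hypothetical provenance record for one AI-assisted note section.
# Field names are illustrative assumptions, not an EHR standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class NoteSectionProvenance:
    section: str                       # which part of the note this covers
    ai_generated: bool                 # was a draft produced by AI?
    ai_tool: Optional[str]             # which tool, labeled explicitly
    clinician_reviewed: bool           # the signer owns the note
    clinician_edits: str               # what human judgment changed
    ai_rejected_reason: Optional[str] = None  # why a recommendation was overridden
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = NoteSectionProvenance(
    section="Assessment",
    ai_generated=True,
    ai_tool="ambient scribe (hypothetical)",
    clinician_reviewed=True,
    clinician_edits="corrected medication dose; removed unverified citation",
)
```

A record like this captures exactly the distinction that matters years later: AI assisted, but the reasoning, the review, and any override were the clinician's.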

Training clinicians is now a legal issue

Clinicians do not need to become data scientists, but they do need to understand where hallucinations are most likely, how training data limitations affect output, and why confidence does not equal correctness. Ignorance will not be a defense when AI tools are widely available and increasingly normalized.

Augmented intelligence is the only defensible frame

Augmented intelligence keeps the clinician explicitly in the loop. It reinforces that AI assists but does not decide. The moment AI silently replaces reasoning in documentation, both patient safety and legal defensibility erode. The chart reclaims its original role: a legal narrative of clinical reasoning under uncertainty.


About the author

Dr. Hassan Bencheqroun is a pulmonary and critical care physician, assistant professor at the University of California Riverside School of Medicine, and CEO of Medical AI Academy. He hosts "The AI-Ready Doctor" podcast and is an active AiMed participant and speaker bridging clinical care, education, and technology.

Agentic AI in healthcare: Autonomous systems transforming clinical practice, patient safety, and the future of care delivery

Fig. 1: Agentic AI orchestration across clinical, administrative, regulatory, and patient engagement domains.

The healthcare industry has entered a transformative era. Agentic artificial intelligence — systems capable of autonomous reasoning, planning, and multi-step task execution with defined human oversight — is transitioning from research concept to enterprise deployment.

Unlike traditional machine learning that excels at narrow pattern recognition or generative AI that produces content reactively, agentic AI operates with goal-directed autonomy: decomposing complex objectives, coordinating specialized agents across disparate systems, and adapting strategies based on outcomes. This paradigm shift addresses healthcare’s most persistent challenges, including administrative burden consuming 20 percent of institutional budgets, physician burnout, and the growing complexity of clinical decision-making.1

The acceleration in late 2025 has been remarkable. On December 1, 2025, the U.S. Food and Drug Administration announced agentic AI deployment for all agency employees — the first major regulatory body to institutionalize autonomous AI workflows for administrative functions including meeting management, document processing, and compliance operations.2 Critically, these internal tools support agency operations but do not autonomously render pre-market review decisions. Days later, the Department of Health and Human Services released its comprehensive AI strategy positioning autonomous systems as central to federal health operations.3

Defining the Agentic Paradigm

The distinction between agentic AI and its predecessors is substantive. Traditional machine learning excels at classification within narrow domains. Generative AI expanded to content creation. Agentic AI introduces systems that pursue defined goals with limited supervision, typically employing multiple specialized agents coordinated through an orchestration layer.4

Fig. 2: AI evolution comparison — Traditional AI, Generative AI, and Agentic AI capabilities and interaction modes

Technical precision requires distinguishing workflow automation from true goal-directed autonomy. Research published in Frontiers in Artificial Intelligence found that agentic architectures can reduce cognitive workload by up to 52 percent compared to traditional clinical decision support.1
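The pattern that separates agentic systems from workflow automation can be sketched minimally: an orchestration layer routes each step of a decomposed goal to a registered specialist agent and escalates anything it cannot handle rather than guessing. All agent names and tasks below are invented for illustration:

```python
# Minimal sketch of an orchestration layer coordinating specialized agents.
# Agent names, skills, and tasks are invented for illustration only.
from typing import Callable, Dict, List, Tuple

def scheduling_agent(task: str) -> str:
    return f"scheduled: {task}"

def documentation_agent(task: str) -> str:
    return f"drafted note for: {task}"

class Orchestrator:
    """Routes each step of a decomposed goal to a registered agent."""
    def __init__(self) -> None:
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, skill: str, agent: Callable[[str], str]) -> None:
        self.agents[skill] = agent

    def run(self, plan: List[Tuple[str, str]]) -> List[str]:
        # Each plan step names the skill required and the concrete task.
        # Unknown skills are escalated to a human, preserving oversight.
        results = []
        for skill, task in plan:
            agent = self.agents.get(skill)
            results.append(agent(task) if agent else f"escalated to human: {task}")
        return results

orch = Orchestrator()
orch.register("scheduling", scheduling_agent)
orch.register("documentation", documentation_agent)
print(orch.run([("scheduling", "follow-up visit"),
                ("documentation", "follow-up visit"),
                ("billing", "claim review")]))
```

The escalation path is the design point: goal-directed autonomy with defined human oversight means the system knows which steps it is not authorized or equipped to execute.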

The Enterprise Technology Landscape

The market is consolidating around major platforms. Microsoft’s healthcare agent orchestrator provides pre-configured agents for clinical trial matching and tumor board preparation. Epic Systems has deployed multiple AI agents: Emmie (patient engagement), Art (provider communications), and Penny (revenue cycle management).6

Table 1: Healthcare Agentic AI Platform Comparison (January 2026)

Platform | Capabilities | Deployment Considerations
Microsoft Healthcare Agent Orchestrator | Multi-agent orchestration, clinical trial matching | Azure AI Foundry; broader EHR interoperability
Epic (Emmie, Art, Penny, Cosmos) | Patient engagement, provider communications, revenue cycle | Version-dependent; 6-12-month configuration
Nuance Dragon Copilot | Ambient clinical and nursing documentation | Generally available; multi-EHR integration
Atropos Evidence Agent | Proactive real-world evidence at point of care | Pilot deployments; addresses evidence accessibility

Regulatory Landscape and Liability

The FDA’s database lists over 1,250 AI-enabled medical devices authorized for marketing, but the vast majority are narrow diagnostic imaging tools. True agentic systems for clinical decision-making face the most stringent Class III regulatory pathway.9

Fig. 3: FDA AI adoption timeline — Elsa launch (June 2025) through agentic AI deployment (December 2025)

The liability landscape presents novel challenges. Traditionally, the “learned intermediary” doctrine shields device manufacturers when physicians exercise independent judgment. However, agentic systems that execute autonomously may eliminate this shield, exposing manufacturers to direct liability.12

Implementation Prioritization Framework

Table 2: Implementation Prioritization Framework

Phase | Use Cases | Risk Profile | Timeline
Immediate (2026) | Scheduling, prior auth, messaging triage | Low clinical risk | 3-6 months
Near-term (2026-2027) | Ambient documentation, coding assistance | Moderate risk | 6-12 months
Medium-term (2027-2028) | Clinical decision support, care coordination | Higher risk | 12-24 months
Longer-term (2028+) | Autonomous monitoring, closed-loop systems | Highest risk | 24+ months

References

  1. Hinostroza Fuentes N, et al. Frontiers in Artificial Intelligence. 2025;8.
  2. U.S. Food and Drug Administration. News Release. Dec 1, 2025.
  3. HHS Artificial Intelligence Strategy. Dec 4, 2025.
  4. IBM. What is Agentic AI? 2025.
  5. Microsoft Build 2025 Announcement.
  6. Healthcare IT News. Epic UGM 2025.
  7. Microsoft Industry Blog. Nov 18, 2025.
  8. PDA News Brief. Dec 2025.
  9. Bipartisan Policy Center. Nov 10, 2025.
  10. European Pharmaceutical Review. Jan 2026.
  11. Manatt Health AI Policy Tracker. 2025.
  12. Price WN, et al. JAMA. 2019;322(18):1765-1766.
  13. Parasuraman R, Human Factors. 2010.
  14. Longoni C, Journal of Consumer Research. 2019.
  15. Medical Economics. Johns Hopkins study. 2025.
  16. Health Affairs. 2014;33(9):1586-1594.
  17. Gartner Top Strategic Trends 2025.
  18. Microsoft Research Podcast. July 23, 2025.

About the Author

Dr. Srikanth Mahankali is a leading expert in the implementation of medical AI and policy. As CEO of Shree Advisory & Consulting and a member of the NSF/MITRE AI Workforce Working Group, he shapes national AI strategy while driving responsible innovation.