Artificial intelligence (AI) is rapidly transforming dentistry through applications in radiographic interpretation, caries detection, orthodontic planning, and digital smile design. Yet, these advances raise profound ethical questions regarding patient privacy, algorithmic bias, accountability, and transparency. This article examines the ethical foundations of Dental AI, drawing on global frameworks and emerging debates, while highlighting the pioneering Saudi experience in publishing the first national-level AI ethics charter for healthcare. The Saudi framework, issued in 2022 by the Saudi Data and Artificial Intelligence Authority (SDAIA) and the Ministry of Health, integrates international best practices with Islamic bioethical values, emphasizing patient-centricity, privacy, transparency, equity, accountability, and sustainability. Case studies demonstrate the real-world consequences of ethical lapses, from radiographic misinterpretation to orthodontic bias and data-sharing concerns. Particular attention is given to federated learning as a privacy-preserving solution that enables collaboration without compromising data security. Finally, future directions are discussed, including the integration of ethics into dental curricula and the need for international consensus through bodies such as the FDI World Dental Federation. By embedding ethics at the core, Dental AI can remain a tool in service of humanity, not the reverse.
Introduction
Artificial intelligence (AI) is transforming dentistry with unprecedented speed and depth. From diagnostic imaging and caries detection to orthodontic planning, implantology, and digital smile design, AI has begun to redefine the relationship between technology, clinician, and patient. Studies have shown that AI can reach or even exceed expert-level performance in radiographic interpretation, pathology detection, and predictive analytics for oral diseases. Despite this promise, profound ethical questions emerge: How can patient data be protected? How can we prevent algorithmic bias that could disadvantage vulnerable populations? Who bears responsibility when AI fails? These are not peripheral questions—they are central to the legitimacy and long-term adoption of AI in dentistry.
This article examines the broad ethical landscape of AI in dental practice, drawing on global guidelines and bioethical debates, and then focuses on the pioneering Saudi experience in publishing the first national-level ethical framework for AI in healthcare. This initiative represents a historic milestone for the region and provides valuable lessons for global dentistry, including implications for the diverse dental communities across the United States.
Moreover, AI’s integration into dentistry is not limited to diagnostics. Robotics for oral surgery, AI-powered scheduling systems, smart dental chairs, and patient engagement chatbots are redefining dental care delivery. With these advances, ethical issues become intertwined with practical realities—making the discussion of Dental AI ethics not merely theoretical, but a pressing matter for clinicians, policymakers, and technologists alike.
Core ethical challenges in dental AI
The integration of AI into dental practice raises several key ethical challenges that must be addressed:
Patient privacy and data security – Reliance on imaging and records raises concerns around consent, secondary use, and vulnerability to cyberattacks. For instance, dental radiographs stored in cloud servers may be vulnerable to unauthorized access if robust encryption protocols are not followed. Ethical dental AI must therefore incorporate state-of-the-art cybersecurity solutions while ensuring compliance with local and international regulations such as GDPR and HIPAA.
Algorithmic bias and fairness – Datasets may underrepresent certain populations, creating disparities in diagnostic accuracy. This challenge is universal, affecting diverse populations whether in the Middle East, North America, or other regions globally.
Transparency and explainability – The ‘black box’ nature of AI requires explainable AI (XAI) for trust and adoption across all healthcare systems.
Accountability and liability – Clarity is needed in defining responsibility when AI outputs cause harm, regardless of jurisdiction.
Human oversight and autonomy – The dentist must remain the final decision-maker in all clinical contexts.
Professional integrity – Preventing over-reliance on AI and preserving clinical reasoning skills remains essential for maintaining professional standards globally.
Global ethical frameworks for AI in healthcare
International organizations have addressed these issues through guidelines:
WHO (2021) emphasized accountability, inclusiveness, and sustainability.
European Commission (2019) proposed ‘Trustworthy AI.’
ADA and various dental AI associations have begun adapting general AI principles into dentistry.
Yet, most frameworks remain broad and not specific to dental practice, highlighting the need for more specialized guidance.
The Saudi experience: A pioneering ethical charter
Saudi Arabia, aligned with Vision 2030, has positioned itself as a global leader in AI governance. In 2022, the Saudi Data and Artificial Intelligence Authority (SDAIA), in collaboration with the Ministry of Health, published the ‘AI Ethics in Healthcare Charter’—the first national framework of its kind in the Middle East. The Charter integrates international best practices with Islamic bioethical principles such as justice, beneficence, and respect for human dignity.
The process of drafting the Charter involved multi-stakeholder collaboration—including ethicists, engineers, clinicians, legal scholars, and policymakers. Workshops and consultations ensured that the framework was both technically rigorous and socially legitimate. Importantly, dentistry was identified as a priority field given its rapid digitization and the unique sensitivity of dental data, such as facial images and 3D intraoral scans.
Key principles of the Saudi framework
Patient-centricity – Prioritizing safety and dignity in all AI applications.
Privacy and security – Ensuring robust data protection within national jurisdiction while enabling beneficial uses.
Transparency – Requiring explainable and auditable AI systems that practitioners can understand and trust.
Equity – Ensuring fair access across urban and rural regions, addressing disparities in healthcare delivery.
Accountability – Clarifying roles and responsibilities of developers, clinicians, and regulators.
Sustainability – Aligning AI adoption with long-term healthcare goals and resource allocation.
Historical and cultural context of dental ethics in AI
Ethics in medicine and dentistry has a long history rooted in cultural, religious, and professional codes. From the Hippocratic Oath to Islamic medical ethics pioneered by Ibn Sina and Al-Razi, the central principles of beneficence, non-maleficence, autonomy, and justice remain relevant. In modern dentistry, these principles intersect with new technological paradigms: algorithms, machine learning models, and robotic systems. Saudi Arabia’s approach is distinctive in that it explicitly connects its AI ethics framework to Islamic values, positioning the Kingdom at the crossroads of tradition and innovation while offering universal principles applicable across cultures.
Federated learning: An ethical enabler
One of the most promising approaches to address privacy and fairness challenges in Dental AI is Federated Learning. Instead of centralizing sensitive dental data, federated learning allows multiple clinics or hospitals to train a shared AI model locally. The model parameters are then aggregated centrally without transferring raw patient data. This method, first proposed by Konečný et al. (2016), enables cross-institutional collaboration while safeguarding privacy.
In the Saudi context, federated learning aligns perfectly with the AI Ethics Charter. It allows hospitals from Riyadh to Jeddah, and from Dammam to NEOM, to contribute to the development of robust AI tools without compromising confidentiality. Such a system not only protects privacy but also ensures representation of diverse patient populations, thereby reducing algorithmic bias.
Additionally, federated learning supports continual model improvement while respecting local regulations. For example, dental schools across Saudi Arabia could collectively train AI systems for caries detection, benefiting from data diversity without compromising privacy. This collaborative approach also aligns with global trends towards distributed, privacy-preserving AI and offers a model that other countries and regions can adapt to their own contexts.
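To make the mechanics concrete, the following is a minimal sketch of federated averaging under simplifying assumptions: a linear model with NumPy weights, synthetic data, and hypothetical clinic names. Production dental AI systems would use deep networks, secure aggregation, and far more elaborate infrastructure.

```python
# Minimal federated-averaging sketch (illustrative only): raw patient data
# never leaves each clinic; only model parameters are shared and averaged.
import numpy as np

def local_update(global_weights, clinic_data, lr=0.01):
    """Train locally at one clinic on its own records."""
    w = global_weights.copy()
    for x, y in clinic_data:          # e.g., radiograph features and labels
        grad = x * (x @ w - y)        # gradient of squared error, linear model
        w -= lr * grad
    return w

def federated_round(global_weights, clinics):
    """Aggregate parameter updates centrally, never the underlying records."""
    local_weights = [local_update(global_weights, data) for data in clinics.values()]
    sizes = np.array([len(d) for d in clinics.values()], dtype=float)
    # Weighted average (FedAvg): larger clinics contribute proportionally more.
    return np.average(local_weights, axis=0, weights=sizes)

# Hypothetical clinics, each holding its own synthetic data on-site.
rng = np.random.default_rng(0)
clinics = {name: [(rng.normal(size=5), rng.normal()) for _ in range(20)]
           for name in ["riyadh", "jeddah"]}
weights = np.zeros(5)
for _ in range(10):                   # ten communication rounds
    weights = federated_round(weights, clinics)
```

The key property the sketch illustrates is that only model parameters cross institutional boundaries; radiographs and patient records remain on-site at each participating clinic.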
Case studies in dental AI ethics
Several real-world scenarios highlight the importance of ethical principles in practice:
Radiographic misinterpretation: An AI model misclassified a periapical lesion, leading to unnecessary endodontic treatment. The case raised questions about liability between the dentist and software developer, demonstrating the need for clear accountability frameworks.
Bias in orthodontic predictions: A machine learning model trained primarily on European patients underperformed when used on Saudi adolescents, underlining the need for diverse datasets that represent global populations.
Data sharing concerns: A multinational dental imaging project faced resistance from patients who feared their facial scans could be misused beyond healthcare. Federated learning was introduced as a solution, demonstrating practical applications of privacy-preserving technologies.
Implications for dental AI worldwide
The Saudi ethical framework, combined with federated learning, offers a replicable model for the global community:
It serves as a blueprint for other nations seeking culturally grounded AI ethics frameworks that respect local values while maintaining universal ethical principles.
It encourages dental associations worldwide to issue Dental AI-specific ethical guidelines adapted to their regulatory and cultural contexts.
It strengthens international collaborations in AI development without violating privacy regulations, enabling global knowledge sharing while protecting patient data.
The Saudi experience demonstrates that comprehensive ethical frameworks can be developed that honor cultural and religious traditions while embracing technological innovation. This model has particular relevance for diverse societies worldwide, including the multicultural communities served by dental professionals globally.
Future directions
Looking ahead, Dental AI ethics will increasingly require dynamic governance structures. Rapid innovations such as generative AI, 3D printing with AI-based design, and integration of genomic data pose new ethical dilemmas. Saudi Arabia’s framework offers a strong foundation, but continuous updates will be essential as technology evolves.
Professional dental associations should develop AI ethics curricula for dental education, ensuring that future dentists are not only competent in using AI tools but also conscious of their ethical implications. International collaboration, perhaps through the FDI World Dental Federation, could lead to a global consensus on Dental AI ethics, harmonizing diverse cultural and regulatory perspectives while building on pioneering efforts like the Saudi framework.
The integration of ethics education into dental curricula worldwide will be essential for preparing the next generation of practitioners. This education should include both theoretical foundations and practical case studies, drawing on experiences from various cultural and regulatory contexts.
Conclusion
Ethics in Dental AI is not an afterthought—it is the foundation of trust between patients, clinicians, and technology. Saudi Arabia’s pioneering step in publishing the first AI ethics framework in healthcare demonstrates that cultural values and modern bioethics can converge successfully. By embracing privacy-preserving technologies such as federated learning, the Kingdom sets a global precedent for ethical AI in dentistry.
The Saudi experience offers valuable lessons for the international dental community, showing how local values can be honored while universal principles are established. As dental professionals worldwide grapple with similar challenges, the Saudi model provides a roadmap for integrating ethics into AI implementation.
As dentistry enters an era where AI systems may detect caries, design smiles, and even guide surgeries, the guiding question remains: Will AI remain a servant of humanity, or will humanity become its servant? Saudi Arabia’s experience suggests that ethics can ensure the former, providing a model that the global dental community can adapt and build upon.
The path forward requires continued international collaboration, sharing of best practices, and commitment to placing patient welfare and professional integrity at the center of all AI development and implementation efforts.
References
Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books; 2019.
Char DS, Shah NH, Magnus D. Implementing Machine Learning in Health Care — Addressing Ethical Challenges. N Engl J Med. 2018;378:981–983.
Doshi-Velez F, Kim B. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. 2017.
World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: WHO; 2021.
European Commission. Ethics Guidelines for Trustworthy AI. Brussels; 2019.
Saudi Data and Artificial Intelligence Authority (SDAIA). Charter of AI Ethics in Healthcare. Riyadh; 2022.
Konečný J, McMahan HB, Ramage D, Richtárik P. Federated optimization: Distributed optimization beyond the datacenter. arXiv preprint arXiv:1511.03575. 2016.
American Dental Association. AI in Dentistry: Current and Future Applications. ADA White Paper; 2023.
About the author
Dr. Ameed Khalid Abdul-Hamid is an Iraqi–British dental surgeon and academic researcher, internationally recognized for his contributions to artificial intelligence in dentistry and healthcare. He holds advanced qualifications from the University of Baghdad and the University of London, and is a Fellow of the Royal College of Surgeons (UK). Dr. Abdul-Hamid serves as Chairman of the Arab Organisation for Artificial Intelligence in Healthcare and Chairman of the Saudi-British Medical Forum (London). His research focuses on AI-enabled diagnostics, digital health systems, and the ethical, responsible integration of artificial intelligence in clinical care. In 2025, his work in dental artificial intelligence was published in the British Dental Journal, and he is a recipient of the Alan Turing Award in Dental Artificial Intelligence.
In the rapidly advancing landscape of medical technology, few innovations capture the clinical and public imagination as profoundly as Brain-Computer Interfaces (BCIs). Once a concept confined to science fiction, BCIs are now a clinical reality, emerging as a transformative modality for patients with severe neurological impairments. These sophisticated systems establish a direct communication pathway between the brain and an external device, translating neural signals into commands that can restore lost motor and communicative functions. As we transition from investigational research to tangible clinical application, it is imperative for clinicians, scientists, and policymakers to assess the evidence critically, navigate the complex implementation challenges, and steer the ethical trajectory of this powerful technology.
At its core, a BCI decodes intended movements from neural ensemble activity in the motor cortex, translating multi-unit spike patterns into kinematic parameters for an output device.1 The recent acceleration in BCI development has been catalyzed by the integration of artificial intelligence (AI), particularly machine learning and deep learning algorithms. These computational tools have dramatically enhanced the fidelity and efficiency of neural decoding, enabling the discernment of subtle patterns in complex brain activity with unprecedented precision.2 This synergy between computational neuroscience and AI is unlocking clinical applications that were previously deemed unattainable, heralding a new era of restorative neurology.
A new dawn for patients with paralysis: From restoration to recovery
The most immediate and life-altering impact of BCIs is being realized in patients with paralysis resulting from conditions such as spinal cord injury (SCI), amyotrophic lateral sclerosis (ALS), and stroke. For individuals who have lost the ability to move or speak, BCIs offer a gateway to regain autonomy and reconnect with the world. Landmark clinical trials conducted with small cohorts under carefully controlled conditions are demonstrating not just functional restoration, but also evidence of underlying neurological recovery.
A pivotal 2023 study in Nature by Lorach et al. detailed a “digital bridge”—a brain-spine interface (BSI) that restored communication between the brain and the spinal cord in an individual with chronic tetraplegia. This BSI enabled the participant to stand and walk naturally, with the system’s reliability remaining stable for over a year. The transition to independent home use represents a watershed moment, marking the technology’s maturation from a laboratory-based tool to a viable medical device.3 Remarkably, neurorehabilitation supported by the BCI led to improved neurological recovery, with the participant regaining the ability to walk with crutches even when the interface was switched off.
This progress is mirrored across the field, with the number of individuals with permanent BCI implants growing from approximately 50 in 2020 to 70-80 in 2025. While this accelerating clinical translation is driven by multiple research groups and commercial entities, it is crucial to recognize that these remain small-scale deployments. The strongest results continue to emerge from a handful of specialized centers working with carefully selected participants under controlled task conditions.
| Company/Initiative | Key BCI Technology & Approach | Key Clinical Finding / Status (as of late 2025) |
|---|---|---|
| BrainGate | Intracortical microelectrode arrays (craniotomy) | Demonstrated typing speeds of 90 characters per minute with >99% accuracy in controlled laboratory settings.4 |
| Synchron | Endovascular stent electrode (Stentrode) | Avoids open-brain surgery. The COMMAND trial, a 15-patient feasibility study, met its primary safety endpoint.5 Approximately 3-5 individuals with paralysis are reportedly using the implant to control digital devices in supervised research settings. |
| Paradromics | High-data-rate cortical implants (craniotomy) | Received FDA Breakthrough Device Designation and approval for its Connexus BCI clinical trial focused on restoring speech.6 |
| Clinatec | Implantable BCI with exoskeleton control (craniotomy) | Enabled a tetraplegic patient to control a four-limb exoskeleton and restored natural walking in a paraplegic patient.3 |
The computational engine: AI’s role in decoding neural intent
The sophistication of modern BCIs is inextricably linked to advancements in AI. The brain’s electrical signals are inherently noisy and complex, subject to drift over time as electrodes shift position and tissue responds to chronic implantation. Machine learning models, particularly deep neural networks such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks, are exceptionally adept at identifying meaningful patterns within this neural variability.7 More recently, Transformer-based architectures and self-supervised learning approaches have emerged as state-of-the-art methods for neural decoding, offering improved performance and reduced training data requirements. These AI “co-pilots” learn to associate specific patterns of brain activity with a user’s intended actions. Furthermore, advances in transfer learning and domain adaptation have reduced within-session calibration time from hours to 15-30 minutes in many systems. However, cross-session and cross-task transfer remain active areas of research.8
Modern BCIs increasingly employ closed-loop adaptive decoders that continuously update based on user feedback and changing neural patterns. These systems integrate multi-modal signals – combining neural recordings with eye tracking or residual muscle activity – to achieve more robust and reliable control, particularly in real-world environments outside the laboratory.
Systematic reviews have confirmed the superiority of deep learning approaches for enhancing the accuracy of neural decoding in specific, well-defined tasks.2 However, it is crucial to distinguish between BCI modalities and their real-world applicability. While non-invasive EEG-based systems achieve motor imagery classification accuracies around 85%,9 invasive intracortical systems consistently demonstrate performance exceeding 95% for discrete classification tasks (e.g., cursor selection, menu navigation) in laboratory settings, though continuous trajectory control typically achieves 70-85% accuracy.4 As these AI models become more integrated into clinical devices, addressing the “black box” problem through explainable AI (XAI) will be critical for regulatory approval and clinical trust. It is important to note that models demonstrating impressive results in controlled research environments may require substantial adaptation when deployed across different tasks, environments, or individual users.
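As a rough illustration of the decoding step described above, here is a minimal sketch of an LSTM-based velocity decoder, assuming binned spike counts from a 96-channel array as input; the architecture, dimensions, and random data are illustrative placeholders, not any specific group’s implementation.

```python
# Illustrative PyTorch sketch: an LSTM mapping binned spike counts to
# 2-D cursor velocity, the continuous-control task discussed above.
import torch
import torch.nn as nn

class VelocityDecoder(nn.Module):
    """Maps binned neural activity to intended 2-D cursor velocity."""
    def __init__(self, n_channels: int = 96, hidden_size: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 2)  # (vx, vy) per time bin

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, time_bins, n_channels) of binned firing rates
        hidden, _ = self.lstm(spikes)
        return self.readout(hidden)

decoder = VelocityDecoder()
binned = torch.randn(8, 50, 96)   # 8 trials, 50 time bins, 96 electrodes
velocity = decoder(binned)        # (8, 50, 2) decoded kinematics
```

In practice, such a decoder would be trained on calibration data and wrapped in the adaptive, closed-loop machinery described above, with recalibration to compensate for signal drift.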
Navigating the regulatory and ethical frontier
As with any transformative medical technology, the path from laboratory to widespread clinical adoption is governed by rigorous regulatory oversight and complex ethical considerations. In the U.S., high-performance BCIs are designated as Class III medical devices and require the most stringent Premarket Approval (PMA) pathway from the FDA. To accelerate this process, many BCI developers have received Breakthrough Device Designation, which provides for a more collaborative and prioritized review. The FDA’s 2021 draft guidance on implanted BCI devices, supplemented by workshops on adaptive trial designs, provides a clear framework for developers.10 However, this must be complemented by robust post-market surveillance to monitor long-term safety and signal stability. In this area, international regulatory bodies, such as the European Union (under its Medical Device Regulation), are also establishing stringent requirements. Emerging regulatory considerations include cybersecurity requirements for wireless implantable BCIs, the use of real-world evidence (RWE) for post-market surveillance, and the development of international standards through IEEE, ISO, and ASTM working groups to ensure device safety and interoperability.
Beyond regulatory approval, the deployment of BCIs raises profound ethical questions. The ability to decode neural signals brings concerns about mental privacy and cognitive liberty to the forefront. The unique challenge of ensuring informed consent—when a device might decode thoughts a user did not consciously intend to share—demands novel ethical and legal safeguards. Frameworks such as the OECD Recommendation on Responsible Innovation in Neurotechnology and the Neurorights Foundation’s proposed legal protections are vital starting points.11
Critical questions of neural data governance remain unresolved: Who owns the neural data generated by these devices? How long should it be retained? What restrictions should govern secondary use for research or commercial purposes? The IEEE P2976 working group on neural data privacy is developing standards to address these questions, but comprehensive legal frameworks are still emerging.
Issues of equity and access are also paramount. The specter of “neurocolonialism”—where technologies developed in high-income nations are deployed in low-resource settings in an extractive manner—is a critical challenge. For example, collecting neural data from vulnerable populations for analysis and commercialization elsewhere, without equitable benefit-sharing, would perpetuate digital colonialism in the neurotechnology domain.
Within high-income nations, equity challenges persist. Geographic disparities in access to specialized neurosurgical centers, insurance coverage variations, and the substantial out-of-pocket costs for experimental therapies create barriers that disproportionately affect rural and socioeconomically disadvantaged populations. Moreover, the disability rights community has raised important questions about the framing of BCIs as ‘cures’ versus tools for accommodation, emphasizing the principle of ‘Nothing About Us Without Us’ in technology development.
The future is clinical: From approval to access
Despite the challenges, the future of Brain-Computer Interfaces is no longer a distant prospect; it is an emerging clinical reality for specific patient populations. The convergence of neuroscience, AI, and medicine is creating a powerful new therapeutic toolkit for some of the most devastating neurological conditions. Based on current regulatory trajectories, the first commercial approvals for narrowly defined indications—such as severe paralysis with communication impairment—are anticipated by 2027-2028, with broader but still clinically focused adoption by 2030-2032. However, FDA approval is only the first step. Widespread clinical access will depend on establishing clear reimbursement pathways with Medicare, Medicaid, and private insurers – a critical health economics challenge that the field must address proactively.
However, scaling from clinical trials to widespread deployment faces substantial practical barriers. Manufacturing must transition from research-grade, hand-assembled devices to commercial-grade production with stringent quality control. Neurosurgical capacity is limited—specialized training programs are needed to prepare surgeons for these novel procedures. Device longevity is typically 5-10 years, necessitating replacement surgeries and upgrade pathways; adverse event rates, including infection, have remained below 5% in recent trials. The total cost of care—including surgery ($100,000-$200,000), devices, follow-up visits, and maintenance—raises critical cost-effectiveness questions that payers will scrutinize.
It is essential to maintain realistic expectations. BCIs are not poised to replace keyboards, smartphones, or conventional assistive technologies for the general population in the foreseeable future. Instead, their transformative potential lies in serving individuals for whom conventional interfaces are inaccessible—those who are locked-in, severely paralyzed, or have lost the ability to communicate through traditional means. For this population, a reliable, high-bandwidth neural communication channel is not a lifestyle enhancement but a profound restoration of agency and connection to the world.
From the patient perspective, early adopters report profound improvements in quality of life and sense of agency. However, the daily reality includes calibration sessions, periodic clinical visits for system maintenance, and the psychological adjustment to living with an implanted device. Understanding and supporting the lived experience of BCI users—not just the technical performance—will be essential for successful long-term deployment.
For physicians, surgeons, and allied health professionals, the rise of BCIs signals a need to engage with and understand this rapidly advancing field. By fostering a collaborative ecosystem built on evidence-based practice, rigorous scientific validation, and a steadfast commitment to ethical principles, we can ensure that the mind’s new frontier benefits all of humanity – less magic, more medicine.
References
Donoghue, J. P. (2002). Connecting cortex to machines: recent advances in brain interfaces. Nature Neuroscience, 5(Suppl), 1085-1088.
Saeidi, M., Karwowski, W., Farahani, F. V., Fiok, K., & Taiar, R. (2021). Neural Decoding of EEG Signals with Machine Learning: A Systematic Review. Brain Sciences, 11(11), 1525.
Lorach, H., Galvez, A., Spagnolo, V., et al. (2023). Walking naturally after spinal cord injury using a brain‒spine interface. Nature, 618, 126-133.
Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M., & Shenoy, K. V. (2021). High-performance brain-to-text communication via handwriting. Nature, 593, 249-254.
ClinicalTrials.gov. (2024). A Feasibility Study to Assess the Safety and Efficacy of the Synchron Stentrode System for Motor Enablement in Patients With Severe Paralysis (COMMAND). NCT05035823.
STAT News. (2025, November 20). FDA approves Paradromics’ brain-computer interface trial for speech restoration.
Livezey, J. A., & Kording, K. P. (2021). Deep learning approaches for neural decoding across tasks and time. Briefings in Bioinformatics, 22(2), 1577-1591.
Glaser, J. I., Benjamin, A. S., Farhoodi, R., & Kording, K. P. (2020). The Roles of Supervised Machine Learning in Systems Neuroscience. Progress in Neurobiology, 194, 101826.
Das, A., et al. (2025). Enhanced EEG signal classification in brain-computer interface systems by leveraging advanced machine learning and deep learning. Scientific Reports.
U.S. Food and Drug Administration. (2021, May 20). Implanted Brain-Computer Interface (BCI) Devices for Patients with Paralysis or Amputation: Non-Clinical Testing and Clinical Considerations.
OECD. (2019). Recommendation of the Council on Responsible Innovation in Neurotechnology. OECD Legal Instruments.
About the author
Dr. Srikanth Mahankali is a leading expert in the implementation of medical AI and policy. As CEO of Shree Advisory & Consulting and a member of the NSF/MITRE AI Workforce Machine Learning Working Group, he shaped national AI strategy while driving responsible innovation in healthcare.
The next wave of digital health—AI explainers, digital health avatars, and simulation-ready digital twins—will not thrive inside institution-tethered portals. These tools need comprehensive, longitudinal data assembled under a single, durable identity and governed by transparent, revocable consent. A patient-controlled personal health record (PHR) provides exactly that substrate. Recent federal policy has made consumer-mediated exchange both lawful and practical; meanwhile, advances in AI and FHIR standards make the PHR the most agile place to generate individualized guidance while protecting safety and privacy. Far from being a niche adjunct, a patient-owned PHR is the better platform for AI-directed care, patient engagement, and digital health equity.
Rails that finally favor the individual
The 21st Century Cures Act final rule prohibits information blocking and requires standardized, certified APIs that let patients access and use their electronic health information—cementing app-mediated retrieval as a baseline right.1 CMS’s Patient Access policies reinforce this architecture.2 Together they shift interoperability from institution-negotiated pipes to patient-authorized flows, enabling PHRs to aggregate data across sites, not just within a single portal.3,4,5,6 In parallel, TEFCA is standing up national exchange “rails.” Its Individual Access Services (IAS) pathway lets consumer apps retrieve a person’s records via QHIN networks—meaning a PHR can reach beyond one-off connections and into nationwide coverage. For older adults, Blue Button 2.0 adds multiple years of Medicare claims, essential for medication, risk, and adherence analytics that AI avatars will use.5
Why PHRs fit AI (and digital twins)
AI-directed tools need heterogeneous inputs: clinical records from multiple EHRs, claims, imaging, devices, and patient-reported outcomes. Portals are excellent for transactions (orders, messages) but remain siloed by enterprise. A PHR, by design, fuses cross-site data under a single, consented identity and can expose it—selectively—to analytic services that return lay explanations, adherence plans, and scenario-based simulations. The standards are ready: SMART/HL7 Bulk Data (Flat FHIR) supports cohort-level exports for quality and population health, while SMART on FHIR APIs handle patient-level access inside apps.7,8,9 These capabilities underpin both real-time coaching and “push-button population health.”1 For digital health twins, recent reviews highlight the need for rich, longitudinal, multi-modal inputs—a requirement a patient-controlled PHR can meet more readily than any single EHR portal or OS-tied aggregator.10
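As a simple illustration of the patient-level access path, here is a minimal sketch of a PHR pulling laboratory Observations over a FHIR R4 API, assuming an access token already obtained through a SMART on FHIR authorization flow; the endpoint and patient ID are hypothetical placeholders.

```python
# Illustrative sketch: patient-authorized retrieval of lab results via FHIR R4.
import requests

FHIR_BASE = "https://fhir.example-ehr.org/R4"  # hypothetical FHIR R4 endpoint
ACCESS_TOKEN = "..."  # obtained via the SMART on FHIR OAuth2 flow

def fetch_lab_observations(patient_id: str) -> list[dict]:
    """Retrieve a patient's laboratory Observations for aggregation in the PHR."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory", "_count": 100},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR searchset Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Results can then be normalized, explained in plain language, or fed to analytics.
labs = fetch_lab_observations("patient-123")
```

The same pattern extends to other FHIR resources (Conditions, MedicationRequests, DocumentReferences), while Bulk Data exports follow an analogous asynchronous flow at the cohort level.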
Safety and scope: clear lines for AI
The FDA’s final guidance on Clinical Decision Support (CDS) draws a workable boundary: clinician-facing software that merely supports decisions and allows the professional to independently review the basis qualifies as Non-Device CDS (i.e., outside device regulation).7 Patient-facing diagnostic, triage, or treatment claims, or analysis of signals/images, tip into SaMD and need clearance. This is ideal for PHRs: they can host patient-education AI (plain-language lab/imaging explanations, discharge checklists) and non-device CDS for clinicians today—while isolating higher-risk modules for separate FDA pathways tomorrow. That modularity is harder inside monolithic portals where features and claims blur across the product.7
Privacy that is strict—just different.
Many consumer PHRs operate outside HIPAA unless acting for a provider/plan; they are regulated by the FTC and a growing lattice of state health-data and biometric laws. The FTC’s modernized Health Breach Notification Rule (HBNR) explicitly covers health apps and connected devices and clarifies breach obligations.8 A PHR that bans ad-tech trackers on sensitive surfaces, uses purpose-specific, revocable consents, and maintains an auditable sharing ledger is not “unregulated”; it is accountable under a framework designed for consumer technologies. When a PHR contracts as a Business Associate (e.g., to document discharge teaching or write back to a chart), HIPAA simply governs those flows. This dual-regime architecture lets PHRs innovate quickly on consumer features while satisfying enterprise requirements when needed.
Engagement—and equity—are where PHRs shine.
If AI is to narrow disparities, it must meet people where they are: on mobile devices, after hours, and outside clinic walls. Pew Research reports that roughly nine in ten U.S. adults own a smartphone; mobile-only internet use remains common among lower-income populations—an adoption pattern tailor-made for a mobile-first PHR with plain-language explainers, SMS nudges, and proxy support for caregivers.11 Evidence already links digital engagement to operational gains: a systematic review finds patient portal use can improve outcomes and efficiency; other studies associate digital scheduling/portal use with fewer no-shows, a key lever for access in safety-net systems. A patient-controlled PHR generalizes those benefits across all sites of care rather than confining them to one portal.12 For population health, the same platform can consent patients into community and research initiatives, enabling culturally relevant education and data-donation models that include groups historically underrepresented in research—without sacrificing agency.
Better for payors, providers, and regulators
For payors, a member-authorized PHR provides fused clinical-plus-claims context to target outreach, support medication adherence, and close HEDIS/Stars gaps—without waiting for each provider’s portal to catch up. Blue Button 2.0 ensures that, at least for Medicare members, a robust longitudinal baseline is available on day one.2 For providers, a patient-owned longitudinal record reduces re-work (chasing CDs, re-taking histories), documents teach-back, and deflects routine “please explain my labs” messages by delivering explanations upstream—freeing clinical time for relationship-heavy tasks. For regulators, PHRs operationalize the intent of Cures and TEFCA: they convert “right of access” and “trusted exchange” into real-world, patient-directed data liquidity and inject competition at the edge, where innovation touches patients.1
What about the incumbents?
EHRs remain essential as transactional systems of record for orders, documentation, and revenue cycle. But they are ill-suited to be the only interface for AI-directed, patient-facing services. Their incentives are rightly aligned to clinician productivity and institutional compliance. A patient-controlled PHR occupies a different locus of control: it’s the individual’s canonical copy, portable across life contexts (new insurer, new employer, moving states), with consented APIs that let many AI services compete to deliver value. By design, that ecosystem is more modular: non-device education today; cleared SaMD add-ons tomorrow; Bulk Data for quality programs when a sponsor funds it; TEFCA IAS to fill any remaining connectivity gaps.6
A pragmatic way forward
Build to four near-term capabilities: (1) Cures-compliant FHIR connections for major portals, (2) Blue Button 2.0 for claims, (3) device/wearable feeds and patient-reported data, and (4) consented sharing with time-boxed, scope-limited links. Layer explainable AI that turns results into actions patients can take today; keep clinician-facing support transparent and reviewable to stay outside device scope; and reserve diagnostic/triage modules for separate SaMD tracks. Use Bulk Data exports, when authorized, to feed quality reporting and community programs. As TEFCA IAS matures, plug in to scale from regional to national without renegotiating one portal at a time.2
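For the fourth capability, one plausible shape for a time-boxed, scope-limited sharing link is a signed consent token. The sketch below assumes the PyJWT library; the claim names, scopes, and key handling are illustrative, not a prescribed design.

```python
# Illustrative sketch of an expiring, scope-limited consent grant for a PHR.
import time
import jwt  # PyJWT

SIGNING_KEY = "phr-signing-key"  # hypothetical; real systems need managed keys

def mint_share_token(patient_id: str, scopes: list[str], ttl_hours: int = 72) -> str:
    """Mint an expiring, scope-limited consent grant for one data consumer."""
    claims = {
        "sub": patient_id,                            # whose record is shared
        "scope": " ".join(scopes),                    # what may be read
        "exp": int(time.time()) + ttl_hours * 3600,   # time-boxed expiry
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_share_token(token: str) -> dict:
    """Reject expired or tampered grants; raises on failure."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])

# A 24-hour grant limited to lab results for a hypothetical patient.
token = mint_share_token("patient-123", ["Observation.read"], ttl_hours=24)
print(verify_share_token(token)["scope"])
```

Revocation can then be implemented with a server-side ledger of issued grants, which doubles as the auditable sharing record described earlier.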
Conclusion
If we want AI avatars that actually help people, digital twins that simulate realistic trajectories, and engagement that narrows—not widens—disparities, we should stop forcing everything through institution-tethered portals. A patient-controlled PHR gives individuals total, auditable control of their health data; gives innovators permissioned access to the multi-modal fuel AI needs; and gives payors, providers, and regulators a scalable path to better outcomes with stronger accountability. With Cures, TEFCA, Blue Button, Bulk Data, and clear FDA/FTC guardrails, the policy and technical pieces are finally aligned. The most direct route to AI-directed health care that works for everyone—especially the underserved—is to put the patient-owned PHR at the center.
References
Office of the National Coordinator for Health IT. 21st Century Cures Act Final Rule (Interoperability & Information Blocking). 2020–2024. (Federal Register)
Mandl KD, et al. “Push-button population health: SMART/HL7 FHIR Bulk Data.” npj Digit Med. 2020. (PMC)
Katsoulakis E, et al. “Digital twins for health: a scoping review.” npj Digit Med. 2024. (Nature)
Pew Research Center. Americans’ Use of Mobile Technology and Home Broadband (2024). (Pew Research Center)
Carini E, et al. “Impact of patient portals on outcomes and efficiency: systematic review.” J Med Internet Res. 2021. (JMIR Publications)
About the author
Sanjaya Khanal, MD, FACC—Interventional cardiologist and Founder/CMO of MyMR, a patient-owned AI PHR. Harvard-trained, Associate Professor, IT director and Chief of Staff; med-device entrepreneur with multiple patents and publications advancing scalable, patient-centric care.
Everyone who has ever tried to introduce new technology into the practice of medicine knows how difficult this process is. The reasons are numerous, but the overarching issue is that patient safety is supremely important, and any new clinical tool must be rigorously tested to ensure its safety. Another issue is that the introduction of new tools into existing workflows has proven to be a complicated task that few wish to risk. These workflows have been developed over decades to be compliant with regulatory requirements, protect patient safety, and ensure care team collaboration and access. Introduction of new technology means disruption to these well-established workflows, potentially making the entire team’s life more difficult rather than easier.
It is interesting to note that technology has driven automation in industries such as travel, hospitality, and mobility, but has proven far less effective in healthcare. One of the reasons is the fragmented nature of healthcare data, which makes automation of clinical, administrative, or operational activities more difficult. While Amazon and Netflix can improve the consumer experience for shopping and streaming with only a small slice of personal data in those areas, this is not possible in healthcare.
If a segment of the medical history is missing, decision-making support is not possible. Submitting a medical payment claim is unworkable if part of the patient’s work-up is not included in the documentation; insurance companies are always on the lookout for any reason to deny or reduce payment. Allowing AI to process and submit payment claims autonomously will simply not function if the radiology reporting system is not connected to the EHR.
While progress has been made in interoperability, there are simply too many hubs to connect, even within a city, let alone in a state or a nation. Submitting a claim with prior authorization for a medication may not require full interoperability, but identifying health issues and gaps in care proactively, and addressing them effectively, requires near complete medical records. If a procedure was performed at one medical center, but the patient’s records at a different center do not show the results, it is a clear indication that neither center is fully aware of the patient’s health status, indicating a serious danger to the patient.
There is a perceived lack of clear evidence of clinical benefits or financial return on investment (ROI). New technology requires the expenditure of money, time, and effort. It must demonstrate some possible benefits, such as improving patient outcomes, enhancing clinical productivity, upgrading operational and administrative efficiency, increasing revenues, and/or lowering costs.
To show improved patient outcomes, you need to do real-world, prospective, controlled trials to document the promised benefits. Without that, you’re just making unsubstantiated claims, and the medical community is a tough crowd for that. It has shown itself to be quite uncooperative with anything that does not prove its claims through well-designed studies. A good example is the array of radiology AI solutions that read scans and “help” radiologists with their workflows. All of them have FDA approval, a low bar that can be met by showing that a tool is as accurate as a radiologist in finding a defined abnormality. However, in most cases the studies showing that using the AI in addition to the radiologist improves patient outcomes haven’t been done. What does that mean for those companies? Insurance companies are not paying for them, and if patients want AI to assist the radiologist in reading their scans, they need to pay out of pocket. Most are opting not to, and these tools have seen limited adoption.
There are a number of other barriers, such as medicolegal concerns among providers, lack of staff training in using these tools, lack of IT resources to implement and monitor them, an unclear regulatory framework, cost, and more. However, in the last 18 months, we have seen brisk adoption of some use cases, such as ambient documentation, co-pilot functions within the EHRs, and, to a lesser extent, autonomous coding. All of these are clinical workflow and administrative use cases. There is a reason for this: these are lower-risk use cases that don’t involve clinical decisions, and none is fully autonomous. Doctors review the notes generated by ambient AI documentation tools and review any codes that are automatically created, and chart summaries are intended to save them time reading a thick patient record, though that information is often validated by the patient. It has been reported that chart summaries contain numerous errors, necessitating physicians to review and verify the information, which reduces efficiency. There’s also emerging early evidence of ROI for these tools. The Peterson Health Technology Institute published a study that estimated lower burnout and cognitive load for physicians from the use of ambient AI documentation, but the financial ROI to health systems is not yet clear (Figure 1).
Figure 1 (Source: Peterson Health Technology Institute)
As for clinical AI tools, it doesn’t look like the needed studies are being done by companies, and therefore adoption remains low. These studies take time and money, and most companies think they can magically drive adoption of their products without documenting clinical benefits. An RSNA study found that 81% of AI models dropped in performance when tested on external datasets. For nearly half, it was a noticeable drop, and for a quarter, it was significant. After these tools are approved, there’s no standard way to monitor how they perform across different scanners, hospitals, or patient groups. In a recent interview with Healthcare IT News, Pelu Tran, CEO and cofounder of Ferrum Health, opined that companies are not doing real-world studies to show that their tools perform as expected in that messy environment and result in improved patient outcomes. He calls on buyers to demand solid evidence of clinical outcome improvement or financial ROI.
About the author
Dr. Ronald Razmi is the author of “AI Doctor: The Rise of Artificial Intelligence in Healthcare” (Wiley, 2024), a former cardiologist, McKinsey consultant, and CEO of the digital health company Acupera. He completed his medical training at the Mayo Clinic and holds an MBA from Northwestern University’s Kellogg School of Management.