- Tomorrow's Dose
Edition 15 - Autonomy, Accountability, and the Algorithm: Who Decides When AI Reads Alone?
Discover why a hospital CEO's call to replace radiologists is more regulatory challenge than clinical breakthrough, explore how the EU just quietly rewrote the rulebook for AI medical devices, and learn how a Lancet Oncology pathology model could give clinics that can't afford Oncotype DX an alternative route to the same prognostic information.

NYC's largest public hospital CEO says AI is ready to replace radiologists — and the backlash has been fierce
EU Parliament votes to lift AI medical devices out of the AI Act's high-risk compliance category
AI reads H&E slides to predict breast cancer recurrence and chemotherapy benefit — validated on 13,781 patients
Featured follow of the week
Top posts of the week across social
Meet the editor
Want a featured article?

Specialty: Radiology // Sub-Specialty: AI // Body Site: Breast
1. NYC Health + Hospitals CEO: "We could replace a great deal of radiologists with AI at this moment"
Mitchell H. Katz MD, president and CEO of NYC Health + Hospitals — the largest public hospital system in the United States, serving 1.4 million patients across 11 hospitals — declared at a Crain's New York Business forum on March 25, 2026, that he is prepared to replace radiologists with AI for first reads, contingent solely on regulatory reform. Katz's stated rationale was cost reduction and expanded access, particularly in mammography, where he suggested AI could handle initial reads with radiologists reviewing any flagged abnormalities. The statement provoked immediate pushback from the radiology community: Mohammed Suhail MD, a San Diego-based radiologist, called Katz's comments "undeniable proof that confidently uninformed hospital administrators are a danger to patients." Separately, a Stanford preprint that circulated widely the same week found that leading AI chest X-ray tools can "hallucinate" findings on images they never actually saw — a phenomenon the authors call "epistemic mimicry," in which a model generates a fluent, apparently image-based reasoning trace while being anchored to no image at all.
Read Full Article
Paul’s Thoughts:
The framing of "we could do it now if regulators allowed" tells you something important: this is not a clinical argument, it's a financial one. At GMI, every AI tool we deploy goes through structured validation — we evaluate performance on our local patient population before it touches a clinical decision, and even then there is always a clinician in the loop. The gap between benchmark performance and real-world clinical safety is not a regulatory inconvenience to be cleared — it is the entire point. What concerns me more than one CEO's soundbite is the broader trend it reflects: hospital executives framing AI autonomy as a regulatory problem to be solved, rather than a safety threshold to be earned. Radiology is already managing a workforce crisis across the NHS and European healthcare systems; the right question isn't whether AI can replace radiologists, but whether we have the governance infrastructure to define precisely which tasks it can perform autonomously, in which patient populations, with what monitoring in place. That answer does not yet exist at system scale.
Timescale: Acute | 1 Year
Specialty: All // Sub-Specialty: AI // Body Site: All
2. EU Parliament votes to remove AI-enabled medical devices from AI Act high-risk obligations
On March 26, 2026, the European Parliament approved, by a large majority, a proposal to reclassify AI-enabled medical devices regulated under the MDR and IVDR from Annex I Section A to Section B of the EU AI Act — effectively removing them from the full suite of High-Risk AI System (HRAIS) compliance requirements. The change, part of the broader EU Digital Omnibus simplification package, means AI medical devices would continue to be governed by MDR/IVDR sector-specific regulation, while the European Commission would retain the power to re-impose specific AI Act requirements via delegated acts in the future. Several further legislative steps remain before the proposal becomes law, with adoption expected no earlier than summer 2026, and possibly not until 2027. Legal observers have noted the transition may create a period of regulatory uncertainty for manufacturers whose compliance programmes were built around the dual MDR/IVDR + AI Act model.
Read Full Article
Paul’s Thoughts:
On paper, this is a win for medtech manufacturers who have spent the last two years building compliance programmes for two overlapping regulatory regimes. The MDR and IVDR already require post-market surveillance, clinical evaluation, and quality management - so the argument that duplicating those requirements under the AI Act created administrative burden without clinical benefit has real merit. But concerns remain: the EU AI Act was specifically designed with AI-system-level requirements - transparency obligations, human oversight provisions, explainability standards - that MDR/IVDR does not currently mandate in the same way. Moving AI medical devices to Section B removes those requirements as defaults. At GMI we're in the process of evaluating several CE-marked AI tools, and the questions we ask - can we interrogate this model's outputs, does the vendor disclose training data characteristics, what is the post-deployment drift monitoring plan - are not questions that MDR approval alone answers. The Commission's ability to reimpose requirements via delegated acts is a safety valve, but it is not a substitute for having those standards baked into the approval process from the start.
Timescale: Acute | 1 Year
Specialty: Pathology // Sub-Specialty: AI // Body Site: Breast
3. AI reads standard H&E slides to predict breast cancer recurrence risk and chemotherapy benefit — replacing the need for expensive genomic testing
Researchers at the Technion–Israel Institute of Technology have developed a deep learning model that predicts both breast cancer recurrence risk and chemotherapy benefit directly from routine hematoxylin and eosin (H&E)-stained pathology slides, without the need for expensive molecular testing such as Oncotype DX. The model was pre-trained on 171,189 histopathology slides and fine-tuned using data from 8,000 patients enrolled in the TAILORx randomised clinical trial — one of the largest breast cancer genomic testing studies ever conducted — and externally validated across six independent cohorts comprising 13,781 patients worldwide. Performance was strong: AUC of 0.898 for identifying high genomic-risk disease, with the model classifying 45.6% of patients as low-risk, 42.4% as intermediate, and 12% as high-risk. The study was published in The Lancet Oncology and presented at ESMO, and is described as the first digital pathology AI model for recurrence-score estimation to be assessed retrospectively using data from a randomised clinical trial.
Read Full Article
Paul’s Thoughts:
Oncotype DX costs around $3,000–$4,000 per test and is not reimbursed in many healthcare systems outside the US and parts of Western Europe. For clinics across Cyprus, the Middle East, Africa, and large parts of Asia — where patients present with the same disease but have no access to genomic testing — a model that extracts equivalent prognostic signal from a slide that already exists is not a research curiosity, it is a potential step-change in access to personalised oncology. The validation on TAILORx data matters enormously: this isn't a retrospective convenience cohort, it's a model benchmarked against a randomised trial's ground truth. The clinical question this raises is prospective deployment — the external validation AUC of 0.898 in curated cohorts needs to hold up in routine clinical pathology laboratories with variable slide preparation, scanner heterogeneity, and pathologist workflow variation. That gap between curated dataset performance and real-world lab implementation is the next test, and it is exactly where tools like this tend to either earn their place or quietly fail. Who funds that validation in low-resource settings, and who owns the liability when nobody does?
Timescale: Early | 3 Years

Dr. Daniel Pinto dos Santos | Deputy Editor, European Radiology; Associate Professor of Radiology, University Hospital of Cologne & University Hospital of Frankfurt; Vice-President, European Society of Medical Imaging Informatics (EuSoMII).
Follow Daniel Pinto dos Santos for consistently rigorous, evidence-grounded analysis of AI in radiology — from model validation and post-market surveillance to the regulatory implications of the EU AI Act for clinical imaging workflows. This week's edition connects directly to his work as lead author on the ESR's recommendations for implementing the EU AI Act in radiology and his ongoing research into the governance frameworks that will determine whether clinical AI tools are safe and accountable at scale.

A round-up of some of the best posts we found online this week.
Was this email forwarded to you?
Our weekly email brings you the latest health trends and insights, combining top news and opinions into a straightforward, digestible format.

Want a featured article?
Have an insightful link or story about the future of medical health? Reach out below, and we may include it in a future release.