AI Removes Blind Experimentation in Pharma, Not Experiments

Bayer’s CEO says every new drug now starts on computers. The pipeline tells a narrower story: AI has shifted discovery upstream, but lab validation still defines outcomes.
Speaking at the Semafor World Economy event last week, Bayer CEO Bill Anderson said, “Every new medicine is now designed on computers.” The statement invites a broader reading: that pharma is moving away from wet lab research altogether. It is not. Experimental validation remains part of every stage of drug development.
AI has changed how drug discovery begins. It has not changed what is required to complete it. Early-stage discovery is increasingly computational, but validation, clinical testing, and approval still depend on physical experimentation.
Developing a new drug still takes 10-15 years and costs around $2.6 billion when failures are included. High failure rates across the pipeline make early-stage efficiency critical. AI is being adopted to reduce wasted experimentation at the start of the process, not to eliminate experimentation entirely.
Drug discovery now starts in silico
Drug discovery has historically relied on large-scale wet lab screening. Researchers tested thousands of chemical compounds against biological targets, with high failure rates and long timelines.
That starting point has shifted.
Companies now use AI systems to identify disease targets, generate candidate molecules, and predict how those molecules will behave. AI models can analyze large biological datasets, simulate molecular interactions, and generate new compounds before anything is synthesized in a lab.
At Insilico Medicine, an AI-designed drug candidate (INS018_055) moved from discovery to clinical trials in under 30 months. At Schrödinger, physics-based simulations are used to model molecular interactions before compounds are synthesized, reducing the number of candidates that need to be tested in the lab.
Anderson described this shift directly, noting Bayer moved away from screening large libraries of compounds toward computational design. That change reflects a broader industry pattern.
The shift is also reflected in capital allocation. Large pharmaceutical companies are committing billions of dollars to AI-driven discovery partnerships, focused primarily on generating preclinical candidates rather than replacing downstream development.
Drug discovery now begins with computation. Lab work is no longer the entry point into the pipeline.
The market is scaling around this shift. The AI drug discovery sector is growing rapidly, driven by the need to shorten R&D timelines and improve early-stage decision-making.
Candidates still need to be synthesized and tested before they can progress.
Experimentation has moved downstream, not disappeared
The role of experimentation has changed, but it has not been removed.
Experiments are now fewer, more targeted, and positioned later in the pipeline. AI systems narrow the field before any physical testing begins.
This shift changes the economics of R&D. AI is used to select experiments with a higher probability of success, reducing the number of low-value tests. Even modest improvements in early-stage efficiency can translate into significant cost savings across the full development cycle.
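The economics here can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: the hit rates and per-assay costs are assumptions, not figures from the article or from any company, but they show how a better-selected, smaller set of experiments can lower the expected cost of finding one viable candidate.

```python
# Illustrative expected-cost model (all numbers are assumptions,
# not figures from the article): compare the expected spend to reach
# one viable hit with broad screening vs. AI-triaged screening.

def expected_screening_cost(hit_rate: float, cost_per_test: float) -> float:
    """Expected spend to find one hit, assuming independent tests."""
    return cost_per_test / hit_rate

# Traditional broad screening: low hit rate, cheap individual assays.
baseline = expected_screening_cost(hit_rate=0.001, cost_per_test=1_000)

# AI-triaged screening: fewer, costlier, better-chosen experiments.
triaged = expected_screening_cost(hit_rate=0.02, cost_per_test=2_000)

print(f"baseline:  ${baseline:,.0f}")               # $1,000,000
print(f"triaged:   ${triaged:,.0f}")                # $100,000
print(f"reduction: {1 - triaged / baseline:.0%}")   # 90%
```

Under these assumed numbers, a 20x better hit rate outweighs a 2x higher per-test cost, which is the basic arithmetic behind "select experiments with a higher probability of success."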
At Recursion Pharmaceuticals, the model combines computation with large-scale automated experimentation. The company generates large volumes of biological data through robotic labs and feeds that data back into its models.
This creates a closed-loop system: AI generates hypotheses, experiments test them, and results refine the models. The lab remains central to the process.
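The closed loop described above can be sketched in a few lines. Everything in this toy is a stand-in (the "model," the "lab," and the hidden optimum are invented for illustration and have nothing to do with Recursion's actual systems), but the control flow is the point: propose, test, feed results back, repeat.

```python
import random

# Toy sketch of a closed-loop discover-test-refine cycle. All
# components are hypothetical stand-ins, not real APIs or real biology.

random.seed(0)

def model_propose(knowledge: dict, n: int = 5) -> list:
    """'AI' step: propose new candidates near the best one seen so far."""
    best = max(knowledge, default=50, key=lambda c: knowledge.get(c, 0))
    return [max(0, min(99, best + random.randint(-10, 10))) for _ in range(n)]

def lab_test(candidate: int) -> float:
    """'Wet lab' step: measure activity (hidden optimum at 70)."""
    return 1.0 - abs(candidate - 70) / 100

knowledge = {}  # accumulated experimental results
for cycle in range(10):
    for c in model_propose(knowledge):
        knowledge[c] = lab_test(c)  # only an experiment resolves the uncertainty

best = max(knowledge, key=knowledge.get)
print(f"best candidate after 10 cycles: {best}")
```

The design point is that the model never learns anything except through the lab step: computation narrows where to look, but the measurement is what updates the knowledge base.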
Industry analysis reflects the same structure. The OECD states that AI improves how experiments are selected, but “meticulous experiments… will always be needed.”
The constraint is biological, not technical. Complex outcomes such as toxicity, long-term effects, and system-wide responses cannot be reliably predicted using current models. These uncertainties require physical validation before any drug can advance.
Wet labs have shifted from discovery engines to validation layers. They remain embedded in every step where biological uncertainty needs to be resolved.
Clinical trials remain the hard boundary
The strongest constraint on full digitization appears in clinical development.
No AI-discovered drug has received FDA approval as of 2026. Candidates generated using AI are still moving through clinical pipelines, not completing them.
Clinical trials remain the largest cost center in drug development, with late-stage trials alone costing hundreds of millions of dollars. These stages require human testing to establish safety and efficacy, which cannot be replaced by simulation.
At Exscientia, an early AI-designed drug candidate entered clinical trials but did not progress after Phase I. At Insilico Medicine, its lead program remains in mid-stage trials. The transition from computational discovery to clinical validation remains uncertain and time-consuming.
Failure rates remain high once drugs reach human testing. Roughly 90% of drug candidates entering clinical trials do not receive approval. AI has not yet demonstrated a consistent ability to change outcomes at this stage.
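The roughly 90% figure is what compounding attrition looks like across phases. The per-phase success rates below are assumed for illustration (the article cites only the overall figure); multiplying them shows how even moderate per-phase success compounds into high overall failure.

```python
# Hypothetical per-phase success rates (assumptions for illustration;
# the article cites only the ~90% overall failure figure).
phase_success = {
    "Phase I": 0.52,
    "Phase II": 0.29,
    "Phase III": 0.58,
    "Approval": 0.90,
}

overall = 1.0
for phase, p in phase_success.items():
    overall *= p  # a candidate must clear every phase in sequence

print(f"overall success: {overall:.1%}")   # ~7.9%
print(f"overall failure: {1 - overall:.1%}")
```

Under these assumed rates, only about 8 in 100 candidates entering Phase I reach approval, consistent with the roughly 90% failure rate cited above.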
AI is used in clinical development to support trial design, patient recruitment, and data analysis. It does not replace human testing. Regulatory approval still requires evidence generated through trials in patients.
The structure of the pipeline has changed. The constraint has not.
Drug discovery has accelerated at the front end. Validation and approval continue to depend on physical experimentation and human data.