
Ginkgo Bioworks’ Reshma Shetty on Moving Biology Off the Bench

Autonomous infrastructure and AI agents change how experiments are designed and executed

“Right now, I would say 99% of science is done at the bench with individuals pipetting liquids around,” Reshma Shetty told R&D World. “More science can and should be done using autonomous labs, using AI agents to help design and analyze experiments.”

Ginkgo Bioworks and OpenAI published a bioRxiv preprint describing a closed-loop system in which GPT-5 designed and iterated cell-free protein synthesis experiments on Ginkgo’s automated cloud lab. According to the paper, the system ran roughly 36,000 experimental conditions across about 580 multi-well plates over six months, generating nearly 150,000 data points. The authors describe a validation schema that checked plate layout, controls, replication, reagent availability, and volume constraints before execution.
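
A rough illustration of what such a pre-execution check can involve is sketched below in Python, covering controls, replication, per-well volumes, and stock availability. The field names, thresholds, and rules are assumptions for illustration, not the schema described in the preprint.

    # Illustrative sketch of a pre-execution plate-design validator.
    # Field names and rules are assumptions, not Ginkgo's actual schema.
    from dataclasses import dataclass

    @dataclass
    class Well:
        reagents: dict            # reagent name -> volume in microliters
        is_control: bool = False

    def validate_plate(wells, stock_volumes, max_well_volume_ul=100.0,
                       min_controls=2, min_replicates=2):
        errors = []
        # Controls: require a minimum number of control wells on the plate.
        if sum(w.is_control for w in wells) < min_controls:
            errors.append("too few control wells")
        # Replication: each non-control condition should appear enough times.
        conditions = {}
        for w in wells:
            if not w.is_control:
                key = tuple(sorted(w.reagents.items()))
                conditions[key] = conditions.get(key, 0) + 1
        if any(n < min_replicates for n in conditions.values()):
            errors.append("some conditions are under-replicated")
        # Volume constraints: total liquid per well must fit the plate.
        for i, w in enumerate(wells):
            if sum(w.reagents.values()) > max_well_volume_ul:
                errors.append(f"well {i} exceeds {max_well_volume_ul} uL")
        # Reagent availability: total demand must not exceed stocks on deck.
        needed = {}
        for w in wells:
            for name, vol in w.reagents.items():
                needed[name] = needed.get(name, 0.0) + vol
        for name, vol in needed.items():
            if vol > stock_volumes.get(name, 0.0):
                errors.append(f"insufficient stock of {name}")
        return errors  # an empty list means the design may proceed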

The preprint reports a reduction in reaction component costs for superfolder green fluorescent protein from a previously published benchmark of $698 per gram to $422 per gram under comparable conditions. Ginkgo described the result in a company announcement outlining plans to commercialize the optimized reaction mix.
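
For context, the two reported benchmark figures imply a reduction of roughly 40 percent in reaction component cost:

    (698 − 422) / 698 ≈ 0.40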

Shetty described how she views the current state of lab automation and where she believes it is headed.

The Lab as Programmable Infrastructure

Shetty outlined three forms of lab automation. The first is “walk-up” automation: a scientist places a plate on a liquid handler, removes it, and manually moves it to an incubator or analytical instrument. The second is integrated automation, in which a central robotic arm moves samples between instruments in a fixed configuration designed for specific workflows. The third is modular automation, in which reconfigurable instruments allow workflows to change from run to run.

“The lab bench is science’s equivalent of the car: very flexible, you can change what experiment you do every day,” she said.

Integrated systems are designed for repeated workflows. Modular systems are designed to allow workflows to change. Ginkgo’s Reconfigurable Automation Carts form the hardware layer of that approach. Software encodes protocols and schedules runs. The preprint describes how the validation model blocked experimental designs that did not meet predefined constraints.
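
As a loose illustration of what encoding a protocol in software can mean, a workflow can be represented as ordered steps that a scheduler assigns to instruments. The Python sketch below is hypothetical; the step names, fields, and naive scheduling are assumptions, not Ginkgo's protocol format.

    # Hypothetical protocol expressed as data a scheduler can dispatch.
    # Step names and fields are illustrative, not Ginkgo's format.
    protocol = [
        {"step": "dispense_master_mix", "instrument": "liquid_handler", "minutes": 10},
        {"step": "add_dna_template",    "instrument": "liquid_handler", "minutes": 5},
        {"step": "incubate",            "instrument": "incubator",      "minutes": 240},
        {"step": "read_fluorescence",   "instrument": "plate_reader",   "minutes": 15},
    ]

    def schedule(protocol, start_minute=0):
        # Assign naive back-to-back start times; a real scheduler would also
        # handle instrument contention across many plates running at once.
        plan, t = [], start_minute
        for step in protocol:
            plan.append({**step, "start": t})
            t += step["minutes"]
        return plan

    for item in schedule(protocol):
        print(item["start"], item["instrument"], item["step"])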

“Where I think we realize the full potential of AI-driven science is autonomous labs,” Shetty said.

In the experiment detailed in the preprint, GPT-5 proposed experimental designs, selected reagent combinations, analyzed results, and determined subsequent experiments. The paper describes more than 36,000 executed conditions across six iterative cycles.
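
In outline, such a loop pairs a model that proposes conditions with a lab that executes them and feeds results back. The Python sketch below is a greatly simplified, hypothetical version of a design-test loop; the helper functions and parameter names (for example mg_mM and energy_mix_pct) are stand-ins, not code or settings from the preprint.

    # Hypothetical closed-loop optimization sketch; all helpers are stubs.
    import random

    def propose_designs(history, n=4):
        # Stand-in for a model call; a real loop would condition on history.
        return [{"mg_mM": random.uniform(2, 20),
                 "energy_mix_pct": random.uniform(10, 40)} for _ in range(n)]

    def validate(design):
        # Stand-in for pre-execution checks (simple bounds only).
        return 0 < design["mg_mM"] <= 20 and 0 < design["energy_mix_pct"] <= 50

    def execute(design):
        # Stand-in for running the condition on automated hardware and
        # returning a measured yield; here just a made-up score.
        return -abs(design["mg_mM"] - 8) - abs(design["energy_mix_pct"] - 25)

    history = []
    for cycle in range(6):                      # six iterative cycles
        designs = [d for d in propose_designs(history) if validate(d)]
        results = [(d, execute(d)) for d in designs]
        history.extend(results)                 # results inform the next round

    best_design, best_score = max(history, key=lambda pair: pair[1])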

Ginkgo markets its RAC hardware and cloud lab capabilities and sells cell-free protein synthesis reagents through its reagent platform. In its announcement accompanying the preprint, the company said the optimized reaction mix developed during the run would be commercialized, and that the experimental validation framework would be released as open-source software.

Under this model, the laboratory operates through encoded protocols and automated execution. The preprint documents sustained experimental volume over six months but does not include detailed capital cost accounting. The system requires robotic hardware, orchestration software, and staff to operate and maintain it.

The Scientist Moves Up a Layer

“You should think of it as a team, except instead of your team being a team of humans, it’s now a mixture of humans and AI agents,” Shetty said.

According to the preprint, GPT-5 handled experimental design, literature search, parameter selection, and data analysis across six cycles. The model evaluated prior results and proposed new conditions to test.

The paper also documents continued human involvement. Staff prepared reagents, loaded and unloaded plates, and monitored system performance. Early plates showed high variability traced to reagent stock concentrations, which the team corrected. Over the course of the run, the team also improved the DNA template and the cell lysate.

“On the human side, we also improved some reagent quality on both the DNA side and the lysate side. That’s when we were able to actually achieve the 40% improvement,” Shetty said.

The optimization targeted a single protein. When the optimized reaction composition was tested against twelve additional proteins, the preprint reports that only about half produced visible yield on SDS-PAGE.

“My assumption is that you get what you optimize for,” Shetty said. “Biologists have a saying: you get what you screen for. I think the same is true for AI-driven science.”

The gains were tied to the defined objective described in the paper. Broader generalization was not demonstrated in the reported experiments.

Shetty also addressed cost structure. “We expect more and more experiments to be run on autonomous labs where reagent and consumables costs dominate the cost of an experiment,” she said.

The preprint documents how a language model operated within a modular automated lab under validation constraints and human supervision. Hardware executed the experiments. Software encoded the protocols and iteration logic. Human intervention addressed chemistry quality and variability when required.