The dry lab side of most R&D organizations is producing more experimental candidates than the wet lab can absorb. AI models identify compound targets in hours. In silico screens narrow candidate pools overnight. Computational biology teams generate prioritized queues that should, in theory, flow directly into physical lab operations.
They don’t.
Those queues land in spreadsheets. They sit in email threads. They get buried in informal handoff documents that wait, sometimes weeks, for someone on the experimental side to pick them up. Not because the science is wrong, but because no operational system exists to move that output into an instrument queue with any structure or speed.
This is the reality behind the lab in the loop concept: computational predictions generate experimental work, experimental results refine computational models, and the cycle should accelerate R&D at a pace neither environment achieves alone.
The promise is a continuous, self-improving loop between in silico intelligence and physical validation. The reality is a loop that stalls at every operational seam between the two environments. Not at the point of intelligence. At the point of execution.
What the Loop Actually Means in Practice
The lab in the loop describes a specific operational model. A computational lab generates compound candidates, experimental parameters, or process hypotheses using AI models, molecular simulations, or statistical frameworks.
The most promising outputs transition into wet lab queues. They get matched to available instruments. They are assigned to qualified personnel. They produce results that return to the computational environment with enough operational context to be meaningful.
Each stage is a handoff. Each handoff is a potential failure point.
The dry lab produces an output. Someone translates it into a request. Someone else checks instrument availability. A third person schedules the work, possibly weeks out. The experiment runs. Results get recorded, usually scattered across systems of varying structure. Eventually, the data reaches the computational team, often stripped of the operational context that would make it fully interpretable.
The question the lab in the loop forces is not whether these handoffs exist. They always will. The question is whether the infrastructure connecting them is managed or improvised.
In most organizations today, it is improvised.
Where the Loop Actually Breaks
The loop does not fail at the AI layer. The computational platforms work. The models produce useful output. The failure is downstream, in the operational infrastructure that should translate computational output into experimental action and return structured results.
Two failure points account for the majority of the friction.
The Dry Lab Moves Faster Than the Wet Lab Can Respond
Computational research operates at a fundamentally different tempo than experimental research. A well-configured AI model can generate hundreds of candidates in the time it takes a wet lab team to prepare and execute a single assay. This is not a flaw in either environment. It is a structural asymmetry that creates a queue management problem at scale.
When dry lab output rate exceeds the wet lab’s capacity to respond in an organized way, prioritization collapses. The most computationally promising candidates are not the ones that get scheduled first. They are the ones that reach the right person’s inbox at the right moment. A coordination failure, not a scientific one.
At scale, this compounds fast. Organizations running multiple computational programs across therapeutic areas find that wet lab queues become undifferentiated backlogs. Priority gets determined by recency or proximity rather than scientific merit.
The Data Flow Back Is Broken Before It Begins
The second and less examined failure point is the return loop. For in silico models to improve, they require structured, accurate experimental results traceable to specific parameters, equipment states, and conditions.
In most organizations, that traceability does not exist at the point of origin. Results are reconstructed after the fact from lab notebooks, instrument log files, and manually compiled reports.
By the time data reaches the computational model, critical questions are unanswerable:
- Which instrument ran the assay?
- Was it recently calibrated?
- What was its utilization load that week?
- Was a service event performed between this run and the previous one?
These are not trivial questions for a model trying to distinguish meaningful experimental variance from operational noise. The loop closes. But it closes with degraded data. And degraded data produces degraded model refinements that compound over every iteration.
The Operational Infrastructure the Loop Requires
An operational infrastructure capable of supporting a functioning lab in the loop has four requirements. They need to be in place simultaneously, not sequentially.
Real-time resource visibility. When a dry lab output generates an experimental request, the system needs to answer immediately: which instruments are available, which are scheduled, which are in maintenance, and what realistic queue time looks like across all relevant sites. Without this, scheduling is reactive. It happens after conflicts, not before them.
Structured workflow handoffs. The movement of an in silico output into a wet lab execution queue should trigger a defined process that routes the request to the right team, matches it to available resources, and creates a traceable record both environments can reference. An email thread does not do this. A shared calendar does not do this. A ticket sitting in a general queue without routing logic does not do this.
Automatic operational context capture. Equipment identity, booking record, calibration status, and service history should become part of the experimental record as a byproduct of how the lab operates. Not as a manual documentation step performed after the work is done. Every piece of context assembled manually after the fact is a piece of context that might be incomplete, inaccurate, or missing entirely.
Structure at the source for return data. The experimental results feeding back into computational models are only as useful as the operational metadata accompanying them. Which instrument, under what conditions, with what service history, at what utilization level. That context transforms a data point into an interpretable, reproducible result.
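The last two requirements amount to one design rule: operational context travels with the result as structured data from the moment it is created. A minimal sketch, assuming hypothetical field names and IDs (nothing here reflects an actual newLab® or LIMS schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class InstrumentContext:
    # Pulled from the booking and service record at run time,
    # not reconstructed afterwards from notebooks and log files.
    instrument_id: str
    last_calibrated: date
    service_events_since_prev_run: int
    weekly_utilization_pct: float

@dataclass(frozen=True)
class ExperimentalResult:
    candidate_id: str
    readout: float
    context: InstrumentContext  # attached at the point of origin

def is_interpretable(result: ExperimentalResult, run_date: date,
                     max_calibration_age_days: int = 30) -> bool:
    """A model-side check: flag results whose operational context suggests
    the variance may be instrument noise rather than signal."""
    age_days = (run_date - result.context.last_calibrated).days
    return age_days <= max_calibration_age_days

ctx = InstrumentContext("HPLC-07", date(2026, 1, 12), 0, 64.0)
res = ExperimentalResult("CMPD-0409", readout=0.83, context=ctx)
assert is_interpretable(res, run_date=date(2026, 2, 2))
```

Without a context record like this, the calibration question from the previous section is unanswerable, and the model ingests the readout with no way to weight or exclude it.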
“The integration of in silico and in vivo workflows is not a theoretical next step. The computational side is already operational in most advanced R&D organizations. What remains unbuilt is the operational connective tissue between prediction and physical execution, the infrastructure that determines whether a hypothesis generated at nine in the morning is running on an instrument by noon or sitting in a queue for three weeks.”
— Pierre, newLab®
Lab in the Loop: Operational Gap vs. Functional State
The table below maps each layer of the loop against two states: the gap most organizations operate within today, and what changes when a managed operational infrastructure is in place.
| Loop Layer | Without Operational Layer | With Operational Layer |
| --- | --- | --- |
| Dry lab to wet lab handoff | Email, tickets, or informal requests with no routing logic | Automated workflow triggered by computational output, routed to the right team and resources |
| Resource availability | Checked manually, often after scheduling conflicts occur | Real-time visibility into instrument status, availability, and queue depth across all sites |
| Experiment prioritization | Based on who asks the loudest or who is in the room | Based on dry lab priority scores matched against live resource availability |
| Equipment context capture | Manually noted in lab notebooks after the fact | Automatically associated with the experimental record through the booking and workflow system |
| Result traceability | Reconstructed from scattered records, often incomplete | Full operational context attached to results at the point of origin |
| Return data to the computational model | Manually compiled, delayed, and structurally inconsistent | Structured operational metadata available immediately for model refinement |
| Cross-team visibility | Scientists in dry and wet labs operate with limited mutual awareness | Shared operational layer gives both environments visibility into pipeline status and capacity |
What newLab® Contributes to the Loop
For the lab in the loop to function at the pace modern R&D demands, the operational layer connecting computational outputs to experimental execution cannot be informal. It needs to be managed, visible, and automated, and it needs to sit within the enterprise IT environment that the dry lab and the wet lab already share. The platform that delivers it is newLab®.
Built natively on ServiceNow, newLab® manages the operational and workflow infrastructure of the wet lab side of the loop. It gives computational and experimental teams a shared operational layer where:
- Resource availability is visible in real time across sites
- Handoffs between environments happen through structured workflows
- Equipment context is captured automatically as part of the booking and service record
- Operational metadata that structured results depend on is produced as a byproduct of how the lab runs
It functions as smart lab software in the most literal sense: software that makes the operational connections between environments intelligent, traceable, and fast.
What newLab® does not do: it does not connect scientific instruments directly. It does not extract raw experimental data from lab equipment. It does not replace LIMS or ELN systems.
It works alongside those systems. Its role is the operational and workflow layer that makes the loop between dry lab intelligence and wet lab execution functional. That distinction matters to anyone who understands that every layer in this stack has a specific job.
Closing the Loop Is an Operational Decision, Not a Technology Decision
The technology on both sides of the loop is not the bottleneck. Computational platforms are mature. Instruments are capable and, in many organizations, underutilized. The bottleneck is the operational layer connecting them: scheduling infrastructure, workflow routing, resource visibility, and structured data handoff.
Organizations genuinely closing the loop in 2026 are not doing it by buying more AI tools or more instruments. They are doing it by building the workflow and scheduling infrastructure that allows dry lab outputs to reach wet lab execution queues in hours instead of weeks, and that ensures the results coming back carry enough operational context to be genuinely useful for model refinement.
That is not a technology upgrade. It is an operational decision about how the two most valuable environments in R&D are connected.
If your organization is operating with that gap, book a demo with newLab® and see how the operational layer of the loop works inside an enterprise R&D environment that runs on ServiceNow.
Frequently Asked Questions
What does “lab in the loop” mean in R&D?
Lab in the loop describes an R&D model where in silico predictions feed directly into wet lab operations, and experimental results flow back to refine the computational models in a continuous cycle. The speed of that cycle, from generated hypothesis to validated result and back, determines its value.
What is the difference between a dry lab and a wet lab in this context?
A dry lab handles computation, modeling, and AI-driven hypothesis generation. A wet lab handles physical experimentation and in vivo or in vitro validation. Their operational relationship, not their individual capabilities, determines whether the loop functions.
Why is connecting dry lab and wet lab outputs harder than it sounds?
The science on both sides works. The systems connecting them do not talk to each other in any structured way, so handoffs happen through email, tickets, and informal coordination that introduce delays, lose context, and break traceability.
What role does a computational lab play in a lab in the loop model?
The computational lab is the hypothesis-generating engine of the loop. Its value is realized only when wet lab operations can receive, schedule, execute, and return results from its outputs at a pace that matches computational production speed.
How does smart lab software support the connection between dry and wet lab environments?
Smart lab software provides the operational infrastructure connecting both environments: real-time resource visibility, structured workflow routing, automated equipment context capture, and the operational metadata that makes return data useful for model refinement. It does not replace scientific data systems. It makes the operational layer between them functional.