Latch Journal

The Runtime Control Layer: What This Category of Software Is and Why It Exists

A new category of operational software is forming between governance platforms and ticketing systems. It combines AI-assisted triage, controlled execution, and a case-linked audit trail in one operational surface.


The Problem Every Growing Team Hits

A five-person team can run approvals in Slack, track refunds in a spreadsheet, and keep the story in someone's head. That works until it does not — until a customer asks "who approved that refund?" and nobody can answer without digging through three systems.

The problem gets worse as teams add AI to triage, use plugins to trigger actions in Stripe or an ERP, or hire a second shift that was not in the original Slack thread. At some point the team needs one place where the work happens, the approvals are enforced, and the evidence survives.

Most teams try to solve this with one of three tools. Each covers part of the problem. None covers all of it.

Three Common Approaches and Where They Fall Short

Approach 1: A ticketing system (Zendesk, Jira, Freshdesk). Ticketing systems are the operational workhorse — intake, routing, queues, human workflows. But they have no approval steps on sensitive actions. They do not enforce which operators can trigger which downstream changes. The audit trail lives in text comments. The operator writes "processed refund" in a ticket note, but there is no structured evidence of what was approved, what was executed, or what Stripe returned. The work happens. The evidence is scattered.

Approach 2: A governance or compliance platform. These systems manage policy, risk registries, and compliance reporting. They operate above the work. They can tell you the system is approved, the operator is credentialed, and the policy is active. They cannot tell you what the AI recommended on a specific case, what the operator chose, or what the downstream system returned. For teams that do not have a governance platform yet (most startups), this is not even on the radar — but the underlying need for provable decisions is.

Approach 3: An AI copilot. Copilots excel at what they do — generating summaries, classifying work, suggesting next steps. But they have no execution boundary. There is no gate between the AI's recommendation and the operator's action. There is no record of what the AI actually recommended versus what the operator actually did versus what was blocked versus what was approved. If the operator acts on a recommendation that turns out to be wrong, there is no structured evidence of the decision point.

Each of these is a real product category. Each solves part of the problem. None of them, alone, solves the problem that emerges when decisions lead to sensitive actions that cross system boundaries and must be provable later — whether for a manager asking "what happened?" or an auditor asking the same question months later.

A category of operational software is forming at this intersection. This post defines what the category is, what it is not, and why it exists.

What This Category Is

The runtime control layer combines five capabilities into a single operational surface.

First: Unified intake that preserves context across channels. Cases arrive from email, tickets, API signals, forms, or manual entries. The system consolidates them into a single operational record where context travels with the case through handoffs, reassignments, and escalations. The record is the case — not a pointer to a case in another system. Context does not get lost in a handoff because there is nowhere else for the case to live.

Second: AI recommendation with policy-bounded execution. AI assists with classification, prioritization, summarization, and suggested next steps. But AI outputs are bounded by policy. Some recommendations are advisory only. Some require human review before any action can proceed. Some are blocked entirely based on case type, risk tier, or operator role. The boundary between "AI suggests" and "operator acts" is explicit and enforceable, not implicit and aspirational.
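An explicit, enforceable boundary can be as simple as a policy function that every AI suggestion passes through before an operator can act on it. The action names, risk tiers, and roles below are invented for the sketch:

```python
# Outcomes for an AI-suggested action under policy (illustrative only).
ADVISORY, NEEDS_REVIEW, BLOCKED = "advisory", "needs_review", "blocked"


def bound_recommendation(action: str, risk_tier: str, operator_role: str) -> str:
    """Decide how far an AI recommendation may go before a human intervenes."""
    if action == "refund" and risk_tier == "high":
        # High-risk refunds: blocked outright unless a manager reviews.
        return NEEDS_REVIEW if operator_role == "manager" else BLOCKED
    if action in ("refund", "entitlement_change"):
        return NEEDS_REVIEW   # human must approve before execution proceeds
    return ADVISORY            # summaries, tags, priorities: suggestion only


print(bound_recommendation("summarize", "low", "agent"))   # advisory
print(bound_recommendation("refund", "low", "agent"))      # needs_review
print(bound_recommendation("refund", "high", "agent"))     # blocked
```

The point of the sketch is that the boundary is code evaluated at the moment of the suggestion, not a guideline the operator is trusted to remember.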

Third: Approval and role enforcement attached to the case, not a separate system. Approval logic lives on the case workflow itself. The operator sees what they are authorized to do. Actions that require additional approval are gated at the point of execution. Denied and blocked actions are recorded as evidence, not swallowed silently in a log somewhere else.
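Gating at the point of execution, with denials preserved, might look like the following sketch. The role table and event shapes are hypothetical:

```python
# Hypothetical role table: which actions each role may execute directly.
ROLE_CAN_EXECUTE = {
    "agent": {"add_note"},
    "manager": {"add_note", "refund"},
}


def attempt_action(case_events: list, operator: str, role: str, action: str) -> bool:
    """Gate an action at the point of execution and record the outcome
    on the case, whether allowed or denied."""
    allowed = action in ROLE_CAN_EXECUTE.get(role, set())
    case_events.append(
        {
            "event": "action_allowed" if allowed else "action_denied",
            "operator": operator,
            "action": action,
        }
    )
    return allowed


events: list = []
attempt_action(events, "dana", "agent", "add_note")   # allowed
attempt_action(events, "dana", "agent", "refund")     # denied, but recorded
```

Note that the denied attempt produces an event just like the allowed one — the evidence of what was blocked is part of the case, not a line in some other system's access log.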

Fourth: External action execution with result capture. The operator can trigger sensitive downstream actions — payment reprocessing, entitlement changes, service modifications, escalation workflows — from within the case. The system captures what was requested, what was authorized, what actually executed, and what the downstream system returned. The action result writes back into the case timeline. It does not disappear into a separate system's logs.
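The request/result round trip described above can be sketched as follows. `charge_gateway` is a stub standing in for a real downstream API (Stripe, an ERP, an internal service); everything here is invented for illustration:

```python
def charge_gateway(request: dict) -> dict:
    """Stub downstream system; a real integration would make an API call."""
    return {"status": "succeeded", "refund_id": "re_123", "amount": request["amount"]}


def execute_refund(timeline: list, operator: str, amount: int) -> dict:
    """Trigger a downstream action from inside the case and write the
    request and the response back onto the case timeline."""
    request = {"action": "refund", "amount": amount, "requested_by": operator}
    timeline.append({"event": "action_requested", **request})
    response = charge_gateway(request)          # the external action itself
    timeline.append({"event": "action_result",  # result writes back to the case
                     "response": response})
    return response


timeline: list = []
result = execute_refund(timeline, "bob", 2500)
```

Both the request and the downstream system's actual response land on the same timeline, so "what was asked for" and "what really happened" can never drift apart in separate systems.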

Fifth: Immutable, case-linked audit trail. The complete operational history stays on the case record: intake, AI recommendations, operator decisions, approval events, blocked attempts, external action results, status changes. The trail is append-only. It is not reconstructed after the fact from screenshots, chat messages, or operator memory.
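An append-only trail can be sketched with a structure that exposes no edit or delete operations. The hash chaining below adds simple tamper evidence; a production system would use stronger guarantees, and all names here are hypothetical:

```python
import hashlib
import json


class AuditTrail:
    """Append-only event log: events can be added and read, never
    edited or removed. Each event's hash chains to the previous one."""

    def __init__(self) -> None:
        self._events: list[dict] = []
        self._last_hash = "genesis"

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True) + self._last_hash
        self._last_hash = hashlib.sha256(payload.encode()).hexdigest()
        self._events.append({**event, "hash": self._last_hash})

    def events(self) -> tuple:
        return tuple(self._events)  # read-only view; no mutation API


trail = AuditTrail()
trail.append({"event": "intake", "source": "email"})
trail.append({"event": "ai_recommendation", "action": "refund"})
```

Because each hash folds in its predecessor, rewriting an earlier event would break every hash after it — the trail is reconstructed from nothing; it simply accumulates.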

The defining characteristic is that the case record is both where the work happens and where the evidence lives. These are not two systems connected by an integration. They are the same system.

This is what separates the category from its neighbors. Governance platforms manage policy above the work. Ticketing systems manage cases without governing the actions. AI copilots recommend without bounding execution. The runtime control layer handles intake, governance, execution, and proof on one record.

What It Is Not

Four adjacent categories overlap with the runtime control layer but are fundamentally different things.

It is not a GRC platform. Governance, Risk, and Compliance platforms manage AI registries, policy libraries, risk assessments, and compliance reporting. They sit above the operational layer. They answer questions like: "Is this AI system approved?" and "Who is the risk owner?" and "When was the last review?" They do not handle cases, execute actions, or preserve runtime evidence. GRC platforms map to ISO 42001 Clause 8, NIST AI RMF Govern and Map functions, and EU AI Act Articles 9 and 10 conformity assessment requirements. A runtime control layer needs a GRC platform above it. But the GRC platform cannot substitute for the runtime layer.

It is not a workflow automation tool. Workflow automation platforms, iPaaS systems, and orchestration engines can trigger actions across systems, route work based on rules, and chain steps together. But they do not enforce approval boundaries at the point of execution. They do not attach role-based access to specific case actions. They do not preserve denied paths as evidence. They do not keep the audit trail on the case. A workflow automation tool might sit behind the runtime control layer as an execution engine. But the engine does not provide the governance surface.

It is not an AI agent framework. AI agent frameworks let developers build systems where AI reasons, plans, and executes tool calls autonomously or semi-autonomously. They are powerful for building AI capabilities. But they are not designed to preserve case-level evidence of each decision, enforce human approval gates on sensitive actions, or produce audit trails that survive regulatory scrutiny. An AI agent framework might power the AI inside a runtime control layer. But the framework alone is not the layer.

It is not a ticketing system with plugins. Traditional case management and ticketing systems handle intake, routing, assignment, and status tracking. Some support plugins or integrations that extend functionality. But the plugin model typically pushes execution outside the case. The operator clicks a button, switches to another system, performs the action, switches back, and types a note. The audit trail is whatever the operator remembered to write down. A runtime control layer keeps execution inside the case, with structured evidence capture, not free-text notes about what happened elsewhere.

How Adjacent Categories Relate

The runtime control layer does not exist in isolation. It sits within a broader operational and governance stack.

Above: Governance and compliance platforms. These manage the approval control plane — AI registries, risk assessments, policy libraries, release packets, review cadences. They map to ISO 42001 Clause 8, NIST AI RMF Govern and Map functions, and EU AI Act Articles 9 and 10 conformity assessment requirements. The NIST AI RMF treats approval and monitoring as integrated components of a single lifecycle — which is architecturally correct. But in practice, most organizations implement them in separate systems. The runtime control layer depends on governance platforms for policy inputs and implements the policies they define. It does not replace them.

Beside: AI observability and monitoring tools. These watch model behavior at the system level — performance metrics, drift detection, anomaly detection, latency, error rates. They map to ISO 42001 Clause 9, NIST AI RMF Measure function, and EU AI Act Articles 12 and 26 logging and monitoring requirements. The runtime control layer generates evidence that observability tools can consume. But observability tools do not capture case-level decision context. They do not know why a specific operator made a specific choice on a specific case.

Below: Execution engines and external systems. These are the downstream systems where actions actually happen — payment processors, CRM systems, entitlement databases, communication platforms. The runtime control layer calls them, captures their responses, and writes the results back to the case. The execution engine does the work. The runtime control layer governs and records it.

Overlapping: MLOps and model management platforms. These manage the model lifecycle — training, versioning, deployment, monitoring. They answer "Which model version is in production?" and "How is it performing?" They overlap with the governance layer on model versioning and with the monitoring layer on performance tracking. The runtime control layer consumes model outputs but does not manage model lifecycles.

Who Needs This and Who Does Not

Not every workflow needs a runtime control layer.

You probably need this when:

  • Decisions lead to sensitive downstream actions — refunds, vendor changes, account modifications, escalations — and someone might ask "who approved that?" later.
  • Operators handle cases where AI recommends and humans decide, but the decision must be provable afterward.
  • Workflows cross system boundaries. The action happens in Stripe, an ERP, or an internal API — not in the same tool where the case lives.
  • The team is growing. What one person could track in their head now needs to be visible to a manager, a second reviewer, or a new hire.
  • AI capabilities are changing. New models, new prompts, new retrieval sources, new plugins appear regularly. A static configuration document cannot describe what the system did on a given day.

You probably do not need this when:

  • The AI system is a stable, batch pipeline with well-defined inputs and outputs — document classification, OCR extraction, data enrichment. Simpler approval and monitoring may be sufficient.
  • AI is purely advisory with no downstream action. An operator reads a summary, but all execution happens manually in other systems with their own audit trails.
  • The workflow is internal-only with no customer or financial exposure.
  • The team is small enough and the workflow is stable enough that the current process (even if it is Slack and spreadsheets) genuinely works.

This is not a maturity spectrum where every team eventually needs the full stack. Some workflows genuinely do not require case-level control. The category exists because some workflows genuinely do — and the combination of sensitive actions, multiple systems, and the need to prove what happened creates a problem that no single adjacent tool solves alone.

Where Latch Fits

Latch is one implementation of this category.

It maps to the five core capabilities. Unified triage consolidates email, tickets, and operational signals into a single case record. AI recommendations are bounded by policy — advisory, reviewable, or blocked depending on workflow configuration. Approval and role enforcement attach to the case: operators see what they can do, and actions require appropriate authorization. External action execution happens from inside the case, with structured capture of what was requested, authorized, executed, and returned. The audit trail stays on the case record — append-only, case-linked, surviving after the action completes.

Latch does not manage AI model inventories, run conformity assessments, store enterprise risk treatment documents, or provide system-level observability dashboards. Those responsibilities belong to the governance, compliance, and monitoring layers that sit above and beside it.

The point is not that Latch is the only way to build this layer. The point is that this layer needs to exist. Teams who try to fill it with only a governance platform, only a ticketing system, or only an AI copilot will find the gaps when an auditor, a manager, or a regulator asks what actually happened on a specific case.
