The narrative around AI and software has settled into a simple story: AI will flatten product differentiation, compress margins, and turn enterprise applications into thin wrappers over shared models.
That story is wrong for a large category of software. And the growing tendency to treat all enterprise software as equally exposed to AI substitution misses a distinction that operations teams already understand.
Not All Software Faces the Same Exposure
AI does compress value in certain kinds of products. The pattern is recognizable:
- generalist knowledge domains where the model already knows enough
- lightweight workflows that a prompt chain can replicate
- low switching costs because the product holds little operational state
- shallow integration into the systems where consequential work happens
If the product is mostly an interface over common information work, AI can collapse a large part of the value stack. That is real, and teams building in those categories should be honest about it.
But that pattern does not describe what we build for.
The operations teams we work with do not run on lightweight workflows. They run on consequence-heavy processes where the cost of a missed step, an unauthorized action, or a lost record is concrete and measurable: a payment reversal processed without independent review, a refund override approved by the same person who requested it, a reprocessing action executed outside the case with the evidence reconstructed from chat messages after the fact.
A smarter model does not solve those problems. A system that governs how work moves from intake to action to proof does.
AI Makes the Control Layer More Important, Not Less
AI is strong at reading, summarizing, classifying, and proposing next steps. Those capabilities are genuinely useful in operational triage. They help teams surface the right work faster and reduce the time spent on context assembly.
But a recommendation only helps if the surrounding workflow can absorb it safely.
Without a control layer, better AI just creates faster fragmentation:
- The model suggests an action.
- The operator copies context into another tool.
- The downstream system is changed outside the case record.
- Someone reassembles the evidence later from notes, screenshots, and memory.
That is not AI-enabled operations. That is the same broken process running at higher throughput.
The more capable the model becomes, the more pressure it places on everything around it: who is allowed to act, which systems need to change, what approval path applies, and whether the outcome is recorded in a way that survives scrutiny.
Those are control questions. And they are exactly where Latch operates.
Where This Shows Up in Practice
This is not an abstract argument. It maps directly to the workflows teams run inside Latch every day.
Unified Triage with AI Assistance
Issues arrive from email, forms, APIs, and internal alerts. AI helps classify and prioritize incoming work, but the triage model itself is what creates operational consistency. One queue, one status model, one set of ownership rules. AI makes the queue smarter. The controlled triage model is what makes it reliable.
Without that structure, AI classification just produces better-sorted chaos across the same fragmented set of inboxes and side channels.
Finance Controls and Controlled Execution
A finance team handling payment reversals, write-off exceptions, or vendor changes needs more than a recommendation engine. They need four-eyes control: one person prepares, a second person reviews independently before money moves.
AI can surface the relevant case context, flag anomalies, and suggest the appropriate action. But the maker-checker boundary, the permission policy, and the denied-attempt visibility cannot live inside the model. They have to live in the workflow.
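The maker-checker boundary described above can be sketched in a few lines. This is a hypothetical illustration (the class and method names are invented, not Latch's API): the invariant is simply that execution is impossible without an approval, and the approver can never be the preparer.

```python
class ApprovalError(Exception):
    pass


class ReversalRequest:
    """A money-moving action that requires independent review (four-eyes)."""

    def __init__(self, case_id: str, amount: float, maker: str):
        self.case_id = case_id
        self.amount = amount
        self.maker = maker        # who prepared the action
        self.checker = None       # who independently approved it
        self.executed = False

    def approve(self, checker: str) -> None:
        # The core rule: the reviewer must not be the preparer.
        if checker == self.maker:
            raise ApprovalError("checker must be independent of maker")
        self.checker = checker

    def execute(self) -> str:
        # Execution is blocked until an independent approval exists.
        if self.checker is None:
            raise ApprovalError("no independent approval on record")
        self.executed = True
        return (f"reversal {self.case_id} executed "
                f"(maker={self.maker}, checker={self.checker})")
```

Note where the rule lives: in the workflow object, not in any model prompt. A smarter recommendation engine changes nothing about this boundary.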
Latch keeps AI guidance and controlled execution on the same case record, so the recommendation and the plugin action stay connected instead of drifting into separate tools.
External Actions Through Plugins
When a case requires a downstream system change, whether that is a Stripe refund, a core banking adjustment, or an internal API call, the action runs through a plugin inside the case. Role controls and permission policy apply before execution. The request, the response, and the outcome are preserved in the case timeline.
AI can help decide which action to recommend. But the execution boundary, the approval check, and the audit trail are the system's job, not the model's.
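A rough sketch of that execution boundary, under assumed names (`POLICY`, `run_plugin`, the action identifiers) that are illustrative only: policy is checked before the downstream call, and both the request and the outcome, including denied attempts, land on the case timeline.

```python
import datetime


class PermissionDenied(Exception):
    pass


# Hypothetical role policy: which roles may run which plugin actions.
POLICY = {
    "stripe.refund": {"finance_admin"},
    "core_banking.adjustment": {"finance_admin", "ops_lead"},
}


def run_plugin(case_timeline: list, action: str, payload: dict,
               actor: str, roles: set, execute) -> dict:
    """Check policy, execute, and record everything on the case timeline."""
    entry = {
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "request": payload,
    }
    if not (roles & POLICY.get(action, set())):
        # Denied attempts are preserved too: they are part of the evidence.
        entry["outcome"] = "denied"
        case_timeline.append(entry)
        raise PermissionDenied(f"{actor} may not run {action}")
    response = execute(payload)   # the downstream call (Stripe, core API, ...)
    entry["response"] = response
    entry["outcome"] = "executed"
    case_timeline.append(entry)
    return response
```

The model can propose `action` and `payload`; it never reaches `execute` except through the permission check, and nothing it does escapes the timeline.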
Audit Trails That Survive Real Questions
Every operational workflow eventually faces the question: what happened, who decided, and why?
AI can help generate case summaries. But the immutable record of intake, triage decisions, approval chains, executed actions, and denied attempts is what makes the answer defensible. That record has to be built into the workflow architecture, not bolted on afterward.
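One common way to build immutability into the record itself, sketched here as an assumption rather than a description of Latch's internals, is a hash-chained append-only log: each entry commits to the one before it, so any after-the-fact edit breaks verification.

```python
import hashlib
import json


class AuditLog:
    """Append-only record; each entry is chained to the previous one
    so after-the-fact edits are detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self._entries = []

    def append(self, event: dict) -> None:
        prev_hash = self._entries[-1]["hash"] if self._entries else self.GENESIS
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self._entries.append({"event": event, "prev": prev_hash, "hash": digest})

    def verify(self) -> bool:
        # Recompute the chain; any tampered event or reordered entry fails.
        prev = self.GENESIS
        for e in self._entries:
            body = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A summary generated by a model can be regenerated and disputed; a chained record of intake, approvals, executed actions, and denied attempts cannot be quietly rewritten.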
The Structural Difference
Software most exposed to AI compression tends to sit between the user and general information. The product is the interface, and when a model can replicate the interface, the product loses its reason to exist.
Software built around operational control sits between the team and consequential action. The product is the controlled pathway: intake, triage, human review, constrained execution, cross-system orchestration, and evidence capture. AI does not replace that pathway. It feeds into it.
That is a structurally different kind of software. And it is the kind we have built Latch around.
For teams running high-stakes operations, AI is not the threat. The threat is AI without a control layer: model output that never connects to the approval path, downstream actions fired outside the case, decision trails that require manual reconstruction. When the system around the AI can absorb its output, enforce the right permissions, orchestrate cross-system execution, and preserve the record, AI makes the platform more valuable. When it cannot, AI just accelerates the same fragmentation that caused the control failures in the first place.
The Core Point
AI will compress shallow, interchangeable software. That repricing is already happening, and in many cases it is justified.
But the better AI gets at recommending, the more organizations need a system that controls what happens next. Software built around operational depth, controlled execution, and embedded workflow memory is not headed for compression. It is moving into a more central role.
Latch exists for that part of the workflow. One place to take an issue from intake to action to proof, with AI working inside the process rather than running it unchecked.
As AI capabilities improve, that architecture does not become less necessary. It becomes the thing teams cannot operate without.