AI has changed how quickly teams can build workflow software.
That is mostly a good thing. A team can use ChatGPT or Gemini to sketch a plugin, connect a downstream system, or automate part of a repetitive operator task far faster than before.
But speed creates a tempting mistake.
Teams start treating every part of the workflow as equally safe to generate, revise, and improvise. They are not.
Some parts of the system can move fast. Some parts should be treated as safety-critical infrastructure.
Approval logic, authorization rules, and audit behavior belong in the second category.
The Risk Is Not "Using AI"
The problem is not that AI exists in the workflow.
The problem is what happens if the wrong part of the workflow becomes soft, unstable, or poorly reviewed.
If AI helps draft a plugin that summarizes a case, prepares a suggested refund amount, or proposes the next best customer offer, that can be useful.
If the same fast-moving layer also decides:
- who is allowed to act
- which actions need approval
- whether the requester and approver must be different people
- what gets logged
- which denied attempts are preserved
- what evidence survives after execution
then the control story gets weak very quickly.
That is because these are not convenience features. They are the rules that decide whether the workflow is safe, explainable, and defensible later.
The Core Is the Part That Must Stay Boring
In high-trust operations, the best control layer is usually the least exciting part of the system.
It should be boring in the best sense:
- predictable
- reviewed carefully
- deeply tested
- hard to bypass
- easy to explain
This is the layer that answers the questions a team will be asked after something sensitive happens:
- Who requested the action?
- Who had the right to run it?
- Did it require another approver?
- Was anyone blocked from doing it?
- What exactly happened?
- What proof still exists?
If that layer behaves differently every time a new plugin is added, the organization does not really have a control model. It has a moving target.
What Belongs in the Stable Core
The stable core is the part of the system that should stay consistent even when workflows, prompts, and plugins change around it.
In plain language, that core usually includes:
- identity and role checks
- permission boundaries
- approval thresholds
- maker-checker or four-eyes separation where needed
- denial and block handling
- execution history
- audit evidence retention
This is the part that says:
- this person can request, but not approve
- this amount can move fast, but that amount needs review
- this action is visible, but blocked for this role
- this case cannot skip the evidence requirement
- this execution must write a durable result back to the record
Those decisions should not vary because a prompt changed or because a plugin was written in a rush.
This is not the layer to vibe code on the fly.
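The rules in that list can be written as a small, explicit policy function that is reviewed and tested once, not regenerated per plugin. A sketch under assumed names; the role names and threshold are hypothetical, not real configuration:

```python
# Hypothetical core policy sketch. Role names and the threshold
# are illustrative assumptions, not any product's actual config.
APPROVAL_THRESHOLD = 100.00  # amounts above this need a second person


def can_execute(amount, requester, requester_roles, approver=None):
    """Return (allowed, reason). These rules stay fixed across plugins."""
    if "agent" not in requester_roles:
        return False, "requester lacks the agent role"
    if amount <= APPROVAL_THRESHOLD:
        return True, "below threshold, fast path"
    if approver is None:
        return False, "amount requires a second approver"
    if approver == requester:
        return False, "maker-checker: approver must differ from requester"
    return True, "approved by a second person"
```

Note that the maker-checker rule is one `if` statement here, which is exactly why it belongs in a carefully reviewed core rather than scattered across generated plugins.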
What Can Move Faster at the Edge
The edge of the system is different.
This is where fast iteration is often useful:
- plugins that connect to refund, offer, or CRM systems
- AI-generated summaries or suggestions
- case-specific input collection
- recommendation logic for likely next actions
- tenant-specific integrations
These pieces can and should evolve more quickly.
They are closer to the changing details of the business. They benefit from experimentation. They can improve as teams learn more.
But they should operate inside a clear contract.
The plugin can suggest. The plugin can collect. The plugin can execute an approved step. The plugin should not get to redefine who is authorized, whether approval is required, or what counts as an audit trail.
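One way to make that contract concrete is a deliberately narrow plugin interface: the plugin can propose a step and execute a step the core has already authorized, and nothing else. A hypothetical sketch; the class and method names are assumptions for illustration:

```python
from abc import ABC, abstractmethod


class Plugin(ABC):
    """Hypothetical plugin contract: plugins act only inside it."""

    @abstractmethod
    def suggest(self, case: dict) -> dict:
        """Propose a next step; the core decides whether it may run."""

    @abstractmethod
    def execute(self, approved_step: dict) -> dict:
        """Run a step the core has already authorized and will log."""


# Note what is absent: no method lets a plugin grant roles, waive
# approval, or write (or skip) the audit trail. Those live in the core.

class RefundPlugin(Plugin):
    def suggest(self, case):
        return {"action": "refund", "amount": case.get("amount", 0)}

    def execute(self, approved_step):
        # A real implementation would call the downstream refund
        # system here; stubbed for the sketch.
        return {"status": "done", "step": approved_step}


plugin = RefundPlugin()
```

The design choice is the absent methods: whatever a fast-moving or AI-generated plugin does inside `suggest` and `execute`, it has no handle on the authorization or audit machinery.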
Why This Separation Matters More With AI
Before AI-assisted development became common, teams still made risky workflow changes. They just moved more slowly.
Now the speed of change is much higher.
That creates leverage, but it also raises the cost of weak boundaries. A fast-moving team can produce a useful plugin in a day. It can also produce a subtle approval or logging flaw in a day.
That is why the architecture matters.
If the core control layer is stable, well-reviewed, and heavily tested, then teams can safely use AI to accelerate the adaptable edge:
- new plugin ideas
- new customer workflows
- new downstream integrations
- better suggestions for operators
If the core control layer is also changing casually, the whole system gets harder to trust.
Tested Control Logic Is What Makes Safe Speed Possible
This is the point teams often miss.
Careful core logic does not slow AI adoption down. It is what makes faster delivery possible without losing control.
When approval, auth, and audit behavior are deeply tested, the organization gains freedom everywhere else.
Teams can move faster on the edge because they are not renegotiating the safety model every time they ship.
That leads to a healthier operating pattern:
- keep the control boundary stable
- let plugins evolve quickly around that boundary
- review the small, risky surface with extra care
- preserve the same evidence model across every workflow
This is how you get both speed and accountability instead of trading one for the other.
A Simple Rule
If a new capability answers the question, "How should the work be done for this customer, queue, or integration?" it may belong in a plugin.
If it answers the question, "Who may act, who must approve, what gets blocked, and what evidence survives?" it belongs in the core.
That rule keeps the product honest.
It also keeps AI in the right role. ChatGPT or Gemini can help teams generate useful workflow behavior. They should not be allowed to casually redefine the control boundary that makes sensitive work safe in the first place.
Where Latch Fits
Latch is useful here because it keeps the control layer and the extension layer distinct.
The platform can keep approved roles, permission policy, denial visibility, execution history, and case-linked audit trails stable at the center, while teams add plugins around that core for refunds, offers, reprocessing, or other downstream work.
That gives operators flexibility without turning approval and audit behavior into a collection of one-off exceptions.