An AI-assisted plugin can draft a refund recommendation in an afternoon.
It should not be able to quietly redefine who is allowed to run the action, when approval is required, or what proof survives after the fact.
That is the distinction this article is about.
AI has made workflow software much faster to build. That is useful when the team is drafting plugins, integrations, summaries, or operator assistance.
It is dangerous when the same fast-moving layer is allowed to redefine the control boundary itself.
Who This Is For
- teams building AI-assisted approval or action workflows
- operators and architects deciding what belongs in a plugin versus the platform core
- leaders who need speed without turning approval, auth, and audit into a moving target
The Risk Is Not "Using AI"
The problem is not that AI exists in the workflow.
The problem is what happens if the wrong part of the workflow becomes soft, unstable, or poorly reviewed.
If AI helps draft a plugin that summarizes a case, prepares a suggested refund amount, or proposes the next best customer offer, that can be useful.
If the same fast-moving layer also decides:
- who is allowed to act
- which actions need approval
- whether the requester and approver must be different people
- what gets logged
- which denied attempts are preserved
- what evidence survives after execution
…then the control story gets weak very quickly.
That is because these are not convenience features. They are the rules that decide whether the workflow is safe, explainable, and defensible later.
What Must Stay Stable
Before the details, the practical rule is simple:
- plugins can suggest
- plugins can collect input
- plugins can execute an already-allowed step
- plugins should not redefine who is authorized, whether approval is required, or what counts as the audit trail
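One way to make that rule concrete is to give plugins an interface that simply has no authority methods. Here is a minimal sketch; all names are illustrative and not a real Latch API:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class Suggestion:
    """A drafted action the core must still gate."""
    action: str
    amount: float
    rationale: str


class Plugin(Protocol):
    """The extension contract: plugins propose and execute, nothing more."""
    def suggest(self, case_id: str) -> Suggestion: ...
    def collect_input(self, case_id: str) -> dict: ...
    def execute(self, approved_action_id: str) -> None: ...
    # Deliberately absent: authorize(), require_approval(), write_audit_log().
    # Those decisions live only in the core.


class RefundPlugin:
    """Hypothetical plugin: drafts a refund but cannot approve it."""
    def suggest(self, case_id: str) -> Suggestion:
        return Suggestion(action="refund", amount=25.0,
                          rationale=f"draft for {case_id}")
```

The point of the shape is what it omits: a plugin built against this contract can be rewritten daily without ever touching who may act or what gets logged.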
If you are evaluating a live workflow now, start with approval workflows in Latch or talk through the workflow directly.
The Core Is the Part That Must Stay Stable
This is the layer that answers the questions a team will be asked after something sensitive happens:
- Who requested the action?
- Who had the right to run it?
- Did it require another approver?
- Was anyone blocked from doing it?
- What exactly happened?
- What proof still exists?

Each of those answers needs to be tracked, not reconstructed from memory.
If that layer behaves differently every time a new plugin is added, the organization does not really have a control model. It has a moving target.
What Belongs in the Stable Core
The stable core is the part of the system that should stay consistent even when workflows, prompts, and plugins change around it.
In plain language, the core handles:
- Who you are and what you can do
- When approvals are required
- Enforcing rules and blocking forbidden actions
- Keeping a permanent record of what happened
This is the part that says:
- "You can request this, but someone else must approve it."
- "Small amounts are fast, but big amounts need review."
- "You cannot skip the required evidence."
- "This action must be permanently logged."
These rules need to be rock solid. They should not break just because an AI prompt changed or a plugin was built in a hurry.
This is not the place to guess or write code on the fly.
What Can Move Faster at the Edge
The edge of the system is different.
This is where fast iteration is often useful:
- plugins that connect to refund, offer, or CRM systems
- AI-generated summaries or suggestions
- case-specific input collection
- recommendation logic for likely next actions
- tenant-specific integrations
These pieces can and should evolve more quickly.
They are closer to the changing details of the business. They benefit from experimentation. They can improve as teams learn more.
But they should operate inside a clear contract.
The plugin can suggest. The plugin can collect. The plugin can execute an approved step. The plugin should not get to redefine who is authorized, whether approval is required, or what counts as an audit trail.
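One way to keep that contract honest is to make the core the only code path that can reach the side effect. In the sketch below (an illustrative stand-in, not a Latch API), a plugin can only hand the core an action id, and the core refuses anything without a recorded approval:

```python
class ApprovalRequired(Exception):
    """Raised when a plugin asks the core to run an unapproved step."""


# In a real system this would be the core's approval records;
# here it is a hypothetical stand-in.
APPROVED_ACTIONS = {"act-42"}


def execute_step(action_id: str, run) -> None:
    # Plugins never call `run` directly; they submit an action id
    # and the core decides whether it executes.
    if action_id not in APPROVED_ACTIONS:
        raise ApprovalRequired(f"{action_id} has no recorded approval")
    run(action_id)
```

With this shape, a hastily written plugin cannot "forget" the approval check, because the check does not live in the plugin at all.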
Why This Separation Matters More With AI
Before AI-assisted development became common, teams still made risky workflow changes. They just moved more slowly.
Now the speed of change is much higher.
That creates leverage, but it also raises the cost of weak boundaries. A fast-moving team can produce a useful plugin in a day. It can also produce a subtle approval or logging flaw in a day.
That is why the architecture matters.
If the core control layer is stable, well-reviewed, and heavily tested, then teams can safely use AI to accelerate the adaptable edge:
- new plugin ideas
- new customer workflows
- new downstream integrations
- better suggestions for operators
If the core control layer is also changing casually, the whole system gets harder to trust.
Tested Control Logic Is What Makes Safe Speed Possible
The point of a stable core is not to slow down the team. It is to make safe speed possible.
When approval logic, authorization rules, and audit behavior stay stable, the team can move faster everywhere else without weakening the workflow every time a new plugin appears.
That is the real architectural win: the fast-moving edge gets more useful because the control model underneath it does not keep changing.
This is the point teams often miss. When approval, auth, and audit behavior are deeply tested, the organization gains freedom everywhere else: teams can move faster at the edge because they are not renegotiating the safety model every time they ship.
That is a healthier operating pattern: you get both speed and accountability instead of trading one for the other.
A Simple Rule
If a new capability answers the question, "How should the work be done for this customer, queue, or integration?" it may belong in a plugin.
If it answers the question, "Who may act, who must approve, what gets blocked, and what evidence survives?" it belongs in the core.
That rule keeps the product honest.
It also keeps AI in the right role. ChatGPT or Gemini can help teams generate useful workflow behavior. They should not be allowed to casually redefine the control boundary that makes sensitive work safe in the first place.
Where Latch Fits
Latch is useful here because it keeps the control layer and the extension layer distinct.
The platform can keep approved roles, permission policy, denial visibility, execution history, and case-linked audit trails stable at the center, while teams add plugins around that core for refunds, offers, reprocessing, or other downstream work.
That gives operators flexibility without turning approval and audit behavior into a collection of one-off exceptions.