The research page (https://statewright.ai/research) mentions a patent and a "core engine":
> Provisional patent application filed: #64/054,240 (April 30, 2026). 35 claims covering state machine guardrail enforcement for LLM agent tool access. The core engine remains Apache 2.0 open source.
I'm not sure I understand what the "core engine" is if it's not the "state machine guardrail runtime", which is what the patent covers. Which parts are the open source parts, exactly?
I find the idea really interesting and was nodding along as I read what you wrote. It makes sense for both the human and the agent, and it seems like a really nice idea that'd help, but the patent kind of makes me want to run away and not look into it too deeply.
Thanks for digging deeper and I'm happy to clarify all three aspects:
Re: Reproducing the results: the engine, agent crate and demo TUI are all in the repo. If you have Ollama running with a 13B+ model, `task run:bugfix` reproduces the simple bugfix result end to end. What isn't published yet is the SWE-bench experiment harness (task selection, patch scoring, control runs). I need to get that out; I prioritized the simple end-to-end Claude Code plugin for the launch. The demo crate (crates/demo) contains a TUI which calls Ollama and runs the bugfix state machine interactively with code.
Re: Engine: The core engine (crates/engine/) is the pure Rust state machine evaluator. It's what Statewright runs on the backend: JSON in => transition decisions out. The agent crate (crates/agent/) builds on top of it to make it useful for LLMs. All of that is Apache 2.0 with no restrictions.
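To make that concrete, here's a toy sketch of the shape -- simplified, not the real engine API (the function and field names below are assumptions for the example), just the point that the decision is a pure function of workflow JSON plus current state, with no model in the loop:

```rust
// Toy illustration of "JSON in => transition decisions out".
// NOT the real crates/engine API: field names and function
// signature are assumptions made for this example.
use serde_json::{json, Value};

/// Pure decision: is `tool` allowed while the machine is in `state`?
fn tool_allowed(workflow: &Value, state: &str, tool: &str) -> bool {
    workflow["states"][state]["tools"]
        .as_array()
        .map(|tools| tools.iter().any(|t| t.as_str() == Some(tool)))
        .unwrap_or(false)
}

fn main() {
    let workflow = json!({
        "initial": "planning",
        "states": {
            "planning":     { "tools": ["Read", "Grep", "Glob"] },
            "implementing": { "tools": ["Read", "Edit", "Bash"] }
        }
    });

    // Same inputs, same answer, every time.
    assert!(!tool_allowed(&workflow, "planning", "Edit"));
    assert!(tool_allowed(&workflow, "implementing", "Edit"));
}
```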
Re: the Patent: The patent covers the method of using state machines to constrain LLM agent tool access at the protocol layer. It's defensive; it helps protect the managed service and the idea from being scooped by a larger company with more personnel and resources. It's not targeted against solo developers, self-hosters or researchers.
You'll find that the portions I've released under FSL 1.1 have explicit grants which do not restrict solo developers or single-team self-hosting. Code released this way becomes Apache 2 in exactly 3 years. This is not unlike what Sentry and MariaDB did. I am planning on releasing more portions as FSL 1.1; I just hadn't crossed that bridge yet, and honestly this thing seems to have gotten popular at the moment, so I thought I'd set the record straight a bit.
The not-quite-Apache-2 "Fair Source License, Version 1.1, ALv2 Future License" (https://github.com/getsentry/fsl.software/blob/main/FSL-1.1-...) includes the Apache 2 patent grant. That grants you conditional permission to use the software in ways that would, without the grant, infringe upon their patent. One of the conditions is that you may not make a claim against any party that the software infringes upon any patent, or else your patent grant is terminated.
Unfortunately, the license actually in the repo is not even a not-quite-Apache-2 license. It doesn't appear to be FSL-1.1-ALv2 at all: https://github.com/statewright/statewright/blob/main/plugins.... This notably does not include the patent grant, which makes it unclear whether use of the software would infringe upon the patent.
You're right, and I have just corrected this. The license in the repo now uses the canonical FSL-1.1-ALv2, based on the template from fsl.software, and includes the patent grant clause.
The omission wasn't intentional -- the patent grant wasn't on my radar when the original license text was committed. FSL licensing is very new territory for me and I duffed it slightly, now corrected.
I also just updated the https://statewright.ai/research page to accurately reflect the intent and mention the patent grant afforded under FSL-1.1-ALv2. Thanks again for calling my attention to this.
https://github.com/statewright/statewright/blob/main/Cargo.t...
Is that wrong?
I feel like caching should be mentioned in tradeoffs, right? If you change the tool list frequently, that's a cache bust. In long sessions that seems like it could significantly affect costs.
Great question... and there are two answers depending on what you were originally referring to:
re: Claude Code... we actually don't filter or modify the tool list, so all tools stay visible -- disallowed calls get blocked at execution time with an error message. No cache busts on transitions; the model sees the full tool set. The cost there is prompt caching dollars, not latency, I suppose.
re: The research (Rust agent + Ollama): the model only receives tool schemas for the current state's allowed tools. Ollama does have a KV cache reuse facility, so changing the tool list busts that cache. Depending on your workflow, that happens as many times as your states transition until completion; for simple workflows this is 3-5x. Within each state the tool list is stable and the cache operates normally. Presenting a handful of tools instead of dozens on every agent processing step reduces input tokens and decision complexity, which is where the measurable gains come from.
Both enforce the same constraints, just at different execution interfaces. The schema-level filtering in the research is the S-tier approach. Adding tools/list filtering to the MCP gateway would be beneficial if possible (it looks like we could only filter MCP tools, not core ones), which could still provide tangible benefit. I've added this evaluation to the roadmap.
what's the difference between a "transition" (purple line, not shown in the workflow) as opposed to happy path / failure?
Interesting. I built a ticketing system similar to Beads which has yielded more predictable results with Claude and other models, and I'm currently building a custom harness. I'm able to use offline models, though my GPU RAM bandwidth is much lower. I'm also planning on doing something similar to what you've built, namely the editing tools and whatnot. I hate how long it takes for Claude to look for files; it feels wasteful. I'm still astounded that everyone else has figured out ways to speed up harnesses, but Claude Code is still slow like a slug. I don't even care if I am waiting on the LLM in terms of slowness, but running local tools slowly bothers the living crap out of me. Stop using grep, RIPGREP IS FASTER!
In any case, I'll have to check out Statewright after work ;)
I feel you on how sluggish Claude Code can be, you just never know what those pulsing prompts are doing in the background...
Given Statewright plugs into Claude Code, there is a little added overhead while managing the state machine logic, but for complicated workflows, if it saves you a few debug loops, mass edit reversions or death spirals, I think the case for including it can be pretty solid.
I think this will be the next frontier for these models: improving the desktop tooling. I am surprised I've yet to see them go all in on hiring desktop app developers to overhaul Claude Code / Codex / Antigravity / etc., because there are so many things they could do to drastically reduce the footprint and issues.
In your GitHub repo, the JSON format shown for defining custom workflows is very simple. I wonder if that limits the detail in the state-related instructions and error messages you can send to a model.
For example, in state transitions, does your tool just tell the model something like "you are in 'act' mode and no longer in 'plan' mode, here are your new available tools"? Seems difficult to give it any more informative messages given how simple the workflow definitions are. Likewise when the model attempts to do something that's not supported for tools in the given phase.
The workflow definition is intentionally simple... the enforcement layer handles the mechanics. However, the model gets more context than just "you're in <xyz> mode now".
Each state has an `instructions` field for phase-specific guidance, and when an agent's action (tool call) gets rejected, the error message lets the model know what went wrong and what's available to move forward:
Tool 'Edit' is not available in the 'planning' phase.
Allowed Tools: Read, Grep, Glob
To advance, call statewright_transition with READY -> implementing
Models (even simple ones) tend to reason through these error messages, adjusting their approach rather than retrying the blocked call. Additionally, on transitions the model is required to include a rationale explaining why it's transitioning (`data.rationale`), which creates an audit trail of the agent's reasoning at each phase boundary. That ends up being one of the most useful parts of the run history viewable on statewright.ai
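For a feel of it, a state definition and transition request look roughly like this -- a simplified sketch; `instructions` and `data.rationale` are the real fields mentioned above, the rest of the shape is indicative:

```rust
// Rough shapes only: `instructions` and `data.rationale` are the fields
// discussed above; the surrounding structure is an indicative sketch.
use serde_json::json;

fn main() {
    // A state carrying phase-specific guidance for the model.
    let planning = json!({
        "instructions": "Understand the failing behavior. Read-only phase: no edits.",
        "tools": ["Read", "Grep", "Glob"]
    });

    // A statewright_transition request: target state plus a rationale
    // that lands in the run's audit trail.
    let transition = json!({
        "to": "implementing",
        "data": { "rationale": "Root cause isolated to the config parser; ready to edit." }
    });

    println!("{planning}\n{transition}");
}
```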
Is the editor/composer separate from the runtime?
If I build a workflow in the visual editor, can I use that same flow inside my own app just by using the runtime/engine? Or is it mainly tied to the Statewright platform and Claude Code plugin?
I’m wondering if the runtime can be used as a standalone piece to power apps I build.
Yes, the engine handles the full workflow schema, including guards. There are some aspects of runtime enforcement (env vars, command filtering, etc., exposed via the UI) that currently only live in the plugin layer, but the engine parses and exposes everything. All you would have to do is wire up enforcement in your app the same way the plugin does.
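Roughly, the wiring looks like this -- a minimal sketch with stand-in names (the real engine types differ), showing the single enforcement point your app would own:

```rust
// App-side enforcement sketch, mirroring what the plugin layer does.
// `Policy` stands in for a parsed workflow; real engine types differ.
type Policy = fn(state: &str, tool: &str) -> bool;

struct Session {
    state: String,
    policy: Policy,
}

impl Session {
    /// The one enforcement point: consult the machine BEFORE executing.
    fn execute_tool(&self, tool: &str) -> Result<(), String> {
        if !(self.policy)(&self.state, tool) {
            return Err(format!(
                "Tool '{tool}' is not available in the '{}' phase.",
                self.state
            ));
        }
        // ... actually dispatch the tool here ...
        Ok(())
    }
}

fn main() {
    // Toy policy: the planning phase is read-only.
    let policy: Policy = |state, tool| state != "planning" || tool == "Read";
    let session = Session { state: "planning".into(), policy };

    assert!(session.execute_tool("Edit").is_err());
    assert!(session.execute_tool("Read").is_ok());
}
```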
Does it make sense to ship an MCP code mode API? I'm surprised you're recommending MCP as-is when concerned about context usage optimization. I don't have a lot of hands-on experience either way yet so I'm curious what's best and/or most popular... I understand MCP is less effort and still affordable at VC-subsidised prices.
for the integration piece that ties into Claude Code and other places where AI is used most frequently? yes, I think it does... we're not fighting context in Opus/Sonnet as much as we are in smaller models, and we're only adding about 6 tools here, which is a smaller footprint than other MCP exposures. For smaller models, using the core directly gives a tighter interface that doesn't bloat the tool space, in my experimentation.
Stately (and XState ^_^) is pretty neat, I hadn't come across it yet... (edit:) neat to see visual XState being used for application logic as well
I see constant posts on Reddit/HN about the ways that AI is amazing and at the same time is fudging it (literally). Nobody can make reliability guarantees on something that's non-deterministic and non-idempotent. Nobody's AI workflow suite of tools can claim this. Prompting gets you closer to the mark but is still non-deterministic. Breaking the problem down into chunks with valid transition criteria, so that even tiny models can step through them, I believe gets us closer to where we want to be semantically.
Mr. Claude's Opus says that this is a very feasible thing. It has better support for hooks than Cursor, and full MCP support, so protocol-layer blocking (like Claude) is possible. Adding it to the roadmap...
nocodo is one of my product experiments, currently using 120B model but I have tested a few agents inside it with 20B models.
I create a bunch of agents, each with very specific goals. Like Project Manager, Backend Engineer, etc.
Each agent gets a very compact list of tools and access to only certain parts of the filesystem or commands.
https://github.com/brainless/nocodo/tree/main/agents/src
Nice project... the per-agent tool restriction is the same core insight (smaller tool space -> better reasoning)
The main difference with Statewright is that tool access changes over time within a single agent. Planning phase gets read-only tools, edit capability unlocks after the agent proves it has adequate understanding... test tools unlock after the fix. State machines handle the phase transitions, guards and retry loops.
Your multi-agent approach decomposes by role instead of by phase/state. Both are valid. Since you're already in Rust, the engine crate (crates/engine) is a pure library with no deps. It might be interesting to see if putting a state machine around your orchestration layer improves your observed performance
First thought: why do we need the statewright.ai external API? Why can't we do everything locally?
Second thought: enforcing tool access is useful, and I built myself a Pi extension to deny access to particular tools in some workflows.
But we somehow need to force agents to obey the rules.
For example, I have rules when using Pi that ask the main agent to dispatch implementer agents in parallel using git worktrees. Sometimes it uses git worktrees, sometimes not.
The thoughts are like this: "the user asked me to use git worktrees so let me start using git worktrees. But wait, the task is simple so maybe I don't need git worktrees..."
If I ask why it didn't follow the rules, it says something like: "The user is right, I should have followed the rules..."
> For example, I have rules when using Pi that ask the main agent to dispatch implementer agents in parallel using git worktrees. Sometimes it uses git worktrees, sometimes not.
I've taken the approach that whenever this happens, it's my fault. The instructions were not clear enough, not direct enough, or more often, there's just too many of them.
I'm now at the point where my Pi system prompt + agents + skills + tools starts out at just 7k context. It's all very clear and concise. I almost never have ambiguous responses like this, at least not near the start of a session.
Combined with instructions to keep the main session as a coordinator and use subagents for all non-trivial work, I can get a lot of work done before hitting 100k context and basically never go over 150k.
It's a stark contrast with Claude Code, where I was starting at about 35k context even after trimming my stuff down. It's hardly surprising if an agent doesn't know what to do when you dump 30k+ of context with all kinds of rules and workflows, most of them unrelated to the current task, before you even do anything.
I just have a smart model write a testable, phased plan, have a cheaper model implement the phases, and yet another model review each phase. I don't see the value of adding a Rust state engine. Algorithmically verifiable things can be tests, and more nebulous things (like pattern compliance) need an LLM to do the heavy lifting and can make mistakes, so what does the state engine buy you?
the state engine is the part that can't hallucinate. even with simple steps/prompting the review model can miss things... it's still an LLM making a judgement call at the end of the day.
the state engine doesn't judge, it enforces... with code and not transformers ^_^
if a tool (or any other guardrail) isn't valid in a given state, the call gets rejected before it ever executes. that's the gap between "a model said this is okay" vs. "the system structurally prevents this"
I don't understand. Let's say my state is whether we are in conformance with repo patterns. Walk me through how you don't/can't hallucinate, given that you need an LLM to determine the state. For state variables that don't need LLMs, you can simply use tests and commit hooks, no?
the LLM doesn't determine the state... it requests a transition to change the state. the engine evaluates guards (over data carried along the way) to decide if the transition is valid.
it (the LLM) can't skip from implementation to deploy if the guard says the tests haven't passed. the model receives feedback that what it tried to do is invalid, along with the reasons why. the guard can't be skipped. the model then tries to resolve that new information to make the state transition... much like it would if a human in the chair denied a step.
the model can't merge if it hasn't gone through your review state, even if it wants to (it'll try though)
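to make the guard point concrete, a tiny sketch (state and field names invented for illustration):

```rust
// The guard is plain code evaluating carried data -- nothing for the
// model to argue with. Names here are invented for illustration.
use serde_json::{json, Value};

fn guard_enter_deploy(data: &Value) -> Result<(), &'static str> {
    if data["tests_passed"].as_bool() == Some(true) {
        Ok(())
    } else {
        Err("transition denied: tests have not passed; complete the test state first")
    }
}

fn main() {
    let data = json!({ "tests_passed": false, "rationale": "looks done to me" });
    // However confident the rationale sounds, the guard says no.
    assert!(guard_enter_deploy(&data).is_err());
}
```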