Ask HN: Why are so many rolling out their own AI/LLM agent sandboxing solution?

32 points | by ATechGuy 17 days ago

6 comments

  • varshith17 16 days ago
    Same reason everyone rolled their own auth in 2010: the problem is simple enough to DIY badly, but complex enough that no standard fits everyone. My Claude Code needs SSH access but not rm. Your agent needs filesystem writes but not network. There's no "OAuth for syscalls" yet.
    • ATechGuy 15 days ago
      > There's no "OAuth for syscalls" yet.

      This exists today in OSes in the form of discretionary/mandatory access controls (e.g., SELinux, AppArmor, Landlock).

      • verdverm 15 days ago
        Yeah, but that's not "click a button, import it from Clerk" OAuth-easy
      • verdverm 16 days ago
        this is the most insightful comment I've heard on this in a while

        To me, OCI seems the best foundation to build on. It has the features, is widely disseminated, and we have a lot of practice and tooling already

      • verdverm 17 days ago
        I started building my own agent when I became frustrated with copilot not reading my instruction files reliably. Looked at the code, and wouldn't you know they let the LLM decide...

        Once I started down this path, I knew I was going to need something for isolated exec envs. I ended up building something I think is quite awesome on Dagger. Lets me run in containers without running containers, can get a diff or rewind history, can persist and share via any OCI registry.

        So on one hand, I needed something and chose a technology that would offer me interesting possibilities, and on the other I wanted to have features I don't expect the likes of Microsoft to deliver with Copilot, only one of which is my sandbox setup.

        I'm not sure I would call it rolling my own completely; I'm building on established technology (OCI, OCI registries)

        I don't expect a new standard to arise; OCI is already widely adopted and makes sense. But there are other popular techs, and there will be a ton of reimplementations under another name/claim. The other half of this is that AI providers are likely to want to run and charge money for this, so I personally expect more attempts at vendor lock-in in this space. For example, Anthropic bought Bun, and I anticipate some product will come of it, isolation- and/or canvas-related

        • ATechGuy 17 days ago
          What was the first concrete thing you needed that existing sandboxing tools (Docker/VMs/bwrap) just didn't provide?
          • verdverm 17 days ago
            This question reads like HN market research and not genuine curiosity

            Go look at what Dagger provides over those technologies as a basis for advanced agent env capabilities. I use it for more than just sandboxing with my agent

            I would also point out that sandboxing is just one feature, one that is approaching required status for an agentic framework, and unlikely to be an independent product or solution

        • wassel 11 days ago
          I think a lot of teams realize “agent sandboxing” isn’t just isolation, it’s about making long-running agent work actually converge.

          In practice, agents don’t fail only because the model is wrong. They fail because the environment is flaky: missing deps, slow setup, weird state, unclear feedback loops. If you give an agent an isolated, secure environment that’s already set up for the repo, you remove a ton of friction and iterations become much more reliable.

          The other piece is “authority” / standards. You can write guidelines, but what keeps agents (and humans) aligned is the feedback: tests, linters, CI rules, repo checks. Centralizing those standards and giving the agent a clean place to run them makes compliance much more deterministic.

          We built this internally for our own agent workflows and we’re debating whether it’s worth offering the sandbox part as a standalone service (https://envs.umans.ai), because it feels like the part everyone ends up rebuilding.

          • jacobgadek 11 days ago
            The "token and time sink" point is huge. I've found that even when agents can install deps, they often get stuck in reasoning loops trying to fix a "build toolchain issue" that is actually just a hallucinated package name.

            I built a local runtime supervisor (Vallignus) specifically to catch these non-converging loops. It wraps the agent process to enforce egress filtering (blocking those random pip installs) and hard execution limits so they don't burn $10 retrying a fail state.

            It's effectively a "process firewall" for the agentic workflow. Open source if you want to see the implementation: https://github.com/jacobgadek/vallignus
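            The hard-limit half of that idea fits in a few lines of Python. This is a generic illustration of a timeout-plus-retry budget, not Vallignus's actual implementation; the limits and commands are made up:

```python
import subprocess

def run_step(cmd, timeout_s=5, max_attempts=3):
    """Run one agent step under a wall-clock limit and a bounded retry
    budget, so a non-converging loop fails fast instead of burning tokens."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
        except subprocess.TimeoutExpired:
            continue  # hard time limit hit: count it as a failed attempt
        if result.returncode == 0:
            return attempt  # the step converged
    raise RuntimeError(f"step did not converge in {max_attempts} attempts")
```

            The egress-filtering half (blocking the random pip installs) needs something like a proxy or network namespace around the process, which doesn't fit in a few lines.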

            • ATechGuy 11 days ago
              > They fail because the environment is flaky: missing deps, slow setup, weird state, unclear feedback loops.

              Why can't agents install missing deps based on the error message?

              • wassel 11 days ago
                They often try, but two things bite in practice:

                - Permissions and sandbox limits. Many agents don’t run on a dev’s laptop with admin access. They run in the cloud or in locked-down sandboxes: no sudo, restricted filesystem, restricted network egress. So “just install it” is sometimes not allowed or not even possible.

                - It is a token and time sink, and easy to go down the wrong path. Dependency errors are noisy: missing system libs, wrong versions, build toolchain issues, platform quirks. Agents can spend a lot of iterations trying fixes that don’t apply, or that create new mismatches.

                Repo-ready environments don’t replace agents installing deps. They just reduce how often they have to guess.
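                One cheap mitigation for the hallucinated-package case upthread is gating installs through an allowlist, so the agent gets one deterministic refusal instead of a retry loop. A minimal sketch (the allowlist contents, function name, and messages are illustrative):

```python
# Hypothetical allowlist, e.g. derived from the repo's lockfile.
ALLOWED_PACKAGES = {"requests", "numpy", "pytest"}

def gate_install(package: str) -> str:
    """Return short, deterministic feedback instead of letting a
    bad install attempt fail noisily after a long toolchain run."""
    if package not in ALLOWED_PACKAGES:
        return f"refused: {package!r} is not in the repo's dependency allowlist"
    return f"ok: {package} may be installed"
```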

            • kaffekaka 16 days ago
                Speaking for myself, a bash script and a Dockerfile (coupled with a dedicated user on a Linux system) seemed simpler than discovering and understanding some other, overcomplicated tool built by someone else. Example: a coworker vibe-coded a bloated tool, but it was not adapted to OSes other than his own, it was obviously LLM-generated so neither of us actually knew the code, etc. My own solution has shortcomings too, but at least I can be aware of them.

              It simply feels as if there is no de facto standard yet (there surely will be).
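                For what it's worth, the DIY version really can be that small. A sketch of the Dockerfile-plus-dedicated-user setup described above (the base image, package list, and user name are placeholders):

```dockerfile
# Minimal sandbox image: non-root user, only the tools the agent needs.
FROM debian:stable-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends git python3 \
    && rm -rf /var/lib/apt/lists/* \
    && useradd --create-home agent
USER agent
WORKDIR /home/agent/work
```

                The accompanying bash script would then run something like `docker run --network none -v "$PWD":/home/agent/work ...` so the agent gets a writable workspace but no egress.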

              • verdverm 14 days ago
                I expect OCI will be the standard, largely because of the ubiquity and experience we already have.

                I'm building on OCI (via Dagger), so you are in good company, if I may say so

              • rvz 17 days ago
                This is no different to people rolling their own and DIY'ing custom cryptography, which is absolutely not recommended.

                The question is how easy is it to bypass these DIY 'sandboxes'?

                As long as there is a full OS running, you are one libc function away from a sandbox escape.

                • ATechGuy 17 days ago
                  > As long as there is a full OS running, you are one libc function away from a sandbox escape.

                  Does this mean all software in the world is just one function away from escape?

                  • sargstuff 17 days ago
                    Yup. Technically, it just takes one external reference from inside the sandbox environment to outside it (a "software stargate portal address to an alternate environment" / one evaluated part of an s-expression using a system() reference).

                    Running software is insecure the moment the electrical switch is on / you start checking Schrödinger's box. Although reverse Schrödinger's cat might be more accurate: it can escape the box if someone peeks from outside the box.

                • aristofun 15 days ago
                  Can you explain it to me like I'm 5: how does that even work?

                  If you cut network and files for Claude, for example, how is it even going to do the useful work?

                  • hahahahhaah 14 days ago
                    You don't cut all network access, you just decide what you allow to pierce.

                    For files, it gets an isolated filesystem. That can contain a git clone.