9 comments

  • bgwalter 104 days ago
All these agent frameworks' docs seem to compete for the most complex set of flow charts imaginable, without ever mentioning what the Rube Goldberg machine is supposed to accomplish. Given that the real open-source output of these contraptions is zero, it seems the flow charts are the goal. Some kind of modern art.
    • CjHuber 103 days ago
      "the absolute trainer to light up AI agents". Doesn't that say enough?? no really tho, I've read the documentation and all I see is a worse DSPy
    • ramanvarma 104 days ago
Do you have benchmarks on tasks with sparse rewards or partial observability? I feel like that's where most "train any agent" claims tend to break down.
      • PaulRobinson 103 days ago
It doesn't replace core algorithms; it plumbs things together. It means you're not having to write the framework to connect things, but your algos are still going to have the same problems they had before.
      • ultmaster 101 days ago
Interesting that the project has come back to life on HackerNews! I posted about it here weeks ago and got no attention at all, haha!

        https://news.ycombinator.com/item?id=44861765

        • throwaway314155 104 days ago
          > Turn your agent into an optimizable beast with ZERO CODE CHANGE (*almost*)!

          OP didn’t think to include this very important fine print. Thanks OP!

          • ripped_britches 104 days ago
            What actually is this?
            • cpard 104 days ago
A framework for optimizing LLM agents, including but not limited to RL. You can even do fine-tuning; they have an example with Unsloth in there.

The design of this is pretty nice: it's based on very simple instrumentation you add to your agent, and the rest happens in parallel while your workload runs, which is awesome.

You can probably also do what DSPy does for optimizing prompts, but without having to rewrite your code against the DSPy API, which can be a big win.
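To make the instrumentation-plus-parallel-optimization idea above concrete, here's a minimal Python sketch. Every name in it (`traced`, `Optimizer`, the reward shape) is hypothetical and illustrative; it is not the project's real API, just the general pattern of wrapping an agent once, streaming traces to a queue, and letting an optimizer consume them in the background while the workload keeps running.

```python
# Hypothetical sketch of trace-based agent optimization, NOT a real API:
# the agent is wrapped once, (input, output, reward) traces stream to a
# queue, and an optimizer consumes them in a background thread.
import queue
import threading

traces: "queue.Queue[tuple[str, str, float]]" = queue.Queue()

def traced(agent_fn):
    """Wrap an agent call so every (input, output, reward) is recorded."""
    def wrapper(task):
        output, reward = agent_fn(task)
        traces.put((task, output, reward))  # the agent's own logic is untouched
        return output
    return wrapper

class Optimizer:
    """Consumes traces in parallel and keeps them sorted by reward."""
    def __init__(self):
        self.best: list[tuple[str, str, float]] = []

    def run(self, n_traces: int):
        for _ in range(n_traces):
            self.best.append(traces.get())
        self.best.sort(key=lambda t: t[2], reverse=True)

@traced
def toy_agent(task: str):
    # Stand-in for an LLM call: returns (output, reward).
    return task.upper(), float(len(task))

opt = Optimizer()
worker = threading.Thread(target=opt.run, args=(3,))
worker.start()
for t in ["short", "a longer task", "mid task"]:
    toy_agent(t)  # workload runs as usual; optimization happens alongside
worker.join()
print(opt.best[0][0])  # highest-reward trace
```

The point of the pattern is the "zero code change (almost)" property the thread jokes about: only the wrapper is added, and the optimizer never blocks the agent.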

              • ramesh31 104 days ago
                >What actually is this?

                Based on the number of emojis, I doubt the author even knows.

              • corranh 103 days ago
Let's see… excessive emojis and wacky punctuation. Hmm, maybe this whole README is LLM-generated.
                • PaulRobinson 103 days ago
                  This is just a style I've seen a lot of people who are a generation or so younger than me enjoy.

                  I'm not expected to write docs the way my father's generation did (thank god), so I don't expect them to write the docs the way I would. If this gets people engaged and excited, I lose nothing, they get something, we're fine.

                  As to the LLM generation claim, I don't care if it is or it isn't. The project seems legit, they're making claims that 3rd parties have verified ("Community Projects"), it looks useful and interesting, so I might spend more time with it soon.

                  • tonyhart7 103 days ago
I bet 80% of the project is LLM-generated anyway.

If it's come to this point, why would we write the README.md ourselves????

                  • lqstuart 103 days ago
                    So it’s some brittle crap built on verl, which is already pretty much train by config (and makes breaking changes with _every single commit_), with no documentation, no examples, and no clear purpose? Heck yeah Microsoft