I believe this is the direction enterprise software is generally going: an open-source base with a very permissive license that each company can then adapt (with Claude, Codex, etc.) for its own needs, either running it on its own infrastructure or in an environment hosted by the author. I've built a similarly extensible codebase for an ERP: https://github.com/lambdadevelopment/lambda-erp
Presumably this is an issue for the commercial competitors too, but in light of the recent court ruling in United States v. Heppner that AI chatbots can break attorney-client privilege and/or work product doctrine, what kinds of things can this be safely used for? (I would assume you want to avoid sending anything with client-confidential information in it to a service provider like OpenAI or Anthropic.)
Potentially if used with a local LLM and not a service provider, this might protect attorney-client privilege?
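A local setup is easy to sketch. Assuming a locally hosted Ollama server (the endpoint and model name here are illustrative, not something from the project), the prompt never leaves the firm's own machine:

```python
import json

# Hypothetical local endpoint; Ollama's default is http://localhost:11434.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def build_local_llm_request(prompt: str, model: str = "llama3") -> str:
    # Build the JSON body for a single non-streaming completion request.
    # It is only ever sent to LOCAL_ENDPOINT, so no client-confidential
    # text reaches an outside service provider.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_local_llm_request("Summarize the indemnification clause below: ...")
# e.g. urllib.request.urlopen(LOCAL_ENDPOINT, data=body.encode())
# once the local server is actually running
```

Whether that preserves privilege is a legal question, not a technical one, but at least the data-flow argument is clean.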
It’s not different from googling. If a non-lawyer googles legal advice ("how to give yourself an alibi after murdering someone") it will not be protected by attorney-client privilege. Same if you ask OpenAI.
United States v. Heppner mentioned a public chatbot service. If a law firm (or specialized provider) offered a chatbot running on its own servers and hosted the traces and other data on the law firm's own servers, it would almost certainly be protected. But another case would need to happen to determine that.
But that only applies to clients using the chatbot. If a lawyer is using the LLM, it is definitely protected. No different than if a lawyer searches something on Google or LexisNexis. The search itself is protected. I guess you could debate metadata, but the content surely is protected.
You can have a dedicated deployment per customer, per case, segregating it logically. I have seen this happen at larger law firms. It could be based on groups, teams, partners, etc.
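The routing layer for that kind of segregation is simple in principle. A minimal sketch (the registry, client names, and internal URLs below are all hypothetical) just maps each client/matter pair to its own isolated deployment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MatterKey:
    client: str
    matter: str

# Hypothetical registry: each client/matter pair gets its own
# isolated deployment, so documents and traces never cross cases.
DEPLOYMENTS = {
    MatterKey("acme", "m-2024-001"): "https://acme-m1.internal.example/api",
    MatterKey("acme", "m-2024-002"): "https://acme-m2.internal.example/api",
}

def endpoint_for(client: str, matter: str) -> str:
    # Route every request to the deployment dedicated to that matter;
    # refuse outright rather than fall back to a shared instance.
    key = MatterKey(client, matter)
    if key not in DEPLOYMENTS:
        raise KeyError(f"No dedicated deployment for {client}/{matter}")
    return DEPLOYMENTS[key]
```

Failing closed on an unknown matter is the important design choice: a silent fallback to a shared deployment would defeat the whole point of the segregation.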
For a moment I thought it was some open-source LLM trained on legal. It's not, it's a web app wrapping major LLM providers and streamlining legal workflows, uploading documents, and having the LLM providers interact with them.
Harvey made a point of fine-tuning ChatGPT models for a year or so, but they struggled to keep up with the pace of new model releases and quit. They never went as far as Cursor, AFAIK, which produced its own router/"composer" models.
Self-hostable legal AI as open source is a useful direction in principle. Hard to tell how mature the actual implementation is, though; the repo is pretty fresh, and the marketing site is doing a lot of heavy lifting compared to what's in the code right now. Will be more interesting to revisit in a few weeks.
Why don't you put a direct link that redirects users to some proprietary AI provider instead of making it look fancy? (If I ask, whatever AI model will produce the same outputs/forms, structured as you wish, and even locally.)
To qualify as more than some wrapper, you need to add a layer of creativity of your own on top of the existing models.
I always wondered if Justin Kan’s Atrium closed its doors prematurely by just 2-3 years. It would have been cool to see a “technology” driven law firm and how it would have adjusted to LLMs.
Interested to try it out!
Some feedback on the homepage: there's nothing above the fold, or directly below it, that says it's a legal AI platform. I would like a legal AI tool, but I'm not familiar with the space and don't know what Harvey or Legora are. It was only the Hacker News title "Mike: open-source legal AI" that gave the context.
It's called "We just discovered Claude Code and so we think Anthropic is Amazing so everything they do is godlike and thus their design choices must also be god like. Apple is Dead, Long Live Anthropic" style.
Hm, I don't think this looks like Anthropic's design style. Anthropic is kind of doing a Chobanicore + Corporate Memphis design system that I personally find kind of creepy. But the website here just feels fresh and pleasant.
Agreed; that's a beautiful site. The main design style apart from minimalism that I notice is glassmorphism. Well, that and a very well chosen Monet to set the tone.
Cool project regardless!
The naming may be confusing.
https://github.com/anthropics/claude-code/tree/main/plugins/...
Except that the font it is using is EB Garamond, and Apple used Garamond heavily from the mid-1980s to the 2000s.
Given that almost everyone is copying both, it is now garbage.
Lawyers live in docx, not pdf.
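And docx is not hard to support directly: a .docx file is just a zip archive of XML, with the visible text in `<w:t>` elements inside `word/document.xml`. A deliberately crude, stdlib-only sketch (it ignores formatting runs, tables, headers, and footers):

```python
import re
import zipfile

def extract_docx_text(path: str) -> str:
    # A .docx is a zip; the document body lives in word/document.xml.
    with zipfile.ZipFile(path) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    # Pull the text content out of every <w:t> element. A real
    # implementation would use an XML parser and preserve paragraphs.
    return " ".join(re.findall(r"<w:t[^>]*>([^<]*)</w:t>", xml))
```

For production you'd want a proper Office Open XML library, but the point stands: reading and writing docx is a solvable, well-specified problem, not a reason to force everything through pdf.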