I guess the type of lawyer who does this would be the same type who offloads research to paralegals without properly valuing that as real work, and somehow believes it can be substituted by AI. Maybe they never engage their brain cells and just view lawyering as a performative dance to appease the legal gods?
Stupid sexy robot judge…
One thing an adversarial judicial system like the American one has going for it is that if one party sloppily uses GenAI to write their documents, they can lose, because the other party can point that out. A lot of the excuse for using LLMs in software development is that modern software development is terrible anyway, so if you can get your slop to market faster than some other schlub, you probably won’t lose customers. When there’s a balanced incentive to point out hallucinations, they (hopefully) won’t get that far.
> When there’s a balanced incentive to point out hallucinations, they (hopefully) won’t get that far.
That I can see. Unlike software “engineering”, law is a field which has high and exacting standards - and faltering even slightly can lead to immediate and serious consequences.
I get the problem with using made-up citations in your filing, but the idea for Harvey mentioned in the article is not all bad. If they can combine their LLM with a database of real cases and build their software so it never cites a case that isn’t in that database, they would have a great start.
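Something like this, as a back-of-the-envelope sketch; the database contents, the citation pattern, and the names here are all made up for illustration, not how Harvey actually works:

```python
import re

# Toy stand-in for a verified case-law database; a real product would sit
# on Westlaw/LexisNexis-scale data. Entries here are purely illustrative.
KNOWN_CASES = {
    "410 U.S. 113",
    "347 U.S. 483",
}

# Very rough pattern for U.S. Reports citations like "410 U.S. 113";
# real citation formats are much messier.
CITATION_RE = re.compile(r"\b\d{1,4} U\.S\. \d{1,4}\b")

def unknown_citations(draft: str) -> list[str]:
    """Return every citation in the draft that is not in the database."""
    return [c for c in CITATION_RE.findall(draft) if c not in KNOWN_CASES]

draft = "As held in 410 U.S. 113 and reaffirmed in 999 U.S. 999, ..."
bad = unknown_citations(draft)
if bad:
    # Refuse to emit the draft rather than file invented authority.
    print("Rejecting draft; unknown citations:", bad)
```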
When confronted with a problem like “your search engine imagined a case and cited it”, the next step is to wonder what else it might be making up, not to just quickly slap a bit of tape over the obvious immediate problem and declare everything to be great.
The other thing to be concerned about is how lazy and credulous your legal team must be that they cannot be bothered to verify anything. That requires a significant improvement in professional ethics, which isn’t something that is really amenable to technological fixes.
> When confronted with a problem like “your search engine imagined a case and cited it”, the next step is to wonder what else it might be making up, not to just quickly slap a bit of tape over the obvious immediate problem and declare everything to be great.
Exactly. Even if you ensure the cited cases or articles are real, the model will still misrepresent what those sources say.
Fundamentally it is just blah-blah-blah-ing until the point comes where a citation would be likely to appear, and then it blah-blah-blahs the citation based on the preceding text it just made up. It plainly should not be producing real citations at all; that it sometimes does is deeply at odds with the fact that it can only pretend at reasoning.
Ensuring the citation is real, RAG-ing the articles in, having the AI rewrite drafts: none of these hacks does anything to address the underlying problems.
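To make that concrete, here’s a toy illustration (hypothetical names throughout) of why an existence check alone can’t save you: a perfectly real citation attached to a holding the case never contained sails straight through:

```python
# The citation string genuinely exists in our toy database...
KNOWN_CASES = {"347 U.S. 483"}

def citation_exists(cite: str) -> bool:
    return cite in KNOWN_CASES

# ...but it's attached to an invented holding, and neither an existence
# check nor naive RAG retrieval verifies the holding itself.
cite = "347 U.S. 483"
claimed_holding = "expert testimony is never required"
print(citation_exists(cite))  # True, even though the holding is fabricated
```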
Yeah, and if you’re going to let the AI write the structure and then have a lawyer validate and rewrite the whole thing, why not remove that step and just have said lawyer actually write the brief and put their accreditation on the line?
We have got to bring back the PE exam for software engineering.
> That requires a significant improvement in professional ethics, which isn’t something that is really amenable to technological fixes.
That goes some way to explaining why programmers don’t have a moral compass.