Back in 2023, we wrote about how lawyers were filing briefs they’d written with ChatGPT. They thought it was a search engine, not a lying engine, and the bot would proceed to cite a whole pile of suppor…
I get the problem with using made-up citations in your filing, but the idea for Harvey mentioned in the article is not all bad. If they can combine their LLM with a database of real cases and design the software so it never cites a case that isn’t in that database, they would have a great start.
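Concretely, the whitelist check could be as simple as this sketch. Everything here is invented for illustration: KNOWN_CASES stands in for a real legal database, and the citation pattern is far rougher than real Bluebook citations (the Varghese cite is the famous one ChatGPT made up in the Avianca matter):

```python
import re

# Stand-in for a real legal database; a real system would query
# something like CourtListener or Westlaw, not a set literal.
KNOWN_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Marbury v. Madison, 5 U.S. 137 (1803)",
}

# Very rough pattern for "Name v. Name, <reporter> (<year>)" citations.
CITATION_RE = re.compile(
    r"[A-Z][\w.'-]*(?: (?:of|[A-Z][\w.'-]*))*"   # first party name
    r" v\. "
    r"[A-Z][\w.'-]*(?: (?:of|[A-Z][\w.'-]*))*"   # second party name
    r", [^()]+ \(\d{4}\)"                        # reporter cite and year
)

def unverified_citations(draft: str) -> list[str]:
    """Every citation-shaped string in the draft the database can't confirm."""
    return [c for c in CITATION_RE.findall(draft) if c not in KNOWN_CASES]

draft = (
    "As held in Brown v. Board of Education, 347 U.S. 483 (1954), schools "
    "may not segregate; see also Varghese v. China Southern Airlines, "
    "925 F.3d 1339 (2019)."
)
for cite in unverified_citations(draft):
    print("NOT IN DATABASE:", cite)  # flags the hallucinated Varghese cite
```

Even that only catches citations that don’t exist; it does nothing about real citations that don’t say what the model claims they say.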
When confronted with a problem like “your search engine imagined a case and cited it”, the next step is to wonder what else it might be making up, not to just slap a bit of tape over the immediate problem and declare everything great.
The other thing to be concerned about is how lazy and credulous your legal team must be that they cannot be bothered to verify anything. Fixing that requires a significant improvement in professional ethics, which isn’t really amenable to technological fixes.
Exactly. And even if you ensure the cited cases or articles are real, it will still misrepresent what those articles say.
Fundamentally, it is just blah-blah-blahing along until the point where a citation would be likely to appear, then it blah-blah-blahs a citation based on the preceding text it just made up. It plain should not be producing real citations. That it can produce real citations at all is deeply at odds with any pretence that it is reasoning.
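You can watch that mechanism in miniature with a toy bigram model. Everything below is invented for illustration (two fake training sentences, two fake cases); real LLMs do the same trick at a vastly larger scale:

```python
import random
from collections import defaultdict

# Two made-up training sentences, each containing a citation.
corpus = (
    "the court held in Smith v. Jones , 100 F.3d 200 that damages apply . "
    "the court held in Brown v. Green , 300 F.3d 400 that damages apply ."
).split()

# "Train": record every observed next-token for each token.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

# "Generate": repeatedly pick a plausible continuation.
random.seed(7)
out = ["the"]
for _ in range(14):
    out.append(random.choice(model[out[-1]]))

# May print e.g. "the court held in Smith v. Green , 300 F.3d 200 ..."
# -- a citation-shaped string stitched together from fragments,
# looked up nowhere. Sometimes the fragments happen to reassemble
# into a real citation; the process is identical either way.
print(" ".join(out))
```

Only two of the citations it can emit ever appeared in its training text, and nothing in the generation step distinguishes those from the stitched-together ones.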
Ensuring the citations are real, RAG-ing the articles in there, having an AI rewrite drafts: none of these hacks does anything to address the underlying problems.
Yeah, and if you’re going to let the AI write the structure and then have a lawyer rewrite the whole thing after validating it, why not remove the AI step and just have that lawyer actually write the brief and put their accreditation on the line?
We have got to bring back the PE exam for software engineering.
That goes some way to explaining why programmers don’t have a moral compass.