You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
I’m a pretty big proponent of FOSS AI, but none of the models I’ve ever used are good enough to work without a human treating them like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.
People who think AI codes well are shit at their job
Well, grep doesn’t hallucinate things that are not actually in the logs I’m grepping, so I think I’ll stick to grep.
(Or ripgrep rather)
With grep it’s me who hallucinates that I can write good regex :,)
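For contrast, the grep workflow preferred here is deterministic: the same pattern over the same log always returns the same lines, nothing invented. A rough Python sketch of what `grep -n` does (the file name and pattern in the usage comment are made up):

```python
import re
import sys

def grep(pattern: str, path: str) -> None:
    """Print matching lines with line numbers, like `grep -n`."""
    regex = re.compile(pattern)
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if regex.search(line):
                print(f"{lineno}:{line.rstrip()}")

if __name__ == "__main__":
    # usage (hypothetical): python grep.py "ERROR|timeout" app.log
    grep(sys.argv[1], sys.argv[2])
```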
Hallucinations become almost a non-issue when working with newer models, custom inference, multi-shot prompting, and RAG
But the models themselves fundamentally can’t write good, new code, even if they’re perfectly factual
If LLM hallucinations ever become a non-issue, I doubt I’ll need to read a deeply nested, buzzword-laden lemmy post to first hear about it.
@vivendi @V0ldek hallucinations are a fundamental trait of LLM tech; they’re not going anywhere
Because it’s an upscaled translation tech, maybe?
These views on LLMs are simplistic. As a wise man once said, “check yoself befo yo wreck yoself”, I recommend more education thus
LLM structures are overhyped, but they’re also not that simple
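For anyone who doesn’t speak the buzzwords in the claim upthread, here is a minimal toy sketch of what “multi-shot prompting” and “RAG” mean in practice: worked examples prepended to the prompt, plus snippets retrieved from a corpus. The keyword-overlap retriever stands in for a real embedding index, the model call itself is omitted, and every name in it is made up for illustration:

```python
# Toy sketch of multi-shot prompting + RAG (all names made up for illustration).
DOCS = [
    "re.match checks for a match only at the beginning of the string.",
    "re.search scans the whole string for the first place the pattern matches.",
    "grep prints every input line that matches a pattern.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """RAG step: rank snippets by crude keyword overlap with the query
    (a real system would use an embedding index instead)."""
    words = set(query.lower().split())
    return sorted(corpus, key=lambda s: -len(words & set(s.lower().split())))[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    # "Multi-shot": prepend worked Q/A examples so the model imitates the format.
    shots = (
        "Q: How do I read a file line by line in Python?\n"
        "A: Iterate over the open file object.\n\n"
    )
    context = "\n".join(retrieve(question, corpus))
    return f"{shots}Answer using only this context:\n{context}\n\nQ: {question}\nA:"

# An LLM completion call would consume this prompt.
print(build_prompt("How does re.search differ from re.match?", DOCS))
```

Grounding the prompt this way reduces hallucinated answers; as the replies note, it doesn’t make the model write good new code.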
There are plenty of open issues on open-source repos it could open PRs for, though?
I’m guessing that if it actually worked for that, somebody would have done it by now.
But it probably just does its usual thing of bullshitting something that looks like code, only now you’re also wasting the time of maintainers, who have to confirm that it is bobbins.
Yeah, it’s already a problem for security bugs; LLMs just waste maintainers’ time and make them angry.
They are useless and make more work for programmers, even on Python and JS codebases, which they are trained on the most and are the “easiest”.
It’s already doing that: some FOSS projects regularly get weird PRs that at first glance look good, but on closer inspection are either total nonsense or riddled with bugs. Especially awful are the security-related PRs; those are never made in good faith, though, that’s usually grifting (throwing AI at the wall, trying to cash in on as many bounties as possible). The project lead of curl recently announced that anyone who posts a PR that’s obviously AI, or that was made with AI, will get banned.
Like, it’s really good as a learning tool as long as you don’t blindly believe everything it says: you can ask stuff in natural language and it will resolve the knowledge dependencies you’d otherwise get stuck on in official docs, and you can ask contextual questions and receive contextual answers instead of abstract ones. But code generation… please don’t.
Fuck, you were doing so well in the first half, ahhh.
the poster: “it’s really good as a learning tool”
the poster: “but don’t blindly believe it”
the learner: “how should I know when to believe it?”
the poster: “check everything”
the learner: “so you’re saying I should just read the actual documentation and/or source?”
the poster: “how are you going to ask that anything? how can you fondle something that isn’t a prompt?!”
the learner: “thanks for your time, I think I’m going to find another class”
Nice conversation you had right there in your head. I assume you also took a closer look at it to get a neutral opinion and didn’t just ride one of the two waves “blind AI hype” or “blind AI hate”?
I’ve taken a closer look at Codestral (which is locally hostable), threw stuff at it and got a sense for what it can and can’t do. The general gist: its (Python) syntax is basically always correct, but it sometimes messes up the actual code logic or misreads the user request. That makes it a good tool for questions about specific features, about how certain syntax in a language works, or for looking up potential alternative solutions for smaller code snippets. However, it should absolutely not be used to create huge chunks of your code logic; that will always backfire.
And since some people will read this and think I’m some AI worshipper: fuck no. They’re amoral as fuck; the only models not screwed up through their creation process are those very few truly FOSS ones. But if you hate on something, you have to actually know shit about it and understand its appeal and its non-hyped use cases (they do have them, even LLMs). Otherwise you’ll end up in a social corner filled with bitterness and, depending on the topic, perhaps even increasingly extreme opinions (not saying we shouldn’t smash OpenAI and other corposcum into tiny pieces, we absolutely should).
There are technologies that are utter bullshit, like NFTs. However (unfortunately?) that isn’t the case for AI. We just live in an economy that’s good at abusing everything and everyone.
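To make the “syntax is correct, logic is wrong” point concrete, a hypothetical illustration (made up for this thread, not actual Codestral output): ask for the second-largest distinct value in a list, and get back perfectly valid Python with a subtle logic bug.

```python
# Hypothetical model output for "return the second-largest distinct value":
def second_largest(xs):
    xs = sorted(xs)
    return xs[-2]  # valid syntax, wrong logic: duplicates break it

second_largest([3, 5, 5, 1])  # returns 5; the second-largest distinct value is 3

# What a reviewing human would write instead:
def second_largest_fixed(xs):
    distinct = sorted(set(xs))
    if len(distinct) < 2:
        raise ValueError("need at least two distinct values")
    return distinct[-2]
```

Both versions pass a syntax check; only one survives review, which is exactly the human-in-the-loop point above.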
This is a standard Internet phenomenon (I generalize) called a Sneer Club, i.e. people who enjoy getting together and picking on designated targets. Sneer Clubs (I expect) attract people with high Dark Triad characteristics, which is (I suspect) where Asshole Internet Atheists come from - if you get a club together for the purpose of sneering at religious people, it doesn’t matter that God doesn’t actually exist, the club attracts psychologically f’d-up people. Bullies, in a word, people who are powerfully reinforced by getting in what feels like good hits on Designated Targets, in the company of others doing the same and congratulating each other on it.
@froztbyte @Natanox
In that moment, the novice was enlightened
People have done it; there are a bunch of services that do it. But they’re paid.
e.g. https://devin.ai/
Hey, Devin! Really impressive that the product best known for literally lying about all of its functionality in its release video still somehow exists and you can pay it money. Isn’t the free market great.
No the fuck it’s not
“a fool and their money are soon parted”
fuck off with the unrequested advertising kthx