please don’t encourage them, someone’s got to review that shit!
AI review baby!!! Here we go!
No the fuck it’s not
I’m a pretty big proponent of FOSS AI, but none of the models I’ve ever used are good enough to work without a human treating them like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking `grep` for me.
People who think AI codes well are shit at their job.
Don’t fucking encourage them
It’s so bad at coding… Like, it’s not even funny.
So how do you tell apart AI contributions to open source from human ones?
if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.
GitHub, for one, colors the icon red for AI contributions and green/purple for human ones.
You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers.
Mostly said by tech bros and startups.
That should really tell you everything you need to know.
why is no-one demanding to know why the robot is so sexay
Hi hi please explain my boner
I don’t know what this has to do with this thread, but maybe ask Hajime Sorayama, he kind of came up with the whole concept of sexy robots.
not super into cyber facehugger tbh
but look how delighted they are!
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
This is the most entertaining thing I’ve read this month.
“I can’t sing or play any instruments, and I haven’t written any songs, but you *have* to let me join your band”
yeah someone elsewhere on awful linked the issue a few days ago, and throughout many of his posts he pulls that kind of stunt the moment he gets called on his shit
he also wrote a 21 KiB screed very huffily saying one of the projects’ CoC has failed him
long may his PRs fail
I tried asking some chimps to see if the macaques had written a New York Times best seller, if not Macbeth, yet somehow Random House wouldn’t publish my work
Man trust me you don’t want them. I’ve seen people submit ChatGPT generated code and even generated the PR comment with ChatGPT. Horrendous shit.
The maintainers of `curl` recently announced that any bug reports generated by AI need a human to actually prove they’re real. They cited a deluge of AI-generated reports claiming to have found bugs in functions and libraries which don’t even exist in the codebase.
you may find, on actually going through the linked post/video, that this is in fact mentioned in there already
Today the CISO of the company I work for suggested that we should get qodo.ai because it would “… help the developers improve code quality.”
I wish I was making this up.
My boss is obsessed with Claude and ChatGPT, and loves to micromanage. Typically, if there’s an issue with what a client is requesting, I’ll approach him with:
- What the issue is
- At least two possible solutions or alternatives we can offer
He will then, almost always, ask if I’ve checked with the AI. I’ll say no. He’ll then send me chunks of unusable code that the AI has spat out, which almost always perfectly illustrate the first point I just explained to him.
It’s getting very boring dealing with the roboloving freaks.
Hot take, people will look back on anyone who currently codes, as we look back on the NASA programmers who got the equipment and people to the moon.
They won’t understand how they did so much with so little. You’re all gourmet chefs in a future of McDonalds.
Nah, we’re plumbers in an age where everyone has decided to DIY their septic system.
Please, by all means, keep it up.
This is dead on! 99% of the fucking job is digital plumbing so the whole thing doesn’t blow up when (a) there’s a slight deviation from the “ideal” data you were expecting, or (b) the stakeholders wanna make changes at the last minute to a part of the app that seems benign but is actually the crumbling bedrock this entire legacy monstrosity was built upon. Both scenarios are equally likely.
Hot take, people will look back on anyone who currently codes, as we look back on the NASA programmers who got the equipment and people to the moon.
I doubt it’ll be anything that good for them. By my guess, those who currently code are at risk of suffering some guilt-by-association problems, as the AI bubble paints them as AI bros by proxy.
I think most people will ultimately associate chatbots with corporate overreach rather than rank-and-file programmers. It’s not like decades of Microsoft shoving stuff down our collective throat made people think particularly less of programmers, or think about them at all.
Perhaps! But not because we adopted vibe coding. I do have faith in our ability to climb out of the Turing tarpit eventually, but only by coming to a deeper understanding of algorithmic complexity.
Also, from a completely different angle: when I was a teenager, I could have a programmable calculator with 18MHz Z80 in my hand for $100. NASA programmers today have the amazing luxury of the RAD750, a 110MHz PowerPC chipset. We’re already past the gourmet phase and well into fusion.
Damn, this is powerful.
If AI code was great, and empowered non-programmers, then open source projects should have already committed hundreds of thousands of updates. We should have new software releases daily.
That illustration is bonkers
this guy, I use his stuff a lot
Wait, what? This looks exactly like the art in the dentist office I go to. They have superheroes doing dental things, like Catwoman aggressively using her nails to pick at her teeth lol. Is this person near Seattle, do you know?
See Pop art and Roy Lichtenstein.
I think the guy’s in Russia! It’s a pretty common style.
If LangChain was written via VibeCoding then that would explain a lot.
so what are the sentiments about langchain? I was recently working with it to try to build some automatic PR generation scripts but I didn’t have the best experience understanding how to use the library. the documentation has been quite messy, repetitive and disorganized—somehow both verbose and missing key details. but it does the job I wanted it to, namely letting me use an LLM with tool calling and custom tools in a script
seems like garbage to me
Given the volatility of the space I don’t think it could have been doing stuff much better, doubt it’s getting out of alpha before the bubble bursts and stuff settles down a bit, if at all.
Automatic PR generation sounds like something that would need a prompt and a ten-line script rather than LangChain, but it also seems both questionable and unnecessary.
If someone wants to know an LLM’s opinion on what the changes in a branch are meant to accomplish they should be encouraged to ask it themselves, no need to spam the repository.
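For what it’s worth, the “prompt and a ten-line script” approach really is about this small. A minimal sketch, assuming the OpenAI Python SDK as the chat client (any one would do) — the function names, the model name, and the whole script are illustrative, not anything from the linked repo:

```python
# Hypothetical minimal alternative to LangChain for PR descriptions:
# shell out for the diff, wrap it in one prompt, send it to a model.
import subprocess


def build_prompt(diff: str) -> str:
    """Wrap a unified diff in a short instruction for the model."""
    return (
        "Write a concise pull-request description for this diff:\n\n"
        + diff
    )


def summarize_diff(base: str = "main") -> str:
    """Diff the working tree against `base` and ask a model to describe it."""
    diff = subprocess.run(
        ["git", "diff", base],
        capture_output=True, text=True, check=True,
    ).stdout
    # Imported lazily so the prompt-building part works without the SDK.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": build_prompt(diff)}],
    )
    return resp.choices[0].message.content
```

No agent framework, no tool-calling abstraction — which is roughly the point being made above: if the task is one prompt in, one completion out, a library like LangChain mostly adds surface area.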
sounds like you figured out the referenced problem for yourself already
AI isn’t bad when supervised by a human who knows what they’re doing. It’s good to speed up programmers if used properly. But business execs don’t see that.
Even when I supervise it, I always have to step in to clean up its mess, and tell it off because it randomly renames my variables and functions because it thinks it knows better and oversteps. Needs to be put in its place like a misbehaving dog, lol
autoplag isn’t bad when supervised by a human
even when I supervise it, it’s bad
my god you people are a whole kind of poster and it fucking shows
yeah nah, it’s bad then too actually
How? It’s just like googling stuff but less annoying
also, fucking ew:
Needs to be put in its place like a misbehaving dog, lol
why do AI guys always have weird power fantasies about how they interact with their slop machines
It’s almost as if they have problematic conceptions (or lack thereof) of exploitation and power dynamics!
given your posts in this thread, I don’t think I trust your judgement on what less annoying looks like
Google used to return helpful results that answered questions without needing to be corrected before it started returning AI slop. So maybe that is true now, but only because the search results are the same AI slop as the AI.
For example, results in stack overflow generally include some discussion about why a solution addressed the issue that provided extra context for why you might use it or do something else instead. AI slop just returns a result which may or may not be correct but it will be presented as a solution without any context.
Google became shit not because of AI but because of SEO.
The enshittification was going on long before OpenAI was even a thing. Remember when we had to add “reddit” to searches just to get actual results instead of some badly written bloated text?
Google search became shit when they made the guy in charge of ads also in charge of search.
this is actually the correct answer - it is both written about (prabhakar raghavan, look him up), and the exact mechanics of how they did it were detailed in documents surfaced in one of the lawsuits that google recently lost (the ones that found them to be a monopoly)
The funny thing about stack overflow is that the vocal detractors have a kernel of truth to their complaints about elitism, but if you interact with them enough you realize they’re often the reason the gate keeping is necessary to keep the quality high.
Stack Overflow searches turned up highly specialised examples that wouldn’t suit your application. It’s easier to just ask an AI to write a simple loop for you whenever you forget a bit of syntax
wow imagine needing to understand the code you’re dealing with and not just copypasting a bunch of shit around
reading documentation and source code must be an excruciating amount of exercise for your poor brain - it has to even do something! poor thing
You’ve inadvertently pointed out the exact problem: LLM approaches can (unreliably) manage boilerplate and basic stuff but fail at anything more advanced, and by handling the basic stuff they give people false confidence that leads to them submitting slop (that gets rejected) to open source projects. LLMs, as the linked pivot-to-ai post explains, aren’t even at the level of occasionally making decent open source contributions.
Man, I remember Eclipse doing code completion for for-loops and other common snippets in like 2005. LLM riders don’t even seem to know what tools have been in use for decades and think using an LLM for these things is somehow revolutionary.
the promptfondlers that make their way into our threads sometimes try to brag about how the LLM is the only way to do basic editor tasks, like wrapping symbols in brackets or diffing logs. it’s incredible every time
Promptfondlers 🤣
Air so polluted it makes people sick, but it’s all worth it because you can’t be arsed to remember the syntax of a for loop.
it is not just like googling stuff if it actively fucks up already existing parts of the code