You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
AI isn’t bad when supervised by a human who knows what they’re doing. It’s good to speed up programmers if used properly. But business execs don’t see that.
Even when I supervise it, I always have to step in to clean up its mess and tell it off, because it randomly renames my variables and functions because it thinks it knows better and oversteps. Needs to be put in its place like a misbehaving dog, lol
my god you people are a whole kind of poster and it fucking shows
yeah nah, it’s bad then too actually
How? It’s just like googling stuff but less annoying
also, fucking ew:
why do AI guys always have weird power fantasies about how they interact with their slop machines
It’s almost as if they have problematic conceptions (or lack thereof) of exploitation and power dynamics!
Before it started returning AI slop, Google used to return helpful results that answered questions without needing to be corrected. So maybe that is true now, but only because the search results are the same AI slop as the AI.
For example, results on Stack Overflow generally include some discussion of why a solution addressed the issue, which provides extra context for why you might use it or do something else instead. AI slop just returns a result that may or may not be correct, but it will be presented as a solution without any context.
Google became shit not because of AI but because of SEO.
The enshittification was going on long before OpenAI was even a thing. Remember when we had to add “reddit” to the query just to get actual results instead of some badly written bloated text?
Google search became shit when they made the guy in charge of ads also in charge of search.
this is actually the correct case - it is both written about (Prabhakar Raghavan, look him up), and the exact mechanics of how they did it were detailed in documents surfaced in one of the lawsuits that Google recently lost (the ones that found them to be a monopoly)
The funny thing about Stack Overflow is that the vocal detractors have a kernel of truth to their complaints about elitism, but if you interact with them enough you realize they’re often the reason the gatekeeping is necessary to keep the quality high.
Stack Overflow mostly gave you highly specialised examples that wouldn’t suit your application. It’s easier to just ask an AI to write a simple loop for you whenever you forget a bit of syntax
wow imagine needing to understand the code you’re dealing with and not just copypasting a bunch of shit around
reading documentation and source code must be an excruciating amount of exercise for your poor brain - it has to even do something! poor thing
You’ve inadvertently pointed out the exact problem: LLM approaches can (unreliably) manage boilerplate and basic stuff but fail at anything more advanced, and by handling the basic stuff they give people false confidence that leads to them submitting slop (that gets rejected) to open source projects. LLMs, as the linked pivot-to-ai post explains, aren’t even at the level of occasionally making decent open source contributions.
Man, I remember Eclipse doing code completion for for loops and other common snippets in like 2005. LLM riders don’t even seem to know what tools have been in use for decades and think using an LLM for these things is somehow revolutionary.
the promptfondlers that make their way into our threads sometimes try to brag about how the LLM is the only way to do basic editor tasks, like wrapping symbols in brackets or diffing logs. it’s incredible every time
Promptfondlers 🤣
yep, I came up with promptfans (as a reference for describing all the weirdos who do free PR and hype work for this shit), and then @skillsissuer came up with promptfondlers for describing those that do this kind of bullshit
(and promptfuckers has become the collective noun I think of for all of them)
Air so polluted it makes people sick, but it’s all worth it because you can’t be arsed to remember the syntax of a for loop.
given your posts in this thread, I don’t think I trust your judgement on what less annoying looks like
it is not just like googling stuff if it actively fucks up already existing parts of the code