After two years’ wait, and Sam Altman talking up how scary smart it was, OpenAI finally released GPT-5! And it was … meh. It could program a bit better? It didn’t do anything else much better…
There’s value in the underlying theory of machine learning. ML models are exceptionally good at sifting enormous amounts of data so long as you’re cross-checking the outputs - which sounds like doing the work anyway, except now you know what to look for and where to check. Astronomers have been using this shit for years to sift radio telescope data, and particle physicists use it to sift collider results.
Is there danger? Absolutely. But saying it’s worthless is like saying the theory of relativity is worthless because it created nukes. Throwing the underlying theory out because people are using it to do shitty things is going to rapidly shrink your world, because a LOT of science has been used to do a LOT of harm, yet we still use it.
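The "sift with ML, cross-check the outputs" workflow mentioned above can be sketched in a few lines. This is a toy illustration, not any real astronomy or physics pipeline: the scoring function, thresholds, and data are all invented, with a crude statistical score standing in for a trained model.

```python
# Toy sketch of the sift-then-verify pattern: a cheap scorer flags
# candidate signals, and only the flagged ones go on to expensive
# human (or trusted-pipeline) verification.

def anomaly_score(signal, baseline_mean, baseline_std):
    """Crude stand-in for a trained model: peak distance from baseline,
    measured in standard deviations."""
    peak = max(signal)
    return (peak - baseline_mean) / baseline_std

def sift(signals, baseline_mean=1.0, baseline_std=0.5, threshold=3.0):
    """Return indices of signals worth a closer look (score above threshold)."""
    return [i for i, s in enumerate(signals)
            if anomaly_score(s, baseline_mean, baseline_std) > threshold]

signals = [
    [1.0, 1.1, 0.9, 1.0],   # ordinary noise
    [1.0, 5.0, 1.1, 0.9],   # big spike: candidate for follow-up
    [0.8, 1.2, 1.0, 1.1],   # ordinary noise
]
candidates = sift(signals)   # only index 1 survives the sift
```

The point of the pattern is the division of labor: the model throws away the 99% that's obviously nothing, and the verification step you'd have to do anyway is now focused on a short candidate list.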
and a bunch of waffle about unrelated ML advancements in robotics, and it confused you into giving me a shit lecture on tech I already know about? why?
Most frustrating thing about this AI Psychosis crap: they think machines trained on inherently biased data can communicate fully without bias, and they forget the rule parameters of the systems (and that they’re products designed in the Skinner mindset of monopolizing the user’s time and creating “positive experiences”).
The machines aren’t designed to scrub bias, they’re designed to appear to while aligning with their corporate developers’ goals. (Which is also fucked from a consent-autonomy angle if they ever do design AGI, which is essentially what Detroit: Become Human was talking about).
And if we were talking about “the underlying theory of machine learning”, you might have a point.