On a wider front, I expect machine learning as a whole will see its funding go down the shitter once the AI bubble bursts - useful ML applications are losing funding in favour of magical chatbots, and the stench of AI is definitely wafting all over it as well.
For an off-the-cuff prediction, I expect the number of AI/ML researchers to drop steeply in the coming years, both from funding for AI/ML drying up post-bubble and from researchers viewing the field as too tainted to touch.
oh absolutely. there’s a lot of mathematicians who have discovered there’s fucking buckets of money right now, and it’s all gonna dry up. hope they’re socking it away while they can.
how did they all choose this weekend to find us? it’s like they’re trying to get their bans in before the downtime
ADQ any% speedruns
(A = assholes)
"The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.
Microsoft alone plans to spend $80 billion on AI data centers this fiscal year. These companies can potentially sustain losses from AI development for years without facing the cash crises that typically trigger bubble collapses." https://mastodon.social/@arstechnica/115069034230061095
It will stick around in the form of general use cases and specialized training sets.
Boston Dynamics, for instance, is training their humanoid robots to move like humans based on specialized training sets.
LLMs are good for all sorts of things.
Nobody’s using datasets made of copyrighted literature and 4chan to teach robots how to move, what are you even on about.
That example is more ML and not LLM, no?
Surveillance algorithms for the Police State. *drops mic*
tailored spam, scams, finely targeted propaganda/influence operations, erosion of expertise,
Un-fucking-fortunately, gotta go out of our way to be as confusing as possible.
I noticed the downvoting… Lemmy is full of delusional people who seem to have a problem with reality… And it seems to be getting worse. They’re also rude af and getting upvotes for said rudeness. Maybe it’s time to look for pastures new or just ditch social media entirely. It seems to develop into a blob of shit no matter the infrastructure/population type…
oh no the downvotes
don’t let the door hit you on the way the fuck out
dont let … the door … …
read to the tune of, naturally
oh woe is me, never in all my 68 posts have I seen such rudeness
People don’t think it be like this but it do
k fuck off
Charming arguments…
oh my dearie me, I shall have to clutch my motherfuckin pearls
Yep. Terrible at many things, very good at others. At the end of the day, very useful technology.
Just as my grandmother always used to say, “You can’t use a knife to beat an egg but you can fuck a player up with it.” RIP Gammy. She was the sweetest.
just some uwu itsy bitsy critihype for my favorite worthless fashtech ❤️
how about you and your friend and your grandma all go fuck themselves ❤️
Tech can’t be fascist, only people can. You seem like you’re losing it… Get some help mate.
Guns don’t kill people, people kill people.
tech absolutely can have political inclination, crypto is libertarian, surveillance is fash, and whatever ai-bros are cooking is somewhere in between
holy shit, across 3 comments you did a full distributed darvo
stellar example of shitheadery so early on a sunday!
I have plenty of help! one of the people who actually post here is gonna come help me tell you to fuck yourself! isn’t that fun?
There’s value in the underlying theory of machine learning. ML models are exceptionally good at sifting enormous amounts of data so long as you’re cross-checking outputs - which sounds like doing the work anyway, except now you know what to look for and check. Astronomers have been using this shit for years to sift radio telescope data, and particle physicists use it to sift collider results.
Is there danger? Absolutely. But saying it’s worthless is like saying the theory of relativity is worthless because it created nukes. Throwing the underlying theory out because people are using it to do shitty things is going to rapidly shrink your world, because a LOT of science has been used to do a LOT of harm, yet we still use it.
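To make the sift-then-verify point concrete, here’s a minimal sketch (Python with scikit-learn; the data is synthetic and the feature count, threshold, and sizes are invented for illustration, not taken from any real telescope or collider pipeline):

```python
# Minimal sketch: use an ML classifier to pre-filter a huge pile of
# candidate events, then hand only the flagged ones to a slower,
# trusted cross-check. Everything here is synthetic, illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Pretend these are features extracted from telescope/collider events.
n_events = 100_000
X = rng.normal(size=(n_events, 8))
# Synthetic ground truth: a tiny fraction of events are "interesting".
y = (X[:, 0] + 0.5 * X[:, 3] > 3.0).astype(int)

# Train on a small labelled subset, as you would with hand-vetted data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:5_000], y[:5_000])

# Sift the rest: keep only events the model scores as likely signal.
scores = clf.predict_proba(X[5_000:])[:, 1]
flagged = np.flatnonzero(scores > 0.9) + 5_000

# The crucial step: the flagged subset is small enough to cross-check
# by hand or with a slower, trusted analysis. The model narrows the
# search; it does not replace verification.
print(f"{len(flagged)} of {n_events - 5_000} events flagged for review")
```

The model doesn’t do the science; it just shrinks the haystack so the actual verification step is feasible.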
you saw this:
LLMs are good for all sorts of things.
and a bunch of waffle about unrelated ML advancements in robotics, and it confused you into giving me a shit lecture on tech I already know about? why?
And if we were talking about “the underlying theory of machine learning”, you might have a point.
Most frustrating thing about this AI Psychosis crap: they think machines trained on inherently biased data can communicate fully without bias, and they forget the rule parameters of the systems (and that they’re products designed in the Skinner mindset of monopolizing the user’s time and creating “positive experiences”).
The machines aren’t designed to scrub bias, they’re designed to appear to while aligning with their corporate developers’ goals. (Which is also fucked from a consent-autonomy angle if they ever do design AGI, which is essentially what Detroit: Become Human was talking about).