• Mirshe@lemmy.world · 2 days ago

    There’s value in the underlying theory of machine learning. ML models are exceptionally good at sifting enormous amounts of data so long as you’re cross-checking outputs - which sounds like doing the work anyway, except now you know what to look for and check. Astronomers have been using this shit for years to sift radio telescope data, and particle physicists use it to sift collider results.
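
    For a sense of what that workflow looks like, here’s a minimal sketch in Python using synthetic data and scikit-learn. The features, thresholds, and shortlist size are all made up for illustration; real telescope pipelines are far more involved. The point is just the shape of it: the model sifts a huge batch down to a short list, and a human still does the cross-check.

    ```python
    # Sketch of "ML sifts, human cross-checks". Everything here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Stand-ins for summary features of telescope candidates (e.g. pulse
    # width, S/N, dispersion measure); labels mark previously vetted real
    # detections (1) vs. RFI/noise (0).
    X_train = rng.normal(size=(5000, 3))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 1.0).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Sift a huge unlabeled batch down to the highest-scoring candidates...
    X_new = rng.normal(size=(100_000, 3))
    scores = model.predict_proba(X_new)[:, 1]
    shortlist = np.argsort(scores)[::-1][:50]

    # ...and THAT is what a human reviews. The model turned 100k rows into
    # 50, but the cross-check still decides what's real.
    print(f"flagged {len(shortlist)} of {len(X_new)} candidates for review")
    ```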

    Is there danger? Absolutely. But saying it’s worthless is like saying the theory of relativity is worthless because it created nukes. Throwing the underlying theory out because people are using it to do shitty things is going to rapidly shrink your world, because a LOT of science has been used to do a LOT of harm, yet we still use it.

    • self@awful.systems · 2 days ago

      you saw this:

      LLMs are good for all sorts of things.

      and a bunch of waffle about unrelated ML advancements in robotics, and it confused you into giving me a shit lecture on tech I already know about? why?

      • scathliath@lemmy.dbzer0.com · 2 days ago

        The most frustrating thing about this AI Psychosis crap is that people think machines trained on inherently biased data can communicate fully without bias, and they forget the rule parameters of these systems (and that they’re products designed in the Skinner-box mindset of monopolizing the user’s time and creating “positive experiences”).

        The machines aren’t designed to scrub bias; they’re designed to appear to while aligning with their corporate developers’ goals. (Which is also fucked from a consent-autonomy angle if they ever do design AGI, which is essentially what Detroit: Become Human was talking about.)