• BlueMonday1984@awful.systems · +6 · 19 hours ago

    On a wider front, I expect machine learning as a whole will see its funding go down the shitter once the AI bubble bursts - useful ML applications are losing funding in favour of magical chatbots, and the stench of AI is definitely wafting all over it as well.

    For an off-the-cuff prediction, I expect the number of AI/ML researchers to drop steeply in the coming years, both from funding for AI/ML drying up post-bubble and from researchers viewing the field as too tainted to touch.

    • David Gerard@awful.systemsOPM · +2 · 9 hours ago

      oh absolutely. there’s a lot of mathematicians who have discovered there’s fucking buckets of money right now, and it’s all gonna dry up. hope they’re socking it away while they can.

  • mapto · +1 · 1 day ago

    "The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.

    Microsoft alone plans to spend $80 billion on AI data centers this fiscal year. These companies can potentially sustain losses from AI development for years without facing the cash crises that typically trigger bubble collapses." https://mastodon.social/@arstechnica/115069034230061095

  • DarkCloud@lemmy.world · +13 / −14 · edited · 1 day ago

    It will stick around in the form of general use cases and specially curated training sets.

    Boston Dynamics, for instance, is training its humanoid robots to move like humans based on specially curated training sets.

    LLMs are good for all sorts of things.

    • Architeuthis@awful.systems · +25 · 1 day ago

      Nobody’s using datasets made of copyrighted literature and 4chan to teach robots how to move, what are you even on about.

    • courval@lemmy.world · +2 / −10 · 23 hours ago

      I noticed the down voting… Lemmy is full of delusional people who seem to have a problem with reality… And it seems to be getting worse. They’re also rude af and getting up votes for said rudeness. Maybe it’s time to look for pastures new or just ditch social media entirely. It seems to develop into a blob of shit no matter the infrastructure/population type…

    • saltesc@lemmy.world · +8 / −12 · 1 day ago

      Yep. Terrible at many things, very good at others. At the end of the day, very useful technology.

      Just as my grandmother always used to say, “You can’t use a knife to beat an egg but you can fuck a player up with it.” RIP Gammy. She was the sweetest.

      • self@awful.systems · +18 · 1 day ago

        just some uwu itsy bitsy critihype for my favorite worthless fashtech ❤️

        how about you and your friend and your grandma all go fuck themselves ❤️

        • courval@lemmy.world · +2 / −10 · 23 hours ago

          Tech can’t be fascist, only people can. You seem like you’re losing it… Get some help mate.

            • fullsquare@awful.systems · +12 · 23 hours ago

            tech absolutely can have political inclination, crypto is libertarian, surveillance is fash, and whatever ai-bros are cooking is somewhere in between

            • froztbyte@awful.systems · +10 · 23 hours ago

            holy shit, across 3 comments you did a full distributed darvo

            stellar example of shitheadery so early on a sunday!

            • self@awful.systems · +8 · 23 hours ago

            I have plenty of help! one of the people who actually post here is gonna come help me tell you to fuck yourself! isn’t that fun?

        • Mirshe@lemmy.world · +3 / −13 · 1 day ago

          There’s value in the underlying theory of machine learning. ML models are exceptionally good at sifting enormous amounts of data so long as you’re cross-checking outputs — which sounds like doing the work anyway, except now you know what to look for and check. Astronomers have been using this shit for years to sift radio telescope data, and particle physicists use it to sift collider results.

          Is there danger? Absolutely. But saying it’s worthless is like saying the theory of relativity is worthless because it created nukes. Throwing the underlying theory out because people are using it to do shitty things is going to rapidly shrink your world, because a LOT of science has been used to do a LOT of harm, yet we still use it.
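[The sift-then-cross-check pattern described above can be sketched in a few lines. Everything here — the simulated readings, the injected spikes, the 5-sigma cut standing in for a real ML model — is made up for illustration; the point is only the shape of the workflow: a cheap automated pass compresses a huge stream into a candidate list small enough for a human to verify every hit.]

```python
# Hypothetical sketch: sift a large noisy data stream down to a short
# candidate list, which a human then cross-checks by hand.
import random
import statistics

random.seed(42)

# Simulated noisy signal with a few injected spikes (the "interesting" events).
readings = [random.gauss(0.0, 1.0) for _ in range(10_000)]
for i in (1234, 5678, 9012):
    readings[i] += 8.0  # injected anomalies

mean = statistics.fmean(readings)
stdev = statistics.pstdev(readings)

# The "model": flag anything more than 5 sigma from the mean as a candidate.
candidates = [i for i, x in enumerate(readings) if abs(x - mean) > 5 * stdev]

# The candidate list is tiny compared to the raw data, so a human can
# cross-check every flagged hit instead of eyeballing all 10,000 readings.
print(len(readings), len(candidates))
```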

            • self@awful.systems · +14 · 1 day ago

            you saw this:

            LLMs are good for all sorts of things.

            and a bunch of waffle about unrelated ML advancements in robotics, and it confused you into giving me a shit lecture on tech I already know about? why?

              • scathliath@lemmy.dbzer0.com · +10 · 1 day ago

                Most frustrating thing about this AI Psychosis crap: they think machines trained on inherently biased data can communicate fully without bias, and they forget the rule parameters of the systems (and that they’re products designed in the Skinner mindset of monopolizing the user’s time and creating “positive experiences”).

                The machines aren’t designed to scrub bias, they’re designed to appear to while aligning with their corporate developer’s goals. (Which is also fucked from a consent-autonomy angle if they ever do design AGI, which is essentially what Detroit: Become Human was talking about.)