• BigMuffin69@awful.systems · 6 months ago

    Yann and co. just dropped Llama 3.1. Now there’s an open source model on par with OAI and Anthropic, so who the hell is going to pay these nutjobs for API access when people can get roughly the same quality for free, without the risk of handing their data to a third party?

    These chuckle fucks are cooked.

    • Takumidesh@lemmy.world · 6 months ago

      For “free,” except you need thousands of dollars of hardware upfront, plus a full hardware/software stack that you have to maintain yourself.

      This is like saying Azure is cooked because you can rack-mount your own PC.

      • CubitOom@infosec.pub · 6 months ago

        That’s mostly true. But if you have a gaming GPU in a PC running Linux, you can easily use Ollama to run the 8-billion-parameter Llama 3 locally without much fuss.
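
        A minimal sketch of what that looks like in practice, assuming Ollama is already installed, the model has been pulled (e.g. via `ollama pull llama3`), and the server is listening on its default port 11434; the helper name `ask` is just for illustration:

            # Minimal sketch: ask a local Ollama server a question over its REST API.
            # Assumes `ollama pull llama3` has already been run and the server is
            # listening on its default port, 11434.
            import json
            import urllib.request

            def ask(prompt: str, model: str = "llama3") -> str:
                payload = json.dumps({
                    "model": model,
                    "prompt": prompt,
                    "stream": False,  # one complete JSON reply instead of a token stream
                }).encode("utf-8")
                req = urllib.request.Request(
                    "http://localhost:11434/api/generate",
                    data=payload,
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    return json.loads(resp.read())["response"]

            print(ask("Why is the sky blue?"))

        Nothing leaves your machine: the request goes to localhost, which is the whole privacy argument upthread.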

        • BlueMonday1984@awful.systems · 6 months ago

          Just an off-the-cuff prediction: I fully expect AI bros are gonna shift their focus to local models post-bubble, for two main reasons:

          1. Power efficiency - whilst local models are hardly power sippers, they don’t need the planet-killing, money-burning server farms that the likes of ChatGPT depend on (and which have helped define AI’s public image, now that I think about it). As such, they won’t need VC billions to keep going - just some dipshit with cash to spare and a GPU to abuse (and there are plenty of those out in the wild).

          2. Freedom/control - compared to ChatGPT, DALL-E, et al., which are pretty locked down in an attempt to keep users from embarrassing their parent corps or inviting public scrutiny, a local model will answer whatever dumbshit question you ask or make whatever godawful slop you want - no questions asked, no prompt injection/jailbreaking needed. For the kind of weird TESCREAL nerd that AI attracts, the benefits are pretty obvious.

          • vrighter@discuss.tchncs.de · 6 months ago

            You almost always get better efficiency at scale. If the same work is done on lots of separate machines instead of in one datacenter, more energy gets used overall: you’re doing the same work, but not on chips designed specifically for the task, and without the batching that lets one accelerator serve many requests at once. And if it’s already really inefficient at scale, then you’re just SOL.
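
            A hedged back-of-the-envelope sketch of that point - every wattage and throughput figure below is a purely illustrative assumption, not a measurement:

                # Purely illustrative back-of-envelope: joules per generated token for
                # a batched datacenter accelerator vs. a single-user home GPU. Every
                # number here is a hypothetical placeholder, not a measurement.

                DC_WATTS = 700.0          # assumed draw of one datacenter accelerator
                DC_TOKENS_PER_SEC = 6000  # assumed aggregate throughput across a big batch

                HOME_WATTS = 300.0        # assumed draw of one gaming GPU
                HOME_TOKENS_PER_SEC = 40  # assumed throughput serving a single user

                def joules_per_token(watts: float, tokens_per_sec: float) -> float:
                    # Energy per token is just power divided by throughput.
                    return watts / tokens_per_sec

                dc = joules_per_token(DC_WATTS, DC_TOKENS_PER_SEC)
                home = joules_per_token(HOME_WATTS, HOME_TOKENS_PER_SEC)
                print(f"datacenter: {dc:.2f} J/token   home: {home:.2f} J/token")
                # With these assumed figures, the batched datacenter comes out roughly
                # 60x cheaper per token - the "efficiency at scale" point above.

            The actual numbers vary wildly by model and hardware; the sketch only shows why amortizing one set of weights over a big batch tends to beat many idle-most-of-the-time home GPUs.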