as seen here and here, some instances are feeding posts wholesale to prompts, for what seem like extremely unsound reasons to me

any of you run into this shit yet?

  • [deleted]@piefed.world · 4 days ago

    Having an LLM confirm a decision is the same thing as having the LLM make the decision and then checking whether the mod agrees with it. If they would have chosen not to rule based on the LLM output, then the LLM was part of the decision-making process. The order does not matter.

    If the LLM outputs anything that implies a determination at any step, that automatically makes it part of the process.

    • Simon_Shitewood@lemmy.ml · 4 days ago

      If they would have chosen not to rule based on the LLM output

      They literally just said they had already made the decision when unruffled ran his little experiment. To be clear, it’s extremely pathetic of db0 to use AI in any capacity, but you are also showing a severe lack of literacy.

      • self@awful.systems · 4 days ago

        hey fucko, you know we don’t have to take their word for it right? we can read all the relevant posts and come to the conclusion that actually the use of LLMs as stated fucking sucks, and that we don’t fucking want it. we can read something and come to a different conclusion than you, believe it or not.