Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • corbin@awful.systems · 3 days ago

    I’m gonna be polite, but your position is deeply sneerworthy; I don’t really respect folks who don’t read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:

    There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as “blindly optimistic and driven by human exceptionalism.” … “We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”

    At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:

    In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. “It will mean that we trust these things more, share more data with them and be more open to persuasion.” But the greater risk from the illusion of consciousness is a “moral corrosion”, he says. “It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” – meaning that we might have compassion for robots, but care less for other humans.

    A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it’s a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I’ll try to salvage your position:

    Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it’s definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.

    • blakestacey@awful.systems · 3 days ago (edited)

      I am not sure that having “an illusory object of study” is a standard that helps define pseudoscience in this context. Consider UFOlogy, for example. It arguably “studies” things that do exist — weather balloons, the planet Venus, etc. Pseudoarchaeology “studies” actual inscriptions and actual big piles of rocks. Wheat gluten and seed oils do have physical reality. It’s the explanations put forth which are unscientific, while attempting to appeal to the status of science. The “research” now sold under the Artificial Intelligence banner has become like Intelligent Design “research”: Computers exist, just like bacterial flagella exist, but the claims about them are untethered.

      • blakestacey@awful.systems · 3 days ago

        Scientists and philosophers have spilled a tanker truck of ink about the question of how to demarcate science from non-science or define pseudoscience rigorously. But we can bypass all that, because the basic issue is in fact very simple. One of the most fundamental parts of living a scientific life is admitting that you don’t know what you don’t know. Without that, it’s well-nigh impossible to do the work. Meanwhile, the generative AI industry is built on doing exactly the opposite. By its very nature, it generates slop that sounds confident. It is, intrinsically and fundamentally, anti-science.

        Now, on top of that, while being anti-science the AI industry also mimics the form of science. Look at all the shiny PDFs! They’ve got numbers in them and everything. Tables and plots and benchmarks! I think that any anti-science activity that steals the outward habits of science for its own purposes will qualify as pseudoscience, by any sensible definition of pseudoscience. In other words, wherever we draw the line or paint the gray area, modern “AI” will be on the bad side of it.

    • scruiser@awful.systems · 3 days ago

      No, I think BlueMonday is being reasonable. The article has some quotes from scientists with actually relevant expertise, but it uncritically mixes them with LLM hype and speculation in a typical both-sides sort of thing that gives lay readers the (false) impression that both sides are equal. This sort of journalism may appear balanced, but it has contributed to all kinds of controversies (from global warming to Intelligent Design to medical pseudoscience) in which the viewpoints of cranks, uninformed busybodies, autodidacts of questionable ability, and deliberate fraudsters are presented as equal to actually educated and researched viewpoints.

    • o7___o7@awful.systems · 3 days ago

      …fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific.

      When a good man gazes into the palantir and sees L Ron Hubbard looking back

      • Amoeba_Girl@awful.systems · 3 days ago

        To be fair I also believe psychology is by and large pseudoscience, but the answer to it is sociology, not the MRI gang.

        • scruiser@awful.systems · 3 days ago

          There are parts of the field with major problems, like the sorts of studies that get run on 20 student volunteers and then turned into a pop-psychology factoid, tossed around and over-generalized even as the original study fails to replicate. But there are also parts that are actually good science.