It’s not always easy to distinguish between existentialism and a bad mood.

  • 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • I like how he even had someone with art expertise literally explain it to him and he writes it off as “lol she must have super artist vision for details.”

    I’ll quote her, since it’s the only worthwhile part of the article:

    When real pictures have details, the details have logic to them. I think of Ancient Gate being in the genre “superficially detailed, but all the details are bad and incoherent”. The red and blue paint and blank stone feel like they’re supposed to evoke worn-ness, but it’s not clear what style this is supposed to be a worn-down version of. One gets the feeling that if all the paint were present it would look like a pile of shipping containers, if shipping containers were only made in two colors.

    It has ornaments, sort of, but they don’t look like anything, or even a worn-down version of anything. There are matchy disks in the left, center, and right, except they’re different sizes, different colors, and have neither “detail which parses as anything” nor stark smoothness. It has stuff that’s vaguely evocative of Egyptian paintings if you didn’t look carefully at all. The left column has a sort of door with a massive top-of-doorway-thingy over it. Why? Who knows? The right column doesn’t, and you’d expect it to. Instead, the right column has 2.5 arches embossed into it that just kind of halfheartedly trail off.

    I’m not even sure how to describe the issues with the part a little above the door. It kind of sets a rhythm but then it gets distracted and breaks it. Are these semi-top protruding squares supposed to be red or blue? Ehh, whatever. Does the top border protrude the whole way? Ehh, mostly. Human artists have a secret technique, which is that if they don’t know what all the details should be they get vague. And you can tell it’s vague and you’re not drawn to go “hmm, this looks interesting, oh wait it’s terrible”.

    I think part of the problem with AI art is that it produces stuff non-artists think looks good but which on close inspection looks terrible, and so it ends up turning search results that used to be good into sifting through terrible stuff. Imagine if everyone got the ability to create mostly nutritionally adequate meals for like five cents, but they all were mediocre rehydrated powder with way too much sucralose or artificial grape flavor or such. And your friends start inviting you over to dinner parties way more often because it’s so easy to deal with food now, but practically every time, they serve you sucralose protein shake. (Maybe they do so because they were used to almost never eating food? This isn’t a perfect analogy.) Furthermore, imagine people calling this the future of food and saying chefs are obsolete. You’d probably be like “wow, I’m happy that you have easy access to food you enjoy, and it is convenient for me to use sometimes, but this is kind of driving me crazy”. I feel like this is relevant to artist derangement over AI art, though of course a lot of it is economic anxiety and I’m a hobbyist who doesn’t feel like a temporarily embarrassed professional and thus can’t relate.

    according to someone who goes by Ilzo on the socials.

    image in question: https://pbs.twimg.com/media/GQ3wnEZWAAA_mY8?format=jpg



  • It might be just the all-but-placeholder characters that give it a B-movie vibe. I’d say it’s a book that’s both dumber and smarter than people give it credit for, but even the half-baked stuff gets you thinking. Especially the self-model stuff, and how problematic it can be to even discuss the concept in depth in languages that have the concept of a subject so deeply baked in.

    I thought that at worst one could bounce off it into the actually relevant literature, like Thomas Metzinger’s pioneering, seminal, and terribly written thesis, or Sacks’ The Man Who Mistook His Wife For A Hat.

    Blindsight being referenced to justify LLM hype is news to me.



  • That’s a good way to put it. Another thing that was really en vogue at one point and might have been considered hard-ish scifi when it made it into Rifters was all the deep water telepathy via quantum brain tubules stuff, which now would only be taken seriously by wellness influencers.

    not a fan of trump for example

    In one of the Eriophora stories (I think it’s officially the Sunflower Cycle) there’s a throwaway mention of the Kochs having been lynched along with other billionaires in the early days of a mass mobilization to save what’s savable in the face of environmental disaster (and also to rapidly push to the stars, because a Kardashev-2 civilization may have emerged in the vicinity, so an escape route could become necessary in the next few millennia, and this scifi story needs a premise).



  • Sentience is overrated

    Not sentience, self-awareness, and not in a particularly prescriptive way.

    Blindsight is pretty rough and probably Watts’ worst book that I’ve read, but it’s original, ambitious, and mostly worth it as an introduction to thinking about selfhood in a certain way, even if this type of scifi isn’t one’s cup of tea.

    It’s a book that makes more sense after the fact, i.e. after reading the appendix on the phenomenal self-model hypothesis. Which is no excuse: cardboard characters that exist because the author is struggling to make a point (that intelligence at odds with self-awareness would produce individuals with nonexistent self-reflection who more or less coast as an extension of their ultrafuturistic functionality) are still cardboard characters that you have to spend a whole book with.

    I remember he handwaves a lot of stuff regarding intelligence, like at some point straight up writing that what you are reading isn’t really what’s being said; it’s just the jargonaut POV character dumbing it way down for you, which is to say he doesn’t try that hard for hyperintelligence show-don’t-tell. Echopraxia is better in that regard.

    It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.

    Not really; there are some common ideas, mostly because TESCREALism already is scifi tropes awkwardly cobbled together, but usually what TESCREALs think is awesome is presented in a cautionary light or as straight-up dystopian.

    Like, there’s some really bleak transhumanism in this book, and the view that human cognition is already starting to become alien in the one-hour-into-the-future setting is kind of anti-longtermist, at least in the sense that the utilitarian calculus gets way messed up.

    And also I bet there’s nothing in The Sequences about Captain Space Dracula.



  • This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, as if you’re supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesn’t crumble.

    Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.

    Dog, you’ve lost the plot.

    FWIW a charity providing the means to stun shrimp before death by freezing, as is the case here, isn’t indefensible, but the way it’s framed as some sort of ethical slam dunk, even compared to, say, donating to refugee care, just makes it too obvious you’d be giving money to people who are weird in a bad way.



  • I could go over Wolfram’s discussion of biological pattern formation, gravity, etc., etc., and give plenty of references to people who’ve had these ideas earlier. They have also had them better, in that they have been serious enough to work out their consequences, grasp their strengths and weaknesses, and refine or in some cases abandon them. That is, they have done science, where Wolfram has merely thought.

    Huh, it looks like Wolfram also pioneered rationalism.

    Scott Aaronson also turns up later for having written a paper that refutes a specific Wolfram claim on quantum mechanics, reminding us once again that very smart dumb people are actually a thing.

    As a sidenote, if anyone else is finding the plain-text-disguised-as-an-HTML-document format of this article a tad grating, your browser probably has a reader mode that will make it way more presentable; it’s F9 on Firefox.