• 4 Posts
  • 163 Comments
Joined 2 years ago
Cake day: July 19th, 2023

  • Steve Yegge has created Gas Town, a mess of Claude Code agents forced to cosplay as a k8s cluster with a Mad Max theme. I can’t think of better sneers than Yegge’s own commentary:

    Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.

    If you’re familiar with the Towers-of-Hanoi problem then you can appreciate the contrast between Yegge’s solution and a standard solution; in general, recursive solutions are fewer than ten lines of code.

    Gas Town solves the MAKER problem (20-disc Hanoi towers) trivially with a million-step wisp you can generate from a formula. I ran the 10-disc one last night for fun in a few minutes, just to prove a thousand steps was no issue (MAKER paper says LLMs fail after a few hundred). The 20-disc wisp would take about 30 hours.

    For comparison, solving 20 discs in the famously-slow CPython programming system takes less than a second, with most of the time spent printing lines to the console. The solution length is exponential in the number of discs: 2^20 − 1 moves, just over one million lines total. At thirty hours, Yegge’s harness solves Hanoi at fewer than ten lines per second! Also, I can’t help but notice that he didn’t verify the correctness of the solution; by “run” he means that he got an LLM to print out a solution-shaped line.
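    For reference, the textbook recursive solution really does fit in under ten lines. A minimal Python sketch (peg names and the move format are my own, arbitrary choices):

```python
def hanoi(n, src="A", dst="C", via="B", moves=None):
    """Towers of Hanoi: return the list of moves for n discs."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)  # park n-1 discs on the spare peg
        moves.append(f"{src}->{dst}")       # move the largest disc
        hanoi(n - 1, via, dst, src, moves)  # stack the n-1 discs back on top
    return moves

print(len(hanoi(20)))  # 2**20 - 1 = 1048575 moves, computed in well under a second
```

    Correctness is just as easy to check: replay the moves against three stacks and assert that a larger disc never lands on a smaller one, which is exactly the verification step missing from the thirty-hour run.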


  • NEOM is a laundry for money, religion, genocidal displacement, and the Saudi reputation among Muslims. NEOM is meant to replace Wahhabism, the Saudi family’s uniquely violent fundamentalism, with a watered-down secularist vision of the House of Saud in which the monarchs are generous with money, kind to women, and righteously uphold their obligations as keepers of Mecca. NEOM is not only The Line, the mirrored city; it comprises multiple projects, each built on the Potemkin-village pattern to assure investors that the money is not being misspent. In each project, the House of Saud has targeted nomads and minority tribes, displacing indigenous peoples inconvenient to the Saudi ethnostate with the excuse that those tribes are squatting on holy land which NEOM’s shrines will further glorify.

    They want you to look at the smoke and mirrors in the desert because otherwise you might see the blood of refugees and the bones of the indigenous. A racing team is one of the cheaper distractions.



  • Nah, it’s more to do with stationary distributions. Most tokens tend to move towards it; only very surprising tokens can move away. (Insert physics metaphor here.) Most LLM architectures are Markov, so once they get near that distribution they cannot escape on their own. There can easily be hundreds of thousands of orbits near the stationary distribution, each fixated on a simple token sequence and unable to deviate. Moreover, since most LLM architectures have some sort of meta-learning (e.g. attention) they can simulate situations where part of a simulation can get stuck while the rest of it continues, e.g. only one chat participant is stationary and the others are not.
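    A toy illustration of the absorbing-orbit point (my own sketch, not taken from any actual LLM): a three-token Markov chain where one state loops back to itself with probability one. Once probability mass reaches that state, no further step of the chain can move it out.

```python
# Toy 3-token Markov chain. The "loop" state is absorbing: it can only
# emit itself again, so a chain near it cannot escape on its own.
P = {
    "the":  {"the": 0.1, "cat": 0.6, "loop": 0.3},
    "cat":  {"the": 0.5, "cat": 0.2, "loop": 0.3},
    "loop": {"loop": 1.0},
}

def step(dist):
    """One step of the chain: push a distribution over tokens through P."""
    out = {s: 0.0 for s in P}
    for s, p in dist.items():
        for t, q in P[s].items():
            out[t] += p * q
    return out

dist = {"the": 1.0, "cat": 0.0, "loop": 0.0}
for _ in range(50):
    dist = step(dist)
# Virtually all probability mass is now stuck in the absorbing state.
print(round(dist["loop"], 6))  # prints 1.0 (to six places)
```

    The non-absorbing states leak 0.3 of their mass into the loop on every step, so after fifty steps only about 0.7^50 ≈ 2e-8 of the mass is still free; that is the one-way traffic toward fixation described above.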







  • Today, in fascists not understanding art, a suckless fascist praised Mozilla’s 1998 branding:

    This is real art; in stark contrast to the brutalist, generic mess that the Mozilla logo has become. Open source projects should be more daring with their visual communications.

    Quoting from a 2016 explainer:

    [T]he branding strategy I chose for our project was based on propaganda-themed art in a Constructivist / Futurist style highly reminiscent of Soviet propaganda posters. And then when people complained about that, I explained in detail that Futurism was a popular style of propaganda art on all sides of the early 20th century conflicts… Yes, I absolutely branded Mozilla.org that way for the subtext of “these free software people are all a bunch of commies.” I was trolling. I trolled them so hard.

    The irony of a suckless developer complaining about brutalism is truly remarkable; these fuckwits don’t actually have a sense of art history, only what looks cool to them. Big lizard, hard-to-read font, edgy angular corners, and red-and-black palette are all cool symbols to the teenage boy’s mind, and the fascist never really grows out of that mindset.


  • Sadly, it’s a Chomskian paper, and those are just too weak for today. Also, I think it’s sloppy and too Eurocentric. Here are some of the biggest gaffes or stretches I found by skimming Moro’s $30 book, which I obtained by asking a shadow library for “impossible languages” (ISBN doesn’t work for some reason):

    book review of Impossible Languages (Moro, 2016)
    • Moro claims that it’s impossible for a natlang to have free word order. There are many arguable counterexamples, like Arabic or Mandarin, but I think the best is Latin, whose word order is famously free. On one hand, of course word order matters for parsers; on the other hand, the Transformer’s attention mechanism is itself order-invariant (word order only enters via positional encodings), so this isn’t really an issue for machines. Ironically, on p73-74, Moro rearranges the word order of a Latin phrase while translating it, suggesting either a use of machine translation or an implicit acceptance of Latin’s free word order. I could be harsher here; Moro draws mostly on modern Romance and Germanic languages to make their points about word order, and the sensitivity of English and Italian to word order doesn’t imply universality.
    • Speaking of universality, both the generative-grammar and universal-grammar hypotheses are assumed. By “impossible” Moro means a non-recursive language with a non-context-free grammar, or perhaps a language failing to satisfy some nebulous geometric requirements.
    • Moro claims that sentences without truth values are lacking semantics. Gödel and Tarski are completely unmentioned; Moro ignores any sort of computability of truth values.
    • Russell’s paradox is indirectly mentioned and incorrectly analyzed; Moro claims that Russell fixed Frege’s system by redefining the copula, but Russell and others actually repaired it by restricting how sets may be built (Russell’s theory of types, and later Zermelo’s axiom of separation).
    • It is claimed that Broca’s area uniquely lights up for recursive patterns but not for patterns which depend on linear word order (e.g. a rule that a sentence is negated iff the fourth word is “no”), so that Broca’s area can’t do context-sensitive processing. But humans clearly compute XOR when counting nested negations in many languages, and can internalize that XOR well enough to handle utterances with many repetitions of e.g. “not not”.
    • Moro mentions Esperanto and Volapük as auxlangs in their chapter on conlangs. They completely fail to recognize the past century of applied research: Interlingue and Interlingua, Loglan and Lojban, Láadan, etc.
    • Sanskrit is Indo-European. Also, that’s not how junk DNA works; it genuinely isn’t coding or active. Also also, that’s not how Turing patterns work; they are genuine cellular automata and it’s not merely an analogy.
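    The parity point deserves spelling out: counting nested negations is just a running XOR over the token stream, a rule about linear order rather than hierarchy, and it is trivial to state. A toy sketch (my own, not one of Moro’s experimental stimuli):

```python
def negated(utterance):
    """A linear-order rule: a clause is negated iff it contains an odd number of 'not's."""
    parity = False
    for word in utterance.lower().split():
        if word == "not":
            parity = not parity  # running XOR over the token stream
    return parity

print(negated("it is not not not true"))  # True: three negations flip the polarity
```

    Humans internalize exactly this kind of rule for stacked negations, which sits awkwardly next to the claim that our language faculty rejects rules defined over linear position.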

    I think that Moro’s strongest point, on which they spend an entire chapter reviewing fairly solid neuroscience, is that natural language is spoken and heard, such that a proper language model must be simultaneously acoustic and textual. But because they don’t address computability theory at all, they completely fail to address the modern critique that machines can learn any learnable system, including grammars; the worst that they can say is that such a machine is literally not a human.



  • They (or the LLM that summarized their findings and may have hallucinated part of the post) say:

    It is a fascinating example of “Glue Code” engineering, but it debunks the idea that the LLM is natively “understanding” or manipulating files. It’s just pushing buttons on a very complex, very human-made machine.

    Literally nothing that they show here is bad software engineering. It sounds like they expected that the LLM’s internals would be 100% token-driven inference-oriented programming, or perhaps a mix of that and vibe code, and they are disappointed that it’s merely a standard Silicon Valley cloudy product.

    My analysis is that Bobby and Vicky should get raises; they aren’t paid enough for this bullshit.

    By the way, the post probably isn’t faked. Google-internal go/ URLs do leak out sometimes, usually in comments. Searching GitHub for that specific URL turns up one hit in a repository which claims to hold a partial dump of the OpenAI agents. Here is combined_apply_patch_cli.py. The agent includes a copy of ImageMagick; truly, ImageMagick is our ecosystem’s cockroach.


  • Now I’m curious about whether Disney funded Glaze & Nightshade. Quoting Nightshade’s FAQ, their lab has arranged to receive donations which are washed through the University of Chicago:

    If you or your organization may be interested in pitching in to support and advance our work, you can donate directly to Glaze via the Physical Sciences Division webpage, click on “Make a gift to PSD” and choose “GLAZE” as your area of support (managed by the University of Chicago Physical Sciences Division).

    Previously, on Awful, I noted the issues with Nightshade and the curious fact that Disney is the only example stakeholder named in the original Nightshade paper, as well as the fact that Nightshade’s authors wonder about the possibility of applying Glaze-style techniques to feature-length films.





  • Linear no-threshold isn’t under attack, but under review. The game-theoretic conclusions haven’t changed: limit overall exposure, radiation is harmful, more radiation means more harm. The practical consequences of tweaking the model concern e.g. evacuation zones in case of emergency; excess deaths from radiation exposure are balanced against deaths caused by evacuation, so the choice of model determines the exact shape of evacuation zones. (I suspect that you know this but it’s worth clarifying for folks who aren’t doing literature reviews.)
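    To make the model-tweaking concrete, here is a toy dose-response comparison (illustrative only: the slope is roughly the ICRP-style nominal ~5.5%-per-sievert cancer-risk coefficient, and the 100 mSv threshold is a hypothetical choice, not anyone’s recommendation):

```python
SLOPE = 5.5e-5  # excess lifetime cancer risk per mSv (~5.5% per Sv, nominal)

def risk_lnt(dose_mSv):
    """Linear no-threshold: any dose, however small, carries proportional risk."""
    return SLOPE * dose_mSv

def risk_threshold(dose_mSv, threshold_mSv=100.0):
    """Threshold model: excess risk accrues only above the threshold dose."""
    return SLOPE * max(0.0, dose_mSv - threshold_mSv)

# The two models agree at high doses and diverge at low ones -- which is
# exactly where evacuation-zone boundaries get drawn.
for dose in (10, 50, 200, 1000):
    print(dose, risk_lnt(dose), risk_threshold(dose))
```

    Under LNT, every evacuee’s avoided dose counts toward the balance against evacuation deaths; under a threshold model, evacuating a 50 mSv zone buys nothing, so the optimal zone shrinks. That is the sense in which the choice of model reshapes the zone without changing the basic conclusion that more radiation means more harm.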