• 0 Posts
  • 107 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • Following up because the talk page keeps providing good material…

    Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven’t seen people try to weaponize the rules to push their views many times before.

    Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can’t win with some people…

    Looking back at the original lesswrong discussion organizing the brigade to improve the wikipedia article, someone tried explaining the rules to Habryka back then, and they were dismissive.

    I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.

    Yes, Habryka, because you clearly have such a good understanding of the Wikipedia rules and norms…

    Also, heavily downvoted in the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for “access to ground truth”. I guess even lesswrong knows that is bullshit.


  • The wikipedia talk page is some solid sneering material. It’s like Habryka and HandofLixue can’t imagine any legitimate reason why Wikipedia has the norms it does, and they can’t imagine how a neutral Wikipedian could come to write that article about lesswrong.

    Eigenbra accurately calling them out…

    “I also didn’t call for any particular edits”. You literally pointed to two sentences that you wanted edited.

    Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can’t speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.

    As to your question:

    Was it intentional to try to pick a fight with Wikipedians?

    It seems to be ignorance on Habryka’s part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia’s reasonable policies, they seem to be doubling down.






  • If you wire the LLM directly into a proof-checker (like with AlphaGeometry) or an evaluation function (like with AlphaEvolve), and the raw LLM outputs aren’t allowed to do anything on their own, you can get reliability. So you can hope for better; it just requires a narrow domain and a much more thorough approach than slapping some extra-firm instructions, in an unholy blend of markup languages, into the prompt.
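
    Roughly, the pattern looks like this (a toy sketch; llm_propose and formal_check are hypothetical stand-ins, not AlphaGeometry’s or AlphaEvolve’s actual interfaces):

    ```python
    # Toy sketch of the generate-and-verify pattern: the LLM only proposes,
    # a formal checker decides, and nothing unverified ever escapes.
    def solve_with_verifier(problem, llm_propose, formal_check, max_attempts=50):
        for _ in range(max_attempts):
            candidate = llm_propose(problem)                # raw LLM output: untrusted
            ok, verified_result = formal_check(problem, candidate)
            if ok:                                          # only checker-approved results count
                return verified_result
        return None                                         # give up rather than return unverified text
    ```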

    In this case, solving math problems is actually something Google search could previously do (before dumping AI into it) and Wolfram Alpha can do, so it really seems like Google should be able to offer a product that gets math problems right. Of course, that solution would probably involve bypassing the LLM altogether through preprocessing and postprocessing.
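
    Something like this toy sketch of the routing idea (SymPy standing in for the real solver; the regex and the call_llm fallback are purely illustrative):

    ```python
    import re
    import sympy

    ARITHMETIC = re.compile(r"^[\d\s\.\+\-\*/\(\)\^]+$")   # crude "is this just arithmetic?" check

    def answer(query, call_llm):
        """Route plain arithmetic around the LLM to a solver that actually computes."""
        text = query.strip()
        if ARITHMETIC.match(text):
            return str(sympy.sympify(text.replace("^", "**")))   # deterministic and correct
        return call_llm(query)   # hypothetical fallback for everything that isn't plain math

    # answer("2*(3 + 4)^2", call_llm=lambda q: "…")  ->  "98"
    ```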

    Also, btw, LLMs can be (technically speaking) deterministic if the temperature is set all the way down; it’s just that this doesn’t actually improve their performance at math or anything else. And they would still be “random” in the sense that minor variations in the prompt or previous context can induce seemingly arbitrary changes in output.
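
    For example, greedy decoding with Hugging Face transformers is deterministic for a fixed prompt (a minimal sketch; gpt2 is used only because it’s small, and hardware-level floating-point quirks can still creep in):

    ```python
    # Temperature "all the way down": do_sample=False means greedy decoding,
    # so the same prompt yields the same tokens -- just not smarter math.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Q: What is 17 * 23?\nA:", return_tensors="pt")
    out = model.generate(**inputs, do_sample=False, max_new_tokens=16)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```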



  • We barely understand how LLMs actually work

    I would be careful how you say this. Eliezer likes to go on about giant inscrutable matrices to fearmonger, and the promptfarmers use the (supposed) mysteriousness as another avenue for crithype.

    It’s true that reverse engineering any specific output or task takes a lot of effort, requires access to the model’s internal weights, and hasn’t been done for most tasks, but the techniques for doing so exist. And in general there is a good high-level conceptual understanding of what makes LLMs work.
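
    For a taste of what that kind of internals access looks like, here’s a minimal sketch of capturing a transformer block’s activations with a PyTorch forward hook (the layer path is specific to GPT-2’s implementation; real interpretability work goes well beyond this):

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    captured = {}

    def grab(module, inputs, outputs):
        captured["block5"] = outputs[0].detach()   # hidden states leaving block 5

    handle = model.transformer.h[5].register_forward_hook(grab)
    with torch.no_grad():
        model(**tokenizer("The model has no idea this is happening.", return_tensors="pt"))
    handle.remove()

    print(captured["block5"].shape)   # (batch, seq_len, hidden_dim): raw material for analysis
    ```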

    which means LLMs don’t understand their own functioning (not that they “understand” anything strictly speaking).

    This part is absolutely true. If you catch them in a mistake, most of their data about how to respond comes from how humans respond (or, at best, from fine-tuning on other LLM output), and they don’t have any way of checking their own internals, so the words they say in response to a mistake are just more bs unrelated to anything.





  • So, I’ve been spending too much time on subreddits with a heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I’ve noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don’t involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can’t do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].

    Like, I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all: the LLM component replaces a set of heuristics that make suggestions on proof approaches, while the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.
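
    Schematically, the division of labor looks something like this (not DeepMind’s code; every name here is a made-up placeholder):

    ```python
    # The symbolic engine does the actual proving; the LLM only suggests
    # auxiliary constructions when the engine stalls, and every suggestion
    # is validated by the engine before it counts.
    def prove(problem, symbolic_engine, llm_suggest, max_rounds=10):
        state = symbolic_engine.init(problem)
        for _ in range(max_rounds):
            state = symbolic_engine.deduce_closure(state)      # rigid formal deduction
            if symbolic_engine.is_proved(state):
                return symbolic_engine.extract_proof(state)
            suggestion = llm_suggest(state)                    # e.g. "add the midpoint of AB"
            if symbolic_engine.is_valid_construction(state, suggestion):
                state = symbolic_engine.apply(state, suggestion)
        return None
    ```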

    I don’t really have anywhere I’m going with this, just something I noted that I don’t want to waste the energy repeatedly re-explaining on reddit, so I’m letting a primal scream out here to get it out of my system.



  • The promptfondlers on places like /r/singularity are trying so hard to spin this paper. “It’s still doing reasoning, it just somehow mysteriously fails when its reasoning gets too long!” or “LRMs improved with an intermediate number of reasoning tokens” or some other excuse. They are missing the point that short and medium-length “reasoning” traces are potentially the result of pattern memorization. If the LLMs were actually reasoning and not just pattern memorizing, then extending the number of reasoning tokens proportionately with the task length should let them maintain performance on the tasks instead of catastrophically failing. Because this isn’t the case, Apple’s paper is evidence for what big names like Gary Marcus and Yann LeCun, and many pundits and analysts, have been repeatedly saying: LLMs achieve their results through memorization, not generalization, especially not out-of-distribution generalization.




  • I’ve been waiting for this. I wish it had happened sooner, before DOGE could do as much damage as it did, but better late than never. Donald Trump isn’t going to screw around, and, ironically, DOGE has shown you don’t need congressional approval or actual legal authority to screw over people funded by the government, so I am looking forward to Donald screwing over SpaceX’s or Starlink’s government contracts. On the return side… Elon doesn’t have that many ways of properly screwing with Trump; even if he has stockpiled blackmail material, I don’t think it will be enough to turn MAGA against Trump. Still, I’m somewhat hopeful this will lead to larger infighting between the techbro alt-righters and the Christofascist alt-righters.


    • “tickled pink” is a saying for finding something humorous

    • “BI” is Business Insider, the outlet that published the linked article

    • “chuds” is a term for online alt-right losers

    • OFC: of fucking course

    • “more dosh” means more money

    • “AI safety and alignment” is the standard thing we sneer at here: making sure the coming acausal robot god is a benevolent god. Occasionally reporters misunderstand it, or more PR-savvy promptfarmers misrepresent it, to mean stuff like stopping LLMs from saying racist shit or giving you recipes that would accidentally poison you, but this isn’t its central meaning. (To give the AI safety and alignment cultists way too much charity, making LLMs not say racist shit or give harmful instructions has been something of a spin-off application of their plans and ideas to “align” AGI.)