

well, I can’t counter it, because I don’t think they do know how it works. the theory is shallow, and yet the outputs of, say, an LLM are of remarkably high quality, in a domain (language) that is impossibly baroque. the lack of theory and fundamental understanding is a huge problem for them, because it means “improvements” can only come from throwing money and conventional engineering at their systems. this is what I’ve heard from people for about ten years.
to me that also means it isn’t something that needs to be countered. it’s something whose context needs to be explained. it’s bad for the ai industry that they don’t know what they’re doing.


another analogy might be an ancient builder who gets really good at building pyramids, and who, by pouring enormous amounts of money and resources into a project, manages to build a stunningly large pyramid. “I’m now going to build something as tall as what will one day be called the empire state building,” he says.
problem: he has no idea how to do this. clearly some new building concepts are needed. but maybe he can figure those out. in the meantime he’s going to keep using the pyramid design and make the pyramids bigger and bigger, even as the amount of stone required (and the cost) scales with the cube of the height, and just say he’s working up to the reallyyyyy big building…
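
(a quick sanity check on that scaling, assuming the builder keeps the same proportions as he scales up: the volume of a square pyramid with base side b and height h is

    V = (1/3) b^2 h

keeping the proportions means b = k·h for some fixed slope k, so

    V = (k^2/3) h^3, i.e. V ∝ h^3

doubling the height means 2^3 = 8 times the stone. the costs run away from the builder much faster than the height does.)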