

Yes, but that’s not how LLMs work. My statement depends heavily on the fact that an LLM like GPT is coaxed into coherence by unsupervised or semi-supervised training. It is the fact that the training process works that is evidence of an internal model (of language and related concepts), not merely the fact that something outputs coherent statements.
Yes: speed, plus the benefits of all the tooling and static analysis they’re bringing to Python. Python is great for many things, but “analyzing Python” isn’t necessarily one of them.
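
For what it’s worth, a minimal sketch of the kind of dynamism that makes “analyzing Python” hard (a hypothetical example I’m making up here, not anything from a specific tool):

```python
# Hypothetical example: attributes created from runtime data defeat static analysis.

class Config:
    pass

def load(settings: dict) -> Config:
    cfg = Config()
    for key, value in settings.items():
        # Attributes are conjured from runtime strings, so a static analyzer
        # cannot know that `cfg.timeout` exists, let alone what type it has.
        setattr(cfg, key, value)
    return cfg

cfg = load({"timeout": 30, "retries": 3})
print(cfg.timeout)  # Fine at runtime, but invisible to a type checker or linter.
```

Tools can only approximate this kind of thing, which is part of why fast, dedicated analyzers (rather than pure-Python ones) are appealing.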