The edge is where it’s at
Interview with Nick about the post:
Video: https://www.youtube.com/watch?v=a5rLzNxRjEQ&list=UU9rJrMVgcXTfa8xuMnbhAEA
Podcast: https://pivottoai.libsyn.com/20251107-nicholas-weaver-the-futile-future-of-the-gigawatt-datacenter
Runtime: 26 min 53 sec

Eh, local LLMs don't really scale: you can't do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren't currently on work laptops and VMs.
Spark-type machines (NVIDIA's DGX Spark and its kin) will do better eventually, but for now they're supposedly geared more toward training than inference; reportedly, running a 70B model there returns around three tokens (a word or two) per second, which is a snail's pace.
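For what it's worth, that figure is about what memory bandwidth predicts: decoding a dense model streams every weight through memory once per token, so tokens/sec tops out at bandwidth divided by model size. A minimal back-of-envelope sketch in Python, where the bandwidth and quantization numbers are assumptions rather than benchmarks:

```python
# Rough ceiling on decode speed for a dense LLM on a
# unified-memory box. Each generated token reads every
# weight from memory, so:
#   tokens/sec <= memory_bandwidth / model_size_in_bytes

params = 70e9           # 70B-parameter dense model
bytes_per_param = 0.5   # assumes ~4-bit quantization
bandwidth = 273e9       # bytes/sec; assumed LPDDR5x figure for the Spark

model_bytes = params * bytes_per_param
ceiling = bandwidth / model_bytes
print(f"ceiling: {ceiling:.1f} tokens/sec")  # ~7.8 tok/s
# Real decode rates land well under the ceiling, so a
# reported ~3 tok/s for a 70B model is about what you'd expect.
```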
Yeah. LLMs are fat. Lesser ML works great tho.
@dgerard @Architeuthis
Lard Language Model