

This is especially ironic given all of Elon’s claims about making Grok “truth seeking.” Well, “truth seeking” was probably always code for building an LLM that would parrot Elon’s views.
Elon may have failed at making Grok peddle racist conspiracy theories like he wanted, but that shouldn’t be taken as proof that LLMs can’t be manipulated this way. He probably went with the laziest option available, directly prompting the model, rather than fine-tuning it on racist content or anything more sophisticated.
He also wants instant gratification, and taking months to have a team put together a racist dataset is far more effort than he’s willing to invest.