Good stuff near the end:
I will never forgive these people for what they’ve done to the computer, and the more I learn about both their intentions and actions the more certain I am that they are unrepentant and that their greed will never be sated.
These men lace our digital lives with asbestos and get told they’re geniuses for doing so because money comes out.
I care about you. The user. The person reading this. The person that may have felt stupid, or deficient, or ignorant, all because the services you pay for or that monetize you have been intentionally rigged against you.
You aren’t the failure. The services, the devices, and the executives are.
I don’t feel like Zitron completely addressed my remark in the parent comment, but we arrive at the same destination.
How feasible is it to configure my server to essentially perform a reverse slowloris attack on these LLM bots?
If they won’t play nice, then we need to reflect their behavior back onto themselves.
Or perhaps serve a 404, 304, or some other legitimate-looking static response that minimizes load on my server whilst giving them the least amount of data to train on.
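Both ideas above can be sketched with Python's standard library alone. The handler below is a minimal, illustrative tarpit, not a production defense: it assumes bots can be spotted by `User-Agent` substrings (the marker list here is hypothetical; real scrapers frequently spoof their agent string), and the drip delay and byte count are arbitrary knobs you would tune.

```python
# Minimal sketch of a reverse-slowloris "tarpit": suspected bots get a
# response dripped out one byte at a time, tying up their connection,
# while normal visitors are served instantly.
import time
from http.server import BaseHTTPRequestHandler

BOT_MARKERS = ("GPTBot", "CCBot", "ClaudeBot")  # hypothetical block list
DRIP_DELAY = 1.0   # seconds between bytes; raise to hold the client longer
DRIP_BYTES = 64    # total junk bytes to drip out

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if any(marker in ua for marker in BOT_MARKERS):
            # Drip worthless bytes very slowly. Alternatively, you could
            # self.send_error(404) here for a cheap, static-looking refusal.
            self.send_response(200)
            self.send_header("Content-Length", str(DRIP_BYTES))
            self.end_headers()
            for _ in range(DRIP_BYTES):
                self.wfile.write(b"x")
                self.wfile.flush()
                time.sleep(DRIP_DELAY)
        else:
            # Normal visitors get the real content immediately.
            body = b"hello, human\n"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
```

One caveat on the design: a threaded handler like this ties up one of *your* threads per tarpitted connection, so at scale you would want an asynchronous server (or nginx's `limit_rate` directive) to drip bytes without holding a thread.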