pod.geraspora.de
Excerpt from a message I just posted in a #diaspora team internal forum category. The context here is that I recently got pinged about slowness/load spikes on the diaspora* project web infrastructure (Discourse, Wiki, the project website, ...), and looking at the traffic logs makes me impressively angry.
In the last 60 days, the diaspora* web assets received 11.3 million requests. That works out to 2.19 req/s - which honestly isn't that much. I mean, it's more than your average personal blog, but nothing that my infrastructure shouldn't be able to handle.
However, here's what's grinding my fucking gears. Looking at the top user agent statistics, here are the leaders:
2.78 million requests - or 24.6% of all traffic - is coming from Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot).
1.69 million requests - 14.9% - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonb...
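For anyone who wants to reproduce numbers like these on their own logs, here is a rough sketch in Python, assuming a standard nginx/Apache "combined" log format; the log path and the 60-day window are assumptions, not the project's actual setup:

```python
# Rough sketch: tally user agents from a "combined"-format access log.
# LOG path and the 60-day window are assumptions.
import re
from collections import Counter

LOG = "/var/log/nginx/access.log"          # hypothetical path
UA_RE = re.compile(r'"[^"]*" "([^"]*)"$')  # ... "referer" "user agent"

total = 0
agents = Counter()
with open(LOG, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        total += 1
        match = UA_RE.search(line.rstrip())
        agents[match.group(1) if match else "-"] += 1

if total:
    # Assuming the log covers the same 60-day window quoted above.
    print(f"{total} requests, {total / (60 * 86400):.2f} req/s")
    for ua, count in agents.most_common(5):
        print(f"{count:>9}  {100 * count / total:5.1f}%  {ua}")
```

The user agent is simply the last quoted field in the combined format, which is why a single regex anchored to the end of the line is enough.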
Evidence for the DDoS attack that bigtech LLM scrapers actually are.
How feasible is it to configure my server to essentially perform a reverse slowloris attack on these LLM bots?
If they won’t play nice, then we need to reflect their behavior back onto themselves.
Or perhaps serve a 404, 304 or some other legitimate-looking static response that minimizes load on my server whilst giving them the least amount of data to train on.
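Not anyone's actual setup, but a minimal sketch of both ideas, assuming the bot traffic can be routed to a small Python service in front of the real app; the user-agent markers, port, and timings below are all made up:

```python
# Minimal sketch: tarpit or cheaply reject requests whose User-Agent looks
# like an LLM scraper. Markers, port and timings are made-up assumptions.
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BOT_MARKERS = ("GPTBot", "Amazonbot", "ClaudeBot")  # hypothetical list
TARPIT = True  # True: drip bytes slowloris-style; False: cheap 404

class BotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if any(marker in ua for marker in BOT_MARKERS):
            if TARPIT:
                self.tarpit()
            else:
                self.send_error(404)  # cheapest legitimate-looking answer
            return
        # Everyone else gets a cheap static placeholder in this sketch.
        body = b"ok\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def tarpit(self):
        # Reverse-slowloris: promise a body, then drip it out one byte at
        # a time so the scraper's connection sits open doing nothing useful.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", "60")
        self.end_headers()
        try:
            for _ in range(60):
                self.wfile.write(b".")
                self.wfile.flush()
                time.sleep(10)             # roughly ten minutes per "page"
        except (BrokenPipeError, ConnectionResetError):
            pass                           # the bot gave up early

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), BotHandler).serve_forever()
```

In practice the drip speed would need tuning against whatever client timeouts the scrapers use; the 404 branch is the cheaper option if the goal is purely to minimize load.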
The only simple possible ways are:
From the article, they try to bypass all of them.
It then becomes a game of whack-a-mole with big tech 😓
The most infuriating part for me is that it's done by the big names, and not some random startup.
Edit: Now that I think about it, this doesn't prove it is done by Google or Amazon: it could be someone using random popular user agents.
I do believe there are blocklists for their IPs out there; that should mitigate things a little.
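As a sketch of that blocklist idea: OpenAI does publish GPTBot's IP ranges as JSON, and checking a client address against a list of CIDR prefixes is only a few lines of Python. The file name and exact JSON layout below are assumptions:

```python
# Sketch: check a client IP against published CIDR ranges for a crawler.
# The file name and JSON layout are assumptions about such a blocklist.
import ipaddress
import json

def load_ranges(path="bot-ranges.json"):
    # Assumed layout: {"prefixes": [{"ipv4Prefix": "52.230.152.0/24"}, ...]}
    with open(path) as fh:
        data = json.load(fh)
    return [
        ipaddress.ip_network(p.get("ipv4Prefix") or p["ipv6Prefix"])
        for p in data["prefixes"]
    ]

def is_blocked(client_ip, ranges):
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ranges)

if __name__ == "__main__":
    ranges = load_ranges()
    print(is_blocked("52.230.152.10", ranges))
```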
One way to game these kinds of bots is to add a hidden link to a randomly generated page, which itself contains a link to another random page, and so on. The bots will still consume resources, but they will be stuck parsing random garbage indefinitely.
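A minimal sketch of such a link maze, using only the Python standard library; the /maze/ path prefix, word list, and port are made up:

```python
# Minimal sketch of the "infinite maze of random pages" idea above, using
# only the standard library; the /maze/ prefix, word list and port are mine.
import hashlib
import random
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "tarpit", "crawler"]

def fake_page(path):
    # Derive the page from a hash of its path so revisits look consistent.
    rng = random.Random(hashlib.sha256(path.encode()).hexdigest())
    text = " ".join(rng.choice(WORDS) for _ in range(200))
    links = " ".join(
        f'<a href="/maze/{rng.getrandbits(64):x}">more</a>' for _ in range(5)
    )
    return f"<html><body><p>{text}</p>{links}</body></html>".encode()

class MazeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = fake_page(self.path)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8081), MazeHandler).serve_forever()
```

Deriving each page from a hash of its path keeps the maze stable across revisits without storing anything, and the pages stay tiny so the cost lands on the crawler rather than the server.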
I know there is a website that is doing that, but I forget its name.
Edit: This is not the one I had in mind, but I find that https://www.fleiner.com/bots/ describes a good honeypot.
maybe you mean this incident https://news.ycombinator.com/item?id=40001971
This is it, thanks: https://www.web.sp.am/