pod.geraspora.de

Excerpt from a message I just posted in a #diaspora team internal forum category. The context here is that I recently got pinged about slowness/load spikes on the diaspora* project web infrastructure (Discourse, Wiki, the project website, ...), and looking at the traffic logs makes me impressively angry.
In the last 60 days, the diaspora* web assets received 11.3 million requests. That averages out to 2.19 req/s - which honestly isn't that much. I mean, it's more than your average personal blog, but nothing that my infrastructure shouldn't be able to handle.
However, here's what's grinding my fucking gears. Looking at the top user agent statistics, here are the leaders:
2.78 million requests - or 24.6% of all traffic - is coming from Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot).
1.69 million requests - 14.9% - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonb...
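Per-agent counts like these can be tallied straight from the access log. A minimal sketch, assuming nginx's default "combined" log format (where the user agent is the last double-quoted field on each line); the function name is mine, not from the post:

```python
# Tally user agents from an nginx access log in "combined" format.
import re
from collections import Counter

# In the combined format the line ends with: "$http_referer" "$http_user_agent"
# so the user agent is the last double-quoted field.
UA_RE = re.compile(r'"([^"]*)"$')

def top_user_agents(lines, n=10):
    """Return the n most common user agents as (agent, count, percent) tuples."""
    counts = Counter()
    for line in lines:
        m = UA_RE.search(line.rstrip())
        if m:
            counts[m.group(1)] += 1
    total = sum(counts.values())
    return [(ua, c, 100.0 * c / total) for ua, c in counts.most_common(n)]
```

Feed it `open("/var/log/nginx/access.log")` (path is an assumption) and the top entries fall out immediately.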
Evidence for the DDoS attack that bigtech LLM scrapers actually are.
I used to get a lot of scrapers hitting my Lemmy instance, most of them using a bunch of IP ranges, some of them masquerading their user agents as a regular browser.
What’s been working for me is using a custom nginx log format with a custom fail2ban filter that lets me easily block new bots once I identify some kind of signature.
For instance, one of these scrapers almost always sends requests that are around 250 bytes long, using the user agent of a legitimate browser that always sends requests that are 300 bytes or larger. I can then add a fail2ban jail that triggers on seeing this specific user agent with the wrong request size.
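As an illustration of that kind of jail (the filter name, user agent, and byte cutoff here are all made up for the example), assuming a custom nginx log format that ends each line with the request size, e.g. `log_format botscan '$remote_addr "$http_user_agent" $request_length';`:

```ini
# /etc/fail2ban/filter.d/small-request-bot.conf (hypothetical name)
[Definition]
# Ban hosts presenting this browser user agent with a request under 300 bytes:
# \d{1,2} matches 0-99, [12]\d{2} matches 100-299.
failregex = ^<HOST> "Mozilla/5\.0 \(Windows NT 10\.0; Win64; x64\)[^"]*" (?:\d{1,2}|[12]\d{2})$
```

A real browser sending that UA produces 300+ byte requests and never matches; the impersonating scraper gets banned on sight.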
On top of this, I wrote a simple script that monitors my fail2ban logs and writes out CIDR ranges that appear too often (the threshold is proportional to 1.5^(32 - subnet_mask)). This file is then parsed by fail2ban to block whole ranges. There are some specific details I omitted regarding bantime and findtime that ensure a small malicious range will not be able to trick me into blocking a larger one. This has worked flawlessly to block “hostile” ranges with apparently 0 false positives for nearly a year.

What are you basing that prefix length decision off? whois/NIC allocation data?
is the decision loop running locally to any given f2b instance, or do you aggregate for processing then distribute blocklist?
either way, seems like an interesting approach for catching the type of shit that likes to snowshoe from random cloud providers while lying about its agent signature
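For concreteness, the range-escalation idea described above could be sketched as follows. The function name and the fixed /24 default are my assumptions; the 1.5^(32 - subnet_mask) threshold is the one from the comment:

```python
# Count banned IPs per CIDR prefix and emit ranges that have accumulated
# "too many" bans, with a threshold that grows as the prefix widens:
# a /24 needs > 1.5**8 ~ 25.6 banned hosts, a /16 needs > 1.5**16 ~ 656.
import ipaddress
from collections import Counter

def ranges_to_block(banned_ips, mask=24):
    """Return networks of the given prefix length exceeding the ban threshold."""
    threshold = 1.5 ** (32 - mask)
    nets = Counter(
        ipaddress.ip_network(f"{ip}/{mask}", strict=False)
        for ip in banned_ips
    )
    return [str(net) for net, count in nets.items() if count > threshold]
```

The output list can then be fed back to fail2ban (or an nftables set) to ban the whole range; the bantime/findtime interplay the commenter mentions is deliberately left out here.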