  • There are a few things I don’t like about this scoring system:

    • Why is there a “Top Provider Content Share” metric if it’s going to score the same as the “Top Provider User Share” every time?
    • Why isn’t the Top Provider Content Share higher than the user share? For instance, emails usually have at least one sender and one recipient, making it twice as likely that at least one of them is using Gmail (see the sketch after this list). If an email has 10 recipients across 10 different providers, each of those providers ends up with a copy of the data.
    • Why is ease of hosting a mail server rated so well? How is “leveraging email hosting services” decentralized in any way?
    • Why are we using a random repo, created a few hours ago by a random GitHub user, as a reference?
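
    To put a number on the second point, here’s a minimal sketch, assuming each party of an email picks a provider independently (a simplification):

    ```python
    # Probability that a given provider "sees" an email when each of its
    # parties picks a provider independently (a simplifying assumption).
    def content_share(user_share: float, parties: int) -> float:
        return 1 - (1 - user_share) ** parties

    # With a 30% user share, a plain sender+recipient email reaches the
    # provider ~51% of the time -- noticeably more than the user share.
    print(content_share(0.30, 2))   # ~0.51
    print(content_share(0.30, 11))  # ~0.98 for 1 sender + 10 recipients
    ```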

  • CIDR ranges (a.b.c.d/subnet_mask) contain 2^(32-subnet_mask) IP addresses. The 1.5 I’m using controls the filter’s sensitivity and can be tuned to anything between 1 and 2.

    Using 1 or smaller would mean the filter triggers at least as early for larger ranges as for a single IP (we want to avoid this, so that a single IP can’t trick you into banning a /16).

    Using 2 or more would mean tolerating more failures per IP for larger ranges, so you’d end up banning every smaller subrange before the filter ever gets a chance to trigger on the larger range.

    This runs locally against a single f2b instance, but it should work much the same with aggregated logs from multiple instances.
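
    To illustrate the scaling, here’s a quick sketch of the threshold math (the exact proportionality constant is an assumption, not something from my setup):

    ```python
    # Ban threshold proportional to base^(32 - mask): a base between 1
    # and 2 makes larger ranges need more failures than a single IP, but
    # far fewer than banning every /32 inside them one by one.
    def range_size(mask: int) -> int:
        return 2 ** (32 - mask)  # IPs in an a.b.c.d/mask range

    def threshold(mask: int, base: float = 1.5) -> float:
        return base ** (32 - mask)

    for mask in (32, 24, 16):
        print(f"/{mask}: {range_size(mask)} IPs, threshold ~{threshold(mask):.0f}")
    # /32: 1 IPs, threshold ~1
    # /24: 256 IPs, threshold ~26
    # /16: 65536 IPs, threshold ~657
    ```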


  • I used to get a lot of scrapers hitting my Lemmy instance, most of them using a bunch of IP ranges, and some of them disguising their user agents as regular browsers.

    What’s been working for me is a custom nginx log format combined with a custom fail2ban filter, which lets me easily block new bots once I identify some kind of signature.

    For instance, one of these scrapers almost always sends requests that are around 250 bytes long, while spoofing the user agent of a legitimate browser that always sends requests of 300 bytes or more. I can then add a fail2ban jail that triggers on this specific user agent paired with the wrong request size, as sketched below.
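
    As an illustration only (file names, the exact agent string, and the 300-byte cutoff below are made up), the log format just has to expose the request length next to the user agent, and the filter then matches the combination:

    ```nginx
    # Hypothetical log format: $request_length (request line + headers +
    # body, in bytes) recorded next to the user agent.
    log_format f2b '$remote_addr [$time_local] "$request" '
                   '$status $request_length "$http_user_agent"';
    access_log /var/log/nginx/f2b.log f2b;
    ```

    ```ini
    # /etc/fail2ban/filter.d/ua-size.conf (hypothetical): the spoofed
    # agent string paired with a request length under 300 bytes.
    [Definition]
    failregex = ^<HOST> \[.*\] ".*" \d+ (?:\d{1,2}|[12]\d{2}) "Mozilla/5\.0 \(X11;.*$
    ```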

    On top of this, I wrote a simple script that monitors my fail2ban logs and writes out CIDR ranges that get banned too often (the threshold is proportional to 1.5^(32-subnet_mask)). This file is then parsed by fail2ban to block whole ranges. I’ve omitted some details regarding bantime and findtime that ensure a small malicious range can’t trick me into blocking a larger one. This has worked flawlessly, blocking “hostile” ranges with apparently zero false positives for nearly a year.
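
    The script itself isn’t shown here, but a minimal sketch of the counting logic could look like this (the log path, the “Ban <ip>” wording, and the output file are assumptions, not my actual setup):

    ```python
    #!/usr/bin/env python3
    # Hypothetical sketch: count "Ban <ip>" lines per enclosing CIDR range
    # and emit any range whose count exceeds a threshold proportional to
    # 1.5^(32 - subnet_mask).
    import ipaddress
    import re
    from collections import Counter

    BAN_RE = re.compile(r"Ban (\d+\.\d+\.\d+\.\d+)")  # assumed log wording
    MASKS = (24, 16)  # range sizes to aggregate over
    BASE = 1.5        # the tunable sensitivity discussed above

    counts = Counter()
    with open("/var/log/fail2ban.log") as log:  # assumed path
        for line in log:
            match = BAN_RE.search(line)
            if match:
                ip = match.group(1)
                for mask in MASKS:
                    net = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
                    counts[net] += 1

    # Ranges over threshold; fail2ban can then parse this file to ban them.
    with open("banned-ranges.txt", "w") as out:  # assumed output file
        for net, hits in counts.items():
            if hits >= BASE ** (32 - net.prefixlen):
                out.write(f"{net}\n")
    ```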