We have paused all crawling as of Feb 6th, 2025 until we implement robots.txt support. Stats will not update during this period.

  • Boomer Humor Doomergod@lemmy.world · 5 days ago

    Robots.txt is a lot like email in that it was built for a far simpler time.

    It would be better if the server could detect bots and send them down a rabbit hole rather than trusting randos to abide by the rules.
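
    Purely as a sketch of that “rabbit hole” idea (nothing any project actually ships): a handler that feeds a suspected bot an endless trickle of junk links instead of the real page. The user-agent keyword check is a crude stand-in for real detection, which, as the reply below notes, is much harder:

    import random
    import string
    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    SUSPECT_WORDS = ("bot", "crawler", "spider")  # crude stand-in heuristic

    class TarpitHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            agent = self.headers.get("User-Agent", "").lower()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            if not any(word in agent for word in SUSPECT_WORDS):
                self.wfile.write(b"<html>the real page</html>")
                return
            # Suspected bot: drip out meaningless links forever (the rabbit hole).
            try:
                while True:
                    junk = "".join(random.choices(string.ascii_lowercase, k=8))
                    self.wfile.write(f'<a href="/{junk}">{junk}</a>\n'.encode())
                    time.sleep(5)  # keep the bot waiting between crumbs
            except BrokenPipeError:
                pass  # the bot gave up and disconnected

    # ThreadingHTTPServer(("", 8080), TarpitHandler).serve_forever()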

    • jagged_circle@feddit.nl · 4 days ago (edited)

      It is not possible to reliably detect bots. Attempting to do so will invariably lead to false positives that deny access to your content, usually for the most at-risk and marginalized folks.

      Just implement a cache and forget about it. If read-only content is causing you too much load, you’re doing something terribly wrong.
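
      For illustration only (not something from the thread), a minimal sketch of that kind of cache in Python; the ttl_cache decorator, the ten-minute TTL and the render_stats_page endpoint are all made-up names:

      import time
      from functools import wraps

      def ttl_cache(ttl_seconds=300):
          """Cache a function's result for ttl_seconds (hypothetical helper)."""
          def decorator(fn):
              store = {}  # args -> (timestamp, value)
              @wraps(fn)
              def wrapper(*args):
                  now = time.monotonic()
                  if args in store and now - store[args][0] < ttl_seconds:
                      return store[args][1]
                  value = fn(*args)
                  store[args] = (now, value)
                  return value
              return wrapper
          return decorator

      @ttl_cache(ttl_seconds=600)
      def render_stats_page(instance):
          # Stand-in for the expensive database work; with the cache, a burst
          # of crawler hits costs one real render every ten minutes at most.
          return f"<html>stats for {instance}</html>"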

      • Rimu@piefed.social · 5 days ago

        Maybe the definition of the term “crawler” has changed, but crawling used to mean downloading a web page, parsing the links, and then downloading all those links, parsing those pages, and so on until the whole site has been downloaded. If links to other sites were found in that corpus, the same process would repeat for those. Obviously this could cause heavy load, hence robots.txt.

        Fedidb isn’t doing anything like that, so I’m a bit bemused by this whole thing.
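
        For anyone unfamiliar, here is a rough sketch of that kind of crawl in Python: breadth-first link-following confined to one site, with a robots.txt check via the standard library. The seed URL, the “ExampleCrawler” user agent and the page limit are placeholders, not anything FediDB actually uses:

        import urllib.robotparser
        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urlparse
        from urllib.request import urlopen

        class LinkParser(HTMLParser):
            """Collects href targets from anchor tags."""
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self.links += [v for k, v in attrs if k == "href" and v]

        def crawl(seed, max_pages=100):
            robots = urllib.robotparser.RobotFileParser()
            robots.set_url(urljoin(seed, "/robots.txt"))
            robots.read()

            seen, queue, fetched = {seed}, deque([seed]), 0
            while queue and fetched < max_pages:
                url = queue.popleft()
                if not robots.can_fetch("ExampleCrawler", url):
                    continue  # robots.txt says this path is off limits
                with urlopen(url) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
                fetched += 1
                parser = LinkParser()
                parser.feed(html)
                for link in parser.links:
                    absolute = urljoin(url, link)
                    # Stay on the seed's host; links to other sites would start
                    # the same process over there, which is where the load adds up.
                    if urlparse(absolute).netloc == urlparse(seed).netloc and absolute not in seen:
                        seen.add(absolute)
                        queue.append(absolute)
            return fetched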

    • mesamune@lemmy.world (OP) · 5 days ago

      No idea honestly. If anyone knows, let us know! I don’t think it’s necessarily a bad thing: if their crawler was being too aggressive, it could accidentally DDoS smaller servers. I’m hoping that’s what they’re doing, and that they’ll respect the robots.txt that some sites have.

      • Ada@lemmy.blahaj.zone · 5 days ago

        Gotosocial has a setting in development that is designed to baffle bots that don’t respect robots.txt. FediDB didn’t know about that feature and thought gotosocial was trying to inflate their stats.

        In the arguments that went back and forth between the devs of the apps involved, it turned out that FediDB was ignoring robots.txt, i.e. it was badly behaved.

          • Pika@sh.itjust.works · 4 days ago (edited)

            Might be related to this issue: link here

            It was a good read. Personally speaking, I think it probably would have been better to block gotosocial (if that’s possible, since it seems stuff gets blocked when you check it) until proper robots.txt support was provided; I found it weird that they paused the entire system.

            That being said, if I understand the issue correctly, I take the position that it is gotosocial that is misbehaving. They are poisoning data sets that are required for any type of federation to occur (nodeinfo, v1 and v2 statistics), on the grounds that the other program is not respecting the robots file, arguing that this only stops crawlers when it’s clear that more than just crawlers are being hit.

            IMO this looks bad; it definitely puts a bad taste in my mouth regarding the project. I’m not saying an operator shouldn’t have to listen to a robots.txt, but when you implement a system that negatively hits third parties, the response shouldn’t be the equivalent of “sucks to suck, that’s a you problem”. Your implementation should respond with zero or null; any other value and you are just being abusive and hostile as a program.
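
            For context, this is roughly the kind of nodeinfo lookup a stats collector does (a sketch using only the standard library; the domain and function name are placeholders). Returning null for the user total, as argued above, is trivial for a collector to skip, whereas an inflated number is not:

            import json
            from urllib.request import urlopen

            def fetch_user_count(domain):
                """Follow nodeinfo discovery for a domain; return its user total, or None."""
                # /.well-known/nodeinfo lists the nodeinfo documents an instance offers.
                with urlopen(f"https://{domain}/.well-known/nodeinfo") as resp:
                    links = json.load(resp).get("links", [])
                if not links:
                    return None
                with urlopen(links[0]["href"]) as resp:
                    nodeinfo = json.load(resp)
                # A null/missing total can simply be skipped by the collector.
                return nodeinfo.get("usage", {}).get("users", {}).get("total")

            # print(fetch_user_count("example.social"))  # placeholder domain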