From 2ff9c01eb332bfde0613bdbef6a257de90e7b724 Mon Sep 17 00:00:00 2001
From: WeebDataHoarder <57538841+WeebDataHoarder@users.noreply.github.com>
Date: Mon, 14 Apr 2025 13:53:06 +0200
Subject: [PATCH] Add Happy Eyeballs information to README

---
 README.md | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 02561df..cc9a8a5 100644
--- a/README.md
+++ b/README.md
@@ -195,6 +195,12 @@ You can modify the path where challenges are served and package name, if you don
 
 No source code editing or forking necessary!
 
+### IPv6 Happy Eyeballs challenge retry
+
+In case a client connects over IPv4 first, then IPv6, due to [Fast Fallback / Happy Eyeballs](https://en.wikipedia.org/wiki/Happy_Eyeballs), the challenge will automatically be retried.
+
+This is tracked by tagging challenges with a readable flag indicating the type of address.
+
 ## Why?
 
 In the past few years this small git instance has been hit by waves and waves of scraping. This was usually fought back by random useragent blocks for bots that did not follow [robots.txt](/robots.txt), until the past half year, where low-effort mass scraping was used more prominently.
@@ -203,7 +209,7 @@ Recently these networks go from using residential IP blocks to sending requests
 
 If the server gets sluggish, more requests pile up. Even when denied they scrape for weeks later. Effectively spray and pray scraping, process later.
 
-At some point about 300Mbit/s of incoming requests (not including the responses) was hitting the server. And all at nonsense URLs
+At some point about 300Mbit/s of incoming requests (not including the responses) was hitting the server. And all of them nonsense URLs, or hitting archive/bundle downloads per commit.
 
 If AI is so smart, why not just git clone the repositories?
 
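The section added by this patch describes a mechanism rather than showing it: each issued challenge carries a readable flag for the client's address family, and a challenge is reissued when the same client reappears over the other family after Happy Eyeballs fallback. Below is a minimal, hypothetical Go sketch of that idea; the `Challenge` type, `addressFamily`, and `shouldRetry` names are illustrative assumptions, not the project's actual API.

```go
// Hypothetical sketch: tag a challenge with the client's address family and
// allow an automatic retry when the family changes (Happy Eyeballs fallback).
// This is not go-away's real implementation, only an illustration of the idea.
package main

import (
	"fmt"
	"net/netip"
)

// addressFamily returns a readable flag ("ipv4" or "ipv6") for a client address.
func addressFamily(addr netip.Addr) string {
	if addr.Unmap().Is4() {
		return "ipv4"
	}
	return "ipv6"
}

// Challenge is a stand-in for whatever per-challenge state the server keeps.
type Challenge struct {
	Token  string
	Family string // flag recorded when the challenge was issued
}

// shouldRetry reports whether a challenge should be reissued rather than
// rejected, because the client switched address family since it was issued.
func shouldRetry(c Challenge, current netip.Addr) bool {
	return c.Family != addressFamily(current)
}

func main() {
	// Challenge originally issued to a client connecting over IPv6.
	issuedOver := netip.MustParseAddr("2001:db8::1")
	c := Challenge{Token: "example", Family: addressFamily(issuedOver)}

	// The same client later arrives over IPv4 due to Fast Fallback.
	nowOver := netip.MustParseAddr("192.0.2.10")
	fmt.Println("retry challenge:", shouldRetry(c, nowOver)) // prints: retry challenge: true
}
```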