There are also Python bindings for the fork, for anyone who uses Python: https://github.com/lexiforest/curl_cffi
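For anyone curious, the basic usage is roughly this (a hedged sketch; the exact impersonation target names depend on the curl_cffi version you have installed):

    # Fetch a page with a Chrome-like TLS/HTTP fingerprint via curl_cffi's
    # requests-style API. "chrome" maps to a recent supported Chrome build.
    from curl_cffi import requests

    resp = requests.get("https://example.com", impersonate="chrome")
    print(resp.status_code, resp.headers.get("content-type"))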
If what I've seen from CloudFlare et al. is any indication, it's the exact opposite --- the amount of fingerprinting and "exploitation" of implementation-defined behaviour has increased significantly in the past few months, likely in an attempt to kill off other browser engines; the incumbents do not like competition at all.
The enemy has been trying to spin it as "AI bots DDoSing" but one wonders how much of that was their own doing...
No, they're discussing increased fingerprinting / browser profiling recently and how it affects low-market-share browsers.
This is entirely the web crawler 2.0 apocalypse.
I love this curl, but I worry that if a component takes on the role of deception in order to "keep up", it accumulates a legacy of hard-to-maintain "compatibility" baggage.
Ideally it should just say... "hey I'm curl, let me in"
The problem, of course, lies with a server that is picky about dress codes, and that problem in turn is caused by crooks sneaking in disguised, so it's rather a circular chicken-and-egg thing.
What? Ideally it should just say "GET /path/to/page".
Sending a user agent is a bad idea. That shouldn't be happening at all, from any source.
If not, then fingerprinting could still be done to some extent at the IP layer. If the TTL value in the IP header is below 64, it is obvious this is either not running on modern Windows, or is running on a modern Windows machine that has had its default TTL changed, since by default modern Windows starts packets with a TTL of 128 while most other platforms start at 64. And since those other platforms have no trouble communicating over the internet, essentially every path is shorter than 64 hops, so IP packets from modern Windows will always be seen by the remote end with TTLs at or above 64 (likely just above).
That said, it would be difficult to fingerprint at the IP layer, although it is not impossible.
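A minimal sketch of that heuristic, assuming you can read the observed TTL of an incoming packet (e.g. via IP_RECVTTL or a packet capture) and that the sender used one of the common defaults of 64, 128 or 255:

    # Guess the sender's initial TTL (and likely OS family) from the observed TTL.
    # Assumes the path is well under 64 hops, which holds on today's internet.
    def guess_initial_ttl(observed_ttl: int) -> int:
        for initial in (64, 128, 255):   # Linux/macOS/BSD, Windows, some network gear
            if observed_ttl <= initial:
                return initial
        return 255

    def likely_os_family(observed_ttl: int) -> str:
        return {64: "Linux/macOS/BSD-like",
                128: "Windows-like",
                255: "router/embedded"}[guess_initial_ttl(observed_ttl)]

    print(likely_os_family(117))  # "Windows-like"         (128 minus 11 hops)
    print(likely_os_family(53))   # "Linux/macOS/BSD-like" (64 minus 11 hops)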
Only if you're using PaaS/IaaS providers that don't give you low-level access to the TCP/IP stack. If you're running your own servers, it's trivial to fingerprint all manner of TCP/IP properties.
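For example, with raw access to the interface, something like this (an illustrative sketch using scapy, run as root) already logs the IP TTL plus the TCP window size and options of every incoming SYN, which is enough to start telling OS families and client stacks apart:

    # Passively log fingerprint material from incoming TCP SYNs (requires scapy + root).
    from scapy.all import sniff, IP, TCP

    def log_syn(pkt):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:  # SYN bit
            print(f"{pkt[IP].src}: ttl={pkt[IP].ttl} "
                  f"win={pkt[TCP].window} opts={pkt[TCP].options}")

    sniff(filter="tcp[tcpflags] & tcp-syn != 0", prn=log_syn, store=False)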
If everywhere is reachable in under 64 hops, then packets sent from systems that use an initial TTL of 128 will arrive at the destination with a TTL still over 64 (or else packets from the systems that start at 64 would already be getting discarded en route).
If you count up from zero, then you'd also have to include in every packet how high it can go, so that a router has enough information to decide whether the packet is still live. Otherwise every connection in the network would have to share the same fixed TTL, or obey whatever limit was set by the random routers it passes through. If you count down, you're always just checking against zero.
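In code terms, the difference is roughly this (purely illustrative, not any real router's logic):

    # Counting down: the packet carries one field and every hop checks it against zero.
    def forward_count_down(ttl: int) -> int | None:
        ttl -= 1
        return ttl if ttl > 0 else None          # None means "discard"

    # Counting up: the packet would have to carry both a hop count and its own limit,
    # and every hop would have to compare the two.
    def forward_count_up(hops: int, hop_limit: int) -> int | None:
        hops += 1
        return hops if hops < hop_limit else None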
Based on the fact that they are requesting the same absolutely useless and duplicative pages (like every possible combination of query params, even if it does not lead to unique content) from me hundreds of times per URL, and are able to distribute so much that I'm only getting 1-5 requests per day from each IP...
...cost does not seem to be a concern for them? Maybe they won't actually mind ~5 seconds of CPU on a proof of work either? They are really a mystery to me.
I currently am using CloudFlare Turnstile, which incorporates proof of work but also various other signals. It's working, but I know it does have false positives. I am working on implementing a simpler nothing-but-JS proof of work (SHA-512-based), and am going to switch that in; if it works, great (because I don't want to keep out the false positives!), but if it doesn't, back to Turnstile.
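The core of it is small enough to sketch here (assuming a leading-zero-bits difficulty check over SHA-512; a real deployment would also bind the challenge to the client and expire it):

    # Sketch of a SHA-512 proof of work: the client searches for a nonce,
    # the server re-checks it with a single hash.
    import hashlib, os

    DIFFICULTY_BITS = 20  # ~1M hashes on average; tune so a browser takes a few seconds

    def meets_difficulty(challenge: bytes, nonce: int) -> bool:
        digest = hashlib.sha512(challenge + str(nonce).encode()).digest()
        return int.from_bytes(digest, "big") >> (512 - DIFFICULTY_BITS) == 0

    def solve(challenge: bytes) -> int:
        nonce = 0
        while not meets_difficulty(challenge, nonce):
            nonce += 1
        return nonce

    challenge = os.urandom(16)                 # server issues this per visitor
    nonce = solve(challenge)                   # in practice the browser does this in JS
    assert meets_difficulty(challenge, nonce)  # cheap server-side verification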
The mystery distributed idiot bots were too much. (Scaling up resources -- they just scaled up their bot rates too!!!) I don't mind people scraping if they do it respectfully and reasonably; that's not what's been going on, and it's an internet-wide phenomenon of the past year.
I, too, am saddened by this gatekeeping. IIUC, custom browsers (or user agents) built from scratch will never work on Cloudflare sites and the like until the UA has enough clout (money, users, etc.) to sway them.
There's too much lost revenue in open things for companies to embrace fully open technology anymore.
One may posit "maybe these projects should cache stuff so page loads aren't actually expensive" but these things are best-effort and not the core focus of these projects. You install some Git forge or Trac or something and it's Good Enough for your contributors to get work done. But you have to block the LLM bots because they ignore robots.txt and naively ask for the same expensive-to-render page over and over again.
The commercial impact is also not to be understated. I remember when I worked for a startup with a cloud service. It got talked about here, and suddenly every free-for-open-source CI provider IP range was signing up for free trials in a tight loop. These mechanical users had to be blocked. It made me sad, but we wanted people to use our product, not mine crypto ;)
Writing a browser is hard, and the incumbents are continually making it harder.
Doesn't get more fingerprintable than that. They provide an unfalsifiable certificate that "the current browser is an unmodified Chrome build, running on an unmodified Android phone with secure boot" [1].
If they didn't want to be fingerprintable, they could just not do that and spend all the engineering time and money on something else.
[1]: https://en.wikipedia.org/wiki/Web_Environment_Integrity