Disgusting and unsurprising.
Most web admins do not care. I’ve lost count of how many sites make me jump through CAPTCHAs or outright block me in private browsing or on VPN. Most of these sites have no sensitive information, or already know exactly who I am because I’m authenticating with my username and password anyway. It’s not something the actual site admins even think about. They click the button, say “it works on my machine!” and will happily blame any user whose client is not dead-center average.
Enter username, but first pass this CAPTCHA.
Enter password, but first pass this second CAPTCHA.
Here’s another CAPTCHA because lol why not?
Some sites even have their RSS feed behind Cloudflare. And guess what that means? It means you can’t fucking load it in a typical RSS reader. Good job!
The web is broken. JavaScript was a mistake. Return to monke gopher. Fuck Cloudflare.
I get why you’re frustrated, and you have every right to be. I’ll preface what I’m about to say by noting that I work in this industry. I’m not at Cloudflare, but I am at a company that provides bot protection; I analyze and block bots for a living. Again, your frustrations are warranted.
Even if a site doesn’t hold sensitive information, it likely serves a CAPTCHA because of the sheer number of bots making scraping-related requests. The volume of those requests can effectively DDoS the site. If the site sells something, the load can also disrupt sales, so the operator loses revenue and eats the infrastructure costs on top of it.
With more and more username and password leaks, credential stuffing is becoming a bigger issue than most people realize. There aren’t really good ways of pinpointing whether it’s you or someone who has stolen your credentials. Bots are also getting more sophisticated: we now see bots using aged sessions, which is much more in line with human behavior. Most of the companies putting a CAPTCHA on the login flow do so to try to protect your data and financials.
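To make the detection side concrete, here’s a minimal sketch of one classic credential-stuffing signal: a single IP failing logins across many distinct usernames in a short window, unlike a human retyping one password. The window and threshold values are made-up examples, not anything a real product uses.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300          # hypothetical 5-minute sliding window
DISTINCT_USER_THRESHOLD = 5   # hypothetical cutoff; tune against real traffic

failed_attempts = defaultdict(deque)  # ip -> deque of (timestamp, username)

def record_failed_login(ip: str, username: str) -> bool:
    """Record a failed login; return True if the IP now looks like a stuffer."""
    now = time.time()
    attempts = failed_attempts[ip]
    attempts.append((now, username))
    # Expire attempts that have fallen out of the window.
    while attempts and now - attempts[0][0] > WINDOW_SECONDS:
        attempts.popleft()
    # Many *distinct* usernames from one source is the stuffing tell.
    distinct_users = {user for _, user in attempts}
    return len(distinct_users) >= DISTINCT_USER_THRESHOLD
```

Real systems combine many such signals; aged sessions, as mentioned above, are one way bots defeat exactly this kind of simple heuristic.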
The rise of unique, privacy-focused browsers is great, and it’s also hard to keep up with. It’s been more than six months, but I’ve fingerprinted Pale Moon and, if I recall correctly, it has just enough red flags that it’s hard to tell a human Pale Moon user apart from a poorly configured bot.
Ok, enough apologetics. This is a cat-and-mouse game that the rest of us are being dragged into. Sometimes I feel like this is a made-up problem. Ultimately, I think this type of thing should be legislated. And before the bot bros jump in and say it’s their right to scrape and take data: it’s not. The terms of use are plainly stated by these sites, and they consider it stealing.
Thank you for coming to my TEDx Talk on bots.
Edit: I just want to say that allowing any user agent containing “Pale Moon” or “Goanna” isn’t the answer. It’s trivially easy to spoof a user agent, which is why I worked on fingerprinting the browser itself. Changing Pale Moon’s user agent to Firefox is likely to cause you problems too: the Firefox fork it’s built on has different fingerprints than an up-to-date Firefox browser.
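To illustrate the idea (and only the idea), here’s a toy consistency check: compare what the user agent claims against independently observed signals. The signal names and expected values below are invented for this sketch; real fingerprinting draws on far more, such as TLS handshakes, header ordering, and JS engine quirks.

```python
# Invented signals for illustration; a real system uses many more.
EXPECTED_SIGNALS = {
    "firefox": {"js_engine": "spidermonkey", "supports_new_css_feature": True},
    "chrome":  {"js_engine": "v8",           "supports_new_css_feature": True},
}

def ua_consistent(claimed_family: str, observed: dict) -> bool:
    """Return True only if observed signals match the claimed browser family."""
    expected = EXPECTED_SIGNALS.get(claimed_family)
    if expected is None:
        return False  # unknown family: nothing to confirm against
    return all(observed.get(k) == v for k, v in expected.items())

# A Pale Moon build spoofing a Firefox UA can still betray its Goanna
# lineage through behavior that current Firefox doesn't share:
print(ua_consistent("firefox", {"js_engine": "spidermonkey",
                                "supports_new_css_feature": False}))  # False
```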
But haven’t CAPTCHAs now proven useless, since bots are better at solving them than humans?
Welcome to bot detection. It’s a cat-and-mouse game, an ever-changing battle where each side makes moves and countermoves. You can see this in the creation of captcha-less challenges.
But to say CAPTCHAs are useless because bots can pass them is a bit like saying your antivirus is useless because certain malware and ransomware can bypass it.
But they are better than humans at solving them.
How are you measuring this? On my end, when I look at the metrics I have available to me, the volume of bot requests that are passing captcha does not exceed that of humans. We continually review false positives and false negatives to make sure we aren’t impacting humans while still making it hard for bots.
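For anyone unfamiliar with the metrics being referenced: the two numbers in tension are the false positive rate (humans wrongly challenged or blocked) and the false negative rate (bots that slip through). A tiny example, where “positive” means “classified as a bot” and every count is made up:

```python
def fp_fn_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """tp: bots caught, fp: humans flagged, tn: humans passed, fn: bots missed."""
    fpr = fp / (fp + tn)  # fraction of humans we wrongly flagged
    fnr = fn / (fn + tp)  # fraction of bots that got past us
    return fpr, fnr

# Entirely made-up numbers, just to show the arithmetic:
fpr, fnr = fp_fn_rates(tp=9_000, fp=50, tn=100_000, fn=400)
print(f"FPR = {fpr:.3%}, FNR = {fnr:.2%}")  # FPR = 0.050%, FNR = 4.26%
```

Reviewing false positives and false negatives, as described above, means continually re-checking which side of this trade-off is hurting.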
https://duckduckgo.com/?q=captcha+ai+better&t=fpas&ia=web
Hi! I didn’t forget about your response. I sifted through the links to find the study in question. I imagine my response isn’t going to satisfy you but please hear me out. I’m open to hearing your rebuttals regarding this too.
The study is absolutely correct in what it studied and the results it found. My main issues are with the scope and some of the methodology.
Yes, the “AI” they used was able to solve CAPTCHAs better than humans. My main issue is that it’s one tool. Daily, I work with dozens of different frameworks and services, some of which claim to leverage AI, and the results and ability to pass CAPTCHAs vary with each one. There’s an inevitable back and forth with each tool, as they learn how to bypass us and we counter those changes. There isn’t just one tool that everyone uses as their bot, as is the case in the study, so it’s not exactly how things work in the real world.
I recognize that the list of sites they chose was the top 200 sites on the web. That said, there are newer, up-and-coming CAPTCHA services that weren’t tested. It’s also worth noting that “captcha-less” approaches like Turnstile are still CAPTCHAs; they just skip straight to proof of work and cut the human out altogether. A toy version of that idea is sketched below.
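Here’s a toy hashcash-style proof of work in the spirit of what’s described above: the client must find a nonce whose SHA-256 digest has enough leading zero bits. This is a sketch of the general technique, not Turnstile’s actual protocol, and the difficulty value is an arbitrary example.

```python
import hashlib
import os

DIFFICULTY_BITS = 20  # arbitrary example; tune so one solve is cheap for a visitor

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits in a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: bytes) -> int:
    """Client side: grind nonces until the digest clears the difficulty bar."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Server side: a single hash to check, no matter how hard the solve was."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

challenge = os.urandom(16)   # server issues a fresh random challenge
assert verify(challenge, solve(challenge))
```

The asymmetry is the point: one solve is negligible for a single human pageview, but grinding millions of them gets expensive for a bot fleet.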
We should absolutely take studies like this to heart and find better ways that don’t piss off humans. But the reality is that these tools are working to cut down on a vast amount of bot traffic that you don’t see. I understand if you’re not ok with that line of reasoning because I’m asking you to trust me, a random Internet stranger. I imagine each company can show you metrics regarding FP rates and how many bots are actually passing their captcha. Most do their best to cut down on the false positive rate.
I mean, it’s been a while since I worked in backend. But one of the basic tools was to limit requests per second per IP, so they can’t DDoS you. If a bot crawls a webpage you host with the intention to share, what’s the harm? And if one particular bot/crawler misbehaves, block it. And if you don’t intend to share, put it behind a VPN.
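A minimal sketch of the per-IP limit that comment describes, as a token bucket; the rate and burst numbers are arbitrary example values:

```python
import time
from collections import defaultdict

RATE = 5.0    # tokens refilled per second (example value)
BURST = 10.0  # bucket capacity, i.e. the largest tolerated burst (example value)

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(ip: str) -> bool:
    """Return True if this IP may make a request now; False means serve a 429."""
    bucket = buckets[ip]
    now = time.monotonic()
    # Refill in proportion to elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False
```

In practice this usually lives in the reverse proxy (e.g. nginx’s limit_req) rather than application code, and a distributed botnet sidesteps per-IP limits entirely, which is part of why the industry moved beyond them.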
Is that out of date?