I get why you’re frustrated, and you have every right to be. I’ll preface what I’m about to say by noting that I work in this industry. I’m not at Cloudflare, but I am at a company that provides bot protection, and I analyze and block bots for a living. Again, your frustrations are warranted.
Even if a site doesn’t hold sensitive information, it likely serves a captcha because of the sheer volume of scraping-related bot requests it receives. That volume can effectively DDoS the site. If it sells something, the load can disrupt sales, so the site loses revenue and eats the infrastructure costs on top of it.
With more and more username and password leaks, credential stuffing is becoming a bigger issue than most people realize. There aren’t really good ways of distinguishing you from someone who has stolen your credentials, and bots keep getting more sophisticated: we now see bots using aged sessions, which looks much more like human behavior. Most companies that put a captcha on their login flow do so to protect your data and financials.
The rise of unique, privacy-focused browsers is great, and it’s also hard to keep up with. It’s been more than six months since I looked, but I’ve fingerprinted Pale Moon and, if I recall correctly, it raises just enough red flags to make it hard to tell a human apart from a poorly configured bot.
Ok, enough apologetics. This is a cat and mouse game that the rest of us are being dragged into. Sometimes I feel like this is a made-up problem. Ultimately, I think this sort of thing should be legislated. And before the bot bros jump in and say it’s their right to scrape and take data: it’s not. These sites state their terms of use plainly, and they consider it stealing.
Thank you for coming to my TEDx Talk on bots.
Edit: I just want to say that allow-listing any user agent containing “Pale Moon” or “Goanna” isn’t the answer. It’s trivially easy to spoof a user agent, which is why I worked on fingerprinting the browser itself. Changing Pale Moon’s user agent to Firefox is likely to cause you problems too: the fork it’s built on has a different fingerprint than an up-to-date Firefox.
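To show how trivial the spoofing is, here’s a minimal sketch in Python using the requests library (the URL is just a placeholder):

```python
# Sketch: spoofing a user agent is a single header. The URL is a placeholder.
import requests

headers = {
    # Claim to be an up-to-date desktop Firefox regardless of the real client.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) "
                  "Gecko/20100101 Firefox/128.0",
}

resp = requests.get("https://example.com/", headers=headers)
print(resp.status_code)
```

That’s why the server side has to look at signals the spoofed header can’t change, like header ordering, TLS handshake details, and what the JavaScript environment actually reports.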
But captchas have now proven useless, since bots are better at solving them than humans?
Welcome to bot detection. It’s a cat and mouse game, an ever-changing battle where each side makes moves and countermoves. You can see this with the creation of captcha-less challenges.
But saying captchas are useless because bots can pass them is a bit like saying your antivirus is useless because certain malware and ransomware can bypass it.
But they are better than humans at solving them.
How are you measuring this? On my end, when I look at the metrics I have available to me, the volume of bot requests that are passing captcha does not exceed that of humans. We continually review false positives and false negatives to make sure we aren’t impacting humans while still making it hard for bots.
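To be concrete about what those metrics are, they’re essentially confusion-matrix rates over traffic we’ve labeled. A rough sketch (the field names here are made up for illustration):

```python
# Rough sketch: false positive / false negative rates over labeled traffic.
# "label" and "passed_challenge" are hypothetical field names for illustration.
def challenge_rates(sample):
    humans = [r for r in sample if r["label"] == "human"]
    bots = [r for r in sample if r["label"] == "bot"]
    # False positive: a human who was challenged and did not get through.
    fp_rate = sum(not r["passed_challenge"] for r in humans) / max(len(humans), 1)
    # False negative: a bot that passed the challenge.
    fn_rate = sum(r["passed_challenge"] for r in bots) / max(len(bots), 1)
    return fp_rate, fn_rate

print(challenge_rates([
    {"label": "human", "passed_challenge": True},
    {"label": "bot", "passed_challenge": False},
    {"label": "bot", "passed_challenge": True},
]))  # -> (0.0, 0.5)
```

The arithmetic is trivial; the ongoing work is in labeling the traffic correctly, which is why we keep reviewing both sides by hand.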
https://duckduckgo.com/?q=captcha+ai+better&t=fpas&ia=web
Hi! I didn’t forget about your response. I sifted through the links to find the study in question. I imagine my response isn’t going to satisfy you but please hear me out. I’m open to hearing your rebuttals regarding this too.
The study is absolutely correct about what it studied and the results it found. My main issues are with its scope and some of its methodology.
On one hand, I see that the “AI” they used was able to solve captchas better than humans. My main issue is that it’s a single tool. Daily, I work against dozens of different frameworks and services, some of which claim to leverage AI, and the ability to pass a captcha varies with each one. There’s an inevitable back and forth as these tools learn how to bypass us and as we counter those changes. In the real world there isn’t one tool that everyone uses as their bot, the way there is in the study, so the result doesn’t map cleanly onto how this actually plays out.
I recognize that the list of sites they chose was the top 200 sites on the web. That said, there are newer, up-and-coming captcha services that weren’t tested. It’s also worth noting that the “captcha-less” approaches, like Turnstile, are still captchas; they just skip straight to proof of work, cutting the human out altogether.
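For a sense of what “skip straight to proof of work” means, here’s a hashcash-style sketch in Python. To be clear, this illustrates the general idea only; it is not Turnstile’s actual protocol:

```python
# Sketch of a hashcash-style proof-of-work challenge: the server issues a nonce
# and a difficulty, the client burns CPU finding a counter whose hash is small
# enough, and the server verifies it cheaply. No human interaction required.
import hashlib
import itertools

def solve(challenge: str, difficulty_bits: int) -> int:
    target = 1 << (256 - difficulty_bits)
    for counter in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter

def verify(challenge: str, counter: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

counter = solve("server-issued-nonce", 20)  # roughly a second or two of work
print(verify("server-issued-nonce", counter, 20))  # True
```

The cost is negligible for one human logging in, but it adds up fast for someone trying to make a few million requests.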
We should absolutely take studies like this to heart and find better approaches that don’t piss off humans. But the reality is that these tools are cutting down on a vast amount of bot traffic that you don’t see. I understand if you’re not ok with that line of reasoning, because I’m asking you to trust me, a random Internet stranger. I imagine each company can show you metrics on false positive rates and how many bots actually pass their captcha; most do their best to keep the false positive rate down.
I mean, it’s been a while since I worked in backend. But one of the basic tools was limiting requests per second per IP, so they can’t DDoS you. If a bot crawls a webpage you host with the intention of sharing, what’s the harm? If one particular bot/crawler misbehaves, block it. And if you don’t intend to share, put it behind a VPN.
Is that out of date?
Unfortunately it is out of date.
IPs used by bots are now *highly* distributed. We will see the same bot use hundreds of thousands of IP addresses, with each IP making only one or two requests, which is hard to catch with volume-based detections. Also, I’m not sure where you are in the world, but outside of North America it’s more common for IP addresses to be heavily shared. Not to mention, there are companies in Europe that will pay you for the use of your IP address explicitly for bots.
You might think you could limit by IP classification, but bots increasingly use IPs classified as residential.
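To make that concrete, here’s roughly what the naive per-IP limiter you described looks like, and why a distributed bot never trips it (a sketch with made-up thresholds):

```python
# Sketch of a naive per-IP rate limiter (sliding window, made-up thresholds).
# It handles one noisy IP fine; it does nothing against a distributed bot.
import time
from collections import defaultdict

WINDOW_SECONDS = 1
MAX_REQUESTS_PER_WINDOW = 10   # generous, so shared/NAT'd humans aren't blocked

recent = defaultdict(list)     # ip -> timestamps of requests in the window

def allow(ip: str) -> bool:
    now = time.monotonic()
    recent[ip] = [t for t in recent[ip] if now - t < WINDOW_SECONDS]
    if len(recent[ip]) >= MAX_REQUESTS_PER_WINDOW:
        return False
    recent[ip].append(now)
    return True

# A bot spread across 100,000 residential IPs making one or two requests each
# never gets near the threshold on any single address, while a carrier or office
# NAT that funnels many real humans through one IP can trip it.
```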
As for allowing good bots, that isn’t much of an issue: they respect the robots.txt that companies publish. That said, we increasingly see bots scraping data for LLMs that ignore that file, and bots doing things you don’t want, like price scraping or credential stuffing, aren’t going to respect it either.
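For context, honoring robots.txt looks like this with Python’s standard library (example.com is a placeholder). The point is that the check is entirely voluntary:

```python
# Sketch: how a polite crawler checks robots.txt before fetching anything.
# Nothing enforces this; a scraper or credential-stuffing bot simply skips it.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("MyCrawler/1.0", "https://example.com/prices"):
    print("allowed: fetch the page")
else:
    print("disallowed: a polite bot stops here")
```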
In terms of using a VPN: absolutely, limit outside access to sensitive infrastructure, but that’s not really where most companies feel the pain from bots. That’s not to say we don’t see bots attempting vulnerability scanning; those requests can be highly distributed too.
Companies ultimately reach out to providers like Cloudflare because the usual methods aren’t working for them. While onboarding some clients, I’ve seen more bot requests than human requests, which can be detrimental to the business.
I’m happy to answer any other questions you might have. While I do work in the industry, I don’t know everything. I just want to reiterate that I’m not a fan of how things currently are on the Internet. I wish this kind of abuse were illegal, as I think it would cut down on a lot of bot traffic and make things much more manageable for everyone.