Luc Ienn
Friendly Neighborhood Skullbird
A bit long, so the TLDR is this: captchas have a new system that might be training AI models how to bypass anti-recognition techniques people use to protect their images from being processed into image recognition systems. To me this is the most egregious thing captchas have done.
EDIT: I wish I had a photo of one of these captchas to share. If I get another one of the same, I’ll post a picture in this thread.
I was subjected to a new sort of captcha recently that had me selecting AI-generated images that had a unique “visual filter” applied to them. It was like a layer of colorful transparent polka dots, making each picture seem like I was viewing it through stained glass. I recognize what this filter is supposed to be.
There are two things about this. The first is the actual pictures; I think I was being used to outsource the process of training AI to recognize its own creations. If an AI is asked to make a chair, it’s expected to be confident and accurate in knowing what a chair is. I don’t like that I’m being used for this as unpaid labor, but I have little say. (Some might argue that the “pay” is being allowed to use a service for free, but considering that captchas are used to keep bots from getting access, it’s a bit of a lopsided transaction.)
Then, the filter. It serves two purposes, I think. The first is to prevent someone from using a bot to run through the captcha, since we’ve finally gotten to the point where asking a user to prove they’re not a robot is made more difficult by advances in AI technology. The second purpose, though, is more of a theory, so take it with a grain of salt. I think the filter is on these images to train AI models to eventually see past filters like it.
Some artists, celebrities, and other people hoping to protect their images have been putting filters like these on their photos to prevent AI from recognizing or imprinting on them. Normally, they’re applied at a transparency so imperceptible that it doesn’t affect the human viewing experience, but it messes with image recognition software. I feel upset that there is a strong possibility the captcha provider is now enlisting every website that uses it for security into breaking down that protection. We’ve already had to train their self-driving cars and text recognition technology. I regret that in order to access many services on the internet, my human ability to recognize images is being used to violate people’s privacy. I know this is nothing new, but this feels like the worst yet.