In what feels like just a few short years, advancements in generative artificial intelligence have seemingly transformed “AI” from a buzzword slapped onto boring businesses to attract investor capital into an actual tool with clear, real-world use cases. Writers and businesses have already begun using image generation systems like DALL-E and Stable Diffusion to create their own low-cost, in-house art departments. OpenAI’s ChatGPT chatbot, on the other hand, gave users a glimpse into a possible future internet rife with sophisticated digital assistants able to spew out decent advice. Some even believe ChatGPT could unseat Google search.
Though much of the conversation about AI focuses on raw data and computing power, it would be a mistake to separate these tools from the humans behind them. From their inception, supposedly artificial tools have relied on humans hiding behind the curtain, silently labeling images, sorting data, and making moral and political judgment calls that might seem trivial to a living person but are next to impossible for machines.
2023 is poised to become a banner year for AI, with new startups racing to present new generative systems and legacy companies like Google and Meta likely to hand over more and more responsibility to their automated systems. That wave of projected hype makes it all the more important to take a step back and look at the many humans working behind the scenes to make the magic of AI a reality.
Over its 25-year history, Google has amassed a fortune and conquered the top layer of the internet thanks, in no small part, to human workers who test and oversee the company’s numerous search and ranking algorithms. The proper functioning of those tools is paramount to the tech giant’s success, but the laborers responsible for that work are often contractors lacking the same pay and protections afforded to full-time Googlers. These members of the company are, according to the Alphabet Workers Union, referred to as “ghost workers.”
Dozens of those raters petitioned the company in early February, claiming they made poverty wages despite helping create some of Google’s most profitable products. In the petition, the raters claimed they and their colleagues, who work exclusively for Google, did not receive health care, family leave, or paid time off benefits.
“Since my time as a rater started, we have lost the ability to work full-time and were capped at 26 hours, and actually received a cut in our hourly pay,” Google rater Michelle Curtis said in a statement sent to Gizmodo. “That did not stop the growing demands placed on us—we’ve been asked to do more work in the same amount of time and are increasingly exposed to violent and disturbing content.”
“Today, I could work at Wendy’s and make more than what I make working for Google,” Curtis added.
Even the next generation of cutting-edge generative AI can’t break free of its human overseers. A Time investigation last month revealed OpenAI relied on outsourced Kenyan laborers, some paid as little as $2 per hour, to sift through dark, disturbing content to build an AI filter that would be embedded in ChatGPT and used to scan for signs of horrific content. The content detector, made possible by those human reviewers, reduces the chance the popular AI tool will serve up harmful content to consumers. The detector also helps remove toxic entries from the datasets used to train ChatGPT.
Workers training the AI said they regularly scanned through text depicting vivid accounts of torture, suicide, incest, and child sexual abuse, all to make ChatGPT palatable for the public. One of the workers told Time they suffered from haunting, recurring visions after reading vile descriptions of bestiality. “That was torture,” the worker said.
For years Meta, which owns Facebook and Instagram, has faced waves of criticism from both sides of the political aisle for its content moderation decisions. Meta often responds to these criticisms publicly by telling critics it relies on seemingly objective and politically neutral AI systems to detect and remove harmful content. That description, while likely to become more true in the coming years, downplays the company’s continued reliance on an army of contracted human content moderators spread out around the world. Those workers are regularly exposed to the darkest corners of humanity and view videos and images depicting brutal killings, self-harm, and mutilation, all for fractions of what full-time Facebook engineers earn.
Previous reports documented how Facebook moderators in Arizona felt compelled to turn to sex and drugs on the job to cope with the stress of the content they were viewing. In another case, some content moderators reportedly began to believe some of the outlandish conspiracy theories they were hired to suss out. Meta paid traumatized workers $52 million as part of a settlement in 2020 and promised workplace improvements following the reports, but workers say little has changed on the ground.
While many tech executives have tried to underplay or disguise humanity’s hidden role in artificial intelligence, Amazon founder Jeff Bezos leaned into the idea full tilt with Amazon’s Mechanical Turk. Reportedly named after an 18th-century automated chess-playing machine that was actually just a person hiding in a box, Mechanical Turk is a service researchers and data scientists can use to complete simple tasks like image labeling, all for morsels of money. Those labeling tasks are trivially easy for humans but incredibly difficult for AI. Bezos reportedly referred to his new business at the time as “artificial artificial intelligence.”
“Normally, a human makes a request of a computer, and the computer does the computation of the task,” Bezos said in an interview with The New York Times in 2017. “But artificial artificial intelligences like Mechanical Turk invert all that. The computer has a task that is easy for a human but extraordinarily hard for the computer. So instead of calling a computer service to perform the function, it calls a human.”
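For readers curious what that inversion looks like in practice, here is a minimal, hypothetical sketch of how a requester might post a single image-labeling task (a “HIT”) to Mechanical Turk using Amazon’s boto3 Python SDK. The image URL, reward amount, and task wording are placeholder assumptions, and the snippet presumes AWS credentials are already configured.

```python
import boto3

# Connect to the Mechanical Turk service (assumes AWS credentials are set up).
mturk = boto3.client("mturk", region_name="us-east-1")

# A minimal HTMLQuestion form asking a worker to label one image.
# The image URL below is a placeholder, not a real dataset.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html>
      <body>
        <form name="mturk_form" method="post" action="https://www.mturk.com/mturk/externalSubmit">
          <input type="hidden" name="assignmentId" value="" id="assignmentId"/>
          <img src="https://example.com/photo.jpg" width="400"/>
          <p>What is the main object in this image?</p>
          <input type="text" name="label"/>
          <input type="submit"/>
        </form>
      </body>
    </html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

# Post the task: three different workers each earn a few cents per label.
response = mturk.create_hit(
    Title="Label the main object in an image",
    Description="Look at one photo and type a one-word label for it.",
    Keywords="image, labeling, quick",
    Reward="0.05",                    # USD paid per completed assignment
    MaxAssignments=3,                 # collect labels from three workers
    LifetimeInSeconds=86400,          # HIT stays listed for one day
    AssignmentDurationInSeconds=300,  # each worker gets five minutes
    Question=question_xml,
)
print("HIT created:", response["HIT"]["HITId"])
```

Each completed assignment comes back as a form submission the requester’s own code must collect and reconcile, which is precisely the human-in-the-loop plumbing the “artificial artificial intelligence” label pokes fun at.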
Similar to Meta, YouTube also employs an armada of contract workers to work alongside its AI detection systems to keep its platform relatively free of toxicity. Former moderators, some of whom have filed lawsuits against the company, claim they were expected to review 100 to 300 pieces of content per day with an expected error rate of between 2% and 5%. Those videos ran the gamut of horrible online content and reportedly included animal mutilation, child rape, and murder. The former moderators say they weren’t given the proper resources to cope with that burden. In some cases, according to a former YouTube moderator, contractors had to pay for their own medical treatment to deal with the trauma spurred by the work.
TikTok may not have quite the same reputation for hosting controversial content as some of its social media counterparts, but its relatively squeaky-clean exterior is still made possible by human reviewers acting as digital custodians. The video-sharing platform is known for its uniquely powerful AI recommendation algorithm, but it’s the low-paid human workers monitoring the platform who ensure it actually remains usable for most viewers.
One of those former content moderators filed a lawsuit against the company in 2021, claiming videos watched on the job led them and other workers to develop post-traumatic stress disorder. In the lawsuit, the former moderator said workers could spend up to 12 hours per day viewing content. The sheer volume of video uploaded to the platform means reviewers allegedly only have around 25 seconds to view each piece of content. In some cases, workers would have to view three to ten videos simultaneously to keep up that pace.
The rapid rise of generative AI chatbots, and of more rudimentary assistants like Apple’s Siri before them, has made interacting with AI systems an increasingly common part of online life. While users can be confident the assistants and chatbots they are speaking with are in fact machines, that wasn’t always the case.
Back in 2015, Facebook briefly launched its own virtual assistant competitor for Messenger called “M.” Facebook’s digital assistant helped users arrange deliveries, reserve tickets for shows, make reservations, and accomplish any number of other tasks, all with stunning efficiency and human-like competence. M, it turns out, wasn’t really a super-advanced AI at all but rather consisted mostly of a team of human employees fielding questions. While there was software involved at some level, these human workers operated alongside the AI to make sure users’ requests were still fulfilled even when the AI couldn’t complete the task. Users on the other end weren’t fully aware of whether they were being assisted by a human or a bot.
“It’s primarily powered by people,” former Facebook CTO Mike Schroepfer told Vox at the time. “But those people are effectively backed up by AIs.” Facebook eventually shut down M in 2018 after only ever serving around 2,000 users.