Credit: VentureBeat made with Midjourney
Google officially apologized on Friday for embarrassing and inaccurate images generated by its new Gemini AI tool. The apology comes after users highlighted problems with Gemini producing ahistorical and racially diverse images for specific prompts about groups like Nazi soldiers and U.S. Founding Fathers.
In a blog post, Google’s senior vice president Prabhakar Raghavan said some of the images were “inaccurate or even offensive,” acknowledging that the company had “missed the mark.”
Raghavan explained Google had sought to avoid bias by ensuring the AI produced a diverse range of people for open-ended prompts. However, for specific historical contexts, the results should accurately reflect the prompt.
“If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people,” Raghavan explained. “You probably don’t just want to only receive images of people of just one type of ethnicity.”
“However, if you prompt Gemini for images of a specific type of person…or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for,” he added.
Bias and diversity remain challenges for AI
The issue highlights how persistent bias in AI systems is, and how difficult it is to eliminate. Several other AI systems have faced criticism in the past for amplifying stereotypes and lacking diversity. Google appears to have overcorrected, enforcing diversity even in historical contexts where it makes little sense.
Following its launch, Gemini regularly produced images of non-white U.S. senators in the 1800s and racially diverse Nazi soldiers, sparking ridicule and accusations of political correctness. Critics ranged from tech leaders to right-wing figures who see Google as promoting a “woke” ideology.
Google responded by temporarily pausing Gemini’s ability to generate images of people on Thursday. The company says it will work to improve the feature before relaunching it.
The Gemini mess exposes deeper issues
The Gemini controversy caps a rocky start for Google’s recent AI ambitions. This latest blunder comes not long after the company faced backlash over a staged promotional video that exaggerated Gemini’s capabilities, and after criticism of its previous AI model, Google Bard.
With competitors like Microsoft and OpenAI racing ahead in AI, Google is now scrambling to establish its vision for a “Gemini Era.” However, the rapid succession of AI product launches and rebranding, from Bard to Gemini and its many versions, has left many consumers confused.
The company’s recent AI failures highlight the difficulty of balancing historical accuracy with diversity and representation. But they also point to deeper issues in Google’s strategy. The company that once led in search now struggles to deliver coherent and useful AI products.
This stems partly from tech’s “move fast and break things” culture. Google rushed Gemini and its siblings to market to vie with OpenAI’s ChatGPT, but its scattershot approach has only deepened consumer distrust. Google must rebuild confidence with a thoughtful AI roadmap, not more gimmicky launches.
This latest mistake also exposes problems inside Google’s AI development process. As has been previously reported, efforts to bake ethics into AI at the company have repeatedly stalled. Google needs more inclusive and diverse AI teams. And it needs leaders who understand deploying powerful technology safely, not just quickly.
If Google fails to learn from its stumbling first steps in this AI age, it risks falling further behind. Users need clarity, not more chaos. The Gemini mess shows Google has lost control of both its AI and its messaging. Only a back-to-basics approach can restore public trust. Google’s AI future may very well depend on it.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.