Google's AI chatbot, Gemini, has been accused of generating fake reviews to discredit a 2020 book on political bias in Big Tech by Peter J. Hasson, a senior politics editor at Fox News Digital.
Hasson's book, "The Manipulators," aims to expose how tech companies manipulate search results, engage in anti-Trump activity, grant special privileges to liberal publications, collude with journalists and activists, and stifle opposing voices online. Hasson argues that Big Tech no longer upholds the promise of an intellectual democracy, pointing to the industry's concentrated power, its intellectual intolerance, and the threat it poses to free speech and free thought in America.
Hasson posted screenshots of the fabricated reviews on X, stating that Gemini is 'blatantly lying in defense of Google.'
Amid growing concerns about political biases in Google's AI program, Hasson decided to put Gemini to the test by asking it to describe his book. To his astonishment, Gemini provided a misleading description, claiming the book had been criticized for lacking concrete evidence and relying on anecdotal information.
Curious about these alleged criticisms, Hasson pressed Gemini for details. In response, the chatbot summarized negative reviews supposedly from reputable sources such as The Washington Free Beacon, The New York Times, The New York Times Book Review, and Wired. However, none of these reviews or quotes were real.
Gemini's fabricated reviews included critiques about the book's reliance on 'anecdotal evidence' and 'unproven accusations.' In reality, the only review from The Washington Free Beacon was overwhelmingly positive, praising the book as 'excellent' and 'thoroughly researched.'
When Hasson asked Gemini for links to the non-existent reviews, the chatbot evaded the request, providing a generic response about its limitations and lack of information.
Seeking clarification, Hasson reached out to Google, prompting an apology from a spokesperson. The statement explained that Gemini is designed as a creativity and productivity tool, acknowledging that it may not always be accurate or reliable. Google reassured users that they are actively addressing instances where the product fails to respond appropriately.
Last week, Google faced criticism over its 'woke' AI chatbot and apologized for outputs that fell short of users' expectations. Initial concerns centered on Gemini's reluctance to generate images of white people.
Tech analyst Ben Thompson has documented several of Gemini's missteps: the chatbot had difficulty comparing the societal impact of Hitler with that of Elon Musk's tweets, refused to endorse eating meat, and declined to write in support of promoting fossil fuels, among other issues.
In other cases, Gemini refused to praise conservative figures, citing its programming against expressing opinions on controversial topics. Yet, it had no issue praising liberal figures when asked.
Google has taken notice of the criticism, and some of the most evident issues with Gemini's responses have since been addressed. The chatbot no longer hesitates to compare Hitler with Elon Musk's tweets, and it has shown a willingness to help brainstorm a beef sales campaign. Still, it remains hesitant on the topic of fossil fuels.
In response to criticism over Gemini's handling of race in AI-generated images, Google temporarily suspended the chatbot's image creation capabilities. The company also acknowledged that it deliberately trained Gemini to counter common criticisms of bias in AI engines, conceding that training data can introduce biases into the output.
© Copyright 2024 Books & Review. All rights reserved. Reproduction in whole or in part without permission is prohibited.