Sam Altman Hypes GPT-4o's 'Freedom' — But Critics Point Out AI's Real Problem Is Censorship, Not Bias

Benzinga·03/28/2025 02:05:53

On Thursday, OpenAI CEO Sam Altman touted the latest version of GPT-4o's improved "freedom." However, digital rights advocates have previously argued that the real threat isn't political bias but growing AI-driven censorship.

What Happened: Altman took to X, formerly Twitter, and said that the new version of GPT-4o is "particularly good at coding, instruction following, and freedom."

Earlier this week, OpenAI upgraded GPT-4o, which debuted about a year ago, by introducing advanced image-generation capabilities. Along with this new capability, the AI startup also outlined its updated safety measures and policies for GPT-4o.

“In line with our Model Spec, we aim to maximize creative freedom by supporting valuable use cases like game development, historical exploration, and education—while maintaining strong safety standards,” the company stated. “At the same time, it remains as important as ever to block requests that violate those standards.”

Why It's Important: Altman's statement came at a time when the debate around AI's political leanings and bias mitigation strategies continues to swirl, particularly following missteps by Alphabet Inc.'s (NASDAQ:GOOG) (NASDAQ:GOOGL) Google Gemini and Adobe Inc.'s (NASDAQ:ADBE) Firefly, which drew backlash last year over "woke" image outputs.

However, some experts say the bigger issue isn't about whether AI is biased—it's about whether AI is suppressing speech altogether.

A 2023 report from Freedom House, titled "The Repressive Power of Artificial Intelligence," warned that AI is enabling governments and companies to scale censorship, surveillance, and disinformation faster and more cheaply than ever.

A 2024 article in The Conversation said, "We found that the use policies of major chatbots don't meet United Nations free speech standards," adding, "These vague and expansive rules give companies too much discretion to silence controversial or politically sensitive topics."

For instance, Google's Gemini bans content that "promotes or encourages hatred," a guideline critics say is overly broad and potentially suppressive.

As generative AI tools like ChatGPT, Gemini, and Anthropic's Claude gain widespread influence, their usage policies shape public discourse.

Free speech advocates warn that, without transparent safeguards aligned with global human rights norms, AI companies risk enabling digital repression even while claiming to promote "freedom."

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.