OpenAI on the defensive after multiple PR setbacks in one week

The OpenAI logo under a raincloud.

Since the launch of its latest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new level of public visibility—and is more prominently receiving pushback to its AI approach beyond tech pundits and government regulators.

OpenAI’s rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.

While that September update included a voice called “Sky” that some have said sounds like Johansson, it was GPT-4o’s seemingly lifelike, new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson’s voice.

After hearing from Johansson’s lawyers, OpenAI announced on Sunday that it was pausing use of the “Sky” voice in ChatGPT. The company mentioned Sky specifically in a tweet and defensively addressed the Johansson comparison in its blog post: “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote.

On Monday evening, NPR news reporter Bobby Allyn was the first to publish a statement from Johansson saying that Altman approached her to voice the AI assistant last September, but she declined. She says that Altman then attempted to contact her again before the GPT-4o demo last week, but they did not connect, and OpenAI went ahead with the apparent soundalike anyway. She was “shocked, angered, and in disbelief” and hired lawyers to send letters to Altman and OpenAI asking for details on how they created the Sky voice.

“In a time when we are all grappling with deepfakes and the protection of our own likenesses, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson said in her statement. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

The repercussions of these alleged actions on OpenAI’s part are still unknown but are likely to ripple outward for some time.

Superalignment team implodes

The AI research company’s PR woes continued on Tuesday with the high-profile resignations of two key safety researchers: Ilya Sutskever and Jan Leike, who led the “Superalignment” team focused on ensuring that hypothetical, currently non-existent advanced AI systems do not pose risks to humanity. Following his departure, Leike took to social media to accuse OpenAI of prioritizing “shiny products” over crucial safety research.

In a joint statement posted on X, Altman and OpenAI President Greg Brockman addressed Leike’s criticisms, emphasizing their gratitude for his contributions and outlining the company’s strategy for “responsible” AI development. In a separate, earlier post, Altman acknowledged that “we have a lot more to do” regarding OpenAI’s alignment research and safety culture.

Meanwhile, critics like Meta’s Yann LeCun maintained the drama was much ado about nothing. Responding to a tweet where Leike wrote, “we urgently need to figure out how to steer and control AI systems much smarter than us,” LeCun replied, “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.”

LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts [sic] that can transport hundreds of passengers at near the speed of the sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.”
