AI resurrections of dead celebrities amuse and rankle

WASHINGTON

Hyper-realistic artificial intelligence videos depicting deceased public figures are spreading rapidly online, amusing some users while provoking concern over ethics, consent and control of a person’s likeness after death.

Generated using tools such as OpenAI’s Sora, the clips show figures including Queen Elizabeth II, Saddam Hussein and Pope John Paul II placed in fictional and often absurd scenarios. Since its launch in September, Sora has been widely used to create lifelike videos of historical figures and celebrities, earning it a reputation as a powerful deepfake tool.

While some clips are treated as parody, others have drawn sharp criticism. In October, OpenAI blocked the creation of videos depicting civil rights leader Martin Luther King Jr. after his estate objected to disrespectful portrayals. Some clips manipulated his “I Have a Dream” speech, underscoring how easily AI can distort a public figure’s legacy.

“We’re getting into the uncanny valley,” said Constance de Saint Laurent, a professor at Ireland’s Maynooth University, warning that such content can be disturbing, particularly for families of the deceased. “If suddenly you started receiving videos of a deceased family member, this is traumatizing,” she said.

Children of late figures including Robin Williams, George Carlin and Malcolm X have condemned AI-generated depictions of their fathers. Zelda Williams, the actor’s daughter, urged users to stop circulating such videos, calling them “maddening.”

An OpenAI spokesperson said there are “strong free speech interests in depicting historical figures,” but added that families should ultimately have control over a person’s likeness. For recently deceased individuals, estates can now request that their likeness not be used in Sora.

Critics say the safeguards are insufficient. Hany Farid, a professor at the University of California, Berkeley, warned that even if one platform imposes limits, others may not, allowing the problem to grow.

Experts caution that the risks extend beyond celebrities, as ordinary people may also be vulnerable to synthetic manipulation. Researchers warn that the spread of AI-generated content could erode trust in online information, making users increasingly skeptical of real news.

“The issue isn’t just believing misinformation,” de Saint Laurent said. “It’s that people stop trusting what’s real.”