‘Sora’ can generate complex scenes, but will be restricted before its eventual release
OpenAI, the company behind ChatGPT, has announced a new tool that turns text prompts into computer-generated videos. The program will be released to the public only after OpenAI builds in a range of censorship features.
Dubbed ‘Sora’, the program “is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” all based on prompts from users, OpenAI said in a statement on its website on Thursday.
Sora can also create videos based on user-submitted images, or take existing video footage and extend it with new material, the company said.
In a series of posts on X, OpenAI shared multiple Sora-created videos, including one generated from the prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”
Introducing Sora, our text-to-video model.
Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/7j2JN27M3W
— OpenAI (@OpenAI) February 15, 2024
OpenAI founder Sam Altman then posted videos suggested by his followers on X, including “Two golden retrievers podcasting on top of a mountain” and a lifelike “cooking session for homemade gnocchi hosted by a grandmother social media influencer.”
OpenAI did not state when Sora would be released to the public. The firm said that it would first be handed to its so-called ‘Red Team’ to ensure that it cannot be used to create scenes of “extreme violence, sexual content, hateful imagery, celebrity likeness, or the [intellectual property] of others.”
The company also noted that the technology is still prone to glitches and errors. “It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark,” OpenAI said on its site.
AI technology has improved at a rapid pace over the past two years, with OpenAI’s GPT language model going from powering a chatbot in late 2022 to scoring in the 93rd percentile on an SAT reading exam and the 89th percentile on an SAT math test just four months later.
Altman has previously acknowledged that he is “a little bit scared” of his technology’s potential. However, despite forbidding customers from using its technology to “develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system,” the organization announced in January that it is working with the US military on several artificial intelligence projects.
OpenAI partnered with the Pentagon after dropping its prior prohibition on the use of its technologies for “military and warfare” purposes, company executive Anna Makanju told the World Economic Forum’s annual meeting in Davos.