Meta just announced its own media-focused AI model, called Movie Gen, that can be used to generate realistic video and audio clips.
The company shared multiple 10-second clips generated with Movie Gen, including a Moo Deng-esque baby hippo swimming around, to demonstrate its capabilities. The tool is not yet available for use, and the Movie Gen announcement comes shortly after Meta's Connect event, which showcased new and refreshed hardware and the latest version of its large language model, Llama 3.2.
Going beyond the generation of straightforward text-to-video clips, the Movie Gen model can make targeted edits to an existing clip, like adding an object into someone’s hands or changing the appearance of a surface. In one of the example videos from Meta, a woman wearing a VR headset was transformed to look like she was wearing steampunk binoculars.
Audio bites can be generated alongside the videos with Movie Gen. In the sample clips, an AI-generated man stands near a waterfall with audible splashes …