A text-to-video model is a machine learning model that takes a natural-language description as input and produces a video matching that description.

One early approach generates video frame by frame: a convolutional neural network encodes and decodes each frame, while a recurrent neural network, arranged as a sequence-to-sequence model, carries information across frames so that objects stay consistent against a stable background. A typical pipeline involves collecting and preparing a training set of clean clips, for example from the Kinetics human action video dataset; training the convolutional network on those frames; extracting keywords from the input text with natural language processing; and conditioning a generative model, such as a variational autoencoder or a generative adversarial network, on the static and dynamic information expressed in the text.

Several models exist, including open-source ones. CogVideo published its code on GitHub. Meta Platforms offers text-to-video generation through Make-A-Video, and Google developed Imagen Video for the same task. Antonia Antonova presented another model. In March 2023, a landmark research paper by Alibaba applied many of the principles of latent image diffusion models to video generation, and many services have since adopted similar approaches in their products. Although alternative approaches exist, full latent diffusion models are currently regarded as the state of the art for video diffusion.
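The frame-by-frame recurrent approach described above can be sketched in miniature. This is a toy illustration only: the dimensions, the random weights, and the use of flat matrix multiplies in place of real convolutions are all assumptions for clarity, not any published model. A trained system would learn these weights by backpropagation; here they are random, so the output is noise with the right shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from any real model).
FRAME = 8 * 8   # flattened 8x8 grayscale frame
LATENT = 16     # per-frame latent code produced by the "CNN" encoder
HIDDEN = 32     # RNN hidden state carrying temporal context

# Random weights stand in for trained CNN/RNN parameters.
W_enc = rng.normal(scale=0.1, size=(LATENT, FRAME))  # frame -> latent
W_dec = rng.normal(scale=0.1, size=(FRAME, LATENT))  # latent -> frame
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))   # recurrent weights
W_x = rng.normal(scale=0.1, size=(HIDDEN, LATENT))   # input-to-hidden
W_out = rng.normal(scale=0.1, size=(LATENT, HIDDEN)) # hidden -> latent

def generate_video(text_embedding, n_frames):
    """Decode a sequence of frames from a text embedding.

    The text embedding seeds the RNN hidden state; at each step the
    RNN emits a latent code and the decoder turns it into a frame, so
    temporal coherence flows through the hidden state.
    """
    h = np.tanh(W_x @ text_embedding[:LATENT])  # seed hidden state from text
    z = np.zeros(LATENT)
    frames = []
    for _ in range(n_frames):
        h = np.tanh(W_h @ h + W_x @ z)  # recurrent update
        z = np.tanh(W_out @ h)          # next per-frame latent code
        frame = np.tanh(W_dec @ z)      # decode latent to pixels
        frames.append(frame.reshape(8, 8))
    return np.stack(frames)

video = generate_video(rng.normal(size=LATENT), n_frames=4)
print(video.shape)  # (4, 8, 8): four 8x8 frames

# The same encoder/decoder pair can round-trip a single frame,
# mirroring the per-frame encoding step described in the text.
recon = np.tanh(W_dec @ np.tanh(W_enc @ video[0].ravel()))
print(recon.shape)  # (64,)
```

The key design point is that only the hidden state `h` links consecutive frames, which is what lets a recurrent model keep the background stable while objects move.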