Runway, an AI startup, has released its first iOS app, which lets users try Gen-1, the company's video-to-video generative AI system. Free users get a limited number of credits.
Gen-1 transforms an existing video based on a text, video, or image input. It works much like a style-transfer tool, but instead of applying filters, it generates new footage. Upload a video of, say, someone riding a bike in a park, then apply whatever aesthetic or theme you like: you can make the clip look like a charcoal drawing or a watercolor painting.
The output of a generative AI is usually… odd. Add a "claymation" effect and your subjects won't move like real Claymation models; figures warp, and limbs and features melt or smear between frames. That's not necessarily a problem, though; it's part of the fun.
Here are three different renderings, for instance, of an iconic clip from Heat (1995). The clip at the bottom left is my favorite: it uses a photo I took of a kitten as the input. Without any instruction from me, the model gave Pacino the cat's face and put a little fur on his hands while leaving his suit intact. The other two clips were made with preset filters.
Another example is a video of St. Paul's Cathedral in London with the "Paper and Ink" filter applied. The effect isn't spectacular, but it was very easy to create; in the hands of someone more creative and experienced, it could be stunning.
The Runway app has been on my phone for a couple of days, and I can say it makes the process of producing this kind of video much more fluid. Until now, Runway's software suite has only been accessible via the web, which put more distance between recording footage and transforming it. Of course, it's not an entirely seamless experience; you get the typical inefficiencies and unanticipated errors of a first release. But Cristóbal Valenzuela, Runway's CEO, told The Verge that the most important thing is getting these tools onto mobile.
Valenzuela said the phone is a natural fit because it lets you record directly from your device and then tell Gen-1 what you want it to do with the video.
Other limitations are worth mentioning. You can't use footage longer than five seconds, and certain prompts are prohibited: nudity is not allowed, and copyrighted works are off limits. A video I tried to create "in the style of" a Studio Ghibli movie was rejected. Each video takes about two to three minutes to generate, which may not sound long but is a lifetime in an era when mobile editing is instant. Processing happens in the cloud, though, and should get faster over time. Valenzuela said the Gen-2 model will be added shortly.
These caveats don't capture the potential that tools like this offer. AI text-to-image models were also smudged and unrealistic at first; now they're realistic enough to fool the public, as with the viral pictures of the pope.
Valenzuela has likened today's era of generative AI to the "toys" phase of the nineteenth century, when scientists and inventors created a range of devices that seemed trivial at the time but were forerunners of the modern film camera. Runway's app is similar to those toys: I can't see it being used in professional production, but I can imagine the impact tools like these will have in the future.