AI video generators are improving rapidly and becoming more widely available, with Google’s Veo 2 now built into the Gemini app for anyone paying for the Google One AI Premium plan. Like OpenAI’s Sora, Runway, Adobe’s Firefly, and others, Veo 2 lets you create a professional-looking video from nothing more than a text prompt.
With Veo 2 now available to paying users, it seemed like a good opportunity to test these AI video generators against each other, compare their strengths and weaknesses, and assess where we currently are with AI video. We’re told these tools are going to replace movie-making, or at least fill the internet with AI slop, but are they actually useful in practice?
Microsoft seems to think so, having recently used AI in an advertisement. However, only parts of the clip were AI-made: shots with quick cuts and limited movement, where the chances of hallucinations or glitches are lower.
For the purposes of this guide, I took a look at Google Veo 2 and pitted it against Sora, Runway, and Firefly. Other video generators are available, but these are four of the most prominent. They all cost money to access (starting at $20 per month), so you’ll need to sign up for at least a month to play around with them.
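As an aside for tinkerers: Veo 2 can also be driven from code through Google’s Gemini API rather than the Gemini app (API usage is billed separately from the Google One AI Premium plan). Below is a minimal sketch using the google-genai Python SDK; the model identifier veo-2.0-generate-001 and the config fields are my assumptions based on Google’s documentation at the time of writing, so double-check them before running anything.

```python
# A rough sketch of generating a clip with Veo 2 via Google's google-genai
# Python SDK. Model name and config fields are assumptions; check the
# current docs before relying on them.
import time

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Kick off an asynchronous video-generation job.
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # assumed Veo 2 model identifier
    prompt=(
        "Thousands of individual, brightly colored balls bounce slowly "
        "down a steep, sunny street in San Francisco."
    ),
    config=types.GenerateVideosConfig(
        aspect_ratio="16:9",
        number_of_videos=1,
    ),
)

# Generation takes a few minutes, so poll until the job finishes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# Download and save each generated clip.
for i, generated in enumerate(operation.response.generated_videos):
    client.files.download(file=generated.video)
    generated.video.save(f"veo2_clip_{i}.mp4")
```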
Bouncing balls
If you’re as old as I am, you’ll remember an incredible advertisement Sony created to promote its new 1080p Bravia television in 2005 (above). More than 100,000 bouncy balls were dropped down the steep streets of San Francisco while the cameras rolled, and it made for a compelling watch (the backstory is very entertaining, too).
This is a real challenge for AI, involving lots of physics and movement. The prompt I used was: “Thousands of individual, brightly colored balls bounce slowly down a steep, sunny street in San Francisco. The camera moves slowly down the road ahead of the bouncing balls, with houses, trees, and parked cars on either side.”
Google Veo 2’s effort isn’t bad. There’s some strange physics going on, but it looks suitably natural, and it could pass as a real clip if you’re not looking too closely. The background elements are well rendered, and the instructions in my original prompt were followed pretty closely.
Sora is confused about the scene it’s supposed to be rendering. There are colored balls, sure, but they move as an unconvincing mass and seem to defy gravity. The speed of the video is fine, even if it’s heading in the opposite direction to the one I requested, and the background parts of the video look fine on the whole.
If you compare it to the original Sony clip, Runway gets closest to the vibe, but again there are plenty of problems: the balls aren’t at all consistent, the movement isn’t what I asked for, and there appears to be an alien watching from a window in the top right corner. The street itself looks very good, though.
Firefly is probably the worst of the bunch here. Most of the balls are static, and the ones that are moving aren’t rendered very well. The street looks fine, but it’s nothing special; there’s definitely a retro video game feel to it. As with the Sora clip, the camera takes me up the street when I actually wanted to go down it.
The “Jurassic Park” scene
If AI filmmaking is going to replace real people making films, it needs to be able to produce something as powerful as the “Welcome to Jurassic Park” scene from Spielberg’s 1993 movie: the moment when Richard Attenborough’s John Hammond reveals the dinosaurs to his visitors for the first time (above).
I was curious to see what AI would make of the scene. The prompt this time was: “On top of a hill, two paleontologists slowly stagger through the grass. As they do, the camera pulls back to a wide shot, revealing a detailed clearing and a lake below. Dinosaurs slowly move through the lake and the trees.”
The clip from Google Veo 2 looks great. The camera doesn’t really pull back in the way I described, and the paleontologists aren’t really staggering (nor are they on a hill), but the scenery looks good and the dinosaurs look fine. It’s all rather generic, but it’s a decent effort.
Sora goes a little crazy with this prompt. The camera movements are jerky and don’t follow my instructions, and the dinosaurs look like strange shape-shifting organisms. The best I can say for this effort is that all the elements I described are present, and the surrounding scenery is done reasonably well.
Runway’s attempt is probably the closest to what I wanted in terms of the overall feel, the camera movement, and the scenery. The lake and the dinosaurs look quite realistic, but it’s by no means a perfect rendering: where does the red-shirted paleontologist disappear to?
This is another below-par effort from Firefly. I’m not sure it knows what paleontologists are, and the dinosaurs are very small. The lake and the surrounding forest are rendered to an okay standard though, even if there’s a noticeable AI sheen over everything in the frame. The camera movements have been translated well here.
The “Living Daylights” scene
Another memorable movie moment: the border-crossing scene with Bond and Kara in The Living Daylights, where they toboggan down an icy mountain on a cello case (above). I don’t need to hire Timothy Dalton or Maryam d’Abo, learn how to operate a camera, or travel to Austria, because AI can create the whole scene for me.
The prompt for this one was: “A man and a woman in winter clothing slide down a snow-covered road on a cello case. There is a barrier across the road, and as they reach it, both characters duck under it.”
Google Veo 2 manages this pretty well, all things considered: the scene looks suitably wintery and fun, and it does look like a cello case. We have to ignore the two people passing straight through the road barrier as if it weren’t there, but at least there is a barrier (some of the other AI models couldn’t manage that).
On to Sora, and again, it’s not terrible. Okay, that’s not really a cello case, and the two riders wouldn’t face forward like that, but the snowy road and the surrounding trees look good; it’s a nice scene overall. Where’s my road barrier, Sora? I wanted to see these people duck under it.
As for Runway, whatever videos it was trained on, they clearly didn’t include people riding cello cases down mountains. The people merge into each other, elements in the shot keep changing shape, and it all just looks strange. The snowy scenery and the realistic snow effects look good, though.
Who knows what Adobe Firefly was thinking here. The physics in this one don’t make much sense, the characters aren’t consistent, and there’s no road barrier to duck under. It’s genuinely unsettling to watch. We do at least get two people, an icy road, and a cello case in the clip.
No clear winner
I think the Veo 2 videos impressed me the most, although Runway often gets closer in terms of realism. Across the board, there are plenty of problems with physics, realism, and prompt interpretation. These are all clearly AI videos, with plenty of strange quirks and anomalies.
Now, I wasn’t expecting these AI generators to get anywhere near the quality of professional advertisements or films: it’s simply not possible to recreate those scenes with a text prompt and a few minutes of time and effort. I’m not trying to take cheap shots at these tools, which are clearly very clever, but to point out some fundamental issues with AI video.

These balls aren’t bouncing.
Credit: Adobe Firefly/Lifehacker
With more careful work and expertise, I could probably achieve something that looked a lot better, and these video generators are clearly going to improve over time. Who knows what they’ll be able to produce in five or 10 years? If you check out the showcase videos on these platforms, you can see that impressive results are possible.
Personally, though, I’m not sure these AI tools will ever fully replace traditional filmmaking, no matter how well they’re trained. To get something like the Sony advertisement out of AI, you’d have to write reams and reams of incredibly detailed prompts, and you still might not get what you wanted. Would AI think to have a frog jump out of the drainpipe? The results are quick and easy, sure, but you’re offloading most of the creative decisions to the AI. These videos feel computer-generated.

One of these people is about to disappear.
Credit: Runway/Lifehacker
AI doesn’t really know how a ball bounces, or what a dinosaur looks like, or how people should react while sliding down an icy road on a cello case. It’s approximating and calculating based on all the videos it has seen before, and the shortcomings show up far more than they do with images or text. Watch most AI videos, including the examples above, and they don’t include elements that pass in and out of the shot, because the AI is likely to forget what they look like while they’re not visible.
And I haven’t had room here to cover the copyright issues, or the energy costs to the planet. No doubt we’ll see a growing number of AI-made advertisements and short films as time goes on and the technology improves, but it’s worth going back to that famous warning in Jurassic Park: just because we can do something, it doesn’t mean we should.
Disclosure: Lifehacker’s parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.