Runway Gen-2 Ultimate Guide

Watch this Runway Gen 2 tutorial to unlock the potential of this advanced AI platform and empower your creative journey.

This is what text-to-video is now capable of.

And from today you can get access to RunwayML Gen 2.

In this video, I’m going to break down what’s now possible and show you how to get the best out of this new technology in this Runway Gen 2 tutorial.

Runway Gen 2 Tutorial

Once you’ve signed up for RunwayML, you get a free trial.

If you go to their page, you’ll be able to go straight in to generate a video with a text prompt.

So, you can put anything you like there.

If you scroll down, there are a number of examples that are good starting points because you know they’re going to get a good render.

So, I suggest you go down and try out any of these that take your fancy.

I particularly like this mountain scene, and you can see that it puts in an example prompt for you to play around with.

What we have is an aerial drone shot of a mountain range in the style of cinematic video: shallow depth of field, subject in focus, and dynamic movement.

All you have to do is go ahead and press generate.

Currently, you get about 120 seconds free, which equates to around 30 short four-second clips.
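If you want to sanity-check that arithmetic, here’s the sum, using the free-tier numbers quoted in this guide (the four-second cap is covered later on):

```python
# Free-tier arithmetic, using the numbers quoted in this guide:
# ~120 free seconds, with each free clip capped at 4 seconds.
free_seconds = 120
seconds_per_clip = 4

max_clips = free_seconds // seconds_per_clip
print(f"Roughly {max_clips} four-second clips on the free tier.")  # -> 30
```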

At the moment, it’s exporting fairly quickly for a video.

It’s not quite as fast as image generation, but you can see here that it will take around one minute, depending on how busy the servers are.

So, while that is rendering, you can also open Runway in a new tab and render out other videos simultaneously.

Using a Text Prompt

In this stage in our Runway Gen 2 tutorial, I’ve opened a new tab, and I’m going to pop in my own prompt here.

For that, I will try something simple like ‘underwater neon jellyfish dancing to techno.’

So, while that one is rendering, we can go back to our initial mountain shot.

You can see it has loaded.

And you can scroll down to press play.

There we can see a pretty decent aerial shot.

It doesn’t look entirely lifelike, but the capability is absolutely alarming.

It certainly has got the content right.

There are no glaring errors or artifacts that make you think it’s absolute nonsense.

So, it is remarkable in my mind.

Let’s take a look at our neon jellyfish.

Oh boy, that is looking like a dancing little trippy jellyfish.

Hello, sir!

Using an Image as a Prompt

Another cool thing you can do is use an image as a prompt.

Let me detail how to do it in this Runway Gen 2 tutorial.

To do that, all you have to do is go to the right-hand side and click on the little image box.

Now, one thing that I’ve been enjoying doing is taking an image from Midjourney, and saving it.

I’m using this futuristic African-painted lady, and I’m going to upload it.

So, click upload, go to the drag-and-drop box, and choose your image.

I have found this to work quite similarly to Midjourney’s image-prompting feature.

So, it will give you some resemblance, but it by no means outputs a completely coherent replica of the image you use.

It’s also useful to add a prompt that enhances the likeness of the image you’re putting in.

So, I can actually take the prompt I was using in Midjourney and reuse it here, adding a verb.

So, I give it a little bit of action.

Now I can go ahead and press generate.

So, I added ‘speaking’ as my verb.

Useful Runway Tips

While that’s rendering, I will tell you my top tips for using RunwayML.

Number one is to use an illustrative style rather than trying to achieve a realistic output.

Number two, make sure you’re putting in an image as a base and try to get a number of different iterations to ensure you’re getting the best possible outputs.

Beyond this, you can take your prompting skills from Midjourney or other AI art generators and apply them in the same way to RunwayML.

So, make sure you’re defining the style, the type of camera, the shot, and the composition.
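If it helps to see those ingredients laid out, here’s a tiny sketch of how I think about assembling them; the build_prompt() helper is my own convention, not anything from RunwayML:

```python
# Illustrative only: assembling a prompt from the ingredients above
# (shot, subject, style, and compositional details). This helper is my
# own convention, not a RunwayML API.
def build_prompt(shot: str, subject: str, style: str, details: str) -> str:
    return f"{shot} of {subject} in the style of {style}: {details}"

# Reproduces the mountain example from earlier in this guide:
print(build_prompt(
    shot="aerial drone shot",
    subject="a mountain range",
    style="cinematic video",
    details="shallow depth of field, subject in focus, dynamic movement",
))
```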

And you’re starting to get a feel for what works well and what doesn’t.

So, abstract moving images with some dynamic elements are great, whereas very realistic faces are very bad.

That is the spectrum.

But if we go through some of the previews they show in their tutorial video, you can see some really excellent outputs.

Testing Some Prompts

Now, it’ll be interesting to see how they got such a high-quality output because a lot of my attempts did not match the coherence that we see in this image.

Now, you can see this cat; there is pretty much nothing wrong, I would say, with the composition, the proportions, or the lighting.

Even the whiskers look well-defined and in sensible positions.

I mean, there are no whiskers coming out of the eyes.

Cat Experiencing the Cosmos

And yet they’re saying that the only input they put was ‘cat experiencing the cosmos’.

So I will try that out myself as well, just to see what we get.

Let’s take a look at our cat experiencing the cosmos.

And you can see that my cat is not quite as coherent as theirs.

Mine either has three legs and no tail, or one leg, a strange tail, and a stump.

But let’s play it and see.

Well, well, it’s quite engaging, but obviously, there are some serious defects in the coherence of our dear little potato.

Palm Tree on a Tropical Beach

So in this Runway Gen 2 tutorial, let’s also try a palm tree on a tropical beach as they have demonstrated in their example.

You can see that this palm tree is a lot more similar to the one that they showed in the example.

So it does give me hope that they are not just completely having us on with an absolute marketing ploy.

Human Faces

We can come back and see how the image of the woman came out.

You can see here that it’s given her quite a broad face, but it’s really taken a lot of the style from the image that we created.

Another one I had more luck with was an image that wasn’t a close-up of a face.

So I found that close-up macros of realistic faces are really hard to achieve good likeness with.

This is partly because human faces are so complicated, and partly because we are so attuned to recognizing faces.

It’s very easy for us to determine when a face does not match our expectations.

But I used this image of a samurai using a laptop, and it produced a much better output.

You can see here the samurai got a little bit older and it lost some of the aesthetic coherence from the Midjourney image.

But it’s done a pretty reasonable job.

Cyborg

Next up in our Runway Gen 2 tutorial, I input this sad cyborg, and this was the version I got out.

You can see that there is some noise and quite a lot of distortion to the image, but it’s kept something of the aesthetic, the tone, and the feel.

But I’ve been playing around with trying to get some more illustrative images from Midjourney.

And I’m going to be using them in Runway because things that are slightly more abstract and more painted come out a lot better in RunwayML.

So I’ve got these buildings and this swan, and I’m going to give these a go.

Now, if you want to play along too, here’s the workflow I recommend.

Once you’ve created and saved your images, go back into RunwayML.

Then go into the Gen 2 model, upload your new images, and put in a simple text prompt that relates to your image.

Make sure to include a verb.
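Sketched out as pseudocode, the loop looks something like this; there’s no official Python client here, and queue_generation() is just a hypothetical stand-in for the manual steps in the web UI:

```python
# Hypothetical sketch of the image-prompt workflow above. queue_generation()
# stands in for the manual web-UI steps: upload the image, type the prompt,
# press Generate. File names are placeholders.
def queue_generation(image_path: str, prompt: str) -> None:
    print(f"Gen 2 job -> image: {image_path}, prompt: {prompt!r}")

jobs = [
    ("swan.png", "dancing"),                 # simple prompt with a verb
    ("lighthouse.png", "night time lapse"),  # an action phrase works too
]

for image_path, prompt in jobs:
    queue_generation(image_path, prompt)
```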

Dancing Swan

Here, I’ve used “dancing”: the dancing swan.

So I’m going to open Runway in a new tab so I can work on a few concurrent jobs to speed up the workflow.

And this time, I will upload my beautifully painted lighthouse.

For the prompt, I will simply add “night time lapse.”

Upgraded Features of Runway

Now, there are a few more options that you can explore, but these are upgraded features.

So if you do want to upgrade, you do have the opportunity to upscale because currently, the resolution that you get out is a fairly low 768 by 448 pixels.
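For reference, here’s what that resolution works out to:

```python
# The free-tier output resolution quoted above, as numbers.
width, height = 768, 448

print(round(width / height, 3))        # 1.714 (12:7), a bit narrower than 16:9
print(round(width * height / 1e6, 2))  # ~0.34 megapixels, hence the upscale option
```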

The other thing that you get is much longer videos.

So with the free version, you get a maximum of four seconds.

With the upgraded version, you get up to 15 seconds per video generation, and of course, you get to remove the watermark.

So I’ll get this one to generate.

Dancing Swan and Nighttime Lapse Evaluation

Let’s have a look at our dancing swan.

Ladies and gentlemen, I introduce you to the world’s first international performance of the dancing swan.

So it’s very elegant.

It lacks a lot of movement, and there are some strange watermark-like artifacts in the corner.

But what you can do is regenerate, and I’ll show you what happens if you regenerate with the same image and the same prompt.

It’s actually quite interesting.

But whilst that loads, we’ll come back to see a delightful, magnificent nighttime lapse.

And this is quite a beautiful instance here, though it has lost a lot of the painting feel.

So I will add a few extra keywords to refine the image and the video closer to what I had in mind.

Now I’ve added the “Illustrated Van Gogh” style.

You can see we’re getting closer to the style we had in mind.

It’s starting to look pretty good, actually.

And it’s certainly a style, an aesthetic choice, that could work really well for crafting a story, a narrative, an entire film.

So the possibilities of using this in your own YouTube videos, short films, music videos, and marketing are really endless.

And if you look at how we rendered the second version of our dancing techno swan, you can see that this time, it’s absolutely beautiful.

It really is quite a mesmerizing scene and gives us a great idea of what is possible.

So let me show you a few more experiments that I had.

Other Examples

In this part of our Runway Gen 2 tutorial, let me show you the macro of an eye.

This is a drone shot of a cyberpunk city.

Here is a nature time-lapse.

It’s interesting that this one actually split the scene.

I didn’t instruct it to do that.

And here’s a man and a goat.

Here are some other examples from other people, and you can see there are some sci-fi spaceships working really well.

This guy Nick Floats has created a number of excellent works that I’m very impressed with.

I’m especially impressed with these abstract neon textures and how he works them into a more coherent narrative.

Here are a few more examples that were particularly impressive.

Here is a jungle river being rendered in excellent detail.

And now a jungle waterfall during the day, and a Desert Oasis.

And these grasses as well as this roaring campfire.

But what I really like about this Grass in the Wind is the attention to detail, the coherence in the image, as well as the lighting.

It really captures the highlights of the individual strands.

RunwayML’s Features

Now there are a couple of other interesting models that you can use inside RunwayML.

Take a look at their preview video: the model lets you use words and images to generate new videos out of existing ones.

In the weeks since launching the model, they’ve been demonstrating how far they have evolved from Gen 1, which was impressive.

But the images and videos coming out were, quite honestly, very abstract, to put it politely.

They were absolute rubbish.

It also has better temporal consistency.

So here he mentions that temporal consistency has improved immensely.

And this relates to how well different frames connect.

So if you have a frame at the start and a few frames later, how well do they actually relate to each other?

So this makes sure that the machine actually creates a video that is coherent and understandable.
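To build some intuition for what temporal consistency means in practice, here’s a crude way you could quantify it yourself; this NumPy sketch is my own illustration, not how Runway measures it:

```python
import numpy as np

# Crude intuition for temporal consistency: compare each frame with the
# next one. A lower average difference between neighbouring frames
# usually means smoother, more coherent motion. This is my own
# illustration, not Runway's actual metric.
def frame_to_frame_change(frames: np.ndarray) -> float:
    # frames: array of shape (num_frames, height, width, channels)
    diffs = np.abs(frames[1:].astype(np.float32) - frames[:-1].astype(np.float32))
    return float(diffs.mean())

# Random data stands in for decoded video frames here.
video = np.random.randint(0, 256, size=(24, 448, 768, 3), dtype=np.uint8)
print(frame_to_frame_change(video))
```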

Next, there’s better fidelity.

Fidelity is related to the exactness or the quality of the video that is being produced.

As results got better and more people gained access, entirely new use cases and displays of creativity were unlocked.

Text-to-Video

And today we’re excited to announce our biggest unlock yet: text to video.

So this is what we all have access to now, the wonderful text-to-video engine with Gen 2.

Now you can generate a video with nothing but words, no driving video, and no input image.

Gen 2 represents yet another major research milestone.

So basically, with Gen 1, you had to use a base image or a base video.

And it would overlay new types of styles to that to get you a new video.

But now we can start with just a text prompt, which is a huge leap forward.

It’s another monumental step forward in generative AI.

With Gen 2, anyone anywhere can realize entire worlds.

So this gives us a whole host of new capabilities and possibilities to generate our own moving images.

As you can see from a lot of these examples, I think the best use case for creating a video is for stylistic interpretations.

So for creating animated, painted, illustrative videos rather than trying to achieve realistic outputs.

And let’s put that into action ourselves and try a few more of the exciting modes available.

Video-to-Video

Another awesome mode to try out is the video-to-video.

And if you come here from the home page, select video-to-video and then upload your own video.

I shall be using a video of me on a toy horse.

What I do in my spare time is none of your business.

So you can scroll down, and you can apply a stylistic image.

So this will overlay the style from one image and put that onto the video.

Let me show you an example of that from RunwayML’s previews.

You can see here, this is the base video.

This was the image they included, and together you have this dancing fireman.

So there is a set of demo images that they also allow you to choose from.

You do have the fire or the cyberpunk options.

I will try the cyberpunk option.

Now you can go down, and you can adjust the style strength.

That’s essentially how much of the image style is applied to the video.

A higher style strength makes the output look more like the style image.
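As a mental model, you can think of style strength as a blend weight between the original frame and a fully stylised one; this linear mix is my own simplification, not Runway’s actual internals:

```python
import numpy as np

# Mental model only: style strength as a blend weight between an original
# frame and a fully stylised frame. A simplification, not Runway's internals.
def apply_style_strength(original: np.ndarray, stylised: np.ndarray,
                         strength: int) -> np.ndarray:
    w = strength / 100.0  # 0 = all original, 100 = all style image
    blended = (1.0 - w) * original.astype(np.float32) + w * stylised.astype(np.float32)
    return blended.astype(np.uint8)
```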

I will leave it on 50 and go ahead and select generate video.

So let’s go back and see how I’m getting on riding my cyberpunk unicorn, and we can take a look.

That is pretty disturbing.

So I then took this image of me throwing an ax and turned it into this.

And you can see that a fairly interesting composition has come out of that.

However, the details and the artifacts do prevent it from being truly usable.

There are also a few other interesting tools inside of RunwayML.

Masking Different Areas

One is that you can mask different areas.

For example, you can take this little Golden Retriever and turn him into a Dalmatian.

So if you’re bored with your dog, you can change it to a different breed.

You can also take untextured renders and apply textures and extra atmosphere to your scenes.

For example, here they’ve taken this swimming gentleman and added bubbles and colors.

So I saw this one example on Instagram of this guy dancing and being turned into a Greek god.

And you can see it’s worked really well.

Removing Background

There are other AI-powered features inside of RunwayML that are also worth checking out.

For example, the removal of the background, which automatically identifies the subject and removes them from the background.

This can be great for actually building out scenes and rendering them separately and then combining them later.
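For instance, once the background is removed, you can composite the cut-out subject over any new scene; here’s a minimal sketch with Pillow, where the file names are placeholders:

```python
from PIL import Image

# Minimal sketch of "render separately, combine later". After background
# removal, you have a subject with transparency that can be pasted over
# any new scene. File names are placeholders.
subject = Image.open("subject_no_background.png").convert("RGBA")
scene = Image.open("new_background.png").convert("RGBA")

scene.alpha_composite(subject, dest=(100, 50))  # position the subject
scene.save("combined_scene.png")
```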

Text to Image Generator

You also have the option to use a text-to-image generator.

So if you want to build your own images and use those within your video generation, it’s entirely possible inside of RunwayML.

Runway Pricing

And if you do want to upgrade, they have a pretty handsome option for a standard plan.

It’s $12 a month, and you get 625 credits a month, which is roughly 625 seconds of video.

You’re able to buy more credits on a pay-as-you-go basis.

And you get to upscale the resolution and remove watermarks, which are great features for allowing you to actually use your work commercially.

Of course, you get up to 15 seconds per generation, unlimited projects, and 100GB of asset storage.

And up to seven people can edit the same project.

If you go all the way up to the Pro plan, you get 2,250 credits and 500GB of asset storage, and that’s the main difference between the plans.
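Taking those plan numbers at face value, along with the rough one-credit-per-second rate mentioned above, the back-of-envelope cost works out like this:

```python
# Back-of-envelope cost per second on the standard plan, using the
# numbers quoted above (~1 second of video per credit).
standard_price = 12.00   # USD per month
standard_credits = 625   # credits per month

cost_per_second = standard_price / standard_credits
print(f"~${cost_per_second:.3f} per second of video")  # ~$0.019
```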

There is also an iOS app version of RunwayML.

So you can actually do this on your iPhone.

Reviews of Runway

It has some mixed reviews.

Some people are blown away, wow.

I must say I was blown away.

Oh, somebody needs to get out of the house a bit more.

This app is truly a game-changer in the world of video creation.

Somebody was definitely paid to write a review.

‘No, first up, it takes forever just to set up the app.

You also go through an endless registration process where I am asked a million questions.

Three of my emails were denied because of not being accepted.’

You can see that this person is just rather difficult.

Final Thoughts

So do try the iOS app out if you want to use it on your phone.

I can’t wait to see where this technology goes.

I’ve also just started a newsletter.

So if you’re interested in getting weekly updates on the latest AI news from yours truly, make sure to sign up below, and thanks for watching.

If you enjoyed this article on Runway Gen 2 tutorial, you might want to check out the other articles below.

Can You Copyright AI-Generated Content?

AI Digital Product Ideas: Niche Opportunities & Maximizing Profits

10Web AI Website Builder Tutorial
