ControlNet in Stable Diffusion | Google Colab One-Click

Learn how to install the ControlNet extension for Stable Diffusion using Google Colab with this easy-to-follow tutorial.

Stable Diffusion’s new extension ControlNet gives you complete control over composition so you can put your characters in exact positions.

I’m going to show you how to use this new evolution in Stable Diffusion.

How to Install ControlNet

1. Open Google Colab

First, open this link and make sure to click Open with Google Colaboratory.

You can save a copy to your own Drive by going to File > Save a copy in Drive.

I believe this makes the notebook more stable to work with.

From here, the first step is to select the Stable Diffusion model we want to run.

I suggest using Anything V3.

2. Delete # From Your Chosen Model

To activate your chosen model, all we have to do is remove the # at the start of the four lines of code underneath it.

In other words, uncomment the curl and mv commands.
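The exact contents change between notebook versions, but the pattern looks something like the sketch below. The URL and file names here are placeholders, not the notebook’s real values; only the curl/mv shape is the point.

```python
# Illustrative Colab cell, not the notebook's exact contents.
# While a model is inactive, its lines are commented out:
#!curl -Lo anything-v3.safetensors https://example.com/anything-v3.safetensors
#!mv anything-v3.safetensors /content/stable-diffusion-webui/models/Stable-diffusion/

# Removing the leading # turns them back into shell commands that
# download the checkpoint and move it into the web UI's model folder
# (folder path assumed from the standard web UI layout):
!curl -Lo anything-v3.safetensors https://example.com/anything-v3.safetensors
!mv anything-v3.safetensors /content/stable-diffusion-webui/models/Stable-diffusion/
```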

3. Run the Cell

Next up, we come to the top left-hand corner and click on the play button to run the cell.

4. Click the Public Link

Wait for two links to appear at the bottom of the cell’s output.

One says “Running on local URL” and the other says “Running on public URL”.

You’re ready to roll: go ahead and click the public URL.

This will open Stable Diffusion in a new window with ControlNet installed.
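As an aside, those two links come from Gradio, the UI framework the web UI is built on. Here’s a minimal sketch of how launch(share=True) produces a public URL in any Gradio app; this is not the web UI’s actual code, just an illustration:

```python
import gradio as gr

def echo(text):
    # Trivial function just so there is something to serve
    return text

# share=True asks Gradio to open a temporary public tunnel, which is
# exactly what prints the "Running on public URL" line in Colab.
gr.Interface(fn=echo, inputs="text", outputs="text").launch(share=True)
```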

Now we’ve got ControlNet loaded.

What Is ControlNet

Let’s give you a better understanding of what it is and what it does.

ControlNet includes a set of pre-trained models that provide better control over the image-to-image process.

These include models for:

  • edge or line detection
  • boundary detection
  • depth information
  • sketch processing
  • human pose
  • semantic map detection.

This means that ControlNet has a series of models specifically trained for the image-to-image mode in Stable Diffusion.

So, it can tell where the boundary of a subject is.

It can also estimate how much depth there is in an image, meaning the distance between the foreground and the background, and understand how objects relate to one another in three-dimensional space within a scene.

Beyond that, it has a model specifically for human poses as well as interpreting sketches drawn by hand.
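The Colab notebook wires these models into the web UI for you, but for reference, the same family of checkpoints is published on Hugging Face. Here’s a minimal sketch of loading a few of them with the diffusers library (assuming a recent diffusers version with ControlNet support installed):

```python
from diffusers import ControlNetModel

# Each ControlNet capability is a separate pre-trained checkpoint.
canny = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")        # edge detection
depth = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth")        # depth estimation
openpose = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose")  # human pose
scribble = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")  # hand-drawn sketches
```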

Here’s an example of a hot air balloon sketch being rendered, and the same with a dog.

This is using the scribble mode.

And you can see again a basic turtle has been turned into a fine aesthetic masterpiece.

There is also an architectural mode for building design.

And here is a human pose model where you can input a pose and get characters in exactly the same position.
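Under the hood, a pose detector first turns the reference photo into a stick-figure pose map, and generation is conditioned on that map. Here’s a minimal programmatic sketch using the controlnet_aux helper package; the checkpoint id follows the diffusers documentation and the image URL is a placeholder, so treat both as assumptions:

```python
from controlnet_aux import OpenposeDetector
from diffusers.utils import load_image

# Checkpoint id as given in the diffusers docs; assumed, not from this tutorial.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# Any photo of a person works as the pose reference (placeholder URL).
person = load_image("https://example.com/person.png")

pose_map = openpose(person)  # the stick-figure image ControlNet conditions on
pose_map.save("pose_map.png")
```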

How to Use It

1. Go to img2img

The first thing to do is go to the img2img tab.

Scroll down to the ControlNet panel and open it up.

2. Upload an Image

Insert an image into the ControlNet panel.

I’ll be using my lovely face.

3. Enable ControlNet

And we can then enable ControlNet using the checkbox below.

4. Select Models

Select the model we want to use.

There are various models here, including depth, which looks at the relationship between objects in 3D space, openpose for human positioning, and scribble for artistic sketches.

Select the corresponding model.

Scroll back up.

5. Enter a Prompt

Add the same image to img2img and type in your text prompt.

6. Generate

Go ahead and generate.

Sit back.

Ponder the meaning of life once more.

Now you’ll get two images out.

The first is the image generated from your new text prompt, and the second is a map produced by Canny edge detection, which lets the algorithm understand the structure of the image in more detail.

This is what gives us greater control over our images.
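If you’re curious what that map actually is, you can reproduce one yourself: the canny preprocessor is standard Canny edge detection, which OpenCV implements. A minimal sketch; the file path and thresholds are illustrative, not the extension’s exact defaults:

```python
import cv2

# Load the same image you gave ControlNet (placeholder path).
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Canny edge detection: the low/high thresholds control edge sensitivity.
edges = cv2.Canny(img, 100, 200)

cv2.imwrite("canny_map.png", edges)
```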

7. Adapt Prompt and Repeat

Now we can go ahead and update the prompt.

I’m going to add glasses.

And you can see this time I’ve got glasses.

Final Thoughts

I hope you enjoyed this tutorial.

If you did, check out this one on the best Stable Diffusion negative prompts and stick around for more on AI art.

I’m Samson Vowles, this is Delightful Design.

If you liked this tutorial, you may want to check out our other reads below.

How to Create Video With AI

Lensa AI Avatars: A Complete Guide

Install Stable Diffusion 2.0 On Google Colabs In 1 Minute (Run For FREE with Web UI)
