
BEST Negative Prompts For Stable Diffusion 2.0

 

Do you want to learn how to use negative prompts in a Stable Diffusion Web UI?

In this article, we’ll explain how to use negative prompts and why they’re so important.

Stable Diffusion 2 Prompts

You need to use negative prompts because they can take you from images like this.
To images like this.
I’ve collected a list of some of the best negative prompts that you can use.
These come from Emad, from the Reddit community, and from a few of my own suggestions.
Including words like disfigured, kitsch, ugly, oversaturated, low-res, deformed, and blurry tells Stable Diffusion what not to include.
And this is having a dramatic effect on the quality of images coming out of Stable Diffusion 2.
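If you want something you can paste right away, here is a starting point built from the words mentioned above. Treat it as a minimal starting point rather than the complete list, and add or remove terms to suit your own images:

```text
disfigured, kitsch, ugly, oversaturated, low-res, deformed, blurry
```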

How to Use Negative Prompts

To use negative prompts in a web UI like this one, all you have to do is copy the list and paste it into the negative prompt text box.
And then you can go ahead and generate your image.
And you can see it goes from that all the way up to this.
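If you prefer working in code rather than a web UI, the same idea applies. Here is a minimal sketch using the Hugging Face diffusers library, where the negative_prompt argument plays the role of the negative prompt text box. The model ID, prompt, and settings are just illustrative assumptions, not anything specific to this article:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion 2 checkpoint (assumed model ID; swap in whichever one you use).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# The negative prompt does the same job as the negative prompt box in a web UI.
negative_prompt = "disfigured, kitsch, ugly, oversaturated, low-res, deformed, blurry"

image = pipe(
    prompt="portrait photo of an astronaut, detailed, natural lighting",  # example prompt
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("with_negative_prompt.png")
```

Generating the same prompt once with and once without negative_prompt is an easy way to see the kind of before-and-after difference described above.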

Importance of Negative Prompts

The reason negative prompts carry so much more importance in Stable Diffusion 2 is that the training data was deduplicated and the latent space flattened, so negative prompts and prompt weighting have a huge impact.
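As an aside on weighting: if the web UI you use happens to be AUTOMATIC1111’s, its emphasis syntax is one common way to weight terms in both positive and negative prompts. This syntax belongs to that UI rather than to Stable Diffusion itself, so check your own UI’s documentation:

```text
(keyword)         slightly increases the weight of "keyword"
[keyword]         slightly decreases it
(keyword:1.4)     sets an explicit weight of 1.4
(blurry:1.3), (deformed:1.3)   heavier weights inside a negative prompt
```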
Many people have been complaining that Stable Diffusion 2 is a downgrade: anatomy has taken a step backward, celebrities are gone, and the removal of nudity has angered many users.
But it has definitely taken a step forward in other ways.
I think the detail and coherence of the images are much better, and still lifes and landscapes have vastly improved.
And it’s becoming easier to train the model on your own data sets.
Somebody has already trained the model on a few images of Taylor Swift and is getting some excellent results.
Here is an example image.
It’s absolutely stunning.
Emad Mostaque, the father of Stable Diffusion himself, has been saying, “Like actually try it folks, it’s only going to get better from here, each of these was generated in a couple of seconds,” alongside some really beautiful examples of work from Stable Diffusion 2.
Many of the benefits of Stable Diffusion also bring corresponding difficulties.
For example, the fact that it’s open source and can be easily modified, upgraded, and changed means there are endless possibilities for adding and removing functionality.
But it also makes it harder to create a coherent user experience, or to create good learning materials that carry over from one platform, or one version of Stable Diffusion, to another.
So in one way you’re gaining with one hand and losing with the other.

Final Thoughts

I’m a great fan of this technology and what Stability AI has been creating.
And I’m only expecting it to improve.
So remember to use those negative prompts.
Here’s a video on the quickest way to install Stable Diffusion 2.
And here’s a video looking at all of the other new features available in this version.
I hope you enjoyed this article.
Why not stick around for more AI art news?
I’m Samson Vowles, this is Delightful Design.
I hope you have a delightful day.
Do check out these articles for more interesting reads.
