This article takes you through, in depth, how to set up a Stable Diffusion environment and how to complete a character illustration.
This article is divided into two main parts:
1. Software (how to set up a Stable Diffusion environment).
2. Hands-on practice (how to complete a character illustration).
For the software part, we recommend some excellent tutorials by Bilibili creators, whose explanations are very detailed.
We will first walk you through the three concepts of model, VAE, and LoRA. The hands-on part then goes through the full workflow: sketching, generating AI drawings in bulk, making local modifications (AI iterations), and revising again.
1. Software
To prepare, you will need: the Stable Diffusion local package, a Stable Diffusion model, a VAE, and LoRA files.
1. Stable Diffusion local package
Here we recommend the one-click Stable Diffusion package by the Bilibili creator 秋葉aaaki, as shown in the image below.
2. How to find a Stable Diffusion model
Most models can be found on Civitai, so look there first. (Click the eye icon in the upper-right corner to toggle 18+ mode on and off.) If you can't find what you need, try huggingface.co; it has more resources, but it is not as convenient to use as Civitai.
3. How to choose a model
The advantage of Civitai is that you can directly see sample renders and images posted back by other users. You can even click the exclamation mark in the lower-right corner of an image to see most of the parameters used to generate it, which can be copied straight into Stable Diffusion; this greatly streamlines the drawing process.
You can also reverse-search for a model from an existing image.
Open Stable Diffusion, go to the PNG Info tab, and drag the original image in; all of the image's generation information appears on the right.
The blue box is the prompt, the yellow box is the negative prompt, and the red box is the model hash (i.e., the model identifier) we need.
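Incidentally, if you ever want to read this information in a script rather than in the WebUI, the WebUI stores it in a PNG text chunk named "parameters", which Pillow can read. A minimal sketch (the file name is illustrative):

```python
from PIL import Image

# WebUI embeds the prompt, negative prompt, sampler, seed,
# and model hash in a PNG text chunk named "parameters".
img = Image.open("generated.png")
print(img.text.get("parameters", "no generation info found"))
```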
4. The model VAE
The VAE enhances the color information of the image; if an image looks gray or washed out, it is usually a VAE problem. Most models come with a recommended VAE. If no VAE is specified, the default one generally works.
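This article works in the WebUI, but if you script your generations with the diffusers library instead, swapping in a standalone VAE looks roughly like this (both model IDs are common public checkpoints, used here only as examples):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# A standalone VAE; sd-vae-ft-mse is a common fix for washed-out
# colors on SD 1.5-family models.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

# Attach it in place of the checkpoint's built-in VAE.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")
```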
5. What is LoRA
A LoRA is used to stabilize the elements of your image: if you want a specific character, outfit, and so on to appear, you need the corresponding LoRA. You can think of a LoRA as something that locks those elements in place.
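In the same diffusers terms, loading a LoRA and dialing its weight down (as we will do in the WebUI later) might look like the sketch below; the file name and prompt are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights from a local .safetensors file (placeholder name).
pipe.load_lora_weights(".", weight_name="my_character_lora.safetensors")

# Scale the LoRA's influence down to 0.5 instead of the full 1.0.
image = pipe(
    "1girl, holding a sword",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```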
2. Hands-on practice
1. Sketching
The important thing here is to outline the pose and shapes you need as clearly as possible; the lines must be dark and clean enough, or Stable Diffusion will not recognize them. You can use the Levels adjustment in Photoshop to clean up the line drawing.
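If you would rather clean up the sketch in code than in Photoshop, a rough equivalent of a Levels adjustment can be done with Pillow; the thresholds below are illustrative and depend on your scan:

```python
from PIL import Image

# Load the sketch and convert to grayscale.
sketch = Image.open("sketch.png").convert("L")

BLACK, WHITE = 100, 200  # illustrative Levels thresholds

def levels(value):
    """Push dark pixels to pure black and light pixels to pure
    white so the line art reads as clean, dark strokes."""
    if value <= BLACK:
        return 0
    if value >= WHITE:
        return 255
    return int((value - BLACK) * 255 / (WHITE - BLACK))

sketch.point(levels).save("sketch_clean.png")
```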
2. Keywords
Keywords come in two types: positive and negative. Positive keywords can be organized by head, upper body, limbs, and so on, which keeps them clear. Describe the desired pose (such as holding a sword in one hand) and the material and color of the clothes, and add parentheses to increase the weight of the important terms.
Finally, add some quality and style complements (solo, highest quality, realistic, etc.). Since the model already determines the style, we don't need many style descriptors.
Negative keywords describe what you do not want to appear in the image. Here is a fairly versatile set of negative keywords: (bad-artist:1.0), (loli:1.2), (worst quality, low quality:1.4), (bad_prompt_version2:0.8), bad-hands-5, lowres, bad anatomy, bad hands, (text), (watermark), error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, (username), blurry, (extra limbs)
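Put together, a prompt pair along these lines might look like the sketch below; every positive tag is a hypothetical example, and the weights use the WebUI's (term:weight) emphasis syntax:

```python
# Hypothetical positive prompt: quality tags, subject, pose,
# then clothing, with (term:weight) raising the emphasis.
positive = (
    "masterpiece, best quality, solo, realistic, 1girl, "
    "(holding a sword:1.2), long black hair, "
    "(traditional dress:1.1), red and gold fabric"
)

# The general-purpose negative prompt from above.
negative = (
    "(bad-artist:1.0), (loli:1.2), (worst quality, low quality:1.4), "
    "(bad_prompt_version2:0.8), bad-hands-5, lowres, bad anatomy, "
    "bad hands, (text), (watermark), error, missing fingers, "
    "extra digit, fewer digits, cropped, worst quality, low quality, "
    "normal quality, (username), blurry, (extra limbs)"
)
```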
3. Generation settings
Copy the generated descriptors into the prompt box, open the LoRA tab, and click the LoRA we want; a LoRA keyword is automatically added to the positive prompt. Change the trailing 1 in that phrase to 0.5 to reduce its weight.
In the upper-left drop-down menu, select the model you need; this time we use the GuoFeng3 checkpoint. Select DPM++ SDE Karras as the sampler, set the number of steps to 30, and set the generation batch to 6.
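For reference, the same settings expressed as a diffusers script might look like this; the GuoFeng3 repo ID is an assumption about where that checkpoint lives, and the prompts are placeholders:

```python
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "xiaolxl/GuoFeng3", torch_dtype=torch.float16  # assumed repo ID
).to("cuda")

# diffusers' equivalent of the WebUI's "DPM++ SDE Karras" sampler.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)

images = pipe(
    prompt="1girl, holding a sword",          # placeholder
    negative_prompt="worst quality, lowres",  # placeholder
    num_inference_steps=30,                   # 30 steps, as above
    num_images_per_prompt=6,                  # generation batch of 6
).images
```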
4. ControlNet settings and application
Scroll to the bottom of the page and open the ControlNet panel. Drag the line art in, click Enable, check Pixel Perfect next to it, then check Allow Preview, and select the LineArt preprocessor together with the corresponding model.
In newer versions of Stable Diffusion this can be chosen directly; just pick the LineArt option. (Note: this requires the matching ControlNet model.)
Click the small preview icon, and we can see the line art the preprocessor generates from our sketch. Click the small arrow below the preview image, and the generation width and height are set to match the line drawing's resolution.
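The WebUI steps above can also be reproduced in a script. A minimal sketch with diffusers and controlnet_aux, assuming the standard public LineArt annotator and ControlNet model:

```python
import torch
from controlnet_aux import LineartDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Preprocess the sketch into clean line art, like the WebUI's
# LineArt preprocessor preview.
processor = LineartDetector.from_pretrained("lllyasviel/Annotators")
control_image = processor(load_image("sketch_clean.png"))

# The matching LineArt ControlNet model for SD 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "1girl, holding a sword",  # placeholder prompt
    image=control_image,
    num_inference_steps=30,
).images[0]
```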
5. Batch generation
Click Generate to start the batch. Here I tried several different checkpoints to produce different styles.
Here we can see that ControlNet's LineArt recognizes the pose and clothing shapes of our sketch well; pick the result you are most satisfied with and move on to the next step.
Note: if your computer's hardware is relatively weak, you can downscale the sketch before loading it for line-art detection. In this step we are mainly judging the overall feel of the image; blur or local bugs need no special attention, since they will be fixed in the refinement steps that follow.
6. img2img on the draft
Next, select a picture we are reasonably satisfied with and click the img2img button below the preview. Once inside, you can see that the corresponding parameters have been configured automatically. What we need to do is re-select the DPM++ SDE Karras sampler and then increase the height and width. The denoising (redraw) strength is best kept around 0.4, and the number of sampling steps can be raised slightly, to roughly 30-50. Click Generate and proceed to the fine-generation stage.
Note: if your computer's hardware is weak, do not raise the height and width too much at once.
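Scripted with diffusers, this refinement step might look roughly like this; the prompt and file names are placeholders, while the strength of 0.4 and the 30-50 step range come from the article:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Upscale the chosen draft before redrawing it gently.
draft = Image.open("chosen_draft.png")
draft = draft.resize((draft.width * 2, draft.height * 2))

refined = pipe(
    prompt="1girl, holding a sword, highly detailed",  # placeholder
    image=draft,
    strength=0.4,            # recommended redraw amplitude
    num_inference_steps=40,  # within the suggested 30-50 range
).images[0]
refined.save("refined.png")
```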
Here, after the refined image was generated, I adjusted it in Photoshop, cropped the areas that needed more detail, and then enlarged and regenerated them.
7. Refinement
Finally, we can focus on the face. A recommended plug-in here is Face Editor, which can effectively repair collapsed faces, strange hair, and the like. Remaining bugs can then be fixed in Photoshop.
For example, if the hand structure is wrong, we can paint over the AI-generated image by hand, or make a depth map and feed it back into the AI for further generation; this varies from person to person, so take whichever approach suits you best. Also adjust the image's lighting and overall atmosphere. Since this is just a small case study, it has only been roughly repaired.
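Face Editor is a WebUI extension, so there is no single diffusers call for it, but its effect is close to masked inpainting over the face region. A rough sketch, with placeholder file names and prompt:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# White pixels in the mask mark the face region to repaint;
# black pixels are preserved.
image = Image.open("refined.png")
mask = Image.open("face_mask.png")

fixed = pipe(
    prompt="beautiful detailed face, detailed eyes",  # placeholder
    image=image,
    mask_image=mask,
).images[0]
fixed.save("face_fixed.png")
```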
As you can see, the overall pose and the clothing remain quite faithful to the sketch. With ControlNet's LineArt, we can precisely control the characters' poses and clothing in the image to better match our needs.
That is how to quickly generate a fixed pose with Stable Diffusion. If you have other quick methods, please share them in the comments.