How to use stable diffusion for brand visual extension

When working on brand-vision projects, designers often need to build thematic or stylized artwork around brand visual symbols. This article shares how to use Stable Diffusion to assist with extended logo design for brand visuals.
Environment setup
Before designing, we need to download the ControlNet QR Code Monster model, which lets us precisely control the logo shape within the generated image.
Model download link: https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster
From that page, download the two files control_v1p_sd15_qrcode_monster.safetensors and control_v1p_sd15_qrcode_monster.yaml.
Place both files in the extensions\sd-webui-controlnet\models folder of your WebUI installation.
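If you prefer to fetch the files from the command line, the download can be sketched in Python. The WebUI folder name and the Hugging Face resolve URL below are assumptions; adjust them to your own installation.

```python
from pathlib import Path
from urllib.request import urlretrieve

# Assumed locations -- change WEBUI_DIR to point at your WebUI install.
HF_BASE = "https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster/resolve/main"
WEBUI_DIR = Path("stable-diffusion-webui")
MODEL_DIR = WEBUI_DIR / "extensions" / "sd-webui-controlnet" / "models"
FILES = [
    "control_v1p_sd15_qrcode_monster.safetensors",
    "control_v1p_sd15_qrcode_monster.yaml",
]

def download_targets():
    """Return (url, destination) pairs for the two model files."""
    return [(f"{HF_BASE}/{name}", MODEL_DIR / name) for name in FILES]

if __name__ == "__main__":
    MODEL_DIR.mkdir(parents=True, exist_ok=True)
    for url, dest in download_targets():
        print(f"Downloading {url} -> {dest}")
        urlretrieve(url, dest)  # requires network access
```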


Material preparation

After setting up the environment, we move on to preparing materials. Gather the original brand-symbol artwork you want to extend, along with keywords describing the intended visual style.

Taking the Apple logo as an example, the source image is 512x512 pixels; adjust the size to your own needs. To help ControlNet recognize the control structure, convert the material to pure black and white.
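The black-and-white conversion can be done in any image editor; as a sketch, here is one way to do it with Pillow (the threshold value and file names are illustrative, not from the original tutorial):

```python
from PIL import Image

def prepare_control_image(img: Image.Image, size: int = 512,
                          threshold: int = 128) -> Image.Image:
    """Resize a logo and reduce it to pure black and white for ControlNet."""
    gray = img.convert("L").resize((size, size))
    # Map every pixel to pure black (0) or pure white (255).
    bw = gray.point(lambda p: 255 if p >= threshold else 0)
    return bw.convert("RGB")

# Example usage (hypothetical file names):
# prepare_control_image(Image.open("apple_logo.png")).save("apple_logo_bw.png")
```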


Prompt conception
After preparing the materials, open the Stable Diffusion txt2img interface.

Here, epicrealism_pureEvulutionV5 is used as the example Stable Diffusion checkpoint. If you don't have this model, any other photorealistic model will work.


We take a hardware theme as an example for visually extending the Apple brand, brainstorming prompt keywords around it such as computer, chip, space, and multi-dimensional. Enter the prompt in the text box:

geometric multidimensional space, many three-dimensional light emitting blocks, blue light and black background, with a glowing polygonal computer crystal chip,


If you can't come up with suitable keywords, collect some relevant design references online, load a reference image in the img2img interface, click Interrogate CLIP to infer prompt words, then copy the useful ones into the txt2img prompt.
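If the WebUI was launched with the --api flag, the CLIP interrogation step can also be driven programmatically. This is a sketch, assuming a locally running instance and the standard /sdapi/v1/interrogate endpoint:

```python
import base64
import json
from urllib.request import Request, urlopen

# Assumed local WebUI address (launched with --api).
API = "http://127.0.0.1:7860/sdapi/v1/interrogate"

def build_interrogate_payload(image_bytes: bytes) -> dict:
    """Base64-encode a reference image for the CLIP interrogator endpoint."""
    return {"image": base64.b64encode(image_bytes).decode("utf-8"),
            "model": "clip"}

def interrogate(image_path: str) -> str:
    """Send a reference image and return the inferred prompt caption."""
    with open(image_path, "rb") as f:
        payload = build_interrogate_payload(f.read())
    req = Request(API, data=json.dumps(payload).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:  # requires a running WebUI
        return json.loads(resp.read())["caption"]
```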

 

Parameter debugging
After entering the prompt, we move on to generation and tuning. Open the ControlNet panel, drag the black-and-white logo image into ControlNet, and check the "Enable", "Pixel Perfect", and "Allow Preview" options.
ControlNet 1.1 adds Pixel Perfect mode, which automatically computes the optimal preprocessor resolution so the control image matches the Stable Diffusion output exactly.
If this option is not available, go to the Extensions tab, click Install from URL, paste the extension's git repository URL below, and restart the UI after installation.
https://github.com/Mikubill/sd-webui-controlnet


Select invert as the preprocessor, select control_v1p_sd15_qrcode_monster as the model, and keep the other options at their defaults. Click the explosion icon between the two dropdowns to preview the inverted result. If you want the final logo graphic to read as mainly dark, set the preprocessor to None instead.


In the SDXL Styles panel, check "Enable Style Selector" and choose "Digital Art" as the style.

This selector provides preset style options for different looks. If the panel is not available, install the extension from the following URL in the same way as above.

https://github.com/ahgsql/StyleSelectorXL


Set the sampling steps, sampling method, width and height, and the CFG scale (prompt guidance coefficient). Width and height are set to 512 here so that different candidate images can be generated quickly.
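For readers scripting the WebUI instead of clicking through it, the txt2img settings can be sketched as an API payload. This assumes a WebUI launched with --api; the ControlNet unit is passed via alwayson_scripts, and the sampler, step count, CFG value, and module name below are illustrative assumptions to verify against your ControlNet version:

```python
def build_txt2img_payload(prompt: str, control_image_b64: str) -> dict:
    """Sketch of a txt2img request mirroring the workflow above.

    Field names follow the AUTOMATIC1111 --api schema; the concrete
    values (steps, sampler, cfg_scale) are placeholders, not the
    article's exact settings.
    """
    return {
        "prompt": prompt,
        "steps": 25,                # sampling steps (tune to taste)
        "sampler_name": "Euler a",  # assumed sampler
        "width": 512,
        "height": 512,
        "cfg_scale": 7,             # prompt guidance (CFG) coefficient
        "batch_size": 4,            # several candidates per batch
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": control_image_b64,
                    # Preprocessor name varies by ControlNet version.
                    "module": "invert (from white bg & black line)",
                    "model": "control_v1p_sd15_qrcode_monster",
                    "weight": 1.0,
                    "pixel_perfect": True,
                }]
            }
        },
    }
```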


After setting the parameters, click Generate and start producing batches of candidate images.
Pick the generated image that best matches your intended direction; here we take the first image in the upper-left corner as an example.
Copy that image's seed into the Seed field, then use high-resolution fix to upscale it. In the Hires. fix panel, choose the ESRGAN_4x upscaler and set the hires steps, denoising strength, and upscale factor. Hires steps are generally kept between 10 and 20; above 20 steps, deformities may appear. Denoising strength is generally between 0.3 and 0.8; adjust it based on the actual result, increasing it when you want larger changes.
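Those rules of thumb (10-20 hires steps, 0.3-0.8 denoising strength) can be captured in a small helper. The returned field names mirror the WebUI API schema and are assumptions to verify; the ranges come from the guidance above:

```python
def check_hires_settings(steps: int, denoise: float,
                         upscaler: str = "ESRGAN_4x") -> dict:
    """Validate Hires. fix settings against the rules of thumb above."""
    if not 10 <= steps <= 20:
        raise ValueError("hires steps above 20 risk deformities; stay in 10-20")
    if not 0.3 <= denoise <= 0.8:
        raise ValueError("denoising strength is usually kept in 0.3-0.8")
    # Assumed AUTOMATIC1111 API field names for a Hires. fix pass.
    return {
        "enable_hr": True,
        "hr_upscaler": upscaler,
        "hr_second_pass_steps": steps,
        "denoising_strength": denoise,
    }
```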


Click Generate to produce the upscaled image.


Lightly edit the generated image in Photoshop to add more glowing crystal elements to the center of the composition.


We switch to img2img and load the Photoshop-edited image for a second generation pass. Enter the same prompt used in txt2img, and set the sampling steps, sampling method, width and height, and denoising strength.
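The img2img pass can likewise be sketched as an API payload, again assuming a WebUI launched with --api. Field names follow the AUTOMATIC1111 schema; the concrete step count and sampler are placeholders:

```python
import base64

def build_img2img_payload(prompt: str, image_bytes: bytes,
                          denoise: float = 0.5) -> dict:
    """Sketch of an img2img request for the second pass.

    `image_bytes` is the raw content of the Photoshop-edited image;
    values other than width/height 512 are illustrative.
    """
    init_image = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "prompt": prompt,           # same prompt as the txt2img pass
        "init_images": [init_image],
        "steps": 25,
        "sampler_name": "Euler a",
        "width": 512,
        "height": 512,
        "denoising_strength": denoise,  # raise for larger changes
    }
```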
The ControlNet parameters remain basically the same as in the previous step.


After setting the parameters, click Generate. During this stage, adjust the denoising strength based on the results until a satisfying image comes out.


The final effect.
Other Topics
Using the same method, you can try different design themes. Let's take the environment as the theme and enter a desert-related prompt, such as: Golden sandstorms and floating sand on a black background, ray traced image,
Set basic parameters.


For Style, you can choose cinema.


When setting the ControlNet parameters, you can increase the Control Weight to make the logo outline more prominent.


After completing the parameter settings, generate batches of candidate images.


We select the image on the right to take forward, make simple adjustments to the bottom and head structure in Photoshop, and replace the flames with dust effects.
After the Photoshop edits, switch to img2img, load the processed image, and use the same prompt and parameter settings as before.
Then click Generate to obtain a final image that meets your expectations.


With the key visual in hand, extended applications follow naturally: combine it with informational content and adapt the visuals to each application context.


Conclusion
The above is our workflow for using Stable Diffusion to assist brand visual extension. The overall approach breaks down into six steps: take the brand visual symbol as the core >> refine theme and style keywords >> use ControlNet to preserve brand recognizability >> select images that match the direction >> refine and composite with img2img >> complete the final brand key visual.
AI saves a great deal of production time, letting us reach our goals faster and leaving more room to explore form and content. AI tools set the floor of the output, while the designer's imagination sets the ceiling. Use AI wisely, give full play to your imagination, and try every possibility!