-
Getting Started with Stable Diffusion: AI Painting with LiblibAI's Free Online Native Interface
I've found that many friends have never tried AI painting before. To help you understand AI painting quickly, step through the door as soon as possible, and start your AI painting journey, I felt it was necessary to write an article for beginners. It isn't only for beginners, though: some readers' computers aren't powerful enough to set up a local SD environment, and some content creators don't care much about the tool itself or its technical details; they focus on creativity and expression, and simply want to generate the images they have in mind. For these readers, I generally recommend a beginner-friendly AI...- 5.1k
-
AI painting tutorial: use Stable Diffusion to turn pictures into illustrations in one minute
Previously I shared methods for converting images into line drawings, and I will continue to share new line-drawing practices in the future. Today we move on to another kind of image conversion: turning pictures into illustrations. 1. How to turn a picture into an illustration [Step 1]: Choose a large model. Recommended here: Vientiane Melting Furnace | Anything V5/V3. Model download address (you can also get the network disk address at the end of the article) LiblibAI: https://www.liblib.art/modelinfo/1f26c86ea6a8442c856…- 5.1k
-
Stable Diffusion line-drawing conversion: a tutorial on turning architectural pictures into line drawings
On converting images to line drawings, I have already shared two approaches, one via text-to-image and one via image-to-image: Stable Diffusion [Application] [Line Drawing Conversion]: turn an image into a line drawing in one minute, and Stable Diffusion [Application] [Line Drawing Conversion]: turn an image into a line drawing in one minute without LoRA. Neither method performed well on architectural images, so today I will share my own practice aimed specifically at line-drawing conversion of architectural pictures. If you have good ideas for implementation, leave a message or add me on WeChat to discuss privately…- 9.4k
-
Quickly generate AI hand-drawn drafts: use Stable Diffusion to convert images into line drafts
In the previous article I showed how to convert an image into a line drawing with text-to-image: Stable Diffusion [Application] [Line Drawing Conversion]: turn an image into a line drawing in one minute. The same conversion can also be achieved with image-to-image. Let's first look at the result of the image-to-image approach. 1. How to turn a picture into a line drawing [Step 1]: Upload a pure-white-background version of the picture. Here we use Meitu Xiuxiu to make a pure white background image, then upload it in the image-to-image function menu. [Step 2]: Choose a large model. Recommended here...- 20.5k
-
What software can convert photos into line drawings? Use Stable Diffusion to convert pictures into line drawings in one minute
Today we begin the image-conversion series of the application chapter. Image conversion has many application scenarios, such as image-to-line-drawing, image-to-illustration, image-to-comic, and so on, and it is one of the more common practical uses of AI painting. We start with image-to-line-drawing. 1. How to turn a picture into a line drawing [Step 1]: Choose a large model. Recommended here: ReVAnimated, version v122. Model download address (you can also get the network disk address at the end of the article) LiblibAI: https://www.libl…- 8.2k
-
Stable Diffusion painting tutorial: a detailed look at the text-to-image settings - random seed
When drawing with Stable Diffusion, we often run into these problems: (1) after many attempts we finally get a picture we like, but some parts have flaws, and overall satisfaction only reaches 95%; (2) after many attempts we finally get a picture we are very happy with and want several more with a similar effect, but since SD generates images randomly by default, regenerating may not give us the elements we want. For both scenarios,…- 5.7k
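The seed's role described here can be illustrated with a small sketch. This is plain NumPy and purely illustrative (SD itself uses the seed to initialize the latent noise tensor that denoising starts from, via its own RNG): the same seed reproduces the same starting noise, so with an identical prompt and settings the same image comes out, while a different seed gives different noise and a different image.

```python
import numpy as np

def make_latent(seed: int, shape=(4, 64, 64)) -> np.ndarray:
    """Draw the initial Gaussian noise a diffusion sampler would start from."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

# Same seed -> identical starting noise -> (with identical settings) identical image.
a = make_latent(1234)
b = make_latent(1234)
c = make_latent(5678)

print(np.array_equal(a, b))  # True: reusing a seed reproduces the noise
print(np.array_equal(a, c))  # False: a different seed gives different noise
```

This is why, for both scenarios above, the first step is to fix the seed of the picture you are nearly satisfied with; with the seed pinned, you can tweak the prompt or other settings while the rest of the picture stays stable.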
-
How to turn photos into line drawings? A tutorial on using Stable Diffusion to turn photos into line drawings
Recently I saw someone in a book converting photos into line drawings with very good results, so I studied other people's methods and am sharing the technique here. It can be done in two ways: image-to-image and text-to-image. Let me explain text-to-image first. Text-to-image model: architecturerealmix. This model can be downloaded from Civitai or LiblibAI. Prompts: BondyStudio, monochrome, greyscale, Dotted line, line shadow, exquisite, Lo…- 3.7k
-
Beautiful pictures in realistic and 2D illustration styles: a Stable Diffusion large model recommendation
Today I recommend a very good general-purpose large model based on SDXL: SDVN7-NijiStyleXL. Its drawing style closely resembles Midjourney's. The author blends animation, sketching, realism, and other styles to create many great looks, and we can use the various artist-style combinations supported by SDXL. The artist styles supported by SDXL can be browsed at: https://rikkar69.github.io/SDXL-artist-study/. In the realistic style, this large model is similar…- 13.3k
-
Play with fruit photography: use AI painting with Stable Diffusion to draw fruit
In the hot summer, cool fruit is the most comfortable and cozy choice. Today I recommend a LoRA model based on SD1.5: KK-Watery Fruit Photography. It is a fruit-photography model that adds a watery splash effect as the fruit falls, bringing a cool visual experience to the hot season. The latest version is V2.0, which adds some training material to V1.0 and adjusts the number of training steps and the optimizer. After actual testing, however, I personally feel V1.0 works better. The author gives recommended parameter settings on the official page: • …- 8.2k
-
AI painting works: draw exquisite 2D illustrations with a Stable Diffusion model
Today I share a 2D model trained on the Pony model: Two-Dimensional Illustration. It has 4 branch versions. 1.5 version: a LoRA model, recommended base model niji-anime 2D 4.5. XL version: an SDXL model. Mix version: weakened light and shadow, reducing the SDXL version's warm tones and sense of overexposure. Pony version: trained on Pony Diffusion V6. The author's introduction on the LiblibAI page is fairly short, mainly recommended parameter settings. In my own experience...- 7.1k
-
No models needed: use Stable Diffusion to generate AI clothing-model scene images at low cost
This tutorial is done entirely with Stable Diffusion. How can we use it to produce a model-effect display? It suits e-commerce design, photography, and other practical character-design scenarios. The whole process is hands-on, so absorb it slowly; once learned, you can easily control the model's outfit changes. Let's go! We use text-to-image, generated online at LiblibAI. First open the website and follow along: https://www.liblib.art/s…- 6.4k
-
Stable Diffusion model recommendation: a flat-style 2D large model
Today I share a flat-style 2D model based on SD1.5: flat simple color flatanime-mix. For fans of the flat style, it is well worth recommending. The latest version is V1.0; it responds very well to prompts and can render characters, scenes, and objects. More often, it can serve as a 2D base in a flat illustration style. Model download address (you can also get the network disk address at the end of the article) LiblibAI: https://www.liblib.ar…- 9.6k
-
Stable Diffusion beginner's guide: basic operation of image-to-image
Stable Diffusion Quick Start: Image-to-Image. This article is a detailed image-to-image tutorial for Stable Diffusion beginners, guiding you on how to use this function to start your AI art creation journey. Compared with Midjourney's image-to-image, Stable Diffusion's is much more powerful: besides image control, it also offers sketching, inpainting (partial redraw), sketch redraw, uploading redraw masks, and batch processing. 1. Image-to-image interface introduction 2. First experience with image-to-image We can use the image material...- 3.5k
-
Stable Diffusion beginner's guide: basic operation of text-to-image
1. Introduction to the text-to-image function Text-to-image converts an input text description into an image. Through Stable Diffusion's text-to-image function, we can turn the creativity and imagination in our minds into concrete images, giving designers and artists more sources of inspiration. 2. Text-to-image steps 1. Prepare the text description First, prepare the description to be turned into an image. The more detailed and specific it is, the closer the generated image will be to our expectations. We prepare a prompt here: Portrait, a young…- 2.9k
-
Photographic AI image effects: AI drawing done with Stable Diffusion
Today I share a camera-themed large model based on SD1.5: Dreams and bubbles. This version mainly brings the following improvements: (1) richer, more dynamic, and more visually attractive character poses and composition; (2) solid character composition and natural-language understanding; adding (8k, RAW photo, best quality, masterpiece:1.2,) to the prompt greatly improves the output; (3) recommended parameter settings: sampler...- 5.5k
-
A realistic Stable Diffusion character model: photography effects of youthful, beautiful Asian girls
[Realistic Photography] Dreams and bubbles. Introduction: Today we introduce a LoRA model from the Dream Building Industry series: [Realistic Photography] Dreams and bubbles, a realistic photography model with Asian faces. It offers richer, more dynamic, and visually attractive poses and composition, and performs well in character composition and natural-language understanding. It also produces excellent looks such as night and low-light photography, portraits, and background-blur effects, and can directly generate photography-grade images of young, beautiful Asian girls, such as students, car models, etc. For…- 10.9k
-
AI painting tool Stable Diffusion basics: draw a pencil-sketch illustration you'll be satisfied with
Prompt: [SUBJECT], crisp neo-pop illustration, pencil sketch on old paper. Parameter settings Large model: RealVisXL V4.0 Lightning Sampler: DPM++ SDE Karras Sampling steps: 5 CFG: 2 Negative prompt: (octane render, render, …- 7.5k
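The [SUBJECT] placeholder in the prompt above is meant to be replaced with your own subject. A tiny hypothetical helper (`build_prompt` is my own name for illustration, not something from the tutorial) shows the substitution:

```python
# The article's prompt template, with a [SUBJECT] slot to fill in.
TEMPLATE = "[SUBJECT], crisp neo-pop illustration, pencil sketch on old paper"

def build_prompt(subject: str, template: str = TEMPLATE) -> str:
    """Substitute the caller's subject into the fixed style template."""
    return template.replace("[SUBJECT]", subject)

result = build_prompt("a sleeping cat")
print(result)
# a sleeping cat, crisp neo-pop illustration, pencil sketch on old paper
```

Note that the low step count (5) and CFG (2) listed above are typical of Lightning-distilled checkpoints such as RealVisXL V4.0 Lightning, which are tuned to converge in very few steps.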
-
Stable Diffusion AI drawing generates long-exposure photography effects, suited to night, light-trail, waterscape, and other photography
Introduction to long-exposure photography: today's article covers a common photography technique, long-exposure photography, which keeps the shutter open for an extended time; usually any exposure slower than 1 second counts as a long exposure. It is often used for night scenes, light trails, waterscapes, and so on, making dark scenes clearer and capturing dreamlike images. Long-exposure drawing experience: in this article we use the photography designer - professional...- 3.3k
-
How to turn a real person's picture into a comic? Real-person comic adaptation with Stable Diffusion
A real-person comic adaptation generates a new 2D picture from a photo of a real person, a very common application of AI painting. In the earlier advanced series I shared several production methods, but they were relatively simple, and it was hard to keep the real person and the 2D result consistent in clothing, background elements, colors, and so on. Today I share an adaptation method that can basically reach commercial-delivery quality. Let's first look at the results. 1. Production method of real-person comic adaptation Taking the real-person picture below as an example...- 9.8k
-
Choosing a Stable Diffusion model: generate your favorite 2D anime style
Today I share a large model that can draw in the style of Midjourney's niji mode: niji-Anime 2D Enhanced Edition, trained on SD1.5. It is called the enhanced edition because the author previously released a similar large model, niji-Anime 2D (access address: https://www.liblib.art/modelinfo/a9a92acbcd2f4033856db53ede728f51), to ensure consistency of the output style. In addition…- 13.2k
-
AI painting: how to create super-textured realistic portraits with Stable Diffusion in a few simple steps!
Introduction to the Ultimate Texture DgirlV5 model: today we introduce a super-textured realistic-person model, D series - ultimate texture - DgirlV5. It is a realistic model the author has continuously optimized in pursuit of character texture, down to the hair details, trained on the SD1.5 model. The current release is V5.1, which focuses on enhancing detail and correcting the overall reddish tint of pre-V5 outputs. It is a large model that can directly produce high-quality, beautiful pictures from a simple prompt…- 10.4k
-
ComfyUI with the SDXL-ControlNet model: AI painting from line sketches and doodles
Today I introduce an SDXL ControlNet universal control model: Anytest. It is very simple to operate and needs no preprocessor; you can use it directly. Its basic functions include generating images from line drawings and redrawing pictures in different styles; nothing unusual there. But Anytest has several interesting uses: it can produce good-quality pictures even from compositions with unclear outlines, and it can also complete unfinished sketches, including secondary creation on top of existing pictures. Reading this may not mean much, so let's go straight to it...- 41.1k
-
Generate pictures with AI: deploy Stable Diffusion 3 locally with ComfyUI
I won't elaborate on Stable Diffusion 3's advantages; here I mainly cover how ordinary users can deploy it locally. The SD3 model is now open-sourced on Hugging Face at https://huggingface.co/stabilityai/stable-diffusion-3-medium, but you must log in to your Hugging Face account and accept the license agreement before downloading. After accepting, you can see the file list of the whole project.- 41.7k
-
Stable Diffusion for AI photography: an AIGC drawing prompt keyword reference
Introduction to photographic composition and angle: in real-world photography, an excellent image depends on several key technical elements, such as light and shadow, photographic composition (camera position: the distance between camera and subject), and photographic angle (the camera's position relative to the subject). These core elements matter just as much for AIGC drawing (Stable Diffusion 1.5/XL, Playground, Midjourney). This article therefore summarizes the AIGC drawing prompts for the commonly used composition and angle topics,…- 9.1k