DISNEY-FY Yourself For Free In Stable Diffusion! Amazing New Disney Model!
A brand-new Stable Diffusion model called “mo-di-diffusion” has just been released, trained on 3D Disney movies, thanks to the user Nitrosocke, who used Dreambooth to fine-tune the model on screenshots of popular Disney movies. This amazing model lets you generate images in the modern 3D Disney style for free, and even turn yourself into a Disney character!
If you’ve ever wanted to see yourself as a Disney character or create images in the beloved Disney animation style, then this new Stable Diffusion model is perfect for you. The “mo-di-diffusion” model is easy to use and provides stunning results that will make your creations look like they came straight out of a Disney movie.
In addition to showcasing how to use this model, the video also offers tips and tricks to help you get the best results possible. By following these tips, you can enhance your creations and bring your imagination to life in the magical world of Disney.
Have you tried turning yourself into a Disney character yet? Share your experience in the comments and let us know what you think of this exciting new Stable Diffusion model! Don’t miss out on this opportunity to unleash your creativity and immerse yourself in the enchanting world of Disney with “mo-di-diffusion.”
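For anyone who prefers scripting over the webui, here is a minimal sketch of how one might try the model through the Hugging Face diffusers library. The repo id "nitrosocke/mo-di-diffusion" and the "modern disney style" activation phrase come from the model's Hugging Face page; the scheduler, dtype, and GPU assumptions are mine, not from the video.

```python
# Sketch: generating a "modern disney style" image with diffusers.
# Assumes a CUDA GPU and the torch + diffusers packages installed.

MODEL_ID = "nitrosocke/mo-di-diffusion"  # Hugging Face repo for this model
TOKEN = "modern disney style"            # the model's activation phrase

def build_prompt(subject: str) -> str:
    """Prepend the activation phrase so the fine-tuned style kicks in."""
    return f"{TOKEN} {subject}"

def generate(subject: str):
    """Load the pipeline and render one image (heavy: downloads the model)."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")
    return pipe(build_prompt(subject)).images[0]
```

Calling `generate("portrait of a young woman")` would then save-able PIL image in the Disney style; the imports are kept inside the function so the prompt helper works even without diffusers installed.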
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx <3
"K" – Your Ai Overlord
Is a 1660 Ti enough?
Just installed this model and was very disappointed in its performance. Many of the classic Disney characters, like Princess Jasmine and Snow White, have sort of a generic Disney look but don't look like the actual characters. For example, you don't get that distinctive nose that helps give Jasmine her look. Maybe this model is OK if you want to turn a real person into a sort-of Disney-looking image, but otherwise don't waste your time with this model.
Now that I have experience with controlnet + inpainting + prompting, I am now very thankful for all the other videos in your channel. 😀 Thank you for sharing!
Fantastic!
Hi there. There's a box in Stable Diffusion named "Style 1" under the Generate button. How do we use that? Having to prompt the activation tag of the model and remember it is a pain; I thought that box was for that.
Yo, could you explain how to create Midjourney-like faces for uploaded images? Most of the time the faces come out completely different from what I uploaded.
What software is this?
Horrifying..
Why isn't it possible to train a model like this with Dreambooth on your own face?
Can someone try to train a Sam Yang style model?
Thank you for showing new features 🙂
Can you please make a video about inpainting, showing a few specific examples and explaining how to make it work at its best:
1 – Instead of using Photoshop to get rid of an object in an image, we can mask it in SD, but… what exact prompt and settings do you use to "REMOVE" the object?
For example, if the image is simple, like an object on the floor, we can mask it and type "floor", but what about more crowded/complex cases? An example would be great!
2 – Inpainting with the different Masked Content types: Fill and Original are pretty easy to understand… but Latent Noise / Latent Nothing – examples?
These are just suggestions of course; I hope you'll consider making a video focused on inpainting, with examples, prompts, and which settings give the best results for different inpainting tasks.
Thanks ahead, keep it up!
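As a rough mental model of the Masked Content modes asked about above: they only differ in how the masked region is initialized before denoising begins. The sketch below is illustrative, not the webui's actual source code, and "fill" is simplified (the real option uses blurred surrounding image colors, not a plain mean).

```python
# Conceptual sketch of the four "Masked content" modes in inpainting:
# what the masked region starts from before the sampler denoises it.
import random

def init_masked_region(original: list[float], mode: str, seed: int = 0) -> list[float]:
    """Return starting values for the masked latents under each mode."""
    if mode == "original":        # keep the existing image content
        return list(original)
    if mode == "fill":            # start from surrounding colors (simplified to a mean)
        mean = sum(original) / len(original)
        return [mean] * len(original)
    if mode == "latent noise":    # start from pure random noise
        rng = random.Random(seed)
        return [rng.gauss(0, 1) for _ in original]
    if mode == "latent nothing":  # start from zeroed-out latents
        return [0.0] * len(original)
    raise ValueError(f"unknown mode: {mode}")
```

This is also why "latent noise" and "latent nothing" usually need a denoising strength near 1: the region starts with no usable image information, so the model has to repaint it entirely.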
How do I install this on a locally installed Stable Diffusion? I've put the checkpoint file into Stable Diffusion's models folder, but it doesn't load.
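For what it's worth: with AUTOMATIC1111's webui there is no separate install step. The checkpoint just needs to sit in `models/Stable-diffusion/` under the webui root, then you restart (or hit the refresh button next to the checkpoint dropdown) and select it. A hypothetical helper illustrating the expected path, in case it helps debug:

```python
# Hypothetical helper: where AUTOMATIC1111's webui expects checkpoints.
# If the file is elsewhere (e.g. in models/ directly), it won't appear
# in the checkpoint dropdown.
from pathlib import Path

VALID_SUFFIXES = {".ckpt", ".safetensors"}

def checkpoint_destination(webui_root: str, checkpoint_file: str) -> Path:
    """Return the path where the checkpoint file should be copied."""
    name = Path(checkpoint_file).name
    if Path(name).suffix not in VALID_SUFFIXES:
        raise ValueError(f"not a checkpoint file: {name}")
    return Path(webui_root) / "models" / "Stable-diffusion" / name
```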
8:20 How can I generate this comparison? Thank you.
Hugging face rules!!
Really awesome, I love it! Do you know by any chance if there's a possibility to add your own images to this specific model / retrain the model with your own pictures?
Thank you again for the Great video.
Great vid! But I think I'm the only guy on earth who gets a big red ERROR when I load the model…
Where can I find those grids that say which model is best with which sampler?
Where can I get your rhaerya models?
This is all really good information, and I'm glad people are working on coming out with specialized models. Though after playing with it for a little bit, I feel like I get better results with a specialized prompt I have, using the inpainting model with version 1.5.
Tangled is one of the best movies ever made.
Very informative and useful as always. Do you have any plans to cover the Deforum extension? It seems quite complicated and you always make it easier to understand. Sincerely, your loyal subject.
Was just trying this model using a hypernetwork. Have you tried that yet? There have been some changes to the UI since your last tutorial on it
The only channel I ever switched notifications on for. Can you please do a short video about saving styles and what that means?
Thank you!!!!!!!!!!!!!!!!!!!!!
Would you make a video explaining how to install the program on Mac, please!!
Anyone know if SD can run on an AMD GPU and how to install it? I might get a new GPU soon and AMD is looking pretty tempting rn.
Thanks for the video, your channel is the best, always new, fast and cool topics, easily explained.
Is it possible to train your own images on this model? So you can add yourself into the model?
These models are great, the only issue I have is that either the denoising is too low to make it Disney, or it's higher and it deforms the face entirely. Why doesn't the conditional mask weight value work with models other than the RunwayML inpainting model?
What’s the difference between the super 2.0 and the 1.5?
Thanks man, all of your videos are very useful =)
Thank you again for the feature! Great video and good showcase of the model!
I'd like to share some tips as well, for users trying it out:
– negative prompts are a great way to control the model even better
– using "blender" in the negative prompt gives it a great boost in visual quality
– using the negative prompts "fog" and "blurry" adds great detail to landscapes and other scenes
– negative prompt "concerned" or "angry" can help with the eyebrows without needing the inpaint feature
– negative prompt "singing" and "open mouth" helps with the open mouth issue
– for an easy description of your img2img input you can use CLIP Interrogate and just add "modern disney style" to the front
– experiment with putting "modern disney style" in front of your prompt or in the back, as that shifts weighting of it and improves some renders
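The tips above can be collected into a small helper. This is just a sketch: the token lists come straight from the comment, but the function itself (and the comma formatting) is illustrative, not from the video.

```python
# Assemble a prompt and negative prompt for mo-di-diffusion,
# using the negative-prompt tokens suggested in the tips above.

STYLE_TOKEN = "modern disney style"

# Grouped by what each token is said to fix.
NEGATIVE_TOKENS = [
    "blender",                # boosts visual quality
    "fog", "blurry",          # adds detail to landscapes and scenes
    "concerned", "angry",     # tames the eyebrows without inpainting
    "singing", "open mouth",  # avoids the open-mouth issue
]

def build_prompts(subject: str, token_first: bool = True) -> tuple[str, str]:
    """Return (prompt, negative_prompt); token placement shifts its weight."""
    prompt = (f"{STYLE_TOKEN}, {subject}" if token_first
              else f"{subject}, {STYLE_TOKEN}")
    return prompt, ", ".join(NEGATIVE_TOKENS)
```

Flipping `token_first` reproduces the last tip: moving the activation phrase to the end reduces how strongly it weights the result.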
Thanks for the video, man! Can you use the checkpoint merger with this model and other models? For example, I have a trained model of myself and I want to merge both models. What's the best way to do so? Weighted sum or add difference?
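For reference, the two merge modes in AUTOMATIC1111's checkpoint merger compute the formulas below, shown here on single weights for clarity (the real merger applies them to every tensor in the checkpoints). A = primary model, B = secondary, C = the shared base model (used only by "add difference"), m = the multiplier slider.

```python
# The two checkpoint-merger modes, on scalar weights for clarity.

def weighted_sum(a: float, b: float, m: float) -> float:
    """Blend A toward B: result = A*(1-m) + B*m."""
    return a * (1 - m) + b * m

def add_difference(a: float, b: float, c: float, m: float) -> float:
    """Add what B learned relative to C onto A: result = A + (B-C)*m."""
    return a + (b - c) * m
```

For merging a personal Dreambooth model with a style model that share a base (e.g. SD 1.5), "add difference" with that base as C is often suggested, since it transplants only the fine-tune delta instead of averaging everything, but this is a rule of thumb, not something covered in the video.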
Your inpainting conditioning mask strength was set to 1. But in your previous video, you showed that 0 gives a better result. When I put 0, I had no eyebrow problem.
Your channel is among the best when it comes to learning how to set all this up on local machine.
At around 5:40 in this video you suggest that the output images all contain a smiling woman due to many similar images in the training dataset.
I'm not sure this is correct.
Is the model not using the Inpainting conditional mask weights of the input image to create the output image (copying the image fundamentals)?
I assume as you lower the Inpainting conditional mask strength closer to zero the image will vary from the input image more and you will find a level that works for you.
🤜🤛
Great video! Is there a way to access this model online somehow, using Google Colab or somewhere else? Please!!
Nice tutorial! I tried doing this with a photo of a dog, with the command "modern disney style dog" and the result is completely different. I tried putting the conditioning mask strength to 0 but that didn't help. Do you have any idea why?
@10:40, why were you surprised that the models had open mouths? Your original image had an open mouth, so it was matching the input.
Great video! The creator of this model also put up 10 other really cool models on Hugging Face.