
Unleash Your Inner Disney Princess with Stable Diffusion’s Magical New Feature!

Mo-Di Diffusion

DISNEY-FY Yourself For Free In Stable Diffusion! Amazing New Disney Model!

A brand-new Stable Diffusion model called “mo-di-diffusion” has just been released, trained on modern 3D Disney movies. Its creator, the user Nitrosocke, used Dreambooth to fine-tune Stable Diffusion on screenshots from popular Disney films. This model lets you generate images in the modern 3D Disney style, and even turn yourself into a Disney character, completely free!

If you’ve ever wanted to see yourself as a Disney character or create images in the beloved Disney animation style, then this new Stable Diffusion model is perfect for you. The “mo-di-diffusion” model is easy to use and provides stunning results that will make your creations look like they came straight out of a Disney movie.

In addition to showcasing how to use this model, the video also offers tips and tricks to help you get the best results possible. By following these tips, you can enhance your creations and bring your imagination to life in the magical world of Disney.
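For readers who prefer scripting over a web UI, the model can also be loaded with Hugging Face's diffusers library. This is a minimal sketch, assuming the checkpoint is published on the Hugging Face Hub as `nitrosocke/mo-di-diffusion` and that a CUDA GPU is available; the "modern disney style" token is the trigger phrase the model was fine-tuned with:

```python
def disneyfy(subject: str, model_id: str = "nitrosocke/mo-di-diffusion"):
    """Generate an image of `subject` in the modern 3D Disney style.

    Requires diffusers, torch, and a CUDA GPU; the model weights
    download from the Hugging Face Hub on first use.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    # The fine-tuned style token must appear in the prompt.
    return pipe(f"modern disney style, {subject}").images[0]

# Usage (on a CUDA machine):
#   disneyfy("portrait of a smiling princess").save("princess.png")
```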

Have you tried turning yourself into a Disney character yet? Share your experience in the comments and let us know what you think of this exciting new Stable Diffusion model! Don’t miss out on this opportunity to unleash your creativity and immerse yourself in the enchanting world of Disney with “mo-di-diffusion.”



Watch the video here

Image: DISNEY-FY Yourself For Free In Stable Diffusion! Amazing New Disney Model!

Written by Aitrepreneur


42 Comments

  1. Just installed this model and was very disappointed in its performance. Many of the classic Disney characters, like Princess Jasmine and Snow White, have sort of a generic Disney look but don't look like the actual characters. For example, you don't get that distinctive nose that helps give Jasmine her look. Maybe this model is OK if you want to turn a real person into a sort-of Disney-looking image, but otherwise don't waste your time with this model.

  2. Thank you for showing new features 🙂
    Can you please make a video about inpainting, showing a few specific examples and explaining how to get the best results from it:
    1 – Instead of using Photoshop to get rid of an object in an image, we can mask it in SD, but… what exact prompt and settings do we use to "REMOVE" the object?
    For example, if the image is simple, like an object on the floor, we can mask it and type "floor", but what about more crowded / complex cases? An example would be great!
    2 – Of the different Masked Content types for inpainting, Fill and Original are pretty easy to understand… but Latent Noise / Latent Nothing? Examples?

    These are just suggestions of course. I hope you'll consider making a video focused on inpainting, with examples, prompts, and settings for getting the best results on different inpainting tasks.
    Thanks ahead, keep it up!

  3. This is all really good information, and I'm glad people are working on coming out with specialized models. Though after playing with it for a little bit, I feel like I get better results with a specialized prompt I have, using the inpainting model with version 1.5.

  4. Thank you again for the feature! Great video and good showcase of the model!
    I'd like to share some tips as well, for users trying it out:
    – negative prompts are a great way to control the model even better
    – using "blender" in the negative prompt gives it a great boost in visual quality
    – using the negative prompts "fog" and "blurry" adds great detail to landscapes and other scenes
    – negative prompt "concerned" or "angry" can help with the eyebrows without needing the inpaint feature
    – negative prompt "singing" and "open mouth" helps with the open mouth issue
    – for an easy description of your img2img input, you can use CLIP Interrogate and just add "modern disney style" to the front
    – experiment with putting "modern disney style" at the front of your prompt or at the end, as that shifts its weighting and improves some renders

  5. At around 5:40 in this video you suggest that the output images all contain a smiling woman due to many similar images in the training dataset.
    I'm not sure this is correct.

    Is the model not using the Inpainting conditional mask weights of the input image to create the output image (copying the image fundamentals)?

    I assume as you lower the Inpainting conditional mask strength closer to zero the image will vary from the input image more and you will find a level that works for you.
    🤜🤛
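Several of the tips in comment 4 boil down to prompt construction. As a small sketch, they can be packaged into a helper function (the function name is hypothetical; the style token and negative-prompt terms are taken directly from that comment):

```python
def build_prompts(subject: str, style_in_front: bool = True):
    """Combine the commenter's tips into (prompt, negative_prompt) strings.

    `style_in_front` toggles where the style token goes, since the
    commenter notes that moving it shifts its weighting.
    """
    style = "modern disney style"
    prompt = f"{style}, {subject}" if style_in_front else f"{subject}, {style}"
    # Negative terms the commenter found helpful: "blender" boosts visual
    # quality; "fog" / "blurry" sharpen scenes; "concerned" / "angry" fix
    # eyebrows; "singing" / "open mouth" fix the open-mouth issue.
    negative = "blender, fog, blurry, concerned, angry, singing, open mouth"
    return prompt, negative

# Example:
#   prompt, neg = build_prompts("portrait of a princess")
```

The resulting strings can be passed as the `prompt` and `negative_prompt` arguments of a Stable Diffusion pipeline, or pasted into the corresponding web UI fields.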
