Stable Diffusion Assembly 2
Stable Diffusion Assembly (SDA) 2 is a Windows helper toolbox designed to enable a fast, low-GPU, ultra-high-definition workflow for local Stable
Diffusion installations such as Automatic1111 and Forge. It aims to overcome the challenges of steering generative AI towards a specific
image, particularly Stable Diffusion's inclination to "correct" non-vertical faces or objects.
Ultra-High-Definition Workflow for Stable Diffusion (for Windows)
But why?
[Workflow illustration: select the face to fix; in SDA, export a rotated 512x512 slice; load the slice in Stable Diffusion for inpainting in vertical alignment; inpaint the face; drag the result back into SDA, where it is merged with the main image at the exact place and angle.]
Stable Diffusion hates faces that are not vertical with a passion.
Generative AI is incredibly confident when producing the images it wants, but as soon as you try to make the image you want, it fights
back. After hours of exporting parts of a composite project to Stable Diffusion and then adding them back in Photoshop, trying to align them again and
again, I decided to create a smart splice-and-merge tool that lets you export a Stable Diffusion-friendly slice at any rotation, then import and
blend it back in the exact same place without any guesswork.
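Conceptually, that splice-and-merge step can be pictured as a pair of affine transforms: the rotation used to cut an upright slice is inverted to put the edited result back. Below is a minimal sketch of the idea, assuming OpenCV, NumPy, and an H x W x 3 image; the function names and parameters are illustrative, not SDA's actual code.

```python
import cv2
import numpy as np

def extract_rotated_slice(image, center, angle_deg, size=512):
    """Cut a size x size patch around `center`, rotated by `angle_deg`,
    so that a tilted face comes out vertically aligned in the slice."""
    m = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    # Shift so the region's center lands in the middle of the output patch.
    m[0, 2] += size / 2 - center[0]
    m[1, 2] += size / 2 - center[1]
    return cv2.warpAffine(image, m, (size, size), flags=cv2.INTER_LANCZOS4)

def paste_slice_back(image, patch, center, angle_deg):
    """Warp the edited patch back with the inverse transform and composite
    it over the original at the exact place and angle."""
    size = patch.shape[0]
    m = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    m[0, 2] += size / 2 - center[0]
    m[1, 2] += size / 2 - center[1]
    m_inv = cv2.invertAffineTransform(m)
    h, w = image.shape[:2]
    warped = cv2.warpAffine(patch, m_inv, (w, h), flags=cv2.INTER_LANCZOS4)
    cover = cv2.warpAffine(np.ones(patch.shape[:2], np.float32), m_inv, (w, h))
    cover = cover[..., None]                 # where the warped patch lands
    return (warped * cover + image * (1 - cover)).astype(image.dtype)
```

Because the paste uses the exact inverse of the export transform, there is no re-alignment guesswork.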
If you've spent any amount of time using local generative AI, you're
probably familiar with this scenario: you want to create an image
based on what's in your head, but Stable Diffusion seems determined
to make something very different...
Example: since most faces in the training data are more or less upright,
SD has a strong inclination to “correct” your images when
they stray from vertical. And that's just the tip of the iceberg.
What is it?
SD Assembly splices a large image into base-resolution slices at any angle, which you can then easily process in SD inpainting and then
reassemble in place, correctly rotated. A lot of clever processing underneath ensures perfect, worry-free blending without any seams,
including a paint-in mask.
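As one illustration of how seam-free blending with a paint-in mask can work, the painted mask can be feathered before compositing, so the edited slice fades into the untouched pixels. This is only a sketch assuming NumPy and OpenCV; the feather radius and function name are invented here and say nothing about SDA's actual internals.

```python
import cv2
import numpy as np

def blend_with_paint_in_mask(base, edited, mask, feather_px=24):
    """Composite `edited` over `base` using a painted 0/1 mask whose edges
    are softened so the merge leaves no visible seam."""
    # A Gaussian blur turns the hard mask edge into a gradual falloff,
    # much like feathering a Photoshop selection.
    soft = cv2.GaussianBlur(mask.astype(np.float32), (0, 0), feather_px)
    soft = np.clip(soft, 0.0, 1.0)[..., None]        # H x W x 1 alpha
    out = edited.astype(np.float32) * soft + base.astype(np.float32) * (1.0 - soft)
    return out.astype(base.dtype)
```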
The main goal was to get 100% away from Photoshop for a typical professional AI
workflow.
With this new version we've added a Painting/Cloning tool and a Liquify tool (just like in
Photoshop!) directly into SDA, so we can finally achieve that.
Features
- NEW: Liquify tool
- NEW: Painting/Cloning tool
- work comfortably in Stable Diffusion on huge images without memory/GPU limits
- various types of slices at different base dimensions, including rotated slices
- automatically sends the slice to the clipboard (in Automatic1111 you can paste it straight into the inpainting window)
- optional ControlNet shadow image that gets sliced and exported at the same time and can be used as the ControlNet source (see the sketch after this list)
- brush blending tool (similar to Photoshop's brush layer blending)
- special sizing and fitting methods that preserve color and leave no visible seams
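For the ControlNet shadow image, the key point is simply that the identical crop and rotation are applied to a second, aligned image. A hypothetical usage sketch, reusing the extract_rotated_slice helper sketched earlier (file names and coordinates are made up for illustration):

```python
import cv2

main_image   = cv2.imread("composite.png")        # the large working canvas
shadow_image = cv2.imread("original_photo.png")   # aligned source photo for ControlNet
center, angle = (1800, 950), 27.0                 # same region and angle for both

main_slice   = extract_rotated_slice(main_image, center, angle, size=512)
shadow_slice = extract_rotated_slice(shadow_image, center, angle, size=512)
# main_slice goes to img2img/inpainting; shadow_slice is fed to ControlNet.
```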
System Requirements
Windows 8, 10, or 11 and a working local installation of Stable Diffusion.
Note: This software is an addition to Stable Diffusion; you need a Stable Diffusion UI running on your system.
Uses
- Fully exploit Stable Diffusion's potential to generate images with an incredible amount of detail without loading huge files
- Use your photos as a base for img2img generation while SDA keeps track of the ControlNet slices of the original photo
- Fix details on large images/photos/artwork without guesswork
Liquify
Liquify in SDA is an effect that is dearly missing from Stable Diffusion. It lets you shift, expand, shrink, or rotate parts of the image, so you can
precisely fine-tune expressions, sizes, and positions.
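As a rough illustration of what a "move"-type liquify does under the hood, pixels inside a soft circular brush are resampled along the drag direction, with the displacement fading to zero at the brush edge. The sketch below assumes NumPy and OpenCV; the falloff and names are hypothetical, not SDA's implementation.

```python
import cv2
import numpy as np

def liquify_move(image, center, shift, radius):
    """Drag pixels inside a soft circular region by `shift` (dx, dy);
    the effect fades out towards `radius`."""
    h, w = image.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Smooth falloff: 1 at the brush center, 0 at its edge.
    dist = np.hypot(xs - center[0], ys - center[1])
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2
    # Each output pixel samples backwards against the drag direction,
    # which visually pushes the content along `shift`.
    map_x = (xs - shift[0] * falloff).astype(np.float32)
    map_y = (ys - shift[1] * falloff).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# e.g. nudge an eyebrow upwards before sending the slice to inpainting:
# moved = liquify_move(slice_img, center=(260, 190), shift=(0, -12), radius=60)
```

An "expand"-style effect typically works the same way, except the displacement points radially away from the brush center instead of along a single drag vector.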
[Example: starting from part of the initial image, the Liquify Move effect shifts a region (here, raising one eyebrow); Stable Diffusion inpainting then finishes the slice naturally and transparently.]
[Example: the Liquify Expand effect enlarges a region (here, making the necklace bigger); Stable Diffusion inpainting again finishes the slice naturally and transparently.]
Painting/Cloning/Recoloring
SD Assembly has an extensive built-in painting tool: not just for painting, but also for recoloring, contrast, dodge and burn, and cloning.
Brushes:
• Normal Paintbrush - from soft to hard, with opacity and flow settings
• Colorize - changes colors
• Soft Color - another way of changing colors
• Recolor - yet another way of changing colors
• Contrast - increases contrast where brushed
• Dodge/Burn - high-quality dodge/burn brushes
• Desaturate/Saturate brush
• Erase - paints back the original image
• Clone - clone tool; copies part of the image to another area (see the sketch after this list)
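As referenced in the Clone item above, here is a conceptual sketch of how a soft clone dab and a dodge/burn dab can be implemented. It assumes NumPy and an 8-bit RGB image; all names, falloffs, and defaults are illustrative rather than SDA's actual brushes.

```python
import numpy as np

def soft_brush_mask(shape, center, radius, hardness=0.5):
    """Circular brush alpha: 1 inside the hard core, fading to 0 at the edge."""
    h, w = shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - center[0], ys - center[1])
    core = radius * hardness
    alpha = np.clip((radius - dist) / max(radius - core, 1e-6), 0.0, 1.0)
    return alpha[..., None].astype(np.float32)          # H x W x 1

def clone_dab(image, src_offset, center, radius):
    """Copy pixels from `src_offset` (dx, dy) away into the brushed area."""
    # np.roll wraps at the borders; a real tool would clamp instead.
    shifted = np.roll(image, shift=(src_offset[1], src_offset[0]), axis=(0, 1))
    a = soft_brush_mask(image.shape, center, radius)
    return (shifted * a + image * (1.0 - a)).astype(image.dtype)

def dodge_dab(image, center, radius, strength=0.3):
    """Brighten the brushed area; a negative `strength` acts as burn."""
    a = soft_brush_mask(image.shape, center, radius)
    out = image.astype(np.float32) * (1.0 + strength * a)
    return np.clip(out, 0, 255).astype(image.dtype)     # 8-bit image assumed
```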
Key Features and Purpose
• High-Definition Workflow: SDA enables users to work on ultra-high-resolution images by breaking them down into smaller, Stable Diffusion-friendly slices and then seamlessly merging them back into the main image. This significantly reduces GPU and memory requirements, allowing for detailed work on large canvases.
• Slice and Merge Tools: The core functionality revolves around its advanced slice and merge capabilities. Users can export specific areas (slices) of a large image, process them in Stable Diffusion for inpainting or refinement, and then import them back into SDA, where they are perfectly blended at the exact place and angle, eliminating manual alignment guesswork.
• Rotating and Zooming Slices: SDA allows for exporting slices at any rotation, which is particularly useful for objects or faces that are not vertically oriented, as Stable Diffusion often struggles with such angles.
• Liquify and Painting Tools: Version 2.0 introduces a "Liquify" tool, similar to Adobe Photoshop's, for stretching or moving pixels, and a painting/cloning tool, offering more comprehensive image manipulation within the application.
• Seamless Blending: The tool uses "clever processing underneath" to ensure perfect and worry-free blending without any seams, including paint-in masks, allowing users to adjust the blending mask before applying the merge.
• Integration with Local Stable Diffusion: SDA does not generate images itself; it relies on locally installed Stable Diffusion interfaces (such as Automatic1111 or Forge) for the generative AI processing.