Simply describe the image you desire, and the app will generate it for you like magic!
Developed exclusively for Apple silicon (M1/M2). The app is not compatible with Intel-based Macs.
Stable Diffusion is a deep-learning text-to-image model that generates detailed images conditioned on text descriptions.
Click a thumbnail to view a larger version of it. Click again to exit.
In preview mode, the following keyboard shortcuts are available:
- ◀ — Previous image
- ▶ — Next image
- Space — Save image
- Command + C — Copy image
- Esc — Exit preview
To write a negative prompt (what to exclude), add ## after your prompt, followed by the negative prompt. For example, in “photo of a cake, high-quality ## strawberry, out of frame”, the negative prompt is “strawberry, out of frame”. Everything after the ## is treated as the negative prompt.
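The ## convention amounts to splitting the prompt string once on that delimiter. A minimal Python sketch (the function name is illustrative, not part of the app):

```python
def split_prompt(text: str) -> tuple[str, str]:
    """Split a prompt on the first '##' into (prompt, negative prompt).

    If there is no '##', the negative prompt is empty.
    """
    prompt, _, negative = text.partition("##")
    return prompt.strip(), negative.strip()
```

For example, `split_prompt("photo of a cake, high-quality ## strawberry, out of frame")` yields `("photo of a cake, high-quality", "strawberry, out of frame")`.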
When you save a generated image, it includes a lot of useful metadata (prompt, steps, etc). You can view this in Finder by right-clicking the image file and selecting “Get Info”. The file also includes some relevant tags which can be used to create smart folders.
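On macOS, Finder tags like these are stored in an extended attribute on the file. As a rough illustration of how they can be read programmatically (a Python sketch of the standard macOS tags attribute; the exact metadata keys this app writes are not documented here):

```python
import os
import plistlib

# Standard macOS extended-attribute name for Finder tags.
TAGS_ATTR = "com.apple.metadata:_kMDItemUserTags"

def parse_user_tags(raw: bytes) -> list[str]:
    """Decode the binary plist stored in the Finder-tags attribute.

    Each entry is a string of the form "name", optionally followed by a
    newline and a colour index, so we keep only the tag name.
    """
    return [entry.split("\n")[0] for entry in plistlib.loads(raw)]

def read_finder_tags(path: str) -> list[str]:
    """Read Finder tags from a saved image (macOS only)."""
    try:
        raw = os.getxattr(path, TAGS_ATTR)
    except OSError:
        return []  # no tags set, or not running on macOS
    return parse_user_tags(raw)
```

These tags are what Finder smart folders match against when you filter by tag.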
Frequently Asked Questions
I have a feature request, bug report, or some feedback
Why not use Stable Diffusion 2?
It will eventually be supported, but right now it produces worse results than version 1.5.
Why does the app require macOS 13.1 and Apple silicon?
The app takes advantage of recent Stable Diffusion optimizations from Apple, which require macOS 13.1 and Apple silicon.
What are the usage restrictions for the generated images?
You can use the images for commercial or non-commercial purposes, but you must adhere to the Creative ML OpenRAIL-M license’s usage restrictions. These restrictions include not using the images for illegal activity, false information, discrimination, or medical advice.
Can you support custom models?
It’s something I plan to support, but other things are a higher priority at the moment.
Can you support inpainting/outpainting?
I don’t plan to support this. DiffusionBee does (see the comparison below).
Can it generate images with aspect ratios other than a square?
The Stable Diffusion library used by this app only supports squares.
Why does it take so long to generate?
Several factors can affect the speed of image generation, including your machine’s performance and the amount of free memory and CPU. Try quitting other apps or restarting your machine before generating images.
And bear in mind that the initial generation after installing the app may take longer due to model validation.
Why does the app take up so much space on disk and memory?
The AI model used to generate images is inherently large; that is the trade-off for its capabilities.
Can you support iOS?
I plan to add iOS support when the app is more mature.
How does it compare to DiffusionBee?
Amazing AI benefits
- Faster and more energy efficient as it uses the Apple Neural Engine and recent macOS optimizations
- Native user interface (DiffusionBee is a web app wrapped with Electron which does not follow platform conventions)
- Batch generation of different prompts (DiffusionBee supports batch for the same prompt only)
- Shortcuts support
- Automatic upscaling (DiffusionBee requires you to manually click an upscale button for each image)
- Sandboxed (more secure)
- Available on the App Store
DiffusionBee benefits
- Image to image (planned for Amazing AI too)
- Custom models (planned for Amazing AI too)
Why is this free without ads?
I just enjoy making Mac apps. Consider leaving a nice review on the App Store.
Where can I find the changelog?
Go here and click “Version History”.
Can you localize the app into my language?
I don’t have any immediate plans to localize the app.