How to Use Stable Diffusion on Mac: Step-by-Step Guide (2026)
Last updated: February 2026
Good news: Stable Diffusion runs surprisingly well on Apple Silicon Macs. M1, M2, M3, M4 — they all work. You don’t need an NVIDIA GPU. You don’t need Linux. You don’t need to spend $1,600 on a graphics card.
Bad news: the setup guides online are mostly outdated or written for Linux/Windows. Here’s the actual, current way to get Stable Diffusion running on your Mac in 2026.
What You Need
- Mac with Apple Silicon (M1 or later). Intel Macs technically work but are painfully slow.
- At least 16GB unified memory (8GB works but limits you to smaller models)
- 20GB free disk space (models are large)
- macOS 13 Ventura or later
- About 30 minutes
Option 1: ComfyUI (Recommended)
ComfyUI is the power user’s choice. It’s a node-based interface — think visual programming for image generation. Steeper learning curve, but far more flexible than alternatives.
Installation
Step 1: Install Homebrew (if you don’t have it)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Step 2: Install Python and Git
brew install python@3.11 git
Step 3: Clone and set up ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
python3.11 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Creating a virtual environment pins ComfyUI to the Python 3.11 you just installed, which avoids dependency problems with newer system Pythons (more on this in Troubleshooting below).
Step 4: Download a model
You need a checkpoint model. For Mac, I recommend starting with:
- SDXL Base (~6.5GB) — best quality-to-speed ratio on Apple Silicon
- SD 1.5 (~4GB) — faster, lower quality, good for 8GB Macs
- FLUX Schnell (~12GB) — newest, best quality, needs 16GB+ RAM
Download from Hugging Face or CivitAI and place in ComfyUI/models/checkpoints/.
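If you prefer the terminal, the fetch can be scripted. This is a minimal sketch: the Hugging Face URL below is my assumption for the SDXL base checkpoint (verify it on the model page before pulling ~6.5GB), and the download is gated behind an environment variable so you can review the target path first.

```shell
# Sketch: fetch the SDXL base checkpoint into ComfyUI's model folder.
# The exact Hugging Face path is an assumption; check the model page
# (huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) first.
CHECKPOINT_DIR="${CHECKPOINT_DIR:-ComfyUI/models/checkpoints}"
MODEL_FILE="sd_xl_base_1.0.safetensors"
MODEL_URL="https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/$MODEL_FILE"

mkdir -p "$CHECKPOINT_DIR"

if [ "${FETCH:-0}" = "1" ]; then
  # -L follows Hugging Face's CDN redirect; -C - resumes a partial download.
  curl -L -C - -o "$CHECKPOINT_DIR/$MODEL_FILE" "$MODEL_URL"
else
  echo "Would download $MODEL_URL to $CHECKPOINT_DIR/$MODEL_FILE (set FETCH=1 to run)"
fi
```

Run it as FETCH=1 sh fetch_model.sh once you have confirmed the URL. The same pattern works for SD 1.5 or FLUX Schnell; only the repository path and filename change.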
Step 5: Run it
python3 main.py --force-fp16
The --force-fp16 flag is important on Mac: half-precision floating point halves memory use and runs faster on Apple's Metal backend.

Open http://127.0.0.1:8188 in your browser. You’re in.
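If the page does not load, a quick curl tells you whether the server actually started. This sketch assumes the default port and ComfyUI's /system_stats status route:

```shell
# Sanity check: confirm the ComfyUI server is answering before blaming
# the browser. /system_stats is ComfyUI's built-in status endpoint.
check_comfy() {
  url="${1:-http://127.0.0.1:8188}"
  if curl -sf "$url/system_stats" > /dev/null; then
    echo "ComfyUI is up at $url"
  else
    echo "ComfyUI is not responding at $url -- check the terminal for startup errors"
  fi
}

check_comfy
```

A "not responding" result usually means the Python process crashed during startup; the real error will be in the terminal where you launched main.py.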
Performance on Mac
| Mac Model | SDXL (512x512) | SDXL (1024x1024) | FLUX Schnell |
|---|---|---|---|
| M1 8GB | ~45s | ~90s | Too slow |
| M1 Pro 16GB | ~25s | ~50s | ~60s |
| M2 Pro 16GB | ~20s | ~40s | ~45s |
| M3 Pro 18GB | ~15s | ~30s | ~35s |
| M4 Pro 24GB | ~10s | ~20s | ~25s |
Not as fast as an RTX 4090 (which does SDXL in 3-5 seconds), but absolutely usable for personal projects and experimentation.
Option 2: Diffusion Bee (Easiest)
If you just want to generate images without touching the terminal, Diffusion Bee is a native Mac app with a clean GUI. Download, install, generate. That’s it.
Setup
- Download from diffusionbee.com
- Install like any Mac app
- It downloads a default model on first launch
- Start generating
Pros
- Zero terminal knowledge required
- Clean, native Mac interface
- Built-in model downloader
- Supports img2img, inpainting, upscaling
Cons
- Less flexible than ComfyUI
- Fewer model options
- No node-based workflows
- Updates lag behind ComfyUI
Best for: People who want Stable Diffusion without the complexity.
Option 3: Draw Things (iOS + Mac)
Draw Things is a native Apple app available on both iOS and macOS. It’s optimized specifically for Apple hardware — Core ML, ANE (Apple Neural Engine), Metal. On newer Macs, it’s actually faster than ComfyUI for some models because of the Core ML optimization.
Setup
- Download from the Mac App Store (free)
- Download models within the app
- Generate
Pros
- Best Apple hardware optimization
- Same app on iPhone, iPad, and Mac
- Free
- Supports LoRA, ControlNet, and most SD features
Cons
- Unique interface (not standard SD workflow)
- Community resources are smaller than ComfyUI
- Some advanced features are harder to access
Best for: Apple ecosystem users who want the fastest native performance.
Which Option Should You Choose?
ComfyUI if: You want maximum control, plan to use LoRA models and ControlNet, or want to learn Stable Diffusion properly. The learning curve pays off.
Diffusion Bee if: You just want to type a prompt and get an image. No fuss, no configuration.
Draw Things if: You want the best performance on Apple Silicon and like native Mac apps.
My recommendation: Start with Draw Things (free, fast, easy). If you outgrow it, move to ComfyUI (more powerful, more flexible). Skip Diffusion Bee unless you specifically want a simple desktop app.
Tips for Better Results on Mac
1. Use fp16 models. Full precision (fp32) models use twice the memory and run slower. Most models on CivitAI and Hugging Face have fp16 versions.
2. Start with smaller resolutions. Generate at 512x512 or 768x768, then upscale. A 1024x1024 image has four times the pixels of a 512x512 one, so generating at that size directly is roughly 4x slower.
3. Close other apps. Stable Diffusion uses your unified memory — the same memory your apps use. Close Chrome (the memory hog) before generating.
4. Use SDXL Turbo or Lightning for speed. These distilled models generate in 4-8 steps instead of 20-30, making them 3-5x faster. Quality is slightly lower but often good enough.
5. Batch overnight. Need 50 images? Set up a batch in ComfyUI before bed. Your Mac will churn through them while you sleep.
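The overnight batch can also be driven from the terminal. This is a minimal sketch, assuming ComfyUI is serving at its default address and that you exported your workflow with "Save (API Format)" as workflow_api.json; the /prompt endpoint and payload shape are ComfyUI's API, but the script itself is illustrative, not an official tool.

```shell
#!/bin/sh
# Sketch: queue the same exported workflow N times against a running
# ComfyUI server; it processes the queue in order while you sleep.
COMFY_URL="${COMFY_URL:-http://127.0.0.1:8188}"

# Wrap the exported workflow JSON in the payload shape /prompt expects.
build_payload() {
  printf '{"prompt": %s}' "$(cat "$1")"
}

run_batch() {
  workflow="$1"
  count="$2"
  i=1
  while [ "$i" -le "$count" ]; do
    build_payload "$workflow" |
      curl -s -X POST -H "Content-Type: application/json" \
        -d @- "$COMFY_URL/prompt" > /dev/null
    echo "queued job $i of $count"
    i=$((i + 1))
  done
}

# Example: sh overnight_batch.sh workflow_api.json 50
if [ -n "${1:-}" ]; then
  run_batch "$1" "${2:-50}"
fi
```

One caveat: an API-format workflow carries a fixed seed, so identical posts render identical images. Vary the seed field between jobs (or set the sampler's seed control to randomize before exporting) to get 50 different results.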
Troubleshooting Common Mac Issues
“Out of memory” errors: Your model is too large for your RAM. Switch to a smaller model or close other applications.
Slow generation: Make sure you’re using --force-fp16 in ComfyUI. Check Activity Monitor — if “Memory Pressure” is red, you need to free up RAM.
Black or corrupted images: Usually a VAE issue. Download the SDXL VAE separately and load it in your workflow.
ComfyUI won’t start: Check your Python version (python3 --version). ComfyUI needs 3.10 or 3.11. Python 3.12+ sometimes has compatibility issues with dependencies.
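A small shell check makes the version rule concrete. This is a sketch: the supported range (3.10/3.11) comes from the note above, and the remediation commands assume Homebrew's python@3.11 package.

```shell
# Check whether the active Python falls in ComfyUI's supported range.
supported_python() {
  case "$1" in
    3.10|3.11) return 0 ;;
    *) return 1 ;;
  esac
}

ver="$(python3 -c 'import sys; print(f"{sys.version_info[0]}.{sys.version_info[1]}")')"
if supported_python "$ver"; then
  echo "Python $ver is fine for ComfyUI"
else
  echo "Python $ver may break dependencies -- create a 3.11 venv instead:"
  echo "  brew install python@3.11"
  echo "  python3.11 -m venv venv && source venv/bin/activate"
  echo "  pip install -r requirements.txt"
fi
```

Because the venv bakes in its Python version, this check only needs to pass once at setup time.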
This guide is updated regularly as tools and models evolve.