Learn how negative prompts work in Stable Diffusion, SDXL, and other image models. Includes templates and common patterns.
Negative prompts tell image generation models what to exclude from the output. They're supported by Stable Diffusion, SDXL, and other diffusion-based models (but NOT by Midjourney or DALL-E, which handle exclusion differently).
During the diffusion process, the model uses your negative prompt to steer each denoising step away from those concepts. Think of it as a second prompt pulling in the opposite direction: the positive prompt describes what to move toward, the negative prompt describes what to move away from.
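Under the hood this steering is classifier-free guidance: the negative prompt's embedding takes the place of the usual "empty" conditioning. A minimal numeric sketch of the guidance arithmetic, with toy arrays standing in for the model's predicted noise tensors:

```python
import numpy as np

# Toy stand-ins for the noise predictions a diffusion model would make.
noise_positive = np.array([1.0, 2.0, 3.0])  # conditioned on the prompt
noise_negative = np.array([0.5, 0.5, 0.5])  # conditioned on the negative prompt

guidance_scale = 7.5  # a typical default CFG scale

# Each step moves toward the positive prediction and away from the
# negative one; the negative prompt replaces the unconditional branch.
guided = noise_negative + guidance_scale * (noise_positive - noise_negative)
print(guided)  # [ 4.25 11.75 19.25]
```

Raising the guidance scale amplifies both effects at once, which is why very high scales can over-suppress anything listed in the negative prompt.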
A good starting point for most image generation:

```
ugly, deformed, noisy, blurry, distorted, out of focus, bad anatomy,
extra limbs, poorly drawn face, poorly drawn hands, missing fingers,
watermark, signature, text, logo, username, low quality, worst quality,
normal quality, jpeg artifacts, cropped
```
For photorealistic portraits:

```
cartoon, illustration, anime, drawing, painting, 3d render, CGI,
plastic skin, doll-like, uncanny valley, cross-eyed, asymmetric eyes,
extra fingers, mutated hands, bad teeth, unnatural pose
```
For landscapes and scenes without people:

```
people, humans, figures, text, watermark, frame, border, oversaturated,
HDR artifacts, chromatic aberration, lens flare, vignette
```
For product photography:

```
blurry, low resolution, distorted, warped, cluttered background,
shadows on product, reflections, text overlay, watermark, frame
```
You can also weight individual terms: `(blurry:1.3)` increases the weight of that term, so the model steers away from it more aggressively.

| Model | Negative Prompt Support |
|---|---|
| SDXL 1.0 | Full support |
| SD 3.5 | Full support |
| Flux | Partial (via guidance) |
| Midjourney | Use --no parameter instead |
| DALL-E 3 | Not supported |
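The `(term:1.3)` weighting shown above follows the AUTOMATIC1111-style emphasis convention. As an illustration (not any UI's actual parser), here is a minimal sketch of splitting a negative prompt into term/weight pairs:

```python
import re

# Matches A1111-style weighted terms like "(blurry:1.3)".
WEIGHTED = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_terms(negative_prompt: str) -> list[tuple[str, float]]:
    """Split a comma-separated negative prompt into (term, weight)
    pairs; unweighted terms default to 1.0."""
    pairs = []
    for raw in negative_prompt.split(","):
        raw = raw.strip()
        if not raw:
            continue
        match = WEIGHTED.fullmatch(raw)
        if match:
            pairs.append((match.group(1).strip(), float(match.group(2))))
        else:
            pairs.append((raw, 1.0))
    return pairs

print(parse_terms("(blurry:1.3), watermark, (bad anatomy:1.2)"))
# [('blurry', 1.3), ('watermark', 1.0), ('bad anatomy', 1.2)]
```

Real implementations handle nesting, escapes, and bare parentheses; this sketch only covers the flat `(term:weight)` case.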