How AI Image Generators Read Your Prompts

Not every AI image generator reads prompts the same way. Some parse weighted keywords, others read natural sentences, and some work best with a single short phrase. Writing in the wrong format for your platform produces worse results — not because the prompt is bad, but because the platform cannot understand it. Promagen classifies every platform into one of four tiers based on its prompt architecture.

The 4-Tier Prompt Compatibility System

This classification is architectural, not a quality ranking. Each tier describes how the platform's text encoder processes your words — what it pays attention to, what it ignores, and what syntax it understands.

T1 — CLIP-Based

Weighted keywords with CLIP tokenisation

7 platforms: Dreamlike.art, DreamStudio, Fotor, Leonardo AI, Lexica, NovelAI, Stability AI / Stable Diffusion

T2 — Midjourney Family

Structured parameters with :: weighting and -- flags

1 platform: Midjourney

T3 — Natural Language

Conversational sentences, no special syntax

21 platforms: 123RF AI Generator, Adobe Firefly, Artbreeder, Bing Image Creator, Canva Magic Media, DALL·E 3, DeepAI, Flux (Black Forest Labs), Google Imagen, Ideogram, Imagine (Meta), Kling AI, Luma AI, MyEdit (CyberLink), Pixlr, Playground AI, Recraft, Runway ML, Simplified, Visme AI, VistaCreate

T4 — Plain Language

Short, focused prompts with minimal jargon

11 platforms: Artguru, Artistly, BlueWillow, Clipdrop, Craiyon, Hotpot.ai, Jasper Art, Microsoft Designer, Photoleap, Picsart, PicWish
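The tier table above can be expressed as a simple lookup. This is a sketch for illustration only: the platform names and tier assignments are copied from this guide, while Promagen itself derives them from its SSOT database rather than from hard-coded sets like these.

```python
# Tier lookup sketch. Names and assignments are copied from this guide;
# Promagen's real classification lives in its SSOT database.
TIERS: dict[int, set[str]] = {
    1: {"Dreamlike.art", "DreamStudio", "Fotor", "Leonardo AI",
        "Lexica", "NovelAI", "Stability AI / Stable Diffusion"},
    2: {"Midjourney"},
    3: {"123RF AI Generator", "Adobe Firefly", "Artbreeder",
        "Bing Image Creator", "Canva Magic Media", "DALL·E 3", "DeepAI",
        "Flux (Black Forest Labs)", "Google Imagen", "Ideogram",
        "Imagine (Meta)", "Kling AI", "Luma AI", "MyEdit (CyberLink)",
        "Pixlr", "Playground AI", "Recraft", "Runway ML", "Simplified",
        "Visme AI", "VistaCreate"},
    4: {"Artguru", "Artistly", "BlueWillow", "Clipdrop", "Craiyon",
        "Hotpot.ai", "Jasper Art", "Microsoft Designer", "Photoleap",
        "Picsart", "PicWish"},
}

def tier_of(platform: str) -> int:
    """Return the prompt tier for a platform name, or raise KeyError."""
    for tier, members in TIERS.items():
        if platform in members:
            return tier
    raise KeyError(f"unknown platform: {platform}")
```

A lookup like `tier_of("Midjourney")` returns `2`, and the four sets together cover all 40 tracked platforms.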

Same Creative Intent, Four Different Formats

The following examples show the same scene described in each tier's native format:

A dramatic coastal lighthouse in a storm at sunset, with crashing waves, dark clouds, and warm golden light from the beacon.

Tier 1 — CLIP Weighted Keywords
(dramatic coastal lighthouse:1.4), (storm:1.3), sunset, (crashing waves:1.2), dark clouds, (warm golden light:1.3), beacon, cinematic, 8K, masterpiece

CLIP tokenisation breaks the prompt into weighted tokens. (term:1.3) increases that concept's influence by 30%. Order matters less than weights. Negative prompts go in a separate field.
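The weighting convention can be made concrete with a small parser. This is a hypothetical sketch of the flat `(term:weight)` form described above; real CLIP front ends (for example, Stable Diffusion web UIs) support richer syntax such as nested parentheses.

```python
import re

# Matches the flat (term:weight) convention, e.g. (storm:1.3).
WEIGHTED = re.compile(r"\(([^:()]+):([\d.]+)\)")

def parse_weights(prompt: str) -> list[tuple[str, float]]:
    """Split a Tier 1 prompt into (term, weight) pairs."""
    tokens: list[tuple[str, float]] = []

    def plain(text: str) -> None:
        # Unweighted, comma-separated terms default to weight 1.0.
        for term in text.split(","):
            if term.strip():
                tokens.append((term.strip(), 1.0))

    pos = 0
    for m in WEIGHTED.finditer(prompt):
        plain(prompt[pos:m.start()])
        tokens.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    plain(prompt[pos:])
    return tokens
```

Run against part of the lighthouse example, `parse_weights("(storm:1.3), sunset")` yields `[("storm", 1.3), ("sunset", 1.0)]`: the weighted concept carries 30% more influence than the unweighted one.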

Tier 2 — Midjourney Keywords
dramatic coastal lighthouse in a storm at sunset, crashing waves, dark clouds, warm golden light from the beacon::1.3 cinematic atmosphere::1.2 --ar 16:9 --no blur, haze, cartoon

Midjourney uses :: for emphasis weighting and --no for inline negative exclusions. Parameters like --ar control aspect ratio. Natural sentences work, but weighted keyword phrases produce more precise results.
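The parameter part of a Midjourney prompt is regular enough to sketch a separator for it. This is an illustrative sketch only, not Midjourney's real parser: it assumes flag values contain no hyphens and that flags sit at the end of the prompt, as in the example above.

```python
import re

def parse_mj_flags(prompt: str) -> dict[str, str]:
    """Extract --flag value pairs such as --ar 16:9 and --no blur, haze."""
    return dict(re.findall(r"--(\w+)\s+([^-]*?)(?=\s+--|$)", prompt))

def base_prompt(prompt: str) -> str:
    """Everything before the first -- flag is the descriptive prompt."""
    return re.split(r"\s+--", prompt, maxsplit=1)[0].strip()
```

On the lighthouse example, `parse_mj_flags` returns `{"ar": "16:9", "no": "blur, haze, cartoon"}`, while `base_prompt` keeps the weighted description (everything up to `cinematic atmosphere::1.2`).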

Tier 3 — Natural Language
A dramatic coastal lighthouse standing on a rocky cliff during a powerful storm at sunset. Massive waves crash against the base of the cliff. Dark, heavy clouds fill the sky while warm golden light streams from the lighthouse beacon, cutting through the rain and mist. Cinematic composition with rich detail.

Natural language platforms read full sentences. Descriptive, conversational writing works best. Weight syntax like (term:1.3) is ignored or treated as literal text. Detail and specificity come from vocabulary, not from numerical weights.
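A practical consequence: before pasting a Tier 1 prompt into a Tier 3 platform, the weight syntax should be stripped so the encoder never sees the brackets and numbers. A minimal sketch, using the same `(term:weight)` pattern described in this guide:

```python
import re

def strip_clip_weights(prompt: str) -> str:
    """Replace (term:1.3) with just the term for Tier 3 platforms,
    which would otherwise read the brackets and number as literal text."""
    return re.sub(r"\(([^:()]+):[\d.]+\)", r"\1", prompt)
```

For example, `strip_clip_weights("(storm:1.3), sunset")` returns `"storm, sunset"`. Restoring the lost emphasis then has to come from stronger vocabulary, not from the weights.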

Tier 4 — Plain Language
Lighthouse in a storm at sunset with crashing waves and golden light

Plain language platforms work best with short, focused prompts. They have lower character limits and simpler text encoders. Overloading them with detail or long descriptions often produces worse results than a clear, concise description.
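A length check is the simplest guard here. The 40–80 character sweet spot below is this guide's rule of thumb, not a documented limit of any specific platform:

```python
def fits_plain_language(prompt: str, sweet_spot: int = 80) -> bool:
    """True if a prompt is short enough for a Tier 4 platform.
    The 80-character default is a rule of thumb from this guide."""
    return len(prompt) <= sweet_spot
```

The Tier 4 lighthouse prompt above passes at 68 characters; a 300-character Tier 3 description would fail and should be condensed before use.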

Want to see how Promagen handles the conversion?

Try the lighthouse scene in Prompt Lab

Why Your Prompt Doesn't Work on Another Platform

The most common reason a prompt “fails” on a new platform is format mismatch — not creative quality. Here are the patterns that trip people up:

CLIP syntax pasted into a natural language platform

Platforms like DALL·E 3, Flux, and Adobe Firefly do not understand parenthetical weights. (dramatic:1.4) is read as the literal text "(dramatic:1.4)" — the model sees brackets and numbers, not emphasis.

Pasting "(lighthouse:1.4), (storm:1.3), sunset" into DALL·E 3 produces confused results because it reads the weight syntax as literal words.

Midjourney --no flags sent to other platforms

The --no parameter is Midjourney-specific. Other platforms either ignore it or interpret "--no blur" as the literal words "no blur" in the prompt, which can actually increase the presence of blur.

Sending "--no watermark, text" to Stable Diffusion does nothing. Use the separate negative prompt field instead.
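The fix can be sketched as a small converter that moves the --no terms into the field Stable Diffusion actually reads. This is an illustrative sketch, assuming --no is the last parameter (as in the examples above); real Stable Diffusion interfaces take the negative prompt as a separate input.

```python
import re

def to_sd_prompts(mj_prompt: str) -> tuple[str, str]:
    """Split a Midjourney-style prompt into (positive, negative)
    for a platform with a separate negative prompt field."""
    m = re.search(r"--no\s+(.+)$", mj_prompt)
    if not m:
        return mj_prompt.strip(), ""
    # Drop any other --flag value pairs (e.g. --ar 16:9), which
    # Stable Diffusion does not understand either.
    positive = re.sub(r"--\w+\s+[^-]*", "", mj_prompt[:m.start()]).strip()
    return positive, m.group(1).strip()
```

For example, `to_sd_prompts("lighthouse at sunset --ar 16:9 --no watermark, text")` returns `("lighthouse at sunset", "watermark, text")`.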

Long natural language prompts on plain language platforms

Tier 4 platforms (Jasper Art, Microsoft Designer, Picsart) have short sweet spots (40–80 characters). Pasting a 300-character natural language prompt gets truncated or overwhelms the simpler text encoder.

A 250-word Flux prompt pasted into a Tier 4 platform with a 40-character sweet spot produces generic results because most of the prompt is ignored.

Missing negative prompts on platforms without support

Platforms like DALL·E 3, Flux, Google Imagen, and Adobe Firefly have no negative prompt mechanism. Writing "no blur" in the main prompt can actually make blur more likely because the model processes the concept "blur".

Writing "no watermark, no text overlay" in a DALL·E 3 prompt backfires: the model reads "watermark" and "text overlay" as concepts and may include them. Instead, write "clean image, professional finish".

Frequently asked questions

Why doesn't my Midjourney prompt work in DALL·E 3?

Midjourney uses keyword-weighted syntax (::) and --no flags that DALL·E 3 ignores entirely. DALL·E 3 reads natural language sentences. A Midjourney prompt pasted into DALL·E 3 loses all its weighting and negative instructions. You need to rewrite the prompt as a conversational description — or use Promagen, which handles the conversion automatically.

What is the difference between CLIP and natural language prompts?

CLIP-based platforms (Tier 1) tokenise your prompt into weighted keywords — syntax like (term:1.3) increases that concept's influence by 30%. Natural language platforms (Tier 3) read conversational sentences and ignore weight syntax entirely. Writing CLIP syntax for a natural language platform wastes the weights and may add confusing literal characters to the prompt.

Can I use one prompt for all platforms?

Not effectively. The 40 platforms Promagen tracks use four fundamentally different prompt architectures. A prompt optimised for one tier will underperform on another because the text encoder processes it differently. Promagen's Prompt Lab solves this by expressing your creative intent in the correct format for whichever platform you select.

What is a prompt tier?

A prompt tier is Promagen's classification of how an AI image generator's text encoder processes prompts. Tier 1 uses CLIP tokenisation with weighted keywords. Tier 2 is the Midjourney family with :: weighting and --no flags. Tier 3 reads natural language sentences. Tier 4 works best with short, focused phrases. The tier describes prompt architecture, not quality.

Why do plain language platforms work differently from natural language ones?

Natural language platforms (Tier 3) have sophisticated text encoders that understand complex sentences and follow detailed instructions. Plain language platforms (Tier 4) have simpler encoders with shorter character limits — they work best with brief, focused descriptions. Overloading a Tier 4 platform with a long Tier 3 prompt often produces worse results.

Explore further

Try Prompt Lab — Automatic format conversion for any platform

Tier classifications and platform data from Promagen's platform intelligence database (SSOT v1.1.0). Last verified: 2026-03-29.

All platform counts are dynamically derived. This guide is original research based on architectural analysis of each platform's text encoder behaviour.