The same Sentinel audit we ship to clients, run against promagen.com. Published every Monday with real numbers — including the zeros. If we can’t make Promagen citable for AI visibility queries, we have no business selling AI visibility audits.
Six measurements. Numbers in white are measured. Cards in slate are honestly pending or not yet configured — every empty seat is named, never hidden behind a dash.
Populates after the first Monday cron run.
Populates after the first Monday cron run.
Citation interrogator (Slice 1b) ships next — per-engine results across ChatGPT, Claude, Gemini, Perplexity, including the zeros.
GA4 AI-source filtering not yet wired. Will show sessions and landing pages by engine once configured.
Populates after the first Monday cron run.
Populates after the first Monday cron run.
The cron computes all five inputs every Monday and writes the composite above. Per-input numerical values surface here when sub-scores are persisted to sentinel_run_summaries in the next method update.
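A five-input composite like the one above can be sketched in a few lines. The sub-score names and the equal weighting here are illustrative assumptions, not Sentinel's actual schema or method:

```python
# Hypothetical composite: equal-weighted mean of five 0-100 sub-scores.
# The input names are illustrative, not Sentinel's real field names.
def composite_score(inputs: dict[str, float]) -> float:
    """Average five 0-100 sub-scores into one 0-100 composite."""
    expected = {"allowed", "discoverable", "renderable", "structured", "quotable"}
    if set(inputs) != expected:
        raise ValueError(f"expected sub-scores {sorted(expected)}")
    return round(sum(inputs.values()) / len(inputs), 1)

score = composite_score({
    "allowed": 100.0,
    "discoverable": 80.0,
    "renderable": 100.0,
    "structured": 75.0,
    "quotable": 60.0,
})
# score == 83.0
```

Persisting each input alongside the composite (rather than the composite alone) is what lets the per-input values surface on this page once the method update lands.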
The full pipeline a Sentinel audit measures, end-to-end. Every stage renders with its honest state. Stages that aren’t yet wired show what they’ll measure and when they ship — no hidden steps, no skipped layers.
Bot permissions allow crawler access. Checks robots.txt across the monitored bot set (Googlebot, Bingbot, OAI-SearchBot, GPTBot, ClaudeBot, PerplexityBot, Google-Extended).
7 of 7 monitored bots permitted
Pages exposed via sitemap and internal links.
Sub-score persistence and bot permission matrix land in the Phase 3 method update.
Pages return HTTP 200 with parseable HTML.
Sub-score persistence and bot permission matrix land in the Phase 3 method update.
Title, H1, metadata, schema, headings, tables, FAQ structure all present and aligned.
Sub-score persistence and bot permission matrix land in the Phase 3 method update.
Page contains evidence-backed, dated, specific claims worth citing.
Sub-score persistence and bot permission matrix land in the Phase 3 method update.
A monitored AI engine named, linked, or described Promagen for a tracked query.
Citation interrogator (Slice 1b) ships next — per-engine results across ChatGPT, Claude, Gemini, Perplexity, including the zeros.
Users arrived from an AI source — utm_source=chatgpt.com, claude.ai, perplexity.ai, gemini.google.com.
GA4 AI-source filtering (utm_source=chatgpt.com, claude.ai, etc.) not yet wired.
AI-referred users completed a meaningful action — booking, purchase, signup.
Server-side conversion telemetry pending.
The status of every monitored crawler against https://promagen.com/robots.txt. Computed live from the production robots.txt every 24 hours. The funnel’s Allowed stage reads its number from this table — 7 of 7 monitored bots permitted.
Google search discovery and AI Overviews.
User-agent: * (wildcard fallback) disallows specific paths only — site root remains permitted.
Bing discovery and Microsoft AI citation surfaces.
User-agent: * (wildcard fallback) disallows specific paths only — site root remains permitted.
ChatGPT search and discovery surface.
User-agent: OAI-SearchBot disallows specific paths only — site root remains permitted.
OpenAI training-related crawler.
User-agent: GPTBot disallows specific paths only — site root remains permitted.
Anthropic crawler signal.
User-agent: ClaudeBot disallows specific paths only — site root remains permitted.
Perplexity crawler signal.
User-agent: PerplexityBot disallows specific paths only — site root remains permitted.
Google AI-related control signal.
User-agent: Google-Extended disallows specific paths only — site root remains permitted.
Last observed crawl per bot — Not configured. Server-side crawler-activity logs ship in a later method update. The state column above reflects today’s robots.txt; the “has this bot actually visited recently” column populates when access logs are wired.
Fetched Sat, 9 May 2026, 18:16 UTC · HTTP 200
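The per-bot check behind this table can be reproduced with the standard library alone. The bot list matches the monitored set above; the robots.txt body here is a stand-in, not the production file:

```python
# Sketch of a per-bot robots.txt permission check (stdlib only).
from urllib.robotparser import RobotFileParser

BOTS = ["Googlebot", "Bingbot", "OAI-SearchBot", "GPTBot",
        "ClaudeBot", "PerplexityBot", "Google-Extended"]

# Stand-in robots.txt: wildcard group disallows one path, root stays open.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
"""

def permitted_bots(robots_body: str, url: str) -> dict[str, bool]:
    """Map each monitored bot to whether it may fetch the given URL."""
    parser = RobotFileParser()
    parser.parse(robots_body.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in BOTS}

result = permitted_bots(ROBOTS_TXT, "https://promagen.com/")
# With only /admin/ disallowed, all 7 monitored bots are permitted at the root.
assert sum(result.values()) == 7
```

A bot with its own `User-agent:` group would be evaluated against that group instead of the wildcard, which is why the table lists each crawler separately.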
Don’t take our word for any of this. Paste any of the four questions below into ChatGPT, Claude, Gemini, or Perplexity and see what the AI engines actually say. Each question tests a different layer of the funnel — buyer language, category shopping, entity recognition, product recognition.
“How do I rank on ChatGPT?”
Pending — citation interrogator (Slice 1b) ships next. When live, this row shows the engine-by-engine result of asking this exact question, including the zeros.
“Best AI visibility monitoring tools 2026”
Pending — citation interrogator (Slice 1b) ships next. When live, this row shows the engine-by-engine result of asking this exact question, including the zeros.
“What is Promagen?”
Pending — citation interrogator (Slice 1b) ships next. When live, this row shows the engine-by-engine result of asking this exact question, including the zeros.
“What does Sentinel by Promagen do?”
Pending — citation interrogator (Slice 1b) ships next. When live, this row shows the engine-by-engine result of asking this exact question, including the zeros.
The first Monday report is pending. This section will list the priority actions Sentinel generated for the week.
The biggest improvement and the biggest problem, side by side. A vendor that only shows wins is selling. A vendor that shows both is reporting.
Pending — populates after the first weekly run.
Pending — populates after the first weekly run.
Operator-grade diagnosis. One row per URL touched by this week’s regressions. Search by URL, filter by severity, status, and page class.
Pending — populates after the first weekly run.
Pending — populates after the first weekly run.
Every Monday at 06:00 UTC, the Sentinel cron crawls every public authority page on promagen.com, compares the result against the prior week, and detects regressions against a fixed policy matrix. The numbers above are that crawl’s output.
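The week-over-week comparison can be sketched as a diff of per-URL scores. The threshold and the sample data are illustrative assumptions, not Sentinel's actual policy matrix:

```python
# Hypothetical regression check: compare this week's per-URL scores to last week's.
def find_regressions(prev: dict[str, int], curr: dict[str, int],
                     threshold: int = 5) -> list[tuple[str, int]]:
    """Return (url, points_lost) for pages that dropped by at least threshold."""
    return sorted(
        (url, prev[url] - score)
        for url, score in curr.items()
        if url in prev and prev[url] - score >= threshold
    )

last_week = {"/": 92, "/pricing": 88, "/sentinel": 95}
this_week = {"/": 92, "/pricing": 70, "/sentinel": 94}
find_regressions(last_week, this_week)  # → [("/pricing", 18)]
```

Small week-over-week noise stays below the threshold; only drops the policy matrix considers meaningful surface as regressions.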
The proof is intentionally uncomfortable. If a page degraded this week, you see it. If the citation board is empty (it is; the citation interrogator ships in a later slice), you see that too. Most AI-visibility consultancies don't publish audits of their own sites because the scores aren't presentable. Ours are, because the work is shipped.
What this report does not do (yet): track where ChatGPT, Claude, Perplexity and Gemini cite Promagen. That's the citation interrogator, separate work in flight. When it lands, the citation board appears here with real per-engine numbers, including the zeros.
Status: the first weekly cron run is pending. Once Monday's scheduled crawl reports, this page populates with real numbers.