⚾ The Scout — AI Visibility Score

Will ChatGPT, Gemini, and Perplexity cite your release?

The Scout reads your press release the way an answer engine does — looking for the signals that determine whether your news gets quoted, cited, and surfaced. Eight categories. Specific findings. Free.

Paste your full press release below. AI answer engines — ChatGPT, Gemini, Perplexity, Google AI Overviews — choose source material based on a small set of signals: specificity, quotability, authority, originality, citation potential, readability, timeliness, and entity disambiguation. The Scout grades all eight and tells you exactly what to fix. No signup. No data sent anywhere — analysis runs entirely in your browser.

How to read these scores
The Scout grades AI visibility against general-audience answer engines (ChatGPT, Gemini, Claude, Perplexity). It does not grade trade-press readability, and the two are not the same. If your release serves a vernacular-heavy industry — legal, medical, financial services, scientific, energy, defense, or any technical sector with an established working vocabulary — Readability, Originality, or Citation Potential may score lower for AI-citation purposes without making the release any less effective for its trade-press audience. Both audiences now matter. The Scout shows you how a release performs for the new one, so you can balance both.

What is Answer Engine Optimization (AEO)?

Search has changed. When buyers, journalists, and stakeholders ask a question now, they often ask ChatGPT, Gemini, Perplexity, or Google's AI Overview — and the answer they see is synthesized from a handful of sources the AI selected as authoritative.

The press releases that get cited share specific qualities: concrete details (real numbers, named entities, dates) AI can extract; quotable language with clear attribution; authority signals like a proper boilerplate and dateline; originality (no clichés or generic filler); external references AI can cross-check; readability (sentences short enough to parse cleanly); time anchoring that signals newness; and consistent naming so AI knows exactly who and what is being discussed.

That's what Answer Engine Optimization (AEO) means: writing releases AI can cite. The Scout grades you on all eight signals so you can fix what's weak before you distribute.

The Eight Signals The Scout Looks For

🔢
Specificity
Concrete numbers, percentages, dollar amounts, dates, and named entities. Vague releases don't get cited because there's nothing for AI to extract.
💬
Quotability
Direct quotes with named attribution using "said." AI engines surface quoted statements with credited speakers. No quotes = no quote-citations.
🏛️
Authority
Proper dateline, formal executive titles, and a real boilerplate. These signal "this is a real organization, not a content farm."
✨
Originality
PR clichés ("industry-leading," "best-in-class," "pleased to announce") get filtered out as low-signal noise. AI prefers distinctive language.
🔗
Citation Potential
External references, statistics with sourcing, and named third-party data make your release verifiable — and verifiable releases get cited more.
📖
Readability
Sentences AI can parse cleanly. Long, dense, passive sentences confuse extraction. Short, active, concrete sentences win.
🕐
Timeliness
Time anchors AI engines use to judge freshness. Strong datelines, present-tense markers ("today," "this week"), specific forward dates — and no vague "soon" or "stay tuned" placeholders.
🎯
Entity Disambiguation
Consistent canonical names so AI engines can confidently identify who, what, and where. Full-name-and-title attribution, repeated canonical company names, and few generic "the firm" references.
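Checks like these are simple enough to run entirely client-side, which is what makes a no-data-sent browser tool possible. As a hedged illustration only (not The Scout's actual scoring code — the cliché list, regexes, and point weights below are invented for the example), two of the signals above could be approximated in plain JavaScript like this:

```javascript
// Hypothetical sketch of two signal checks. The phrase list and
// weights are illustrative assumptions, not The Scout's real rubric.
const CLICHES = ["industry-leading", "best-in-class", "pleased to announce"];

// Originality: penalize each known PR cliché found in the text.
function originalityScore(text) {
  const lower = text.toLowerCase();
  const hits = CLICHES.filter((phrase) => lower.includes(phrase)).length;
  return Math.max(0, 100 - hits * 25); // each cliché costs 25 points
}

// Specificity: reward concrete tokens — dollar amounts, percentages,
// four-digit years, and bare numbers — capped at 100.
function specificityScore(text) {
  const matches = text.match(/\$[\d,.]+|\d+%|\b(19|20)\d{2}\b|\b\d+\b/g) || [];
  return Math.min(100, matches.length * 10);
}
```

A real grader would weight and combine checks like these across all eight categories; the point is that every signal is a pattern a browser script can detect without sending your draft anywhere.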

An AI-visible release starts with the right structure.

Pitch'd builds the structure for you. Nineteen release types, AP Style coaching, headline grading, and now AI visibility — all in one guided flow. Free to start, no signup required.

Also free: Pitch Grade rates your headline · The Ump rates your newsworthiness · AP Style Checker. See all tools →