SEO Grade Calculator
Score your site's SEO performance
Run a full scan for discovery, crawl control, markdown, MCP, OAuth, and agent-facing standards. Then keep the score, the evidence, and the before-versus-after history.
A search engine crawler and an AI agent read your site for different reasons. Googlebot wants to rank a page in a results list. ChatGPT browse, Perplexity, Claude's web tools, and Gemini want to answer a question — often by quoting you directly, citing you in passing, or completing a task on a user's behalf. Those reading patterns expose different gaps. A site that scores fine on a traditional SEO audit can still be invisible to (or, worse, misrepresented by) an AI agent because the signals the agent depends on simply aren't there.
This tool checks the four things that determine whether an AI agent can find, read, trust, and act on your site. Discoverability covers the basics — robots.txt, sitemap.xml, and the /.well-known/ entries agents now look for first. AI readability is about whether your HTML is clean enough that a language model can parse it without JavaScript, whether you publish a markdown alternate for content-heavy pages, and whether your structured data actually maps to your visible content. Policy and identity tells agents which bots you welcome, which you block, and how your content is allowed to be used (cited, summarized, used for training). Action surfaces is the most ambitious slice — whether agents can do anything beyond reading: an MCP server, an API catalog, schema.org actions, an OAuth-protected programmatic surface.
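As a rough sketch, the discoverability side of such a check starts by probing a handful of well-known entry points. The path list below is an illustrative assumption, not the scanner's published check list:

```python
from urllib.parse import urljoin

# Hypothetical entry points an agent might probe first; the tool's
# actual check list may differ from this illustration.
DISCOVERY_PATHS = [
    "/robots.txt",
    "/sitemap.xml",
    "/llms.txt",
    "/.well-known/security.txt",
]

def discovery_urls(origin: str) -> list[str]:
    """Build the absolute URLs to probe for a given origin."""
    return [urljoin(origin, path) for path in DISCOVERY_PATHS]

print(discovery_urls("https://example.com"))
```

Fetching each URL and recording the status code and body snippet is then enough to score the category and show the evidence.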
We grade each category independently and combine them into a 0–100 readiness score, then map that score to a 0–5 readiness level so it's clear what the next move is. Most sites land at Level 1 or Level 2 on first scan, regardless of how big the brand is — agent-readiness is genuinely new ground, and getting from Level 0 to Level 2 is usually a weekend of hygiene work.
We score four categories of agent-readiness, each weighted by how much it affects whether AI agents can discover, read, and act on your site.
| Category | Weight | What we check |
|---|---|---|
| Discoverability | 20% | Checks whether agents can find your site-level instructions and machine-readable entry points quickly. |
| Content Accessibility | 15% | Looks at whether agents can consume your pages in a cleaner representation than raw rendered HTML. |
| Bot Access Control | 20% | Measures how clearly you express AI crawler policy and downstream usage preferences. |
| API, Auth, MCP & Skill Discovery | 45% | Measures whether an agent can discover actual machine interfaces instead of stopping at static content. |
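Under these weights, the overall score is a plain weighted sum of the four category scores. A minimal sketch, assuming each category score is already computed on a 0–100 scale (the key names here are illustrative):

```python
# Weights from the table above; category scores are assumed to be 0-100.
WEIGHTS = {
    "discoverability": 0.20,
    "content_accessibility": 0.15,
    "bot_access_control": 0.20,
    "api_auth_mcp_skill": 0.45,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of per-category scores, rounded to one decimal."""
    return round(sum(WEIGHTS[k] * category_scores[k] for k in WEIGHTS), 1)

# Example: strong basics, weak action surfaces.
print(overall_score({
    "discoverability": 80,
    "content_accessibility": 70,
    "bot_access_control": 60,
    "api_auth_mcp_skill": 10,
}))  # → 43.0
```

Note how the 45% action-surface weight drags the example down: strong hygiene alone tops out well short of the upper levels.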
Three patterns keep coming up — if you recognize yourself in any of them, the report will slot straight into how you already work.
Run a public scan before a sales call, screenshot the score, and drop the share URL into the proposal. This treats agent-readiness as a measurable retainer line item.
Use the score history to prove that the bot infrastructure work moved the needle. Re-scan after each shipped fix and watch the level climb.
Get a one-page snapshot of where AI agents currently rank you, with a prioritized fix list and an honest assessment of how big each fix is.
A first scan gives you six panels. Here's what each one is for.
Overall score & level
0–100 score with a 0–5 level mapping. Level is the number you reference when discussing progress.
Category breakdown
Per-category bars. A category at red is where the next sprint should focus, regardless of overall score.
Top recommended fixes
Click-to-expand accordion. Each fix shows what's broken, why it matters, and the developer-ready instructions. There's a one-button "Copy developer brief" at the top.
Full audit evidence
Every check we ran, with the actual HTTP responses, headers, and snippets we used to score it. Open this when you want to verify a finding yourself.
Score history
Once you have two scans, this becomes a trend chart. Use it to prove that an infra change moved the score.
Benchmark
Where the scanned domain sits relative to other completed scans for the same site type — helpful for "is this score normal?" conversations.
Each check returns one of four states: pass (full credit), fail (zero credit), warn (partial credit, usually because we found a signal but it was incomplete), or neutral (informational, not scored). Within a category, the category score is the weighted average of its individual checks; an optional check that we couldn't reach is dropped from the denominator rather than counted as a failure.
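That category rule can be sketched directly. The exact partial credit for a warn is an assumption here (the text only says "partial credit"), as is the per-check weight field:

```python
# Credit per check state; "neutral" is informational and never scored.
# The 0.5 warn credit is an illustrative assumption, not a published value.
CREDIT = {"pass": 1.0, "warn": 0.5, "fail": 0.0}

def category_score(checks: list[dict]) -> float:
    """Weighted average of check credits on a 0-100 scale.

    An optional check that was unreachable is dropped from the
    denominator rather than counted as a failure.
    """
    num = den = 0.0
    for c in checks:
        if c["state"] == "neutral":
            continue  # informational only, never scored
        if c.get("optional") and not c.get("reachable", True):
            continue  # dropped from the denominator, not a failure
        num += c["weight"] * CREDIT[c["state"]]
        den += c["weight"]
    return round(100 * num / den, 1) if den else 0.0

checks = [
    {"state": "pass", "weight": 2},
    {"state": "warn", "weight": 1},
    {"state": "fail", "weight": 1, "optional": True, "reachable": False},
]
print(category_score(checks))  # → 83.3
```

Dropping the unreachable optional check matters: counted as a failure, the same inputs would score 62.5 instead of 83.3.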
The four category scores combine into the overall 0–100 score using the weights shown in the table above. We deliberately avoid a single-pass / single-fail gate: a low Discoverability score caps everything that depends on it, but a strong Action Surfaces score still pulls the overall up because it represents real work that helps real agents.
Level mapping: 0 (0–9, invisible), 1 (10–24, indexed only), 2 (25–49, readable), 3 (50–69, identified), 4 (70–84, transactable), 5 (85+, agent-native). The level is the headline number for stakeholders; the 0–100 score is for trend tracking.
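The mapping is mechanical enough to sketch as a threshold table:

```python
# Thresholds from the level mapping above: (minimum score, level, label).
LEVELS = [
    (85, 5, "agent-native"),
    (70, 4, "transactable"),
    (50, 3, "identified"),
    (25, 2, "readable"),
    (10, 1, "indexed only"),
    (0,  0, "invisible"),
]

def level_for(score: float) -> tuple[int, str]:
    """Map a 0-100 score to its 0-5 readiness level and label."""
    for floor, level, label in LEVELS:
        if score >= floor:
            return level, label
    return 0, "invisible"

print(level_for(43))  # → (2, 'readable')
```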
We re-fetch the site fresh for each scan, including robots.txt, the sitemap, and a sample of pages from the sitemap. We do not rely on cached or third-party data. The sample is small (typically 1–3 pages) so the scan is fast and the cost is bounded.
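One way to take a bounded sample from a standard sitemap — the take-the-first-N strategy is an assumption for illustration; the text only says the sample is typically 1–3 pages:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sample_sitemap_urls(sitemap_xml: str, limit: int = 3) -> list[str]:
    """Return the first `limit` <loc> entries from a urlset sitemap."""
    root = ET.fromstring(sitemap_xml)
    locs = [el.text.strip()
            for el in root.findall("sm:url/sm:loc", SITEMAP_NS)
            if el.text]
    return locs[:limit]

xml_doc = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/pricing</loc></url>
  <url><loc>https://example.com/blog</loc></url>
  <url><loc>https://example.com/docs</loc></url>
</urlset>"""
print(sample_sitemap_urls(xml_doc))  # first three <loc> entries
```

A sitemap index (nested sitemaps) would need one extra level of fetching before sampling.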
Three reference points for interpreting your score.
Level 0–1 · 0–24
Effectively invisible to AI agents
Missing robots.txt or sitemap, no structured data, JavaScript-rendered content, no /.well-known/ entries. ChatGPT browse and Perplexity will skip the site or use stale Google cache.
Level 2–3 · 25–69
Readable, but not yet identified
Agents can fetch and parse the site cleanly. What's missing is bot-specific identity (LLM policy, content-use signals) and discovery surfaces (API catalog, MCP). Most sites land here.
Level 4–5 · 70–100
Agent-transactable
Discovery, auth, and action surfaces are live. Agents can identify themselves, read your policy, and call your APIs on a user's behalf. A small percentage of sites are here today.
It means an AI system can discover your rules, read your content in a clean format, and find the machine-facing interfaces behind the site without reverse-engineering the frontend.
robots.txt is only one part of the picture. A site can be crawlable and still be hard for agents to use if it lacks markdown delivery, discovery headers, API metadata, OAuth discovery, or MCP-style machine entry points.
A growing share of discovery now starts with AI systems and conversational interfaces. If your site is hard for machines to read or act on, you are less likely to be cited, routed to, or used in agent-led buying flows.
Reports are public at /ar/<domain>. A rerun does not overwrite the last completed scan. The public page keeps the latest finished report visible while a new run is still in progress, and same-day reruns stay distinct in history.
No, it complements it. Traditional SEO audits check whether Googlebot can crawl and rank your pages. Agent-readiness checks whether the new generation of AI agents can discover, read, and act on your site. A site can pass one and fail the other; in practice you want both.
llms.txt is a single file proposal for surfacing AI-readable content. It's one signal we check (under AI Readability), but agent-readiness covers a much wider surface — robots policy, structured data, action interfaces, identity. A perfect llms.txt alone gets you partial credit in one category.
No, MCP is one of several action-surface signals and only matters for sites that want agents to take actions (book, buy, query). A documentation site, a blog, or a marketing site can score 70+ without an MCP server. The scan tells you which surfaces are missing — you decide which ones make sense for your product.
After every shipped change to robots.txt, sitemap, structured data, or any of the discovery surfaces. The score history will show whether the change actually moved the needle. For passive monitoring, monthly is fine — the underlying standards don't change weekly.
Yes. Email vadim@seojuice.io with the domain and we'll delete the public report within one business day. Re-running a scan does not delete prior runs from history; only an explicit removal request does.
A few causes: a third-party service we depend on returned an error, the sample of pages we scanned was different, your CDN served slightly different bytes, or a check we recently calibrated weighed signals differently. If you see a score swing larger than ~5 points without a code change, open the report and compare the audit evidence — the diff is usually visible there.
Site type adjusts the benchmark we compare your score against and which optional checks are weighted higher. An ecommerce site benefits more from product schema and a robust API catalog; a content site benefits more from clean markdown alternates. The base scoring is the same — only the benchmark and a few weights change.
The report URL is publicly accessible (it has to be, for the share/benchmark feature to work). The raw HTTP fetches we ran during the scan are stored only as much as the report needs to show the evidence — we don't retain page bodies beyond the previews shown in the audit.
An HTTP Link header advertising a markdown alternate of the page, e.g. `Link: </page.md>; rel="alternate"; type="text/markdown"`.
robots.txt + AI-specific extensions describing which automated agents you allow and how your content may be used.
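For illustration, a robots.txt with an explicit AI-crawler policy might look like the fragment below. GPTBot, Google-Extended, and ClaudeBot are real crawler tokens; the specific allow/block choices shown are only an example, not a recommendation:

```text
# Block training crawlers, welcome everything else.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```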