Check your website before AI crawlers and search engines do

The checker fetches your public homepage, markdown mirrors, /llms.txt, /robots.txt, and sitemap, then shows deterministic pass, warning, and error findings. It also follows at most one same-origin page link to see what a real LLM-style fetch would actually get. No score. No vague advice. Just the exact metadata, crawler, and thin-HTML issues that need fixing.
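One way a checker can detect a markdown mirror is a `<link rel="alternate">` tag on the homepage. The snippet below is a minimal sketch of that idea using Python's standard-library HTML parser; the exact `rel`/`type` convention (`type="text/markdown"`) is an assumption for illustration, not a documented contract of this checker.

```python
from html.parser import HTMLParser

class AlternateFinder(HTMLParser):
    """Collects hrefs of <link rel="alternate" type="text/markdown"> tags.

    Assumption: markdown mirrors are advertised with exactly this rel/type
    pair; a real checker may accept more variants (e.g. multi-valued rel).
    """

    def __init__(self) -> None:
        super().__init__()
        self.markdown_links: list[str] = []

    def handle_starttag(self, tag: str, attrs) -> None:
        a = dict(attrs)  # HTMLParser lowercases tag and attribute names
        if (
            tag == "link"
            and a.get("rel") == "alternate"
            and a.get("type") == "text/markdown"
            and a.get("href")
        ):
            self.markdown_links.append(a["href"])
```

Feeding the homepage HTML to `AlternateFinder().feed(html)` leaves any discovered markdown mirror paths in `markdown_links`.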

The checker inspects the site root first. Enter any public page URL and it will be normalized to the site root before metadata is fetched.
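The normalization step amounts to keeping only the scheme and host of whatever URL the user pasted. A minimal sketch with Python's standard library (the `https://` fallback for bare domains is an assumption):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_to_root(url: str) -> str:
    """Reduce any public page URL to its site root (scheme + host + "/").

    Bare inputs like "example.com/blog" get an assumed https:// scheme
    so urlsplit can recognize the host.
    """
    if "//" not in url:
        url = f"https://{url}"
    parts = urlsplit(url)
    return urlunsplit((parts.scheme or "https", parts.netloc, "/", "", ""))
```

So `normalize_to_root("https://example.com/blog/post?x=1")` yields `https://example.com/`, which is where the metadata fetches begin.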

From the root, the checker looks for homepage metadata, JSON-LD, markdown alternate links, llms.txt, robots.txt, sitemap discovery, and common AI crawler rules. It samples only one internal link, so it stays deterministic and does not flood the checked site.