Run a free Gatsby SEO audit. Catch deprecated Helmet configs, sitemap gaps, hydration CLS, missing AI crawler rules, and schema issues across every build.
Run Free Audit → No credit card required · Results in under 2 minutes
Gatsby pioneered the modern static site with React, and for years it was the default choice for performance-conscious developers building content sites. But the Gatsby ecosystem has evolved significantly — and not always in ways that are easy to track. The gatsby-plugin-react-helmet approach to metadata is deprecated in favor of the new Gatsby Head API, but countless production sites still run on the old pattern. Sitemap generation requires explicit plugin configuration that many projects skip. And Gatsby's React foundation introduces hydration-related CLS issues that only show up at scale, not in a single-page Lighthouse test.
seo.yatna.ai crawls your Gatsby site the way Google and AI assistants do — following rendered HTML across every page, checking meta tags, validating JSON-LD schema, and scoring across 7 weighted categories. You get a site-wide SEO picture, not a homepage snapshot.
gatsby-plugin-react-helmet deprecated — Gatsby Head API needs setup — The react-helmet approach to injecting <head> content was deprecated when the Gatsby Head API shipped in Gatsby 4.19 and is no longer the recommended pattern. Sites still using it may encounter rendering inconsistencies where meta tags don't appear in server-rendered HTML, making them invisible to crawlers. Migrating to the Gatsby Head API fixes this, but the migration is non-trivial for sites with many templates.
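The migration is mechanical once you see the shape of it. A minimal sketch of a Head export, assuming a blog-post template whose page query exposes hypothetical `title` and `description` fields:

```jsx
// src/templates/blog-post.js — Gatsby Head API sketch (Gatsby 4.19+).
// Replaces a <Helmet> block from gatsby-plugin-react-helmet; the
// `data.post` fields are hypothetical and depend on your page query.
export function Head({ data }) {
  const { title, description } = data.post
  return (
    <>
      <title>{title}</title>
      <meta name="description" content={description} />
      <meta property="og:title" content={title} />
      <meta property="og:description" content={description} />
    </>
  )
}
```

Unlike react-helmet, the Head export is evaluated during the build, so these tags reliably appear in the server-rendered HTML that crawlers see.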
Dynamic routes not generating sitemap entries — gatsby-plugin-sitemap generates sitemaps from your GraphQL page data, but it only includes pages that were built during gatsby build. Dynamic routes that depend on runtime data, pages generated outside the GraphQL layer, or routes added after the last build won't appear in the sitemap — and Google won't crawl them unless it finds an inbound link.
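If you need to surface routes the GraphQL layer doesn't know about, gatsby-plugin-sitemap (v5+) lets you merge them in through its `resolvePages` and `serialize` options. A sketch, with a hypothetical `/app/pricing/` runtime route:

```javascript
// gatsby-config.js — sketch; the extra route is hypothetical.
module.exports = {
  siteMetadata: { siteUrl: `https://example.com` },
  plugins: [
    {
      resolve: `gatsby-plugin-sitemap`,
      options: {
        // Merge routes the GraphQL page data doesn't include
        resolvePages: ({ allSitePage: { nodes } }) => [
          ...nodes,
          { path: `/app/pricing/` }, // dynamic route added by hand
        ],
        serialize: ({ path }) => ({ url: path }),
      },
    },
  ],
}
```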
Client-side routing creating crawlability gaps — Gatsby handles client-side navigation with Reach Router (@reach/router), not React Router, so route transitions happen entirely in JavaScript. Gatsby's Link component does emit real anchor tags in the static HTML, but links that render only after client-side JavaScript runs are invisible to a crawler that doesn't fully execute the bundle. Pages reachable only through such links may never be indexed.
React hydration issues causing CLS — When Gatsby hydrates its static HTML with React on the client, layout shifts can occur if the server-rendered DOM doesn't exactly match what React renders on first hydration. This is a common source of CLS failures that are invisible in static analysis but appear clearly in field data from real users.
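The classic trigger is branching on browser-only globals during render. A sketch of the anti-pattern and the usual fix (the component name is illustrative), which keeps the first client render identical to the server HTML:

```jsx
import * as React from "react"

// Anti-pattern: a `typeof window` branch makes the server HTML and the
// first client render disagree, so hydration repaints and layout shifts:
//   const width = typeof window !== "undefined" ? window.innerWidth : 0

// Fix: render the same placeholder on both passes, update after mount,
// and reserve the element's space so the update can't shift layout.
export function ViewportBadge() {
  const [width, setWidth] = React.useState(null) // null on server AND first client render
  React.useEffect(() => {
    setWidth(window.innerWidth) // runs only in the browser, after hydration
  }, [])
  return <div style={{ minHeight: 24 }}>{width ?? "…"}</div>
}
```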
gatsby-plugin-image requires explicit width/height — The modern Gatsby Image plugin requires explicit width and height props or a layout mode that infers dimensions. Sites migrated from the older gatsby-image package sometimes have images missing these attributes, causing layout shift and failing the Images audit category.
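A sketch of the fix, with a hypothetical image path and dimensions; explicit width/height (or a dimension-inferring layout mode) lets Gatsby reserve the image's space before it loads:

```jsx
import { StaticImage } from "gatsby-plugin-image"

// Explicit dimensions let Gatsby emit width/height and an aspect-ratio
// placeholder in the static HTML, so the page doesn't shift on load.
export function Hero() {
  return (
    <StaticImage
      src="../images/hero.jpg" // hypothetical path
      alt="Product hero"
      layout="constrained"
      width={1200}
      height={630}
    />
  )
}
```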
Default gatsby-plugin-robots-txt doesn't include AI crawlers — The standard gatsby-plugin-robots-txt configuration generates a minimal file: User-agent: * Disallow:. This allows all crawlers but provides no explicit directives for GPTBot, ClaudeBot, PerplexityBot, or Amazonbot. As AI-generated search answers become a meaningful traffic channel, that leaves you with no per-crawler control: you can't selectively allow or block the AI bots whose crawls determine whether your content surfaces in AI answers.
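The plugin's `policy` option makes per-crawler rules explicit. A sketch that allows the major AI crawlers; the host and sitemap values are illustrative:

```javascript
// gatsby-config.js — sketch; swap `allow: "/"` for `disallow: "/"` on
// any bot you would rather keep out.
module.exports = {
  plugins: [
    {
      resolve: `gatsby-plugin-robots-txt`,
      options: {
        host: `https://example.com`,
        sitemap: `https://example.com/sitemap-index.xml`,
        policy: [
          { userAgent: `*`, allow: `/` },
          { userAgent: `GPTBot`, allow: `/` },
          { userAgent: `ClaudeBot`, allow: `/` },
          { userAgent: `PerplexityBot`, allow: `/` },
          { userAgent: `Amazonbot`, allow: `/` },
        ],
      },
    },
  ],
}
```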
JSON-LD schema must be manually added to each template — Gatsby has no built-in schema generation. Article schema, Organization schema, BreadcrumbList, and FAQ markup all require manual JSON-LD blocks in each relevant page template. Sites with multiple templates (blog, product, landing page) often have schema on some templates and nothing on others.
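One way to keep templates consistent is to centralize schema construction in a plain helper and render it from each template's head. A minimal sketch; the helper name and field names are illustrative, not a Gatsby API:

```javascript
// src/utils/schema.js — sketch of a shared JSON-LD builder.
function buildArticleSchema({ title, author, datePublished, url }) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: title,
    author: { "@type": "Person", name: author },
    datePublished,
    mainEntityOfPage: url,
  }
}

// In a template's head, render it as:
//   <script type="application/ld+json">
//     {JSON.stringify(buildArticleSchema(post))}
//   </script>
```

Because every template calls the same builder, adding a new template can't silently ship without schema the way hand-pasted JSON-LD blocks can.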
No llms.txt out of the box — llms.txt tells AI assistants how to accurately represent your site. Gatsby has no native support for generating this file. Without it, AI assistants synthesize a description of your site from crawled content — which may be outdated, incomplete, or focused on the wrong pages.
Build-time rendering means dynamic content missed at crawl time — Gatsby's SSG model renders pages at build time. Any content that's fetched client-side after the initial render — personalized sections, A/B test variants, data from third-party APIs — is invisible in the static HTML that crawlers evaluate. If this content contains important keywords or structured data, it's simply not there at crawl time.
Canonical tags generated but may conflict with multiple URL patterns — Gatsby generates canonical tags based on the page path from the build. If your site has pages accessible via multiple patterns (for example, both /blog/post-title/ and /blog/post-title resolving to the same page), the auto-generated canonical may not consistently point to the preferred version — splitting link equity between the variants.
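The fix is to normalize every path to one preferred form before the canonical tag is emitted. A sketch that standardizes on trailing slashes; the siteUrl is illustrative:

```javascript
// Normalize to a single canonical form (trailing slash, absolute URL)
// before rendering <link rel="canonical"> in a head block.
function canonicalUrl(siteUrl, pagePath) {
  const withSlash = pagePath.endsWith("/") ? pagePath : `${pagePath}/`
  return new URL(withSlash, siteUrl).toString()
}

// Both variants now resolve to the same canonical:
//   canonicalUrl("https://example.com", "/blog/post-title")
//   canonicalUrl("https://example.com", "/blog/post-title/")
```

Gatsby's top-level `trailingSlash` config option ("always" or "never") attacks the same problem at the routing layer, so the two approaches should agree on which variant is preferred.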
seo.yatna.ai scores your Gatsby site across 7 weighted categories. The AI Readiness category checks robots.txt rules for GPTBot/ClaudeBot, llms.txt presence, and schema structured for AI citation.

A typical Gatsby site audited on seo.yatna.ai returns results like this:
| Category | Score | Key Finding |
|---|---|---|
| E-E-A-T | 55/100 | Author schema missing on blog templates |
| Technical SEO | 61/100 | 7 dynamic route pages missing from sitemap |
| On-Page SEO | 69/100 | 5 pages with duplicate title tags |
| Schema | 35/100 | JSON-LD only on homepage, not blog posts |
| Performance | 66/100 | Hydration CLS on product listing pages |
| AI Readiness | 25/100 | No AI crawler directives; no llms.txt |
| Images | 60/100 | 9 images missing explicit width/height |
| Overall | 57/100 | 18 actionable issues found |
Each finding links to the specific page where the issue was detected, with Gatsby-specific remediation guidance — including which plugin configuration to update.
Does this audit work with Gatsby sites using Contentful, Sanity, or other headless CMSes? Yes. seo.yatna.ai crawls the rendered HTML output of your live site. It doesn't matter where your content comes from — only what the built pages look like to Google and AI crawlers.
How do I know if my site is still using react-helmet?
The audit checks the server-rendered HTML structure of your <head> elements. If you're on a deprecated pattern, the meta tags may appear inconsistently or be absent from server-rendered responses. Our report flags this specifically when detected.
Is Gatsby still worth using for SEO in 2026? Gatsby's SSG output is still excellent for SEO fundamentals — fast static HTML, easily crawlable pages, good performance baseline. The issues are at the configuration and plugin layer, not the architecture. Our audit tells you exactly what to fix to make your existing Gatsby site competitive.
My Gatsby build is slow. Does that affect SEO? Build time doesn't affect SEO — only what gets served to crawlers and users. But a slow build often means stale content, which can mean outdated pages in Google's index. The audit checks what's live, not how long it took to get there.
Gatsby's static output is a solid SEO foundation. The audit identifies the plugin gaps, deprecated patterns, and AI readiness issues sitting between your current score and where you need to be.
7 AI agents. 7 audit categories. One score. Free for your first audit.
Run Free Audit →