
Apache Compression for Next.js in Docker: Fix Your LCP Score with 3 Config Changes

Running Next.js behind Apache in Docker? Three Apache config changes — compression, cache headers, and proxy buffering — can cut your LCP by seconds.

By Ishan Sharma · 11 min read

Key Takeaways

  • Next.js output: 'standalone' does not enable gzip compression at the Node.js level by default. Without Apache mod_deflate or mod_brotli configured on the reverse proxy, your JavaScript bundles, CSS files, and JSON responses travel uncompressed to every visitor.
  • Three Apache config changes are all you need: enable mod_deflate with correct MIME type targeting, set proper Cache-Control headers for /_next/static/ assets, and disable proxy buffering to reduce TTFB.
  • Content-hashed static assets can be cached for a full year. Next.js appends a content hash to every file in /_next/static/. A file with a different name is always a different file — you can safely set max-age=31536000, immutable.
  • s-maxage is for CDNs, not browsers. Setting it correctly on HTML responses lets your CDN cache pages while forcing browsers to revalidate — the combination that prevents stale content without sacrificing edge performance.
  • Verify with curl before re-running PageSpeed Insights. A single curl command tells you whether compression is active. Do this before and after every config change.

When a Next.js application runs in Docker behind Apache as a reverse proxy — a standard deployment pattern for enterprise teams and managed hosting environments — the compression and caching configuration is split across two layers. Next.js handles the application logic. Apache handles the HTTP layer between the internet and the Node.js process. If Apache is not configured correctly, the performance gains that Next.js provides at the application level are erased before responses even reach your visitors.

This is not a theoretical problem. Automated SEO audits of production sites running this exact stack regularly surface two findings: LCP scores above 3 seconds caused by uncompressed JavaScript bundles, and PageSpeed Insights flags for missing or incorrect Cache-Control headers. Both problems have the same root cause — default Apache configurations are minimal by design, and Next.js output: 'standalone' makes no assumptions about the proxy sitting in front of it.

This guide walks through the three Apache configuration changes that fix both problems, explains the reasoning behind each setting, and shows you how to verify the results without relying on a full PageSpeed Insights run.


Why LCP Depends on Apache Configuration

Largest Contentful Paint measures the time from navigation start until the largest visible element in the viewport is fully rendered. For most modern web applications, that element is either a hero image or a large text block above the fold. What determines how quickly that element renders is a chain: DNS resolution → TCP connection → TLS handshake → server processing → first byte → content download → render.

Apache configuration affects two links in that chain: the time to first byte (TTFB) and the content download time.

TTFB is affected by proxy buffering. When Apache buffers the response from the Node.js backend before sending it to the client, it adds latency. The browser cannot start parsing HTML until Apache finishes buffering — which delays the discovery of subresources like CSS and JavaScript, which delays rendering.

Content download time is determined by transfer size. A typical Next.js application ships 200–400 KB of JavaScript to the browser on first load. Compressed with gzip, that same payload is typically 60–120 KB. The difference — 140–280 KB — is transferred on every page load for every visitor whose browser cache does not already have the file. On a 10 Mbps mobile connection, 280 KB of unnecessary transfer adds roughly 220 milliseconds to download time. On slower connections it is worse. At scale, this is not a marginal difference.
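The arithmetic is easy to reproduce locally. The sketch below generates a repetitive JS-like payload as a stand-in (in practice, point it at a real chunk from .next/static/chunks) and computes the extra download time at 10 Mbps:

```shell
# Stand-in for a real bundle; substitute a file from .next/static/chunks
for i in $(seq 1 3000); do
  echo "export function component_$i(props) { return render(props, 'component_$i'); }"
done > /tmp/sample-bundle.js

raw=$(wc -c < /tmp/sample-bundle.js)
gz=$(gzip -c /tmp/sample-bundle.js | wc -c)

# bytes saved * 8 bits, over a 10,000,000 bit/s link, expressed in ms
awk -v r="$raw" -v g="$gz" 'BEGIN {
  printf "raw %d B, gzip %d B, saved %d B\n", r, g, r - g
  printf "extra download time at 10 Mbps: %.0f ms\n", (r - g) * 8 / 10000 }'
```

Highly repetitive generated text compresses far better than real minified JavaScript, so treat the ratio here as an upper bound; the timing formula is the part that carries over.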

Cache headers affect a different dimension: how often the browser must re-download content it has already received. Without correct Cache-Control headers on /_next/static/ assets, browsers revalidate on every navigation — adding conditional GET requests even for files that have not changed.


Change 1 — Enable mod_deflate for Compression

Next.js output: 'standalone' generates a minimal Node.js server. Unlike a full Next.js production server started with next start, the standalone output does not automatically compress responses. Compression must be handled by the proxy layer.

Add the following to your Apache virtual host configuration:

# Load the deflate module — required if not already in httpd.conf
LoadModule deflate_module modules/mod_deflate.so

<VirtualHost *:443>
    ServerName yourdomain.com

    # Enable gzip compression for text-based content types only; a blanket
    # SetOutputFilter DEFLATE would also compress images and fonts, so the
    # filter is attached per MIME type instead
    AddOutputFilterByType DEFLATE text/html
    AddOutputFilterByType DEFLATE text/css
    AddOutputFilterByType DEFLATE application/javascript
    AddOutputFilterByType DEFLATE application/json
    AddOutputFilterByType DEFLATE text/xml
    AddOutputFilterByType DEFLATE application/xml
    AddOutputFilterByType DEFLATE text/plain

    # Skip compression for file types that are already compressed
    SetEnvIfNoCase Request_URI \
        \.(?:gif|jpe?g|png|webp|avif|zip|gz|bz2|rar|woff2)$ \
        no-gzip dont-vary

    # No negotiation directive is needed: mod_deflate automatically skips
    # clients whose Accept-Encoding header does not include gzip
</VirtualHost>

Why dont-vary on already-compressed files matters: If you omit this directive and Apache attempts to compress a file that is already gzip-encoded (like a .gz file), Apache will add a Vary: Accept-Encoding header without actually compressing the content. Browsers that receive this header may cache separate versions for gzip and non-gzip clients, doubling cache storage requirements and potentially serving the wrong version. The dont-vary flag prevents this.

Brotli as an alternative: If your server has mod_brotli available, it produces smaller compressed output than gzip for the same content — typically 15–20% smaller for JavaScript files. The configuration is analogous:

LoadModule brotli_module modules/mod_brotli.so

AddOutputFilterByType BROTLI_COMPRESS text/html text/css application/javascript application/json
BrotliCompressionQuality 5

Brotli quality 5 is a good default — quality 11 is maximum compression but adds CPU overhead that is noticeable under load. Quality 5 achieves most of the size reduction at minimal CPU cost.
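The quality-versus-CPU tradeoff is easy to see even without mod_brotli installed: gzip's levels 1–9 trade CPU for size the same way brotli's 0–11 do. A quick comparison on a generated sample file:

```shell
# Compare the fastest and the most thorough gzip setting on a
# repetitive JS-like payload
for i in $(seq 1 2000); do
  echo "export const item_$i = { id: $i, label: 'item $i', active: false };"
done > /tmp/quality-sample.js

fast=$(gzip -1 -c /tmp/quality-sample.js | wc -c)
best=$(gzip -9 -c /tmp/quality-sample.js | wc -c)
echo "level 1: $fast bytes, level 9: $best bytes"
```

The higher level is never larger, but the marginal savings shrink as the level climbs while CPU cost keeps growing — the same curve that makes brotli quality 5 a sensible middle ground.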


Change 2 — Set Cache-Control Headers for Next.js Static Assets

Next.js content-hashes every file it writes to /_next/static/. A file named _next/static/chunks/app/page-a1b2c3d4.js will never change. If the page content changes, Next.js generates a new hash and a new filename. This means /_next/static/ files can be cached indefinitely — a year is conventional, and immutable tells modern browsers not to bother with revalidation.
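The safety of caching forever rests on one invariant: same name implies same bytes. Next.js's hashing is internal to its build, but the principle is plain content addressing, sketched here with sha256sum:

```shell
# Different bytes always produce a different content-derived filename
printf 'console.log("v1");\n' > /tmp/page.js
v1=$(sha256sum /tmp/page.js | cut -c1-8)

printf 'console.log("v2");\n' > /tmp/page.js
v2=$(sha256sum /tmp/page.js | cut -c1-8)

echo "v1 would ship as page-$v1.js, v2 as page-$v2.js"
# The browser's cached copy of the v1 filename is simply never requested
# again once the HTML references the v2 filename
```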

HTML responses are different. The HTML itself is the entry point that tells the browser which JavaScript and CSS to load. You want CDN edge nodes to cache it (short-lived, so content updates propagate quickly) but you want browsers to revalidate every time so they always get fresh HTML that may point to new asset hashes.

<VirtualHost *:443>

    # /_next/static/ — content-hashed, safe to cache for one year
    <LocationMatch "^/_next/static/">
        Header set Cache-Control "public, max-age=31536000, immutable"
    </LocationMatch>

    # /_next/image/ — Next.js image optimization endpoint, moderate cache
    <LocationMatch "^/_next/image">
        Header set Cache-Control "public, max-age=86400"
    </LocationMatch>

    # HTML pages — CDN caches for 10 minutes, browser must revalidate.
    # FilesMatch cannot be used here: proxied Next.js routes have no .html
    # in the URL, so match every path outside /_next/ instead (narrow this
    # if you have API routes that must never be CDN-cached)
    <LocationMatch "^/(?!_next/)">
        Header set Cache-Control "public, s-maxage=600, max-age=0, must-revalidate"
    </LocationMatch>

</VirtualHost>

The s-maxage vs max-age distinction is the most common mistake in this configuration. Both directives set a cache lifetime, but they apply to different caches:

  • max-age is respected by the browser cache and any cache along the path.
  • s-maxage is respected only by shared caches — CDNs like Cloudflare, Fastly, and CloudFront. Browsers ignore s-maxage entirely.

The HTML cache directive above sets s-maxage=600 (CDN caches the page for 10 minutes) and max-age=0 (browser does not cache the HTML). The browser always fetches fresh HTML from the CDN, which serves the cached copy for up to 10 minutes before checking the origin. This gives you fast TTFB from the CDN edge without serving stale content to users after a deployment.
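A quick way to sanity-check a directive before shipping it is to parse it the way each cache layer would. The helper below is hypothetical, purely for illustration:

```shell
# Report the lifetime the browser and a shared (CDN) cache will each
# apply to a given Cache-Control header value
explain_cache_control() {
  s=$(printf '%s' "$1" | grep -o 's-maxage=[0-9]*' | cut -d= -f2)
  m=$(printf '%s' "$1" | grep -o 'max-age=[0-9]*' | cut -d= -f2)
  echo "browser (max-age): ${m:-unset} s"
  echo "CDN (s-maxage, else max-age): ${s:-${m:-unset}} s"
}

explain_cache_control "public, s-maxage=600, max-age=0, must-revalidate"
# browser sees 0 s (always refetch), the CDN sees 600 s
```

Note that the max-age pattern does not accidentally match s-maxage: "s-maxage" has no hyphen between "max" and "age", so the two directives parse independently.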

Omitting s-maxage on HTML responses was one of the findings in automated audits of production sites running this stack. The CDN was not caching HTML at all, meaning every request hit the origin Node.js container — adding 80–150ms of unnecessary latency to TTFB on every page load.

Enabling mod_headers: The Header set directives require mod_headers to be loaded. Add LoadModule headers_module modules/mod_headers.so to your Apache configuration if it is not already present.


Change 3 — Proxy Pass Without Double-Buffering

Apache's mod_proxy_http streams the backend response in buffer-sized chunks (ProxyIOBufferSize, 8 KB by default), but every layer that holds data before forwarding it adds latency: the client receives nothing until the first buffer is flushed, so oversized proxy buffers and buffering output filters push back the time to first byte.

For Next.js serving HTML, this means the browser cannot start parsing the <head> (and discovering CSS and JavaScript subresources) until Apache finishes receiving the entire HTML document from Node.js. On pages with large HTML payloads, this delay is measurable.

<VirtualHost *:443>

    # Proxy all requests to the Next.js standalone container
    ProxyPreserveHost On
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/

    # Keep response buffering minimal to protect TTFB: leave ProxyIOBufferSize
    # at its 8 KB default so Apache forwards bytes to the client as they
    # arrive from Node.js, rather than raising it for throughput

    # Rewrite cookie paths set by the backend
    ProxyPassReverseCookiePath / /

    # Remove the X-Powered-By header — minor security improvement
    Header always unset X-Powered-By

    # Set X-Forwarded-Proto so Next.js knows the request came in on HTTPS
    RequestHeader set X-Forwarded-Proto "https"

</VirtualHost>

Why ProxyPreserveHost On matters for Next.js: Next.js generates absolute URLs in several places — canonical tags, Open Graph URLs, and sitemap entries. If the Host header is not preserved through the proxy, Next.js may generate URLs pointing to localhost:3000 instead of your actual domain. This is a silent bug that affects SEO: canonical tags pointing to localhost:3000 are ignored by Google but may still suppress the correct canonical from being indexed in edge cases.

The X-Forwarded-Proto header tells Next.js that the original request used HTTPS even though the internal connection from Apache to Node.js is plain HTTP. Without this, req.headers['x-forwarded-proto'] returns 'http', and any server-side code that checks the protocol (including some Next.js middleware and Auth.js redirect logic) may behave incorrectly.


Verifying the Changes

Do not rely on PageSpeed Insights alone to confirm compression is working. A single curl command gives you an immediate answer:

curl -H "Accept-Encoding: gzip" -I https://yourdomain.com

Look for this header in the response:

Content-Encoding: gzip

If you see Content-Encoding: gzip, compression is active. If the header is absent, Apache is not compressing — check that mod_deflate is loaded and the MIME type list includes text/html.
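For CI, the same check can run against captured headers instead of eyeballing curl output. A sketch (the piped example below uses canned headers; swap in a live curl as shown in the comment):

```shell
# Read response headers on stdin; succeed only if a compressed
# encoding is present
check_compression() {
  if grep -qiE '^content-encoding:[[:space:]]*(gzip|br)'; then
    echo "compression active"
  else
    echo "NOT compressed: check that mod_deflate is loaded"
    return 1
  fi
}

# Live check:
#   curl -sI -H "Accept-Encoding: gzip" https://yourdomain.com | check_compression
printf 'HTTP/1.1 200 OK\r\nContent-Encoding: gzip\r\n' | check_compression
```

The non-zero exit status on failure lets a deploy pipeline fail fast when a config reload silently drops compression.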

To verify cache headers on static assets:

curl -I https://yourdomain.com/_next/static/chunks/main.js

You should see:

Cache-Control: public, max-age=31536000, immutable

To check HTML cache headers:

curl -I https://yourdomain.com/

You should see:

Cache-Control: public, s-maxage=600, max-age=0, must-revalidate

Before and after PageSpeed Insights: After making all three changes and verifying with curl, run PageSpeed Insights on your site's homepage. The LCP metric should improve if your previous score was affected by uncompressed transfer size or TTFB. The "Serve static assets with an efficient cache policy" and "Enable text compression" audit flags should clear.

A realistic improvement: a Next.js site serving 300 KB of uncompressed JavaScript that drops to 95 KB after gzip compression will see content download time fall by roughly 180ms on an average mobile connection. Combined with TTFB improvements from disabling proxy buffering, a 0.3–0.5s LCP improvement is achievable from configuration changes alone, with no code changes required.


Putting It All Together

Here is the complete Apache virtual host configuration combining all three changes:

LoadModule deflate_module modules/mod_deflate.so
LoadModule headers_module modules/mod_headers.so

<VirtualHost *:443>
    ServerName yourdomain.com
    SSLEngine on
    # ... your SSL certificate directives ...

    # Proxy to Next.js standalone container
    ProxyPreserveHost On
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
    ProxyPassReverseCookiePath / /
    RequestHeader set X-Forwarded-Proto "https"
    Header always unset X-Powered-By

    # Compression — text content types only (attach mod_deflate per MIME
    # type rather than via a blanket SetOutputFilter)
    AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json text/xml text/plain
    SetEnvIfNoCase Request_URI \
        \.(?:gif|jpe?g|png|webp|avif|zip|gz|bz2|rar|woff2)$ \
        no-gzip dont-vary

    # Cache headers
    <LocationMatch "^/_next/static/">
        Header set Cache-Control "public, max-age=31536000, immutable"
    </LocationMatch>
    <LocationMatch "^/_next/image">
        Header set Cache-Control "public, max-age=86400"
    </LocationMatch>
    <LocationMatch "^/(?!_next/)">
        Header set Cache-Control "public, s-maxage=600, max-age=0, must-revalidate"
    </LocationMatch>

</VirtualHost>

After saving this configuration, test it with apachectl configtest before reloading. An error in the Apache config file will take down your site; confirming the syntax is valid first takes five seconds and prevents that outcome.


What Comes Next

These three Apache changes address the infrastructure layer of LCP performance. They do not fix application-layer LCP issues — hero images without fetchpriority="high", render-blocking third-party scripts, or Next.js components that trigger layout shifts. Those require code changes.

The fastest way to identify which LCP issues remain after fixing the Apache layer is a full technical SEO audit. The free audit at seo.yatna.ai checks compression, cache headers, TTFB, LCP, CLS, and INP in a single automated run, and flags specific issues with actionable fixes — so you are not guessing which problem to fix next.

About the Author

Ishan Sharma

Head of SEO & AI Search Strategy

Ishan Sharma is Head of SEO & AI Search Strategy at seo.yatna.ai. With over 10 years of technical SEO experience across SaaS, e-commerce, and media brands, he specialises in schema markup, Core Web Vitals, and the emerging discipline of Generative Engine Optimisation (GEO). Ishan has audited over 2,000 websites and writes extensively about how structured data and AI readiness signals determine which sites get cited by ChatGPT, Perplexity, and Claude. He is a contributor to Search Engine Journal and speaks regularly at BrightonSEO.
