If your site is “fast sometimes,” caching is usually the reason. You already paid for CPU, RAM, and bandwidth. The slow part is often repeated work: rebuilding the same HTML, re-reading the same files, or re-running the same database queries for every visit. Caching cuts that repetition. Misconfigured caching, though, is how you get stale pages, broken carts, and users seeing each other’s sessions.
This is a practical guide to configure caching in the places that actually matter: the edge (CDN), the web server, the app layer, and the database. You do not need all of them. You do need them aligned.
Caching is just storing an output so you can serve it again without recomputing it. On a typical hosted site, there are four common layers:
A CDN cache stores responses close to visitors. A server cache stores generated pages or fragments on the host. An app cache stores expensive results (objects, query results, computed data). A browser cache stores static files (CSS, JS, images) on the visitor’s device.
The trade-off is always the same: the more you cache, the less fresh things are. Your job is to decide what can be stale for 10 minutes, what must be live per request, and what must never be cached.
Before you configure any layer, map your routes. It prevents the classic failures.
Identify routes that should not be cached by shared caches (CDN, reverse proxy, page cache). Typical examples: `/wp-admin/`, `/wp-login.php`, `/cart`, `/checkout`, `/my-account`, `/api/`, any page that shows per-user data, and any response that sets cookies.
Also identify routes that can be cached aggressively: the homepage (often), marketing pages, blog posts, product listing pages (sometimes), and static assets.
If you run ecommerce or membership, assume you need caching with exclusions. If you run a static or mostly-public site, assume you can cache hard.
Start at the edge and move inward. Each layer should respect the one outside it.
Browser caching for static assets is the safest win because it does not affect HTML personalization. Configure long cache lifetimes for versioned files like `app.3f2c1.js` or `style.91a0.css`. If the filename changes when content changes, you can cache it for a long time.
On Apache, you typically handle this with `mod_expires` and `Cache-Control` headers. On Nginx, you set headers inside a `location` block for file extensions. The goal is consistent headers like `Cache-Control: public, max-age=31536000, immutable` for assets that are fingerprinted.
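On Nginx, a minimal sketch looks like this (the extension list is an assumption; it presumes all matched files are fingerprinted):

```nginx
# Long-lived caching for fingerprinted static assets (e.g. app.3f2c1.js).
# If any of these extensions are served without version hashes, shorten max-age.
location ~* \.(?:css|js|woff2|png|jpg|jpeg|gif|svg)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
    access_log off;  # optional: skip logging high-volume asset hits
}
```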
The “it depends” part: if your filenames are not versioned, do not set a year-long max-age. You will ship updates and users will keep old files. Fix versioning first or keep max-age short (hours to a day).
If you already use a CDN, good. If you do not, the decision comes down to where your users are and how heavy your pages are. A CDN helps most with global audiences, large images, and spikes.
Configure two rulesets: one for static assets (cache long), one for HTML (cache carefully or not at all).
For static assets, cache by URL and ignore cookies. For HTML, do one of these:
If your site is mostly public content, cache HTML at the edge with a short TTL (like 1 to 10 minutes) and purge on deploy or publish.
If your site is personalized or transactional, do not cache HTML at the edge by default. Cache only specific public paths.
Make sure the CDN respects origin headers when you intend it to. If the CDN has “cache everything” toggles, use them only with explicit exclusions for login, cart, checkout, and admin.
Full-page caching is where you see the biggest TTFB improvement, and also where you can break the most.
On common stacks, full-page caching is implemented via a reverse proxy cache (like Nginx FastCGI cache), a hosting layer cache, or an application plugin (WordPress page cache, framework middleware).
The core rules are boring but strict:
Cache only GET/HEAD requests. Do not cache POST.
Do not cache responses that set a session cookie, or requests that include an auth cookie.
Bypass caching for known sensitive paths.
Vary on what matters. If you serve different HTML by device, locale, or currency, you need correct variation. If you vary incorrectly, you can serve the wrong content to the wrong user.
If you are on Nginx with PHP-FPM, FastCGI cache is common. A typical approach is to define a cache zone, build a cache key, and set conditions to skip cache when cookies indicate login or cart state. Then add headers like `X-Cache: HIT|MISS` so you can verify behavior.
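A condensed sketch of that approach, with WordPress/WooCommerce cookie names as assumptions (adjust the zone name, paths, and cookie patterns to your app):

```nginx
# In the http context: define the cache zone on disk.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PAGECACHE:100m
                   inactive=60m max_size=1g;

server {
    set $skip_cache 0;

    # Never cache POSTs, logged-in users, or active carts.
    if ($request_method = POST) { set $skip_cache 1; }
    if ($http_cookie ~* "wordpress_logged_in|woocommerce_items_in_cart") {
        set $skip_cache 1;
    }
    if ($request_uri ~* "^/(wp-admin|cart|checkout|my-account)") {
        set $skip_cache 1;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        fastcgi_cache PAGECACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 10m;
        fastcgi_cache_bypass $skip_cache;  # skip the cache lookup
        fastcgi_no_cache $skip_cache;      # skip storing the response
        add_header X-Cache $upstream_cache_status;  # HIT/MISS/BYPASS for verification
    }
}
```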
If you are on WordPress, do not stack two full-page caches unless you know exactly how they interact. A CDN caching HTML plus a server page cache plus a plugin cache can work, but it can also create purge confusion. Pick one primary HTML cache layer and make the others either pass-through or strictly controlled.
Object caching is for expensive repeat work inside the app: database query results, computed fragments, API call results. It helps even when you cannot cache HTML.
For WordPress, this usually means Redis or Memcached with a persistent object cache. For Laravel, Django, Rails, or Node apps, it’s the same idea: cache common queries and computed data with TTLs.
Object caching trade-offs are more subtle: stale data bugs can show up as “why didn’t my change apply?” Keep TTLs short for frequently updated content, and use explicit cache invalidation when you can.
If you do ecommerce, object caching can help product pages and category filters without caching the entire HTML for logged-in users.
Most performance problems blamed on “the database” are really missing indexes or unbounded queries. Database-level caching exists (buffer pools, query caches in some engines), but you typically get better ROI by fixing queries and adding app-level caching.
If you have control, tune the database buffer pool (InnoDB) so hot data stays in memory. If you don’t control it (managed hosting), focus on reducing query count and caching repeat results at the app layer.
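If you do have that control on MySQL/MariaDB, the relevant knob is the InnoDB buffer pool (the size below is illustrative; a common starting point on a dedicated database host is 50 to 70 percent of RAM):

```ini
# my.cnf sketch: keep hot data in memory
[mysqld]
innodb_buffer_pool_size = 2G
```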
If you only do one technical check, do this: confirm your cache headers match your intent.
For public HTML you want cached, you generally want `Cache-Control: public` and a reasonable `max-age` or `s-maxage` for shared caches. If you want browsers to revalidate but still get CDN caching, you can use `s-maxage` for CDN and a smaller `max-age` for browsers.
For private or personalized content, use `Cache-Control: no-store` (nothing may store it) or at least `private, max-age=0, must-revalidate`. `private` tells shared caches not to store the response; `no-store` forbids storing it anywhere.
For authenticated areas, be careful with `Vary: Cookie`. It can explode your cache key cardinality and crush hit rates. Better: bypass caching when auth cookies are present.
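Those header intents look like this on Nginx (paths are placeholders; match them to your route map):

```nginx
# Public HTML: browsers revalidate after a minute, shared caches keep it 10 minutes.
location /blog/ {
    add_header Cache-Control "public, max-age=60, s-maxage=600";
}

# Personalized content: never stored, by any cache.
location /my-account/ {
    add_header Cache-Control "no-store";
}
```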
After changes, verify caching behavior with repeatable checks.
Hit the same URL multiple times and confirm whether the cache transitions from MISS to HIT. Use response headers you control (`X-Cache`, `X-Cache-Status`, `Age`) to see what layer is serving.
Test these scenarios explicitly: logged out homepage, logged in dashboard, add-to-cart, checkout, and any page that displays user-specific info. If any authenticated page returns cache HIT from a shared cache, treat it as a failure.
Also test invalidation. Publish a change, purge what you expect to purge, and confirm visitors see the new version within the intended window.
Stale HTML after deploy usually means your purge process is missing a path pattern, or you are caching HTML in two layers and only purging one. Fix by choosing a single source of truth for HTML caching and making purges target that layer.
Broken logins and carts usually mean you cached pages that set cookies, or you didn't bypass the cache when auth or cart cookies exist. Fix by adding bypass rules and verifying with real flows, not just status codes.
Low cache hit rate usually means you vary on too many things (cookies, query strings, headers) or your TTL is too short. Fix by normalizing URLs, ignoring irrelevant query parameters at the CDN, and limiting variation.
If you want the smallest configuration that still moves the needle, do browser caching for static assets plus either a CDN for static content or a server page cache for public HTML. If you run dynamic, logged-in experiences, add object caching before you try to cache personalized HTML.
If your hosting setup is designed to route quickly and keep the front door light, align caching with that mindset. Keep rules explicit, keep bypass paths short and obvious, and add only the layers you can observe and purge. If you’re using a routing-style gateway domain for hosting operations, keep the operational surface minimal and fast – like turbo.host does.
Caching is not a feature you “turn on.” It’s a contract: what can be reused, for how long, and under what conditions. Write that contract down as a few clear rules, then enforce it in headers and bypass logic. Your future self will thank you the next time you push a change five minutes before a traffic spike.