
ISR: The Best of Both Worlds in Rendering

· 22 min read
Evan Carter
Senior frontend engineer

TLDR:


ISR gives you static speed with controlled freshness: pre-rendered pages plus revalidate windows and on-demand invalidation. This guide covers how ISR works in Next.js, how to configure it in the Pages and App Router, when to use time-based vs tag-based revalidation, and how Feature-Sliced Design keeps ISR-driven codebases modular and maintainable.

Incremental static regeneration reduces the classic Next.js trade-off: you want static site generation speed, but you also need dynamic data and content freshness without constant rebuilds. By combining stale-while-revalidate caching with controlled regeneration, ISR delivers fast TTFB and predictable scaling. When paired with Feature-Sliced Design from feature-sliced.design, it’s also easier to keep large codebases modular, cohesive, and refactor-friendly.

Table of contents

  1. Incremental static regeneration in Next.js and why it matters
  2. Rendering spectrum from CSR to SSR to SSG to ISR
  3. How ISR works: revalidate, cache TTL, and stale-while-revalidate
  4. Configuring ISR in the Pages Router with getStaticProps
  5. Configuring ISR in the App Router with segment revalidate and cache tags
  6. Choosing the right revalidation strategy: time-based, on-demand, and tag-based
  7. Best ISR use cases: e-commerce, blogs, docs, and large catalogs
  8. Operational tuning: CDN layers, headers, observability, and failure modes
  9. Scaling maintainability: using ISR with Feature-Sliced Design
  10. Conclusion

Incremental static regeneration in Next.js and why it matters

A key principle in software engineering is optimizing for change: users want fast pages now, product owners want content updates anytime, and teams want an architecture that stays clean under growth. Incremental static regeneration is one of the rare rendering strategies that directly supports all three.

What incremental static regeneration actually does

ISR is a rendering mode where a page is pre-rendered to static HTML (like SSG), but can be regenerated after deployment without rebuilding the whole application.

Conceptually:

  • The first version of the page is rendered ahead of time or on first access.
  • The output is cached (often at the edge/CDN).
  • After a configured interval, the next request triggers a regeneration workflow.
  • Users keep getting a fast response while regeneration happens in the background.

Next.js frames ISR as a way to update static content without rebuilding, reduce server load by serving prerendered pages, and automatically apply correct cache-control headers.

Why teams adopt ISR instead of “pure” static or “pure” server rendering

Most teams start with one of two instincts:

  • SSG everywhere for speed and simplicity.
  • SSR everywhere for freshness and correctness.

Both extremes hurt at scale.

SSG-only pain points:

  • Big sites (thousands to millions of pages) make next build slow.
  • Any content change triggers a full rebuild (or an expensive partial workaround).
  • Content teams feel blocked, and engineering becomes an ops team for publishing.

SSR-only pain points:

  • Every request costs compute.
  • Tail latency grows under load.
  • You start rebuilding caching logic page-by-page anyway.

Incremental static regeneration fits the middle: you get static performance characteristics for most traffic and controlled freshness windows for content updates. In practice, ISR becomes a performance lever (lower TTFB, improved LCP), a cost lever (fewer origin renders), and a workflow lever (content updates without deploy pressure).

Mental model: ISR is SSG with a cache invalidation strategy that is native to Next.js.


Rendering spectrum from CSR to SSR to SSG to ISR

To choose ISR confidently, it helps to place it on the rendering spectrum and understand the core trade-offs: latency, freshness, scalability, and complexity.

The core methods in one comparison

| Rendering approach | What you optimize for | What you trade away |
| --- | --- | --- |
| CSR (client-side rendering) | Fast server, dynamic UI, simple hosting | Slow first meaningful paint, SEO complexity, heavy JS/hydration |
| SSR (server-side rendering) | Fresh data per request, correct personalization | Higher server load, higher TTFB, cost and scaling complexity |
| SSG (static site generation) | CDN-fast pages, simple runtime, stable performance | Rebuilds for updates, long build times for large sites |
| ISR (incremental static regeneration) | CDN-fast responses with controlled freshness | Cache complexity, stale windows, careful invalidation required |

Streaming SSR and React Server Components add nuance, but the caching question remains: when do you recompute HTML and data? ISR’s answer is “not on every request, but also not only on deploy.”

Where stale-while-revalidate fits in modern web performance

The phrase stale-while-revalidate comes from HTTP caching semantics: a cache can serve a stale response while it refreshes in the background. RFC 5861 defines stale-while-revalidate as an HTTP Cache-Control extension. MDN explains the user-facing benefit clearly: the cache hides the latency penalty of revalidation by serving a stale response while it revalidates. This is exactly the “best of both worlds” behavior developers want:

  • Users get consistently fast responses.
  • Systems revalidate in the background.
  • Teams can choose freshness windows intentionally.
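The RFC 5861 semantics can be modeled as a small state function. A minimal sketch (the names `CacheState` and `classify` are illustrative, not part of any HTTP or Next.js API): given a cached response's age, its freshness lifetime (`s-maxage`), and its `stale-while-revalidate` window, decide what a cache may do.

```typescript
// Classify a cached response per RFC 5861 semantics (illustrative model).
// ageSeconds: time since the response was generated.
// maxAge: freshness lifetime (s-maxage); swr: stale-while-revalidate window.
type CacheState = "fresh" | "stale-while-revalidate" | "expired"

function classify(ageSeconds: number, maxAge: number, swr: number): CacheState {
  if (ageSeconds <= maxAge) return "fresh" // serve from cache, no work needed
  if (ageSeconds <= maxAge + swr) return "stale-while-revalidate" // serve stale, refresh in background
  return "expired" // must revalidate synchronously before serving
}
```

With `maxAge = 60` and `swr = 3600`, a request at age 30 is served fresh, a request at age 90 is served stale while a background refresh runs, and a request at age 7200 must wait for revalidation.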

How ISR works: revalidate, cache TTL, and stale-while-revalidate

Diagram: the incremental static regeneration lifecycle

Incremental static regeneration looks simple at the API surface (revalidate: 60), but the operational behavior is driven by caching layers. Understanding the workflow helps you avoid surprises.

Step-by-step ISR lifecycle

Think in phases:

  1. Initial generation

    • The page is generated during build or on-demand for a path.
    • HTML is cached and becomes the canonical static response.
  2. Fresh window

    • For the configured TTL, requests get the cached page immediately.
    • This is where you get the CDN-fast behavior.
  3. Stale window with background regeneration

    • After the TTL expires, a request marks the page eligible for regeneration.
    • The system serves the last generated page while regenerating in the background.
  4. Cache update

    • If regeneration succeeds, the cache is replaced with the new output.
    • Subsequent requests receive the updated page.

A diagram you can sketch on a whiteboard

Diagram: ISR request timeline

  • A horizontal timeline with markers: t0 (initial generation), t0..t60 (fresh), t61 (first request after TTL), t61..t62 (background build), t62+ (new cache).
  • Above the line: user requests.
  • Below the line: cache state transitions (fresh → stale → refreshed).

This diagram is useful in architecture reviews because it makes the stale window explicit. It also makes it clear why ISR is not ideal for truly real-time content.

What revalidate really means in Next.js

In Next.js, revalidate configures how often the cache may be invalidated when a request comes in, “at most once every N seconds.” That “at most” phrasing is important:

  • You are not scheduling a cron job.
  • You are setting an eligibility window that is checked on requests.
  • If a page has no traffic, it may stay stale until the next visit.

Cache-control and edge caching matter more than most teams expect

In hosted environments like Vercel, CDN caching is a first-class part of the performance story. Vercel documents that s-maxage defines how long a response is considered fresh by the CDN, and after it ends the CDN can serve stale while it revalidates asynchronously. This matters for ISR because you may have multiple caches:

  • Next.js internal page/data cache
  • Edge/CDN cache
  • Downstream CDN cache (CloudFront, Fastly, Cloudflare)
  • Browser cache

A strong ISR setup makes cache layers explicit and intentional. A fragile ISR setup relies on default behavior and ends up with unpredictable staleness.


Configuring ISR in the Pages Router with getStaticProps

The Pages Router remains common in established Next.js apps, and ISR there is straightforward: getStaticProps + revalidate.

Minimal ISR example with getStaticProps

// pages/blog/[id].tsx
import type { GetStaticProps, GetStaticPaths } from "next"

export const getStaticPaths: GetStaticPaths = async () => {
  // Option A: prebuild a subset of popular posts
  const posts = await fetch("https://api.example.com/posts").then(r => r.json())
  return {
    paths: posts.slice(0, 100).map((p: any) => ({ params: { id: String(p.id) } })),
    fallback: "blocking",
  }
}

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const res = await fetch(`https://api.example.com/posts/${params?.id}`)
  if (!res.ok) throw new Error(`Upstream failed: ${res.status}`)

  const post = await res.json()

  return {
    props: { post },
    // eligible for regeneration at most once per 60 seconds
    revalidate: 60,
  }
}

Key details:

  • fallback: "blocking" is often a safe default for large catalogs:
    • It avoids shipping a skeleton state for unknown paths.
    • The first request blocks until the page is generated, then it becomes static.
  • revalidate controls freshness windows.

Next.js notes that when combined with incremental static regeneration, getStaticProps runs in the background while the stale page is being revalidated.

Choosing fallback modes for large sites

You have three main choices:

  1. fallback: false

    • Only the listed paths exist. Great for small, fixed sets.
  2. fallback: true

    • Unknown paths render a fallback UI first, then hydrate into real content.
    • Good when you want instant navigation but can handle loading states.
  3. fallback: "blocking"

    • Unknown paths generate on the server before responding.
    • Excellent for SEO-critical pages like product details or articles.

For ISR-heavy e-commerce, "blocking" often minimizes UI complexity while keeping pages cacheable.

On-demand revalidation in the Pages Router

Time-based regeneration is not always enough. For a CMS-driven site, you often want “publish now” behavior with a webhook.

Next.js supports on-demand revalidation via an API route that calls res.revalidate(path).

// pages/api/revalidate.ts
import type { NextApiRequest, NextApiResponse } from "next"

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // validate a secret here (omitted for brevity)
  try {
    await res.revalidate("/posts/1")
    return res.json({ revalidated: true })
  } catch (err) {
    return res.status(500).send("Error revalidating")
  }
}

This gives you a clean publishing workflow:

  • CMS publishes content → sends webhook → your API route revalidates affected pages.
  • Users get updated pages without a full redeploy.
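One way to keep that webhook handler thin is a pure mapping from the CMS payload to the affected paths, which the API route then loops over with `res.revalidate(path)`. A sketch under assumptions: the `WebhookPayload` shape and the URL scheme are invented for illustration, not from any real CMS.

```typescript
// Map a hypothetical CMS webhook payload to the paths that need revalidation.
// Payload shape and URL scheme are illustrative assumptions.
interface WebhookPayload { type: "post" | "category"; slug: string }

function pathsToRevalidate(p: WebhookPayload): string[] {
  switch (p.type) {
    case "post":
      // the post itself, plus the listings that embed it
      return [`/posts/${p.slug}`, "/", "/posts"]
    case "category":
      return [`/categories/${p.slug}`]
  }
}
```

Keeping this mapping pure makes it trivially unit-testable, and the handler itself stays a thin loop of `await res.revalidate(path)` calls.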

Failure behavior that builds trust

ISR must fail gracefully. Next.js explicitly documents that if getStaticProps throws during background regeneration, the last successfully generated page continues to show, and Next.js retries on the next request. That’s a reliability advantage over naive “cache purge” strategies where one failed rebuild can cause widespread 500s.


Configuring ISR in the App Router with segment revalidate and cache tags

The App Router shifts the mental model toward data caching and route segment configuration, but ISR remains the same idea: cache outputs and revalidate intentionally.

Segment-level revalidation with export const revalidate

In the App Router, you can set a route segment TTL with export const revalidate = N.

// app/blog/[id]/page.tsx
export const revalidate = 60

export async function generateStaticParams() {
  const posts = await fetch("https://api.example.com/posts").then(r => r.json())
  return posts.map((p: any) => ({ id: String(p.id) }))
}

export default async function BlogPostPage({ params }: { params: { id: string } }) {
  const post = await fetch(`https://api.example.com/posts/${params.id}`).then(r => r.json())
  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
    </article>
  )
}

Data cache control with fetch options

App Router caching is frequently driven by fetch behavior. Next.js documents that fetch requests are not cached by default, and you opt into caching with cache: "force-cache". In practice, teams use a mix:

  • cache: "force-cache" for stable data
  • cache: "no-store" for truly dynamic data
  • next: { revalidate: N } to attach TTL behavior to a request
  • next: { tags: [...] } to enable tag-based invalidation

Example:

// entities/product/api/getProduct.ts
export async function getProduct(slug: string) {
  const res = await fetch(`https://api.example.com/products/${slug}`, {
    next: { revalidate: 300, tags: ["product", `product:${slug}`] },
  })
  if (!res.ok) throw new Error("Failed to load product")
  return res.json()
}

This keeps regeneration tied to domain data, not scattered across UI components.

On-demand invalidation with revalidatePath and revalidateTag

For “publish now” behavior, App Router provides on-demand invalidation functions. Next.js documents revalidatePath as a way to invalidate cached data for a specific path and clarifies it must run on the server (Server Functions or Route Handlers). Next.js also documents revalidateTag as a tag-based invalidation mechanism, ideal for content where a slight delay is acceptable and where multiple pages share the same cached data. A common pattern:

  • Tag data by entity and ID (product:123, post:abc)
  • On webhook, call revalidateTag("product:123")
  • All pages using that tag refresh on next visit

This is often more scalable than revalidating dozens of paths manually.
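The broad-plus-narrow tag convention can be centralized in one small helper so every entity is tagged consistently. The helper name `tagsFor` is an assumption for illustration; the tag strings are a naming convention, not a Next.js API.

```typescript
// Build cache tags for an entity: a broad tag ("all products") plus a
// narrow one for a single record. Tag names are a convention, not an API.
function tagsFor(entity: string, id?: string): string[] {
  return id ? [entity, `${entity}:${id}`] : [entity]
}
```

A fetch would pass `tagsFor("product", slug)` into `next: { tags: ... }`; the webhook handler then calls `revalidateTag` with either the narrow tag (one product) or the broad one (all products).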


Choosing the right revalidation strategy: time-based, on-demand, and tag-based

Incremental static regeneration is not a single configuration—it’s a toolbox. The best setups pick a strategy per content type and traffic pattern.

Strategy 1: Time-based revalidation for predictable freshness

Use time-based revalidation when:

  • Content changes on a schedule (hourly pricing updates, daily editorial updates).
  • Slight staleness is acceptable.
  • You want simple, low-ops configuration.

Practical tips:

  1. Start with a conservative TTL (e.g., 300–900 seconds).
  2. Measure regeneration cost and traffic distribution.
  3. Adjust based on cache hit rate and business requirements.

A healthy outcome is: most requests are cache hits and regeneration load is bounded.

Strategy 2: On-demand revalidation for editorial workflows

Use on-demand regeneration when:

  • A CMS publish action should reflect quickly.
  • You can identify affected paths or tags precisely.
  • You want minimal staleness windows.

Pages Router uses res.revalidate() for this pattern. App Router uses functions like revalidatePath and revalidateTag.

Strategy 3: Tag-based invalidation for shared data across many pages

Tag-based invalidation shines when:

  • Many pages depend on shared datasets (category lists, navigation menus, featured products).
  • You want to avoid revalidating a large set of URLs.
  • You want domain-level control: “invalidate all product pages” or “invalidate this product”.

This is where ISR becomes a true architectural tool: it lets you express cache boundaries that align with business boundaries.

A simple decision rubric

Pick your default strategy using three questions:

  1. How often does the data change?

    • Rare → time-based or manual
    • Bursty around publish → on-demand
  2. How bad is staleness?

    • Mild annoyance → time-based ISR
    • Compliance or pricing correctness → consider SSR or smaller TTL + on-demand
  3. How many pages depend on it?

    • One page → path-based
    • Many pages → tags
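The rubric above can be encoded as a default-strategy picker. A sketch only: the `Strategy` labels and the thresholds (one change per day, more than one dependent page) are illustrative judgment calls, not rules from the Next.js docs.

```typescript
// Encode the three-question rubric as a default-strategy picker.
// Labels and thresholds are illustrative judgment calls, not rules.
type Strategy = "time-based" | "on-demand" | "tags" | "ssr-or-short-ttl"

function pickStrategy(opts: {
  changesPerDay: number
  stalenessCritical: boolean
  dependentPages: number
}): Strategy {
  if (opts.stalenessCritical) return "ssr-or-short-ttl" // correctness beats caching
  if (opts.dependentPages > 1) return "tags"            // shared data across pages
  if (opts.changesPerDay < 1) return "time-based"       // rare changes, simple TTL
  return "on-demand"                                    // bursty around publish
}
```

The point is not the exact thresholds but making the decision explicit and reviewable per content type, instead of defaulting everything to one TTL.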

Best ISR use cases: e-commerce, blogs, docs, and large catalogs

Incremental static regeneration is most valuable when you have many pages and high read traffic relative to writes.

Use case 1: Large e-commerce catalogs

Typical requirements:

  • Product pages must load fast worldwide.
  • Inventory and pricing change, but not every second.
  • SEO is crucial.

ISR fits because:

  • Product pages can be cached as static HTML at the edge.
  • Regeneration can be triggered on product updates or time windows.
  • The origin load becomes proportional to changes, not traffic.

A robust setup often includes:

  • Time-based TTL for “good enough” freshness (e.g., 5 minutes).
  • On-demand revalidation for critical updates (pricing corrections).
  • Tag-based invalidation to refresh dependent pages (category listings).

Use case 2: Blogs and content publishing platforms

Blogs love ISR because:

  • Posts are read far more than they are edited.
  • New posts can be pre-rendered and cached instantly.
  • Updates can be pushed via webhook without redeploying.

If you run a multi-author blog with thousands of posts, ISR prevents the “giant rebuild” problem and keeps your publishing flow smooth.

Use case 3: Documentation portals

Docs have a special mix:

  • Most pages are stable.
  • Navigation, version selectors, and search indexes may change.

ISR lets you:

  • Keep pages static and fast.
  • Regenerate shared UI and metadata via tags.
  • Avoid full-site rebuilds on small doc fixes.

Use case 4: Marketplaces and listing pages

Listing pages (search results, category pages) can use ISR if:

  • You can tolerate slight staleness.
  • You can segment caching by filters or popular queries.

Be careful here: caching explosion is real. If every filter combination becomes a cached page, your cache can grow without bound. Use ISR for the “top of funnel” popular pages and keep deeply personalized search SSR or client-driven.

When ISR is the wrong tool

ISR is not a silver bullet. Avoid it when:

  • Content must be correct per user (account dashboards, private data).
  • Updates must reflect instantly with zero staleness (real-time trading, live sports tickers).
  • Cache key complexity is too high (high-cardinality personalization, A/B tests without careful variance handling).

In these cases, SSR or hybrid approaches (SSR + targeted CDN caching) may be the better fit.


Operational tuning: CDN layers, headers, observability, and failure modes

High-performing ISR is as much an ops discipline as it is a coding feature. The goal is stable latency and predictable freshness.

Understand your CDN behavior and headers

If you deploy on Vercel, it’s useful to know:

  • s-maxage controls freshness at the CDN.
  • After s-maxage, Vercel can serve stale content while revalidating asynchronously. Vercel’s proxy consumes stale-while-revalidate and doesn’t include it in the final response to the browser, which helps avoid browser-level surprises.

If you deploy behind another CDN, be explicit about which cache layer is authoritative. Multi-layer caches can accidentally extend staleness if every layer applies stale-while-revalidate independently.

Actionable guidance:

  1. Document your cache layers: browser, downstream CDN, edge, origin.
  2. Ensure TTLs are aligned with business requirements.
  3. Prefer targeted cache-control headers where supported to avoid “one TTL fits all” behavior.
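One way to keep TTLs intentional rather than defaulted is to compose the CDN-facing `Cache-Control` value in a single helper, so the numbers per route are visible in code review. The function name is an assumption; the directive syntax follows RFC 5861 and standard HTTP caching.

```typescript
// Compose an explicit CDN-facing Cache-Control value so freshness and
// stale windows are deliberate per route. Numbers are placeholders to tune.
function cdnCacheControl(sMaxAge: number, staleWhileRevalidate: number): string {
  return `public, s-maxage=${sMaxAge}, stale-while-revalidate=${staleWhileRevalidate}`
}
```

For example, `cdnCacheControl(300, 86400)` yields `"public, s-maxage=300, stale-while-revalidate=86400"`: fresh for five minutes at the CDN, then serve stale for up to a day while revalidating.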

Mitigate thundering herds and regeneration spikes

A common fear is: “What happens when a popular page becomes stale and millions of users arrive?”

Good ISR implementations prevent a stampede by ensuring:

  • Only one regeneration happens per page at a time.
  • Other requests continue receiving cached HTML.
  • Regeneration is bounded by TTL windows.

From a systems perspective, ISR is a coupling reduction tool: it decouples traffic volume from compute volume.
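The "only one regeneration per page" guarantee is a single-flight pattern, and it can be sketched in a few lines (an illustrative model of the idea, not how Next.js implements it internally):

```typescript
// Single-flight guard: at most one regeneration per page key at a time.
// Other requests keep receiving cached HTML instead of piling onto the origin.
class SingleFlight {
  private inFlight = new Set<string>()

  // Returns true only for the caller that should start regeneration.
  tryStart(key: string): boolean {
    if (this.inFlight.has(key)) return false
    this.inFlight.add(key)
    return true
  }

  finish(key: string): void {
    this.inFlight.delete(key)
  }
}
```

Even if a million requests hit a stale page, only the first `tryStart` winner does work; compute scales with the number of stale pages, not with traffic.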

Observe and debug caching in production-like conditions

Local development often does not behave like production caching. Next.js suggests verifying ISR behavior with next build and next start, and provides a debug environment variable NEXT_PRIVATE_DEBUG_CACHE=1 to log cache hits and misses. That’s practical advice for teams because it encourages a repeatable verification workflow:

  • Make a change to upstream data.
  • Observe whether the page stays stale during the window.
  • Confirm regeneration updates the cache as expected.

Plan for failures and upstream instability

One of ISR’s strongest trust-building features is graceful degradation:

  • If regeneration fails, users keep seeing the last successfully generated page.
  • The system retries regeneration on subsequent requests.

Turn that into an intentional reliability pattern:

  • For non-critical content: serve stale during outages.
  • For critical content: reduce TTL, add fallback SSR, or protect with feature flags.

Advanced knob: durable cache location

If you run Next.js in containers or multiple instances, cache persistence matters. Next.js documents that you can customize the cache location to persist cached pages/data to durable storage or share cache across instances. This is a “rare attribute” that becomes valuable in enterprise deployments:

  • Horizontal scaling without cache fragmentation
  • Faster warm-up after restarts
  • More consistent regeneration behavior

Scaling maintainability: using ISR with Feature-Sliced Design

Rendering strategy solves performance and freshness. Architecture solves maintainability, onboarding, and refactoring cost. Mature teams need both.

In practice, performance work without structural discipline tends to create “distributed spaghetti”: caching rules scattered across components, inconsistent data access patterns, and brittle imports. Feature-Sliced Design provides a methodology for preventing that outcome.

Why architecture matters more when you adopt ISR

ISR introduces new concerns:

  • Cache boundaries
  • Revalidation triggers
  • Shared data dependencies
  • Public APIs for invalidation utilities

If these concerns leak into arbitrary UI files, you get high coupling:

  • A page component knows too much about caching.
  • A widget imports internal helpers from another folder.
  • A feature toggles revalidation in a way that breaks unrelated routes.

Feature-Sliced Design addresses this by enforcing:

  • Responsibility-based layering
  • Slice isolation
  • Stable public APIs

The official FSD docs describe layers as the first organizational hierarchy that separates code by responsibility and dependencies, and list seven layers from App down to Shared.

Comparing common frontend organization approaches

No single pattern fits every team, but it helps to see why FSD is a strong contender for large Next.js codebases.

| Methodology | What it optimizes for | Common scaling risk |
| --- | --- | --- |
| MVC / MVP | Clear separation of UI and logic | Tends to become “technical folders” that don’t match business domains |
| Atomic Design | Consistent UI composition and design systems | Often weak at organizing business logic and feature ownership |
| Domain-Driven Design in frontend | Aligning code to domain language | Can be hard to map cleanly to UI composition without strict module boundaries |
| Feature-Sliced Design | Business-aligned slices + controlled dependencies | Requires discipline and a shared team mental model |

In practice, many teams combine ideas:

  • Atomic Design inside Shared UI for reusable primitives
  • DDD-inspired naming for Entities and Features
  • FSD rules to keep dependencies directional and imports intentional

The key FSD primitives that pair well with ISR

Layers and dependency direction

FSD layers (from high responsibility to low) encourage a “top-down assembly” model:

  • App: composition, providers, global configuration
  • Pages: route-level screens
  • Widgets: large UI blocks
  • Features: user interactions and flows
  • Entities: domain models (product, user, cart)
  • Shared: UI kit, utilities, API client

The dependency direction is downward: higher layers can depend on lower layers, not the reverse.

That matters for ISR because cache logic belongs close to Entities and Features, not scattered across Pages.
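The downward dependency rule is mechanically checkable, which is how teams usually back it with a lint rule. A minimal sketch, using the six layers listed above and simplifying FSD's real rule (same-layer slice isolation has more nuance than this):

```typescript
// Check FSD's downward dependency rule: a module may depend only on
// layers strictly below its own. Simplified sketch of the real rule.
const LAYERS = ["shared", "entities", "features", "widgets", "pages", "app"] as const
type Layer = (typeof LAYERS)[number]

function importAllowed(from: Layer, to: Layer): boolean {
  // higher index = higher responsibility; imports must point strictly downward
  return LAYERS.indexOf(from) > LAYERS.indexOf(to)
}
```

So `pages` may import from `entities`, but `entities` importing from `features` is flagged; in a real codebase this check typically lives in an ESLint rule keyed off path aliases.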

Public API as a coupling control mechanism

FSD defines a public API as a contract and a gate: other modules should access only what the slice exposes, usually through an index file. This aligns perfectly with ISR needs:

  • Pages import getProduct() from entities/product without knowing internal fetch details.
  • Revalidation utilities live behind a stable interface.
  • Refactors don’t require rewriting imports across the whole app.

A concrete Next.js + FSD folder structure that supports ISR cleanly

A practical approach for Next.js is:

  • Keep Next.js routing in src/app (App Router) or src/pages (Pages Router).
  • Keep FSD slices in src/ with clear layer folders.
  • Use path aliases so routing files import slices cleanly.

Example tree:

src/
  app/                 # Next.js App Router routes
    (shop)/
      product/
        [slug]/
          page.tsx
    api/
      revalidate/
        route.ts
  app-layer/           # FSD App layer (rename to avoid confusion with Next "app")
    providers/
      index.ts
      ui/
        AppProviders.tsx
  pages/               # FSD Pages layer
    product-page/
      index.ts
      ui/
        ProductPage.tsx
  widgets/
    product-details/
      index.ts
      ui/
        ProductDetails.tsx
  features/
    add-to-cart/
      index.ts
      model/
        useAddToCart.ts
      ui/
        AddToCartButton.tsx
  entities/
    product/
      index.ts
      api/
        getProduct.ts
      model/
        types.ts
  shared/
    api/
      httpClient.ts
    ui/
      button/
        Button.tsx

Why this works well with incremental static regeneration:

  • Route files are thin: they orchestrate params + TTL + composition.
  • Entities encapsulate data fetching and cache tagging.
  • Features encapsulate user actions.
  • Widgets compose UI without owning backend concerns.
  • Public APIs keep imports stable over time.

Putting ISR configuration where it belongs

Keep route-level TTLs at the route edge, but keep data caching semantics in domain modules.

Example route file:

// src/app/(shop)/product/[slug]/page.tsx
import { ProductPage } from "@/pages/product-page"
import { getProduct } from "@/entities/product"

export const revalidate = 300

export default async function Page({ params }: { params: { slug: string } }) {
  const product = await getProduct(params.slug)
  return <ProductPage product={product} />
}

Example entity fetch with tags:

// src/entities/product/api/getProduct.ts
export async function getProduct(slug: string) {
  const res = await fetch(`https://api.example.com/products/${slug}`, {
    next: { revalidate: 300, tags: ["product", `product:${slug}`] },
  })
  if (!res.ok) throw new Error("Failed to fetch product")
  return res.json()
}

Then the on-demand invalidation endpoint stays isolated:

// src/app/api/revalidate/route.ts
import { revalidateTag } from "next/cache"

export async function POST(req: Request) {
  const { slug } = await req.json()
  revalidateTag(`product:${slug}`)
  return Response.json({ revalidated: true })
}

This separation increases cohesion:

  • Routing concerns stay in routes.
  • Domain data behavior stays in Entities.
  • Cross-cutting invalidation stays in a dedicated server route.

Why this boosts E-E-A-T for your codebase, not just your content

Experience with large-scale projects shows that teams struggle less when:

  • The architecture makes “where does this code go?” obvious.
  • Boundaries are enforced via public APIs rather than “team memory.”
  • New developers can navigate by domain, not by technical artifact type.

As demonstrated by projects using FSD and by the official examples repository, the methodology’s emphasis on predictable structure and isolation makes refactoring safer and onboarding faster. And because ISR introduces lifecycle complexity (regeneration, invalidation, caching), having a clear architectural methodology is a long-term multiplier on reliability.


Conclusion

Incremental static regeneration gives Next.js teams a practical middle ground between static site generation and server-side rendering: pages stay CDN-fast, while freshness is handled through revalidate windows, stale-while-revalidate behavior, and on-demand invalidation. With the Pages Router you can implement ISR through getStaticProps and res.revalidate, and with the App Router you can combine segment-level revalidation, cached fetches, and tag-based invalidation for scalable content workflows. Just as importantly, adopting a structured architecture like Feature-Sliced Design is a long-term investment in code quality, refactoring safety, and team productivity—especially as ISR adds caching and invalidation concerns that deserve clear boundaries.

Ready to build scalable and maintainable frontend projects? Dive into the official Feature-Sliced Design Documentation to get started.

Have questions or want to share your experience? Visit the Feature-Sliced Design homepage to join the community.