
Next.js 16.2.2 standalone + Cache Components: cached internal streamed fetches cause unbounded arrayBuffers growth and OOM #92287

@mdotk

Description

Link to the code that reproduces this issue

https://github.com/mdotk/next-standalone-memory-repro

To Reproduce

  1. Clone the repro repo and install dependencies.
     npm install
  2. Build the app.
     npm run build
  3. Start the standalone server.
     PORT=3025 npm run start:standalone
  4. In another shell, run the reproducing load.
     BASE_URL=http://127.0.0.1:3025 \
     DURATION_MS=180000 \
     CONCURRENCY=64 \
     PAGE_WEIGHT_KB=2048 \
     API_WEIGHT_KB=2048 \
     MIX=page,api,page,page \
     MAX_LOGGED_FAILURES=20 \
     npm run load
  5. Sample memory during the run.
     curl 'http://127.0.0.1:3025/api/health?sample=1'

Current vs. Expected behavior

Current:

On next@16.2.2 with output: "standalone" and cacheComponents: true, the standalone server shows rapid, unbounded memory growth under sustained high-cardinality traffic once the app makes cached internal server-side fetch() calls against a streamed JSON route.

In one local run:

  • baseline was about 95 MB rss
  • after ~28s, memory reached about 1.58 GB rss / 589 MB arrayBuffers
  • after ~38s, memory reached about 2.36 GB rss / 1.04 GB arrayBuffers
  • after ~62s, memory reached about 2.42 GB rss / 1.95 GB arrayBuffers
  • after ~180s, memory reached about 3.43 GB rss / 4.31 GB arrayBuffers

The standalone server then died with:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

While the server is destabilizing, the load generator starts receiving:

  • ECONNRESET
  • UND_ERR_SOCKET
  • ECONNREFUSED

Expected:

Memory should stabilize or at least be reclaimed after responses complete. Large temporary spikes under load are one thing, but the standalone process should not continue retaining arrayBuffers/external memory until it exits with OOM.

Provide environment information

Operating System:
  macOS 15.x arm64
Node.js version:
  25.1.0
Next.js version:
  16.2.2
Output mode:
  standalone
Other config:
  cacheComponents: true
  compress: false
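
For completeness, the relevant next.config.js shape (a sketch matching the options listed above, not a copy of the repro's actual file):

```javascript
// next.config.js — the options under which the leak reproduces (sketch).
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "standalone",
  cacheComponents: true,
  compress: false,
};

module.exports = nextConfig;
```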

Additional context

A few things make this feel close to the existing response-retention / runtime-retention family:

  • the failing repro requires cached internal server-side fetch()
  • the internal fetched route returns a streamed JSON response
  • the exploding categories are rss, external, and especially arrayBuffers
  • the app is generating many unique request paths over time

I also ran the same repro app locally with next start as a control. I know that is not the supported way to run an app configured with output: "standalone", so I am not presenting it as the main repro, but it is a useful signal:

  • with next start, the app still showed large temporary growth under the same load
  • after the shorter control run stopped, memory recovered back down instead of the server dying
  • with the standalone server, the same app kept climbing and eventually exited with OOM

That difference made me file this specifically against the standalone runtime path.

This also overlaps with symptoms in:
