
Why pnpm vitest --run Hangs Forever (and How I Fixed It)

Jere on March 8, 2026
•
4 min read
debugging
vitest
pnpm
tooling

Running pnpm vitest --run started hanging indefinitely. It would print the version header — RUN v4.0.16 — and then just... nothing. No tests ran, no errors, no timeout. Just silence. Since this was the command we run before every PR, it completely blocked our workflow.

Here's the story of debugging it.

Narrowing It Down

First question: is vitest itself broken? I ran it through npx instead:

npx vitest --run --reporter=verbose

165 tests passed in 14 seconds. Vitest is fine.

Maybe a config issue? Tried bypassing the config entirely:

pnpm vitest --run --config /dev/null

Still hangs. Not the config.

Then I tried running vitest directly from node_modules:

./node_modules/.bin/vitest --run

Works perfectly. 14 seconds, all tests pass. But pnpm vitest --run hangs. So the problem is specifically in how pnpm wraps the vitest process.

The Pipe Deadlock

I started looking at output handling. Backgrounding pnpm vitest --run without piping stderr worked fine. But the moment I added 2>&1, it hung. Redirecting to a file produced only 3 lines after 25 seconds.

This told me it wasn't a "tests are slow" problem. It was a pipe buffer deadlock -- stdout/stderr getting jammed somewhere in the process chain.
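The mechanism is easy to demonstrate outside pnpm entirely. A pipe is a fixed-size kernel buffer; once it fills and nobody reads from the other end, the writer blocks. This Python sketch (assuming Linux, where the default pipe capacity is 64 KiB) measures that buffer using a non-blocking writer instead of letting anything hang:

```python
import os

# A pipe holds a fixed amount of data before writes block.
# Fill it with a non-blocking writer and count how much fits.
r, w = os.pipe()
os.set_blocking(w, False)

capacity = 0
try:
    while True:
        capacity += os.write(w, b"x" * 4096)
except BlockingIOError:
    pass  # the pipe is full; a blocking writer would hang right here

print("pipe capacity:", capacity, "bytes")  # typically 65536 on Linux

os.close(r)
os.close(w)
```

That 64 KiB is not much when 165 tests' worth of reporter output is flowing through several pipes at once.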

Dead Ends

Before finding the real cause, we burned time on a bunch of things that didn't help:

  • Updating pnpm from 10.6.1 to 10.31.0 (25 minor versions!)
  • Setting pool: 'forks' in vitest config (forces child processes instead of worker threads)
  • Setting reporters: ['verbose'] in vitest config
  • Both pool and reporters together
  • NO_COLOR=1 and FORCE_COLOR=0 environment variables
  • pnpm node ./node_modules/.bin/vitest --run
  • pnpm exec vitest --run
  • Changing the test script in package.json to various alternatives

None of these changed anything. The hang happened before vitest even started running tests, so vitest config changes were never going to help.

strace Reveals the Truth

We pulled out strace to trace what processes were actually being spawned:

strace -f -e trace=process pnpm vitest --run --version

The output revealed a chain of nested wrappers:

  1. Volta intercepts the pnpm command via its PATH shim (~/.volta/bin/pnpm)
  2. Volta spawns its own pnpm image (~/.volta/tools/image/packages/pnpm/bin/pnpm) using Node 20.12.2
  3. That pnpm detects corepack's pnpm via the packageManager field and spawns another pnpm from ~/.local/share/pnpm/.tools/pnpm/10.31.0
  4. The inner pnpm finally spawns vitest

shell → volta shim → volta's pnpm (Node 20) → corepack's pnpm (Node 24) → vitest workers

Three layers of process wrapping, each with its own pipe buffers. Vitest uses worker threads that write to stdout and stderr concurrently. That output has to travel through all those layers back to the shell. The nested pipe buffers fill up, neither side drains, and everything deadlocks.
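That failure mode reproduces in miniature: spawn a child with a piped stdout, let it write more than one buffer's worth, and wait for it without reading. The Python sketch below is a hypothetical stand-in for the pnpm → vitest layering, not the actual code path; the 2-second timeout exists only so the demo itself doesn't hang:

```python
import subprocess
import sys

# Child writes ~1 MiB to stdout -- far more than one pipe buffer (64 KiB).
child_code = "import sys; sys.stdout.write('x' * (1 << 20))"

p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdout=subprocess.PIPE)
try:
    # Wait for the child WITHOUT draining its stdout. Once the pipe
    # buffer fills, the child's write() blocks, wait() never returns,
    # and the two processes are deadlocked.
    p.wait(timeout=2)
    deadlocked = False
except subprocess.TimeoutExpired:
    deadlocked = True

p.communicate()  # drain the pipe so the child can actually finish
print("deadlocked:", deadlocked)  # prints: deadlocked: True
```

Every extra wrapper in the chain adds another pipe that can jam exactly like this.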

Confirming the Root Cause

To confirm, we ran the innermost pnpm directly:

~/.local/share/pnpm/.tools/pnpm/10.31.0/bin/pnpm vitest --run

14 seconds. All tests pass. The double-wrapping through Volta was the root cause.

The Fix

The project had "packageManager": "pnpm@10.31.0" in package.json, meaning corepack was set up to manage pnpm. But Volta had also installed pnpm@10.15.0 as a global package at some earlier point, creating the invisible double-shim. Running volta list all showed pnpm in the list — that's how you'd spot this.

volta uninstall pnpm
corepack enable pnpm

That's it. Removed Volta's pnpm shim so corepack is the sole manager. No vitest config changes, no pool settings, no reporter changes — none of the workarounds were needed.

After the fix:

pnpm vitest --run  # 14 seconds, 165 tests passing

While investigating, we also updated pnpm from 10.6.1 to 10.31.0 in the packageManager field. The update didn't fix the hang, but we kept it, since staying current is a good idea regardless.

Environment

  • Volta managing Node (pinned to 24.8.0 in package.json)
  • Volta had also installed pnpm@10.15.0 as a package (running under Node 20.12.2)
  • Corepack managing pnpm@10.31.0 via the packageManager field
  • vitest 4.0.16 with worker threads (default pool)
  • Linux (WSL2), kernel 6.6.87.2

How to Spot This

If a CLI tool hangs when run through pnpm but works fine via ./node_modules/.bin/:

  1. Run volta list all — if pnpm appears there and you have a packageManager field in package.json, you've got a double-shim
  2. Run strace -f -e trace=process pnpm <command> --version and count how many pnpm processes get spawned
  3. Try running the innermost pnpm binary directly to confirm
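For step 1, `which -a pnpm` shows every resolution candidate in order. If you want the same check in a script, this Python one-off just walks PATH by hand (it's not a Volta or corepack API); with a Volta shim installed, ~/.volta/bin/pnpm typically appears first:

```python
import os

# List every executable named 'pnpm' on PATH, in resolution order.
# More than one hit usually means stacked shims.
hits = []
for d in os.environ.get("PATH", "").split(os.pathsep):
    candidate = os.path.join(d, "pnpm")
    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
        hits.append(candidate)

print("\n".join(hits) or "pnpm not found on PATH")
```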

The Takeaway

The core issue isn't Volta or corepack individually — both work fine on their own. It's the combination that creates an extra process layer. Threaded tools like vitest write to stdout/stderr from multiple workers concurrently, and the nested pipe buffers between process layers can deadlock. The fix is simple: pick one manager for pnpm and remove the other.

If you're using Volta for Node version management and corepack for package manager versions, that's a clean separation. Just make sure Volta isn't also managing the package manager.
