1. What RSS actually measures

RSS (Resident Set Size) is the pages of physical RAM currently loaded for this process. "Resident" means the kernel has placed these pages in RAM; they can be read or written without a page fault. RSS includes the process's own executable code, its stack, its heap allocations, and the portions of shared libraries that happen to be in RAM at this moment.[1]

RSS doesn't count pages that have been swapped to disk. The kernel evicted those to free space for other processes. So RSS can drop under memory pressure without the process having freed anything at all.

RSS answers the practical question: how much RAM is this process using right now? It's the number that determines whether your process gets killed, and the one to watch over time.
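To watch that trend from inside the process itself, a small sampler over process.memoryUsage().rss is enough. A minimal sketch (the interval and history size here are arbitrary choices, not recommendations):

```javascript
// Sample RSS on an interval and keep a rolling history.
// Works in both Node.js and Bun via process.memoryUsage().
function startRssSampler(intervalMs = 5000, maxSamples = 720) {
  const samples = [];
  const timer = setInterval(() => {
    samples.push({ t: Date.now(), rssMB: process.memoryUsage().rss / 1e6 });
    if (samples.length > maxSamples) samples.shift(); // drop oldest sample
  }, intervalMs);
  timer.unref?.(); // don't keep the process alive just for monitoring
  return { samples, stop: () => clearInterval(timer) };
}

const monitor = startRssSampler(5000);
// later: inspect monitor.samples, call monitor.stop()
```

Plotting the resulting series over minutes, not reading one value, is what reveals the saw-tooth vs staircase patterns discussed later.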

When RSS misleads

  • Two processes sharing a 20 MB library both report ~20 MB RSS for it, even though the kernel stores one copy. Sum RSS across all processes and you'll overcount system memory usage.[13]
  • Pages swapped to disk disappear from RSS. A process hemorrhaging memory looks smaller after the kernel evicts its cold pages.
  • On macOS (since 10.9), inactive pages are compressed in place rather than swapped. Compressed pages still count as resident, so RSS doesn't shrink the way you'd expect.[9]
  • On Bun, mimalloc retains freed pages in internal arenas rather than returning them to the OS. The JS heap can shrink while RSS stays elevated — see Section 3 for details.

2. Node.js and the V8 memory model

To understand why RSS grows beyond the JS heap, you need to know how V8 manages memory internally. The heap is only part of the picture.[4]

How V8's heap is structured

When you call process.memoryUsage(), you're seeing the JS heap. But several other regions also contribute to RSS:

  • V8 Heap — New Space: short-lived objects (1–8 MB); garbage collected frequently with Cheney's algorithm.
  • V8 Heap — Old Space: long-lived objects; default limit ~1.4–8 GB depending on Node.js version and system RAM.
  • V8 Heap — Code Space: JIT-compiled code; the only executable heap region.
  • External / Native Memory: C++ bindings, Buffers, native modules; lives outside V8's heap entirely.
  • Stack: local variables and call frames; fixed size, very fast.
  • V8 Engine Code: the runtime itself, plus libuv, OpenSSL, ICU data.

A fresh Node.js process typically reports approximately:[4]

process.memoryUsage()
{
  rss:          ~25 MB   ← physical RAM used
  heapTotal:    ~ 5 MB   ← V8 heap allocated
  heapUsed:     ~ 3.5 MB ← V8 heap in active use
  external:     ~ 1.2 MB ← C++ / native bindings
  arrayBuffers: ~10 KB   ← ArrayBuffer allocations
}

RSS (~25 MB) is already 5–7× heapUsed (~3.5 MB) at startup, before your code has done anything. The engine itself, native libraries, and stack frames take up the rest. As your app grows, the gap can widen further — especially if you use Buffers, since Buffer data lives in native memory. heapUsed stays flat while RSS climbs, which trips up a lot of leak investigations.[14]
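A quick experiment makes the gap visible (the 50 MB figure is illustrative): allocating Buffers moves external and RSS while heapUsed barely changes.

```javascript
// Allocate ~50 MB of Buffer data and compare before/after readings.
// Buffer contents live in native memory, so `external` and RSS move
// while `heapUsed` stays nearly flat.
const before = process.memoryUsage();

const buffers = [];
for (let i = 0; i < 50; i++) {
  buffers.push(Buffer.alloc(1024 * 1024)); // 1 MB each, zero-filled
}

const after = process.memoryUsage();
const mb = (n) => (n / 1e6).toFixed(1) + ' MB';
console.log('heapUsed:', mb(before.heapUsed), '->', mb(after.heapUsed));
console.log('external:', mb(before.external), '->', mb(after.external));
console.log('rss:     ', mb(before.rss), '->', mb(after.rss));
```

If you only graphed heapUsed here, the 50 MB would be invisible.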

3. Bun and JavaScriptCore

Claude Code runs on Bun, not Node.js. Bun uses JavaScriptCore (JSC), the WebKit engine that powers Safari, instead of V8. The memory behavior is different enough that Node.js intuitions don't always transfer.

JSC's Riptide garbage collector

JSC's GC, called Riptide, works quite differently from V8's:[7][8]

Property | V8 (Node.js) | JSC / Riptide (Bun)
Compaction | Yes (Mark-Compact phase) | No — objects are not moved
Concurrency | Mostly-concurrent marking | Mostly-concurrent with retreating wavefront
Generations | New / Old Space (semi-space + promotion) | Eden + Old (sticky mark bits)
Allocator | V8's own slab allocator | mimalloc (external, from Microsoft)
Throttling | GC triggers based on heap size | Space-time scheduler prevents GC death spirals

Bun vs Node.js: actual numbers

Benchmarks comparing Bun 1.x to Node.js 20.x across workload types:[11]

Scenario | Node.js 20.x (RSS) | Bun 1.x (RSS) | Difference
Fresh / idle process | 30–35 MB | 15–20 MB | ~45% lower
Simple HTTP server | 50–60 MB | 28–35 MB | ~42% lower
Moderate load (REST API) | 180–220 MB | 100–130 MB | ~40% lower
Sustained production load | baseline | ~15% lower | advantage shrinks

The gap is real at idle and startup. Under sustained load the runtimes converge to within ~15%. The 200–420 MB RSS range in this Claude Code session is normal for an active Bun process.

The RSS retention problem in Bun

Known issue: Bun's allocator (mimalloc) can retain RSS even after the GC has cleaned up the heap. In documented SSR workloads, the JS heap shrank from 362 MB to 141 MB while RSS stayed locked at 1.5–1.7 GB, roughly 5× the heap. The allocator holds freed pages in internal arenas rather than returning them to the OS.[10]

This matters because it changes what the numbers mean:

  • A saw-tooth GC pattern in the heap may produce no visible drop in RSS
  • A "stable" RSS may hide a heap that's churning underneath
  • Bun.gc(true) forces a collection but can't force mimalloc to release pages to the OS[10]
  • Long sessions accumulate RSS that doesn't fully return. Worth knowing before you spend time hunting a leak that isn't there.

The gradual RSS increase in this dashboard (206 MB at start, 424 MB peak over ~13 minutes) fits both normal session growth and mimalloc retention. Without heap-level data, you can't tell which it is from RSS alone.
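One way to probe this from code: force a collection where the runtime allows it, then compare how far heapUsed falls versus RSS. Under Bun, the heap can shrink while RSS barely moves because of mimalloc retention; under Node.js, the manual-GC branch only runs when started with --expose-gc. A sketch:

```javascript
// Allocate and drop a pile of short-lived objects, force a collection
// where possible, and report how much the heap vs RSS actually shrank.
function forceGcAndMeasure() {
  let junk = Array.from({ length: 1e6 }, (_, i) => ({ i }));
  const peak = process.memoryUsage();
  junk = null; // release the only reference

  if (typeof Bun !== 'undefined') {
    Bun.gc(true);              // Bun: synchronous full collection
  } else if (typeof globalThis.gc === 'function') {
    globalThis.gc();           // Node.js: only present with --expose-gc
  }

  const after = process.memoryUsage();
  return {
    heapFreedMB: (peak.heapUsed - after.heapUsed) / 1e6,
    rssFreedMB: (peak.rss - after.rss) / 1e6,
  };
}

console.log(forceGcAndMeasure());
```

On a Bun process exhibiting the retention issue, heapFreedMB can be large while rssFreedMB stays near zero; that divergence, not either number alone, is the signal.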

4. Operating system differences

RSS is reported differently across operating systems, and "memory" means different things in each OS's tooling. Cross-platform comparisons require some care.

Linux

Linux uses demand paging with overcommit. The kernel creates virtual mappings immediately but doesn't allocate physical pages until first access. The vm.overcommit_memory sysctl controls how far this goes:[2]

  • Mode 0 (default): heuristic overcommit. Allocations that look reasonable succeed even if total commitments exceed RAM + swap. When physical memory is truly gone, the OOM killer picks a process and terminates it.
  • Mode 1 (always overcommit): never refuse any mmap(). Risky on memory-constrained machines.
  • Mode 2 (strict): total committed memory can't exceed CommitLimit (RAM × ratio + swap). Allocations fail with ENOMEM instead of succeeding and later causing an OOM kill.

Linux exposes much richer per-process memory data than ps or top show. /proc/[pid]/status gives VmRSS, VmHWM (peak RSS), and VmSwap.[1] /proc/[pid]/smaps gives a per-mapping breakdown including PSS and USS.

macOS

macOS uses the Mach VM subsystem with 16 KB pages on Apple Silicon vs 4 KB on Intel. Two things make macOS memory accounting work differently from Linux:

  • Memory compression: since OS X 10.9, inactive pages are compressed in place rather than swapped to disk. Compressed pages still count as resident, so RSS doesn't shrink the way it would on Linux. You can have 12 GB of "resident" memory on an 8 GB machine because half of it has been compressed into 4 GB of physical space. [9]
  • Unified Memory on Apple Silicon: CPU and GPU share the same physical pool. No discrete VRAM. A GPU-heavy workload (video editing, ML inference) directly competes with your running applications. On an 8 GB base M-series Mac, this is a real constraint.

Activity Monitor has several columns that map loosely to Linux concepts:[9]

Activity Monitor column | Closest Linux equivalent | Notes
Memory (default) | ~PSS | Accounts for shared frameworks proportionally. Usually smaller than Real Memory.
Real Memory | RSS | All pages currently resident, including shared library pages.
Real Private Memory | ~USS | Pages exclusively owned by this process.

Windows

Windows uses different terminology, which causes real confusion when reading cross-platform memory guides:[15]

Windows metric | Closest Linux equivalent | Meaning
Working Set | RSS | Total RAM currently accessible (private + shared pages).
Private Working Set | RSS minus shared | RAM exclusively for this process. Task Manager's default "Memory" column.
Commit Size | No direct match | Total committed private memory, whether in RAM or page file. The number to watch for Windows capacity planning.

On Windows, Commit Size is the metric that matters most for capacity planning. Windows enforces a system-wide commit limit (RAM + page file). If total committed memory across all processes hits that ceiling, allocations fail. Users have reported Claude Code reaching 47 GB Commit Size on Windows, which is a real problem on machines with limited page file configurations.[16]

5. Modern vs older hardware

400 MB RSS means something very different on an 8 GB machine vs a 64 GB workstation. The raw number only makes sense relative to what's available.

8 GB machine

e.g., base M2 MacBook Air

  • 400 MB RSS = ~5% of total RAM
  • Memory pressure activates sooner; macOS compresses aggressively
  • SSD swap can activate with several applications open
  • Unified Memory means GPU workloads compete directly for the same pool
  • Claude Code at 400+ MB alongside a browser, IDE, and other tools is meaningful pressure. A leak to 2+ GB will visibly degrade the system.

16 GB machine

e.g., mid-range laptop, M2 Pro MacBook

  • 400 MB RSS = ~2.5% of total RAM
  • Comfortable headroom for typical developer workloads
  • Swap rarely activates under normal usage
  • JSC heap limits scale up automatically
  • A normal Claude Code session is unnoticeable. A leak to 5+ GB will start degrading things.

32–64 GB machine

e.g., Mac Studio, workstation, high-end server

  • 400 MB RSS = roughly 1% of total RAM or less
  • Swap essentially never activates for normal workloads
  • If swap does activate, something is genuinely wrong
  • Raw MB numbers matter less; watch the trend instead
  • RSS only becomes a concern at multi-GB levels, and even then only if it's still climbing.

SSD swap is better, but not free

Modern NVMe SSDs hit 3–7 GB/s sequential throughput, which makes swap far less catastrophic than it was on spinning disks. The latency gap still hurts though:

Storage type | Typical latency | Relative to RAM
DRAM | ~100 ns | 1×
NVMe SSD (random) | ~20 μs | ~200×
SATA SSD (random) | ~100 μs | ~1,000×
HDD (random) | ~10 ms | ~100,000×

Application memory access is mostly random, not sequential, so the throughput advantage of NVMe doesn't help much. Sustained swapping on an 8 GB machine is still slow, and on Apple Silicon the write amplification can add up over time.

Memory compression changes things on macOS

macOS can compress 2 GB of inactive pages into ~1 GB of physical space. An 8 GB Mac can sustain the equivalent of 12–16 GB of working memory before actually swapping to disk, but compressed pages are slower to access than uncompressed. On macOS, the Memory Pressure graph in Activity Monitor is a better signal of actual system strain than RSS alone.[9]

6. Why RSS is the right metric to watch

RSS is the right number to watch for most developers, for these reasons:

  1. It directly reflects physical RAM pressure. When total RSS across running processes approaches available RAM, things slow down. The Linux OOM killer and Kubernetes pod eviction both factor resident memory into their decisions, via cgroup accounting (memory.usage_in_bytes in cgroup v1).[4]
  2. It's available everywhere. ps aux on Linux/macOS, Real Memory in Activity Monitor, Working Set in Task Manager. Same mental model across all three, even if the exact accounting differs.
  3. RSS uncovers native memory leaks. If RSS climbs steadily while heapUsed stays flat, the problem is in native memory — Buffers, C++ addons, stream handles, or in Bun's case, mimalloc's retained arenas. That's the investigation path you'd miss if you only watched the JS heap.[14]
  4. The trend line tells you more than any single reading. A saw-tooth (rise, GC drop, rise again) is healthy. A staircase with no drops means something is accumulating. You can only see this if you're plotting RSS over time.
  5. It's the number that determines whether your process gets killed. On Linux, the OOM killer scores processes partly by RSS. Kubernetes evicts pods when RSS hits the memory limit. Monitoring RSS is monitoring your process's survival odds.[2]

7. More accurate metrics

RSS is useful but incomplete. These other metrics, where available, give you a more accurate view of what your process is actually doing.

PSS (Proportional Set Size)

Like RSS, but shared library pages are divided proportionally among all processes sharing them. Four processes sharing a 1,000-page library each get 250 pages credited. Sum PSS across all processes and you get an accurate total system memory usage.[12]

Available via /proc/[pid]/smaps or the smem tool on Linux.[1][13]

USS (Unique Set Size)

Only pages exclusively owned by this process, no shared pages at all. USS answers: how much RAM would actually be freed if this process were killed?[13]

Available via smem -s uss on Linux.

heapUsed (process.memoryUsage())

The V8 or JSC heap currently in active use by JavaScript objects. The most direct signal for JS-level leaks. If heapUsed grows without bound while objects should be getting freed, references are being retained in JS code.[4]

Available via process.memoryUsage().heapUsed in Node.js/Bun.

external (process.memoryUsage())

Memory used by C++ objects bound to JavaScript objects: Buffers, native addon allocations, streams. This is where many "JS heap looks fine but RSS is climbing" leaks actually live. A growing external value with stable heapUsed is a classic Bun/Node.js leak pattern.[14]

Available via process.memoryUsage().external.

VmHWM (high water mark)

The peak RSS ever reached by this process, as tracked by the kernel. More informative than current RSS for understanding worst-case behavior. A high VmHWM relative to current RSS means the process hit a big allocation spike and then freed it (or the kernel swapped it out).[1]

Available via /proc/[pid]/status on Linux.

Memory footprint (macOS)

Apple's own metric, accounting for compressed memory, shared frameworks, and dirty pages. More accurate than RSS for macOS-specific analysis. Activity Monitor's default "Memory" column approximates it; the footprint CLI gives the precise value.[9]

macOS only: footprint -p [pid].

8. Common misconceptions

Myth: "If heapUsed is stable, there's no memory leak"
Leaks in JavaScript runtimes often live outside the managed heap. Buffers, native addons, streams, and C++ bindings allocate in native memory. RSS can grow without bound while heapUsed stays flat. Both metrics need to be watched together.[14]

Myth: "RSS should equal heapUsed"
RSS is always larger than heapUsed because it includes the runtime itself, stack memory, native libraries (libuv, OpenSSL, ICU data), JIT-compiled code, and native allocations. A 5:1 ratio at startup is normal. A growing ratio over time is the thing to watch.[4]

Myth: "Bun always uses less memory than Node.js"
Bun is 40–50% lighter at idle. Under sustained load the runtimes converge to within ~15%. Bun also has a documented issue where mimalloc retains freed pages, so RSS can stay elevated long after the GC has cleaned up. Long sessions may show growing RSS that looks like a leak but isn't.[10][11]

Myth: "Calling gc() will fix memory issues"
Manual GC can't free referenced objects, and in Bun's case it can't force mimalloc to return freed pages to the OS. If RSS stays high after a forced GC, you're dealing with retained references or allocator-level page retention, not GC timing.[10]

Myth: "SSD swap makes high memory usage fine"
NVMe SSDs have 200× higher random access latency than RAM. Application memory access is mostly random, so the sequential throughput advantage doesn't help. Sustained swapping is still slow, and on Apple Silicon generates SSD write amplification that adds up over time.

9. Practical guidance for Claude Code sessions

What normal looks like

Based on Claude Code running on Bun (JavaScriptCore + mimalloc) on modern hardware, and cross-referenced with reported issues:[16][17]

RSS range | What it probably means
50–200 MB | Normal idle or light session
200–500 MB | Active session with moderate context — normal
500 MB – 1 GB | Heavy session with large context; acceptable, watch the trend
1–2 GB | Upper end of expected; monitor whether it keeps climbing
2–5 GB | Likely a problem; consider restarting
5+ GB | Something's wrong; restart

Reading the chart patterns

  • Saw-tooth (rise, sharp drop, rise): healthy GC cycles. The drops are the collector reclaiming objects.
  • Staircase (rise, plateau, rise, higher plateau): slow accumulation. Each step is a batch of allocations that didn't get freed. Often event listeners, large context buffers, or subprocess handles.
  • Flat at a high level: probably mimalloc retention. The allocator reserved pages during peak usage and is holding them. Not a leak in the traditional sense, but the process will stay at that RSS floor for the rest of its life.[10]
  • Monotonic climb with no plateaus: genuine leak. Restart.
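These shapes can be roughly classified from a series of RSS samples. A heuristic sketch, where the 5% threshold for what counts as a real rise or drop is an illustrative assumption, not a tuned value:

```javascript
// Rough classifier for the chart shapes described above.
// samplesMB: RSS readings in MB, oldest first.
function classifyRssTrend(samplesMB, dropThreshold = 0.05) {
  if (samplesMB.length < 3) return 'insufficient data';
  let drops = 0;
  let rises = 0;
  for (let i = 1; i < samplesMB.length; i++) {
    const delta = (samplesMB[i] - samplesMB[i - 1]) / samplesMB[i - 1];
    if (delta < -dropThreshold) drops++;      // a sharp drop: GC reclaimed pages
    else if (delta > dropThreshold) rises++;  // a real rise: new allocations
  }
  if (drops > 0) return 'saw-tooth (healthy GC cycles)';
  if (rises > 0) return 'climbing (staircase or genuine leak)';
  return 'flat (possible allocator retention)';
}

console.log(classifyRssTrend([200, 280, 210, 290, 220])); // saw-tooth
console.log(classifyRssTrend([200, 240, 290, 350, 424])); // climbing
console.log(classifyRssTrend([420, 421, 419, 420, 422])); // flat
```

A climbing result can't distinguish staircase from monotonic leak on its own; for that you also need to check whether the series contains plateaus.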

This session in context

This dashboard shows Claude Code on an AMD EPYC Linux VM with 32 GB RAM. RSS ranged from 207 MB to 424 MB over ~13 minutes — about 1% of available RAM. That's comfortably within the 200–500 MB range expected for an active Bun session.

  • Without heap-level data (heapUsed, external), normal session growth, mimalloc retention, and a slow leak all look identical in RSS-only monitoring.[10]
  • The gradual RSS increase here (no sharp GC drops) is consistent with mimalloc page retention as much as with a real leak. To distinguish them, you'd need process.memoryUsage() data sampled alongside RSS.

Monitoring commands

# Linux — one-shot RSS reading
ps -o rss= -p <PID> | awk '{printf "%.0f MB\n", $1/1024}'

# Linux — detailed breakdown including peak RSS (VmHWM)
grep -E 'VmRSS|VmSize|VmHWM|VmSwap' /proc/<PID>/status

# Linux — proportional set size (more accurate than RSS for system-wide accounting)
grep Pss /proc/<PID>/smaps_rollup

# macOS — RSS equivalent
ps -o rss= -p <PID> | awk '{printf "%.0f MB\n", $1/1024}'

# Node.js / Bun — heap-level breakdown
node -e "const m=process.memoryUsage(); console.log(JSON.stringify({
  rss: (m.rss/1e6).toFixed(0)+'MB',
  heap: (m.heapUsed/1e6).toFixed(0)+'/'+(m.heapTotal/1e6).toFixed(0)+'MB',
  ext: (m.external/1e6).toFixed(0)+'MB'
}))"

10. References

  1. Linux man-pages project. proc_pid_status(5) — /proc/[pid]/status field documentation including VmRSS, VmSize, VmHWM, VmSwap. man7.org/linux/man-pages/man5/proc_pid_status.5.html
  2. Linux kernel documentation. Overcommit Accounting — vm.overcommit_memory modes, CommitLimit, OOM killer behaviour. kernel.org/doc/html/latest/mm/overcommit-accounting.html
  4. Node.js Documentation. Understanding and Tuning Memory Usage — process.memoryUsage() fields, heap structure, Buffer external memory. nodejs.org/en/learn/diagnostics/memory/understanding-and-tuning-memory
  7. WebKit Blog. Introducing Riptide: WebKit's Retreating Wavefront Concurrent Garbage Collector — JSC GC design, concurrency model, write barriers. webkit.org/blog/7122/introducing-riptide-…
  8. WebKit Blog. Understanding GC in JSC From Scratch — JSC generational GC, eden space, IsoSubspace, CompleteSubspace. webkit.org/blog/12967/understanding-gc-in-jsc-from-scratch/
  9. Apple Support. Activity Monitor User Guide: View memory usage — Memory Pressure graph, Real Memory vs memory footprint, macOS memory compression. support.apple.com/guide/activity-monitor/…
  10. Bun GitHub Issue #27514. Memory not being released to OS after GC — documented mimalloc page retention; heap shrinks while RSS stays locked. github.com/oven-sh/bun/issues/27514
  11. Zoer.ai. Bun vs Node.js Memory Usage Comparison — idle and load benchmarks across multiple workload types. zoer.ai/posts/zoer/bun-vs-nodejs-memory-usage-comparison
  12. Wikipedia. Proportional Set Size — PSS definition, comparison to RSS and USS. en.wikipedia.org/wiki/Proportional_set_size
  13. LWN.net. How much memory are applications really using? — PSS vs USS vs RSS; smaps breakdown; example values for bash. lwn.net/Articles/230975/
  14. AppSignal Blog. Node.js Memory Limits — What You Should Know — V8 heap segments, Buffer external memory, RSS vs heapUsed discrepancy. blog.appsignal.com/…/nodejs-memory-limits-what-you-should-know.html
  15. Scorpio Software. Memory Information in Task Manager — Working Set, Private Working Set, Commit Size definitions on Windows. scorpiosoftware.net/2023/04/12/memory-information-in-task-manager/
  16. Claude Code GitHub Issue #24840. Very high memory usage on Windows — 13.2 GB RSS / 47 GB commit size reported. github.com/anthropics/claude-code/issues/24840
  17. Claude Code GitHub Issue #33441. Claude Code memory leak — 2.6 GB RSS within 3 minutes of use. github.com/anthropics/claude-code/issues/33441