
Understanding CPU profiling is easiest when you see it applied to real-world problems. This article walks through practical examples, starting with profiling a simple script and progressing to more complex builds that involve multiple threads and processes. You will see what current tools make possible, where their limitations appear, and which strategies let you approach profiling your own projects with greater confidence.
The following example demonstrates how to CPU profile a Node.js script using the --cpu-prof flag.
console.log('exmpl-script.js PID', process.pid);
let sum = 0;
for (let i = 0; i < 10000000; i++) {
  sum += Math.sqrt(i);
}
process.exit(0);
Command:
node --cpu-prof exmpl-script.js
# This will NOT work: placed after the script, the flag is passed to the script as an argument instead of to Node.js
# node ./exmpl-script.js --cpu-prof
This command will CPU profile the exmpl-script.js execution and write the result to a file named CPU.<timestamp>.<pid>.<tid>.<seq>.cpuprofile in the current working directory.
Output:
/root
└─ CPU.<timestamp>.<pid>.<tid>.<seq>.cpuprofile
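A .cpuprofile file is plain JSON, so you can inspect it without any tooling. As an illustrative sketch (the helper and the hand-written sample profile below are hypothetical, not output of a real run), you can sum the sampled time per function to get a quick self-time overview:

```javascript
// Minimal sketch: summarize a parsed .cpuprofile by self time per function.
// The shape follows the Chrome DevTools CPU profile format: nodes (call
// frames), samples (node ids), timeDeltas (microseconds between samples).
// Note: real profiles have alignment subtleties between samples and deltas;
// this is a rough approximation for orientation, not an exact accounting.
function selfTimeByFunction(profile) {
  const nameById = new Map(
    profile.nodes.map((n) => [n.id, n.callFrame.functionName || '(anonymous)'])
  );
  const totals = new Map();
  profile.samples.forEach((nodeId, i) => {
    const name = nameById.get(nodeId);
    const delta = profile.timeDeltas[i] ?? 0;
    totals.set(name, (totals.get(name) ?? 0) + delta);
  });
  // Sort descending by accumulated self time (µs).
  return [...totals.entries()].sort((a, b) => b[1] - a[1]);
}

// Tiny hand-written profile for illustration (not real captured data).
const sampleProfile = {
  nodes: [
    { id: 1, callFrame: { functionName: '(root)' }, children: [2] },
    { id: 2, callFrame: { functionName: 'slowFunction' }, children: [] },
  ],
  samples: [2, 2, 1],
  timeDeltas: [100, 250, 50],
};

for (const [name, micros] of selfTimeByFunction(sampleProfile)) {
  console.log(`${name}: ${micros} µs`);
}
// → slowFunction: 350 µs
// → (root): 50 µs
```

The same idea scales to a folder of profiles: parse each file, rank functions, and compare across runs before reaching for a visual tool.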
When a program creates multiple worker threads, you can use the --cpu-prof flag in the same way; Node.js writes a separate profile per thread, distinguished by the <tid> part of the filename.
import { Worker, threadId } from 'worker_threads';
const workerCodeESM = `import { threadId as t } from 'worker_threads';
console.log('Worker PID:', process.pid, 'TID:', t);`;
const workerDataURLString =
'data:text/javascript,' + encodeURIComponent(workerCodeESM);
new Worker(new URL(workerDataURLString));
new Worker(new URL(workerDataURLString));
console.log('Main PID:', process.pid, 'TID:', threadId);
Command:
node --cpu-prof --cpu-prof-dir=./cpu-prof-threads ./exmpl-create-threads.js
Output:
root/
└─ cpu-prof-threads/
   ├─ CPU.<timestamp>.<pid>.0.<seq>.cpuprofile  (main thread)
   ├─ CPU.<timestamp>.<pid>.1.<seq>.cpuprofile  (worker)
   └─ CPU.<timestamp>.<pid>.2.<seq>.cpuprofile  (worker)
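The filename itself tells you which process and thread produced each profile. A small illustrative parser (hypothetical helper; the timestamp segment can itself contain dots and its exact layout varies between Node.js versions, so it is matched opaquely):

```javascript
// Parse CPU.<timestamp>.<pid>.<tid>.<seq>.cpuprofile filenames so a batch of
// profiles can be grouped by process and thread. The timestamp is matched
// greedily because it may contain dots (e.g. a date.time pair).
function parseProfileName(name) {
  const m = name.match(/^CPU\.(.+)\.(\d+)\.(\d+)\.(\d+)\.cpuprofile$/);
  if (!m) return null;
  return {
    timestamp: m[1],
    pid: Number(m[2]),
    tid: Number(m[3]),
    seq: Number(m[4]),
  };
}

console.log(parseProfileName('CPU.20250101.120000.4242.1.001.cpuprofile'));
// → { timestamp: '20250101.120000', pid: 4242, tid: 1, seq: 1 }
```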
For a more real-life example, consider profiling the Angular framework's build process, which makes extensive use of worker threads to distribute work across its various build steps.
Command:
node --cpu-prof --cpu-prof-dir=./angular-build node_modules/.bin/ng build
Output:
/root
├─ CPU.
32 .cpuprofile
In the earlier examples, we could use relative paths because the CWD stayed the same. In practice, this is often not the case, and your files may end up in different folders depending on the current working directory.
You can profile multiple processes and store their results together in a single folder.
import { spawn } from 'node:child_process';
import { threadId as t } from 'node:worker_threads';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { mkdirSync } from 'node:fs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Normally these folders already exist, because each executed script lives in its own folder; here we create them up front.
mkdirSync(join(process.cwd(), 'child-process-1'), { recursive: true });
mkdirSync(join(process.cwd(), 'child-process-2'), { recursive: true });
const childScriptPath = join(__dirname, 'exmpl-script.js');
spawn(process.execPath, [childScriptPath], {
  stdio: 'inherit',
  cwd: join(process.cwd(), 'child-process-1'),
});
spawn(process.execPath, [childScriptPath], {
  stdio: 'inherit',
  cwd: join(process.cwd(), 'child-process-2'),
});
console.log('Parent PID:', process.pid, 'TID:', t);
When doing this, make sure the output directory specified by --cpu-prof-dir is an absolute path; otherwise each child process resolves it against its own working directory and the profiles end up scattered.
Command:
NODE_OPTIONS="--cpu-prof --cpu-prof-dir=/Users/user/workspace/cpu-prof-processes" \
node ./exmpl-spawn-processes.js

# This will not work, as the --cpu-prof params are not passed to the child processes
# node --cpu-prof --cpu-prof-dir=./cpu-prof-processes ./exmpl-spawn-processes.js

# This will not work, as the directory is relative to the CWD, which is not the same as the script location
# NODE_OPTIONS="--cpu-prof --cpu-prof-dir=./cpu-prof-processes" node ./exmpl-spawn-processes.js
Note: The NODE_OPTIONS environment variable is required to apply CPU profiling options to all child processes. This ensures each spawned process generates its own .cpuprofile file in the specified directory. Passing --cpu-prof only as a command-line argument will not work, as the flag is not forwarded to child processes automatically.
Output:
/root
└─ cpu-prof-processes/
   ├─ CPU.<timestamp>.<parent-pid>.0.<seq>.cpuprofile
   ├─ CPU.<timestamp>.<child-1-pid>.0.<seq>.cpuprofile
   └─ CPU.<timestamp>.<child-2-pid>.0.<seq>.cpuprofile
One practical real-world example is Nx, a build system for managing smart monorepos. Building all projects in a monorepo with Nx spawns a separate process per project, along with additional threads depending on the build tool in use.
Command:
NODE_OPTIONS="--cpu-prof --cpu-prof-dir=/Users/user/workspace/cpu-prof-nx-build-many" \
nx run-many -t build --skip-nx-cache
Output:
/root
└─ cpu-prof-nx-build-many/
   ├─ CPU.<timestamp>.<pid>.<tid>.<seq>.cpuprofile
   ├─ …
   └─ (one or more profiles per spawned process)
In practice, you would rarely profile that many processes at once. For example, Nx's TUI already provides a comprehensive overview of build times, and most other solutions include similar rough stats. From there, you can focus on the slowest part of the process and profile it in detail.
Currently, there is no officially supported tool for visualizing multiple CPU profiles generated during a single measurement. The most we can do with the default tools is apply a few best practices and follow established processes.
Less is more
Don't waste time creating 100 profiles that are hard to make sense of. Instead, run a measurement, open a few profiles, and work through them to form hypotheses. Once you have more clues, plan and run targeted measurements.
Rename your profiles
While building your understanding, it helps to rename the original profile files to something more meaningful that you can remember and come back to later.
Isolate your measures
After you have identified the culprit, isolate the measurement to only what you need. For example, if you find a slow function, run just that function with --eval (-e):
node --cpu-prof -e "
const { slowFunction } = require('./slow-module.js');
slowFunction();
"
Focus on quick wins and document your findings
Prioritize issues you understand, that have high impact, and are easy to fix. Avoid spending time on unclear, low-impact, or hard-to-guess problems.
Use the right tools
While there is no official tool for visualizing multiple CPU profiles, you can use @push-based/cpu-prof. It is a CLI that helps create and merge multiple CPU profiles into a single profile that the Chrome DevTools Performance Panel can read.
Exploring practical CPU profiling examples is one of the most effective ways to improve your performance analysis skills. Even though today’s tools have their limits, an organized and focused approach makes it much easier to cut through complexity and pinpoint what matters most. The profiling scenarios discussed above provide a strong foundation to start profiling your own projects and the confidence to keep exploring and optimizing for even better performance.