
Why a TypeScript parser can beat Rust+WASM in real workloads

2026-03-21 • inspired by today’s Hacker News discussion: “We rewrote our Rust WASM parser in TypeScript and it got faster”

Concept illustration comparing Rust+WASM parser overhead against TypeScript native execution path.

The lesson from this HN thread is counterintuitive but important: the fastest inner loop does not guarantee the fastest end-to-end path. If your system crosses expensive runtime boundaries (JS ↔ WASM calls, process hops, serialization), those crossings can dominate total latency.
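A back-of-the-envelope model makes this concrete. The numbers below are invented for illustration: a WASM core that does the per-call work twice as fast can still lose end-to-end once a fixed per-call crossing cost is added.

```typescript
// Hypothetical latency model: every call pays a fixed boundary-crossing
// overhead plus the core work time. All numbers are made up for illustration.
function totalLatencyMs(
  calls: number,
  crossingOverheadMs: number,
  coreMsPerCall: number,
): number {
  return calls * (crossingOverheadMs + coreMsPerCall);
}

// WASM core: 2x faster per call, but ~0.05 ms of JS <-> WASM handoff each time.
const wasmTotal = totalLatencyMs(10_000, 0.05, 0.02);
// TypeScript core: slower per call, but no crossing cost in a single runtime.
const tsTotal = totalLatencyMs(10_000, 0, 0.04);

console.log({ wasmTotal, tsTotal }); // the "slower" TS core wins end-to-end
```

The exact constants don't matter; what matters is that overhead scales with the number of crossings, not with how fast the core is.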

Where the time actually goes

In a JS ↔ WASM pipeline, the parser's inner loop is only one stage. Each call into the WASM module typically means encoding a JS string into bytes, copying those bytes into WASM linear memory, and copying or re-decoding the result on the way back. For small, frequent inputs, that per-call handoff can cost more than the parse itself.
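As a stand-in for that handoff, the sketch below uses TextEncoder/TextDecoder to show the copies a string round-trip implies, before any parsing happens at all (the byte accounting is illustrative, not a measurement of any real WASM runtime):

```typescript
// Stand-in for the JS -> WASM string handoff: the input is encoded into
// bytes (copy #1, as if into linear memory) and the result is decoded back
// into a JS string (copy #2). The parser hasn't even run yet.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

function roundTrip(input: string): { output: string; bytesCopied: number } {
  const bytes = encoder.encode(input); // copy in
  const output = decoder.decode(bytes); // copy out
  return { output, bytesCopied: bytes.length * 2 };
}

const doc = "let x = 1;".repeat(1_000); // 10,000 ASCII chars
const { output, bytesCopied } = roundTrip(doc);
console.log({ inputChars: doc.length, bytesCopied });
```

A single-runtime TypeScript parser skips both copies entirely: the string it receives is the string it parses.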

Practical optimization order

Before rewriting in a lower-level language, profile the whole path and split the time by stage. If handoff overhead dominates, reducing the number of crossings usually pays off sooner than micro-optimizing parser internals.

// Timing each pipeline stage; tokenize/parse/transform stand for your own
// stage functions. performance.now() is available in browsers and Node.js.
const t0 = performance.now();
const tokens = tokenize(input);              // stage A
const t1 = performance.now();
const ast = parse(tokens);                   // stage B
const t2 = performance.now();
const result = transform(ast);               // stage C
const t3 = performance.now();

console.log({
  tokenizeMs: t1 - t0,
  parseMs: t2 - t1,
  transformMs: t3 - t2,
  totalMs: t3 - t0,
});

Rule of thumb: optimize architecture first, implementation second. A single-runtime path with fewer crossings can beat a theoretically faster core hidden behind expensive glue.
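When a boundary can't be removed, it can often be crossed less. The toy sketch below (all names hypothetical) counts crossings for a chatty per-item design versus a batched one; the fixed overhead is paid once per call, so batching shrinks it by the batch size:

```typescript
// Toy model of a runtime boundary: each call pays a fixed overhead
// before any real work happens. We count crossings instead of timing them.
let boundaryCrossings = 0;

function boundaryCall<T, R>(fn: (arg: T) => R, arg: T): R {
  boundaryCrossings += 1; // stand-in for serialize + copy + context switch
  return fn(arg);
}

const parseOne = (s: string): number => s.length; // "core work"
const parseMany = (ss: string[]): number[] => ss.map(parseOne);

const inputs = Array.from({ length: 1_000 }, (_, i) => `item ${i}`);

// Chatty design: one crossing per item.
boundaryCrossings = 0;
inputs.forEach((s) => boundaryCall(parseOne, s));
const chattyCrossings = boundaryCrossings;

// Batched design: one crossing for the whole workload.
boundaryCrossings = 0;
boundaryCall(parseMany, inputs);
const batchedCrossings = boundaryCrossings;

console.log({ chattyCrossings, batchedCrossings }); // 1000 vs 1
```

This is the architectural lever the HN post illustrates: the TypeScript rewrite won not because JS beats Rust per instruction, but because the single-runtime design eliminated the glue.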

Source inspiration: Hacker News front page