The JavaScript Event Loop — macro/microtasks, rendering, and avoiding jank

5 min read

A practical mental model of tasks vs microtasks, when the browser renders, and patterns to keep 60fps: rAF, chunking, debouncing, and avoiding microtask traps.

TL;DR

  • Each turn of the browser loop runs one macrotask (e.g., setTimeout, event), then flushes microtasks (e.g., Promise.then, queueMicrotask), then may render a frame.
  • Use microtasks for tiny follow‑ups; use macrotasks (or requestAnimationFrame) to yield back to the UI. Long microtask chains can block rendering.
  • Do DOM writes in requestAnimationFrame; break CPU‑heavy work into frame‑sized chunks to stay under ~16ms per frame.
  • Debounce/throttle input handlers; prefer Web Workers for heavy compute; avoid forced sync layout (read → write → read thrash).
  • Don’t spin on await Promise.resolve()/queueMicrotask loops—they starve the renderer. Yield with await new Promise(requestAnimationFrame) or a macrotask.

1) The mental model (browser)

┌─────────────── Event loop turn ───────────────┐
Macrotask (event, setTimeout, fetch handler, …)
  → Flush all microtasks (Promise.then, queueMicrotask, MutationObserver)
  → Render opportunity (layout, paint, compositing) if needed
  → Idle/rAF callbacks → next turn
└───────────────────────────────────────────────┘

Examples

  • Macrotasks: setTimeout, setInterval, DOM events, message/postMessage, the tasks that deliver network responses (your fetch(...).then handlers themselves run as microtasks), setImmediate (Node.js; non‑standard in browsers).
  • Microtasks: Promise.then/catch/finally, queueMicrotask, MutationObserver callbacks.

Order demo

<script>
  setTimeout(() => console.log("macrotask A"), 0);
  Promise.resolve().then(() => console.log("microtask 1"));
  queueMicrotask(() => console.log("microtask 2"));
  console.log("inline script"); // runs before either queue
</script>
<!-- Order:
inline script
microtask 1
microtask 2
macrotask A
-->

2) Rendering: where requestAnimationFrame fits

  • The browser tries to render at ~60fps (≈16.67ms budget) or the display’s refresh rate.
  • requestAnimationFrame(cb) runs right before the next paint. Ideal for DOM writes and visual updates so the changes land in the next frame.

Pattern: read → measure → rAF write

// Read outside rAF to avoid forcing layout mid-frame
const box = document.querySelector(".box");
const targetTop = box.getBoundingClientRect().top; // read
requestAnimationFrame(() => {
  box.style.transform = `translateY(${Math.max(0, 400 - targetTop)}px)`; // write
});

Yield to next frame

// Resumes just before the next paint; handy for splitting heavy work across frames (yield twice if you need a paint to land first)
await new Promise(requestAnimationFrame);
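
For continuous visual updates, the same API drives a per‑frame loop: the callback does one write, then re‑registers itself. A minimal sketch reusing the box element from the pattern above (the 300px range and speed are arbitrary):

// One DOM write per frame; the callback re-registers itself before the next paint
let startTime;
function tick(now) {                       // now: high-resolution timestamp passed by rAF
  if (startTime === undefined) startTime = now;
  box.style.transform = `translateX(${((now - startTime) / 10) % 300}px)`; // write only, no reads
  requestAnimationFrame(tick);             // schedule the next frame's update
}
requestAnimationFrame(tick);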

3) Micro vs macro: when to use which

  • Microtask (Promise.then, queueMicrotask)
    • Use for: tiny follow‑ups that must run before the browser processes other events (e.g., coalescing state updates; see the sketch after this list).
    • Avoid: long loops or large batches; they block the next render.
  • Macrotask (setTimeout(0), postMessage, MessageChannel)
    • Use for: yielding back to input/rendering, chunking work, scheduling after paint.
  • requestAnimationFrame
    • Use for: DOM writes/animations timed with paint; one update per frame.
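
A minimal sketch of the coalescing idea mentioned above: many synchronous state changes collapse into one flush that runs before the browser handles the next task or paint. The component objects and their update() method are hypothetical stand‑ins for whatever your state layer provides.

const pending = new Set();
let flushQueued = false;

function markDirty(component) {        // component.update() is a placeholder API
  pending.add(component);
  if (flushQueued) return;             // a flush is already scheduled for this turn
  flushQueued = true;
  queueMicrotask(() => {               // runs after the current call stack, before the next task/render
    flushQueued = false;
    for (const c of pending) c.update();
    pending.clear();
  });
}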

Anti‑pattern

// Starves rendering: an endless microtask chain
function burn() {
  queueMicrotask(burn); // or await Promise.resolve() in a loop
}
burn();

Fix

// Yield to the browser periodically
async function burnButNice() {
  for (let i = 0; i < 1e6; i++) {
    // ...work...
    if (i % 1000 === 0) await new Promise(requestAnimationFrame);
  }
}

4) Chunk heavy work to avoid jank

Frame‑budgeted chunking

async function processBigList(items) {
  const CHUNK_MS = 8; // leave time for layout/paint
  let i = 0;
  while (i < items.length) {
    const start = performance.now();
    while (i < items.length && performance.now() - start < CHUNK_MS) {
      doWork(items[i++]);
    }
    // Let the browser breathe before continuing
    await new Promise(requestAnimationFrame);
  }
}

MessageChannel vs setTimeout

// Macrotask with minimal clamping; often fires sooner than a nested/clamped setTimeout(0)
const { port1, port2 } = new MessageChannel();
port1.onmessage = () => step();
function step() {
  /* do a small chunk of work */
  if (moreWorkRemaining()) port2.postMessage(0); // placeholder predicate: reschedule only while work is left
}
port2.postMessage(0); // kick off the first chunk

Use a Worker for CPU‑heavy tasks

// main.js
const w = new Worker("worker.js");
w.onmessage = e => render(e.data);
w.postMessage({ items });
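
For completeness, the worker side might look like the sketch below; heavyCompute is a placeholder for whatever CPU‑bound function you are offloading.

// worker.js: runs off the main thread, so long loops here don't block input or rendering
self.onmessage = (e) => {
  const result = e.data.items.map(heavyCompute); // heavyCompute: placeholder for the expensive work
  self.postMessage(result);                      // hand results back to main.js's onmessage
};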

5) Inputs, debouncing, and throttling

// Debounce: run after user stops typing for N ms
function debounce(fn, ms = 150) {
  let t;
  return (...args) => { clearTimeout(t); t = setTimeout(() => fn(...args), ms); };
}

// Throttle: run at most once per interval
function throttle(fn, ms = 100) {
  let last = 0, queued, ctx, args;
  return function (...a) {
    const now = Date.now(); ctx = this; args = a;
    if (now - last >= ms) {
      last = now; fn.apply(ctx, args); queued = false;   // leading call
    } else if (!queued) {
      queued = true;                                     // trailing call with the latest args
      setTimeout(() => { last = Date.now(); queued = false; fn.apply(ctx, args); }, ms - (now - last));
    }
  };
}

Use passive listeners for touch and wheel handlers (addEventListener('touchmove', handler, { passive: true })) so the browser can keep scrolling without waiting to see whether the handler calls preventDefault; the scroll event itself isn't cancelable, so passive changes nothing there.
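
Putting the two together, a sketch of wiring the debounce above to a search box and keeping a wheel handler passive; the #search selector, fetchSuggestions, and updateParallax are placeholder names:

// Debounced search: the request fires only after typing pauses for 200ms
const search = document.querySelector("#search");
search.addEventListener("input", debounce((e) => {
  fetchSuggestions(e.target.value);   // placeholder for your request logic
}, 200));

// Passive: promises the browser this handler never calls preventDefault, so scrolling isn't held up
window.addEventListener("wheel", (e) => {
  updateParallax(e.deltaY);           // placeholder for a cheap visual update
}, { passive: true });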


6) Avoid forced synchronous layout

  • Avoid read → write → read patterns that cause reflow thrash. Batch reads first, then writes.
  • Don’t query layout (getBoundingClientRect, offsetHeight, etc.) after writes in the same turn unless you must.

// Bad
el.style.width = "400px";             // write
const h = el.offsetHeight;            // read → forces layout

// Good
const h = el.offsetHeight;            // read
requestAnimationFrame(() => {         // write next frame
  el.style.width = h + "px";
});

7) Node.js notes (quick view)

  • Node also has microtasks (Promise.then, queueMicrotask, process.nextTick) and macrotasks (setTimeout, setImmediate, I/O callbacks).
  • process.nextTick callbacks run before all other microtasks; use them sparingly or they can starve the event loop (see the ordering demo below).
  • Prefer setImmediate over setTimeout(0) when you want to run immediately after I/O callbacks.

setTimeout(() => console.log("timeout"), 0);
setImmediate(() => console.log("immediate"));
// Order can vary; after I/O, setImmediate tends to fire first.
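
A small ordering demo (Node only): nextTick callbacks drain before promise microtasks, and both run before any timer.

process.nextTick(() => console.log("nextTick"));
Promise.resolve().then(() => console.log("promise"));
setTimeout(() => console.log("timeout"), 0);
console.log("sync");
// Logs: sync, nextTick, promise, timeout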

8) Observability: find jank

// Long Tasks API: flag main-thread tasks >50ms
new PerformanceObserver((list) => {
  for (const e of list.getEntries()) {
    console.warn("Long task:", Math.round(e.duration), "ms", e.name || e.attribution);
  }
}).observe({ entryTypes: ["longtask"] });

Also track FPS, input delay, and time‑to‑interactive in your perf tooling.
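
As a rough dev‑only starting point, a frame counter built on requestAnimationFrame approximates FPS without extra tooling; treat it as a sketch, not a substitute for real profiling.

// Counts rAF callbacks per second; a busy main thread shows up as a dropping number
let frames = 0;
let lastReport = performance.now();
function countFrame(now) {
  frames++;
  if (now - lastReport >= 1000) {
    console.log("~FPS:", frames);
    frames = 0;
    lastReport = now;
  }
  requestAnimationFrame(countFrame);
}
requestAnimationFrame(countFrame);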


Pitfalls & fast fixes

| Pitfall | Why it janks | Fix |
|---|---|---|
| Huge work inside a microtask loop | Blocks render indefinitely | Break work; yield via rAF or macrotask |
| DOM read/write ping‑pong | Forces layout repeatedly | Batch reads then writes; use rAF for writes |
| Heavy work on main thread | Input & scroll lag | Move to Web Workers; stream results |
| setTimeout(0) everywhere | Coarse scheduling, clamped timers | Prefer MessageChannel or rAF where visual |
| Synchronous JSON parse of large data | Long task freeze | Use worker + streaming/parse in chunks |
| process.nextTick spam (Node) | Starves other tasks | Use sparingly; prefer micro/macrotask queues |


Quick checklist

  • [ ] DOM writes in requestAnimationFrame; reads batched separately.
  • [ ] Chunk heavy work and yield each frame.
  • [ ] Use microtasks only for tiny follow‑ups.
  • [ ] Debounce/throttle input; passive scroll/touch.
  • [ ] Avoid forced layout; watch Long Tasks.
  • [ ] Offload CPU to Web Workers when possible.

One‑minute adoption plan

  1. Wrap visual updates in requestAnimationFrame and batch reads/writes.
  2. Replace loops that do lots of work with frame‑budgeted chunks (yield via rAF).
  3. Add debounce/throttle to high‑frequency handlers (input/scroll).
  4. Turn on a Long Tasks observer in dev/staging; fix >50ms offenders.
  5. Move heavy compute to a Worker or chunk it if it must stay on main.