Pushing the Limits: Creative Solutions to Stubborn JavaScript Problems
A hands-on collection of challenging JavaScript problems and creative, pragmatic solutions - from cancelling arbitrary async tasks and avoiding memory leaks to reactive state without a framework and deterministic object hashing. Learn patterns, trade-offs, and code you can adapt today.
JavaScript is full of deceptively simple problems that become surprisingly hard at scale. This post collects a set of stubborn JavaScript challenges and shows creative, practical solutions that often combine language features in interesting ways.
For each problem you’ll get a concise description, a creative solution, example code, and notes on trade-offs and browser support.
1) Event delegation that survives dynamic content and Shadow DOM
Problem: You want one listener on a container to handle clicks on dynamically added items - but some items live in a shadow DOM or deeper composed trees, and event.target doesn’t always reflect the element you care about.
Creative solution: Use event delegation with event.composedPath() (which crosses shadow boundaries) and a helper that walks the composed path for a matching selector. Fall back to walking parentNode/host manually if composedPath isn’t available.
Example:
function delegate(root, selector, eventName, handler, options) {
  root.addEventListener(
    eventName,
    e => {
      const path = e.composedPath ? e.composedPath() : composedPathFallback(e);
      for (const node of path) {
        if (!node || node === root || node === document) break;
        if (node.nodeType === 1 && node.matches && node.matches(selector)) {
          handler.call(node, e);
          break;
        }
      }
    },
    options
  );
}
function composedPathFallback(e) {
  const path = [];
  let node = e.target;
  while (node) {
    path.push(node);
    node = node.parentNode || node.host; // shadow fallback
  }
  path.push(window);
  return path;
}
// Usage (a regular function, so `this` is bound by handler.call;
// an arrow function here would ignore that binding)
delegate(document.body, '.todo .remove', 'click', function (e) {
  console.log('remove', this); // this === matched element
});
Notes: composedPath is supported in all modern browsers; the fallback helps older environments. Works across shadow DOM boundaries and for dynamically inserted nodes.
References: MDN on Event.composedPath - https://developer.mozilla.org/en-US/docs/Web/API/Event/composedPath
2) Debouncing/throttling across component instances
Problem: You have many instances of a widget that all call the same expensive operation (e.g., auto-save). Individual debouncing on each instance still floods the backend if many instances trigger simultaneously.
Creative solution: Maintain a global registry keyed by resource or operation. Debounce per resource instead of per instance. Use a Map for string keys, or a WeakMap for object keys so entries can be garbage-collected.
Example:
const debounceRegistry = new Map();
function debounceGlobal(key, fn, wait = 300) {
  if (!debounceRegistry.has(key)) {
    debounceRegistry.set(key, { timer: null, lastArgs: null });
  }
  const entry = debounceRegistry.get(key);
  return function (...args) {
    entry.lastArgs = args;
    clearTimeout(entry.timer);
    entry.timer = setTimeout(() => {
      fn(...entry.lastArgs);
      entry.timer = null;
    }, wait);
  };
}
// Usage: multiple components share one debounced save for doc-123, so only one network call occurs
const debouncedSave = debounceGlobal('save-doc-123', data => sendSave(data));
// Component A/B/C can all call:
debouncedSave({ content: '...' });
Notes: Keys can be strings, Symbols, or objects (back the registry with a WeakMap for object keys). String-keyed entries live until you delete them, so clear the registry when a resource is disposed. Good for rate-limiting shared resources.
3) Cancel arbitrary async tasks (not just fetch)
Problem: fetch supports AbortController, but what about custom promises or chained async operations? How do you cancel a task cleanly?
Creative solution: Use an AbortController as a cancellation token: wrap the task in a promise that rejects with an AbortError when the signal fires, and provide helpers that wire the abort signal into your async flows.
Example:
function cancellable(promiseFactory, signal) {
  if (signal && signal.aborted)
    return Promise.reject(new DOMException('Aborted', 'AbortError'));
  return new Promise((resolve, reject) => {
    const onAbort = () => reject(new DOMException('Aborted', 'AbortError'));
    signal && signal.addEventListener('abort', onAbort);
    Promise.resolve()
      .then(() => promiseFactory(signal))
      .then(resolve, reject)
      .finally(() => signal && signal.removeEventListener('abort', onAbort));
  });
}
// Example async job that knows about signal
async function heavyComputation(signal) {
  for (let i = 0; i < 1e9; i++) {
    if (signal && signal.aborted)
      throw new DOMException('Aborted', 'AbortError');
    // chunked work ... yield to the task queue so an abort() elsewhere can run
    // (await Promise.resolve() only yields to microtasks and would starve tasks)
    if (i % 1e6 === 0) await new Promise(r => setTimeout(r));
  }
  return 'done';
}
const ac = new AbortController();
cancellable(() => heavyComputation(ac.signal), ac.signal)
  .then(console.log)
  .catch(err => console.log('cancelled', err));
// later
ac.abort();
Notes: Truly cancelling CPU work requires cooperation from the work itself (periodic checks of signal.aborted), or offloading to a Web Worker, which you can terminate outright.
References: AbortController - https://developer.mozilla.org/en-US/docs/Web/API/AbortController
4) Deep cloning complex structures (cycles, dates, buffers, functions?)
Problem: JSON can’t handle Dates, Maps, Sets, cyclical graphs, or functions. You need a clone that preserves as much as possible.
Creative solution: Prefer the native structuredClone (fast; handles cycles and many built-ins). For environments without it, a MessageChannel round-trip performs the same structured clone. For functions, decide on a serialization policy (often better to keep functions out of data, but if necessary, tag and resurrect them explicitly).
Example:
async function smartClone(value) {
  if (typeof structuredClone === 'function') return structuredClone(value);
  // fallback: a MessageChannel round-trip also performs a structured clone
  return new Promise((resolve, reject) => {
    const { port1, port2 } = new MessageChannel();
    port1.onmessage = e => resolve(e.data);
    try {
      port2.postMessage(value); // throws DataCloneError on uncloneable values
    } catch (err) {
      reject(err);
    }
  });
}
// Usage
const original = { date: new Date(), map: new Map([[1, 'a']]) };
smartClone(original).then(clone => {
  console.log(clone.date instanceof Date, clone.map instanceof Map);
});
If you must serialize functions (rare in data exchange), store them as source strings with a safety policy, then reconstruct with new Function(...) in a controlled environment.
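A minimal sketch of such a tag-and-resurrect policy (the __fn__ tag is an arbitrary convention here, and new Function must only ever see trusted input):

```javascript
// Tag functions during stringify, revive them during parse.
// Never feed untrusted input to new Function.
function serializeWithFunctions(obj) {
  return JSON.stringify(obj, (key, value) =>
    typeof value === 'function' ? { __fn__: value.toString() } : value
  );
}

function reviveWithFunctions(json) {
  return JSON.parse(json, (key, value) =>
    value && typeof value === 'object' && typeof value.__fn__ === 'string'
      ? new Function('return (' + value.__fn__ + ')')()
      : value
  );
}

const revived = reviveWithFunctions(
  serializeWithFunctions({ label: 'double', fn: x => x * 2 })
);
console.log(revived.fn(21)); // 42
```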
References: structuredClone - https://developer.mozilla.org/en-US/docs/Web/API/structuredClone
5) Memory leaks from closures and DOM references
Problem: Long-lived closures accidentally retain DOM nodes or heavy graphs (event listeners, timers), causing leaks in single-page apps.
Creative solution: Use WeakRef and FinalizationRegistry for advanced cleanup scenarios, use a MutationObserver to detect node removal and automatically clean up listeners, and prefer WeakMap/WeakSet for caches tied to nodes.
Example:
// attach handler with automatic cleanup when node is removed from DOM
function attachSmart(el, event, handler) {
  el.addEventListener(event, handler);
  // NB: one document-wide observer per attachment is simple but costly;
  // production code should share a single observer across attachments
  const mo = new MutationObserver((mutations, obs) => {
    if (!document.contains(el)) {
      el.removeEventListener(event, handler);
      obs.disconnect();
    }
  });
  mo.observe(document, { childList: true, subtree: true });
}
// Advanced: weak cache keyed by node
const nodeData = new WeakMap();
function setNodeData(node, data) {
  nodeData.set(node, data); // doesn't prevent GC when node is gone
}
Notes: WeakRef/FinalizationRegistry are powerful but subtle; their timing is non-deterministic. Prefer explicit lifecycle management when possible.
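For the advanced case, here is a sketch of a weakly-held cache: values stay collectable, and a FinalizationRegistry (eventually, at the engine's discretion) prunes dead entries:

```javascript
// A cache that holds its values weakly. Entries become collectable once
// nothing else references the value; the registry callback then removes
// the dead WeakRef - at some non-deterministic later point.
const cache = new Map(); // key -> WeakRef(value)
const registry = new FinalizationRegistry(key => {
  const ref = cache.get(key);
  if (ref && ref.deref() === undefined) cache.delete(key);
});

function cachePut(key, value) {
  cache.set(key, new WeakRef(value));
  registry.register(value, key);
}

function cacheGet(key) {
  const ref = cache.get(key);
  return ref ? ref.deref() : undefined; // undefined once the value was GC'd
}

const obj = { big: 'payload' };
cachePut('a', obj);
console.log(cacheGet('a') === obj); // true while obj is strongly referenced
```

Callers must always handle a cacheGet miss, since collection can happen at any time.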
References: WeakRef - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakRef
6) Scheduling heavy CPU work without blocking the UI
Problem: Large synchronous loops or recursion freeze the UI and cause jank.
Creative solution: Chunk the work and yield back to the event loop between chunks. Use requestIdleCallback where available, falling back to setTimeout; for lower-latency slicing, schedule via MessageChannel (its messages fire as tasks without setTimeout clamping). For real parallelism, use Web Workers.
Example (chunking with requestIdleCallback fallback):
const schedule = window.requestIdleCallback
  ? cb => window.requestIdleCallback(cb) // keep window as receiver to avoid Illegal invocation
  : cb => setTimeout(() => cb({ timeRemaining: () => 50 }), 0);
function processLargeArray(items, processItem, onDone) {
  let i = 0;
  function work(deadline) {
    while (i < items.length && deadline.timeRemaining() > 1) {
      processItem(items[i++]);
    }
    if (i < items.length) schedule(work);
    else onDone();
  }
  schedule(work);
}
An alternative is a MessageChannel-based scheduler: posted messages fire as regular tasks, but without the ~4 ms clamping that nested setTimeout callbacks are subject to, giving lower latency when you need immediate chunking.
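A sketch of such a MessageChannel scheduler (the chunkSize parameter is an arbitrary knob added for illustration):

```javascript
// A lower-latency alternative to setTimeout(0) chunking: a MessageChannel
// message fires as a regular task, without nested-setTimeout clamping.
const channel = new MessageChannel();
const pending = [];
channel.port1.onmessage = () => pending.shift()();

function schedule(fn) {
  pending.push(fn);
  channel.port2.postMessage(null);
}

// process `items` in fixed-size chunks, yielding to the event loop between chunks
function processInChunks(items, processItem, chunkSize, onDone) {
  let i = 0;
  function work() {
    const end = Math.min(i + chunkSize, items.length);
    while (i < end) processItem(items[i++]);
    if (i < items.length) schedule(work);
    else onDone();
  }
  schedule(work);
}
```

Unlike the deadline-driven version above, chunk size is fixed here; tune it so one chunk stays well under a frame budget.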
References: requestIdleCallback - https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback
7) Deterministic hashing of objects for change detection
Problem: JSON.stringify output depends on key insertion order and throws on cycles; you want a stable hash to detect changes in large objects.
Creative solution: Implement a stable serialization with sorted keys and cycle handling, then hash it with the Web Crypto API (crypto.subtle.digest) to produce a compact fingerprint.
Example:
function stableStringify(obj) {
  const seen = new WeakSet();
  function helper(value) {
    if (value && typeof value === 'object') {
      if (value instanceof Date) return JSON.stringify(value); // stable ISO form
      if (seen.has(value)) return '"__cycle__"'; // NB: also flags shared (non-cyclic) references
      seen.add(value);
      if (Array.isArray(value)) return '[' + value.map(helper).join(',') + ']';
      const keys = Object.keys(value).sort();
      return (
        '{' +
        keys.map(k => JSON.stringify(k) + ':' + helper(value[k])).join(',') +
        '}'
      );
    }
    return JSON.stringify(value);
  }
  return helper(obj);
}
async function hashObject(obj) {
  const s = stableStringify(obj);
  const buf = new TextEncoder().encode(s);
  const digest = await crypto.subtle.digest('SHA-256', buf);
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}
Notes: This is robust for change detection. For very large objects you may want incremental hashing or persistent caching.
References: SubtleCrypto.digest - https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest
8) Build a tiny reactive system with Proxy (no framework)
Problem: You want reactivity with memoized updates and batched notification without pulling in a framework.
Creative solution: Use Proxy to intercept gets/sets, track dependencies, and schedule batched updates with a microtask queue. This is the core idea behind modern reactivity systems.
Example (minimal):
function reactive(obj) {
  const subs = new Set();
  const queue = new Set();
  let scheduled = false;
  function scheduleFlush() {
    if (scheduled) return;
    scheduled = true;
    Promise.resolve().then(() => {
      for (const fn of queue) fn();
      queue.clear();
      scheduled = false;
    });
  }
  const proxy = new Proxy(obj, {
    get(target, key) {
      // in a full system we'd track which effect is active
      return Reflect.get(target, key);
    },
    set(target, key, value) {
      const res = Reflect.set(target, key, value);
      for (const sub of subs) {
        queue.add(sub);
      }
      scheduleFlush();
      return res;
    },
  });
  return {
    proxy,
    subscribe(fn) {
      subs.add(fn);
      return () => subs.delete(fn);
    },
  };
}
// Usage
const { proxy: state, subscribe } = reactive({ count: 0 });
subscribe(() => console.log('render', state.count));
state.count++;
state.count++;
// render logs once with final value due to batching
Notes: This minimal example omits dependency tracking; real systems record which properties each effect uses to avoid over-updating. Still, it shows how Proxy plus microtask batching coalesces many writes into a single notification.
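The dependency tracking the note mentions can itself be sketched in a few lines - a simplified, synchronous version of what real reactivity systems do (batching is omitted here for clarity):

```javascript
// Track which properties each effect reads, then notify only the effects
// that depend on the key that changed.
let activeEffect = null;

function effect(fn) {
  activeEffect = fn;
  fn(); // run once to record which properties the effect reads
  activeEffect = null;
}

function reactiveTracked(obj) {
  const deps = new Map(); // key -> Set of effects
  return new Proxy(obj, {
    get(target, key) {
      if (activeEffect) {
        if (!deps.has(key)) deps.set(key, new Set());
        deps.get(key).add(activeEffect);
      }
      return Reflect.get(target, key);
    },
    set(target, key, value) {
      const res = Reflect.set(target, key, value);
      (deps.get(key) || []).forEach(fn => fn());
      return res;
    },
  });
}

// Usage
const tracked = reactiveTracked({ a: 1, b: 2 });
effect(() => console.log('a is', tracked.a)); // logs immediately
tracked.b = 3; // no log: this effect never read `b`
tracked.a = 5; // logs 'a is 5'
```

Each effect re-runs only when a property it actually read changes, which is what prevents the over-updating of the minimal version above.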
References: Proxy - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy
9) Optimistic UI with robust retry and rollback
Problem: You want snappy UIs that assume success (optimistic updates) but must roll back cleanly when the server rejects or network fails.
Creative solution: Apply changes locally, enqueue the network operation in a durable queue, and provide a compensation (rollback) callback. Retry with exponential backoff and escalate failures to a visible error state.
Example pattern:
const queue = [];
let working = false;
async function processQueue() {
  if (working) return;
  working = true;
  while (queue.length) {
    const { op, rollback, attempts = 0 } = queue[0];
    try {
      await op(); // send to server
      queue.shift();
    } catch (err) {
      if (attempts > 3) {
        // permanent failure: revert locally and notify
        rollback();
        queue.shift();
      } else {
        // exponential backoff
        queue[0].attempts = attempts + 1;
        await new Promise(r => setTimeout(r, 200 * Math.pow(2, attempts)));
      }
    }
  }
  working = false;
}
// when user edits
applyLocalChange();
queue.push({
  op: () => sendToServer(data),
  rollback: () => revertLocalChange(),
});
processQueue();
Notes: Persist the queue to IndexedDB for resilience across page reloads.
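Persisting only works if queue entries are plain data rather than closures; one sketch of that (the operations registry and the updateTitle op are hypothetical stand-ins for your real API calls):

```javascript
// Queue entries as plain data: each entry names an operation type, and a
// registry maps types back to real functions when the queue is replayed.
const operations = {
  updateTitle: {
    op: payload => Promise.resolve(payload), // would call the server
    rollback: payload => { /* would revert local state */ },
  },
};

function serializeQueue(queue) {
  return JSON.stringify(
    queue.map(({ type, payload, attempts = 0 }) => ({ type, payload, attempts }))
  );
}

function reviveQueue(json) {
  return JSON.parse(json).map(entry => ({
    ...entry,
    op: () => operations[entry.type].op(entry.payload),
    rollback: () => operations[entry.type].rollback(entry.payload),
  }));
}
```

The serialized string is what you would write to IndexedDB (or localStorage) and revive on the next page load before calling processQueue again.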
10) Avoid stack-overflow in deep recursion with trampolining
Problem: Deep recursive algorithms blow the stack in JS (no tail-call optimization reliably available).
Creative solution: Restructure the recursion so each step returns a thunk (a zero-argument function for the next step) and run it with a trampoline - a loop that calls thunks until a non-function result appears. This lets you express recursion ergonomically while executing iteratively.
Example:
function trampoline(fn) {
  return function trampolined(...args) {
    let result = fn.apply(this, args);
    while (typeof result === 'function') {
      result = result();
    }
    return result;
  };
}
function factThunk(n, acc = 1) {
  if (n === 0) return acc;
  return () => factThunk(n - 1, n * acc);
}
const factorial = trampoline(factThunk);
console.log(factorial(10000)); // no stack overflow (the value overflows to Infinity; use BigInt for exact results)
Notes: Trampolining is useful for functional-style recursion where you can return thunks instead of recursing directly.
Final thoughts: Think in layers and design for cooperative cancellation and observability
Many “stubborn” bugs or performance problems become manageable when you separate concerns:
- Decouple work (UI vs CPU-heavy) via scheduling or workers.
- Use cancellation tokens (AbortController) for long-running tasks and wire them through your async stack.
- Prefer weak references for caches tied to DOM or short-lived objects.
- Use native primitives like structuredClone, crypto, and Proxy where available - but provide pragmatic fallbacks.
The best creative solutions often combine multiple small language features into a dependable pattern: a WeakMap cache + a MutationObserver to clean up, a Promise-microtask batching strategy alongside Proxy-based reactivity, or an AbortController wired through fetch and custom promises. Try to make cancellation, cleanup, and observability first-class in your architecture - it pays off as projects scale.
Further reading and references:
- AbortController: https://developer.mozilla.org/en-US/docs/Web/API/AbortController
- structuredClone: https://developer.mozilla.org/en-US/docs/Web/API/structuredClone
- WeakRef / FinalizationRegistry: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakRef
- requestIdleCallback: https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback
- MessageChannel: https://developer.mozilla.org/en-US/docs/Web/API/MessageChannel
- Proxy: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy
If one of the problems above hit home for you, pick the pattern that matches your constraints and adapt it. These solutions are recipes - trade-offs and platform quirks apply - but they should spark approaches you can integrate into real projects.