You spent three hours debugging why useEffect ran twice in dev mode—only to realize your team’s “lightweight” React wrapper was swallowing network errors silently, and the real bug was a missing catch() on a fetch() buried in a utility file no one owns.
That’s not a framework problem. That’s a coordination problem.
It’s not that JavaScript is hard. It’s that we keep pretending coordination is free—like state updates, error propagation, side-effect sequencing, and module boundaries are things that just “work out” if we write clean code. They don’t. They cost. And in small-to-mid-sized teams—where you’re shipping features, fixing bugs, mentoring juniors, and reviewing PRs all in the same day—that cost compounds fast. Not in CPU cycles. In cognitive load. In context switches. In the 2 a.m. Slack message from someone saying “the dashboard broke after my PR,” when their change was just adding a tooltip.
I’ve shipped JavaScript for eight years across contexts where “scale” meant 12 people, not 12 million users. I’ve debugged race conditions in logistics dashboards where a WebSocket update overwrote a pending form submission. I’ve traced silent Safari failures in checkout flows because fetch() throws TypeError, not Error. I’ve watched a billing report fail on month-end because three different charting libraries patched Date.prototype.format() in conflicting ways. None of those were syntax errors. None required a new framework. All of them were preventable—not with more tooling, but with intentional constraints.
This isn’t about going “frameworkless” as a virtue signal. It’s about building apps where the JavaScript you write is the JavaScript you ship—no hidden abstractions, no runtime surprises, no “it works until it doesn’t.” Where every function has a clear contract, every error tells you where to look, and every side effect leaves a receipt.
Let’s get concrete.
State That Doesn’t Lie
State is the first place coordination debt takes root—not because it’s complex, but because it’s shared. And shared mutable state without enforcement becomes a game of telephone: one component reads shipment.status, another mutates it directly, a third listens for changes via a fragile useEffect dependency array, and a fourth assumes it’s immutable and spreads it into a new object—only to find the original reference changed underneath it.
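That last failure mode is worth seeing in miniature, because spread syntax looks safer than it is. A minimal sketch (the `shipment` shape here is illustrative, not the real dashboard state):

```javascript
// Spreading copies only the top level: nested objects remain shared references.
const shipment = { status: 'pending', meta: { carrier: 'DHL' } };
const snapshot = { ...shipment }; // "immutable" copy? Only one level deep.

shipment.status = 'shipped';      // snapshot.status is unaffected
shipment.meta.carrier = 'UPS';    // snapshot.meta is the SAME object

console.log(snapshot.status);       // 'pending'
console.log(snapshot.meta.carrier); // 'UPS' (changed underneath the copy)
```

Every shallow copy of nested state carries this trap, which is why the store below funnels all writes through methods instead of trusting callers to copy correctly.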
The Real Story: Logistics Dashboard Race Condition
At a logistics SaaS startup (12-person team), we built a shipment-tracking dashboard with real-time status updates over WebSocket. Users could also manually confirm shipments via a form. Both paths updated the same shipment.status field—but the WebSocket handler would fire before the form’s API call resolved. So:
1. User clicks “Confirm Shipment”
2. Form submits → sets `status = 'pending'` locally
3. WebSocket receives `{ status: 'shipped', eta: '2024-05-12' }`
4. UI renders `status = 'shipped'`
5. API call resolves → sets `status = 'confirmed'`
6. But now the WebSocket listener fires again (reconnection? duplicate message?) → rewrites `status = 'shipped'`
7. UI flashes back from “confirmed” to “shipped”
We spent two days blaming race conditions, then another day auditing WebSocket reconnect logic—until we grepped for shipment.status = and found six direct assignments across components, hooks, event listeners, and a legacy “sync service” file no one remembered writing.
There was no single source of truth. Just six places that thought they owned the truth.
The Fix: Enforced Mutations, Not Mutable Objects
We replaced all direct assignments with a plain store—no hooks, no context, no proxy traps, no libraries. Just functions that enforce valid transitions and return immutable snapshots.
// store/shipment.js
const createShipmentStore = () => {
let state = {
status: 'draft',
eta: null,
errors: [],
lastUpdated: null
};
const setState = (partial) => {
state = {
...state,
...partial,
lastUpdated: new Date().toISOString()
};
};
const updateStatus = (newStatus) => {
const validTransitions = {
draft: ['pending', 'cancelled'],
pending: ['confirmed', 'shipped', 'failed', 'cancelled'],
confirmed: ['shipped', 'cancelled'],
shipped: ['delivered', 'returned', 'cancelled'],
delivered: ['archived']
};
if (!validTransitions[state.status]?.includes(newStatus)) {
throw new Error(
`Invalid status transition: ${state.status} → ${newStatus}. ` +
`Valid options: [${(validTransitions[state.status] || []).join(', ')}]`
);
}
}
setState({ status: newStatus });
};
const addError = (message) => {
setState({
errors: [...state.errors, { message, timestamp: Date.now() }]
});
};
const clearErrors = () => {
setState({ errors: [] });
};
// ✅ Always returns a shallow copy — treat nested values (like errors) as read-only
const getState = () => ({ ...state });
return {
getState,
updateStatus,
addError,
clearErrors,
setState // exposed so callers can set non-status fields (e.g., eta) — use sparingly
};
};
// Usage — consistent everywhere
const shipmentStore = createShipmentStore();
// In form handler
const handleConfirm = async () => {
try {
shipmentStore.updateStatus('pending');
await api.confirmShipment(id);
shipmentStore.updateStatus('confirmed');
} catch (err) {
shipmentStore.addError('Confirmation failed. Try again.');
}
};
// In WebSocket listener
socket.on('shipment:update', (data) => {
// Only update status if it's a valid transition
if (data.status && data.status !== shipmentStore.getState().status) {
try {
shipmentStore.updateStatus(data.status);
if (data.eta) shipmentStore.setState({ eta: data.eta });
} catch (err) {
console.warn('Ignored invalid status update:', data.status, err.message);
}
}
});
Why This Works
- No magic: It’s just a closure with functions. No dependency injection, no context providers, no “store setup” step. Import and call `createShipmentStore()`. Done.
- Enforced contracts: `updateStatus()` throws on invalid transitions. You feel the constraint early—not in QA, not in production, but at dev time when you try to go from `'failed'` to `'confirmed'`.
- Immutability by default: `getState()` returns a copy. No component can mutate the store’s internal state. If they need to “update” something, they call a method.
- Debuggable: `console.log(shipmentStore.getState())` always shows the current truth. No need to inspect React DevTools or trace through 17 layers of hooks.
- Testable: You can test `updateStatus()` in isolation with zero setup:
test('blocks invalid status transitions', () => {
const store = createShipmentStore();
expect(() => store.updateStatus('confirmed')).toThrow();
});
Tradeoffs You’ll Face
- No automatic reactivity: You must wire up updates manually (e.g., `useState` + `useEffect` to subscribe). That’s by design. Automatic reactivity hides who’s responsible for triggering a render. With explicit subscriptions, you control when and why the UI updates.
- Slight boilerplate per domain: Yes, you’ll write `createUserStore()`, `createOrderStore()`, etc. But each is ~20 lines, fully owned by one feature, and trivial to refactor later. Compare that to debugging why `useSelector` returns stale data because a memoized selector missed a dependency.
- No “global” state: You want separate stores for separate concerns. A `shipmentStore` shouldn’t know about `userPreferences`. Cross-cutting concerns (like auth) get their own store—and you compose them explicitly where needed.
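When two stores do need to interact, keep the composition in one explicit, named function at the call site rather than inside either store. A minimal sketch (the `authStore` shape, `canConfirmShipment`, and `makeStore` are hypothetical, invented for illustration):

```javascript
// Cross-store logic lives in one named function, not inside either store.
const canConfirmShipment = (shipmentStore, authStore) => {
  const shipment = shipmentStore.getState();
  const auth = authStore.getState();
  return (
    shipment.status === 'pending' &&
    auth.permissions.includes('shipments:confirm')
  );
};

// Minimal stand-in stores for demonstration
const makeStore = (state) => ({ getState: () => ({ ...state }) });

const shipments = makeStore({ status: 'pending' });
const auth = makeStore({ permissions: ['shipments:confirm'] });

console.log(canConfirmShipment(shipments, auth)); // true
```

Neither store knows the other exists; the dependency is visible at exactly one grep-able location.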
Practical Tip: Subscribe Lightly
You don’t need a full pubsub system. Add minimal reactivity only where you need it:
// store/shipment.js — add this to the returned object.
// For it to fire, setState must announce changes, e.g. as its last line:
//   window.dispatchEvent(new Event('shipment:updated'));
const subscribe = (callback) => {
callback(getState()); // push current state immediately
const handler = () => callback(getState());
window.addEventListener('shipment:updated', handler);
return () => window.removeEventListener('shipment:updated', handler);
};
// In component
useEffect(() => {
const unsubscribe = shipmentStore.subscribe(setLocalState);
return unsubscribe;
}, []);
This gives you reactivity without locking you into a framework’s lifecycle. It’s just DOM events—standard, documented, stable across versions.
Errors That Tell You Where to Look
JavaScript errors are useless unless they answer three questions:
- What went wrong? (the message)
- Where did it happen? (the stack, the file, the line)
- What context caused it? (the request URL, the user action, the data involved)
But native errors rarely give you #3. fetch() throws a TypeError on CORS failure—with an empty .message. JSON.parse() throws SyntaxError—but doesn’t tell you which response body failed. And custom errors often omit context entirely (“Network error” — which network request?).
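The `JSON.parse` case is the easiest to fix in isolation: attach the context at the throw site. A sketch (assumes the `Error` `cause` option, available in modern runtimes; `parseJsonFrom` and its `source`/`snippet` fields are names I’m inventing here):

```javascript
// Wrap JSON.parse so the error says WHICH body failed, not just that one did
const parseJsonFrom = (text, source) => {
  try {
    return JSON.parse(text);
  } catch (err) {
    const wrapped = new Error(`Failed to parse JSON from ${source}`, { cause: err });
    wrapped.source = source;                      // which request produced this body
    wrapped.snippet = String(text).slice(0, 120); // first bytes of the bad payload
    throw wrapped;
  }
};
```

Now a log line shows the offending endpoint and the HTML error page the backend actually returned, instead of a bare `SyntaxError`.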
The Real Story: Safari Checkout Failure
A freelance client’s e-commerce checkout worked flawlessly in Chrome and Firefox. In Safari, payments would silently fail—no toast, no log, no user feedback. The bug lived for 11 days.
Why? Because their error boundary caught only instanceof Error, and Safari’s fetch() rejects with a TypeError on CORS misconfiguration. Their logging was:
// ❌ What they had
try {
await fetch('/api/charge', { method: 'POST', body: JSON.stringify(data) });
} catch (err) {
console.error('Checkout failed:', err.message); // logs "" for TypeError
showErrorToast('Something went wrong'); // generic, unactionable
}
err.message was empty. err.stack pointed to fetch() internals—not their code. And since Safari’s dev tools don’t show failed CORS requests in the Network tab by default, no one saw the actual 400 response from their backend.
The Fix: Normalize Async Errors
We wrote safeFetch()—a thin wrapper that guarantees every error has the same shape, regardless of cause:
// utils/safe-fetch.js
export const safeFetch = async (url, options = {}) => {
try {
const res = await fetch(url, {
...options,
// ✅ Always send client version for log correlation
headers: {
'X-Client-Version': 'v2.1',
'Content-Type': 'application/json',
...options.headers
}
});
// ✅ Handle HTTP errors explicitly
if (!res.ok) {
const text = await res.text(); // capture raw body for debugging
const error = new Error(`HTTP ${res.status} ${res.statusText}`);
// ✅ Attach structured context
Object.assign(error, {
url,
status: res.status,
statusText: res.statusText,
responseText: text,
responseHeaders: Object.fromEntries(res.headers.entries())
});
throw error;
}
// ✅ Parse JSON safely
const contentType = res.headers.get('content-type');
if (contentType && contentType.includes('application/json')) {
return await res.json();
}
return await res.text();
} catch (err) {
// ✅ Normalize network-level errors (TypeError, AbortError).
// Match on the error type alone: Safari's fetch TypeError message
// can be empty or "Load failed", so never key on err.message
if (err instanceof TypeError) {
const error = new Error('Network request failed');
Object.assign(error, {
url,
cause: 'network',
original: err,
timestamp: new Date().toISOString()
});
throw error;
}
if (err.name === 'AbortError') {
const error = new Error('Request timed out or was cancelled');
Object.assign(error, {
url,
cause: 'timeout',
original: err,
timestamp: new Date().toISOString()
});
throw error;
}
// ✅ Pass through all other errors (e.g., JSON.parse errors), tagging the url if missing
if (err.url === undefined) {
Object.assign(err, { url });
}
throw err;
}
};
Usage That Makes Debugging Trivial
// pages/checkout.js
const handleSubmit = async (e) => {
e.preventDefault();
try {
const result = await safeFetch('/api/charge', {
method: 'POST',
body: JSON.stringify({ token, amount })
});
showSuccessToast('Payment processed!');
router.push('/success');
} catch (err) {
// ✅ Every error has .url, .cause, and .original
console.error('Checkout failed:', {
url: err.url,
cause: err.cause,
message: err.message,
status: err.status,
timestamp: err.timestamp,
original: {
name: err.original?.name,
message: err.original?.message
}
});
// ✅ User-facing message based on cause
if (err.cause === 'network') {
showErrorToast('No internet connection. Please check your network and try again.');
} else if (err.status >= 400 && err.status < 500) {
showErrorToast('Invalid payment details. Please check your card information.');
} else if (err.status >= 500) {
showErrorToast('Our payment system is temporarily unavailable. Please try again shortly.');
} else {
showErrorToast('Something went wrong. Please try again.');
}
}
};
Why This Works
- Consistent shape: Every error has `url`, `cause`, and `timestamp`. Your logging service can index and filter by `url` or `cause`—no more guessing which endpoint failed.
- Context-rich: `responseText` captures raw JSON/XML/HTML responses. When a backend returns `"{'error': 'invalid_token'}"` instead of a proper 401, you see it—immediately.
- No silent failures: Even `TypeError`s get wrapped and enriched. You won’t miss a Safari CORS failure again.
- Zero dependencies: Uses only standard `fetch`, `Error`, and `Object.assign()`. Won’t break on Node 18+ or Deno.
Tradeoffs You’ll Face
- Slight overhead: One extra `await` for `res.text()` when a response fails. Since it only runs on error paths, the cost is negligible—and you gain actionable context.
- Not a replacement for backend validation: This catches transport and HTTP errors—not business logic errors like “insufficient funds.” Those still belong in your API’s 4xx responses.
- You must remember to use it: We added an ESLint rule to ban bare `fetch()` calls:
// eslint-plugin-no-bare-fetch
'no-restricted-syntax': [
'error',
{
selector: 'CallExpression[callee.name="fetch"]',
message: 'Use safeFetch() instead of bare fetch() to ensure consistent error handling.'
}
]
Practical Tip: Log Client Version Everywhere
That X-Client-Version header seems trivial—until you’re staring at logs wondering, “Did this error happen before or after the deploy that fixed the date parsing bug?”
- Set it once, at build time (e.g., `process.env.npm_package_version`).
- Send it on every request—API, analytics, health checks.
- Include it in every error log, even non-network ones:
console.error('Date parsing failed:', {
version: 'v2.1',
input: '2024-05-12T',
error: err.message
});
When an error hits your logs, you know exactly which deployed frontend version triggered it. No more “was this on staging or prod?”
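A sketch of centralizing that version constant (assuming a Node or bundler environment where `process.env` is inlined at build time; the `'dev'` fallback and `logError` helper are illustrative):

```javascript
// One module owns the version; everything else imports it.
const VERSION =
  (typeof process !== 'undefined' && process.env.npm_package_version) || 'dev';

// Every error log goes through one helper so the version is never forgotten
const logError = (message, context = {}) => {
  console.error(message, { version: VERSION, ...context });
};

logError('Date parsing failed', { input: '2024-05-12T' });
```

Routing logs through one helper also gives you a single place to later add a user ID, session ID, or log-shipping call.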
Side Effects With Receipts
Side effects—API calls, localStorage writes, analytics pings, DOM mutations—are where JavaScript projects become unpredictable. Not because they’re hard, but because they’re uncoordinated. You click “Save”, and three things happen:
- A `fetch()` call
- A `localStorage.setItem()` call
- A `gtag()` call
Which one fails? Which one succeeds first? Which one retries? Which one cancels the others? If you don’t answer those questions explicitly, the runtime does—and its answers are rarely what you want.
The Real Story: HR Profile Double-Save
At a mid-sized HR platform, the “Save Profile” button would sometimes save twice. Once from the form’s onSubmit, once from a MutationObserver watching for changes inside a third-party rich-text editor (which fired input events and DOMSubtreeModified events).
The API wasn’t idempotent. Duplicate saves created duplicate records in the database. Rolling back wasn’t an option—the fix required backend changes that took two sprints. So we needed a frontend stopgap: guarantee at most one save per user intent.
We tried disabling the button—but the MutationObserver fired after the button was disabled, so the second save still happened. We tried AbortController—but the observer wasn’t using fetch(), so it couldn’t be aborted.
The Fix: Explicit Effect Coordination
We built createEffectManager()—a tiny, explicit coordinator for side effects:
// utils/effect-manager.js
export const createEffectManager = () => {
const activeEffects = new Set();
const runOnce = (key, fn) => {
if (activeEffects.has(key)) {
return Promise.resolve();
}
activeEffects.add(key);
return fn().finally(() => {
activeEffects.delete(key);
});
};
const isRunning = (key) => activeEffects.has(key);
const getActiveKeys = () => [...activeEffects];
return {
runOnce,
isRunning,
getActiveKeys
};
};
// Usage in component
import { createEffectManager } from '../utils/effect-manager.js';
const effectMgr = createEffectManager();
const handleSave = async (e) => {
e.preventDefault();
// ✅ Early exit if already running
if (effectMgr.isRunning('save-profile')) {
console.warn('Save already in progress — ignoring duplicate trigger');
return;
}
try {
await effectMgr.runOnce('save-profile', async () => {
// ✅ Real save logic — guaranteed to run once
await api.saveProfile(formData);
showSuccessToast('Profile updated');
});
} catch (err) {
showErrorToast('Save failed — please try again');
}
};
// Also used in MutationObserver
const observer = new MutationObserver(() => {
if (isProfileDirty() && !effectMgr.isRunning('save-profile')) {
effectMgr.runOnce('save-profile', () => api.saveProfile(getCurrentData()));
}
});
Why This Works
- Explicit keys: `'save-profile'` is human-readable and scoped to the intent, not the mechanism. You don’t care how the save was triggered—only that “save profile” is happening.
- No hidden state: `activeEffects` is a `Set`, not a boolean flag. You can list all active effects (`effectMgr.getActiveKeys()`) for debugging.
- No framework lock-in: Works with `fetch`, `localStorage`, `gtag`, `postMessage`, or any async operation.
- Composable: You can nest effects (e.g., `runOnce('save-profile', () => runOnce('upload-avatar', upload))`)—each key deduplicates independently, so a duplicate avatar upload is blocked even inside a running save.
Tradeoffs You’ll Face
- You must name keys meaningfully: `'loading'` is too vague. `'save-profile'`, `'sync-preferences'`, `'load-dashboard-widgets'` are precise. When debugging, `console.log(effectMgr.getActiveKeys())` should tell you exactly what’s happening.
- No automatic cleanup on unmount: Unlike React’s `useEffect` cleanup, this persists until the promise resolves. That’s intentional—you want the effect to finish even if the user navigates away. If you need cancellation, pass an `AbortSignal` to your async function.
- Not a replacement for backend idempotency: This prevents frontend duplicates—not race conditions between two users saving simultaneously. Always implement idempotency keys server-side too.
Practical Tip: Log Effect Lifecycle
Add debug logging to see when effects start and finish:
const runOnce = (key, fn) => {
if (activeEffects.has(key)) {
console.warn(`[EFFECT SKIP] ${key} — already running`);
return Promise.resolve();
}
activeEffects.add(key);
console.debug(`[EFFECT START] ${key}`);
return fn()
.then((result) => {
console.debug(`[EFFECT SUCCESS] ${key}`);
return result;
})
.catch((err) => {
console.error(`[EFFECT ERROR] ${key}`, err);
throw err;
})
.finally(() => {
activeEffects.delete(key);
console.debug(`[EFFECT END] ${key}`);
});
};
In development, this makes it trivial to see if an effect is stuck, duplicated, or never starting. In production, you can disable it with a flag.
Modules That Don’t Leak
JavaScript modules are powerful—but they’re also the easiest place to introduce silent, global breakage. Patch Array.prototype.flatten()? Now every node_modules package that expects the native behavior breaks. Import a charting library that patches Date.prototype.format()? Now your billing reports parse dates wrong. These aren’t edge cases. They’re daily occurrences in shared codebases.
The Real Story: Charting Library Date Breakage
A startup’s analytics dashboard loaded three charting solutions:
- Chart.js (via CDN)
- D3 (via ES module import)
- A custom canvas renderer (written in-house)
All three patched Date.prototype.format()—each with different signatures:
- Chart.js: `date.format('YYYY-MM-DD')`
- D3: `date.format('%Y-%m-%d')`
- Custom: `date.format('yyyy-mm-dd')`
The billing module used new Date().format('MM/DD/YYYY')—which worked fine… until D3 loaded and overwrote the method with its %-based parser. On month-end close, all date strings became NaN/NaN/NaN.
The fix wasn’t refactoring the billing module—it was removing the patch. But no one knew which library added it. We grepped for prototype.format, found 17 matches across node_modules, and had to disable each library one-by-one.
The Fix: Strict Module Boundaries
We banned all global patching. Full stop. No Object.defineProperty(Date.prototype, ...). No Array.prototype.myMethod = .... No window.myGlobalHelper = ....
Instead, we enforced:
- Pure utility functions (no side effects, no globals)
- Self-contained modules (no external dependencies, no DOM assumptions)
- Explicit, flat imports (no “side-effect imports”)
// ✅ utils/date.js — pure, self-contained
export const formatDate = (date, format = 'YYYY-MM-DD') => {
const d = new Date(date);
if (isNaN(d.getTime())) return '';
const year = d.getFullYear();
const month = String(d.getMonth() + 1).padStart(2, '0');
const day = String(d.getDate()).padStart(2, '0');
return format
.replace('YYYY', year)
.replace('MM', month)
.replace('DD', day);
};
// ✅ charts/bar.js — zero external deps, zero globals
export const renderBarChart = (canvas, data, options = {}) => {
const ctx = canvas.getContext('2d');
const width = canvas.width;
const height = canvas.height;
const barWidth = Math.max(1, width / Math.max(data.length, 1));
const maxValue = Math.max(...data, 1);
// Clear canvas
ctx.clearRect(0, 0, width, height);
// Draw bars
data.forEach((value, i) => {
const barHeight = (value / maxValue) * (height - 20);
const x = i * barWidth;
const y = height - barHeight;
ctx.fillStyle = options.color || '#3b82f6';
ctx.fillRect(x + 2, y, barWidth - 4, barHeight);
});
};
// ✅ main.js — explicit, flat imports only
import { formatDate } from './utils/date.js';
import { renderBarChart } from './charts/bar.js';
// ❌ NEVER do this:
// import './polyfills/date-format.js'; // global patch → silent breakage
// import 'chart.js/auto'; // auto-registers plugins globally
Why This Works
- No surprise dependencies: `renderBarChart()` works anywhere a canvas is handed to it—Node.js, Deno, browser, Web Worker. It doesn’t assume `document` exists.
- No global pollution: `formatDate()` is namespaced. You can’t accidentally call it on a `Date` instance—because it’s not a method.
- Easy to test: `formatDate('2024-05-12T00:00', 'MM/DD/YYYY')` returns `'05/12/2024'`. No mocks, no setup. (One caveat: date-only strings like `'2024-05-12'` parse as UTC midnight, which can shift the day in negative-offset time zones.)
- Greppable: Run `grep -r "prototype\." src/` weekly. If it returns anything, fix it that day.
Tradeoffs You’ll Face
- Slightly more verbose calls: `formatDate(date, 'MM/DD/YYYY')` vs `date.format('MM/DD/YYYY')`. But the verbosity makes contracts explicit—and prevents the “which `format()` am I calling?” confusion.
- No “magic” convenience: You won’t get `array.flatten()` natively. But you will get `flatten(array)`—a pure function you control, document, and test.
- You’ll need polyfills for older browsers: Use `core-js/stable` once, at the entry point—not scattered across modules.
Practical Tip: Enforce Boundaries with ESLint
Add these rules to your .eslintrc:
{
"rules": {
"no-restricted-properties": [
"error",
{
"object": "Date.prototype",
"property": "format"
},
{
"object": "Array.prototype",
"property": "flatten"
}
],
"no-restricted-syntax": [
"error",
{
"selector": "ImportDeclaration[source.value=/polyfill|shim|patch/]",
"message": "Do not import polyfill/shim/patch modules — use pure utilities instead."
}
]
}
}
This catches violations at dev time—not in production, when it’s too late.
Common Pitfalls (and How to Avoid Them)
These aren’t theoretical. They’re the exact mistakes I’ve made, seen teammates make, and debugged in production—over and over.
1. Using localStorage as a State Manager
localStorage is persistent, synchronous, and simple. That’s why it’s tempting to use it for “temporary” state—form drafts, UI preferences, cart items. But it’s not designed for that. It has no expiration, no scoping, no cleanup, and no consistency guarantees across tabs.
#### The Real Story: Stale Appointment Drafts
A freelance client’s appointment scheduler stored unsaved form data in localStorage under the hardcoded key 'appointment-draft'. Users would:
- Start filling a form
- Close the tab
- Return days later
- See the old draft
- Click “Submit” — resubmitting an appointment from last week
Worse: the app supported multiple accounts. Logging in as a different user didn’t clear the draft—so Account A’s draft overwrote Account B’s.
#### The Fix: Scoped, Time-Bound, and Auto-Cleaned
// utils/draft-storage.js
export const saveDraft = (accountId, formId, draft) => {
const key = `draft-${accountId}-${formId}-${Date.now()}`;
const data = {
...draft,
accountId,
formId,
timestamp: Date.now()
};
try {
localStorage.setItem(key, JSON.stringify(data));
} catch (err) {
// localStorage full? Fall back to memory-only
console.warn('Draft not saved — localStorage full', err);
}
// ✅ Clean up old drafts (>24h) and expired ones
cleanupOldDrafts();
};
export const loadLatestDraft = (accountId, formId) => {
const keys = Object.keys(localStorage).filter(k =>
k.startsWith(`draft-${accountId}-${formId}-`)
);
if (keys.length === 0) return null;
// Get newest draft
const latestKey = keys.reduce((a, b) => {
const aTime = parseInt(a.split('-').pop(), 10) || 0;
const bTime = parseInt(b.split('-').pop(), 10) || 0;
return aTime > bTime ? a : b;
});
try {
const data = JSON.parse(localStorage.getItem(latestKey));
// Only return if <24h old
if (Date.now() - data.timestamp < 24 * 60 * 60 * 1000) {
return data;
}
} catch (e) {
// Invalid JSON — remove it
localStorage.removeItem(latestKey);
}
return null;
};
const cleanupOldDrafts = () => {
const cutoff = Date.now() - 24 * 60 * 60 * 1000;
Object.keys(localStorage)
.filter(k => k.startsWith('draft-'))
.forEach(k => {
try {
const data = JSON.parse(localStorage.getItem(k));
if (data.timestamp < cutoff) {
localStorage.removeItem(k);
}
} catch (e) {
// Skip invalid entries
localStorage.removeItem(k);
}
});
};
Key principles:
- Scope to both `accountId` and `formId`—not just one.
- Timestamp every draft—so you can expire it.
- Clean up after every save—not just on load.
- Handle `localStorage` quota errors gracefully.
2. Assuming localStorage Is Available
Not all browsers support localStorage (e.g., Safari in Private Browsing). And some users disable it. Your app shouldn’t crash—or worse, silently lose data—when it’s unavailable.
#### The Fix: Fallback to Memory Storage
// utils/storage.js
export const createStorage = (namespace = 'app') => {
let memoryStore = {};
const getItem = (key) => {
try {
return localStorage.getItem(`${namespace}-${key}`);
} catch (e) {
return memoryStore[key] || null;
}
};
const setItem = (key, value) => {
try {
localStorage.setItem(`${namespace}-${key}`, value);
delete memoryStore[key];
} catch (e) {
memoryStore[key] = value;
}
};
const removeItem = (key) => {
try {
localStorage.removeItem(`${namespace}-${key}`);
} catch (e) {
delete memoryStore[key];
}
};
return { getItem, setItem, removeItem };
};
// Usage
const storage = createStorage('my-app');
storage.setItem('theme', 'dark');
3. Ignoring Intl for Formatting
Using date.toLocaleDateString() or number.toLocaleString() is fine—if you only support one locale. But toLocaleDateString('en-US') formats differently in Chrome vs Safari vs Firefox for the same date. And Intl.DateTimeFormat handles edge cases (time zones, daylight saving) that string manipulation can’t.
#### The Fix: Use Intl Consistently
// utils/intl.js
export const formatDate = (date, options = {}) => {
try {
return new Intl.DateTimeFormat('en-US', {
year: 'numeric',
month: '2-digit',
day: '2-digit',
...options
}).format(new Date(date));
} catch (e) {
return String(date); // fallback
}
};
export const formatCurrency = (amount, currency = 'USD') => {
try {
return new Intl.NumberFormat('en-US', {
style: 'currency',
currency
}).format(amount);
} catch (e) {
return `$${amount}`;
}
};
Intl is supported in all modern browsers—and it’s designed for this. Don’t reinvent date formatting.
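`Intl` also handles the time-zone math that string manipulation can’t. A sketch (`formatInTimeZone` is a name I’m introducing; `timeZone` is a standard `Intl.DateTimeFormat` option):

```javascript
// Render the same instant in different time zones without manual offset math
const formatInTimeZone = (date, timeZone) =>
  new Intl.DateTimeFormat('en-US', {
    year: 'numeric',
    month: '2-digit',
    day: '2-digit',
    timeZone
  }).format(new Date(date));

console.log(formatInTimeZone('2024-05-12T23:30:00Z', 'UTC'));              // 05/12/2024
console.log(formatInTimeZone('2024-05-12T23:30:00Z', 'Pacific/Auckland')); // 05/13/2024
```

Getting that day rollover right by hand means reimplementing the IANA time-zone database. Don’t.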
4. Forgetting event.preventDefault() in Forms
This seems basic—until you have a <form> that submits to / and reloads the page, breaking your SPA navigation. Or a button inside a form that triggers a submit and your click handler.
#### The Fix: Always Prevent Default in Handlers
// ✅ Always do this
const handleSubmit = (e) => {
e.preventDefault(); // non-negotiable
// ... your logic
};
// ❌ Don't destructure it: a detached preventDefault() loses its `this`
// binding and throws "Illegal invocation" on native events
const handleSubmit = ({ preventDefault }) => {
preventDefault(); // TypeError at runtime
};
Make it muscle memory. If your linter doesn’t warn you, configure it to.
The Real Tradeoffs (No Sugarcoating)
Going “lightweight” isn’t free. You’ll face real tradeoffs—some painful, some liberating. Here’s what I’ve learned:
You’ll Write More Glue Code
Yes, you’ll write createShipmentStore(), safeFetch(), createEffectManager(). That’s ~100 lines of code you’d get “for free” from a framework. But those 100 lines are:
- Fully owned by you
- Documented in your repo
- Tested with your test runner
- Debuggable in 30 seconds
Compare that to debugging why useSWR’s revalidateOnFocus caused a cascade of refetches across 12 tabs—only to find it’s a known issue in v2.3.1 that’s “fixed” in v3.0.0-beta.
You’ll Spend Less Time Debugging Framework Bugs
I haven’t spent time in the last 18 months debugging:
- Why `useEffect` runs twice in dev mode
- Why `getServerSideProps` doesn’t re-run on client navigation
- Why `React.memo` isn’t preventing a re-render
Because I’m not using those things. My bugs are in my code—not in a 50k-line dependency with 300 open issues.
You’ll Need Stronger Team Discipline
No framework can enforce good practices. If your team ignores safeFetch(), copies mutable objects, or patches prototypes, you’ll have chaos—just with fewer abstractions. So pair this with:
- ESLint rules (no bare `fetch`, no `prototype` patches)
- Code review checklists (“Does this error have `url` and `cause`?”)
- Weekly `grep` audits (`grep -r "localStorage\|sessionStorage" src/`)
You’ll Ship Faster—Once
The first feature takes longer. You’re building primitives. But by feature #3, you’re reusing createStore(), safeFetch(), and effectManager. By sprint #5, your velocity exceeds what you’d get with a framework—because you’re not fighting its constraints, its bundle size, or its opinionated architecture.
Final Thought: It’s Not About Tools. It’s About Ownership.
The JavaScript you ship isn’t defined by the frameworks you choose. It’s defined by the contracts you enforce, the errors you normalize, the side effects you coordinate, and the boundaries you protect.
You don’t need a new framework to ship better JavaScript. You need:
- A `setState` function that throws on invalid transitions
- An `async` wrapper that guarantees error shape
- An effect manager that names intent, not mechanism
- Modules that don’t leak into the global scope
These aren’t revolutionary ideas. They’re boring, deliberate, and deeply unsexy. They won’t trend on Hacker News. But they’ll keep your team shipping features—not firefighting silent breakages—at 4 p.m. on a Friday.
That’s the JavaScript you actually ship. Not the one in the tutorial. Not the one in the conference talk. The one that works—today, tomorrow, and six months from now—when the only thing you can count on is your own code.