Devlog 22: Performance Profiling - Finding and Fixing Bottlenecks
Tuggy felt slow. Clicking lagged. Particles dropped frames. Time to profile and optimize.
Here's how I found and fixed the performance problems.
Chrome DevTools Performance Panel
Record a voting session:
- Open DevTools (F12)
- Go to Performance tab
- Click Record
- Click vote buttons rapidly for 10 seconds
- Stop recording
The flame graph shows where time is spent.
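To make specific code paths easier to spot in the flame graph, the User Timing API adds named entries to the Timings track. A minimal sketch (handleVote is a hypothetical stand-in for whatever you're investigating):

performance.mark('vote-start');
handleVote(); // hypothetical function under investigation
performance.mark('vote-end');
// The measure shows up in the Performance panel's Timings track
performance.measure('vote', 'vote-start', 'vote-end');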
What I Found
Problem 1: Layout thrashing
Repeatedly reading layout (offsetWidth) then writing (style.width):
// Bad - forces layout recalc each iteration
for (const element of elements) {
  const width = element.offsetWidth;       // Read
  element.style.width = width + 10 + 'px'; // Write
}
Fix: Batch reads, then writes:
// Good - read all, then write all
const widths = Array.from(elements).map(el => el.offsetWidth); // read everything first (works for NodeLists too)
elements.forEach((el, i) => {
  el.style.width = widths[i] + 10 + 'px'; // then write everything
});
Problem 2: Expensive vote bar updates
The vote percentage bar recalculated on every vote:
$: percentA = (votesA / (votesA + votesB)) * 100;
With thousands of votes streaming in, this reactive statement re-ran on every single vote.
Fix: Throttle updates:
import { throttle } from 'lodash-es';
const updatePercent = throttle(() => {
  percentA = (votesA / (votesA + votesB)) * 100;
}, 100); // Max 10 updates/sec
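The throttled function still has to be called when the counts change. In a Svelte component, one way to wire that up (a sketch, not necessarily how Tuggy does it) is a reactive statement that lists its dependencies:

// Re-run the throttled update whenever votesA or votesB changes
$: votesA, votesB, updatePercent();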
Problem 3: Particle system GC pauses
Creating new particle objects on every click:
// Bad - allocates new objects
function spawnParticle() {
  const particle = new Particle();
  particles.push(particle);
}
Garbage collection caused frame drops.
Fix: Object pooling (already covered in devlog 12).
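For reference, a minimal sketch of the pooling idea (not the exact implementation from devlog 12): recycle dead particles instead of allocating new ones, so the garbage collector has nothing to clean up in the hot path.

// Reuse particle objects instead of allocating fresh ones every click
const pool: Particle[] = [];

function spawnParticle(): Particle {
  const particle = pool.pop() ?? new Particle(); // reuse one if available
  particle.reset(); // assumes a reset() method that reinitializes position/velocity
  particles.push(particle);
  return particle;
}

function despawnParticle(particle: Particle) {
  const index = particles.indexOf(particle);
  if (index !== -1) {
    particles.splice(index, 1);
  }
  pool.push(particle); // return it to the pool for the next spawn
}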
Lighthouse Audits
Run Lighthouse for overall health:
npm run build
npm run preview
# Open in Chrome, run Lighthouse audit
My scores:
- Performance: 95
- Accessibility: 100
- Best Practices: 100
- SEO: 100
Key improvements:
- Lazy load images: loading="lazy"
- Preconnect to domains: <link rel="preconnect">
- Minify JavaScript: Vite handles this
- Use WebP images: smaller than JPEG
Bundle Size Analysis
Check what's in the bundle:
npm run build
npx vite-bundle-visualizer
Found: Moment.js was 67KB! I only used it for date formatting.
Fix: Replaced with native Intl:
// Before
import moment from 'moment';
const formatted = moment(date).format('MMM D, YYYY');
// After
const formatted = new Intl.DateTimeFormat('en-US', {
  month: 'short',
  day: 'numeric',
  year: 'numeric'
}).format(new Date(date));
Saved 67KB.
Code Splitting
Load heavy components only when needed:
// Before - loads immediately
import VotingArea from './VotingArea.svelte';
// After - loads on demand
const VotingArea = () => import('./VotingArea.svelte');
Reduces initial bundle by 40%.
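To actually render the lazily loaded component, the module has to be resolved and handed to <svelte:component>. A sketch of how that can look (not necessarily how the real page is wired up):

<script>
  import { onMount } from 'svelte';

  let VotingArea; // component constructor, filled in once the chunk arrives

  onMount(async () => {
    // Vite splits the dynamic import into its own chunk
    VotingArea = (await import('./VotingArea.svelte')).default;
  });
</script>

{#if VotingArea}
  <svelte:component this={VotingArea} />
{:else}
  <p>Loading...</p>
{/if}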
Image Optimization
Use WebP:
<picture>
  <source srcset="image.webp" type="image/webp">
  <img src="image.jpg" alt="Fallback">
</picture>
WebP is typically around 30% smaller than an equivalent-quality JPEG.
Lazy load below fold:
<img src="..." loading="lazy" />
The browser only loads the image once it's about to scroll into view.
Responsive images:
<img
  src="image-800.jpg"
  alt="..."
  srcset="
    image-400.jpg 400w,
    image-800.jpg 800w,
    image-1200.jpg 1200w
  "
  sizes="(max-width: 600px) 100vw, 50vw"
/>
Mobile gets small images, desktop gets large.
Database Query Optimization
Problem: Leaderboard query took 800ms.
Before:
SELECT * FROM votes WHERE matchup_id = $1;
Fetched ALL votes, then aggregated and sorted them client-side.
After:
SELECT
  voter_id,
  SUM(amount) AS total
FROM votes
WHERE matchup_id = $1
GROUP BY voter_id
ORDER BY total DESC
LIMIT 10;
Aggregating in the database brought it down to 50ms.
Caching Strategy
Cache expensive computations:
const cache = new Map();
function getLeaderboard(matchupId: string) {
  if (cache.has(matchupId)) {
    return cache.get(matchupId);
  }
  const result = expensiveQuery(matchupId);
  cache.set(matchupId, result);
  setTimeout(() => cache.delete(matchupId), 30000); // Expire after 30s
  return result;
}
I've written extensively about in-memory caching with TTL, which reduced my database load by 95%. Caching is one of the highest-leverage performance optimizations you can implement.
Monitoring Real User Performance
Track Core Web Vitals:
import { onCLS, onFID, onLCP } from 'web-vitals';
onCLS(metric => {
  trackEvent('web_vitals', {
    name: 'CLS',
    value: metric.value
  });
});

onFID(metric => {
  trackEvent('web_vitals', {
    name: 'FID',
    value: metric.value
  });
});

onLCP(metric => {
  trackEvent('web_vitals', {
    name: 'LCP',
    value: metric.value
  });
});
This surfaces real-world performance in my analytics. (Note: newer versions of web-vitals drop onFID in favor of onINP, since INP has replaced FID as a Core Web Vital.)
Memory Leaks
Problem: Memory usage grew over time.
Cause: Event listeners not cleaned up:
<script>
  onMount(() => {
    window.addEventListener('resize', handleResize);
    // Missing cleanup!
  });
</script>
Fix:
<script>
  onMount(() => {
    window.addEventListener('resize', handleResize);
    return () => {
      window.removeEventListener('resize', handleResize);
    };
  });
</script>
Long Tasks
Chrome flags tasks > 50ms as "long tasks". They block the main thread.
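They can also be caught at runtime with a PerformanceObserver; a sketch (the 'longtask' entry type is supported in Chromium-based browsers):

// Warn whenever the main thread is blocked for more than 50ms
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(`Long task: ${Math.round(entry.duration)}ms`);
  }
});
longTaskObserver.observe({ type: 'longtask', buffered: true });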
Found: Vote batch processing took 120ms.
Fix: Split into chunks with setTimeout:
async function processBatch(votes: Vote[]) {
  const chunkSize = 100;
  for (let i = 0; i < votes.length; i += chunkSize) {
    const chunk = votes.slice(i, i + chunkSize);
    await processChunk(chunk);
    // Yield to the browser so it can handle input and paint
    await new Promise(resolve => setTimeout(resolve, 0));
  }
}
This breaks the work into smaller tasks, keeping the UI responsive.
Takeaway
Don't guess, measure. Chrome DevTools shows exactly where time is spent.
Layout thrashing is sneaky. Batch reads then writes to avoid it.
Object pooling eliminates GC pauses in hot paths like particle systems.
Bundle size matters. Every KB counts on slow connections. Audit dependencies.
Code splitting keeps initial load fast. Load heavy features on demand.
Cache expensive operations. Don't recompute the same thing repeatedly.
Monitor real user performance with Web Vitals. Lab tests don't show real-world issues.
Performance optimization is an ongoing process. Combine profiling with other strategies like vote batching for write performance and in-memory caching for read performance to build a fast, scalable application.