Devlog 17: Rate Limiting & Abuse Prevention Without Killing UX
Once Tuggy went live, I started seeing suspicious voting patterns. Accounts voting hundreds of times per second. Device IDs switching mid-session. Clear signs of abuse.
I needed to stop bots and manipulators without making the game feel laggy or restricted for real users. That balance is tricky.
The Abuse Patterns
Pattern 1: Vote spam
- Botted accounts voting 1000+ times/second
- Way beyond human clicking speed
Pattern 2: Device fingerprint cycling
- Script that generates new device_id on each vote
- Bypasses per-device rate limits
Pattern 3: Vote flooding
- Bursts of identical votes from same source
- Often automated scripts
Pattern 4: Multi-tab exploitation
- Opening 50 tabs, all voting simultaneously
- Legitimate users can do this too, so it needs careful handling
Client-Side Rate Limiting
First line of defense: don't even let the request leave the browser.
let lastVoteTime = 0;
const MIN_VOTE_INTERVAL = 100; // 100ms = max 10 clicks/second
function attemptVote(side: 'a' | 'b') {
const now = Date.now();
if (now - lastVoteTime < MIN_VOTE_INTERVAL) {
// Too fast, ignore
return;
}
lastVoteTime = now;
castVote(side);
}
This allows rapid clicking (10/sec) but prevents spam scripts running at hundreds/second.
But anything client-side can be bypassed by a script hitting the API directly. The server needs its own limits.
Server-Side Rate Limiting
Supabase Edge Functions can enforce rate limits:
// supabase/functions/cast-vote/index.ts
import { Ratelimit } from 'npm:@upstash/ratelimit';
import { Redis } from 'npm:@upstash/redis';
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(100, '60 s'), // 100 votes/minute
});
Deno.serve(async (req) => {
  const { deviceId, userId } = await req.json();
  // Prefer the account ID; fall back to device ID for anonymous voters
  const identifier = userId || deviceId;
  const { success } = await ratelimit.limit(identifier);
  if (!success) {
    return new Response('Rate limit exceeded', { status: 429 });
  }
  // Process vote
  await insertVote(/* ... */);
  return new Response('OK');
});
100 votes/minute averages out to ~1.6/second. A sliding window still allows short bursts of fast clicking, and nobody sustains the client-side cap of 10/sec for a full minute, so humans stay under the limit while spam bots hit the wall.
Detecting Suspicious Patterns
Rate limits alone aren't enough. Some abuse is subtler.
I track voting patterns in real-time:
interface VotingSession {
deviceId: string;
votes: { timestamp: number, side: string }[];
}
function detectSuspiciousActivity(session: VotingSession): boolean {
  const last100 = session.votes.slice(-100);
  if (last100.length < 10) return false; // too little data to judge
  // Pattern 1: Perfect timing (bot)
  const intervals = last100
    .map((v, i) => (i > 0 ? v.timestamp - last100[i - 1].timestamp : 0))
    .filter((i) => i > 0);
  if (intervals.length === 0) return true; // every vote in the same millisecond
  const avgInterval = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance = intervals.reduce((sum, i) =>
    sum + Math.pow(i - avgInterval, 2), 0
  ) / intervals.length;
  if (variance < 10) {
    // Nearly perfect timing = bot
    return true;
  }
  // Pattern 2: Alternating sides rapidly (chaos voting)
  let alternations = 0;
  for (let i = 1; i < last100.length; i++) {
    if (last100[i].side !== last100[i - 1].side) {
      alternations++;
    }
  }
  if (alternations / (last100.length - 1) > 0.8) {
    // 80%+ side switching = suspicious
    return true;
  }
  return false;
}
Humans have irregular timing and usually stick to one side. Bots often have perfect intervals and random side switching.
Handling Suspicious Users
When abuse is detected, I don't ban immediately (false positives happen). Instead, I escalate:
Level 1: Soft limit
- Slower vote processing
- Votes queued instead of instant
Level 2: CAPTCHA
- Require CAPTCHA to continue voting
- Most bots give up here
Level 3: Temporary cooldown
- 5-minute timeout
- Repeat offenders get longer timeouts
Level 4: Manual review
- Flag account for investigation
- Can lead to permanent ban
async function handleSuspiciousVote(deviceId: string) {
  const flags = await getFlagCount(deviceId);
  // Record this incident so repeat offenders actually climb the ladder
  // (incrementFlagCount is an assumed helper, like the others here)
  await incrementFlagCount(deviceId);
  if (flags >= 3) {
    // Level 4: Flag for review
    await flagForReview(deviceId);
    return { error: 'Account under review' };
  } else if (flags === 2) {
    // Level 3: Cooldown
    await setCooldown(deviceId, 5 * 60 * 1000);
    return { error: 'Too many suspicious votes. Try again in 5 minutes.' };
  } else if (flags === 1) {
    // Level 2: CAPTCHA
    return { requireCaptcha: true };
  } else {
    // Level 1: Soft limit
    await queueVote(deviceId);
    return { queued: true };
  }
}
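Wiring detection and handling together in the vote endpoint looks roughly like this. A sketch: getRecentSession is an assumed helper that loads the device's recent votes, and insertVote is the same placeholder as in the edge function above.
async function processVote(deviceId: string, side: 'a' | 'b') {
  // Load recent votes for this device and check them before accepting more
  const session = await getRecentSession(deviceId); // assumed helper
  if (detectSuspiciousActivity(session)) {
    return handleSuspiciousVote(deviceId);
  }
  await insertVote(deviceId, side);
  return { ok: true };
}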
Device Fingerprint Stability
Fingerprinting can't stop device_id cycling outright, but building IDs from harder-to-randomize signals raises the cost:
async function generateStableFingerprint(): Promise<string> {
const components = [
navigator.userAgent,
navigator.language,
screen.width,
screen.height,
screen.colorDepth,
new Date().getTimezoneOffset(),
navigator.hardwareConcurrency,
navigator.platform,
// Canvas fingerprinting
await getCanvasFingerprint(),
// WebGL fingerprinting
await getWebGLFingerprint(),
];
return hashComponents(components);
}
Canvas and WebGL fingerprints are harder to spoof than just user agent + screen size.
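For reference, here's roughly what those helpers look like. A minimal sketch: the specific drawing calls are arbitrary, what matters is that canvas output varies subtly across GPU, driver, and font stacks. hashComponents just SHA-256s the joined components with SubtleCrypto. (getCanvasFingerprint is shown synchronous; awaiting it above is harmless.)
// Minimal canvas fingerprint: render text and shapes, return the
// encoded pixel output, which differs per GPU/driver/font stack
function getCanvasFingerprint(): string {
  const canvas = document.createElement('canvas');
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext('2d');
  if (!ctx) return 'no-canvas';
  ctx.textBaseline = 'top';
  ctx.font = '14px Arial';
  ctx.fillStyle = '#f60';
  ctx.fillRect(0, 0, 100, 25);
  ctx.fillStyle = '#069';
  ctx.fillText('tuggy-fp', 2, 15);
  return canvas.toDataURL();
}
// SHA-256 the joined components with SubtleCrypto, hex-encoded
async function hashComponents(components: unknown[]): Promise<string> {
  const data = new TextEncoder().encode(components.join('|'));
  const digest = await crypto.subtle.digest('SHA-256', data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}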
Rate Limit Bypass Detection
Some attackers try to bypass rate limits by:
- Creating multiple accounts
- Using VPNs to change IP
- Clearing cookies and localStorage
I detect this by tracking broader patterns:
-- Find devices with suspiciously similar fingerprints
SELECT
  device_id,
  COUNT(*) AS similar_devices
FROM (
  SELECT
    d1.device_id,
    d2.device_id AS similar_device
  FROM device_fingerprints d1
  JOIN device_fingerprints d2
    ON d1.device_id != d2.device_id
    AND d1.user_agent = d2.user_agent
    AND d1.screen_width = d2.screen_width
    AND d1.timezone = d2.timezone
  WHERE d1.created_at > NOW() - INTERVAL '1 hour'
    AND d2.created_at > NOW() - INTERVAL '1 hour'
) similar
GROUP BY device_id
HAVING COUNT(*) >= 5;
If a device has five or more lookalikes with identical fingerprints created in the past hour, those "different" devices are almost certainly one attacker cycling IDs.
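I could run that query on a schedule and feed hits straight into the escalation pipeline. A sketch, assuming the SQL above is wrapped in a hypothetical similar_fingerprints view and reusing flagForReview from earlier:
import { createClient } from 'npm:@supabase/supabase-js';
const supabase = createClient(
  Deno.env.get('SUPABASE_URL')!,
  Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!,
);
// Scheduled sweep: flag every device the similarity query surfaces
async function sweepClonedFingerprints() {
  const { data, error } = await supabase
    .from('similar_fingerprints') // assumed view wrapping the SQL above
    .select('device_id');
  if (error || !data) return;
  for (const { device_id } of data) {
    await flagForReview(device_id);
  }
}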
Legitimate Power Users
Some real users are just very enthusiastic. They open multiple tabs, click very fast, vote for hours.
To distinguish from bots, I look at:
Human patterns:
- Irregular clicking rhythm
- Mouse movement between clicks
- Occasional pauses (bathroom breaks, etc.)
- Engagement with other features (leaderboards, upgrades)
Bot patterns:
- Perfect intervals
- No mouse movement
- 24/7 activity with no breaks
- Only votes, never visits other pages
function isLikelyHuman(session: VotingSession): boolean {
return (
hasIrregularTiming(session) &&
hasMouseActivity(session) &&
hasNaturalBreaks(session) &&
visitedOtherPages(session)
);
}
If all four are true, probably human, even if voting a lot.
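The four helpers are simple heuristics. hasNaturalBreaks, for example, might just look for one multi-minute gap in the vote stream; the 2-minute threshold here is a guess, not a tuned value:
// A human eventually pauses; a farm bot doesn't. Look for at
// least one gap of 2+ minutes somewhere in the vote stream.
function hasNaturalBreaks(session: VotingSession): boolean {
  const GAP_MS = 2 * 60 * 1000;
  return session.votes.some((v, i) =>
    i > 0 && v.timestamp - session.votes[i - 1].timestamp > GAP_MS
  );
}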
Multi-Tab Detection
Opening many tabs is technically legitimate but can overwhelm the system:
// Each tab pings the server when it opens; count tabs per device
// over a rolling 60-second window
const tabCount = await redis.incr(`tabs:${deviceId}`);
await redis.expire(`tabs:${deviceId}`, 60);
if (tabCount > 10) {
  return {
    error: 'Too many tabs open. Please close some and try again.'
  };
}
Limit to 10 tabs per device: enough for power users, low enough to prevent accidental DoS from enthusiasm alone.
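The client side is a single ping per tab. A sketch, where /register-tab is a hypothetical endpoint backed by the counter above:
// Runs once per tab on page load; the server-side counter then
// reflects tabs opened in the last 60 seconds
function registerTab(deviceId: string): Promise<Response> {
  return fetch('/register-tab', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ deviceId }),
  });
}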
IP-Based Rate Limiting
As a last resort, I rate limit by IP:
const ipLimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(1000, '60 s'), // 1000 votes/min per IP
});
const { success } = await ipLimit.limit(clientIP);
This is loose (1000/min) because multiple users might share an IP (corporate networks, public WiFi). But it stops the most egregious abuse.
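Inside the edge function, clientIP comes off the proxy headers. A sketch, assuming the usual x-forwarded-for convention holds for requests reaching the function:
// First entry in x-forwarded-for is the original client, assuming
// the proxy in front of the function sets the header honestly
const clientIP =
  req.headers.get('x-forwarded-for')?.split(',')[0].trim() ?? 'unknown';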
Takeaway
Abuse prevention is a cat-and-mouse game. Attackers adapt, you respond, repeat.
Start with generous limits. False positives are worse than some abuse slipping through.
Client-side limits improve UX. Server-side limits prevent bypasses. Need both.
Behavioral detection (timing, patterns) catches sophisticated bots that pass rate limits.
Escalate gradually. Don't ban on first flag. Give users chances to prove they're human.
The goal isn't perfect security (impossible). It's making abuse hard enough that it's not worth the effort.
Next up: error handling and graceful degradation when things break.