SmartLead applies rate limits to protect the platform and ensure fair access for all users. This guide explains the rate limit structure, how to detect when you’re being throttled, and strategies for building efficient integrations.
Instead of waiting for 429s, track your usage and throttle proactively:
```python
import time
from collections import deque

class RateLimiter:
    """Client-side rate limiter to avoid 429 errors."""

    def __init__(self, max_per_minute=50):
        self.max_per_minute = max_per_minute
        self.requests = deque()

    def wait_if_needed(self):
        """Block until it's safe to make another request."""
        now = time.time()
        # Remove requests older than 60 seconds
        while self.requests and self.requests[0] < now - 60:
            self.requests.popleft()
        if len(self.requests) >= self.max_per_minute:
            # Wait until the oldest request expires
            wait_time = 60 - (now - self.requests[0])
            if wait_time > 0:
                print(f"Throttling: waiting {wait_time:.1f}s")
                time.sleep(wait_time)
        self.requests.append(time.time())

# Usage
limiter = RateLimiter(max_per_minute=50)  # Stay under the 60/min limit
for campaign_id in campaign_ids:
    limiter.wait_if_needed()
    data = make_request("GET", f"campaigns/{campaign_id}/analytics")
```
Set your client-side limit to 80% of the actual limit (e.g., 50 requests/minute when the limit is 60). This buffer accounts for timing differences and prevents edge-case throttling.
Cache data that doesn’t change often to reduce API calls:
```python
import time

_cache = {}

def cached_request(endpoint, ttl_seconds=300):
    """Cache GET requests for a specified TTL."""
    now = time.time()
    if endpoint in _cache:
        data, cached_at = _cache[endpoint]
        if now - cached_at < ttl_seconds:
            return data
    data = make_request("GET", endpoint)
    _cache[endpoint] = (data, now)
    return data

# Campaigns list doesn't change often — cache for 5 minutes
campaigns = cached_request("campaigns/", ttl_seconds=300)

# Analytics change frequently — cache for 1 minute
analytics = cached_request(f"campaigns/{cid}/analytics", ttl_seconds=60)
```
Check if another integration or script is using the same API key. Rate limits are per-key, not per-client. Consider using separate API keys for different integrations.
Rate limits feel too restrictive
Review your request patterns — are you polling when you could use webhooks? Are you making individual requests when batch endpoints are available? If you genuinely need higher limits, contact SmartLead support about Enterprise plans.
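As a sketch of the webhook route, a small handler can apply pushed events to local state so you stop polling for changes entirely. The `event_type` values and payload fields below are illustrative placeholders, not SmartLead's actual webhook schema; check the webhook documentation for the real shapes:

```python
import json

def handle_webhook(raw_body, state):
    """Apply one webhook event to local state instead of polling for changes.

    Event names and fields here are hypothetical placeholders.
    """
    event = json.loads(raw_body)
    etype = event.get("event_type")
    if etype == "EMAIL_REPLY":
        # Record which lead replied (hypothetical "lead_email" field)
        state.setdefault("replies", []).append(event["lead_email"])
    elif etype == "EMAIL_OPEN":
        state["opens"] = state.get("opens", 0) + 1
    return state
```

Each webhook delivery replaces what would otherwise be a polling request per lead, so the API budget is spent only on actions you initiate.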
Retry-After header is missing on 429 response
Default to exponential backoff starting at 1 second. Most rate limit windows reset within 60 seconds.
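That fallback can be sketched as a small transport-agnostic retry wrapper. The `send` callable and its `(status, headers, body)` tuple are assumptions for illustration, not SmartLead's client API:

```python
import time
import random

def with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call send() until it stops returning a 429.

    Honors a Retry-After header when present; otherwise falls back to
    exponential backoff starting at base_delay (1s), capped at 60s,
    since most rate limit windows reset within a minute.
    """
    delay = base_delay
    for _ in range(max_retries):
        status, headers, body = send()
        if status != 429:
            return body
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            wait = float(retry_after)
        else:
            # No Retry-After: exponential backoff with a little jitter
            wait = min(delay, 60) + random.uniform(0, 0.25)
            delay *= 2
        sleep(wait)
    raise RuntimeError("Still rate limited after all retries")
```

In production, `send` would wrap your HTTP call and return the status code, headers, and parsed body; injecting `sleep` keeps the backoff logic easy to unit-test.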