
Rate Limits

API rate limiting behavior and how to handle 429 responses.

Overview

MakeEmWait uses rate limiting to protect the API from abuse and ensure fair usage across all accounts. When you exceed the allowed request rate, the API returns HTTP 429 Too Many Requests with the error code RATE_LIMITED.

Note: Rate limits apply to all plans. They are the same across all subscription tiers and are designed to prevent abuse while allowing normal application usage.

Rate Limit Numbers

Global Rate Limit

All endpoints share a single rate limit pool enforced at the API server level using a token bucket algorithm:

  • Steady-state rate: 100 requests per second
  • Burst capacity: 200 requests
  • Scope: all endpoints, all methods

There is no per-endpoint or per-user differentiation — all API calls count toward the same 100 req/s pool. In practice, normal application usage (signup forms, dashboard views, API integrations) will never come close to this limit.
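The token bucket behavior can be sketched as follows. This is an illustrative model of the documented limits (100 req/s steady state, 200-token burst), not the server's actual implementation; the `TokenBucket` class name and injectable clock are our own choices:

```javascript
// Illustrative token bucket: tokens refill continuously at ratePerSec,
// up to capacity. Each request consumes one token; an empty bucket
// corresponds to a 429 response.
class TokenBucket {
  constructor(ratePerSec = 100, capacity = 200, now = () => Date.now()) {
    this.ratePerSec = ratePerSec;
    this.capacity = capacity;
    this.tokens = capacity; // start full: a fresh client can burst
    this.now = now;
    this.last = now();
  }

  // Refill based on elapsed time, then try to take one token.
  // Returns true if the request is allowed, false if it would be rejected.
  tryConsume() {
    const t = this.now();
    const elapsedSec = (t - this.last) / 1000;
    this.last = t;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

This is why a brief burst of up to 200 requests succeeds, but a sustained rate above 100 req/s eventually drains the bucket and starts returning 429s.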

Application-Level Rate Limits

Some endpoints enforce additional per-user rate limits:

  • Email verification resend (POST /auth/resend-verification): 1 per 60 seconds, up to 10 total per verification cycle. Returns 429 with a Retry-After header showing the seconds to wait.
  • Referral notification emails: 1 per referrer per hour. Silently skipped (no 429; the signup still succeeds).
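The resend limit can also be mirrored client-side, so users see a cooldown instead of hitting a 429. Here is a sketch using the values from the table above; `makeResendGate` is a hypothetical helper name, not part of the API:

```javascript
// Client-side gate mirroring the documented resend limit:
// 1 per 60 seconds, max 10 total per verification cycle.
function makeResendGate(cooldownMs = 60_000, maxTotal = 10, now = () => Date.now()) {
  let lastSent = -Infinity;
  let total = 0;
  return function tryResend() {
    if (total >= maxTotal) return { ok: false, reason: "max_resends_reached" };
    const waitMs = lastSent + cooldownMs - now();
    if (waitMs > 0) return { ok: false, reason: "cooldown", waitMs };
    lastSent = now();
    total += 1;
    return { ok: true };
  };
}
```

When the gate reports a cooldown, disable the resend button and show the remaining wait instead of firing a request that is guaranteed to be rejected.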

Stripe Rate Limit Passthrough

If Stripe throttles a billing request (checkout, portal, webhook processing), the API returns 429 with the error passed through. This is rare in normal usage.


Error Response Format

A rate-limited response looks like this:

{
  "error": "Please wait 45 seconds before requesting a new link",
  "error_code": "RATE_LIMITED",
  "request_id": "abc123"
}
  • error: human-readable message suitable for displaying to end users
  • error_code: always RATE_LIMITED for 429 responses; use this for programmatic handling
  • request_id: unique request ID for debugging and support
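Because error_code is the stable field, programmatic handling should branch on it rather than on the human-readable error string. A minimal sketch (`isRateLimited` is a hypothetical helper name):

```javascript
// Detect a rate-limit error from a parsed response body.
// Key off error_code, never the wording of the error message.
function isRateLimited(body) {
  return Boolean(body && body.error_code === "RATE_LIMITED");
}
```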

Retry-After Header

Application-level 429 responses include a Retry-After header with the number of seconds to wait before retrying:

HTTP/1.1 429 Too Many Requests
Retry-After: 45
Note: Global 429 responses (from exceeding the 100 req/s limit) do not include a Retry-After header. Use exponential backoff starting at 1 second for these.

Handling Rate Limits

When your application receives a 429 response, follow these best practices:

  • Implement exponential backoff — wait 1s, then 2s, then 4s, then 8s between retries. This avoids hammering the API while it recovers.
  • Cache responses where possible — waitlist config and signup counts change infrequently. Cache them locally to reduce unnecessary API calls.
  • Batch operations when available — use CSV import instead of creating signups one at a time.
  • Use webhooks instead of polling — set up webhooks to receive real-time notifications rather than repeatedly checking for updates.
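The caching advice above can be sketched as a small TTL cache. `makeCachedFetcher` and the five-minute TTL are illustrative choices, not part of the API:

```javascript
// Tiny TTL cache for infrequently-changing responses such as waitlist config.
// fetcher(key) is any async function that performs the real API call.
function makeCachedFetcher(fetcher, cacheTtlMs = 5 * 60 * 1000, now = () => Date.now()) {
  const cache = new Map(); // key -> { value, expires }
  return async function cachedFetch(key) {
    const hit = cache.get(key);
    if (hit && hit.expires > now()) return hit.value; // fresh: skip the API call
    const value = await fetcher(key);
    cache.set(key, { value, expires: now() + cacheTtlMs });
    return value;
  };
}
```

Wrap the config fetch in this helper so repeated page loads hit the local cache instead of consuming your request budget.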

Code Example

JavaScript fetch wrapper with automatic retry on 429 responses:

async function apiCall(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Use Retry-After header if present, otherwise exponential backoff
      const retryAfter = parseInt(response.headers.get("Retry-After") || "0", 10);
      const delay = retryAfter > 0 ? retryAfter * 1000 : Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
      continue;
    }

    return response;
  }
  throw new Error("Max retries exceeded");
}

Use it as a drop-in replacement for fetch:

const response = await apiCall(
  `${API_BASE}/public/waitlists/${waitlistId}/signups`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email }),
  }
);

Tips for Staying Within Limits

  • Use the embed widget for public signups — it handles rate limiting and retries automatically. See Embed Widget.
  • Cache the waitlist config response — the /public/waitlists/:id/config endpoint returns data that rarely changes. Cache it for a few minutes instead of fetching on every page load.
  • Use CSV import for bulk operations — creating signups one at a time is slower and more likely to trigger rate limits. The CSV import endpoint processes them in batch.
  • Set up webhooks for real-time notifications — polling endpoints like signup count or leaderboard will consume your request budget quickly. Webhooks push updates to you as they happen.