# Rate Limiting
The Turris Public API implements rate limiting to ensure fair usage and maintain service quality for all customers.
## Rate Limit Overview
| Endpoint Type | Rate Limit | Window |
|---|---|---|
| `POST /auth/jwt` | 10 requests | per minute |
| `POST`, `PUT`, `PATCH`, `DELETE` | 60 requests | per minute |
| `GET` endpoints | No limit | - |
Rate limits are applied per API client (OAuth) or per token (Restricted Access Token).
When you approach or exceed rate limits, the API returns these headers:
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
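For example, a response close to the write-endpoint limit might carry headers like these (values are illustrative):

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 3
X-RateLimit-Reset: 1738240496
```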
## Handling Rate Limits
When you exceed the rate limit, the API returns a 429 Too Many Requests response:
```json
{
  "statusCode": 429,
  "requestId": "dev-a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "errorType": "rate_limit_exceeded",
  "errorMessage": ["Rate limit exceeded. Please retry after 60 seconds."],
  "timestamp": "2025-01-30T12:34:56.789Z"
}
```
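If you want typed handling of this envelope, a small model like the one below mirrors the example above (the interface and helper names are ours, not part of the API):

```typescript
// Mirrors the 429 error envelope shown above; names are ours, not the API's.
interface RateLimitError {
  statusCode: number;
  requestId: string;
  errorType: string;
  errorMessage: string[];
  timestamp: string;
}

// Returns true when a response is the rate-limit error shown above.
async function isRateLimited(response: Response): Promise<boolean> {
  if (response.status !== 429) return false;
  const body = (await response.clone().json()) as RateLimitError;
  return body.errorType === 'rate_limit_exceeded';
}
```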
### Retry Strategy
Implement exponential backoff when rate limited:
```typescript
async function fetchWithRetry(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Prefer the server's reset time; otherwise back off exponentially
      const resetTime = response.headers.get('X-RateLimit-Reset');
      const waitTime = resetTime
        ? parseInt(resetTime, 10) * 1000 - Date.now()
        : Math.pow(2, attempt) * 1000; // Exponential backoff

      console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
      await new Promise(resolve => setTimeout(resolve, Math.max(waitTime, 1000)));
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}
```
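A quick usage sketch, assuming the API accepts a Bearer token (the token caching helper below shows one way to obtain it):

```typescript
// Sketch: list agents with automatic retry on 429.
// Assumes a Bearer token scheme; getAccessToken() is defined under Best Practices.
const accessToken = await getAccessToken();
const response = await fetchWithRetry(
  'https://public.api.live.turrisfi.com/v1/agents',
  { headers: { Authorization: `Bearer ${accessToken}` } }
);
```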
## Best Practices
### 1. Cache OAuth Tokens
The `/auth/jwt` endpoint is rate-limited to 10 requests per minute. Cache your tokens for their full 60-minute TTL:
```typescript
let cachedToken: { token: string; expiresAt: number } | null = null;

async function getAccessToken(): Promise<string> {
  // Return the cached token if still valid (with a 5-minute buffer)
  if (cachedToken && cachedToken.expiresAt > Date.now() + 5 * 60 * 1000) {
    return cachedToken.token;
  }

  const response = await fetch('https://public.api.live.turrisfi.com/v1/auth/jwt', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      clientId: process.env.CLIENT_ID,
      clientSecret: process.env.CLIENT_SECRET
    })
  });

  const data = await response.json();
  cachedToken = {
    token: data.data.accessToken,
    expiresAt: Date.now() + 60 * 60 * 1000 // 60 minutes
  };
  return cachedToken.token;
}
```
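The 5-minute buffer refreshes the token shortly before it expires, so a request never starts with a token that dies mid-flight, and the cache keeps you well under the 10-requests-per-minute limit on `/auth/jwt`.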
### 2. Batch Requests When Possible
Instead of making individual requests for each resource, use endpoints that return multiple items:
```http
# ❌ Bad: Multiple requests
GET /v1/agents/abc123
GET /v1/agents/def456
GET /v1/agents/ghi789

# ✅ Good: Single request with filters
GET /v1/agents?downstreamEntityAssociationId=xyz789
```
### 3. Implement Request Queuing
For high-volume integrations, queue requests and process them at a sustainable rate:
```typescript
class RequestQueue {
  private queue: Array<() => Promise<void>> = [];
  private processing = false;
  private requestsPerMinute = 50; // Stay under the 60/min limit

  async add<T>(request: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          resolve(await request());
        } catch (error) {
          reject(error);
        }
      });
      this.process();
    });
  }

  private async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const request = this.queue.shift()!;
      await request();
      // Space requests evenly across the minute
      await new Promise(r => setTimeout(r, 60000 / this.requestsPerMinute));
    }

    this.processing = false;
  }
}
```
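Usage is a drop-in wrapper around the helpers above (a sketch):

```typescript
// Sketch: route traffic through the queue to stay under the 60/min limit.
const queue = new RequestQueue();
const response = await queue.add(async () => {
  const token = await getAccessToken();
  return fetchWithRetry('https://public.api.live.turrisfi.com/v1/agents', {
    headers: { Authorization: `Bearer ${token}` },
  });
});
```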
### 4. Monitor Rate Limit Headers

Track your usage proactively:
```typescript
function checkRateLimits(response: Response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit') || '0', 10);

  // Warn when less than 20% of the window remains
  if (remaining < limit * 0.2) {
    console.warn(`Rate limit warning: ${remaining}/${limit} requests remaining`);
  }
}
```
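A natural place to call `checkRateLimits` is inside `fetchWithRetry`, just before a successful response is returned, so every request made through the helper feeds the warning.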