Errors
Understanding and handling API errors from A4F.
For errors, A4F returns a JSON response with a structure similar to OpenAI's error format. The HTTP response will typically have a status code that indicates the nature of the error.

The HTTP status code will often match the `error.code` in the JSON payload when the error originates from A4F directly (e.g., authentication or rate limiting). If an error occurs while the underlying LLM provider is processing your request (after A4F has successfully forwarded it), the HTTP status might be `200 OK` (for streams that start successfully but then error) or a provider-specific error code proxied by A4F. In such cases, the error details will appear in the JSON response body or within an SSE data event.
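The exact fields vary by provider and error type, but a typical error body follows the OpenAI-style shape shown below (the specific values here are illustrative, not actual A4F output):

```json
{
  "error": {
    "message": "Invalid API key provided.",
    "type": "authentication_error",
    "code": 401
  }
}
```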
Example code for printing errors in JavaScript (a minimal sketch: the base URL, model ID, and environment variable below are placeholders for your own values):
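```javascript
// Minimal sketch for surfacing A4F errors. The base URL, model ID, and
// A4F_API_KEY environment variable are placeholders — substitute your own values.
async function main() {
  const response = await fetch("https://api.a4f.co/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.A4F_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "provider-1/gpt-4o-mini", // placeholder model ID
      messages: [{ role: "user", content: "Hello!" }],
    }),
  });

  if (!response.ok) {
    // Non-2xx HTTP status: the body should contain an OpenAI-style error object.
    const body = await response.json();
    console.error(`HTTP ${response.status}:`, body.error?.message);
    console.error("Error code:", body.error?.code, "type:", body.error?.type);
    return;
  }

  const data = await response.json();
  console.log(data.choices[0].message.content);
}

main().catch((err) => console.error("Network or parsing error:", err));
```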
Common Error Codes
400 Bad Request (HTTP Status)
The request was unacceptable, often due to missing a required parameter, malformed JSON, or invalid parameter values (e.g., an unsupported model ID).
401 Unauthorized (HTTP Status)
No API key provided, or the API key is invalid, disabled, or revoked. Check your API Keys page and ensure your Dashboard shows an active plan and key status.
403 Forbidden (HTTP Status)
Your API key does not have permission to access the requested resource or model. This could be due to plan restrictions (e.g., trying to access a Pro model with a Free plan key) or if your account has been flagged for violating terms of service. Check your current plan on the Dashboard.
408 Request Timeout (HTTP Status)
Your request timed out. This could be due to network issues or the upstream provider taking too long to respond. Consider retrying or checking the Timeout Handling guide.
429 Too Many Requests (HTTP Status)
You have exceeded your current rate limit (RPM or RPD) for your plan. Check your Dashboard for current limits and usage. See the Limits documentation, and consider retrying with backoff (a helper is sketched after this list).
500 Internal Server Error (HTTP Status)
An unexpected error occurred on A4F's servers or with an upstream provider. Please try again later. If the issue persists, contact us via our official channels: Telegram, Instagram, Discord (@devsdocode), or Twitter/X.
502 Bad Gateway (HTTP Status)
A4F received an invalid response from the upstream model provider. This might be a temporary issue with the provider.
503 Service Unavailable (HTTP Status)
The A4F service or an underlying model provider is temporarily unavailable, likely due to overload or maintenance. Please try again later. Check our Status Page for updates.
504 Gateway Timeout (HTTP Status)
A4F did not receive a timely response from the upstream model provider.
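Status codes such as 408, 429, 502, 503, and 504 are usually transient, so a small retry helper with exponential backoff is often enough. A minimal sketch (the retry count and delays are arbitrary defaults, not A4F recommendations):

```javascript
// Retry a request a few times when the status suggests a transient problem.
// fetchFn must build a fresh request on each call (request bodies cannot be reused).
async function fetchWithRetry(fetchFn, maxRetries = 3) {
  const retryableStatuses = new Set([408, 429, 500, 502, 503, 504]);
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetchFn();
    if (response.ok || !retryableStatuses.has(response.status) || attempt === maxRetries) {
      return response;
    }
    // Exponential backoff: 1s, 2s, 4s, ...
    const delayMs = 1000 * 2 ** attempt;
    console.warn(`Got ${response.status}, retrying in ${delayMs} ms...`);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```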
Moderation Errors
If your input (prompt) is flagged by an underlying provider's content moderation system, A4F will typically proxy the error from the provider. The HTTP status code might be `400 Bad Request`, or a specific code like `403 Forbidden` from A4F if access is blocked due to such flags. The JSON error response body may contain additional `error.metadata` or specific `error.code` values, such as `"content_filter"`, indicating the nature of the flag.
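A quick way to detect such flags in client code, assuming the OpenAI-style error shape described above:

```javascript
// Inspect a parsed error body for a moderation/content-filter flag.
function isContentFiltered(errorBody) {
  const code = errorBody?.error?.code;
  const type = errorBody?.error?.type;
  return code === "content_filter" || type === "content_filter";
}
```

If this returns true, rephrasing the prompt is usually more productive than retrying it unchanged.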
Provider Errors
If the selected model provider (e.g., OpenAI or Anthropic via A4F's routing) encounters an error while processing your request, A4F will generally pass this error information back to you. The HTTP status code might be a `5xx` error (such as 500, 502, or 503) or a `4xx` error if the provider deems the request invalid for their specific model. The JSON error body will usually contain a message from the provider. Inspect the `error.message` and potentially an `error.code` or `error.type` field for more specific details.
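For example, a small helper that logs whichever of these fields the proxied error actually carries (all accesses are defensive, since providers differ in what they populate):

```javascript
// Log whatever detail the proxied provider error carries.
function logProviderError(status, errorBody) {
  const err = errorBody?.error ?? {};
  console.error(`Provider error (HTTP ${status}): ${err.message ?? "no message"}`);
  if (err.code) console.error("code:", err.code);
  if (err.type) console.error("type:", err.type);
  if (err.metadata) console.error("metadata:", JSON.stringify(err.metadata));
}
```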
When No Content is Generated
Occasionally, a model may not generate any content, or `choices[0].message.content` might be empty or `null` even with a `200 OK` status. This can happen for several reasons:
- The model is "warming up" from a cold start (more common with less frequently used models or providers).
- The system is scaling up to handle more requests.
- The prompt itself might have inadvertently triggered a content filter or safety mechanism at the provider level, resulting in an empty response rather than an explicit error.
- The model genuinely had no further output for the given prompt and parameters (e.g., `finish_reason: "stop"` but empty content).
Warm-up times usually range from a few seconds to a few minutes. If you encounter persistent no-content issues, consider:
- Implementing a simple retry mechanism with a short delay (see the sketch after this list).
- Trying a different model or provider available through A4F.
- Reviewing your prompt for any potentially problematic content.
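A minimal sketch of such a retry loop (the requestCompletion function, retry count, and delay are placeholders to adapt to your client):

```javascript
// Retry a completion call when the response comes back with empty content.
// requestCompletion is assumed to return the parsed JSON response body.
async function completeWithRetry(requestCompletion, maxRetries = 2, delayMs = 2000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const data = await requestCompletion();
    const content = data?.choices?.[0]?.message?.content;
    if (content && content.trim().length > 0) {
      return data;
    }
    if (attempt < maxRetries) {
      console.warn("Empty completion, retrying after a short delay...");
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return null; // Still empty after retries; consider a different model or prompt.
}
```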
Additionally, be aware that in some cases you may still be charged for the prompt processing cost by the upstream provider, even if no usable content is generated by the model. Token usage is generally reported in the `usage` field of the response.