Using the API Directly
Making direct HTTP requests to the A4F API without relying on specific SDKs.
Introduction
While using SDKs like the OpenAI SDK (see our OpenAI SDK guide) can simplify interactions, you can also make direct HTTP requests to the A4F API. This approach offers maximum flexibility and can be useful in environments where SDKs are not available or when you need fine-grained control over the request.
This guide covers the essentials for making direct API calls to A4F, focusing on the chat completions endpoint.
Endpoint URL
The primary A4F endpoint for chat completions is:
POST https://api.a4f.co/v1/chat/completions
Other endpoints (e.g., for model listing or image generation) follow the same base URL structure (`https://api.a4f.co/v1/...`). Refer to the API Reference for specific endpoint paths.
Authentication
All direct API requests must be authenticated using your A4F API key. The key should be included in the `Authorization` header as a Bearer token.
Obtain your API key from the A4F Dashboard. More details can be found in the Authentication documentation.
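The header has the following form, where `YOUR_A4F_API_KEY` is a placeholder for your actual key:

```http
Authorization: Bearer YOUR_A4F_API_KEY
```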
Request Body (Chat Completions)
For the `/v1/chat/completions` endpoint, the request body must be a JSON object adhering to the OpenAI API specification. Key fields include:
- `model`: (String, required) The A4F provider-prefixed model ID (e.g., `"provider-1/chatgpt-4o-latest"`).
- `messages`: (Array of objects, required) The conversation history. Each message object should have a `role` and `content`.
- Other optional parameters like `temperature`, `max_tokens`, `stream`, `tools`, etc.
For a comprehensive list of parameters, see the Parameters documentation and the Chat Completion API reference.
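A minimal request body might look like the following sketch; the model ID is the example used above, and the prompt and optional parameter values are purely illustrative:

```json
{
  "model": "provider-1/chatgpt-4o-latest",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain Server-Sent Events in one sentence."}
  ],
  "temperature": 0.7,
  "max_tokens": 256
}
```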
Headers
In addition to `Authorization`, ensure you set the `Content-Type` header correctly for POST requests:
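```http
Content-Type: application/json
```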
For information on A4F-specific conceptual headers, refer to the Setting Headers guide.
Example Requests
Python (using `requests` library)
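A minimal sketch using the `requests` library (`pip install requests`); the prompt is illustrative and `YOUR_A4F_API_KEY` is a placeholder for your actual key:

```python
import requests

API_KEY = "YOUR_A4F_API_KEY"  # placeholder: substitute your actual A4F API key
URL = "https://api.a4f.co/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

payload = {
    "model": "provider-1/chatgpt-4o-latest",  # A4F provider-prefixed model ID
    "messages": [
        {"role": "user", "content": "Hello, who are you?"},
    ],
}

response = requests.post(URL, headers=headers, json=payload, timeout=60)
response.raise_for_status()  # raises on 4xx/5xx responses

data = response.json()
print(data["choices"][0]["message"]["content"])
```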
Node.js (using native `fetch`)
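A similar sketch assuming Node.js 18+ (where `fetch` is available globally) in an ES module context that allows top-level `await`; again, `YOUR_A4F_API_KEY` is a placeholder:

```javascript
const API_KEY = "YOUR_A4F_API_KEY"; // placeholder: substitute your actual A4F API key

const response = await fetch("https://api.a4f.co/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "provider-1/chatgpt-4o-latest", // A4F provider-prefixed model ID
    messages: [{ role: "user", content: "Hello, who are you?" }],
  }),
});

if (!response.ok) {
  throw new Error(`Request failed: ${response.status} ${await response.text()}`);
}

const data = await response.json();
console.log(data.choices[0].message.content);
```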
cURL
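The equivalent request from the command line (with the same placeholder key):

```bash
curl https://api.a4f.co/v1/chat/completions \
  -H "Authorization: Bearer YOUR_A4F_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "provider-1/chatgpt-4o-latest",
    "messages": [{"role": "user", "content": "Hello, who are you?"}]
  }'
```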
Handling Responses
A successful API call (HTTP status `200 OK`) will return a JSON response compatible with the OpenAI API. This includes fields like `id`, `model`, `choices` (containing the assistant's message), and `usage` (token counts).
Your client code should parse this JSON to extract the required information, typically the content of the assistant's message from `choices[0].message.content`.
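An illustrative (not verbatim) response body following the OpenAI-compatible shape described above; the field values are made up:

```json
{
  "id": "chatcmpl-abc123",
  "model": "provider-1/chatgpt-4o-latest",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
```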
Streaming with Direct API Calls
To stream responses directly, set `"stream": true` in your JSON payload. The server will respond with a stream of Server-Sent Events (SSE). Each event is a line starting with `data: ` followed by a JSON object (a chunk of the completion), and the stream terminates with `data: [DONE]`.
Handling SSE streams requires specific client-side logic to parse the incoming data chunks. Many HTTP libraries provide ways to iterate over response lines or chunks for streaming. Refer to the Streaming documentation for more detailed examples and considerations for parsing SSE.
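As a rough sketch, here is one way to consume the SSE stream in Python with the `requests` library. The per-chunk field access (`choices[0]["delta"]["content"]`) assumes OpenAI-style streaming chunks, and `YOUR_A4F_API_KEY` is a placeholder:

```python
import json
import requests

API_KEY = "YOUR_A4F_API_KEY"  # placeholder: substitute your actual A4F API key
URL = "https://api.a4f.co/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "provider-1/chatgpt-4o-latest",
    "messages": [{"role": "user", "content": "Write a haiku about streams."}],
    "stream": True,  # ask the server to respond with Server-Sent Events
}

with requests.post(URL, headers=headers, json=payload, stream=True, timeout=60) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        # Assumes OpenAI-style chunks: incremental text in choices[0].delta.content
        delta = chunk["choices"][0].get("delta", {}).get("content")
        if delta:
            print(delta, end="", flush=True)
print()
```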
Error Handling
If an API request fails, A4F will return an HTTP error status code (e.g., 400, 401, 429, 500) and a JSON body with error details. Your client code should check the HTTP status code and parse the JSON error response. See the Errors documentation for common error codes.
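For example, in Python with `requests` (the error body is treated as opaque JSON here, since its exact schema is not assumed beyond what is described above):

```python
import requests

response = requests.post(
    "https://api.a4f.co/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_A4F_API_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
    json={
        "model": "provider-1/chatgpt-4o-latest",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)

if not response.ok:
    # Non-2xx status: the body carries error details as JSON (fall back to raw text).
    try:
        detail = response.json()
    except ValueError:
        detail = response.text
    print(f"Request failed with HTTP {response.status_code}: {detail}")
else:
    print(response.json()["choices"][0]["message"]["content"])
```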
When to Use Direct API Calls
- When working in a language or environment where a suitable OpenAI-compatible SDK is unavailable or undesirable.
- When you need fine-grained control over HTTP request details (e.g., specific timeout configurations, custom retry logic not offered by an SDK).
- For lightweight integrations or scripting where including a full SDK might be overkill.
- When you need to set custom HTTP headers that an SDK might not easily expose (refer to Setting Headers).