Using OpenAI SDK with A4F
Leverage the familiar OpenAI SDKs to interact with hundreds of models through A4F's unified API.
Introduction
A4F provides an API endpoint that is compatible with the OpenAI API specification. This means you can use the official OpenAI client libraries (for Python, TypeScript/JavaScript, etc.) to interact with A4F by simply changing the `base_url` and `api_key`.
This approach allows you to keep your existing OpenAI integration code while gaining access to a broader range of models from various providers, along with A4F's features like optimized routing (by explicit provider selection) and unified billing.
Key Benefits
- Minimal code changes to switch to A4F.
- Access a wide variety of models beyond just OpenAI's.
- Use a single A4F API key for all requests, regardless of the underlying provider.
Installation
If you haven't already, you'll need to install the OpenAI SDK for your programming language.
Python SDK
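The official Python SDK is distributed as the `openai` package on PyPI:

```bash
pip install openai
```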
TypeScript/JavaScript SDK
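The official Node.js/TypeScript SDK is distributed as the `openai` package on npm:

```bash
npm install openai
```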
Configuration
The core of using the OpenAI SDK with A4F is to configure the client with:
- Your A4F API Key (obtained from your A4F Dashboard).
- The A4F API Base URL: https://api.a4f.co/v1.
Important Security Note: Treat your A4F API key like a password. Load it from an environment variable or secrets manager rather than hardcoding it, and never expose it in client-side (browser) code or commit it to version control.
Python Configuration:
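A minimal sketch, assuming your key is stored in an environment variable (the variable name `A4F_API_KEY` is illustrative):

```python
import os
from openai import OpenAI

# Point the official OpenAI client at A4F's OpenAI-compatible endpoint.
client = OpenAI(
    api_key=os.environ.get("A4F_API_KEY"),  # your A4F API key from the A4F Dashboard
    base_url="https://api.a4f.co/v1",
)
```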
TypeScript/JavaScript Configuration:
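The equivalent sketch for the Node.js/TypeScript SDK, again assuming an illustrative `A4F_API_KEY` environment variable:

```typescript
import OpenAI from "openai";

// Point the official OpenAI client at A4F's OpenAI-compatible endpoint.
const client = new OpenAI({
  apiKey: process.env.A4F_API_KEY, // your A4F API key from the A4F Dashboard
  baseURL: "https://api.a4f.co/v1",
});
```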
Basic Usage (Chat Completions)
Once the client is configured, you can make API calls as you normally would with the OpenAI SDK. For chat completions, you'll use the `chat.completions.create` method.
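A minimal sketch using the Python client configured above and the `provider-1/chatgpt-4o-latest` model ID from the examples in the next section (the prompt is illustrative):

```python
completion = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",  # A4F provider-prefixed model ID
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an API gateway does in one sentence."},
    ],
)

print(completion.choices[0].message.content)
```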
Specifying Models
Crucially, when using A4F, the `model` parameter in your API call must use the A4F provider-prefixed model ID. This tells A4F which underlying provider and model to route your request to.
Examples:
"provider-1/chatgpt-4o-latest"
(Routes to GPT-4o via A4F's Provider 1)"provider-3/claude-3-haiku-20240307"
(Routes to Claude 3 Haiku via A4F's Provider 3)"provider-5/some-other-model"
You can find the list of available models and their A4F identifiers on our Models page. For details on how provider prefixes work, see the Provider Routing documentation.
Streaming Responses
A4F supports streaming responses, just like the OpenAI API. To enable streaming, set the `stream: true` parameter in your request. The OpenAI SDK handles the complexities of processing Server-Sent Events (SSE).
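A minimal Python sketch, reusing the client configured earlier (the model ID and prompt are illustrative):

```python
stream = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Write a haiku about request routing."}],
    stream=True,  # receive the response incrementally as SSE chunks
)

for chunk in stream:
    # Each chunk carries a partial delta; content can be absent on some chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```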
For more information on streaming, refer to our Streaming documentation.
Advanced Usage
Most standard OpenAI API parameters (like `temperature`, `max_tokens`, `top_p`, `tools`, `tool_choice`, `response_format`) are supported by A4F and are passed through to the underlying provider.
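For example, a Python sketch passing a few of these parameters through (the model ID and values are illustrative; whether `response_format` is honored depends on the underlying provider and model):

```python
response = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",
    messages=[{"role": "user", "content": "List three uses of a reverse proxy as JSON."}],
    temperature=0.2,
    max_tokens=256,
    top_p=0.9,
    response_format={"type": "json_object"},  # only applied if the underlying model supports it
)

print(response.choices[0].message.content)
```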
Features like function/tool calling can be used if the selected A4F provider and model support them. Refer to our Tool Calling documentation and the specific provider's capabilities. Provider-3 is noted for its function calling support.
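A minimal tool-calling sketch with the Python SDK, using the standard OpenAI tools schema; the `get_weather` function is hypothetical, and a Provider-3 model is used here because of its noted function calling support:

```python
# Describe a hypothetical tool the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="provider-3/claude-3-haiku-20240307",  # Provider-3 is noted for function calling support
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call the tool, the call arrives on message.tool_calls.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```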
If A4F introduces custom headers for features like caching or advanced routing, you may need to send extra headers with each request, or fall back to direct HTTP calls if the SDK cannot pass them through. See our (forthcoming) Setting Headers guide for more on A4F-specific headers.