Using OpenAI SDK with A4F

Leverage the familiar OpenAI SDKs to interact with hundreds of models through A4F's unified API.

Introduction

A4F provides an API endpoint that is compatible with the OpenAI API specification. This means you can use the official OpenAI client libraries (for Python, TypeScript/JavaScript, etc.) to interact with A4F by simply changing the `base_url` and `api_key`.

This approach lets you keep your existing OpenAI integration code while gaining access to a broader range of models from various providers, along with A4F features such as routing via explicit provider selection and unified billing.

Installation

If you haven't already, you'll need to install the OpenAI SDK for your programming language.

Python:

pip install openai

TypeScript/JavaScript:

npm install openai

Configuration

The core of using the OpenAI SDK with A4F is to configure the client with:

  • Your A4F API Key (obtained from your A4F Dashboard).
  • The A4F API Base URL: https://api.a4f.co/v1.

Python Configuration:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_A4F_API_KEY",
    base_url="https://api.a4f.co/v1"
)

TypeScript/JavaScript Configuration:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: "YOUR_A4F_API_KEY",
  baseURL: "https://api.a4f.co/v1",
});

Basic Usage (Chat Completions)

Once configured, making requests through A4F is similar to interacting directly with a provider's API, but you must specify the desired model using the A4F model identifier format with provider prefix (e.g., `provider-1/chatgpt-4o-latest`, `provider-3/claude-3-opus-20240229`). Using model names without the provider prefix (e.g., `gpt-4-turbo`, `claude-3-opus`) will result in an error.

Python:

completion = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ],
    temperature=0.7,
    max_tokens=50
)
print(completion.choices[0].message.content)

cURL:

curl https://api.a4f.co/v1/chat/completions \
  -H "Authorization: Bearer YOUR_A4F_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "provider-1/chatgpt-4o-latest",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 0.7,
    "max_tokens": 50
  }'

Specifying Models

Crucially, when using A4F, the `model` parameter in your API call must use the A4F provider-prefixed model ID. This tells A4F which underlying provider and model to route your request to.

Examples:

  • "provider-1/chatgpt-4o-latest" (Routes to GPT-4o via A4F's Provider 1)
  • "provider-3/claude-3-haiku-20240307" (Routes to Claude 3 Haiku via A4F's Provider 3)
  • "provider-5/some-other-model"

You can find the list of available models and their A4F identifiers on our Models page. For details on how provider prefixes work, see the Provider Routing documentation.

Streaming Responses

A4F supports streaming responses, just like the OpenAI API. To enable streaming, set the `stream: true` parameter in your request. The OpenAI SDK handles the complexities of processing Server-Sent Events (SSE).

Python:

stream = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
print()  # For a final newline

TypeScript/JavaScript:

async function streamA4FResponse() {
  const stream = await client.chat.completions.create({
    model: "provider-1/chatgpt-4o-latest",
    messages: [{ role: "user", content: "Explain quantum computing in simple terms." }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
  process.stdout.write("\n"); // For a final newline
}

streamA4FResponse();

For more information on streaming, refer to our Streaming documentation.

Advanced Usage

Most standard OpenAI API parameters (like `temperature`, `max_tokens`, `top_p`, `tools`, `tool_choice`, `response_format`) are supported by A4F and are passed through to the underlying provider.

Features like function/tool calling can be used if the selected A4F provider and model support them. Refer to our Tool Calling documentation and the specific provider's capabilities. Provider-3 is noted for its function calling support.

If A4F introduces custom headers for features like caching or advanced routing, you may need to attach them yourself. The official OpenAI SDKs support extra headers (on the client or per request); if a header cannot be passed through the SDK, fall back to direct HTTP calls. See our (forthcoming) Setting Headers guide for A4F-specific header names.
