Using OpenAI SDK with A4F

Leverage the familiar OpenAI SDKs to interact with hundreds of models through A4F's unified API.

Introduction

A4F provides an API endpoint that is compatible with the OpenAI API specification. This means you can use the official OpenAI client libraries (for Python, TypeScript/JavaScript, etc.) to interact with A4F by simply changing the `base_url` and `api_key`.

This approach allows you to keep your existing OpenAI integration code while gaining access to a broader range of models from various providers, along with A4F's features like optimized routing (by explicit provider selection) and unified billing.

Installation

If you haven't already, you'll need to install the OpenAI SDK for your programming language.

Python SDK

pip install openai

TypeScript/JavaScript SDK

npm install openai
yarn add openai

Configuration

The core of using the OpenAI SDK with A4F is to configure the client with:

  • Your A4F API Key (obtained from your A4F Dashboard).
  • The A4F API Base URL: https://api.a4f.co/v1.

Python Configuration:

from openai import OpenAI
import os

# Retrieve your A4F API key (preferably from environment variables)
A4F_API_KEY = os.getenv("A4F_API_KEY", "YOUR_A4F_API_KEY")
A4F_BASE_URL = "https://api.a4f.co/v1"  # A4F's OpenAI-compatible endpoint

client = OpenAI(
    api_key=A4F_API_KEY,
    base_url=A4F_BASE_URL,
)

# You can now use the 'client' object to make API calls
# Example:
# completion = client.chat.completions.create(...)

TypeScript/JavaScript Configuration:

import OpenAI from 'openai';

// Retrieve your A4F API key (preferably from environment variables)
const A4F_API_KEY: string = process.env.A4F_API_KEY || "YOUR_A4F_API_KEY";
const A4F_BASE_URL: string = "https://api.a4f.co/v1"; // A4F's OpenAI-compatible endpoint

const client = new OpenAI({
  apiKey: A4F_API_KEY,
  baseURL: A4F_BASE_URL,
});

// You can now use the 'client' object to make API calls
// Example:
// async function exampleCall() {
//   const completion = await client.chat.completions.create(...);
//   console.log(completion);
// }

Basic Usage (Chat Completions)

Once the client is configured, you can make API calls as you normally would with the OpenAI SDK. For chat completions, you'll use the `chat.completions.create` method.

Python:

try:
    completion = client.chat.completions.create(
        model="provider-1/chatgpt-4o-latest",  # Use A4F provider-prefixed model ID
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"}
        ],
        temperature=0.7,
        max_tokens=50
    )
    print(completion.choices[0].message.content)
except Exception as e:
    print(f"An error occurred: {e}")
TypeScript/JavaScript:

async function getA4FChatCompletion() {
  try {
    const completion = await client.chat.completions.create({
      model: "provider-3/claude-3-opus-20240229", // Use A4F provider-prefixed model ID
      messages: [
        { role: "system", content: "You are a poetic assistant." },
        { role: "user", content: "Write a short haiku about autumn." }
      ],
      temperature: 0.8,
      max_tokens: 60
    });
    console.log(completion.choices[0].message.content);
  } catch (error) {
    console.error("Error fetching completion from A4F:", error);
  }
}

getA4FChatCompletion();

Specifying Models

Crucially, when using A4F, the `model` parameter in your API call must use the A4F provider-prefixed model ID. This tells A4F which underlying provider and model to route your request to.

Examples:

  • "provider-1/chatgpt-4o-latest" (Routes to GPT-4o via A4F's Provider 1)
  • "provider-3/claude-3-haiku-20240307" (Routes to Claude 3 Haiku via A4F's Provider 3)
  • "provider-5/some-other-model"

You can find the list of available models and their A4F identifiers on our Models page. For details on how provider prefixes work, see the Provider Routing documentation.
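Because A4F follows the OpenAI API shape, the SDK's model-listing call is a convenient way to discover these identifiers programmatically. Below is a minimal sketch, assuming A4F exposes the standard OpenAI-compatible `GET /models` endpoint via `client.models.list()`:

```python
def list_a4f_models(client):
    """Print the provider-prefixed model IDs available to the given client.

    Assumes A4F exposes the standard OpenAI-compatible `GET /models`
    endpoint, reached through the SDK's `client.models.list()`.
    """
    ids = [m.id for m in client.models.list().data]
    for model_id in ids:
        print(model_id)  # e.g. "provider-1/chatgpt-4o-latest"
    return ids
```

Passing a client configured as shown above would print each model ID your key can access; the printed IDs are exactly the values to supply as the `model` parameter.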

Streaming Responses

A4F supports streaming responses, just like the OpenAI API. To enable streaming, set the `stream` parameter to true in your request (`stream=True` in Python, `stream: true` in TypeScript/JavaScript). The OpenAI SDK handles the complexities of processing Server-Sent Events (SSE) for you.

Python:

stream = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
print()  # For a final newline
TypeScript/JavaScript:

async function streamA4FResponse() {
  const stream = await client.chat.completions.create({
    model: "provider-1/chatgpt-4o-latest",
    messages: [{ role: "user", content: "Explain quantum computing in simple terms." }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
  process.stdout.write("\n"); // For a final newline
}

streamA4FResponse();

For more information on streaming, refer to our Streaming documentation.

Advanced Usage

Most standard OpenAI API parameters (like `temperature`, `max_tokens`, `top_p`, `tools`, `tool_choice`, `response_format`) are supported by A4F and are passed through to the underlying provider.
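As a sketch, a request combining several of these pass-through parameters might look like the following. The model ID and prompt are illustrative, and whether each parameter is honored depends on the underlying provider and model:

```python
# Standard OpenAI parameters passed through A4F to the underlying provider.
# Support for each (notably response_format) varies by provider and model.
request_kwargs = {
    "model": "provider-1/chatgpt-4o-latest",
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
    "temperature": 0.2,
    "top_p": 0.9,
    "max_tokens": 100,
    "response_format": {"type": "json_object"},  # JSON mode, if supported
}
# completion = client.chat.completions.create(**request_kwargs)
```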

Features like function/tool calling can be used if the selected A4F provider and model support them. Refer to our Tool Calling documentation and the specific provider's capabilities. Provider-3 is noted for its function calling support.
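For illustration, a minimal tool definition in the standard OpenAI function-calling schema might look like the following. The `get_weather` tool is hypothetical, not part of any A4F or provider API:

```python
# A hypothetical tool definition in the OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]
# completion = client.chat.completions.create(
#     model="provider-3/claude-3-opus-20240229",  # a model whose provider supports tools
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#     tools=tools,
#     tool_choice="auto",
# )
```

If the model decides to call the tool, the response's `message.tool_calls` will contain the function name and JSON-encoded arguments for your code to execute.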

If A4F introduces custom headers for features like caching or advanced routing, you may be able to attach them through the SDK's custom-header options; if the SDK cannot pass a given header through, direct HTTP calls would be required instead. See our (forthcoming) Setting Headers guide for more on A4F-specific headers.
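For reference, the OpenAI Python SDK accepts a `default_headers` mapping at construction time, which is one way to attach a custom header to every request. A sketch, using a placeholder header name (`X-A4F-Example` is not a documented A4F header):

```python
# Constructor arguments for an A4F-pointed client that also attaches a
# custom header on every request. "X-A4F-Example" is a placeholder name,
# not a documented A4F header.
client_kwargs = {
    "api_key": "YOUR_A4F_API_KEY",
    "base_url": "https://api.a4f.co/v1",
    "default_headers": {"X-A4F-Example": "value"},
}
# from openai import OpenAI
# client = OpenAI(**client_kwargs)
```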
