A4F Documentation

Welcome to the official A4F documentation. Learn how to integrate and utilize our unified AI gateway.

Introduction

A4F provides a single, standardized API endpoint for accessing hundreds of Large Language Models (LLMs) from providers such as OpenAI, Anthropic, Google, Mistral, and more. Simplify your AI integrations, benefit from automatic failover, optimize costs, and improve availability.

Our platform acts as a proxy, forwarding your requests to the appropriate provider while offering additional features like model routing, caching, and usage tracking. Get started quickly using familiar SDKs or direct API calls.

Installation

While A4F works with many existing AI/LLM client libraries (like OpenAI's Python/JS SDKs), you might need to install specific SDKs depending on your chosen integration method. For direct API calls, no installation is typically required beyond standard HTTP clients.
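
Since A4F exposes an OpenAI-compatible endpoint (see Configuration below), a direct call needs nothing more than an HTTP client. Here is a minimal sketch using Python's requests library, assuming the base URL and model ID shown in the examples that follow:

import os
import requests

# Direct HTTP call to A4F's OpenAI-compatible chat completions endpoint.
# No SDK required; replace A4F_API_KEY in your environment with your own key.
response = requests.post(
    "https://api.a4f.co/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['A4F_API_KEY']}"},
    json={
        "model": "provider-1/chatgpt-4o-latest",
        "messages": [{"role": "user", "content": "Hello from a raw HTTP call!"}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])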

Using npm (Example: OpenAI SDK)

If you plan to use A4F via its OpenAI SDK compatibility:

npm install openai

Using yarn (Example: OpenAI SDK)

Alternatively, using yarn:

yarn add openai
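
Using pip (Example: OpenAI SDK)

If you plan to use the OpenAI Python SDK, as in the Configuration example below:

pip install openai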

Configuration

The primary configuration step involves setting your A4F API key and pointing your client library (if used) to the correct A4F base URL.

from openai import OpenAI

# Point the OpenAI client at the A4F gateway instead of api.openai.com.
a4f_api_key = "A4F_API_KEY"  # Replace with your A4F API key
a4f_base_url = "https://api.a4f.co/v1"

client = OpenAI(
    api_key=a4f_api_key,
    base_url=a4f_base_url,
)

completion = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",
    messages=[
        {"role": "user", "content": "Hello from Python!"}
    ]
)

print(completion.choices[0].message.content)

Remember to handle your API key securely, preferably using environment variables. Do not commit keys directly into your codebase.
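
For example, a minimal sketch that reads the key from an environment variable (the A4F_API_KEY variable name is just a convention):

import os
from openai import OpenAI

# Read the key from the environment so it never appears in source control.
client = OpenAI(
    api_key=os.environ["A4F_API_KEY"],  # e.g. set via `export A4F_API_KEY=...`
    base_url="https://api.a4f.co/v1",
)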

Basic Usage

Once configured, making requests through A4F is similar to interacting directly with a provider's API, except that you specify the desired model using the A4F model identifier format (e.g., `provider-1/chatgpt-4o-latest`, as used in the examples here).

Example Code (Using OpenAI JS SDK)

import OpenAI from "openai";

// Configure the OpenAI SDK to talk to A4F, as shown in the Configuration section.
const a4fClient = new OpenAI({
  apiKey: process.env.A4F_API_KEY,
  baseURL: "https://api.a4f.co/v1",
});

async function getChatCompletion() {
  try {
    const completion = await a4fClient.chat.completions.create({
      model: "provider-1/chatgpt-4o-latest", // Specify A4F model ID
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Explain the concept of API gateways." }
      ],
      temperature: 0.7,
      max_tokens: 150,
    });
    console.log("Response:", completion.choices[0].message.content);
  } catch (error) {
    console.error("Error getting chat completion:", error);
  }
}

getChatCompletion();

Refer to the Models page for a full list of available model identifiers and their capabilities.

Advanced Features

A4F offers several advanced features beyond basic proxying:

API Options & Headers

You can pass standard API parameters like `temperature`, `max_tokens`, `top_p`, etc. A4F also supports specific headers for features like routing, caching, and metadata.

// Reuses the a4fClient configured in the Basic Usage example above.
async function getAdvancedCompletion() {
  try {
    const completion = await a4fClient.chat.completions.create({
      model: "anthropic/claude-3-haiku", // Use a different model
      messages: [
        { role: "user", content: "Write a short poem about Next.js." }
      ],
      temperature: 0.9,
      // A4F-specific headers can be passed via client config or a custom fetch.
      // Example (conceptual - depends on SDK):
      // headers: {
      //   'x-a4f-cache': 'read',
      //   'x-a4f-metadata-user-id': 'user_123'
      // }
    });
    console.log(completion.choices[0].message.content);
  } catch (error) {
    console.error("Error:", error);
  }
}

getAdvancedCompletion();
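
With the OpenAI Python SDK, one way to attach such headers to every request is the default_headers option at client construction. This is a sketch only; the header names are taken from the conceptual example above, not from an authoritative list:

import os
from openai import OpenAI

# Sketch: attach A4F headers (names from the conceptual example above) to
# every request made through this client.
client = OpenAI(
    api_key=os.environ["A4F_API_KEY"],
    base_url="https://api.a4f.co/v1",
    default_headers={
        "x-a4f-cache": "read",
        "x-a4f-metadata-user-id": "user_123",
    },
)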

See the Parameters and Setting Headers guides for details.

Model Routing

Define fallbacks or logic to automatically route requests to different models based on availability or other criteria. This enhances resilience. Learn more in the Provider Routing documentation.
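
Routing itself happens on the A4F side (see Provider Routing); purely as an illustration of the fallback idea, a client-side sketch might look like the following, with placeholder model IDs:

# Illustrative only: A4F's Provider Routing performs this fallback
# server-side. The model IDs below are placeholders.
def complete_with_fallback(client, messages, models):
    last_error = None
    for model in models:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except Exception as error:  # e.g. rate limit or provider outage
            last_error = error
    raise last_error

response = complete_with_fallback(
    client,
    [{"role": "user", "content": "Hello!"}],
    ["provider-1/chatgpt-4o-latest", "anthropic/claude-3-haiku"],
)
print(response.choices[0].message.content)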

Next Steps

Explore further documentation to maximize your use of A4F: