A4F Documentation

Welcome to the official A4F documentation. Learn how to integrate and utilize our unified AI gateway.

Introduction

A4F provides a single, standardized API endpoint to access hundreds of Large Language Models (LLMs) from various providers like OpenAI, Anthropic, Google, Mistral, and more. Simplify your AI integrations, benefit from automatic failover, optimize costs, and ensure higher availability.

Our platform acts as a proxy, forwarding your requests to the appropriate provider while offering additional features like model routing, caching, and usage tracking. Get started quickly using familiar SDKs or direct API calls.

Installation

While A4F works with many existing AI/LLM client libraries (like OpenAI's Python/JS SDKs), you might need to install specific SDKs depending on your chosen integration method. For direct API calls, no installation is typically required beyond standard HTTP clients.

Using pip (Python OpenAI SDK)

Install the OpenAI SDK for Python:

pip install openai

Using npm (JavaScript OpenAI SDK)

Install the OpenAI SDK for JavaScript:

npm install openai

Configuration

The primary configuration step involves setting your A4F API key and pointing your client library (if used) to the correct A4F base URL.

from openai import OpenAI

a4f_api_key = "A4F_API_KEY"  # replace with your actual key (see the note on environment variables below)
a4f_base_url = "https://api.a4f.co/v1"

client = OpenAI(
    api_key=a4f_api_key,
    base_url=a4f_base_url,
)

completion = client.chat.completions.create(
    model="provider-1/chatgpt-4o-latest",
    messages=[
        {"role": "user", "content": "Hello from Python!"}
    ]
)
print(completion.choices[0].message.content)

Remember to handle your API key securely, preferably using environment variables. Do not commit keys directly into your codebase.
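One way to keep the key out of source control is to read it from an environment variable at startup. A minimal sketch, assuming the variable is named `A4F_API_KEY` (any name works; the `load_a4f_key` helper is illustrative, not part of any SDK):

```python
import os

def load_a4f_key(var_name="A4F_API_KEY"):
    """Read the A4F API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable before running.")
    return key
```

The client can then be constructed with `OpenAI(api_key=load_a4f_key(), base_url="https://api.a4f.co/v1")`, so no key ever appears in the codebase.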

Basic Usage

Once configured, making requests through A4F is similar to interacting directly with a provider's API, except that you specify the desired model as a provider prefix followed by the model name (e.g. `provider-1/chatgpt-4o-latest`). Each provider's prefix is listed in the Provider Routing documentation.

Example Code

export A4F_API_KEY="your_actual_api_key_here"

curl https://api.a4f.co/v1/chat/completions \
  -H "Authorization: Bearer $A4F_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "provider-1/chatgpt-4o-latest",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain the concept of API gateways."}
    ],
    "temperature": 0.7,
    "max_tokens": 150
  }'

Refer to the Models page for a full list of available model identifiers and their capabilities.

Advanced Features

A4F offers several advanced features beyond basic proxying:

API Options & Headers

You can pass standard API parameters like `temperature`, `max_tokens`, `top_p`, etc. A4F also supports specific headers for features like routing, caching, and metadata.

export A4F_API_KEY="your_actual_api_key_here"

curl https://api.a4f.co/v1/chat/completions \
  -H "Authorization: Bearer $A4F_API_KEY" \
  -H "Content-Type: application/json" \
  -H "x-a4f-cache: read" \
  -H "x-a4f-metadata-user-id: user_123" \
  -d '{
    "model": "anthropic/claude-3-haiku",
    "messages": [
      {"role": "user", "content": "Write a short poem about Next.js."}
    ],
    "temperature": 0.9
  }'

See the Parameters and Setting Headers guides for details.
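From the OpenAI Python SDK, per-request headers such as `x-a4f-cache` can be attached via the SDK's `extra_headers` argument. A sketch, reusing the header names from the curl example above (the `a4f_headers` helper is illustrative, not part of any SDK):

```python
def a4f_headers(cache=None, user_id=None):
    """Build a dict of optional A4F feature headers for a single request."""
    headers = {}
    if cache:
        headers["x-a4f-cache"] = cache  # e.g. "read"
    if user_id:
        headers["x-a4f-metadata-user-id"] = user_id
    return headers

# With a client configured as in the Configuration section:
# completion = client.chat.completions.create(
#     model="anthropic/claude-3-haiku",
#     messages=[{"role": "user", "content": "Write a short poem about Next.js."}],
#     extra_headers=a4f_headers(cache="read", user_id="user_123"),
# )
```

Keeping header construction in one helper makes it easy to apply the same caching and metadata settings across every call site.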

Model Routing

Define fallbacks or routing logic to automatically send requests to different models based on availability or other criteria. This enhances resilience. Learn more in the Provider Routing documentation.
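A4F applies this routing server-side, but the idea can be sketched client-side as well: try a list of models in order and fall back when a request fails. Here `call_model` stands in for any function that sends a request for a given model name; nothing below is part of the A4F API itself:

```python
def complete_with_fallback(models, call_model):
    """Try each model in order; return (model, result) from the first success."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model)
        except Exception as exc:  # in practice, catch provider-specific errors
            last_error = exc
    raise RuntimeError(f"All models failed; last error: {last_error}")
```

For example, `complete_with_fallback(["provider-1/chatgpt-4o-latest", "anthropic/claude-3-haiku"], send)` would return the second model's result if the first raises.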

Next Steps

Explore further documentation to maximize your use of A4F: