Using Third-Party SDKs

Integrating A4F with various client libraries and frameworks beyond the official OpenAI SDK.

Introduction

While A4F offers excellent compatibility with the official OpenAI SDKs (as detailed in our OpenAI SDK guide), its OpenAI-compatible API endpoint means you can integrate A4F with a wide array of other third-party libraries, HTTP clients, and LLM frameworks.

This guide provides general principles and examples for using A4F with common HTTP clients and frameworks like Langchain.

General Principles

Most SDKs or HTTP clients that allow you to specify a custom API endpoint and an API key can be configured to work with A4F. The core idea remains the same:

  • Point the SDK/client to A4F's base URL.
  • Use your A4F API key for authentication.
  • Ensure your requests adhere to the OpenAI API structure for chat completions, including using A4F's provider-prefixed model IDs.

Key Configuration Points

When configuring any third-party tool to use A4F, look for settings related to:

  • API Base URL / Endpoint: Set this to https://api.a4f.co/v1.
  • API Key: Use your A4F API key. This is usually set via an environment variable (e.g., A4F_API_KEY or OPENAI_API_KEY if the library expects OpenAI's variable name) or directly in the client's constructor.
  • Headers: Ensure the Authorization: Bearer YOUR_A4F_API_KEY and Content-Type: application/json headers are correctly set for POST requests. Most SDKs handle this automatically if the API key and base URL are configured.

Examples in Other Languages / HTTP Clients

Here are conceptual examples of how you might use standard HTTP clients in Python and Node.js to call A4F.

Python (using the `requests` library)

```python
import requests
import os

A4F_API_KEY = os.getenv("A4F_API_KEY", "YOUR_A4F_API_KEY")
A4F_BASE_URL = "https://api.a4f.co/v1"

headers = {
    "Authorization": f"Bearer {A4F_API_KEY}",
    "Content-Type": "application/json",
}

payload = {
    "model": "provider-1/chatgpt-4o-latest",  # Use an A4F provider-prefixed model
    "messages": [
        {"role": "user", "content": "Hello from Python requests!"}
    ],
}

try:
    response = requests.post(f"{A4F_BASE_URL}/chat/completions", headers=headers, json=payload)
    response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
    completion_data = response.json()
    print(completion_data.get("choices", [{}])[0].get("message", {}).get("content"))
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")
    if e.response is not None:
        print(f"Response content: {e.response.text}")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
```
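If you prefer to avoid third-party dependencies entirely, the same request can be assembled with Python's standard-library `urllib`. This is a minimal sketch of the same call; the actual network send is commented out so the snippet runs offline:

```python
import json
import os
import urllib.request

A4F_BASE_URL = "https://api.a4f.co/v1"
api_key = os.getenv("A4F_API_KEY", "YOUR_A4F_API_KEY")

payload = {
    "model": "provider-1/chatgpt-4o-latest",  # A4F provider-prefixed model ID
    "messages": [{"role": "user", "content": "Hello from urllib!"}],
}

req = urllib.request.Request(
    f"{A4F_BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send the request:
# with urllib.request.urlopen(req) as resp:
#     completion = json.load(resp)
#     print(completion["choices"][0]["message"]["content"])
```

The same three configuration points apply: base URL, API key in the `Authorization` header, and a JSON body following the OpenAI chat completions structure.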

Node.js (using `axios`)

First, install axios: `npm install axios` or `yarn add axios`.

```typescript
import axios from 'axios';

const A4F_API_KEY: string = process.env.A4F_API_KEY || "YOUR_A4F_API_KEY";
const A4F_BASE_URL: string = "https://api.a4f.co/v1";

async function callA4FWithAxios() {
  const headers = {
    "Authorization": `Bearer ${A4F_API_KEY}`,
    "Content-Type": "application/json",
  };
  const payload = {
    model: "provider-3/claude-3-haiku-20240307", // Use an A4F provider-prefixed model
    messages: [
      { role: "user", content: "Hello from Node.js with axios!" }
    ],
  };

  try {
    const response = await axios.post(`${A4F_BASE_URL}/chat/completions`, payload, { headers });
    console.log(response.data.choices[0].message.content);
  } catch (error: any) {
    console.error("Error calling A4F API with axios:");
    if (error.response) {
      // The server responded with a non-2xx status code
      console.error("Status:", error.response.status);
      console.error("Data:", error.response.data);
    } else if (error.request) {
      // The request was sent but no response was received
      console.error("No response received:", error.request);
    } else {
      console.error("Error setting up request:", error.message);
    }
  }
}

callA4FWithAxios();
```

Frameworks (e.g., Langchain, LlamaIndex)

Many LLM application frameworks like Langchain or LlamaIndex provide abstractions for various LLM providers. Due to A4F's OpenAI compatibility, you can often use their OpenAI integrations by overriding the API base URL and key.

Here's a conceptual example using Langchain with its ChatOpenAI class:

```python
from langchain_openai import ChatOpenAI
import os

# Configure Langchain to use A4F
os.environ["OPENAI_API_KEY"] = os.getenv("A4F_API_KEY", "YOUR_A4F_API_KEY")
os.environ["OPENAI_API_BASE"] = "https://api.a4f.co/v1"  # Key configuration

# Initialize the Langchain ChatOpenAI client; it will pick up the env vars.
# Or you can pass them directly:
# llm = ChatOpenAI(
#     openai_api_key=os.getenv("A4F_API_KEY", "YOUR_A4F_API_KEY"),
#     openai_api_base="https://api.a4f.co/v1",
#     model_name="provider-1/chatgpt-4o-latest",  # Use A4F provider-prefixed model ID
# )
llm = ChatOpenAI(model_name="provider-1/chatgpt-4o-latest")

try:
    response = llm.invoke("Hello from Langchain via A4F!")
    print(response.content)
except Exception as e:
    print(f"Langchain call via A4F failed: {e}")
```

For Langchain, setting the `OPENAI_API_BASE` and `OPENAI_API_KEY` environment variables is often the easiest way to direct its OpenAI components to A4F. Remember to use the A4F provider-prefixed model ID for `model_name`.

A4F Model ID Format

Regardless of the SDK or HTTP client you use, always remember to specify the model using A4F's provider-prefixed format: provider-X/actual-model-name.
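If you manage model IDs programmatically (for example, to validate configuration at startup), the prefix can be split off with a small helper. This is an illustrative sketch, not an A4F utility; the actual set of valid provider prefixes is listed on the Models page:

```python
def split_a4f_model_id(model_id: str) -> tuple[str, str]:
    """Split an A4F model ID into (provider_prefix, model_name)."""
    provider, _, model = model_id.partition("/")
    if not provider.startswith("provider-") or not model:
        raise ValueError(f"Not an A4F provider-prefixed model ID: {model_id!r}")
    return provider, model

print(split_a4f_model_id("provider-1/chatgpt-4o-latest"))
# → ('provider-1', 'chatgpt-4o-latest')
```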

Refer to the Models page for a list of available models and their A4F identifiers, and the Provider Routing documentation for how A4F uses these prefixes.

Limitations & Considerations

  • Feature Parity: While A4F aims for OpenAI API compatibility, some very specific or newer OpenAI features might not be immediately available or might behave slightly differently if they depend on OpenAI-internal logic not exposed via the standard API. A4F prioritizes core chat completion compatibility.
  • SDK Abstractions: Some SDKs might have high-level abstractions or helper functions that make assumptions specific to the original provider (e.g., OpenAI). If you encounter issues, try using the more fundamental request methods provided by the SDK (like a generic `client.post()` or `client.request()`) where you have more control over the request payload and headers.
  • Error Handling: Error messages and codes might be a mix of A4F-originated errors (e.g., for authentication, rate limiting by A4F) and errors proxied from the underlying LLM provider. Your error handling logic should be robust enough to parse these. See our Errors documentation.
  • A4F-Specific Features: If A4F introduces unique features accessible via custom headers (see Setting Headers), you'll need to ensure your chosen SDK or HTTP client allows you to set these custom headers.
