Using Third-Party SDKs
Integrating A4F with various client libraries and frameworks beyond the official OpenAI SDK.
Introduction
While A4F offers excellent compatibility with the official OpenAI SDKs (as detailed in our OpenAI SDK guide), its OpenAI-compatible API endpoint means you can integrate A4F with a wide array of other third-party libraries, HTTP clients, and LLM frameworks.
This guide provides general principles and examples for using A4F with common HTTP clients and frameworks like Langchain.
General Principles
Most SDKs or HTTP clients that allow you to specify a custom API endpoint and an API key can be configured to work with A4F. The core idea remains the same:
- Point the SDK/client to A4F's base URL.
- Use your A4F API key for authentication.
- Ensure your requests adhere to the OpenAI API structure for chat completions, including using A4F's provider-prefixed model IDs.
Key Configuration Points
When configuring any third-party tool to use A4F, look for settings related to:
- API Base URL / Endpoint: Set this to `https://api.a4f.co/v1`.
- API Key: Use your A4F API key. This is usually set via an environment variable (e.g., `A4F_API_KEY`, or `OPENAI_API_KEY` if the library expects OpenAI's variable name) or passed directly to the client's constructor.
- Headers: Ensure the `Authorization: Bearer YOUR_A4F_API_KEY` and `Content-Type: application/json` headers are correctly set for POST requests. Most SDKs handle this automatically once the API key and base URL are configured.
Examples in Other Languages / HTTP Clients
Here are conceptual examples of how you might use standard HTTP clients in Python and Node.js to call A4F.
Python (using the `requests` library)
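A minimal sketch of calling A4F's chat completions endpoint with the `requests` package (assumed to be installed via `pip install requests`). The model ID below is a placeholder; substitute a real provider-prefixed ID from the Models page.

```python
import os

A4F_BASE_URL = "https://api.a4f.co/v1"  # A4F's OpenAI-compatible endpoint
A4F_API_KEY = os.getenv("A4F_API_KEY", "your-a4f-api-key")

def build_chat_request(model, messages):
    """Assemble the URL, headers, and JSON payload for a chat completion."""
    return {
        "url": f"{A4F_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {A4F_API_KEY}",
            "Content-Type": "application/json",
        },
        "json": {"model": model, "messages": messages},
    }

def chat_completion(model, messages):
    """Send the request and return the parsed JSON response."""
    import requests  # third-party: pip install requests

    req = build_chat_request(model, messages)
    resp = requests.post(req["url"], headers=req["headers"],
                         json=req["json"], timeout=60)
    resp.raise_for_status()
    return resp.json()

# Example usage (placeholder model ID):
# reply = chat_completion("provider-X/actual-model-name",
#                         [{"role": "user", "content": "Hello!"}])
# print(reply["choices"][0]["message"]["content"])
```

Keeping the request assembly separate from the network call makes it easy to inspect exactly what is sent before wiring in a real API key.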
Node.js (using `axios`)
First, install axios: `npm install axios` or `yarn add axios`.
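A corresponding sketch with axios (the model ID is again a placeholder; use a real provider-prefixed ID from the Models page):

```javascript
const A4F_BASE_URL = "https://api.a4f.co/v1"; // A4F's OpenAI-compatible endpoint
const A4F_API_KEY = process.env.A4F_API_KEY || "your-a4f-api-key";

// Assemble the URL, headers, and payload; kept pure so it is easy to inspect.
function buildChatRequest(model, messages) {
  return {
    url: `${A4F_BASE_URL}/chat/completions`,
    headers: {
      Authorization: `Bearer ${A4F_API_KEY}`,
      "Content-Type": "application/json",
    },
    data: { model, messages },
  };
}

async function chatCompletion(model, messages) {
  const axios = require("axios"); // third-party: npm install axios
  const req = buildChatRequest(model, messages);
  const resp = await axios.post(req.url, req.data, { headers: req.headers });
  return resp.data;
}

// Example usage (placeholder model ID):
// chatCompletion("provider-X/actual-model-name", [
//   { role: "user", content: "Hello!" },
// ]).then((r) => console.log(r.choices[0].message.content));
```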
Frameworks (e.g., Langchain, LlamaIndex)
Many LLM application frameworks like Langchain or LlamaIndex provide abstractions for various LLM providers. Due to A4F's OpenAI compatibility, you can often use their OpenAI integrations by overriding the API base URL and key.
Framework-Specific Configuration
Here's a conceptual example using Langchain with its `ChatOpenAI` class:
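A sketch of the idea, assuming the `langchain-openai` package is installed (exact import paths and parameter names vary across Langchain versions, so treat this as illustrative rather than definitive):

```python
import os

# Point Langchain's OpenAI integration at A4F. Older Langchain versions
# read these environment variables automatically.
os.environ["OPENAI_API_BASE"] = "https://api.a4f.co/v1"
os.environ["OPENAI_API_KEY"] = os.getenv("A4F_API_KEY", "your-a4f-api-key")

def make_a4f_chat_model(model_name="provider-X/actual-model-name"):
    """Return a ChatOpenAI instance routed through A4F.

    The model_name placeholder must be replaced with a real A4F
    provider-prefixed model ID from the Models page.
    """
    from langchain_openai import ChatOpenAI  # pip install langchain-openai

    return ChatOpenAI(
        model=model_name,  # A4F provider-prefixed model ID
        base_url=os.environ["OPENAI_API_BASE"],
        api_key=os.environ["OPENAI_API_KEY"],
    )

# Example usage:
# llm = make_a4f_chat_model()
# print(llm.invoke("Hello!").content)
```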
For Langchain, setting the `OPENAI_API_BASE` and `OPENAI_API_KEY` environment variables is often the easiest way to direct its OpenAI components to A4F. Remember to use the A4F provider-prefixed model ID for `model_name`.
A4F Model ID Format
Regardless of the SDK or HTTP client you use, always remember to specify the model using A4F's provider-prefixed format: `provider-X/actual-model-name`.
Refer to the Models page for a list of available models and their A4F identifiers, and the Provider Routing documentation for how A4F uses these prefixes.
Limitations & Considerations
- Feature Parity: While A4F aims for OpenAI API compatibility, some very specific or newer OpenAI features might not be immediately available or might behave slightly differently if they depend on OpenAI-internal logic not exposed via the standard API. A4F prioritizes core chat completion compatibility.
- SDK Abstractions: Some SDKs might have high-level abstractions or helper functions that make assumptions specific to the original provider (e.g., OpenAI). If you encounter issues, try using the more fundamental request methods provided by the SDK (like a generic `client.post()` or `client.request()`) where you have more control over the request payload and headers.
- Error Handling: Error messages and codes might be a mix of A4F-originated errors (e.g., for authentication, rate limiting by A4F) and errors proxied from the underlying LLM provider. Your error handling logic should be robust enough to parse these. See our Errors documentation.
- A4F-Specific Features: If A4F introduces unique features accessible via custom headers (see Setting Headers), you'll need to ensure your chosen SDK or HTTP client allows you to set these custom headers.
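As a sketch of passing extra headers with `requests`, note that the A4F-specific header name below is purely hypothetical; consult the Setting Headers documentation for the real header names:

```python
import os

# Standard headers plus a hypothetical A4F-specific one.
headers = {
    "Authorization": f"Bearer {os.getenv('A4F_API_KEY', 'your-a4f-api-key')}",
    "Content-Type": "application/json",
    # Hypothetical example header; replace with a real one from the
    # Setting Headers documentation:
    "X-Example-A4F-Feature": "enabled",
}

# These headers would then be passed to requests.post(..., headers=headers).
```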