Web Search

Understanding Web Search Capabilities with A4F API Services.

Current A4F Web Search Status

A4F API Services currently do not offer a built-in, universal web search plugin or feature by default across all models. This means you cannot simply append a suffix like :online to any model ID or use a generic "web" plugin ID to enable web search for any arbitrary model through A4F.

Temporary Solutions & Recommendations

In the meantime, you can integrate web search functionality into your applications using A4F by leveraging external APIs or specific models available through our platform that have built-in web search.

1. Using External Search APIs

You can fetch search results from dedicated search APIs and then provide this context to any LLM through A4F for summarization, question answering, or other tasks.

Popular external search APIs include:

  • Brave Search API
  • DuckDuckGo API (often via scraping or community libraries)
  • SerpAPI
  • Perplexity API (can be used for its search results)
  • Google Custom Search JSON API

Conceptual Workflow:

  1. User's query is received by your application.
  2. Your application calls an external search API (e.g., SerpAPI) with the query.
  3. Your application receives search results (snippets, links, etc.).
  4. Format these results as text-based context.
  5. Send a new request to an A4F model (e.g., provider-1/chatgpt-4o-latest) with the original query and the fetched search results as context within the prompt.

Below is a conceptual Python snippet illustrating this workflow:

import os
import requests
from openai import OpenAI

A4F_API_KEY = "A4F_API_KEY"  # Replace with your actual A4F API key
A4F_BASE_URL = "https://api.a4f.co/v1"
SEARCH_API_KEY = os.getenv("YOUR_SEARCH_API_KEY")
SEARCH_API_ENDPOINT = "https://api.serpapi.com/search"

a4f_client = OpenAI(api_key=A4F_API_KEY, base_url=A4F_BASE_URL)

def get_search_results(query: str):
    """Fetches search results from an external API."""
    if not SEARCH_API_KEY:
        print("Warning: Search API key not set. Skipping web search.")
        return "Web search could not be performed (missing API key)."
    params = {
        "q": query,
        "api_key": SEARCH_API_KEY,
    }
    try:
        response = requests.get(SEARCH_API_ENDPOINT, params=params)
        response.raise_for_status()
        data = response.json()
        snippets = []
        if "organic_results" in data:
            # Keep only the top few results to limit prompt size
            for result in data["organic_results"][:3]:
                title = result.get("title", "N/A")
                link = result.get("link", "#")
                snippet_text = result.get("snippet", "No snippet available.")
                snippets.append(f"Title: {title}\nLink: {link}\nSnippet: {snippet_text}")
        return "\n\n".join(snippets) if snippets else "No relevant search results found."
    except requests.exceptions.RequestException as e:
        print(f"Error fetching search results: {e}")
        return f"Error performing web search: {e}"
    except Exception as e:
        print(f"Error processing search results: {e}")
        return "Error processing search results."

def query_a4f_with_search_context(user_query: str, search_context: str):
    """Sends the original query and search context to an A4F model."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Answer the user's query based on the provided web search context. Cite sources if appropriate."},
        {"role": "user", "content": f"User Query: {user_query}\n\nWeb Search Context:\n{search_context}"},
    ]
    try:
        completion = a4f_client.chat.completions.create(
            model="provider-1/chatgpt-4o-latest",
            messages=messages,
            temperature=0.7,
        )
        return completion.choices[0].message.content
    except Exception as e:
        print(f"Error querying A4F: {e}")
        return f"Error getting response from A4F: {e}"

# Example usage:
# original_user_query = "What are the latest developments in quantum computing?"
# search_results_context = get_search_results(original_user_query)
# final_answer = query_a4f_with_search_context(original_user_query, search_results_context)
# print(f"\nFinal Answer from A4F:\n{final_answer}")

2. Using Models with Built-in Web Search via A4F

Some models available through A4F may have inherent web search capabilities, particularly those from providers like Perplexity or specific search-enabled model variants.

Examples might include (always check the A4F model list for current availability and exact naming):

  • Models from Perplexity such as perplexity/pplx-7b-online or perplexity/pplx-70b-online.
  • Specific variants like a hypothetical provider-X/gpt-4o-web-search if offered.

When using these models via A4F, their native web search functionality (if any) would be invoked as per the provider's original behavior. You would typically not need to manage external API calls for these specific models.
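For illustration, a minimal sketch of calling such a model through A4F's OpenAI-compatible endpoint is shown below. The model ID perplexity/pplx-70b-online is taken from the examples above purely as an assumption; confirm the exact ID and its web search behavior in the current A4F model list before relying on it.

from openai import OpenAI

client = OpenAI(api_key="A4F_API_KEY", base_url="https://api.a4f.co/v1")

# The model ID below is illustrative only; check the A4F model list for the
# exact name of a search-enabled model before using it.
completion = client.chat.completions.create(
    model="perplexity/pplx-70b-online",
    messages=[
        {"role": "user", "content": "What are the latest developments in quantum computing?"}
    ],
)
print(completion.choices[0].message.content)

No external search API call is needed here; any web retrieval happens on the provider's side as part of the model's own behavior.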

Handling Search Results & Pricing

Parsing Search Results (from External APIs)

If you are integrating external search APIs, your application will be responsible for parsing the search results (which are often in JSON format) and formatting them into a text-based context suitable for an LLM. Common elements to extract include titles, snippets, and source URLs.

Pricing Considerations

Be aware of the following cost implications:

  • External Search API Costs: If you use a third-party search API, you will incur costs associated with that API service, separate from your A4F usage.
  • A4F Model Costs:
    • When providing externally fetched search results as context to an A4F model, you will be charged by A4F for the token usage of the prompt (including the search context) and the model's completion.
    • If using an A4F model with built-in web search, the pricing will be as per that specific model's rate on A4F. This rate might include a premium if the web search functionality is a specialized feature of that model.

Refer to the A4F Pricing page for model rates and the respective external search API provider for their pricing details.

Managing Search Context

When providing search results as context to an LLM via A4F (either from an external API or from a model with built-in search that returns extensive results), be mindful of the chosen LLM's context window limit.

You might need to implement strategies such as:

  • Selecting only the most relevant snippets.
  • Summarizing search results before passing them to the main LLM.
  • Truncating context if it exceeds the model's capacity (a minimal sketch is shown after this list). For guidance on this, see our Message Transforms documentation.
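As a rough sketch of the truncation strategy, the helper below caps the search context by character count before it is placed in the prompt. The character budget is an illustrative stand-in for a proper token count against your chosen model's context limit.

def truncate_context(search_context: str, max_chars: int = 8000) -> str:
    """Naively cap the search context so the prompt stays within the model's window.

    max_chars is an illustrative proxy; in practice you would count tokens
    (e.g., with a tokenizer) against the specific model's context limit.
    """
    if len(search_context) <= max_chars:
        return search_context
    return search_context[:max_chars] + "\n\n[Search context truncated]"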

Some models with built-in search might offer parameters to control the verbosity or "context size" of the search results they retrieve (e.g., 'low', 'medium', 'high'). If you use such a model via A4F, you would pass these parameters as part of your request according to that model's specific API.
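If a model exposes such a parameter, one way to pass it through the OpenAI-compatible client is via extra_body. The field names "web_search_options" and "search_context_size" below are hypothetical placeholders; check the specific model's documentation for the fields it actually accepts.

from openai import OpenAI

a4f_client = OpenAI(api_key="A4F_API_KEY", base_url="https://api.a4f.co/v1")

# "web_search_options" and "search_context_size" are assumed parameter names
# used only for illustration; consult the model provider's API reference.
completion = a4f_client.chat.completions.create(
    model="perplexity/pplx-70b-online",  # illustrative model ID
    messages=[{"role": "user", "content": "Summarize today's AI news."}],
    extra_body={"web_search_options": {"search_context_size": "low"}},
)
print(completion.choices[0].message.content)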

Pricing Documentation

For more detailed information about pricing, refer to the A4F Pricing page for model rates and to the respective external search API provider's official documentation for their costs.
