Tool & Function Calling
Use tools in your prompts with A4F API Services.
Tool calls (also known as function calls) give an LLM access to external tools. The LLM does not call the tools directly. Instead, it suggests one or more tools to call based on the user's prompt and the provided tool definitions. The user's application code then calls the tool separately and provides the results back to the LLM. Finally, the LLM uses these results to formulate a response to the user's original question.
A4F API Services standardizes the tool calling interface across models and providers that support this feature. Since A4F aims for OpenAI API compatibility, you can leverage the familiar OpenAI SDK structure for tool calling. Note that tool/function calling capabilities depend on the specific model and the underlying A4F provider; provider-3, for example, supports function calling.
For a primer on how tool calling works in the OpenAI SDK, please see OpenAI's function calling guide, or if you prefer to learn from a full end-to-end example using A4F, keep reading.
Tool Calling Example
Here is Python code that gives LLMs the ability to call an external API — in this case Project Gutenberg, to search for books.
First, let's do some basic setup:
Define the Tool
Next, we define the tool that we want to call. Remember, the tool is going to get requested by the LLM, but the code we are writing here (your application code) is ultimately responsible for executing the tool/function call and returning the results to the LLM.
Note that the "tool" itself is just a normal Python function (`search_gutenberg_books`). We then write a JSON "spec" (the `tools` list) compatible with the OpenAI function calling parameter. We'll pass this spec to the LLM via the A4F API so that it knows this tool is available and how to use it (i.e., what arguments it expects). When the LLM decides it needs the tool, it will request it along with whatever arguments it deems necessary. Our application then executes the function call locally and returns the results to the LLM.
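A sketch of such a tool and its spec. The public Gutendex API (`gutendex.com`) is used here as an assumed search endpoint for Project Gutenberg, and the `TOOL_MAPPING` dict at the end is a hypothetical helper we'll use later to dispatch tool calls by name:

```python
import json
import urllib.parse
import urllib.request


def search_gutenberg_books(search_terms):
    """Search Project Gutenberg via the Gutendex API (assumed endpoint)."""
    query = urllib.parse.urlencode({"search": " ".join(search_terms)})
    with urllib.request.urlopen(f"https://gutendex.com/books?{query}") as resp:
        data = json.load(resp)
    # Return a compact result the LLM can easily read back.
    return [
        {
            "id": book["id"],
            "title": book["title"],
            "authors": [author["name"] for author in book.get("authors", [])],
        }
        for book in data.get("results", [])
    ]


# JSON spec for the function, in the OpenAI tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_gutenberg_books",
            "description": "Search for books in the Project Gutenberg library",
            "parameters": {
                "type": "object",
                "properties": {
                    "search_terms": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "Search terms (e.g. titles or author names)",
                    }
                },
                "required": ["search_terms"],
            },
        },
    }
]

# Maps tool names (as the LLM requests them) to local Python functions.
TOOL_MAPPING = {"search_gutenberg_books": search_gutenberg_books}
```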
Tool use and tool results
Let's make the first A4F API call to the model, providing the tools definition:
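For example, sketched as a small helper (`request_with_tools` is a hypothetical name; `client`, `MODEL`, and `tools` are assumed to be the objects defined in the snippets above):

```python
def request_with_tools(client, model, tools, user_prompt):
    """Send the conversation plus the tool spec; return both it and the reply."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,  # tells the model which tools it may request
    )
    return messages, response.choices[0].message


# e.g.:
# messages, assistant_msg = request_with_tools(
#     client, MODEL, tools, "What are some books by James Joyce?"
# )
```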
The LLM responds with a `finish_reason` of `tool_calls`, and a `tool_calls` array in its message object. In a generic LLM response handler, you would want to check `finish_reason` before processing tool calls, but here we will assume that is the case. Let's keep going, by processing the tool call:
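A sketch of that processing step (`process_tool_calls` is a hypothetical helper; `TOOL_MAPPING` is the name-to-function dict from the tool-definition snippet above):

```python
import json


def process_tool_calls(assistant_message, messages, tool_mapping):
    """Execute each requested tool locally and append the results to `messages`."""
    # Keep the assistant's tool-call request in the conversation history.
    messages.append(assistant_message)
    for tool_call in assistant_message.tool_calls:
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)
        # Dispatch to the local Python function and run it.
        result = tool_mapping[name](**args)
        # Feed the result back as a "tool" role message, keyed by the call id.
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        })
    return messages
```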
The messages array now has:
- Our original system and user messages.
- The LLM's response message (containing the tool call request).
- The result of our tool execution (formatted as a "tool" role message).
Now, we can make a second A4F API call, sending the updated messages list back to the model, and hopefully get our final result!
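Sketched as another small helper (a hypothetical name; `client`, `MODEL`, `tools`, and the updated `messages` list are assumed from the snippets above):

```python
def get_final_response(client, model, messages, tools):
    """Second round trip: the model now sees the tool results in `messages`."""
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
    )
    return response.choices[0].message.content


# e.g.:
# print(get_final_response(client, MODEL, messages, tools))
```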
The output will be something like:
Here are some books by James Joyce:
* "Ulysses"
* "Dubliners"
* "A Portrait of the Artist as a Young Man"
* "Chamber Music"
* "Exiles: A Play in Three Acts"
We did it! We've successfully used a tool in a prompt via A4F API Services.
A Simple Agentic Loop
In the example above, the calls are made explicitly and sequentially. To handle a wide variety of user inputs, including multiple or sequential tool calls, you can use an agentic loop.
Here's an example of a simple agentic loop (using the same `tools` definition and `TOOL_MAPPING` as above):
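A minimal sketch of such a loop, assuming the same `client`, `MODEL`, `tools`, and `TOOL_MAPPING` objects as above (the `agentic_loop` name and the iteration cap are illustrative choices, not part of the A4F API):

```python
import json


def agentic_loop(client, model, messages, tools, tool_mapping, max_iterations=10):
    """Call the model repeatedly, executing tools until it returns a plain reply."""
    for _ in range(max_iterations):
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
        )
        message = response.choices[0].message
        messages.append(message)
        if not message.tool_calls:
            # No more tools requested: this is the final answer.
            return message.content
        for tool_call in message.tool_calls:
            args = json.loads(tool_call.function.arguments)
            result = tool_mapping[tool_call.function.name](**args)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("Agent did not finish within the iteration limit")
```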
Provider Compatibility for Tool Calling
`provider-3` supports function/tool calling. When using models that require this feature, ensure your requests are routed to `provider-3` by prefixing your model name (e.g., `"provider-3/openai/gpt-4o"`), or choose another suitable `provider-3` model that supports tool calling. You can check the Models page for specific capabilities.