Native Function Calling Now Live - Full Agentic Coding Support
The wait is over! Native function/tool calling has been fully implemented and is now live across the platform. This long-awaited feature enables seamless integration with agentic coding tools including Claude Code, Roo, Cline, GPT Codex, and more. Function calling has been rolled out to most models in Provider 3, Provider 5, and Provider 7, with additional model rollouts coming soon.
⚠️ Rolling Deployment: Function calling is currently being progressively rolled out. Not all models support it yet. Please check the Models page to verify whether your preferred model supports function calling before integration.
🚀 Native Function Calling - Major Platform Update
Agentic Coding Support: Full native function/tool calling implementation enabling seamless integration with popular coding assistants.
Streaming with Tools Enabled
- Previously Blocked: Streaming requests with tools (function calling) were not supported
- Now Supported: Requests with `stream: true` and the `tools` parameter work seamlessly
- Real-time Responses: Get real-time streaming of tool call results
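A streaming request with tools enabled is just an ordinary chat-completion payload carrying both fields. The sketch below builds such a payload; the model name and `get_weather` tool are placeholders for illustration, not part of the platform's catalog:

```python
import json

# Hypothetical model and tool names, for illustration only.
payload = {
    "model": "provider-3/glm-4.7",
    "stream": True,  # previously rejected when combined with "tools"
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
}

body = json.dumps(payload)  # ready to POST to an OpenAI-compatible endpoint
```

With streaming enabled, tool calls arrive as incremental chunks rather than a single response object.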
Multi-Turn Function Calling Fixed
- Issue Resolved: Fixed compatibility issues with multi-turn function calling conversations
- Universal Compatibility: Works with Claude Code, Roo, Cline, GPT Codex, etc.
- OpenAI-Compatible: Functions anywhere OpenAI-compatible APIs are accepted
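The multi-turn shape that now round-trips correctly looks like this in the OpenAI-compatible message format. The tool name, arguments, and call id below are illustrative placeholders:

```python
# An assistant turn that requests a tool call, followed by the matching
# "tool" role message. The tool_call_id must match the assistant's call id.
messages = [
    {"role": "user", "content": "List the files in /tmp"},
    {
        "role": "assistant",
        "content": None,  # content may be null when tool_calls are present
        "tool_calls": [{
            "id": "call_1",  # placeholder id
            "type": "function",
            "function": {"name": "list_files", "arguments": '{"path": "/tmp"}'},
        }],
    },
    {
        "role": "tool",
        "tool_call_id": "call_1",  # ties the result back to the call above
        "content": '["a.txt", "b.txt"]',
    },
]
```

Agentic tools like Claude Code and Cline generate exactly this alternation of assistant tool calls and tool results, which is why the multi-turn fix unblocks them.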
Provider 7 - Anthropic Tool Calling Integration
- Full Conversion: Complete tool message format conversion for Anthropic models
- Stream Support: Proper handling of tool_use content blocks in streaming responses
- Index Mapping Fix: Corrected index mapping for streaming tool call chunks
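The conversion described above maps OpenAI-style `tool` role messages onto Anthropic's content-block format, where tool results travel as `tool_result` blocks inside a user turn. This is a minimal sketch of that mapping, not the platform's actual converter:

```python
def openai_tool_msg_to_anthropic(msg):
    """Convert an OpenAI-style "tool" role message into an Anthropic-style
    user turn carrying a tool_result content block. Field names follow the
    public Anthropic Messages API; the mapping itself is illustrative."""
    return {
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": msg["tool_call_id"],
            "content": msg.get("content") or "",  # tolerate null content
        }],
    }
```

The same translation runs in reverse for responses, where Anthropic `tool_use` blocks become OpenAI-style `tool_calls` entries.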
Provider 3 - Model Catalog Updates
New Models Added (5 models)
Chat/Completion Models:
- `provider-3/glm-4.7` - Pro Tier - Latest Z.AI model with 200K context, 128K output
  - Features: reasoning, function_calling
- `provider-3/gemini-3-flash-preview` - Pro Tier - Fast multimodal preview with 1M context
  - Features: reasoning, vision, function_calling, audio
- `provider-3/minimax-m2` - Basic Tier - Compact MoE model for agentic workflows
  - Features: function_calling

Image Editing Models:
- `provider-3/gemini-2.5-flash-image-preview-edit` - Pro Tier - Natural language image editing
- `provider-3/gpt-image-1-edit` - Pro Tier - Targeted image modifications
Feature Updates (25+ models)
Function Calling Added:
- `provider-3/kimi-k2`, `provider-3/kimi-k2-thinking`, `provider-3/kimi-k2-0905`
- `provider-3/qwen-3-max`

Audio Capability Added:
- `provider-3/gpt-4o-audio-preview`

Vision Capability Added:
- `provider-3/gemma-3-27b-it`, `provider-3/gemma-3n-e4b-it`, `provider-3/gemma-3-4b-it`, `provider-3/gemma-3-12b-it`, `provider-3/gemma-3-1b-it`
- `provider-3/gpt-oss-120b`, `provider-3/gpt-oss-20b`
Feature Corrections:
- `provider-3/deepseek-r1-0528` - Removed reasoning
- `provider-3/deepseek-v3` - Removed function_calling
- `provider-3/deepseek-v3.1` - Added reasoning
- `provider-3/glm-4.5`, `provider-3/glm-4.5-basic`, `provider-3/glm-4.6` - Feature corrections
- `provider-3/sonar-reasoning`, `provider-3/sonar-reasoning-pro` - Removed reasoning
- `provider-3/qwen-3-coder-480b`, `provider-3/qwen3-235b`, `provider-3/llama-3.3-70b` - Feature corrections
Tool Calling Compatibility Fix
- Fixed handling when assistant message content is null/missing with tool_calls
- Ensures content field exists for all tool role messages
- Resolves compatibility issues with models returning null content
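The fix amounts to normalizing messages before they reach clients that require the `content` field to exist. A minimal sketch of that normalization (the function name is illustrative):

```python
def normalize_assistant_message(msg):
    """Some models return null content alongside tool_calls, but downstream
    OpenAI-compatible clients expect the content field to be present.
    Coerce missing or None content to an empty string."""
    out = dict(msg)  # avoid mutating the caller's message
    if out.get("content") is None:
        out["content"] = ""
    return out
```

Applying this to every assistant and tool message keeps strict clients from rejecting otherwise valid tool-call turns.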
Provider 5 - Infrastructure & Model Updates
Infrastructure Enhancement
- Load Balancing: Secondary API key added for improved reliability
- Rate Limit Mitigation: Automatic key rotation when limits are reached
- Higher Throughput: Effectively doubles available request capacity
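The rotation behavior can be pictured as cycling through the configured keys whenever a request comes back rate-limited. This is a simplified sketch of the idea, not the platform's internal implementation:

```python
import itertools


class KeyRotator:
    """Minimal sketch of rate-limit-driven key rotation: cycle through the
    configured API keys, advancing whenever a response reports HTTP 429."""

    def __init__(self, keys):
        self._cycle = itertools.cycle(keys)
        self.current = next(self._cycle)

    def on_response(self, status_code):
        if status_code == 429:  # rate limited: switch to the next key
            self.current = next(self._cycle)
        return self.current
```

With two keys, each 429 hands traffic to the other key, which is how a secondary key effectively doubles usable capacity.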
Function Calling Rollout (11 models)
- `provider-5/meta-llama-3.1-8b-instruct-fast`
- `provider-5/meta-llama-3.1-8b-instruct`
- `provider-5/kimi-k2-instruct`
- `provider-5/kimi-k2-thinking`
- `provider-5/qwen3-30b-a3b-thinking-2507`
- `provider-5/qwen3-coder-30b-a3b-instruct`
- `provider-5/qwen3-coder-480b-a35b-instruct`
- `provider-5/hermes-4-70b`
- `provider-5/qwen3-next-80b-a3b-thinking`
- `provider-5/intellect-3`
Context Window Expansions
- `provider-5/llama-guard-3-8b` - 8x context increase
- `provider-5/gemma-3-27b-it`, `provider-5/gemma-3-27b-it-fast` - 13x context increase
- `provider-5/deepseek-v3-0324` - 2.5x context increase
- `provider-5/kimi-k2-thinking` - 4x context increase
- `provider-5/kimi-k2-instruct` - Major expansion
Feature Additions
Reasoning Added:
Vision Added:
Embedding Models Improved (4 models)
- `provider-5/bge-en-icl` - 4x max tokens increase
- `provider-5/bge-multilingual-gemma2` - Improved performance
- `provider-5/e5-mistral-7b-instruct` - Improved performance
- `provider-5/qwen3-embedding-8b` - 25% max tokens increase
Image Generation Optimized (2 models)
- `provider-5/flux-dev` - Improved efficiency
- `provider-5/flux-schnell` - Improved efficiency
🔧 Bug Fixes & Stability
HTTP Error Handling Improvements
Improved error handling across 7 providers for better debugging and consistent error reporting:
- Provider 1 - Added early HTTP error detection
- Provider 2 - Added early error detection with body reading
- Provider 3 - Added early HTTP error detection
- Provider 5 - Error body read before context closes
- Provider 6 - Error body read before context closes
- Provider 7 - Full response captured on error
- Provider 8 - Error body read before context closes
Improvements:
- Error responses now captured and logged correctly
- Consistent error handling pattern across all providers
- Error details preserved even for streaming requests
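The common pattern behind these fixes is reading the error body while the response is still open, so the provider's error detail survives into logs. A minimal sketch, assuming a response object with `status_code` and `text` attributes (as on a `requests.Response`):

```python
def read_error_body(resp):
    """On an HTTP error status, capture the response body before the
    connection or request context is closed, then raise with the detail.
    Illustrative pattern, not the platform's actual handler."""
    if resp.status_code >= 400:
        detail = resp.text  # read now; may be unreadable after close
        raise RuntimeError(f"upstream error {resp.status_code}: {detail}")
    return resp
```

For streaming requests the same rule applies: check the status and drain the error body before handing the stream to the consumer.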
Platform Impact
- Native Function Calling: Full agentic coding support across Claude Code, Roo, Cline, GPT Codex, etc.
- Streaming + Tools: Real-time function calling now fully supported
- Provider 7 Anthropic: Complete tool calling protocol conversion for Claude models
- Provider 3 Expansion: 5 new models + 25+ feature updates
- Provider 5 Load Balancing: Improved reliability and throughput
Important Notes
- Rolling Deployment: Function calling is progressively rolling out. Check the Models page to verify model support.
- Agentic Coding: Works with Claude Code, Roo, Cline, GPT Codex, and any OpenAI-compatible platform.
- Feature Flag Updates: Model capabilities have been corrected across Provider 3 and Provider 5.
- Context Windows: Several Provider 5 models received major context window expansions. Pro and Ultra tiers receive 100% allocation.