v14.2.0
Massive Provider 1 Model Expansion & Platform Enhancements
Sree
This update features a massive expansion of our model library with 80+ new state-of-the-art models and key improvements to platform stability and performance.
New Features
- DeepSeek-AI Models: Added 12 new models including:
  - deepseek-ai/DeepSeek-V3.1-turbo
  - deepseek-ai/DeepSeek-V3-0324-turbo
  - deepseek-ai/DeepSeek-V3.1-Terminus
  - DeepSeek-R1-Distill variants (Qwen 1.5B, 7B, 14B, 32B and Llama 8B)
- Meta-Llama Models: Integrated 7 Llama models:
  - meta-llama/Llama-3.1-8B-Instruct and FP-16 variants
  - Llama 3.2 series (1B, 3B, 11B Instruct variants)
- Qwen/Alibaba Models: Added 15 models including:
  - Qwen/Qwen2.5-72B-Instruct
  - Qwen/Qwen2.5-Coder-32B-Instruct
  - Alibaba-NLP/Tongyi-DeepResearch-30B-A3B
  - Qwen3 series, including 235B variants with thinking capabilities
  - Qwen/Qwen2.5-VL-72B-Instruct (vision-language model)
- MistralAI Models: Integrated 6 Mistral models
- NousResearch Models: Added 5 Hermes models:
  - NousResearch/Hermes-4-14B
  - NousResearch/Hermes-4-70B
  - NousResearch/Hermes-4-405B-FP8
  - DeepHermes-3 variants
- Unsloth Models: Integrated 6 optimized models including Gemma-3 and Mistral variants
- Zhipu AI (GLM) Models: Added 5 GLM models
- Google Gemma: Added google/gemma-3-27b-instruct/bf-16
- Specialized Models: Integrated 15+ additional specialized models
- Total Addition: 80+ new state-of-the-art models across multiple providers and capabilities; a brief usage sketch follows this list
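The listed identifiers can be passed directly as the `model` parameter when calling the platform. Below is a minimal sketch, assuming the platform exposes an OpenAI-compatible chat completions endpoint; the base URL and environment variable names are placeholders, not documented values.

```python
# Minimal sketch, assuming an OpenAI-compatible chat completions endpoint.
# PROVIDER_BASE_URL and PROVIDER_API_KEY are hypothetical placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["PROVIDER_BASE_URL"],  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],    # placeholder credential
)

# Call one of the newly added models by its listed identifier.
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.1-Terminus",
    messages=[{"role": "user", "content": "Summarize the v14.2.0 changes in one sentence."}],
)
print(response.choices[0].message.content)
```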
Improvements
- Provider Efficiency: Performance and resource management optimized for Provider 2 and Provider 7.
- Enhanced Streaming Performance: Deployed improvements that make streaming output from models faster and more reliable (a brief streaming sketch follows this list).
- Increased VPS Limits: Resource limits for virtual private servers have been raised, improving stability and reliability, especially for model editing tasks.
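To benefit from the streaming improvements, clients can request incremental output. This is a minimal sketch under the same OpenAI-compatible assumption as the example above; the endpoint details remain placeholders, and the model name is one of the newly listed identifiers.

```python
# Minimal streaming sketch, assuming the same OpenAI-compatible endpoint
# as the previous example; PROVIDER_* variables are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["PROVIDER_BASE_URL"],
    api_key=os.environ["PROVIDER_API_KEY"],
)

stream = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Explain streaming responses briefly."}],
    stream=True,  # tokens arrive incrementally instead of in one final response
)

# Print each token delta as it arrives.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```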