Access our latest shenwen-coderV2 model via the shenwenAI API
Powered by the swllm.cpp inference engine for high-performance cloud inference
Flexible pricing plans to meet different development needs
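As a rough sketch of what an API call might look like, the snippet below builds a request body for the shenwen-coderV2 model. It assumes an OpenAI-compatible chat-completions schema and a placeholder endpoint URL; both are assumptions for illustration only, so consult the official shenwenAI API reference for the actual endpoint, authentication, and field names.

```python
import json

# Placeholder endpoint URL (an assumption; replace with the real
# address from the shenwenAI API documentation).
API_URL = "https://api.example.com/v1/chat/completions"

# Request payload, assuming an OpenAI-compatible chat-completions
# schema (an assumption; field names may differ in the real API).
payload = {
    "model": "shenwen-coderV2",  # model name from this page
    "messages": [
        {"role": "user", "content": "Write a quicksort in Python."}
    ],
    "max_tokens": 256,
}

# Serialize the body exactly as it would be sent in an HTTP POST.
body = json.dumps(payload)
print(body)
```

From here, the serialized `body` would be sent with any HTTP client (e.g. `requests.post(API_URL, data=body, headers=...)`), with an API key supplied in the request headers.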
swllm.cpp supports inference for our latest models
A high-performance code generation model powered by the swllm.cpp inference engine; supports the GGUF format and is suitable for both local and cloud deployment
View on HuggingFace