Powering the Next Leap in Large Model AI with Gooxi Compute

Large model AI is advancing at unprecedented speed, with each iteration pushing the boundaries of industry capabilities. DeepSeek recently unveiled its next-generation DeepSeek V4 series, featuring Pro and Flash editions with 1.6T total parameters and a 1-million-token context window, setting a new global benchmark for open-source large models. Behind every breakthrough lies a foundation of robust, efficient, and reliable compute. With 18 years of server innovation, Gooxi provides a full-spectrum compute portfolio, delivering high performance, open ecosystem compatibility, and extreme adaptability to accelerate AI deployment from research to real-world applications.
As AI models evolve, heterogeneous and multi-chip deployments have become the industry standard. Gooxi’s OAM training-and-inference system G6A80A5 leverages an open OAM architecture to break ecosystem barriers and meet diverse compute requirements. It natively supports major domestic and international AI accelerators, including MuXi C550, HaiGuang DeepCompute 3, and KunLun Chip P800 OAM 2.0 modules, while seamlessly integrating with HGX architectures. Forward-looking design ensures compatibility with next-generation OAM modules, enabling “plug-and-play” deployment and painless upgrades, drastically reducing migration costs and enhancing operational flexibility.
From trillion-parameter model training to millisecond-level inference, AI’s compute demands are unrelenting. The G6660T5 is purpose-built for ultra-large model inference, delivering unmatched performance for enterprise-scale applications. It supports dual 5th Gen Intel® Xeon® Scalable processors (up to 385W TDP each) and 32 DDR5 slots at speeds up to 5600MT/s, increasing memory bandwidth by 50% to handle multi-task and high-concurrency workloads. With support for up to 8 dual-width 600W GPUs and direct CPU-GPU connectivity, it minimizes communication latency, enabling billion-scale concurrent requests with millisecond response times. Its 6U design ensures enterprise-grade reliability in power, cooling, and security, meeting stringent requirements for finance, government, and other data-sensitive sectors.
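As a rough back-of-envelope check on the memory figures above (a sketch using assumed platform details, not Gooxi-quoted numbers): each DDR5 channel moves 8 bytes per transfer, and dual-socket 5th Gen Xeon platforms commonly expose 8 memory channels per CPU, so 32 DIMM slots corresponds to 2 DIMMs per channel across 16 channels.

```python
# Back-of-envelope DDR5 peak-bandwidth estimate.
# Assumptions (not vendor-quoted figures): 64-bit (8-byte) data bus per
# channel, 8 channels per socket, 2 sockets.

def ddr_channel_bw_gbs(mts: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth of one DDR channel: transfers/sec * bytes/transfer."""
    return mts * 1e6 * bus_bytes / 1e9

per_channel = ddr_channel_bw_gbs(5600)  # DDR5-5600: ~44.8 GB/s per channel
total_peak = per_channel * 8 * 2        # 8 channels/socket, 2 sockets
print(f"{per_channel:.1f} GB/s per channel, {total_peak:.1f} GB/s aggregate")
```

This is theoretical peak; sustained bandwidth in mixed read/write workloads lands well below it, which is why high-concurrency inference also leans on the direct CPU-GPU links described above.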
Massive AI models require vast, high-speed, and reliable data storage. Gooxi’s 4U60 R6480T5 high-density storage server addresses these needs, supporting dual 4th/5th Gen Intel® Xeon® Scalable processors and 32 DDR5 slots (up to 5600MT/s). With 60 front 3.5-inch drive bays, 10 front 2.5-inch drive bays, and 4 M.2 SSD slots, it delivers petabyte-scale capacity within a compact footprint, optimizing storage density and reducing cost per TB. Designed for AI model training, data preprocessing, and distributed storage workloads, it is well suited to internet, telecom, energy, education, and large-scale data center environments.
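To make "petabyte-scale" concrete, here is a quick raw-capacity sketch for the 4U60 bay layout. The drive sizes are illustrative assumptions (e.g. 20 TB 3.5-inch HDDs and 7.68 TB 2.5-inch SSDs), not capacities quoted by Gooxi:

```python
# Raw-capacity sketch for a 60-LFF + 10-SFF bay layout.
# Drive capacities below are assumed, not vendor-specified.
hdd_tb = 20.0    # hypothetical 3.5" HDD size
ssd_tb = 7.68    # hypothetical 2.5" SSD size

raw_tb = 60 * hdd_tb + 10 * ssd_tb  # boot/cache M.2 slots excluded
print(f"raw capacity: {raw_tb:.1f} TB (~{raw_tb / 1000:.2f} PB) in 4U")
```

Usable capacity after RAID or erasure coding would be lower, but even conservative overheads leave roughly a petabyte per 4U chassis, which is the density argument behind the cost-per-TB claim.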
Gooxi continues to lead in AI infrastructure with its “All-in-AI” strategy, driving innovation across training, inference, storage, and edge computing. By enhancing product performance and ecosystem compatibility, Gooxi empowers partners across the AI value chain, fueling large model innovation and accelerating real-world adoption, laying the foundation for a thriving AI ecosystem.