Gooxi OAM AI Server: Built for Open Ecosystems and Enterprise-Scale AI
As enterprises accelerate AI adoption, infrastructure demands continue to rise. Gooxi’s flagship OAM AI server, Haiqing, is designed to deliver scalable performance, open ecosystem compatibility, and operational efficiency—providing a reliable foundation for enterprise AI deployment and growth.
Built for modern data centers, Haiqing supports AI training, inference, and data-intensive workloads while helping organizations reduce deployment complexity and optimize long-term investment.

Haiqing is built on the open OAM standard, enabling broad compatibility across global AI accelerator ecosystems. It supports mainstream OAM 2.0 modules such as Metax C550, Hygon Deep Computing No.3, and Kunlun P800, and is also compatible with the NVIDIA HGX architecture and H200/B200 GPU modules.
This open design allows enterprises to deploy mixed accelerator environments, reduce migration costs, and avoid vendor lock-in—while maintaining flexibility as AI technologies evolve.
Haiqing supports up to 2 AMD EPYC™ 9005 series processors, with up to 192 cores per CPU and a maximum TDP of 500W per socket. With 24 DDR5 memory slots, 12 memory channels, and support for DDR5-6400, the platform delivers strong compute density and memory bandwidth for demanding workloads.
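As a rough, back-of-the-envelope illustration of that bandwidth claim (assuming standard 64-bit DDR5 channels and ignoring real-world efficiency losses), 12 channels × 6,400 MT/s × 8 bytes works out to roughly 614 GB/s of theoretical peak memory bandwidth per socket, or about 1.2 TB/s across a fully populated dual-socket system.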
The architecture is well suited for AI training, deep learning inference, large language models, real-time analytics, and virtualization—providing consistent performance at scale.
Haiqing integrates an intelligent thermal system optimized for high-power AI accelerators. In air-cooled configurations, 15 high-speed rear fans support up to 1200W of thermal dissipation per module. Both air and liquid cooling are supported, with a built-in liquid-cooling manifold for flexible deployment.
A dynamic PID-based control strategy adjusts fan speed to workload intensity, improving energy efficiency and reducing noise. Compared with traditional fixed-speed fan strategies, intelligent thermal management can reduce cooling energy consumption by 15–20%, helping improve data center PUE.
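As a minimal sketch of how such a control loop works in general (the class, gains, setpoint, and sensor hooks below are hypothetical illustrations, not Gooxi's actual firmware logic):

```python
# Illustrative PID fan-control loop. All names, gains, and hardware hooks
# are hypothetical; this only demonstrates the general PID technique.

class PIDFanController:
    def __init__(self, kp, ki, kd, setpoint_c, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint_c      # target accelerator temperature (deg C)
        self.dt = dt                    # control interval (seconds)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, temp_c):
        # Error is positive when the module runs hotter than the setpoint.
        error = temp_c - self.setpoint
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        output = (self.kp * error
                  + self.ki * self.integral
                  + self.kd * derivative)
        # Clamp to a valid PWM duty-cycle range (20-100%).
        return max(20.0, min(100.0, 20.0 + output))


controller = PIDFanController(kp=2.0, ki=0.1, kd=0.5, setpoint_c=70.0, dt=1.0)
# In a real BMC loop, sensor reads and fan writes would replace these
# placeholder calls:
# duty = controller.update(read_module_temp())
# set_fan_duty(duty)
```

The practical point is that fan speed tracks measured load rather than a fixed worst-case profile, which is where the quoted 15–20% cooling energy savings come from.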
Power reliability is ensured through flexible 3+3 or 5+5 redundant power configurations, along with real-time monitoring and alerting.
Designed with modular scalability in mind, Haiqing supports up to 14 expansion slots and 16 front-mounted 2.5" NVMe bays, enabling flexible integration of SmartNICs, DPUs, and other accelerators.
An integrated motherboard and switch board design provides direct, lossless PCIe connectivity, improving signal integrity and system stability. Dual DPU support further offloads networking, storage, and security tasks, allowing CPUs to focus on AI computation.
Haiqing reflects Gooxi’s focus on practical engineering and enterprise reliability. By combining open architecture, high-density compute, intelligent cooling, and flexible expansion, Haiqing delivers a future-ready AI infrastructure platform for global data centers.