XC200

4U16B GPU Accelerator
A disaggregated compute accelerator supporting 16 PCIe 3.0 x16 accelerator cards for Deep Learning and HPC

Features & Benefits

Disaggregated Compute Accelerator for Deep Learning and HPC

Wiwynn XC200 is a disaggregated compute accelerator that supports 16 PCIe 3.0 x16 GPU/FPGA cards widely available on the market. It works with various server platforms and offers a more flexible CPU-to-GPU ratio than integrated GPU server solutions for Deep Learning and HPC.

High Scalability for Optimized Workloads

XC200 can be flexibly configured for different workloads with excellent application scalability. Supported host-to-accelerator ratios include 1:4 (four hosts sharing the 16 accelerators), 1:8 (two hosts sharing the 16 accelerators), and 1:16 (a single host with all 16 accelerators).

On-Chassis BMC for Easy Management

Wiwynn XC200 is designed with an on-chassis BMC. Its out-of-band management port allows operators to remotely monitor temperatures, voltages, and power consumption through IPMI management tools. Additionally, LED indicators provide an at-a-glance check of system health and accelerator status.
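
As an illustration, the sketch below polls such a BMC over the management LAN using Python and the standard ipmitool utility. The BMC address and credentials are placeholders, and the DCMI power reading assumes the BMC implements that extension; the exact sensors and commands available depend on the firmware.

#!/usr/bin/env python3
"""Minimal sketch: reading XC200-style BMC sensors out-of-band via ipmitool.
The host address and credentials below are placeholders, not real defaults."""

import subprocess

BMC_HOST = "192.0.2.10"   # placeholder management-LAN address of the BMC
BMC_USER = "admin"        # placeholder credentials
BMC_PASS = "password"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over IPMI-over-LAN (lanplus)."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # List every sensor the BMC exposes (temperatures, voltages, fan speeds).
    print(ipmi("sdr", "list"))
    # Chassis power reading; assumes the BMC also supports the DCMI extension.
    print(ipmi("dcmi", "power", "reading"))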

Drawer Design for Easy Maintenance and Uninterrupted Serviceability

The 16 accelerators are installed in four independent drawers, allowing a single operator to service the accelerators in each drawer separately. Because each drawer runs independently, maintenance on one drawer does not interrupt service on the others. In addition, the PSUs and fans are hot-pluggable. Together, these design choices significantly reduce maintenance labor in the data center.

Tech Spec

Expansion Slots: 16 PCIe 3.0 x16 (dual-width cards)
Connection Speed: 4 x PCIe 3.0 x16 (quad MiniSAS HD connectors)
Accelerator TDP: 300 W
Fans: 4
Management LAN: One GbE RJ45 port
Remote Management: IPMI v2.0 compliant
Power Supply: 3 x 3000 W (2+1 redundant)
Dimensions: 4U; 176 mm (H) x 448 mm (W) x 900 mm (D)

Download

Category: Datasheet
File Title: XC200 Datasheet
Release Date: 2018/07/19