Whitepapers

Explore how AI is redefining data center efficiency and scalability. Read Wiwynn’s latest white papers for strategies in cooling and cloud optimization.

White Paper: GPU-Initiated, Liquid-Cooled, Ultra-High-Density Storage for Next-Gen AI

This paper introduces a paradigm shift in storage architecture designed to overcome the CPU-centric data path bottlenecks in modern AI workloads. By combining NVIDIA SCADA (Scaled Accelerated Data Access) with a 100% liquid-cooled, fanless chassis housing 96 E3.S NVMe drives, we have created an...

White Paper: KV Cache Offload to Improve AI Inferencing Cost and Performance

This paper explores a disaggregated key-value (KV) storage architecture designed to offload KV cache tensors efficiently for generative AI workloads. By pairing Wiwynn's OCP ORv3-compliant servers with Pliops' hardware-accelerated data path, this framework delivers a highly scalable and...
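
The idea behind KV cache offload can be sketched conceptually: keep hot KV tensors in fast GPU-attached memory and spill cold entries to a larger external tier rather than recomputing them. The toy two-tier cache below is purely illustrative, assuming a simple LRU policy; it is not the Wiwynn/Pliops implementation.

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV cache: a small 'hot' tier (stand-in for HBM)
    backed by a larger 'cold' tier (stand-in for a disaggregated KV store)."""

    def __init__(self, hot_capacity: int):
        self.hot = OrderedDict()   # fast tier, LRU-ordered
        self.cold = {}             # slow, capacious tier
        self.hot_capacity = hot_capacity

    def put(self, seq_id, kv_tensor):
        self.hot[seq_id] = kv_tensor
        self.hot.move_to_end(seq_id)
        while len(self.hot) > self.hot_capacity:
            victim, tensor = self.hot.popitem(last=False)  # evict least-recent
            self.cold[victim] = tensor                     # offload, don't discard

    def get(self, seq_id):
        if seq_id in self.hot:
            self.hot.move_to_end(seq_id)
            return self.hot[seq_id]
        tensor = self.cold.pop(seq_id)  # fetch back: cheaper than re-running prefill
        self.put(seq_id, tensor)
        return tensor

cache = TieredKVCache(hot_capacity=2)
for i in range(3):
    cache.put(f"seq{i}", f"kv{i}")  # seq0 spills to the cold tier
```

The payoff in a real system is that a returning sequence's KV state is fetched from the cold tier instead of being recomputed from the prompt, trading cheap storage bandwidth for expensive GPU prefill time.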

White Paper: Autonomous AI Agent for End-to-End Component Data Extraction

This paper presents a framework that automates the extraction of key attributes from unstructured component datasheets. By combining expert data-science preprocessing with a Retrieval-Augmented Generation (RAG) architecture, we have achieved industry-leading recognition...
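
The retrieval step of a RAG pipeline for datasheet extraction can be sketched minimally as below. The keyword-overlap scoring and sample chunks are illustrative assumptions; a production system (and presumably the paper's) would use dense embeddings and pass the prompt to an LLM.

```python
def retrieve(chunks: list, query: str, k: int = 2) -> list:
    """Rank datasheet text chunks by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(chunks, query):
    """Assemble retrieved chunks into the context an LLM would extract from."""
    context = "\n".join(retrieve(chunks, query))
    return f"Context:\n{context}\n\nExtract the attribute: {query}"

# Hypothetical datasheet fragments after preprocessing:
chunks = [
    "Operating voltage: 3.3 V typical, 3.6 V max",
    "Package: 16-pin QFN, 3x3 mm",
    "Storage temperature: -40 to 125 C",
]
prompt = build_prompt(chunks, "operating voltage max")
```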

White Paper: From Design to Live Operation: Wiwynn’s L12 AI Cluster Deployment with MLPerf Validation

Deploying large-scale AI clusters introduces engineering challenges that extend well beyond the individual server rack. From liquid cooling integration to high-voltage power distribution and network topology design, the "Last Mile" of AI infrastructure demands careful, multifaceted systems integration.

White Paper: AI Rack Management with Wiwynn UMS

This paper discusses the rapid expansion of AI workloads and the resulting transformation in data center infrastructure requirements. Traditional air-cooling systems are becoming less effective due to the increased power density of server racks. To address this challenge, the paper presents...

White Paper: Introduction of a New Firmware Update Workflow for PLDM & Redfish

Firmware updates are essential for BMC-managed systems. Each device requires its own update flow and uses a different transport protocol, such as I2C or JTAG. In previous projects, separate update utilities had to be developed to accommodate these requirements. However, without proper software...
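
A standardized Redfish entry point is what makes such a unified workflow possible: one request shape covers any device behind the BMC. The sketch below builds a `SimpleUpdate` request per the DMTF Redfish UpdateService schema; the BMC host and image URI are placeholders, and this is not Wiwynn's specific workflow.

```python
import json
import urllib.request

# Standard Redfish action path (DMTF UpdateService schema).
SIMPLE_UPDATE_PATH = "/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate"

def build_simple_update_request(bmc_host, image_uri, targets=None):
    """Build the POST that asks the BMC to pull and apply a firmware image."""
    payload = {"ImageURI": image_uri, "TransferProtocol": "HTTP"}
    if targets:  # optionally restrict the update to specific devices
        payload["Targets"] = targets
    return urllib.request.Request(
        url=f"https://{bmc_host}{SIMPLE_UPDATE_PATH}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder host and image URI for illustration:
req = build_simple_update_request("bmc.example.com",
                                  "http://fw.example.com/bios.bin")
# Sending it (urllib.request.urlopen(req)) is omitted: it needs a live,
# authenticated BMC. The BMC then routes the image to the target device
# over whatever transport (I2C, JTAG, ...) that device requires.
```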

White Paper: Beyond the Rack - The Elastic Management Framework for AI Data Centers

AI clusters using next-generation accelerators (e.g., NVIDIA GB200) push rack power density beyond 130 kW, making air cooling insufficient and driving adoption of direct liquid cooling (DLC). This whitepaper introduces Wiwynn’s Elastic Management Framework, a modular, scalable, interoperable...

White Paper: Power Efficiency Optimization in AI Systems

This whitepaper examines the growing importance of power efficiency in AI systems, where increasing computational demand translates into significant energy consumption and operating costs. We begin by introducing the overall architecture of AI applications and their critical power components,...

White Paper: General Guidance for Transitioning from Single-Phase to Two-Phase Liquid Cooling Solutions

The rapid expansion of artificial intelligence, high-performance computing, and cloud services is driving unprecedented levels of heat generation in modern data centers. Traditional air cooling and single-phase liquid cooling are increasingly unable to meet the demands of high-density server racks...
