AI Inference Optimization on OCP openEDGE Platform Looking for an edge AI server for your new applications? What is the most optimized solution? What parameters should you take into consideration? Check out Wiwynn's latest whitepaper on AI inference optimization on the OCP openEDGE platform. See how the Wiwynn EP100 helps you keep pace with the thriving edge-application era and diverse AI inference workloads with powerful CPU and GPU inference acceleration! Leave your contact information to download the whitepaper!
Open System Firmware Development on OCP Platform The OCP Open System Firmware (OSF) project aims to allow the system firmware (also known as BIOS) to be modified and shared by system owners (see the OCP Open System Firmware Project). As of March 2021, supporting OSF is mandatory for servers with OCP badging. Some ecosystem partners have decided to implement Coreboot and LinuxBoot as their open system firmware architecture. In this whitepaper, we focus on an IPMI implementation integrated with Coreboot/LinuxBoot. OCP Bryce Canyon is the platform used in this presentation. At the time of writing, we are moving [...]
Optimize Your NFVI Performance with Wiwynn® EP100 and Intel® Speed Select Technology – Base Frequency The Wiwynn® EP100 design is optimized for CoSPs' NFVI with flexibility, serviceability, and performance. CoSPs can utilize their existing racks with flexible configurations of the 1U half-width single-socket sled. With the powerful and optimized 2nd generation Intel® Xeon® Scalable processor family, EP100 enables high-speed packet forwarding, which is critical to NFVI. By adopting the new NFVI-specialized feature called Intel® Speed Select Technology – Base Frequency (Intel® SST-BF), EP100 boosts the performance of targeted applications essential for some VNFs. With support from the Industrial Technology Research Institute [...]
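The idea behind Intel® SST-BF is that a subset of cores runs at a raised base frequency while the rest run at a lowered one, so latency-critical VNF threads can be pinned to the fast cores. The sketch below is a hypothetical illustration of that core split, not code from the whitepaper: on Linux the per-core base frequency is typically exposed under `/sys/devices/system/cpu/cpu<N>/cpufreq/base_frequency`, but here the function takes a plain `{core: kHz}` mapping so the logic is self-contained.

```python
# Hypothetical sketch: partition cores into SST-BF high-priority and
# normal-priority sets by comparing each core's base frequency to the mean.
# The frequency values below are illustrative, not measured on EP100.

def split_sst_bf_cores(base_freq_khz):
    """Return (high_priority, normal_priority) core lists.

    With SST-BF enabled, boosted cores sit above the mean base
    frequency and de-prioritized cores sit at or below it.
    """
    mean = sum(base_freq_khz.values()) / len(base_freq_khz)
    high = sorted(c for c, f in base_freq_khz.items() if f > mean)
    normal = sorted(c for c, f in base_freq_khz.items() if f <= mean)
    return high, normal

# Illustrative 8-core example: four cores boosted to 2.7 GHz,
# four lowered to 2.1 GHz.
freqs = {0: 2700000, 1: 2100000, 2: 2700000, 3: 2100000,
         4: 2700000, 5: 2100000, 6: 2700000, 7: 2100000}
high, normal = split_sst_bf_cores(freqs)
print(high)    # [0, 2, 4, 6]
print(normal)  # [1, 3, 5, 7]
```

In practice the high-priority list would feed a CPU-affinity configuration (e.g., taskset or a DPDK core mask) for the packet-forwarding threads.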
Two-Phase Rack-Level Liquid Cooling Solution The power consumption of computing silicon, such as CPUs and GPUs, continues to push the envelope of existing thermal solutions in IT technology. Merely functional solutions no longer satisfy the thermal demands of IT gear; high power efficiency is often decisive for the success of a thermal solution. In this whitepaper, Wiwynn investigates a "two-phase cooling" thermal solution to meet these upcoming challenges. A dielectric fluid is introduced as the working fluid: it is pumped to hot chips and transports heat to the rear-door heat exchanger (RDHx). The heat is then ejected [...]
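One reason a pumped two-phase loop is attractive: heat is absorbed as latent heat of vaporization, so the required mass flow is Q / h_fg, while a single-phase loop must carry the same heat as sensible heat, Q / (cp · ΔT). The back-of-the-envelope comparison below uses hypothetical numbers (a generic dielectric fluid and a water loop), not figures from the whitepaper.

```python
# Back-of-the-envelope sketch with hypothetical fluid properties:
# compare the mass flow needed by a two-phase loop (latent heat)
# against a single-phase loop (sensible heat) for the same heat load.

def two_phase_flow_kg_s(q_watts, h_fg_j_per_kg):
    """Mass flow for a two-phase loop: m_dot = Q / h_fg."""
    return q_watts / h_fg_j_per_kg

def single_phase_flow_kg_s(q_watts, cp_j_per_kg_k, delta_t_k):
    """Mass flow for a single-phase loop: m_dot = Q / (cp * dT)."""
    return q_watts / (cp_j_per_kg_k * delta_t_k)

# Hypothetical example: a 30 kW rack; dielectric fluid with
# h_fg ~ 100 kJ/kg; water loop with cp ~ 4186 J/(kg*K) and a 10 K rise.
print(round(two_phase_flow_kg_s(30_000, 100_000), 3))      # 0.3 kg/s
print(round(single_phase_flow_kg_s(30_000, 4186, 10), 3))  # 0.717 kg/s
```

The exact ratio depends on the fluids chosen; the point is that latent-heat transport moves more heat per unit of pumped mass, which is where the efficiency argument comes from.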
High Performance/Scalability Compute Accelerator with Latest PCIe Gen4 Technology The fast development of AI deep learning and HPC applications increases demand for high-performance, highly scalable computing solutions to accelerate model training and data processing. To address this fast-growing market, Wiwynn unveiled the industry's first PCIe Gen 4 enabled 4U disaggregated compute accelerator, the XC200G2, at the OCP US Summit 2018. XC200G2 is a 19" 4U JBOG (Just a Bunch of GPUs) supporting up to sixteen PCIe x16 accelerator cards, including GPUs, ASICs, and FPGA cards. Data centers can choose among various solutions to accelerate and optimize their workloads. With Broadcom's industry [...]
Accelerating 48V Adoption in Your Data Center: Two-Stage 48V SWC-Regulated Solution With the rapid development of cloud computing, the energy used in US data centers is estimated to exceed 190 billion kWh by 2020. Taking into account the increasing demands of high-power processors and accelerators, it is inevitable that data center operators will re-architect their power delivery strategy. As discussed in Wiwynn's previous whitepaper, "48V: An Improved Power Delivery System for Data Centers", 48V has proven to be an efficient power distribution architecture, and the adoption of 48V in the data center ecosystem is accelerating. The 48V-to-12V two-stage power conversion modules (so-called two-stage [...]
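The arithmetic behind two-stage conversion is simple but worth making explicit: the end-to-end efficiency of a power delivery path is the product of the per-stage efficiencies, so small per-stage gains compound across every server in a rack. The sketch below uses hypothetical efficiency figures for illustration only; it is not data from the whitepaper.

```python
# Illustrative sketch (hypothetical numbers): end-to-end efficiency of
# cascaded power conversion stages is the product of stage efficiencies.

def chain_efficiency(*stage_efficiencies):
    """End-to-end efficiency of cascaded conversion stages (each 0..1)."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Hypothetical example: a 48V-to-12V first stage at 98% feeding a
# 12V-to-point-of-load second stage at 92%.
two_stage = chain_efficiency(0.98, 0.92)
print(round(two_stage, 4))  # 0.9016
```

At rack scale, the difference between, say, 90% and 92% end-to-end efficiency translates directly into cooling load and electricity cost, which is why the stage-by-stage numbers matter.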
Composable Infrastructure – The Foundation for a Future-Proof SDDC We are well aware of the benefits of composable infrastructure with Wiwynn and Intel® Rack Scale Design. But how do we get started? In this whitepaper, we introduce the entire implementation of Wiwynn® Cluster Manager with Intel® RSD, including proof-of-concept (POC), pilot run, large-scale phase-in, and deployment of popular workloads. This is to help potential adopters evaluate the benefits and feasibility of this solution and then transfer their workloads smoothly. Download now to learn how to start and phase in gradually!
Accelerating Your AI/Deep Learning Model Training with Multiple GPUs Deep learning has shown remarkable results in many fields. Rapid parameter adjustment is essential for a successful deep learning model. To accelerate the training process, many studies employ distributed deep learning systems with multiple GPUs. Hardware performance and utilization efficiency of multi-GPU systems depend on factors such as model size and the amount of data. In this whitepaper, we analyze the multi-GPU working model to identify performance bottlenecks and their corresponding solutions with respect to model settings and hardware configurations. Benchmark tests and a Face Swap practice are also used for [...]
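The dominant multi-GPU working model is synchronous data parallelism: each worker computes gradients on its shard of the batch, the gradients are averaged across workers (an all-reduce), and every worker applies the same update. The pure-Python sketch below illustrates that pattern on a toy one-parameter model; it assumes nothing about the whitepaper's actual framework or benchmark, and real systems would use a GPU framework's collective operations instead.

```python
# Minimal sketch of synchronous data parallelism: per-worker gradients
# on data shards, an all-reduce (mean) across workers, then a shared
# update. Toy model: y = w * x with mean-squared-error loss.

def local_gradient(weights, shard):
    """Gradient of MSE loss for the 1-parameter model on one data shard."""
    w = weights[0]
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return [g]

def all_reduce_mean(gradients):
    """Average per-worker gradients elementwise (the all-reduce step)."""
    n = len(gradients)
    return [sum(g[i] for g in gradients) / n for i in range(len(gradients[0]))]

def train_step(weights, shards, lr=0.1):
    grads = [local_gradient(weights, s) for s in shards]   # per "GPU"
    avg = all_reduce_mean(grads)                           # sync point
    return [w - lr * g for w, g in zip(weights, avg)]      # same update everywhere

# Two "workers", data drawn from y = 2x; the weight should converge to 2.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = [0.0]
for _ in range(50):
    w = train_step(w, shards, lr=0.02)
print(round(w[0], 3))  # 2.0
```

The sync point is exactly where the bottlenecks discussed in the whitepaper arise: as model size grows, the all-reduce traffic grows with it, so interconnect bandwidth and gradient size jointly bound scaling efficiency.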
Validation Solution for OCP (PCIe) Mezzanine Validation is essential for the solid design of NIC cards and systems with OCP mezzanine connectors. Wiwynn, a key contributor to the OCP community with extensive experience in designing IT platforms, provides an OCP mezzanine test fixture to satisfy this validation need. The test fixture validates the high-speed channel on both the NIC card and the OCP mezzanine interface, ensuring that design specifications and interchangeability/compatibility requirements are met. This paper addresses the design challenges and details the validation process. Download it to learn more.
Integrate Ceph and Kubernetes on Wiwynn ST7200-30P All-Flash Storage Ceph and Kubernetes are popular open-source software for storage and container orchestration, respectively. This paper demonstrates, step by step, how to integrate them with Wiwynn's all-flash storage ST7200-30P. Download it to learn more. Contact email@example.com for further discussion.