What I Infer as the 5 Reasons for Differences in Current Cloud Infrastructure
When I first came to the United States to do my Master's degree, the first thing that struck me was the multicultural nature of American society.
Specifically, one had to figure out the plethora of food choices – Mexican, Italian, Asian, American, and so on. If you were a student on a strict budget as I was, I'm sure you would appreciate the importance of fast food joints. I would venture so far as to say that navigating the differences among the food choices was as important as getting the degree itself.
However, they all served the common purpose of providing affordable, delicious food that satisfied our hunger at the end of what were often long, tiring days.
Applying that same philosophy to our modular platform architecture: what constitutes the differences among the current infrastructure offerings, even though we innately understand that the foundational cornerstone for meeting the requirements captured in the first blog in this series could lie in commonality?
We intuitively comprehend that, at the core, commonality in designs is the way forward, and the Open Compute Project has been a champion of this cause since its inception in April 2011. However, eight years on, we still see varying implementations, and commonality remains elusive.
At Wiwynn, we recognize this problem statement and the underlying reasons that drive the differences, and we are working to address them in a way that will allow us, and more importantly our customers, to move toward greater commonality in the near future.
Let’s now look at the underlying causes that drive differences in products.
- Fit & Form – Different rack form factors drive different mechanical and thermal solutions, each optimized for its specific infrastructure. The infrastructures we have today include OCP OU 12V, OCP OU 48V, OCP RU Power shelf, Open 19, and EIA 19.
- Personas – Different customer-specific features in the categories of Reliability, Serviceability, Availability, and Configurability drive differences in the intelligence, or firmware, that gets loaded onto the hardware in each of these infrastructures.
- Workloads – Different workloads emphasize a particular element of computing, e.g., Compute, Memory, Storage, or GPU, and each of these has different needs, which in turn drive different configurations.
- Solutions – Finally, depending on the customer's use case, the software and applications that get loaded and exercised differ. This leads to integration-level differences during product development, causing variations in integration, manufacturing, and deployment-level testing.
- Proprietary implementation – Although vendor interoperability has improved since the inception of the Redfish standard and open hardware specifications through OCP, the way vendors interact and interface with hardware northbound traffic remains proprietary and non-standard. This proprietary way of interfacing with the underlying hardware needs to give way to a standard approach if true commonality, and thereby vendor interoperability, is to be achieved.
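To make the interoperability point concrete, here is a minimal sketch of why a standard such as Redfish matters. The resource path and property names below (`PowerState`, `ProcessorSummary`, `MemorySummary`) follow the DMTF Redfish schema, but the sample payload and its values are purely illustrative, not taken from any particular vendor's BMC:

```python
import json

# Illustrative Redfish ComputerSystem payload (values are hypothetical).
# Because the schema is standardized by DMTF, a client written against
# these property names works with any compliant BMC, regardless of vendor.
SAMPLE_RESPONSE = json.dumps({
    "@odata.id": "/redfish/v1/Systems/1",
    "Id": "1",
    "PowerState": "On",
    "ProcessorSummary": {"Count": 2, "Model": "Example CPU"},
    "MemorySummary": {"TotalSystemMemoryGiB": 256},
})

def summarize_system(payload: str) -> dict:
    """Extract standard, vendor-neutral properties from a
    Redfish ComputerSystem resource."""
    system = json.loads(payload)
    return {
        "id": system["Id"],
        "power": system["PowerState"],
        "cpus": system["ProcessorSummary"]["Count"],
        "memory_gib": system["MemorySummary"]["TotalSystemMemoryGiB"],
    }

print(summarize_system(SAMPLE_RESPONSE))
```

Contrast this with a proprietary interface, where the same inventory query would require vendor-specific commands and response parsing for every platform in the fleet.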
So there it is – the 5 reasons for the differences in the current server ecosystems are Fit & Form, Personas, Workloads, Solutions, and Proprietary implementation. It would be great if you could share some of your thoughts, experiences, and insights on this topic. We are keen to know whether the tenets that we are considering for a modular platform are relevant to you and, if so, in what ways. We would love to hear your feedback in the comments section below.
In the next blog we will strive to understand why we need to rise above these differences and how to strike the right balance between innovation and commonality, which could make our designs more robust, economical, and modular, and maybe even longer lasting.