IT professionals are attracted to hyper-convergence because the technology claims to create an architecture that grows automatically as businesses add workloads. But when it comes time to act on hyper-convergence, most data centers relegate the hyper-converged environment to a new project such as virtual desktop infrastructure.
In some cases, data centers use hyper-converged products as part of their virtualization 2.0 refresh. But, with few exceptions, we do not see a wholesale changeover from legacy multi-tiered offerings to a fully hyper-converged data center. Is there something a legacy architecture can still do better than a hyper-converged one?
Is a 100% hyper-converged data center worth it?
A data center that goes all-in on hyper-convergence stands to gain several benefits:
- Demands for more compute or storage are met by adding nodes to the cluster.
- The number of vendors and service contracts the organization must manage is greatly reduced.
- Once-separate processes, such as data protection and disaster recovery, converge and no longer require a separate team to manage.
The problems facing 100% hyper-convergence
To achieve a fully hyper-converged data center, converged architectures need to solve several problems. The first is performance. Most hyper-converged environments deliver more than adequate performance for typical data center workloads, but few can deliver the high performance that a small set of demanding applications requires. In other words, hyper-convergence is ideal for mainstream workloads, but not for performance-hungry, often mission-critical, ones.
In a non-converged environment, performance levels can be assured by dedicating resources to a specific workload and applying quality-of-service settings. The shared-everything nature of hyper-convergence makes it harder to guarantee those service levels.
According to George Crump, founder of analyst firm Storage Switzerland, isolating and guaranteeing performance to a specific workload can become a challenge in a hyper-converged data center.
Another concern when trying to become 100% hyper-converged is efficiency, or the lack of it. Hyper-converged architectures address shortfalls in compute performance, storage performance and storage capacity in the same way: by adding nodes. The problem is that most data centers rarely need to address all three at once. Over time, the hyper-converged architecture winds up with an excess of compute performance, storage performance or storage capacity.
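To see why scaling in uniform node increments breeds excess, consider a rough sketch. The node specs and workload numbers below are hypothetical, chosen only to illustrate the arithmetic: a storage-heavy workload forces the cluster to grow until storage is covered, stranding compute along the way.

```python
import math

# Hypothetical node profile -- illustrative numbers, not any vendor's spec.
NODE = {"compute_cores": 32, "storage_tb": 20}

def nodes_needed(demand, node=NODE):
    """Node count required to satisfy every resource demand at once."""
    return max(math.ceil(demand[r] / node[r]) for r in node)

def excess(demand, node=NODE):
    """Resources provisioned beyond what the workload actually asked for."""
    n = nodes_needed(demand, node)
    return {r: n * node[r] - demand[r] for r in node}

# A storage-heavy workload: only 64 cores needed, but 400 TB of capacity.
demand = {"compute_cores": 64, "storage_tb": 400}
print(nodes_needed(demand))  # 20 nodes, driven entirely by storage
print(excess(demand))        # 576 cores sit idle; storage excess is 0
```

Two nodes would cover the compute demand, but covering the storage demand takes 20, so 18 nodes' worth of compute is paid for and never used.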
A final issue is the latency inherent in inter-node communication in hyper-converged architectures. As the architecture scales, this latency becomes more pronounced. Even if a hyper-converged architecture adds flash storage to resolve some of the above issues, it may not be able to tap the full performance potential of the flash media because of that latency.
Given these concerns, is a 100% hyper-converged data center realistic? It depends. Many data centers can meet their performance demands with a well-designed hyper-converged architecture. This is especially true for small- and medium-sized data centers. Even in the enterprise, a 100% hyper-converged architecture is not out of the realm of possibility.
About the Author:
George Crump is President and Founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. The "Switzerland" in his firm's name indicates his pledge to provide neutral analysis of the storage marketplace, rather than focusing on a single vendor or approach. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. You can visit his company website at storageswiss.com.