The rise of hyper-converged architecture for storage
Over the past year, I've interviewed numerous users and asked them why they chose hyper-converged products rather than more traditional data center gear. The users I've spoken with are extremely satisfied with their choices.
When Taneja Group coined the term hyper-convergence in 2012, we defined it as a genuine integration of compute, networking, storage and server virtualization delivered using a modular scale-out approach. These products also included advanced data services such as deduplication, compression, WAN optimization, storage virtualization and data protection, each delivered with a virtual machine (VM)-centric approach.
The original hyper-converged infrastructure pioneers included Nutanix, Scale Computing and SimpliVity. They've since been joined by many others, including most major system vendors. VMware distributes EVO:RAIL through multiple system suppliers. Dell offers both an EVO:RAIL and a Nutanix-based product. Hewlett-Packard has an EVO:RAIL offering in addition to its StoreVirtual VSA. DataCore Software, Gridstore and Maxta also have hyper-converged products and reference solutions.
Key highlights of hyper-converged architectures
Cores galore. I won't quote the latest maximum core count in an x86 processor because the number will have changed by the time you read this. A CPU with 12 cores is mainstream and that number is increasing fast. This matters for hyper-converged products because it allows more infrastructure services like software-defined storage to coexist in the same CPU complex as business applications without diminishing the overall quality of service.
Flash-first architectures. Vendors have taken advantage of the cost/performance improvements enabled by flash-based storage technologies by designing flash-first architectures into their software-defined storage technology. This, in turn, allows advanced data services such as deduplication, compression and WAN optimization to be added without significantly compromising performance. Flash acceleration also enables scale-out storage architectures to outperform legacy spinning disk arrays. So, hyper-converged systems can support a wider range of business application workloads in a smaller form-factor.
Hyper-scale compute platforms. Hyper-scale optimized servers are used at Google and Facebook, so why shouldn't they also be used in on-premises data centers? The server of choice for many hyper-converged infrastructure products is a four-blade 2U platform with 24 2.5-inch form-factor drive slots. This is an ideal configuration because the three nodes typically considered the starting point for most scale-out solutions fit into an extremely dense package. With two CPUs per blade, you can easily approach 100 cores in a 2U server chassis -- more than enough to support upwards of 50 VMs.
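The chassis arithmetic above can be sketched as a quick back-of-the-envelope calculation. The figures below are illustrative assumptions consistent with the article's ballpark (four blades, two CPUs per blade, 12 cores per CPU); the infrastructure reservation and vCPUs-per-VM average are hypothetical values, not vendor specifications.

```python
# Back-of-the-envelope sizing for a 2U, four-blade hyper-converged chassis.
# All inputs are illustrative assumptions, not measured values.
blades_per_chassis = 4
cpus_per_blade = 2
cores_per_cpu = 12          # "mainstream" core count cited in the article

total_cores = blades_per_chassis * cpus_per_blade * cores_per_cpu
print(f"Cores per 2U chassis: {total_cores}")   # 96 -- "close to 100 cores"

# Rough VM estimate: reserve some cores for infrastructure services
# (hypervisor, software-defined storage) and assume ~2 vCPUs per VM.
infrastructure_cores = 16   # hypothetical reservation
vcpus_per_vm = 2            # hypothetical average
vm_estimate = (total_cores - infrastructure_cores) // vcpus_per_vm
print(f"Approximate VM capacity: {vm_estimate}")  # 40
```

With modest CPU overcommit (common in virtualized environments), the same chassis would comfortably reach the 50-plus VM range the article mentions.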
Why users are choosing hyper-convergence
In my interviews with users, I found many factors contributed to their decision to go with hyper-converged products.
Simplicity. Some users of hyper-converged products were fed up with the DIY approach required by other products and the administrative expertise it demanded. Typically, these users had limited administrative resources. They didn't want to manage storage complexity and desired simple dashboards to show how they were doing with CPU, memory and storage utilization rates. They appreciated that data services were delivered at the VM level and that problem diagnosis worked in a similar manner. Some opted to avoid advanced virtualization technology completely and chose a simple, completely encapsulated virtualized platform like Scale Computing's HC3.
Value. Hyper-converged users did their homework on TCO. Often, these companies were comparing the option of moving services off-premises to the cloud vs. an on-premises DIY approach, so adding staff was the last thing they wanted. When vendor price quotes came in, hyper-converged products easily beat the DIY three-tier options in terms of capital acquisition costs. As for the cloud option, these users were not yet comfortable with moving their mission-critical applications off-premises. So a solution that was just as easy as a public cloud deployment, but with on-premises control and security, was appealing.
Support experience. While support might not be what initially led users to hyper-converged products, it will be what keeps them there for the long haul. The support mantra of "one throat to choke" is a hallmark of convergence. Users were very happy with their support experiences, as they only have to call one company for help.
Availability and scalability. Users appreciated the built-in availability and scalability offered by their hyper-converged products. Most reported zero downtime with their systems; with high availability (HA) built in, they don't have to design a multi-server, externally shared storage architecture. They also appreciated the simplicity of the designs; they bought just what they needed and could upgrade modularly in small increments. Some larger users have even adopted a node-level field-replaceable strategy: if a node fails, they have standby nodes ready for quick replacement. Smaller customers were able to enjoy, for the first time, an entire data center with built-in HA.
Bottom line for hyper-convergence
We will remember 2014 as the year hyper-converged products became mainstream. One point that stood out in my user interviews is that everyone was giving up brand-name servers and external storage and replacing both with hyper-converged products. So it's no wonder many of the major system vendors are jumping on this bandwagon. I can't wait to see what 2015 will bring in hyper-converged innovations.
About the author:
Jeff Kato is a senior storage analyst at Taneja Group with a focus on converged and hyper-converged infrastructure and primary storage.