Hyperconvergence -- also spelled hyper-convergence -- is a software-centric architecture that tightly integrates compute, storage and virtualization resources in a single system that usually consists of x86 hardware. A hyper-converged system can also be sold as software that can be installed on a buyer's existing hardware or as hardware purchased specifically for the installation.
A hyper-converged system allows the integrated technologies to be managed as a single system through a common tool set. Most hyper-converged systems require a minimum of three hardware nodes for high availability and can be expanded through the addition of nodes to the base unit. A grouping of nodes is known as a cluster.
Hyperconvergence began in smaller use cases, such as virtual desktop infrastructure (VDI), but enterprises now commonly use the technology to simplify the deployment, management and scaling of IT resources and to reap Capex and Opex advantages.
Hyper-converged infrastructure (HCI) began as the domain of startups, such as Maxta, Nutanix, Pivot3, Scale Computing and SimpliVity. As a sign of HCI's maturity, larger server and storage vendors, such as Cisco, Dell EMC (including VMware), Hewlett Packard Enterprise (HPE), Lenovo and NetApp, have moved into the market. Some of these vendors have multiple HCI-branded products, often in conjunction with software partners.
Why companies are moving to hyperconvergence
The all-in-one characteristics of the hyper-converged data center are attractive to IT professionals accustomed to piecing these components together themselves or with the sometimes-costly assistance of vendors and IT consultants. Hyperconvergence also enables generalists to administer the systems.
Hyperconvergence is a good option for organizations that have heavily invested in virtualization technologies, but are still having difficulty with the complexity and cost of data protection and storage. It is also a highly attractive technology for enterprises that would prefer to focus their time, money and employee resources more on the operational aspects of their business and less on maintaining infrastructure.
However, the risk of vendor lock-in does exist. For example, you can't combine nodes from one hyper-converged vendor with those of another should the latter offer you a better deal or provide the balance of compute, storage and networking resources you require in new nodes at some future date.
In spite of the technical and financial rigidity associated with hyperconvergence, the total cost of ownership is often lower than the alternative. Let's say you wanted to migrate an entire IT infrastructure to the cloud -- a popular alternative to going hyper-converged today. If you took a hyper-converged approach, you would simply add or remove nodes to increase or decrease resources. With the cloud, you may have to migrate to another cloud service should your current provider not support your expansion plans -- an expensive and often complicated process in its own right.
How hyperconvergence works
A hyper-converged platform typically integrates compute, storage and networking with an often intelligent and automated software-defined data center (SDDC) management system and software layer that defines the operational aspects of that infrastructure. Not all SDDC architectures are hyper-converged, however, including those that feature disparate, nonintegrated hardware platforms and components.
A hypervisor -- designed or modified by the vendor specifically to work with its product -- orchestrates storage, compute and networking provisioning. Because a hyper-converged system virtualizes all resources, those resources can be adjusted to accommodate more or fewer virtual machines (VMs) on the fly without having to suspend the activity of any VM running at the time.
Once the number of VMs has reached the capacity of the hyper-converged infrastructure, scaling is as easy as adding more nodes. New nodes -- with compute, storage and networking resources -- can be added to the overall storage pool to be shared among the VMs.
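The pooling described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical Node and Cluster classes and made-up resource figures -- real HCI software handles pooling through its management layer, not an API like this:

```python
# Illustrative sketch: each node contributes its resources to a shared pool,
# and scaling out is just adding a node. Specs below are invented examples.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int
    ram_gb: int
    storage_tb: float

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, node):
        # A new node's compute, memory and storage join the cluster pool.
        self.nodes.append(node)

    @property
    def pooled(self):
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

# Start at the typical three-node minimum, then scale out with a fourth.
cluster = Cluster([Node(32, 256, 20.0) for _ in range(3)])
cluster.add_node(Node(32, 256, 20.0))
print(cluster.pooled)  # {'cpu_cores': 128, 'ram_gb': 1024, 'storage_tb': 80.0}
```

The point of the sketch is that capacity grows in node-sized increments: every addition brings compute, memory and storage together.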
Single-pane-of-glass management provides administrators with a comprehensive view of the state of the IT environment they are managing by integrating and presenting data from various data sources in a console that unifies setup, configuration, management and monitoring.
Hyper-converged vendors often build in data protection -- mirroring, replication, striping, erasure coding and so on -- for reliability and data reduction purposes, as well as backup and recovery, disaster recovery (DR) and other business continuity (BC) features. Bundled software also often includes support for automation, VM migration, management tools, load- and resource-balancing, and the ability to implement rolling updates while VMs continue to run. Automatic failover means losing a computer node or storage device won't bring down individual VMs or the system as a whole.
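The protection schemes above trade raw capacity for redundancy in different proportions. The following back-of-the-envelope arithmetic, with invented figures and simplified formulas that ignore vendor-specific overhead, shows the difference between mirroring and erasure coding:

```python
# Rough usable-capacity math for two common protection schemes.
# Figures are illustrative, not any specific vendor's accounting.

def usable_with_replication(raw_tb: float, copies: int) -> float:
    # N-way mirroring keeps `copies` full copies of every block.
    return raw_tb / copies

def usable_with_erasure_coding(raw_tb: float, data: int, parity: int) -> float:
    # A data+parity stripe yields `data` useful fragments per
    # `data + parity` fragments written.
    return raw_tb * data / (data + parity)

raw = 80.0  # TB of raw storage across the cluster
print(usable_with_replication(raw, copies=2))       # 40.0 TB usable
print(usable_with_erasure_coding(raw, 4, 2))        # roughly 53.3 TB usable
```

Both schemes here survive the loss of a node or drive, but erasure coding recovers more usable capacity at the cost of extra compute during writes and rebuilds.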
Why hyperconvergence is important
Hyperconvergence helps improve the management of virtual IT environments, serves as a building block for the cloud and changes the roles of IT teams.
Businesses should know exactly what they are getting when purchasing HCI, with a guarantee that the various components will work well together and are easily managed. When it is time to add and scale IT resources, the process should be as simple as buying another auto-discoverable node from your hyper-converged systems vendor, plugging it in and scaling linearly.
Perhaps most important of all, IT no longer needs to separately manage and configure servers, storage, hypervisors and network devices in a hyper-converged environment.
Hyperconvergence vs. converged
In a converged infrastructure, the compute, storage and networking components remain separate and are not integrated into nodes as they are with hyper-converged systems. Converged infrastructure essentially provides customers with a recipe for buying preapproved, best-of-breed components that are guaranteed to work together properly.
The hypervisor is also more tightly integrated in hyper-converged systems than in converged systems. While most converged infrastructures support VMware and, usually, other hypervisors, a hyper-converged infrastructure runs all key data center functions as software on the hypervisor.
An early promise of hyperconvergence included networking in the stack, but that is still in the early stages. Hyper-converged vendors are still working on providing and managing network resources in the same way they support data storage and compute.
Almost all hyper-converged vendors support VMware's market-leading hypervisor software, although some support Microsoft Hyper-V and Kernel-based Virtual Machine (KVM) hypervisors. Early hyper-converged player Nutanix sells its own Acropolis hypervisor product based on open source KVM, while continuing to also support VMware hypervisors.
The tight integration of the components in a hyper-converged infrastructure provides its primary benefits: ease of management and ease of scaling. Because all of the components have been designed from the ground up or modified by the vendor to work tightly together, it is possible to manage all resources -- compute, storage, networking and virtualization -- from one management tool or console.
When an organization needs more IT resources, expanding the HCI is simply a matter of adding more nodes. This also leads to one of the primary drawbacks of a hyper-converged infrastructure: All resources must be increased when an organization needs to increase any one resource. Early HCI products required an organization to expand all compute resources and the number of VMs when expanding storage. To ease this problem, newer HCI platforms include nodes that are either storage- or compute-centric.
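A quick sketch shows why storage- or compute-centric nodes ease the problem described above. With hypothetical node specs (the figures below are invented for illustration), meeting a storage target with balanced nodes also buys compute the organization doesn't need:

```python
import math

# Hypothetical node profiles -- illustrative only.
balanced = {"cpu_cores": 32, "storage_tb": 20.0}
storage_centric = {"cpu_cores": 8, "storage_tb": 60.0}

def nodes_needed(node, storage_target_tb):
    # Nodes come in whole units, so round up to meet the storage target.
    return math.ceil(storage_target_tb / node["storage_tb"])

target = 120.0  # TB of additional storage required; compute demand is flat
for name, node in [("balanced", balanced), ("storage-centric", storage_centric)]:
    n = nodes_needed(node, target)
    print(f"{name}: {n} nodes, {n * node['cpu_cores']} extra cores")
# balanced: 6 nodes, 192 extra cores
# storage-centric: 2 nodes, 16 extra cores
```

Under these assumed specs, the storage-centric expansion delivers the same capacity with a third of the hardware and a fraction of the stranded compute.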
Major vendors and products
Dell EMC sells its VxRail appliance with VMware vSAN HCI software for customers using VMware hypervisors. The vendor also sells its XC Series for customers with a hypervisor other than VMware's already installed in their IT infrastructure. The XC Series uses the Nutanix software stack through a partnership between Dell and Nutanix.
Nutanix is the most successful of the HCI startups and remains among the market leaders. The vendor began shipping HCI appliances in 2013, became a public company in 2016 and forecast more than $1 billion in revenue in 2018. Nutanix offers a branded NX appliance and also sells its software through partnerships with server vendors, such as Dell and Lenovo.
HPE, which acquired SimpliVity for $650 million in early 2017, sells an HPE SimpliVity hyper-converged product that packages SimpliVity OmniStack software on HPE ProLiant servers.
Cisco bought HCI software vendor Springpath for $320 million in September 2017 and runs Springpath software on its Unified Computing System (UCS) servers in its HyperFlex HCI appliance.
Another server vendor, Lenovo, packages its hardware with software from several partners, including Nutanix.
NetApp, the largest stand-alone storage vendor, began selling NetApp HCI appliances in 2017 that use flash storage from its SolidFire platform.
Regardless of the hardware brand, VMware and Nutanix supply most of the software that provides core hyperconvergence functionality. According to market research from IDC, VMware and Nutanix software ran on appliances that contributed 70% of HCI revenue in the first quarter of 2018.
The future of hyperconvergence
Expect hyper-converged systems to become increasingly powerful as advancements in solid-state drive (SSD) technology -- such as nonvolatile memory express (NVMe) -- enable servers to do far more with fewer units per workload.
The arrival of storage-class memory (SCM), specifically SCM NVDIMMs, will also enable more instances per server by acting as a DRAM expander and -- as OSes and compilers start to support it -- will greatly speed application performance. Further down the road, variations on Hybrid Memory Cube architecture that bring CPU and DRAM closer to one another, wider bandwidths to better connect DRAM to CPUs, and increasing amounts of L4 cache on CPUs could lead to tremendous improvements in hyper-converged performance. In addition, plans to make HCI memory shareable over an entire cluster could eventually raise performance even higher.
SSD capacities continue to skyrocket, with a 100 TB SSD already announced. Meanwhile, newer, tinier and faster form-factor SSDs -- such as those based on the M.2 specification -- will help lead hyper-converged nodes to previously unheard-of amounts of storage capacity in ever-smaller appliances. For example, a single 3.5-inch drive bay can hold 10 M.2 SSD cards.
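The density implications of that bay figure are easy to work out. The per-card capacity and bay count below are assumptions chosen for illustration, not specifications from the article:

```python
# Back-of-the-envelope M.2 density math. Only the 10-cards-per-bay figure
# comes from the text; the other numbers are assumed for illustration.
bays_per_node = 4        # assumed 3.5-inch bays in a hypothetical node
m2_cards_per_bay = 10    # per the 3.5-inch bay figure above
tb_per_m2_card = 8.0     # assumed card capacity in TB

node_capacity_tb = bays_per_node * m2_cards_per_bay * tb_per_m2_card
print(node_capacity_tb)  # 320.0
```

Even with modest per-card capacities, packing cards at this density pushes a small node into hundreds of terabytes.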
Vendors are also addressing a major complaint about hyperconvergence: the inability to purchase storage capacity separately from compute power for applications. Some now offer nodes that tilt in one direction or the other -- compute or capacity. A related trend, known as disaggregation, could also gain more traction in hyperconvergence circles in the near future. Disaggregation aims to boost resource utilization rates far above what they are today, thereby reducing data center cooling, power and space requirements in the process.
The goal is to make hyper-converged systems more flexible and resource-efficient by pooling specific types of functionality -- compute, flash capacity, memory, hard disk drive capacity and so on -- across all nodes and making them available to all applications in a hyper-converged infrastructure.