Hyper-converged market grows, but is still young

As vendors address perceived hyper-convergence market flaws such as capacity, performance and vendor lock-in, enterprise users have more choices than ever.

Current offerings in the hyper-converged market are turnkey, self-contained compute systems sold by a single vendor and typically branded by that vendor or an integrator. They include everything needed to create a compute infrastructure, including hardware, software, management functionality and comprehensive support.

Hyper-converged systems are typically composed of multiple, physical appliance modules in a scale-out topology referred to as nodes. Each node includes storage, a compute engine, networking components and, usually, a hypervisor. Although they don't have to include one, all leading products have at least one hypervisor option, and many offer two or more.

Appliances typically contain between one and four nodes, each of which is an independent server with CPU and memory that share a common server chassis. Hyper-converged clusters usually contain between four and 16 nodes, although some have no specified limit. Hyper-converged infrastructures leverage either virtual SAN or clustered file system software to share storage across multiple nodes.
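
To make that building-block structure concrete, here is a minimal sketch in Python of how nodes, appliances and a cluster relate, with node-local storage pooled into one shared layer. The class names and capacity figures are illustrative assumptions; the only constraints taken from the description above are the one-to-four nodes per appliance and the pooling of storage across nodes.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    """One independent server inside a shared appliance chassis."""
    cpu_cores: int
    memory_gb: int
    hdd_tb: float
    flash_tb: float

@dataclass
class Appliance:
    """A physical chassis holding between one and four nodes."""
    nodes: List[Node]

    def __post_init__(self):
        assert 1 <= len(self.nodes) <= 4, "appliances typically hold 1-4 nodes"

@dataclass
class Cluster:
    """A scale-out cluster; the virtual SAN or clustered file system layer
    pools the node-local storage into one shared resource."""
    appliances: List[Appliance]

    @property
    def nodes(self) -> List[Node]:
        return [n for a in self.appliances for n in a.nodes]

    @property
    def pooled_storage_tb(self) -> float:
        # Every node's disk and flash appear as one shared pool to the hypervisor.
        return sum(n.hdd_tb + n.flash_tb for n in self.nodes)

# Example: four appliances of four identical hybrid nodes each (16 nodes total).
node = Node(cpu_cores=16, memory_gb=256, hdd_tb=8.0, flash_tb=1.6)
cluster = Cluster([Appliance([node] * 4) for _ in range(4)])
print(len(cluster.nodes), "nodes,", cluster.pooled_storage_tb, "TB of pooled storage")
```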

Hyper-scale concepts become hyper-converged storage

The hyper-converged market is based on a model known as the Open Storage Platform (OSP), developed initially by large Web companies as a way to store and handle the enormous amounts of data their social media and Internet-based businesses were generating. These hyper-scalers leveraged the economics of industry-standard x86 server hardware and the flexibility of software-defined storage features and functionality, much of which they wrote themselves.

From this origin, vendors in the hyper-converged marketplace have productized the OSP model, removing the do-it-yourself aspect and consolidating the components into a turnkey appliance.

The evolution of hyper-converged products has been largely incremental, with vendors adding model configurations that offer more storage capacity -- both flash and hard disk drives (HDDs) -- as well as more compute power and memory. The majority of products use a mix of flash and HDD storage, but one vendor offers all-flash configurations exclusively while another provides only HDD storage.

The most common approach has been to develop proprietary software that includes embedded storage and management functions, or runs these functions as virtual machines (VMs) in an embedded hypervisor. This software comes either installed on branded x86 server hardware or on a server from one of the major platform vendors, such as Hewlett Packard Enterprise (HPE), Cisco or Supermicro. Some of these products include Atlantis HyperScale, HPE CS-250 StoreVirtual, Maxta MaxDeploy, Nutanix Acropolis, Dell XC series (using Nutanix software), Scale Computing HC3 and SimpliVity OmniCube.

Another approach is to use the VMware EVO:RAIL platform. Launched at VMworld in 2014, EVO:RAIL can be thought of as a reference architecture that integrates storage and management functions into the VMware hypervisor. VMware has partnered with most major vendors to sell EVO:RAIL using branded x86 server hardware packaged with VMware's Virtual SAN software. Some of these products include Dell EVO:RAIL, EMC VSPEX Blue, Fujitsu PrimeFlex, HDS UCP 1000, NetApp EVO:RAIL and Supermicro SYS-2028TP. Most EVO:RAIL products have essentially the same configurations, although there is some unique functionality provided by the different vendors.

On a per-node basis, hyper-converged infrastructure configurations vary widely, offering up to two dozen CPU cores and a half terabyte of memory. Storage options are similarly varied, providing up to 60 TB of disk and 38 TB of flash capacity per 2U appliance, based on current research from the Evaluator Group. These are impressive statistics, but can hyper-converged infrastructure products really handle serious workloads?
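
For a rough sense of what those per-node maximums add up to at cluster scale, the short calculation below multiplies assumed per-node and per-appliance figures out to a 16-node cluster. The specific numbers are placeholders drawn from the ranges above, not a published vendor configuration.

```python
# Hypothetical per-node maximums drawn from the ranges cited above.
cores_per_node = 24          # up to roughly two dozen CPU cores
memory_tb_per_node = 0.5     # up to half a terabyte of memory

# Per-appliance storage maximums (2U chassis, assumed four nodes per appliance).
hdd_tb_per_appliance = 60
flash_tb_per_appliance = 38
nodes_per_appliance = 4

cluster_nodes = 16           # upper end of a typical cluster
appliances = cluster_nodes // nodes_per_appliance

print("CPU cores:", cores_per_node * cluster_nodes)        # 384 cores
print("Memory (TB):", memory_tb_per_node * cluster_nodes)  # 8.0 TB
print("HDD (TB):", hdd_tb_per_appliance * appliances)      # 240 TB
print("Flash (TB):", flash_tb_per_appliance * appliances)  # 152 TB
```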

Why hyper-converged?

Hyper-converged infrastructures provide a powerful, feature-rich virtual server environment that can often be set up in an hour or less and is simple to operate and scale.

Most of the success these products have seen is in relatively small or isolated environments -- like departmental computing -- or remote locations where there isn't enough on-site technical expertise to support a traditional IT infrastructure. A key area for hyper-converged products is business environments with multiple locations and no dedicated IT staff, such as retail chains or the branch offices of a company.

Similar to converged systems, hyper-converged architectures simplify the procurement process by providing a single-SKU product. Instead of combining multiple, rack-level IT components, these products combine storage, networking and compute elements into a single, industry-standard server chassis. The result is a data center offering that scales down to smaller configurations and is much simpler to implement, which has led to the success of hyper-converged products.

How hyper-converged infrastructure products hold up

So if the hyper-converged market sweet spot is for use cases such as remote offices or businesses with small IT departments, can it be effective in larger environments? Many of the environments that have implemented hyper-convergence may not be particularly high-performance, but that doesn't mean the systems lack the horsepower to run real workloads.

When comparing hyper-converged options, tests should resemble real-world conditions. In this case, those would most likely be virtual server environments or virtual desktop infrastructures (VDI).

A standard measure of system performance is the number of VMs or virtual desktops supported. Based on Evaluator Group's current IOMark VM and IOMark VDI testing, hyper-converged systems can accommodate significant workloads.
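
One simple way to reason about such supported-VM numbers is to divide a cluster's resources by a per-VM profile and let the most constrained resource set the ceiling. The sketch below shows that style of back-of-the-envelope estimate; the cluster totals and per-VM requirements are assumed placeholders, not IOMark methodology or results.

```python
# Back-of-the-envelope estimate of how many VMs a cluster could host.
# Every figure below is an illustrative placeholder, not a benchmark result.
cluster = {"cores": 384, "memory_gb": 8192, "storage_gb": 150_000, "iops": 200_000}
per_vm  = {"cores": 2,   "memory_gb": 8,    "storage_gb": 100,     "iops": 300}

# The most constrained resource caps the number of VMs the cluster supports.
supported = min(cluster[r] // per_vm[r] for r in per_vm)
print("Estimated VMs supported:", supported)
for r in per_vm:
    print(f"  {r}: enough for {cluster[r] // per_vm[r]} VMs")
```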

Hyper-converged market limitations: Scale, implementation

When shipped as an appliance, hyper-converged infrastructures include storage, compute and networking. Unlike in most traditional IT systems, that means all of these resources must scale together.

Some vendors have started to address claims that their products only scale capacity and compute in lockstep. Most products (except EVO:RAIL) are available in multiple appliance configurations that aim to increase flexibility, especially in storage. A few vendors even offer storage-only nodes -- appliances with minimal CPU and memory. Another vendor allows its hyper-converged product to be connected to an existing scale-out storage product.

Even though most products offer different configuration options when adding modules, hyper-converged infrastructures still use resources suboptimally. That inefficiency translates into higher costs as the infrastructure grows.
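
One way to see that sub-optimization is to model what happens when only storage demand grows: because each added node also carries CPU and memory, the compute side ends up over-provisioned. The sketch below walks through that case using an assumed node specification and an assumed growth pattern.

```python
import math

# Illustrative node specification -- an assumption, not a vendor's spec sheet.
NODE_STORAGE_TB = 10
NODE_CORES = 24

def nodes_needed(storage_tb: float, cores: int) -> int:
    """Nodes required when storage and compute can only scale together."""
    return max(math.ceil(storage_tb / NODE_STORAGE_TB),
               math.ceil(cores / NODE_CORES))

# A workload whose storage need triples while its compute need stays flat.
before = nodes_needed(storage_tb=40, cores=96)    # 4 nodes
after = nodes_needed(storage_tb=120, cores=96)    # 12 nodes

print("Nodes purchased:", after)
print("Cores purchased:", after * NODE_CORES, "-- vs.", 96, "cores actually needed")
# The extra 192 cores are paid for only because storage had to grow.
```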

Hyper-converged systems are, by definition, self-contained, comprehensive IT infrastructures. This means they don't integrate into an existing data center SAN or compute environment. And since all subsequent modules must be bought from the same vendor, most hyper-converged market options lock the customer into a single vendor. The success hyper-convergence has seen in the departmental, remote office and small to medium-sized business space doesn't always translate into the traditional data center, where quick setup and easy operation are less important than scale, flexibility and cost.

Hyper-convergence: No replacement for traditional IT

By reducing the infrastructure stack to a small cluster of homogeneous appliances, even a small company can implement a virtual server compute environment by connecting modules like building blocks and then following a simple, menu-driven startup process.

Hyper-converged systems could be called a "doable, do-it-yourself system." This is the greatest value offered by hyper-convergence, and the reason why products from the hyper-converged market have established themselves in departmental compute environments, remote office locations and small companies. For these use cases, it has become a mainstream technology. But it's a different story in traditional data center environments.

We don't see hyper-converged systems replacing existing storage and compute infrastructures in midsize and larger companies that typically have more IT expertise. These organizations are more interested in consolidating IT systems, not adding new silos to manage. They're also hesitant to be locked into a single vendor, as required by most hyper-converged infrastructure options.
