
Should you go all-in on hyper-convergence technology?

Hyper-converged storage reduces TCO, simplifies installs and is poised and ready for the software-defined data center.


As solid-state storage has supplanted the venerable hard drive as primary storage, we've come to realize a number of deficiencies in the RAID array model.

Compared with local storage, RAID arrays add milliseconds of delay to every I/O operation. That was acceptable when HDD access times ran to tens of milliseconds, but it is wholly inadequate when, for example, a local nonvolatile memory express (NVMe) SSD can deliver data within 100 microseconds. Meanwhile, rebuild times for failed HDDs grew so long that another drive in the array was liable to fail before the rebuild finished, which would eventually lead to data loss. RAID 6, with a second parity drive, tempered the issue for a while, but drive capacities above 4 TB made even dual parity prone to data loss.
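A rough back-of-the-envelope calculation shows the scale of the rebuild problem. The sustained rebuild rate below is an assumed figure for illustration, not a measurement of any particular array:

```python
# Why large HDDs strain RAID: rebuild time and latency, using assumed figures.
capacity_tb = 4                    # failed drive capacity
rebuild_mb_per_s = 100             # assumed sustained rebuild rate
rebuild_hours = capacity_tb * 1_000_000 / rebuild_mb_per_s / 3600
print(f"Rebuilding a {capacity_tb} TB drive at {rebuild_mb_per_s} MBps "
      f"takes roughly {rebuild_hours:.0f} hours")      # ~11 hours of exposure

hdd_latency_us = 10_000            # ~10 ms for an HDD seek plus rotation
nvme_latency_us = 100              # ~100 microseconds for a local NVMe SSD
print(f"A local NVMe SSD responds ~{hdd_latency_us // nvme_latency_us}x faster than an HDD")
```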

These inadequacies, among others, forced the storage industry to change course, steering us toward hyper-convergence technology.

The road to hyper-convergence

The logical response to the performance and reliability issues revealed by the deployment of flash was to shift from arrays -- with a massive number of slow drives -- to a compact storage appliance with just eight or 10 drives attached to a nonredundant controller. Data integrity comes from replicating data across appliances rather than internally with RAID.

One benefit of the small appliance approach is that network performance can easily be matched to the raw performance of the drives. This becomes very important as NVMe drives move into the 10 GBps streaming rate range for a single drive, as just happened in 2016.


As this was going on, storage software vendors started exploring new virtualization concepts, collectively known as software-defined storage (SDS), that unbundle storage services from actual storage platforms and run them in a general virtual instance pool. This development, which seeks to make storage an agile, scalable resource at the service level, parallels the server virtualization and orchestration already in place in the cloud.

Running virtual storage services in a storage appliance makes a lot of sense, as these appliances all use some form of commercial off-the-shelf (COTS) platform as their controller. The realization that there was a good deal of spare compute capacity in the storage controller -- along with the recognition that a compact storage appliance and a typical rack server are essentially identical in form and configuration -- is what directly triggered the concept of hyper-convergence technology. It was clearly time to merge compact storage appliances with rack servers, reducing hardware complexity and allowing a broader range of scaling on the storage side.

Hyper-converged systems today

Vendors typically build hyper-converged platforms as a 2U rack unit with an x64 server motherboard and a set of SSDs. These appliances are networked together so all storage forms a common virtual pool. Creating the pool requires some secret sauce -- usually a storage management suite -- that runs on all the appliances and presents the storage as a virtual SAN.
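To make the idea concrete, here is a minimal toy sketch of what that management layer does -- discover appliances and present their local drives as one pooled capacity. It is purely illustrative; the class names are invented for this example, and real HCI suites add data placement, redundancy, caching and much more:

```python
# Toy model of a storage management layer pooling appliance-local drives into
# one virtual SAN namespace. Purely illustrative, not any vendor's implementation.

class Appliance:
    def __init__(self, name, drive_capacities_tb):
        self.name = name
        self.drives = list(drive_capacities_tb)

class VirtualPool:
    def __init__(self):
        self.appliances = []

    def discover(self, appliance):
        """New appliances -- and their drives -- join the shared pool."""
        self.appliances.append(appliance)

    def raw_capacity_tb(self):
        return sum(sum(a.drives) for a in self.appliances)

pool = VirtualPool()
pool.discover(Appliance("node-1", [3.2] * 8))   # eight 3.2 TB SSDs (hypothetical)
pool.discover(Appliance("node-2", [3.2] * 8))
print(f"Raw pooled capacity: {pool.raw_capacity_tb():.1f} TB")   # 51.2 TB
```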

Storage management automates discovery of new drives, making the scaling process simple. When a drive fails, the software recovers the cluster by copying data from other drives until the redundancy structure is rebuilt. These tools support both copy redundancy and erasure codes -- the latter with some issues we'll cover later.
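The capacity trade-off between those two redundancy schemes is simple arithmetic. The raw capacity and stripe geometry below are assumptions chosen for illustration:

```python
# Usable capacity under the two redundancy schemes, with assumed figures.
raw_tb = 100                       # raw pooled flash across the cluster

copies = 3                         # 3-way copy redundancy
usable_copy_tb = raw_tb / copies

data_frags, parity_frags = 4, 2    # e.g. a 4+2 erasure-code stripe
usable_ec_tb = raw_tb * data_frags / (data_frags + parity_frags)

print(f"3-way replication: {usable_copy_tb:.0f} TB usable of {raw_tb} TB raw")
print(f"4+2 erasure coding: {usable_ec_tb:.0f} TB usable of {raw_tb} TB raw")
# Erasure coding is more space-efficient, but rebuilds touch more nodes and
# consume more CPU and network bandwidth.
```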


From a performance viewpoint, the drives local to any appliance deliver very high I/O and bandwidth internally. A set of eight NVMe drives can deliver 80 GBps or 10 million IOPS, for example. This is way more than a typical server can use, leaving extra bandwidth to be shared.

Network access poses a challenge, however. An ideal network would add little additional latency, but, of course, we are nowhere close to this. A good plan for a hyper-converged cluster is to use a fast network, such as multiple 10 or 25 Gigabit Ethernet links or even faster connections. This admittedly adds some cost, but it reduces data management complexity by removing the need to localize key data and allows all servers to run faster.
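A quick comparison of local drive bandwidth against a plausible node network configuration shows why the network, not the drives, is the bottleneck. The drive figures come from the example above; the per-node link count is an assumption:

```python
# Local NVMe bandwidth versus a plausible node network configuration.
drives_per_node = 8
gbps_per_drive = 10 * 8            # 10 GBps per drive is roughly 80 Gbps
local_bw_gbps = drives_per_node * gbps_per_drive    # 640 Gbps inside the node

links_per_node = 4                 # assumed number of 25 GbE ports
network_bw_gbps = links_per_node * 25               # 100 Gbps out of the node

print(f"Local drive bandwidth:  {local_bw_gbps} Gbps")
print(f"Node network bandwidth: {network_bw_gbps} Gbps")
print(f"Only ~{network_bw_gbps / local_bw_gbps:.0%} of local bandwidth is reachable remotely")
```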

Many hyper-converged products now use Remote Direct Memory Access (RDMA) over Ethernet or InfiniBand. This adds considerable throughput, cuts overhead by as much as 90% and significantly reduces latency.

The case for hyper-converging

There are two classes of compact storage appliances. One takes a "Lego" block approach for object storage. It is also used for point products such as virtual desktop infrastructure (VDI), where storage capacity is relatively small, or as a complete storage platform for remote offices. The second, more recent, form is more interesting within the general context of hyper-convergence technology.

It's built to tackle the tasks of virtualizing application instances, storage services and software-defined network (SDN) services. The CPU is usually more powerful -- possibly a dual-CPU configuration -- and the amount of dynamic RAM (DRAM) is quite large. This allows each appliance to run many more instances, especially if Docker containers are fully supported by all three service classes.

Why the traditional server + RAID architecture falls down

  • Can't keep up with SSD
  • Fibre Channel-centric -- not a cluster interface
  • SAN needs own admin and support
  • Can't scale out to large clusters
  • RAID rebuild time too long
  • RAID doesn't provide appliance-level data integrity/availability

Overall, the upside of hyper-convergence technology lies in ease of use. There is a single hardware box to purchase for both storage and servers, and network switching is much more cost-efficient due to the barebones nature of hyper-converged infrastructure (HCI) switching gear with SDN. This saves on sparing costs and simplifies installation, while HCI software usually comes preloaded to save even more startup time. Support of HCI involves a single supplier, which removes risk and reduces internal staffing needs.

HCI offerings come from all the major IT platform vendors, and most use a third-party virtual SAN tool (from Nexenta or SimpliVity, for example) for clustering. They may also include other features, such as management and provisioning tools, to bolster the offering. Most hyper-converged technology vendors today limit the number of configurations they offer, with the aim of guaranteeing out-of-the-box operation and first-rate support.

Bottom line: There is little risk in moving down the hyper-converged technology path today. Costs should prove lower than traditional a la carte configurations, especially compared with traditional RAID, and there are many suppliers, ranging from the major vendors to startups. The evolution of software-defined everything will add many service options and take functionality to new levels over the next few years.

Getting HCI right

There are some issues to be cognizant of when deploying hyper-converged technology, however. We've touched on networking, definitely not a place to scrimp and save. Caching data, probably using NVDIMM as a DRAM extender, will likely become important in the near future.

Over time, you will need to extend any HCI cluster. With servers and storage both evolving at their fastest pace in decades, it's certain that any upgrades will be heterogeneous relative to your existing appliances -- faster and bigger memory, larger drives and so on. The software making the cluster work has to cope well with these expansions and handle the differences in resources properly.
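One simple way cluster software can handle mixed-generation nodes is capacity-weighted placement, sketched below. This is a minimal illustration under assumed node sizes, not how any particular HCI suite actually balances data:

```python
# Capacity-weighted placement across mixed-generation nodes (assumed sizes).
import random

nodes_tb = {"gen1-a": 20, "gen1-b": 20, "gen2-a": 60}   # usable TB per node

def pick_node(nodes):
    """Choose a node with probability proportional to its usable capacity."""
    names, weights = zip(*nodes.items())
    return random.choices(names, weights=weights, k=1)[0]

placements = [pick_node(nodes_tb) for _ in range(10_000)]
for name in nodes_tb:
    share = placements.count(name) / len(placements)
    print(f"{name}: ~{share:.0%} of newly written data")   # the bigger node absorbs more
```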


The core cluster software is vendor-agnostic, requiring only COTS hardware to run. However, the major suppliers of HCI gear, such as Dell and Hewlett Packard Enterprise, dominate, so there's the possibility of long-term lock-in, especially as new appliances enter the market. To prevent this, ask your vendor whether it allows products from multiple suppliers to pool together, just as the typical cloud does today.

Hyper-converged technology is deficient for some use cases, of course. Products with GPUs are still outside HCI-approved configurations, which impacts big data and HPC needs. The HCI configurations described here are also overkill for many remote offices, where the added complexity of storage pooling may be unnecessary.

Using part of an HCI cluster for VDI makes sense, though, since this unifies hardware purchases and allows you to apply much the same resources as would be needed for a decent-sized traditional VDI setup while getting the benefits of a common architecture.

Secondary storage

One question with hyper-converged technology is what to do with older data, which is usually moved to secondary storage. You could add some bulk hard drives to each appliance and stretch their effective capacity significantly through compression and deduplication. So, for example, a pair of 10 TB HDDs could effectively add 100 TB of compressed secondary storage to each node. Alternatively, you could move data out to a networked secondary storage system (today, usually an object store).
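The arithmetic behind that example is straightforward. The 5:1 reduction ratio is implied by the figures above; real-world ratios vary widely by workload:

```python
# Effective secondary capacity from a pair of bulk HDDs plus data reduction.
hdd_count = 2
hdd_capacity_tb = 10
reduction_ratio = 5                # combined compression + deduplication (assumed)

raw_tb = hdd_count * hdd_capacity_tb
effective_tb = raw_tb * reduction_ratio
print(f"{hdd_count} x {hdd_capacity_tb} TB HDDs = {raw_tb} TB raw, "
      f"~{effective_tb} TB effective at {reduction_ratio}:1 reduction")
```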

The choice between the two options in terms of performance is a wash, though simply adding some drives to empty slots in HCI appliances will probably be quite a bit cheaper.

Alternatives to HCI

There are, in fact, only two viable alternatives to HCI for a modern IT strategy. One is to move storage completely to a public cloud. It's possible to obtain a dedicated, private cloud-like space -- using Virtual Private Clouds in Amazon Web Services, for example -- that matches your data center in terms of security and data integrity. Most IT shops aren't ready for a full transition to the cloud yet, however, and in-house facilities based on HCI may in fact have a lower total cost of ownership (TCO).

The other alternative is to build a traditional server cluster with networked storage. Used as a cloud, these clusters run into I/O performance issues, however, even when all-flash arrays are used to boost networked storage performance. Latencies are always higher than with local NVMe drives, which is pushing all-flash array vendors to deliver NVMe over Fabrics interfaces on their boxes. But even these will still be slower than local drives.

The networked storage approach complicates vendor management and generally increases TCO over hyper-convergence technology. Using networked storage for secondary storage is also more expensive.

The evolution of HCI

There are two major trends in IT right now, massive performance improvements and a move toward very compact packaging.

SSD performance continues to improve at a rapid pace. This means servers can do more with less -- fewer units for a given workload. The advent of storage-class memory (SCM) in the form of NVDIMMs is another game changer. SCM acts both as a DRAM expander -- allowing more instances in a server -- and as persistent memory, which will speed up applications by large factors as operating systems and compilers evolve to support it over the next 18 months. This will make HCI appliances much more powerful, especially when coupled with the multiplying effect of Docker containers on instance count.

Benefits of hyper-converged systems

  • Local storage -- both instance store and persistent data storage are low-latency
  • Matches SSD performance needs
  • Can scale to very large clusters
  • Fewer platforms and fewer vendors to manage
  • Lego-like simplicity of integration
  • Inexpensive platform for storage compared with RAID arrays
  • Fits in with software-defined infrastructure
  • Extensible with future CPU and SSD architectures

These are relatively near-term improvements. Within a couple of years, variations of the Hybrid Memory Cube (HMC) architecture will bring DRAM and CPU much closer together. We'll see CPUs with what amounts to a 16 GB or 32 GB L4 cache in 2017, while the remaining DRAM will be coupled over a much higher-bandwidth serial connection scheme. There are initiatives under way that would make all of this memory shareable across an HCI cluster, taking performance to a new plateau.

Meanwhile, SSDs are getting smaller and denser. 3D NAND technology will take over the market in 2017, which means tiny SSDs will have huge capacities -- expect 10 TB SSDs in the small M.2 form factor. A stack of 10 of these would fit in a 3.5-inch drive bay. At the bulk end of the spectrum, 100 TB 2.5-inch SSDs have already been announced, though delivery dates are still up in the air.

The server engine using the HMC approach is also much more compact, since the CPU is delivered on a tiny module with its power system on board. Taken together, small drives and smaller server engines mean smaller systems. Most likely, 2018's sweet spot will be either a 1/2U rack server or a simple high-density blade-chassis approach.

So should you go all-in on hyper-converged systems?

The answer, in short, is yes. As a way to get the lowest TCO, simplest (and fastest) installs, and a platform poised for software-defined infrastructure and fully orchestrated hybrid clouds, hyper-convergence technology looks good today and absolutely compelling over the next year or two.
