The rise of hyper-converged architecture for storage
With competing implementation models, IT decision makers need help when evaluating hyper-converged system offerings for their environments.
The following seven hyper-converged storage system criteria revolve around the basic issues that are central to the objective assessment of the business value of any technology acquisition: cost, availability and fitness to purpose.
Some hyper-converged server/storage offerings take the form of pre-engineered appliances or hardware platforms certified by a given hypervisor vendor. These create a hardware lock-in situation. But pursuing a hardware-agnostic approach is the only way you can scale various parts of the node at different times based on need or the availability of new technology. The most fundamental question to ask is: "What hardware will I need to make the product work?"
Hardware dependencies should also be considered with an eye to scaling. The tighter the hardware specification, the less you may be able to scale various components in a modular way. Improvements in component technologies can occur at varying rates, making it difficult to keep track of innovation. Moreover, being locked in to a select list of hardware may increase the cost of the hyper-converged system offering over time.
Several hypervisor vendors are selling hyper-converged storage models that begin with a requirement for a minimum of three (or more) clustered nodes. A node usually requires a physical server, a software license, clustering software (whether part of a hypervisor software package, operating system or specialized third-party software), flash storage devices and a storage array or JBOD. The cost per node for one hypervisor vendor's hyper-converged system could range between $8,000 and $11,000 in software licenses and $8,000 and $14,000 in server and storage hardware, according to a recent lab report. Those numbers need to be multiplied by the number of nodes required to create the hyper-converged infrastructure, which is a minimum of three nodes but recommended to be four nodes for availability and performance reasons. By contrast, some third-party hyper-converged server-storage models may require only two physical nodes to start, and can leverage less expensive hardware (SATA, rather than SAS disk, for example).
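The cluster-cost arithmetic above can be sketched in a few lines. The per-node ranges are the software-license and hardware figures quoted from the lab report; the function name and parameter defaults are illustrative, not from any vendor's pricing tool.

```python
def cluster_cost_range(nodes, sw=(8_000, 11_000), hw=(8_000, 14_000)):
    """Low/high total cost for a hyper-converged cluster.

    sw and hw are (low, high) per-node costs for software licenses
    and server/storage hardware, per the lab-report ranges above.
    """
    low = nodes * (sw[0] + hw[0])
    high = nodes * (sw[1] + hw[1])
    return low, high

# Minimum three-node cluster: $48,000 to $75,000
print(cluster_cost_range(3))
# Recommended four-node cluster: $64,000 to $100,000
print(cluster_cost_range(4))
```

Run against the reported ranges, the recommended four-node configuration lands between $64,000 and $100,000 before a single workload is migrated, which is why the two-node, SATA-capable alternatives mentioned above can matter at the entry level.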
Leading server hypervisor vendors (and most hyper-converged appliance vendors) prefer their uniform software stack be used to manage all attached resources and the specialized services they offer. When it comes to storage, hypervisor vendors provide interfaces to administer functions ranging from data mirroring and replication across storage nodes to thin provisioning of storage resources, deduplication and compression, and other tasks once performed on array controllers. In essence, they consolidate the value-added services once touted as differentiators by storage array vendors into a centralized, software-based controller. Examine each of these services to confirm the functionality is actually what you want to use in your infrastructure. For example, just because a given product's compression service is impressive doesn't mean its wide-area replication service is best-of-breed.
Be aware that while hyper-converged system vendors agree that storage software services need to be implemented in an off-array software stack, many eschew the idea of abstracting capacity management from the storage array controller. This is a noteworthy limitation of many hyper-converged offerings, since it means capacity management is a separate activity that must be performed on each storage device, often requiring specialized tools and skills.
Hardware utilization efficiencies
Selecting a hyper-converged infrastructure model that optimizes the use of hardware is also important. For example, while most hyper-converged products in the market can leverage dynamic RAM and flash memory storage technology to create caches and buffers that improve application performance, not all of these products use either one efficiently or enable the diverse and growing selection of such products in the market today. DRAM is better suited for write caching than flash memory, but you might not gather that from the marketing literature of certain hyper-converged infrastructure vendors. Oftentimes, flash is recommended for purposes to which it is ill-suited, or the hyper-converged vendor limits the customer's use of less expensive componentry or best-of-breed technologies to accelerate application performance in favor of products that have undergone certification with the vendor.
Support for DRAM
Support for DRAM (and perhaps flash memory) acceleration is required for servers running multiple virtual machines. Memory-based caching and buffering make application performance acceleration possible even when the root causes of slow application performance aren't storage I/O-related. Ideally, flash technology will also be supported, but it's not mandatory.
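To illustrate why memory-based buffering accelerates applications even when storage I/O isn't the root cause of slowness, here is a minimal sketch of a DRAM write buffer that absorbs individual writes at memory speed and flushes them to backing storage in batches. The class name, threshold and callback are hypothetical, not any vendor's API.

```python
class WriteBuffer:
    """Illustrative DRAM write buffer: acknowledge writes at memory
    speed, then flush them to backing storage in larger batches."""

    def __init__(self, backing_store, flush_threshold=4):
        self.backing_store = backing_store    # callable taking a list of (key, value)
        self.flush_threshold = flush_threshold
        self.pending = []                     # writes held in DRAM

    def write(self, key, value):
        self.pending.append((key, value))     # returns immediately
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.pending:
            self.backing_store(self.pending)  # one batched, sequential I/O
            self.pending = []

# Usage: collect flushed batches instead of hitting a real disk
batches = []
buf = WriteBuffer(batches.append, flush_threshold=2)
for i in range(4):
    buf.write(f"block{i}", b"data")
# four writes reached "storage" as two batched flushes
```

The application sees four instant acknowledgments while the backing store sees only two larger, sequential operations; that coalescing is the benefit the paragraph above describes, regardless of whether the buffer sits in DRAM or flash.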
Overall "fitness to purpose"
Fitness to purpose simply refers to how well the technology fits the environment or setting in which it will be deployed. This includes everything from its noise level and electrical/HVAC requirements to the availability of on-site skilled workers and its compatibility with various hypervisors. It is worthwhile to make a list of facility, workload and user constraints when considering alternative products.
The clustering/data mirroring functionality of the hyper-converged software you are considering must include mirroring of data in memory caches and buffers, as well as the data stored on solid-state or magnetic disk.
High-availability mirrors must be capable of being tested and verified without disrupting application operation. You should also consider whether mirror failover is best provided as an automatic function or one requiring manual confirmation. Availability is a two-way street: After failover has occurred, capabilities must also be provided to fail back. This usually entails buffering mirrored writes until connections can be re-established.
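The failover/failback behavior described above can be sketched as a small state machine: while the mirror is down, writes are buffered in a resync log; on failback, the log is replayed before synchronous mirroring resumes. All names here are hypothetical, and real products do this at block level with far more safeguards.

```python
class MirrorPair:
    """Illustrative high-availability mirror with buffered failback.
    Writes made while the mirror is down are logged and replayed
    once the connection is re-established."""

    def __init__(self):
        self.primary = {}
        self.mirror = {}
        self.mirror_up = True
        self.resync_log = []   # writes buffered while the mirror is down

    def write(self, key, value):
        self.primary[key] = value
        if self.mirror_up:
            self.mirror[key] = value              # synchronous mirror write
        else:
            self.resync_log.append((key, value))  # buffer for failback

    def fail_mirror(self):
        self.mirror_up = False

    def fail_back(self):
        for key, value in self.resync_log:        # replay buffered writes
            self.mirror[key] = value
        self.resync_log.clear()
        self.mirror_up = True                     # resume synchronous mirroring

pair = MirrorPair()
pair.write("a", 1)
pair.fail_mirror()
pair.write("b", 2)   # accepted locally, buffered for the mirror
pair.fail_back()     # mirror catches up before sync writes resume
```

Whether the `fail_back` step runs automatically or waits for an administrator's confirmation is exactly the design choice the paragraph above asks you to weigh.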
The bottom line
While these seven criteria may not be exhaustive, they should help to winnow the field of hyper-converged system options to those that will serve your business well in the long haul. For my money, I prefer a hyper-converged infrastructure offering that is hardware- and hypervisor-agnostic, supports DRAM and flash devices from any vendor, and has a disk infrastructure that is both direct-attached and legacy SAN-attached (so I can fully realize my expected ROI from the latter). My final selection would also be one that can virtualize all storage capacity so I can manage capacity allocation and special storage services at multiple sites and on heterogeneous storage gear from a single software interface.