
The evolution of hyper-converged architectures

Hyper-converged architectures are still relatively new to many IT professionals, but the origins of the technology can be traced back to the early days of storage virtualization.

Hyper-convergence is getting plenty of attention from IT planners and, if its promises hold true, for good reason. Hyper-converged architectures promise to lower the complexity and price of the infrastructure that has traditionally supported a virtualized server or desktop initiative. The path to hyper-convergence is a relatively short one, but it is worth understanding.

Hyper-convergence abstracts the storage and networking tiers and embeds them into the compute tier. The technology counts on abstraction, a concept that IT professionals have become more comfortable with over the years. Abstraction, in a storage context, is the separation of storage software from storage hardware. In the past, storage software services such as volume and RAID protection, snapshots, cloning, replication and caching were all integrated into the storage controller. The software ran within the confines of the storage hardware.
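To make that separation concrete, here is a minimal sketch -- my own illustration, not anything from a vendor -- of storage services written against an abstract device interface rather than a specific controller. The class and function names are hypothetical; the point is simply that the same service code (here, a naive snapshot) can run on commodity hardware or a vendor array.

    # Sketch only: storage services decoupled from the hardware beneath them.
    from abc import ABC, abstractmethod


    class BlockDevice(ABC):
        """Hypothetical abstraction over whatever hardware actually holds the blocks."""

        @abstractmethod
        def read(self, block: int) -> bytes: ...

        @abstractmethod
        def write(self, block: int, data: bytes) -> None: ...


    class CommodityDiskBackend(BlockDevice):
        """Stands in for off-the-shelf hardware; a vendor array would be another subclass."""

        def __init__(self):
            self._blocks: dict[int, bytes] = {}

        def read(self, block: int) -> bytes:
            return self._blocks.get(block, b"")

        def write(self, block: int, data: bytes) -> None:
            self._blocks[block] = data


    def snapshot(device: BlockDevice, blocks: range) -> dict[int, bytes]:
        """A storage 'service' that never touches hardware directly, only the interface."""
        return {b: device.read(b) for b in blocks}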

Integrating the storage software and storage hardware did have its advantages, especially when storage processing power was scarce and storage media (hard disk drive) performance was limited. The storage software and hardware could be optimized together to make the most of those resources. Early storage systems typically used customized processors, ASICs and FPGAs, with a heavy focus on extracting maximum performance and reliability from the system.

Intel and Linux changed everything

In the late 1990s and early 2000s, Intel began to deliver processors that, while general purpose, provided as much performance as proprietary processors had just a few years before. At the same time, Linux, an open source variant of Unix, began to deliver enterprise-class stability and performance. The combination fueled the first revolution in storage -- virtualization. Storage virtualization enabled the storage software to run on commodity servers based on Intel CPUs.

This combination gave rise to companies that leveraged storage virtualization to make storage a software-only purchase. Companies such as DataCore, FalconStor and StorAge provided appliances that acted as the storage controller, delivering storage services across a wide variety of storage hardware.

New storage vendors that leveraged storage virtualization emerged as well, but in a more turnkey manner. Storage virtualization allowed them to focus on software development and then package that software with off-the-shelf storage hardware. The more turnkey experience proved very popular with customers, and companies such as EqualLogic, Compellent, LeftHand Networks and 3PAR began to affect the market share of EMC, IBM, NetApp and Hitachi Data Systems.

The virtualization of virtualization

From about 2005 on, storage virtualization became even more abstracted from the underlying hardware. It could run as a virtual machine on top of the industry's rising star, server virtualization, led by VMware. By this time, storage virtualization could be delivered as a complete software package, and the term software-defined storage was born. These products still counted on shared storage systems to which they provided storage services.

Hyper-converged architectures come of age

Hyper-converged architectures take software-defined storage to the next level, creating software-defined, scale-out storage systems that aggregate the storage internal to the servers they are installed on. This creates a virtual pool of storage that virtual machines, and sometimes bare-metal systems, can access as if it were a shared LUN.
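As a rough illustration of that pooling idea -- a sketch under assumed names, not any vendor's actual implementation -- the following shows each node contributing its internal capacity to a single logical pool from which virtual machine volumes are carved:

    # Sketch only: node-local capacity aggregated into one virtual pool.
    from dataclasses import dataclass


    @dataclass
    class Node:
        name: str
        local_capacity_gb: int   # internal drives on this server


    class VirtualPool:
        """Aggregates node-local storage and provisions volumes from the combined capacity."""

        def __init__(self, nodes: list[Node]):
            self.nodes = nodes
            self.allocated_gb = 0

        @property
        def total_gb(self) -> int:
            return sum(n.local_capacity_gb for n in self.nodes)

        def provision_volume(self, size_gb: int) -> str:
            if self.allocated_gb + size_gb > self.total_gb:
                raise ValueError("pool exhausted")
            self.allocated_gb += size_gb
            return f"vol-{self.allocated_gb}"   # placeholder volume ID


    # Usage: three servers' internal drives presented as one pool a VM can draw from.
    pool = VirtualPool([Node("node1", 2000), Node("node2", 2000), Node("node3", 2000)])
    vm_disk = pool.provision_volume(500)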

Hyper-converged systems differentiate themselves quite a bit beyond this basic definition. Over the next few months, we’ll explore how they manage memory, storage (internal and shared), where they execute (as a virtual machine or in the kernel), how they provide quality of service and how they protect themselves.

Next Steps

A look at the difference between converged and hyper-converged architectures

Hyper-converged storage market evolution

A hyper-converged deep-dive: Definition, types and benefits

The value of deduplication in hyper-converged environments
