One of the primary disadvantages data storage administrators encounter when working with hyper-converged systems is that storage control is not as granular as it would be with other types of systems.
Consider, for example, a standalone virtualization host that uses direct-attached storage. If an administrator wished to increase storage capacity for such a server, they could install extra disks or replace existing disks with larger ones. This same basic philosophy also applies to servers that rely on remote storage.
Hyper-converged systems work differently because most vendors will not allow customers to increase storage capacity by adding disks or attaching to a remote storage array. Instead, hyper-converged systems are sold as pre-packaged bundles of hardware and software.
A hyper-converged system consists of a series of nodes. For all practical purposes, a node can be thought of as a hardware module. Each node includes CPU (compute), memory and storage resources (DAS). These hardware resources are matched to one another to ensure compatibility and optimal performance.
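As a rough mental model, a node can be sketched as a simple data structure. The class and the hardware figures below are hypothetical, not any vendor's actual specifications:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One hyper-converged hardware module (hypothetical figures)."""
    cpu_cores: int      # compute
    memory_gb: int      # memory
    storage_tb: float   # direct-attached storage

# A cluster is simply a set of matched nodes; matching hardware across
# nodes is what ensures compatibility and predictable performance.
cluster = [Node(cpu_cores=32, memory_gb=256, storage_tb=20.0) for _ in range(4)]

total_storage = sum(n.storage_tb for n in cluster)
print(total_storage)  # 80.0
```

The key point of the model: capacity is a property of the cluster as a whole, derived from identical building blocks, rather than of any individual server.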
One thing that differentiates a hyper-converged system from other servers is the software layer, which usually consists of a virtual appliance that manages the system's nodes.
Every vendor does things a bit differently, but they all place stipulations on the total number of nodes that can exist within a hyper-converged system. There are also usually rules for how nodes must be added to the system. For example, some vendors require nodes to be added four at a time until the system reaches its maximum supported count of 32 nodes, and some deployments require one management appliance for every four-node block.
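These rules amount to simple arithmetic constraints. A minimal sketch, using the illustrative numbers from the example above (four-node increments, a 32-node ceiling, one appliance per four-node block) rather than any real vendor's policy:

```python
MAX_NODES = 32        # example maximum node count from the text
EXPANSION_STEP = 4    # example rule: nodes must be added four at a time
APPLIANCE_RATIO = 4   # example: one appliance per four-node block

def validate_expansion(current_nodes: int, nodes_to_add: int) -> None:
    """Raise ValueError if a proposed expansion breaks the vendor's rules."""
    if nodes_to_add % EXPANSION_STEP != 0:
        raise ValueError(f"nodes must be added in multiples of {EXPANSION_STEP}")
    if current_nodes + nodes_to_add > MAX_NODES:
        raise ValueError(f"cluster cannot exceed {MAX_NODES} nodes")

def appliances_needed(total_nodes: int) -> int:
    """One management appliance per four-node block (ceiling division)."""
    return -(-total_nodes // APPLIANCE_RATIO)

validate_expansion(current_nodes=8, nodes_to_add=4)  # allowed
print(appliances_needed(12))  # 3
```

Trying `validate_expansion(8, 3)` would raise, since three nodes is not a multiple of the expansion step.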
Upgrading hyper-converged systems: Scale-up vs. scale-out
This approach makes it easy to add storage capacity, but not without some difficulties. To understand why, you must be familiar with the difference between scale-up and scale-out.
Think back for a moment to the previous example of a standalone server equipped with DAS. To increase the capacity of such a server, you might be able to scale up by installing additional storage hardware (more disks).
Scale-out refers to the addition of nodes rather than the installation of individual disks. Rather than increasing the capacity of a single server, capacity is increased for the system as a whole by bringing more nodes online.
Hyper-converged systems must be upgraded by scaling out rather than scaling up. An administrator cannot simply add more disks to a node; they must add more nodes to the system.
Although adding nodes to a hyper-converged system is easy to do, the process has its drawbacks:
- Nodes consist of storage resources, as well as compute and memory resources. This means an organization could pay for compute resources it does not need just to increase storage capacity.
- Some hyper-converged systems do not let administrators add a single node. Instead, they may have to install new nodes four at a time.
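The two drawbacks above compound each other, and the effect is easy to quantify. A sketch under assumed figures (20 TB and 32 cores per node, four-node expansion increments; all hypothetical):

```python
import math

NODE_STORAGE_TB = 20.0   # hypothetical per-node capacity
NODE_CPU_CORES = 32      # hypothetical per-node compute
EXPANSION_STEP = 4       # example vendor rule: add four nodes at a time

def expansion_cost(extra_tb_needed: float) -> tuple[int, int]:
    """Nodes you must buy, and the compute cores that come along with them."""
    raw_nodes = math.ceil(extra_tb_needed / NODE_STORAGE_TB)
    # Round up to the vendor's mandatory expansion increment.
    nodes = math.ceil(raw_nodes / EXPANSION_STEP) * EXPANSION_STEP
    return nodes, nodes * NODE_CPU_CORES

# Needing 50 TB more storage works out to 3 nodes, which the four-at-a-time
# rule rounds up to 4 -- along with 128 CPU cores you may not need.
nodes, cores = expansion_cost(50.0)
print(nodes, cores)  # 4 128
```

Under these assumptions, a pure storage shortfall forces the purchase of a full four-node block of compute and memory as well.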
One way to circumvent these limitations is to build your own hyper-converged system rather than rely on a prepackaged system. Reference architectures exist that allow organizations to implement hyper-converged systems in a way that makes sense to them.
If you use a pre-built hyper-converged system, it is very important to balance storage capacity with performance. Remember, adding storage capacity means purchasing additional nodes, and improving storage performance may require purchasing additional nodes as well.
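One way to frame that balancing act: size the cluster for whichever dimension, capacity or performance, demands more nodes. A minimal sketch with invented per-node figures (substitute your vendor's real specifications):

```python
import math

# Hypothetical per-node figures -- not any real product's ratings.
NODE_STORAGE_TB = 20.0
NODE_IOPS = 50_000

def nodes_required(capacity_tb: float, iops: int) -> int:
    """Size the cluster for whichever requirement needs more nodes."""
    return max(math.ceil(capacity_tb / NODE_STORAGE_TB),
               math.ceil(iops / NODE_IOPS))

# 100 TB alone needs 5 nodes, but 400,000 IOPS needs 8:
# here performance, not capacity, drives the purchase.
print(nodes_required(100.0, 400_000))  # 8
```

Running the assessment on both dimensions before purchase shows which one will actually trigger the next (whole-node) upgrade.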
Hyper-converged systems are convenient, but the tradeoff is the inability to perform granular hardware upgrades. It is important to choose a system that strikes a good balance between capacity and performance to avoid frequent upgrades. It is also essential to accurately assess your organization's hardware needs prior to purchasing a hyper-converged system.