Hyper-converged integrated systems are ideally suited to scale-out workloads such as virtual desktop infrastructure. The ability to add storage capacity and performance when the IT department adds compute capacity helps avoid any I/O bottlenecks as the VDI deployment progresses.
Predictable, linear scaling of desktop numbers makes deployment planning simpler, and the operational simplicity of the hyper-converged infrastructure (HCI) platform lightens the load on the VDI support team.
Virtual desktops work well with a virtual infrastructure that gets its resources from many smaller pools rather than a few larger ones. As with databases, a virtual desktop's performance depends on its storage performance as well as on the compute resources available to it.
Storage is the key to smooth performance
The most common way for a VDI deployment to go wrong is hitting a storage performance limit. With a storage array-based infrastructure, all the desktop I/O comes together on the shared storage array, which has a fixed maximum performance. As long as the array can satisfy the I/O requirements of all the desktops, the virtual desktops perform well and users are delighted. But when IT deploys too many desktops, the array becomes overloaded, and every VDI user's performance suffers at once. The array often falls off an "I/O cliff," where the overload degrades performance for every virtual machine (VM) that shares the array. This kind of I/O cliff can stop a VDI program in its tracks and taint the business's view of the desktop delivery method.
Admins can avoid the I/O cliff through careful storage design, which requires knowledge of the VDI workload. But admins might be unfamiliar with VDI when they design the storage, before the large-scale VDI rollout. A scalable storage system allows IT to increase performance as the VDI program grows, avoiding the cliff.
With a hyper-converged integrated systems implementation, there is no central, shared storage array; the HCI nodes provide shared storage that is spread across all of the nodes. Even better, when IT deploys additional HCI nodes for additional desktops, they bring more storage capacity and performance. As the HCI cluster expands, its storage performance grows, too. There is far less chance that the VDI deployment will hit an I/O cliff as the deployment proceeds.
The scale-out nature of HCI is a good fit for a scale-out workload such as VDI. Each desktop is a relatively small VM, but there will be a large number of desktop VMs that together demand a lot of resources. Each HCI node is a medium-sized hypervisor host; the HCI cluster will be made up of many medium-sized hosts. The HCI cluster simply needs to provide an aggregate resource that is sufficient for the resource demands of the desktops.
How VDI deployments are like snowflakes
No two VDI environments are the same; each has unique combinations of applications and unique user behaviors. As a result, the number of users a given server can support varies widely from one VDI deployment to another. The scale-out nature of hyper-converged integrated systems does help simplify the design process. A pilot set of users on a small HCI cluster gives a direct indication of the size of the cluster that a full deployment will require. If a three-node cluster can accommodate 300 basic desktops, then a twelve-node cluster will accommodate 1,200 basic desktops, and IT can accommodate 10,000 basic desktop users on seven clusters of 15 nodes each. With linear scaling, there should be no unexpected bottlenecks.
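The pilot-based arithmetic above can be sketched as a small calculation. This is a hypothetical sizing helper, not a tool from any HCI vendor; the headroom parameter is an assumption to cover failover capacity and growth:

```python
import math

def nodes_needed(total_desktops, pilot_desktops, pilot_nodes, headroom=0.0):
    """Estimate the HCI node count for a full deployment from pilot results,
    assuming linear scaling.

    headroom adds spare capacity as a fraction, e.g. 0.05 = 5% extra.
    """
    density = pilot_desktops / pilot_nodes            # desktops per node
    required = total_desktops * (1 + headroom) / density
    return math.ceil(required)

# Pilot from the article: 300 basic desktops on a 3-node cluster (100/node).
print(nodes_needed(1200, 300, 3))                  # 12 nodes
print(nodes_needed(10000, 300, 3, headroom=0.05))  # 105 nodes (seven 15-node clusters)
```

With 5% headroom, the 10,000-user estimate lands on 105 nodes, matching the article's seven clusters of 15 nodes.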
Even within a single VDI deployment, there will be variations, so make sure to account for the desktops that need more resources; admins won't fit as many of them on each node. Another useful feature of HCI is the ability to simply add nodes to existing clusters when IT requires more resources. Some VDI deployments skip detailed up-front resource planning altogether; instead, admins watch resource consumption and deploy extra HCI nodes into clusters as the load grows.
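Accounting for heavier desktops can follow the same pattern: each desktop profile consumes a fraction of a node, and the fractions are summed. The per-node densities below are hypothetical illustrations; real figures come from pilot testing in each environment:

```python
import math

# Hypothetical per-node capacities for two desktop profiles.
desktops_per_node = {"basic": 100, "power": 40}

# Desktops of each profile the deployment must host.
demand = {"basic": 800, "power": 200}

# Each profile consumes a fraction of the cluster; sum the node fractions.
node_fractions = sum(count / desktops_per_node[profile]
                     for profile, count in demand.items())
print(math.ceil(node_fractions))  # 13 nodes: 8 for basic + 5 for power
```

The same structure extends to more profiles, or to separate clusters per profile if the deployment isolates power users.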
There is more to a VDI environment than just the hardware needed to run the virtualization platform. Desktops always need file servers for user profiles, home directories and application virtualization. In a large VDI deployment, dedicated filers, rather than Windows file-server VMs, are the best way to deliver these file shares.
HCI is all about simplification, which includes simplicity in deployment, operation and expansion. Deploying an HCI cluster is typically a one-day activity, and adding nodes takes a matter of minutes. Storage management is usually a non-issue with HCI: no LUNs, storage fabric, or multipathing to manage. Using HCI means that there is more time for managing the rest of the VDI environment.
But moving to an HCI platform is not without its challenges. One significant challenge is the financial model that HCI favors, where capacity is bought as it is required rather than on a budget cycle. Organizations typically buy legacy architectures as a whole unit on an annual cycle. With hyper-converged integrated systems, it is common to start with a moderate purchase and then buy more nodes as the workload increases. IT might need new servers after six months, or after only three. These smaller, more frequent purchases do not align with the traditional annual budget cycle.