Windows Server 2016 includes new features for implementing hyper-converged infrastructure with the Hyper-V hypervisor at center stage.
While these new features are welcomed by organizations looking to implement Hyper-V-based HCI in a Windows Server deployment, such an implementation requires a good deal of planning.
IT teams planning to deploy HCI based on Windows Server need to first ensure that the correct physical platform is in place and properly configured. After that, they must take the steps necessary to deploy the virtualized resources.
Microsoft offers the HCI features only in the Windows Server 2016 Datacenter edition, although IT administrators can install the OS in either Server Core mode or Server with Desktop Experience mode. Along with the general Windows Server deployment requirements, the HCI features have specific requirements of their own. IT admins should carefully assess the hardware they have on hand or any they plan to purchase to make certain it can support the server requirements, while meeting the demands of their data storage workloads.
Microsoft recommends customers work with vendors that offer hardware validated through the Windows Server Software-Defined (WSSD) program to ensure the platform can properly support their HCI implementations. Many WSSD vendors provide reference architectures for deploying HCI systems built with their own hardware.
Administrators should install Windows Server Datacenter on all servers in an HCI cluster, making sure to apply the latest patches and updates, as well as install the latest drivers. They should then configure each OS instance to support the necessary services or features, such as failover clustering, Active Directory or the Microsoft Server Message Block (SMB) protocol. As part of this process, admins should install the required server roles on each Windows Server computer, including Hyper-V and failover clustering.
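Role installation can be scripted with PowerShell rather than performed through Server Manager on each machine. The sketch below assumes hypothetical node names and a management workstation with remoting enabled; it is one way to do it, not the only one.

```powershell
# Install the Hyper-V and Failover Clustering roles, plus their management
# tools, on every node in the planned cluster, restarting each node to
# complete the installation. Node names are hypothetical.
$nodes = "HCI-Node1", "HCI-Node2", "HCI-Node3", "HCI-Node4"

foreach ($node in $nodes) {
    Install-WindowsFeature -ComputerName $node `
        -Name Hyper-V, Failover-Clustering, FS-FileServer `
        -IncludeManagementTools -Restart
}
```

Running the same command against every node helps keep the cluster members identically configured, which simplifies validation later.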
Virtualization is the key to Windows Server HCI, and that's where Hyper-V comes in. Hyper-V abstracts the underlying CPU and memory resources and delivers them as logical services to virtual machines (VMs). Hyper-V also provides the platform necessary to virtualize the storage and network resources.
Hyper-V supports both generation 1 and generation 2 VMs. Generation 2 VMs can take better advantage of features such as Secure Boot but support fewer guest OSes or methods for booting into the system than generation 1.
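As an illustration, a generation 2 VM can be created with the Hyper-V PowerShell module; the VM name, sizes and paths below are placeholders chosen for the example.

```powershell
# Create a generation 2 VM (UEFI-based and Secure Boot capable).
# Name, memory size, disk path and switch name are illustrative.
New-VM -Name "AppVM01" -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\ClusterStorage\Volume1\AppVM01.vhdx" `
    -NewVHDSizeBytes 60GB -SwitchName "HCI-vSwitch"

# Secure Boot is on by default for generation 2 VMs; confirm with:
Get-VMFirmware -VMName "AppVM01" | Select-Object SecureBoot
```

Note that a VM's generation cannot be changed after creation, so the choice should be made per guest OS before deployment.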
Hyper-V must be installed and running on every Windows Server node in the HCI cluster. The computers should be configured with 64-bit processors that support second-level address translation (SLAT), a hardware virtualization technology that reduces hypervisor overhead. The HCI cluster nodes also require at least 4 GB of memory. In addition, support for virtualization must be turned on in the BIOS (basic input/output system) or UEFI (Unified Extensible Firmware Interface), and the processor's VM Monitor Mode extensions must be available on each node.
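These hardware prerequisites can be verified from PowerShell before installing Hyper-V. On Windows Server 2016, `Get-ComputerInfo` surfaces the same checks that Systeminfo.exe reports; the property values shown in the comments are what a correctly configured node would return.

```powershell
# Check the Hyper-V hardware prerequisites on the local node.
Get-ComputerInfo -Property "HyperV*"

# Properties to look for (all should report True):
#   HyperVRequirementSecondLevelAddressTranslation     (SLAT support)
#   HyperVRequirementVirtualizationFirmwareEnabled     (enabled in BIOS/UEFI)
#   HyperVRequirementVMMonitorModeExtensions           (VM Monitor Mode)
#   HyperVRequirementDataExecutionPreventionAvailable  (DEP available)
```

If any check reports False, the fix is usually a firmware setting rather than a software one.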
Every host in an HCI Windows Server deployment must also be configured with at least one Hyper-V virtual switch, a software-based Layer 2 Ethernet switch. A Hyper-V virtual switch enables the VMs on a computer to communicate with other systems across virtual and physical networks. Administrators must install the Hyper-V role on the cluster nodes before creating the virtual switches.
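Creating an external virtual switch bound to a physical network adapter is a one-line operation once the Hyper-V role is installed. The switch and adapter names below are hypothetical; `Get-NetAdapter` lists the actual adapters available for binding.

```powershell
# List the physical adapters available on this node.
Get-NetAdapter -Physical

# Create an external virtual switch bound to one of them, keeping the
# management OS connected through the same adapter.
New-VMSwitch -Name "HCI-vSwitch" -NetAdapterName "Ethernet 2" `
    -AllowManagementOS $true
```

On Windows Server 2016, administrators with multiple NICs per node may instead bind the switch to several adapters at once with the `-EnableEmbeddedTeaming $true` parameter, which creates a Switch Embedded Teaming (SET) switch.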
One of the most important roles of an HCI implementation is to provide software-defined storage (SDS) that pools storage resources across the cluster nodes. Windows Server SDS requires the physical drives to be directly attached to the nodes and to use the SATA, SAS or nonvolatile memory express (NVMe) interface.
Each drive can be attached to only one cluster node. Windows Server supports SSDs and HDDs. When possible, each server should use the same type of drives and the same number of each type. However, the drives can be combined in different ways to support all-flash or hybrid configurations.
Administrators can also implement a server-side cache tier that is independent of the storage pool. The cache tier should use the best-performing storage devices available. If the HCI Windows Server deployment supports multiple drive types, Windows Server automatically configures the cache, using the fastest drives. If the deployment includes only one type of drive, administrators must configure the cache manually.
Storage Spaces Direct is at the core of Windows Server SDS. Storage Spaces Direct is a service that pools the locally attached storage into virtualized resources made available to the VMs. Storage Spaces Direct utilizes such Windows Server features as failover clustering, SMB 3.0 and the Cluster Shared Volumes file system. Storage Spaces Direct also comes with Software Storage Bus, which establishes an SDS fabric that permits the servers to see each other's drives.
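The steps above can be sketched in PowerShell. This assumes four hypothetical nodes that have already passed the hardware checks; the cluster name and volume size are illustrative.

```powershell
# Validate the nodes for Storage Spaces Direct before building the cluster.
Test-Cluster -Node HCI-Node1, HCI-Node2, HCI-Node3, HCI-Node4 `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the failover cluster without any shared storage.
New-Cluster -Name HCI-Cluster -NoStorage `
    -Node HCI-Node1, HCI-Node2, HCI-Node3, HCI-Node4

# Enable Storage Spaces Direct, which claims the eligible local drives
# and builds the storage pool (and the cache, when multiple drive types
# are present).
Enable-ClusterStorageSpacesDirect

# Carve a Cluster Shared Volume out of the pool for VM storage.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 2TB
```

Reviewing the validation report from `Test-Cluster` before running `New-Cluster` is worthwhile, since configuration problems are far cheaper to fix at that stage.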
Not every HCI product implements software-defined networking (SDN), and SDN is an optional component even in deployments built on the Windows Server HCI technologies. Still, SDN is becoming an increasingly important factor and should be considered when planning an HCI deployment.
Whether or not SDN is included, administrators need to create at least one virtual switch for each server in the HCI cluster to enable VM networking. They must also create a virtual network adapter for each VM and then connect the VM to the switch. Hyper-V supports multiple types of virtual switches and network adapters, so administrators should carefully review the available options before implementing either one in their HCI Windows Server deployment.
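Attaching a VM to a switch takes one command per adapter; the VM, adapter and switch names below are placeholders carried over from the earlier examples.

```powershell
# Add a new virtual network adapter to an existing VM and connect it
# to the virtual switch in one step.
Add-VMNetworkAdapter -VMName "AppVM01" -Name "AppNIC" `
    -SwitchName "HCI-vSwitch"

# Or connect a VM's existing default adapter to the switch.
Connect-VMNetworkAdapter -VMName "AppVM01" -Name "Network Adapter" `
    -SwitchName "HCI-vSwitch"
```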
Once the virtual switches and adapters are in place, admins can configure the SDN features if they choose to implement them. The SDN components are all contained within the HCI cluster. That makes it possible to control the virtual network and switches from within the platform's clustered compute and storage environments, along with the other virtualized resources.
There is much more to a Windows Server deployment for HCI, and the deployment process itself can vary significantly from one situation to the next. In all cases, however, administrators must ensure that they have the right physical platform in place before trying to virtualize the compute, storage and network resources.