In the world of physical servers, each workload requires a compatible server configured and tuned to optimize results for that particular application. You can often find Exchange servers on one platform, Oracle database servers on another and so on, each living on a machine engineers have determined would be optimal for that use case and configured for that workload.
From the perspective of optimizing each workload, this makes sense: you want to build the application on the best possible platform to garner the best results. The bigger picture is not so rosy. Maintaining a custom physical layout and configuration for each machine -- and ordering, building and implementing each one individually for its workload -- creates a tremendous amount of variability and complexity across the server and storage environment.
Take that concept beyond the server, and you'll find that the IT staff is spending a tremendous amount of time and energy making the key pieces of the data center -- server, storage and network -- fit together and work well. It's one thing for a vendor to make things work in its controlled factory environment, but it's much harder to do on a data center floor where there's a mix of new and old systems, as well as inconsistent skills, procedures and approaches. The IT staff pulls its hair out trying to get everything to work initially, and struggles even more trying to keep it all running over the long term as things break and change.
Buying separate software, servers, storage and networking, and then attempting to make it all fit together is akin to buying a car by ordering an engine from one manufacturer, the chassis from another and the wheels, seats and body from additional vendors. While theoretically it might be possible to get better results than trusting the combination a single vendor recommends, the likelihood of such a result is extremely low. With improvements in the ability of industry-standard x86 components to handle demanding workloads, and the standardization of components from one vendor to another, many IT architects are realizing that there's more value in making things consistent and eliminating variability, rather than striving for the absolute best architecture for each individual workload.
Converged storage options are a natural outcome of this line of thinking. The rise of virtual servers has had a homogenizing effect on IT requirements. Today, you just need to know your workload is compatible with virtualization, and that your server is compatible with your virtual server technology; that's how you build the server. The specific requirements of each application in the virtual server environment become, to some extent, less important. And as organizations use virtual server technology for a bigger percentage of the workloads they run, there's less need for their server virtualization storage to be provided by customized configurations that differ by workload.
Converged and hyper-converged storage: Less assembly required
Entrenched and emerging vendors are aware of this, and have begun to offer converged infrastructure -- products that combine servers, storage and networking from various vendors. These products can be sold as a pre-configured package or as reference architectures that customers have to assemble themselves. Both options are designed to deliver consistently good results for all the workloads that fit within the target of the particular package. In a similar trend, other vendors are selling hyper-converged products that package servers, storage, networking and a hypervisor in one box. These converged systems generally focus on a particular virtual server technology, like VMware or Microsoft Hyper-V, or an application like desktop virtualization.
With physical servers, you have to optimize the server and storage environment to fit the needs of the workload running on each individual machine, and take a wide range of needs into account. Virtual server technology acts as a homogenizing factor, removing variability and interoperability concerns from the equation. When organizations standardize on a virtual server technology, and run a majority of their production workloads in such an environment, it allows for a much higher level of infrastructure consistency from workload to workload.
When you have this increased level of consistency within your IT infrastructure, the focus on best-in-class individual components -- each capable of deep bare-metal customization and chosen to fit many configurations -- diminishes significantly. This shift in focus, along with industry-wide convergence on a small number of virtual server options, has led to the prevalence of two kinds of pre-integrated offerings: converged systems, which provide a blueprint and order sheet that acts as a cookbook for users to assemble a pre-defined combination, and hyper-converged systems, which assemble the server, storage, network and management tools in the factory. Both allow users to bring virtual server environments online more quickly, with less effort and less integration risk.