As an actor on the stage of IT, you know that converged systems of the VCE ilk (now the Converged Platform Division of Dell EMC) have been growing by leaps and bounds over the past five years. During the same period, traditional storage array revenues of the Dell EMC VNX and NetApp FAS type have been dropping like lead balloons. Simultaneously, hyper-converged infrastructure products from vendors like Nutanix and SimpliVity (now part of Hewlett Packard Enterprise), as well as software-defined storage players such as Hedvig, are practically doubling in revenue each year.
Further muddying the waters are complete cloud stack vendors, such as SwiftStack, vying for the same dollars. Oh, and don't forget the Amazon Web Services and Microsoft Azure public clouds that are running away with thousands of your applications and petabytes of your data.
Here is my perspective on the underlying drivers of each of these trends and some advice on how to make some fundamental decisions regarding your next-generation data center.
The end of an era
To put it mildly, the era of traditional storage arrays is rapidly coming to an end. However, because this industry moves at the speed of molasses in terms of fundamental changes, revenues will decline over the next five years, but not go away.
I never viewed converged systems as a paradigm shift, but as a good way to buy three-tier infrastructure products that simplified purchasing, deployment and management. Under the covers, they still have traditional storage arrays, so their utility is limited to what those arrays can do. Vendors involved in the converged game -- Cisco, Dell EMC, Hewlett Packard Enterprise, Hitachi Data Systems, IBM, NetApp and others -- have done an excellent job of making traditional compute, storage and networks easier to buy and manage. As a result, this category has rapidly siphoned revenue from traditional storage categories. This is easy to understand.
Recently, converged revenue started to decline. Why? The impact of the newer categories listed above. Take hyper-converged infrastructure products, for example. Just as converged products eroded the traditional storage array category in terms of revenue, hyper-converged products -- along with cloud stacks and public clouds -- initially drew revenue from arrays as well. But now I see these newer categories drawing from converged technology. You could even argue that converged has run out of steam. At Taneja Group, we have spoken to several users about why they switched from converged to hyper-converged. In all instances, it had to do with a rethinking of the data center and the maturation of hyper-converged infrastructure products.
The software-defined principle
It all comes down to the realization that you must build tomorrow's data centers on software-defined principles. It is cost prohibitive, if not impossible, to build web-scale, scale-out infrastructures with hardware- and silo-centric storage. That's the common theme behind all the trends impacting the world of storage today.
Hyper-converged platforms are all software-defined, at least from a storage perspective. The same goes for private cloud stacks and public cloud offerings. If you study the underlying architecture of Hedvig's Distributed Storage Platform, for example, you quickly realize that its scalability, performance and manageability all stem from its software-defined underpinnings. If you look at VMware's software-defined data center offering, it is based on vSAN and not hardware-centric external storage arrays or hyper-converged infrastructure. This is also true of storage products from Datrium and Datera.
You need to build your next-generation data center on software-defined principles, regardless of whether your applications are traditional, big data- or cloud-based. And that means weaning yourself off the traditional methods as fast as your individual environment allows.
It is my premise that converting a software-defined infrastructure into something that acts and behaves like a cloud is much easier than trying to convert a traditional infrastructure into a cloud. But a cloud is more than just a software-defined infrastructure. A true cloud requires infinite scalability; self-service; multi-tenancy; the ability to quickly provision and deprovision resources, storage included, without involving IT; automated movement of application workloads based on policy; orchestration of workflows; self-healing; the ability to run multiple workloads under a policy-based service-level agreement; quality of service; and much more.
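To make a few of these capabilities concrete -- self-service, multi-tenancy and policy-based provisioning -- here is a minimal sketch in Python. Every name in it (the pool class, the policy fields, the quota logic) is hypothetical and illustrative; it is not any vendor's API, just a toy model of how software-defined provisioning can enforce tenant policy without a human in the loop.

```python
from dataclasses import dataclass

# Hypothetical sketch: all class and field names are illustrative,
# not drawn from any real product's API.

@dataclass
class TenantPolicy:
    tenant: str
    quota_gb: int   # multi-tenancy: each tenant has its own capacity limit
    min_iops: int   # quality-of-service floor promised by the SLA
    replicas: int   # resilience target the platform must maintain

class SoftwareDefinedPool:
    """A software-defined storage pool that provisions volumes on demand."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.volumes = {}   # volume name -> (tenant, size_gb)
        self.used = {}      # tenant -> total GB provisioned

    def provision(self, name: str, size_gb: int, policy: TenantPolicy) -> bool:
        # Self-service: the request is validated against policy,
        # not routed through an IT ticket queue.
        used = self.used.get(policy.tenant, 0)
        if used + size_gb > policy.quota_gb:
            return False    # tenant quota exceeded -- reject automatically
        if sum(s for _, s in self.volumes.values()) + size_gb > self.capacity_gb:
            return False    # pool capacity exhausted
        self.volumes[name] = (policy.tenant, size_gb)
        self.used[policy.tenant] = used + size_gb
        return True

    def deprovision(self, name: str) -> None:
        # Deprovisioning returns capacity to the pool immediately.
        tenant, size_gb = self.volumes.pop(name)
        self.used[tenant] -= size_gb

pool = SoftwareDefinedPool(capacity_gb=1000)
policy = TenantPolicy(tenant="hr", quota_gb=200, min_iops=500, replicas=2)
print(pool.provision("hr-vol1", 150, policy))   # within quota -> True
print(pool.provision("hr-vol2", 100, policy))   # would exceed quota -> False
```

A real platform layers far more on top of this -- scale-out placement, replication, QoS enforcement, self-healing -- but the core idea is the same: policy evaluated in software at request time, with no hardware silo or administrator in the path.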
Can you achieve these in a traditional three-tier infrastructure? Possibly, but the Capex and Opex would be so high as to make it infeasible. Converting a software-defined infrastructure into a cloud is infinitely more feasible. This is why all public clouds are built on software-defined principles.
In a recent survey, customers largely favored building a private cloud on hyper-converged or software-defined infrastructures. It is precisely for this reason that Nutanix, while a leader in hyper-convergence, now focuses on delivering an enterprise cloud platform. I suspect others will follow.
I think organizations will build all new IT environments, with minor exceptions, on software-defined principles, often using hyper-converged infrastructure products. Whether you run legacy, structured data applications, born-in-the-cloud applications or anything in between, you must carefully look at the infrastructure you'll run them on. Most likely, the answer will be software-defined. And you won't want to stop there; you'll want all the benefits of the cloud, as exemplified by the likes of Amazon Web Services.
In your search, you will need to include new players, as well as existing strategic partners. As in the past, a lot of new, mind-blowing ideas are coming from startups.