
Hyper-converged appliances become hyper-consolidated

The IBM mainframe shows the path the hyper-converged market is taking toward a more manageable and cost-efficient, hyper-consolidated IT infrastructure.


Many recent conversations with IT folks confirm and reinforce the trend away from the industry-darling "hyper-converged infrastructure" meme of the past 18 months or so (and, by extension, away from hyper-converged appliances) toward something best described as "hyper-consolidation." A friend of mine in Frankfurt, Germany -- Christian Marczinke, vice president of solution architecture at DataCore Software -- coined the term (to give credit where credit is due). But I first heard about hyper-consolidation when I interviewed IBM executives at the IBM InterConnect 2016 conference in Las Vegas in February.

IBM mainframes and hyper-consolidation

At that event, IBM introduced its latest mainframe, the z13s, which is designed to host not only the legacy databases that are the mainstay of mainframe computing, but also so-called "systems of insight" (read: big data analytics systems) traditionally deployed on sprawling, mostly Linux-based cluster farms in which individual nodes consist of a server and some internal or locally attached storage. IBM made a pretty compelling case that all of those x86 servers could be consolidated into virtual machines (KVM guests) running on its new big iron platform.
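To make the consolidation mechanics concrete, here's a minimal sketch of how one of those x86 cluster nodes might be re-created as a KVM guest using the standard libvirt Python bindings. The guest name, resource sizes and disk path are hypothetical placeholders, and a real LinuxONE deployment would add its own storage, network and security plumbing; treat this as an illustration of the mechanism, not IBM's tooling.

```python
# Minimal sketch: define and boot a Linux guest under KVM via libvirt.
# All names and sizes below are hypothetical; an s390x (mainframe) host
# typically uses the s390-ccw-virtio machine type shown here.
import libvirt

GUEST_XML = """
<domain type='kvm'>
  <name>analytics-node-01</name>  <!-- hypothetical guest name -->
  <memory unit='GiB'>16</memory>
  <vcpu>4</vcpu>
  <os>
    <type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/analytics-node-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.defineXML(GUEST_XML)        # register the guest definition
dom.create()                           # boot the guest
print(dom.name(), "active:", dom.isActive() == 1)
conn.close()
```

The point of the sketch is simply that each former rack server becomes a software object the platform can schedule, snapshot and migrate -- the consolidation argument in miniature.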

Combine a lower-cost, more Linux-friendly mainframe with lots of the open source technologies already beloved by big data- and Internet of Things-philes, the thinking goes, and the resulting hyper-consolidated infrastructure would cost companies less to operate than hyper-converged appliances and would smooth their transition into the realm of hybrid clouds. And it would do all of this without the unknowns and insecurities that typically accompany cloudification.

Disappointments of consolidations past

Back in 2005, leading analysts actually produced charts suggesting that hypervisor computing would enable such high levels of server-infrastructure consolidation that, by 2009, Capex spending would all but flatline, while significant Opex cost reductions would start to be seen in every data center. It never happened.

Then 2009 came and went, and Capex costs kept right on growing, in part because leading hypervisor vendors blamed legacy storage (SANs, NAS and so on) for subpar virtual machine performance, telling their users to rip and replace all of that storage in favor of direct-attached storage cobbles. Over time, this idea morphed into software-defined storage (SDS), which vendors also touted as "new," but which was actually a revisitation of System-Managed Storage from much earlier mainframe computing days.

After a few false starts, the industry productized SDS as hyper-converged infrastructure (HCI) appliances. Since then, the trade press has been filled with advertorials, subsidized by server vendors qua HCI appliance vendors, talking up their "HCI cluster nodes" -- commodity server and storage gear bundled with lots of hypervisor and SDS software licenses -- as though they were the new normal in agile data center architecture and the Lego building blocks of clouds. But deploying a bunch of little server nodes -- as IBM's z13s play suggests -- is not really consolidating much of anything. Nor is it really reducing much cost. Even as orchestration and administration of such infrastructures improve, the result has been a return to the isolated-islands-of-data problem that companies sought to address in the late 1990s with SANs.

Enter hyper-consolidation

One way to clean up this mess with hyper-converged appliances is to return to the mainframe. IBM facilitates this with a hardware platform, the z13s, rooted deeply in multiprocessor architecture and engineered for application multi-tenancy. And to make it palatable to Millennials who don't know symmetric multiprocessing from Shinola, IBM has ladled on (in the form of an architecture called LinuxONE) support for all of the Linux distributions and open source appdev tools, cloudware, analytics engines and in-memory databases it could lay its hands on. The idea is to consolidate a lot of x86 platform workloads and storage into the more robust, reliable and secure compute and storage of the z Systems ecosystem.

IBM protects the investment through "elastic pricing," which means you can get your money back if you're not satisfied after the first year. (Interestingly, the presentations I saw at the IBM conference claimed that, after only three years on IBM's mainframe platform, users were realizing superior ROI to either cloud computing or x86 computing models.) All in all, it is clear that IBM's idea has a lot of appeal -- both to legacy "systems of record" managers (the overseers of traditional ERP, MRP, CRM and other workloads who like the reliability and security of the mainframe) and to the appdevers and cloudies who prize Agile development over all else.

Big iron isn't for everyone

Now, you don't need a mainframe to do hyper-consolidation. Not every company has the skills on staff to run one, or the coin to finance the acquisition of big iron -- with or without elastic pricing. Marczinke notes, for instance, that his clients are simply looking for the real savings from consolidation that hypervisor vendors promised but didn't deliver. He may be right.

His clients are just as interested in hyper-consolidation, but want to remain under the aegis of the x86 hardware and hypervisor software technologies they're more familiar with. These folks need something else: not another sprawling infrastructure of hyper-converged appliances, each a data silo with a particular hypervisor vendor's moat and stockade surrounding its workload and data. They want to embrace consolidated storage -- call it something other than SANs if you want -- so it is less costly to manage, and they want to use locally attached storage where that works. But they want all of it to be manageable from a single pane of glass by someone who knows little or nothing about storage.

Thankfully, some cool things are coming down the pike in the realm of hyper-consolidation. If I am reading the tea leaves correctly, what DataCore has already done with Adaptive Parallel I/O on individual hyper-converged appliances, for example, could very well be on its way to becoming much more scalable, creating -- via software -- a mechanism to deliver application performance and latency reduction at the cluster level. Think of it as "no-stall analytics for the rest of us." Ultimately, this kind of hyper-consolidation may be a winner across a very broad swath of organizations that aren't willing to outsource their futures to the cloud and can't stretch budgets to embrace IBM's most excellent z Systems platform.
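DataCore hasn't published the internals of Adaptive Parallel I/O, so the following toy is only a sketch of the underlying principle -- fanning I/O requests out across many workers instead of serializing them through a single queue -- and not the vendor's implementation. The file name and block size are made up for illustration.

```python
# Toy sketch of the parallel-I/O principle: issue many reads concurrently
# instead of one at a time. This illustrates the general idea only; it is
# not DataCore's Adaptive Parallel I/O.
import os
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096  # hypothetical block size

def read_block(path, offset):
    """One simulated I/O: read a single block at the given offset."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(BLOCK)

def parallel_read(path, offsets, workers=8):
    """Fan the reads out across a pool of workers instead of looping."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda off: read_block(path, off), offsets))

if __name__ == "__main__":
    # Build a throwaway data file so the sketch is self-contained.
    path = "sample.dat"  # hypothetical file name
    with open(path, "wb") as f:
        f.write(os.urandom(64 * BLOCK))
    blocks = parallel_read(path, [i * BLOCK for i in range(64)])
    print("read", len(blocks), "blocks in parallel")
```

The point is simply that software-level parallelism is one way to claw back I/O latency without buying more hardware, which is the scalability story the paragraph above speculates about.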

Next Steps

Hyper-converged appliance popularity soars

What hyper-convergence means for SAN, NAS

A closer look at hyper-converged systems
