- Rich Castagna, Vice President of Editorial
Some problems are best solved by breaking them down into discrete tasks and distributing them among a number of different parties, on the theory that divvying up responsibilities and spreading them out will get the work done faster. Other issues require more specialization, so it makes more sense to allocate the work to one or two entities with the required expertise.
While those are disparate methods of getting work done, most companies employ both approaches routinely. For example, many companies centralize back-office operations like HR, accounting and facilities. This works because the services those departments provide are essentially the same regardless of the receiving business unit -- so duplication of effort and expenditure is avoided. However, the same company probably maintains separate sales forces and supporting functions for each of its product lines. That works well, too, because selling a gadget might require completely different knowledge and tactics than selling a widget.
So, companies are usually comfortable functioning with both centralized and decentralized operations. That is, except for IT.
For IT, it's an either-or case -- either everything is centralized or everything is distributed. It seems like every eight or 10 years, many IT shops change course and either rein in operations or spread them out. It's a boon for vendors who exclaim, "Centralize! Centralize! Centralize!" until that mantra loses its buzz and they shift gears with exhortations to "Distribute! Distribute!" Good for vendors maybe, but a nightmare for IT shops.
Change is especially tough in storage environments. Storage is at the core of everything that is IT, so most companies have a lot of it, rely on it and have to manage it effectively and efficiently. And because it accounts for so much of the IT budget and is home to pretty much all of the corporate IP, it can take a huge effort and a lot of expense to alter its course.
Centralized storage for non-mainframe computing environments is barely 20 years old, but in some quarters, it's already been declared dead -- or at least circling the drain, if not yet moribund. Hyper-converged infrastructure and its kin, software-defined storage (SDS), are the poster children of the modern IT shop. These architectures break the mold of shared, centralized, networked storage resources and replace it with a clustered, server-based system that relies on sharing each server's local storage resources.
It's an attractive proposition, often made even more enticing by suggesting it can run on commodity hardware. Somehow, in the great scheme of SDS, the software is so powerful that it can overcome the limitations of cheap hardware.
SDS' flexibility, ability to scale (to some degree, at least) and the idea of having a single entity that comprises storage, compute, networking and hypervisors are all real benefits. But -- you knew a "but" was coming, right? -- this decentralized, building-block approach also has some significant shortcomings.
Server virtualization caught on in data centers because it offered cost savings over the one OS/one server model. With one server hosting many virtual servers, economies of scale reduce costs, as does the reduction in hardware maintenance. You need fewer maintenance dollars and staff to administer and maintain one huge server than lots of smaller ones. Makes sense.
And that's the same sense that led to shared storage. Why would you separately maintain pockets of storage tucked away in dozens or hundreds of servers when you could do all that in one place -- more economically and more efficiently?
So, server virtualization and networked storage share some similar concepts, goals and benefits: centralize and reduce the amount of hardware required and life gets a lot easier all around.
SDS, however, doesn't really follow that model. While, on the surface, SDS promises to shift the burden of operations from the hardware to software, it may actually accomplish the opposite. With shared, networked storage, IT needs to maintain a single unit that provides centralized storage resources to potentially hundreds of virtual and physical servers. Maintenance, administration and other operations are also centralized. In a hyper-converged infrastructure -- whether a single-SKU product or an SDS offering -- the amount of necessary hardware may be far greater than with centralized storage.
As you build out your hyper-converged environment, you're adding more and more server hardware. So, as the installation scales, there are more and more CPUs, DRAM, interfaces, network connections, etc., to support and maintain. Meanwhile, those resources are dedicated to only the application running in the hyper-converged silo, as most hyper-converged infrastructure and SDS products are essentially closed-ended systems.
SDS and hyper-converged systems definitely have their places in data centers, but those places are more likely niches for specialized applications (VDI or Exchange, for instance) rather than primary data center storage. These technologies may get there someday, but they will have to be more like the company as a whole, able to embrace both distributed and centralized operations in a single environment.