Data centers of all sizes are under pressure to be more responsive to the current needs of the business and future computing demands. At the same time, IT departments are constantly reminded that they need to drive down their costs. The roadblock to meeting these goals is most apparent in the data storage infrastructure as it tries to support ever-changing virtualization requirements.
Five years ago, as virtualization moved from test to production, legacy storage systems -- thanks to the addition of flash storage -- were able to keep pace with virtualization requirements, but just barely. Now that these initial virtualization storage systems are due for an upgrade, does it make sense to stay with a traditional shared storage architecture, or is hyper-convergence the way forward?
Storage upgrade motivators
Every few years, IT planners face the need to upgrade their storage infrastructure. There are three primary motivators behind storage upgrades:
- Current storage systems have reached their limits. Before virtualization, a storage system's primary limitation was capacity -- data growth simply surpassed what the system could support. Thanks to new virtualization requirements, the primary limitation is now performance. It is impossible to configure a legacy storage system to meet the performance demands of the infamous storage I/O blender that virtualization creates.
- Storage sprawl. Many data centers added point solutions to address capacity and performance limitations, but this led to storage system sprawl. It is not uncommon to see a data center with a legacy storage system, another storage system for the server virtualization environment and still another for the virtual desktop environment. Eventually, the cost and complexity of managing multiple storage systems while meeting virtualization requirements become the motivator for a storage refresh, with the hope of consolidating storage under a single pane of glass.
- Financial realities. Even if an IT planner could accurately forecast data center demands four to five years out, most organizations are forced to upgrade within three. The reason? Many vendors charge an exorbitant fee for maintenance of hardware and software after its initial warranty period. In many cases, it is less expensive to buy a new storage system than to pay the annual maintenance costs on the older system.
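The storage I/O blender mentioned above can be illustrated with a small sketch. The VM names, block ranges, and round-robin scheduling below are illustrative assumptions, not a model of any real hypervisor, but they show how perfectly sequential per-VM workloads arrive at shared storage as a seemingly random stream:

```python
# Hypothetical sketch of the storage "I/O blender": each VM reads its own
# disk region sequentially, but the hypervisor interleaves their requests,
# so the shared storage array sees what looks like a random workload.

# Assumed layout: three VMs, each owning a contiguous range of block addresses.
vm_streams = {
    "vm1": list(range(0, 5)),        # blocks 0-4
    "vm2": list(range(100, 105)),    # blocks 100-104
    "vm3": list(range(200, 205)),    # blocks 200-204
}

# Round-robin interleaving, a simple stand-in for hypervisor scheduling.
blended = [block for group in zip(*vm_streams.values()) for block in group]
print(blended)  # [0, 100, 200, 1, 101, 201, ...]

# Each VM's stream is perfectly sequential, yet every adjacent pair of
# requests in the blended stream jumps by 100 blocks or more.
jumps = [abs(b - a) for a, b in zip(blended, blended[1:])]
print(min(jumps))  # -> 100
```

On spinning disks, those large address jumps translate into seek-heavy random I/O, which is why legacy arrays sized for sequential throughput struggle under virtualization.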
Continuous upgrade approach
We have all heard the old saying: "How do you eat an elephant? One bite at a time." What if a data center treated upgrades the same way? Instead of a huge project every three to five years, administrators would make small upgrades happen continuously. A hyper-converged architecture, by definition, is in a state of continuous upgrade -- as new nodes are added, the overall compute and storage capacity increases.
Those new nodes also have the latest processors and storage technology to meet virtualization performance requirements. Today, most hyper-converged systems are flash-based, so storage performance is a minor concern. As nodes grow old, they are simply retired and removed from the cluster. The result is a rolling, gradual upgrade that keeps the infrastructure fresh.
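The continuous upgrade model can be sketched as a toy simulation. The retirement age, per-node capacity, and generational growth rate below are illustrative assumptions, not figures from any vendor, but they show how adding one node per year while retiring the oldest keeps the cluster's hardware perpetually fresh:

```python
# Hypothetical sketch: rolling upgrades in a hyper-converged cluster.
# Each year one new node is added (with more capacity, reflecting newer
# hardware generations) and any node past its assumed service life is
# retired, so the cluster is upgraded continuously rather than replaced
# wholesale every few years.

MAX_AGE = 3      # assumed policy: retire nodes after three years
GROWTH = 1.2     # assumed: each hardware generation ships ~20% more capacity
BASE_TB = 10     # assumed usable TB per first-generation node

cluster = []     # list of (year_added, capacity_tb)

for year in range(6):
    # Retire nodes that have reached the end of their service life.
    cluster = [(added, cap) for added, cap in cluster
               if year - added < MAX_AGE]
    # Add this year's node at current-generation capacity.
    cluster.append((year, BASE_TB * GROWTH ** year))
    total = sum(cap for _, cap in cluster)
    print(f"year {year}: {len(cluster)} nodes, {total:.1f} TB usable")
```

After the initial ramp-up, the node count holds steady while total capacity keeps climbing, and no node in the cluster is ever more than three years old.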