
Software-defined storage leaders talk 2018 trends in SDS, HCI

Software-defined storage, fueled by hyper-converged infrastructure and scale-out object/file storage, will grow in 2018, but terminology and product architectures may change.

Experts and software-defined storage leaders predict increased use of software-defined storage and hyper-converged infrastructure in 2018, along with an evolution of the terminology and the product architecture behind these technologies.

Software-defined storage (SDS) is designed to manage data storage resources and run on commodity servers, with no dependencies on the underlying hardware. Some of the most popular SDS product types include scale-out file and object storage, as well as the software powering hyper-converged infrastructure (HCI).

Some software-defined storage leaders envision the SDS approach becoming so common that it will cease to be a product differentiator. And changes could be on the way with hyper-converged architectures, as HCI products encounter scalability challenges with the single-box approach that combines storage, compute, virtualization and networking resources.

Below is a sampling of SDS and HCI predictions from a number of software-defined storage leaders. Additional predictions covering general storage trends, flash and memory technologies, and cloud storage are available at the embedded links.

Ritu Jyoti, research director, IDC: In 2018, software-defined storage will account for about 25% of the enterprise storage market, growing from about 18% in 2017. People are now getting comfortable with it. SDS will continue to enable cloud-native applications and hybrid cloud deployments. Hyper-converged infrastructure and scale-out file and object storage offerings will be the primary contributors for the growth, replacing legacy SAN and NAS.

Will HCI start to lose its appeal?

George Crump, founder and president, Storage Switzerland: The luster is going to come off hyper-converged infrastructure. As these environments scale and node counts grow, the inefficiency of the architecture will become increasingly apparent. And hyper-converged vendors will either need to adapt their solutions, or we may see a return to a more dedicated compute tier and a dedicated storage tier within the architecture.

Matt Kixmoeller, vice president of marketing and product management, Pure Storage: This is the year where hyper-converged is becoming disaggregated and unconverged. The whole mania of hyper-converged was this notion that storage, networking and compute were all sandwiched in the same white box, commodity compute architecture. What we're seeing is that the hyper-converged architecture doesn't scale that well. And the hyper-converged vendors are all breaking out dedicated nodes for storage and dedicated nodes to compute. At that point, it starts to look more like classic converged infrastructure. So, I think the definition of HCI is going to evolve quite a bit and open up from inside the box, if you will.


Mohit Aron, CEO and founder, Cohesity: Data centers will continue to shrink through the application of hyper-convergence to both the primary and secondary storage parts of the data center. The reason is that hyper-converged systems are easier to manage, take less power and space and are less costly. It is a myth that hyper-converged products cannot scale compute and storage separately. The software can deal with heterogeneous clusters, making this independent scaling possible. There is one place though where some improvement is needed. More than the hyper-converged products themselves, some legacy applications need to change. Some earlier applications were written to assume a monolithic compute/storage infrastructure. Such applications need to be rewritten to use the inherent distributed nature of hyper-converged products, so parts of them run on many different servers and together these parts work collaboratively and effectively act like a single, big application.

Software-defined storage leaders on SDS as a differentiator


Scott Sinclair, senior analyst, Enterprise Strategy Group: All storage will become software-defined, and we'll stop talking about software-defined storage. By all storage, I mean the vast majority of on-premises storage solutions have already become to some extent software-defined. I'm not just talking about the ones that are leveraging the terminology. Even the ones that show up as an array, [the vendors] have figured out that if they build the software to be more hardware agnostic, then they can spin features and capabilities faster. In 2018, saying you're software-defined isn't a differentiator, because everyone is essentially doing it. So, organizations will start talking more about what benefit they are using software-defined technology to provide. Are they delivering an unstructured, on-premises cloud in a storage environment? Are they delivering a multi-cloud data management layer that can distribute data across multiple public clouds, as well as on premises?


Eyal David, CTO, Kaminario: The storage industry will continue its trajectory away from traditional scale-up array architectures toward more flexible, software-defined architectures. In particular, we see the composable infrastructure/composable storage paradigm gaining traction as the natural evolution of scale-out storage architectures.

HCI, SDS still rely on hardware performance


Jeremy Werner, vice president of SSD marketing and product planning, Toshiba Memory America: The lower cost of flash and SSDs will enable expansion of all-flash hyper-converged infrastructure deployments. Combined with HCI software, it will become more economical to use flash-based hyper-converged architectures to build and expand IT systems for small and medium-sized businesses. OEMs like Dell and software vendors like VMware and Nutanix will capitalize on this growing opportunity.

Marc Staimer, founder, Dragon Slayer Consulting: The software that runs on the CPU is going to be the next major bottleneck to get attacked to improve performance so that you can get better results out of your storage, whether it be hyper-converged, software-defined or externally shared systems. They're all subject to the same issues, since everybody has gone to an x86 architecture.

HCI, SDS fuel rise of infrastructure generalists

Sinclair, Enterprise Strategy Group: IT organizations will shift away from individual domain expertise of storage administrators and network administrators to infrastructure generalists. IT demands are growing fast, not just because applications are demanding more data, but as we move into the digital economy, companies understand that data can be used to either generate revenue or drive business efficiencies. So, IT resources have to work directly with the line-of-business teams to better understand what they need. All these new jobs and responsibilities take people and take time. And the only way to free up time effectively is to start automating stuff. For some organizations, that's going to be public clouds. For some, it's hyper-converged. For some, it's 'build our own,' tie into orchestration layers, better use containers, and have a more agile IT environment.


Join the conversation


What will be the major trends in software-defined storage and HCI in 2018?
First: Understanding the correct use case in order to provide the right solution is the big challenge. Performance, legacy architecture and hardware requirements are some of the points we need to watch carefully.

Second: Creating POCs, demos and proofs of technology helps the market see more value in SDS solutions, because customers trust their storage hardware, and we are now offering to decouple the intelligence from that hardware and put it in an upper layer that manages everything.

Third: ILM (information lifecycle management) is the magic word for providing value in data governance.

Marcos Pitanga
IBM SDS & SDC Tech Specialist
How ready do you think most enterprise users are at this point in time to try a different approach?
Yes, they are. SDS is part of their daily lives now. The pressure to reduce costs and apply data governance across on-premises and off-premises environments makes it mandatory today. I spent three years evangelizing this approach using POCs, proofs of technology and demos. Today the market is more open to using it to deliver more intelligence for managing data.
Can you share any specific, real-world examples that show why enterprises are more open to this new approach? I know you won't want to name specific customers, but perhaps you could share their industries.
