

Gridstore Inc. CEO predicts hyper-converged vendors will double in 2015

Gridstore claims nearly one-third of new revenue stems from sales of its recently launched HyperConverged Appliance, and that Hyper-V is winning VMware converts.

Gridstore Inc. jumped bandwagons in 2014, launching a hyper-converged system while shedding its software-defined storage label.

The Gridstore HyperConverged Appliance line of hybrid and all-flash arrays launched in late 2014. That put Gridstore on a rapidly expanding list of vendors who sell hyper-converged systems that collapse computing, networking and storage resources into a single appliance.

The focus on hyper-converged storage marks the latest strategy shift for Gridstore, which started out in 2009 selling scale-out network-attached storage (NAS) systems to small and medium-sized businesses and service provider markets. Gridstore refashioned itself as a software-defined storage company with Virtualized Controller Architecture for distributing shared NAS workloads across client machines. The strategic focus changed in 2013 when Gridstore re-architected its grid storage nodes to optimize block storage for the Microsoft Hyper-V hypervisor. The hyper-converged appliance is part of the Hyper-V focus.

We caught up with Gridstore CEO George Symons to get his take on his company's progress, trends in the storage market and Gridstore's planned product roadmap for 2015.

Gridstore previously positioned itself as a software-defined storage vendor. Why did you drop that label?

George Symons: We were focused on software-defined storage and it wasn't helping us. A year ago I was saying, 'Get more clarity around software-defined storage.' Well, that didn't happen. It just clarified that our positioning around hyper-convergence was much better for us. We still use our intelligent software to deliver hyper-converged infrastructure. It's just that the software-defined label was not useful in the marketplace.

There are a couple of reasons. One of the big challenges we noticed is that people often think of software-defined storage kind of like a do-it-yourself kit, versus a packaged solution. This was especially true as we tried selling to the big markets. The do-it-yourself approach works for a few extremes, but it's not for the core of the market. Service providers try to save money by doing that, for example. And those at the low end of the market get the software and JBODs and try to make it all work to save some money. But most people are looking for packaged solutions.

The second challenge is that every vendor seems to be trying to label themselves as software-defined. There are no subcategories. There is no universally agreed-upon standard of what software-defined means. There was no way for us to differentiate from someone like Nexenta, which is what I would call a standard iSCSI storage-area network [SAN] or maybe a NAS device. What we've done is put software on both the server and the storage side. Because of the software benefits, we can do policy-based management and handle the data differently by understanding data at the server side. That's a different solution than taking your storage controller and software and putting it on any hardware.

How do you evaluate the competitive landscape for hyper-converged platforms in 2015?

Symons: We are seeing explosive growth and adoption of hyper-convergence, both in terms of people purchasing and [the number] of vendors. I expect we'll see twice the number of hyper-converged vendors by the end of the year. Roughly 32% of our fourth-quarter revenue came from sales of the hyper-converged systems that we introduced at the tail end of last year. That's completed sales. We went from zero to 32%. In our pipeline this quarter, 62% of projected revenue comes from hyper-converged.

How do you distinguish your hyper-converged storage from competitors' platforms?

Symons: We have the scale-out capabilities to get over some of the issues other hyper-converged vendors face. As you scale, most hyper-converged solutions protect data by mirroring, and in some cases, if you want real protection at an enterprise level, you need two mirrors. With two mirrors, if I wanted 12 TB of usable capacity, I would need 36 TB of raw capacity. The erasure coding Gridstore uses delivers the same two levels of protection while giving up only one-third of the raw capacity to overhead -- I would need 18 TB [rather than] 36 TB of raw capacity.
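The capacity arithmetic Symons describes can be sketched as a quick back-of-envelope model (illustrative only; the function names are ours, not Gridstore's):

```python
# Back-of-envelope protection-overhead math from the interview:
# two mirrors means three full copies of the data, while the erasure-coding
# scheme described gives up only one-third of raw capacity to parity.

def raw_needed_mirroring(usable_tb, copies=3):
    """Two mirrors = three full copies of every byte."""
    return usable_tb * copies

def raw_needed_erasure(usable_tb, data_fraction=2/3):
    """Erasure coding where parity consumes one-third of raw capacity."""
    return usable_tb / data_fraction

print(raw_needed_mirroring(12))  # 36 TB raw for 12 TB usable
print(raw_needed_erasure(12))    # 18.0 TB raw for the same protection level
```

At 12 TB usable, the erasure-coded layout needs exactly half the raw capacity of triple-copy mirroring, which matches the 18 TB versus 36 TB figures quoted.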

Virtualization broke storage as we knew it, because admins can no longer guarantee the performance of a particular virtual machine [VM] -- the easiest workaround is to keep overprovisioning. Our ability to fine-tune the I/O controller on a per-VM basis is a real strong differentiator. We can deliver an all-flash hyper-converged system because we only have to use a small amount of additional storage to provide multilevel protection.

Virtual Controller sits in the Windows stack and looks like an iSCSI device. These are high-end compute servers: dual 12-core Intel Xeon E5-2690 v3 processors and 256 GB of RAM. Each server has 6 TB of storage that gets pooled to provide 12 TB across the cluster.

Who's buying your hyper-converged storage, and how are they using it?

Symons: Almost all the hyper-converged systems we sold last quarter are being used for primary storage. I would characterize the number of customers in the [20's or 30's]. They're not coming from converged infrastructure, which is more of a reference architecture that may or may not be delivered as a platform. In almost every case, our customers are coming from traditional storage server environments virtualized with some sort of SAN. In some cases, they're doing hyper-convergence for a specific workload -- a virtual desktop infrastructure deployment, for example. But in other cases they are doing hyper-convergence as part of a hardware refresh: Rather than refreshing standard storage and server hardware, they opted for hyper-converged infrastructure for their general-purpose compute.

VMware's vSphere remains the most widely adopted hypervisor. Does focusing on Microsoft Hyper-V deployments limit the market for your hyper-converged storage?

Symons: We're focused on Windows Hyper-V because, from a management perspective, you can put System Center on top and then add the Azure Pack to create a hybrid cloud. It basically gives a testing environment a self-serve private cloud-in-a-box that's ready to go.

How many VMware customers are switching to Hyper-V storage with Gridstore?

Symons: We have made a number of sales to people who moved away from a VMware-NetApp [combination], for example, to Gridstore with Hyper-V. They justify it very easily off of their license savings. And, of course, Hyper-V is packaged with Windows Server.

You've stated publicly that 2015 is the year all-flash arrays will gain significant traction. What makes you think so?

Symons: This is a change that I would not have expected a year ago, but I think we're going to see explosive growth in all-flash, compared to hybrid flash with spinning disk. I'm talking specifically for primary storage. I see two things happening: We're hitting the price tipping point where we can buy 960 GB eMLC solid-state drives [SSD] at very competitive prices. It's probably [approximately two times] the cost of spinning disk, but at two times it gets pretty interesting. When I look at one of our hyper-converged units with three nodes, the difference between a hybrid and an all-flash system in many cases can be $20,000 to $30,000 on a $100,000 system. If I can get all-flash for $30,000 more, I'm going to do it. The price delta for all-flash is narrowing.
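As a sanity check on those figures, the system-level premium works out as follows (hypothetical prices, chosen only to match the ratios Symons quotes):

```python
# Illustrative only: hypothetical prices matching the ratios in the
# interview (a $20,000-$30,000 all-flash delta on a ~$100,000 system).

hybrid_system_price = 100_000   # hypothetical three-node hybrid price
all_flash_premium = 30_000      # upper end of the quoted delta

premium_pct = all_flash_premium / hybrid_system_price * 100
print(f"All-flash premium: {premium_pct:.0f}% of system price")
```

Even at the top of the quoted range, all-flash adds roughly 30% to the system price -- the "tipping point" argument is that this premium keeps shrinking as eMLC prices fall.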

This may be true for initial installation, but isn't the ongoing cost of replacing flash the bigger cost impediment?

Symons: The second thing is that the durability of flash is getting better and better. And honestly, the biggest issue you have today in a data center is replacing drives. I don't think replacing flash is going to be any worse than replacing spinning disk. The good news with SSDs is you don't have the mechanical side of it. You do have the wear, but this is where the technology for flash is moving rapidly.

Does Gridstore have plans to deliver a true all-flash array at some point?

Symons: We already deliver an all-flash hyper-converged platform. One of the key things about our hyper-converged [architecture] is that we enable you to scale storage separately from compute. We can do hybrid storage nodes if what you need is a lot of capacity. We're seeing hybrid become interesting for the Tier-2 storage and flash for Tier-1 storage.

Along the same line, what will drive Gridstore's product roadmap in 2015?

Symons: We've made the big bet on hyper-convergence. Another area where we'll focus a lot of effort this year is on what I call application-focused solutions. A lot of our customers are looking at doing SQL Server consolidation, so we'll do a lot of work with our partners, customers and our solutions team. Some of it will be around best practices, and some will be additional code to make sure we give features/functions for SQL Server consolidation that tie into the cloud as an extension for disaster recovery and other capabilities. Those workload-based solutions will be a key area for us this year.

Will this necessitate new software rollouts?

Symons: Some of it will be to continue driving automation and ease of deployment. It's not a core underlying technology, but more software add-ons to our infrastructure and feature set upgrades.

When can we expect the new releases?

Symons: The first part will come out in Q2, with more coming out in Q3. We are not planning any new hardware this year -- the existing Gridstore hyper-converged platform is still gaining traction.


Join the conversation


Do hyper-converged systems offer an appealing alternative to a SAN upgrade?
The recent hyper-converged system options seem like a rather lazy idea and really are not up to the standards of existing SAN upgrades. SAN upgrades are reliable, tested and give consumers greater control versus the hyper-converged systems now offered in competition. For my industry, we will not be adopting hyper-converged systems, instead opting to continue using the tested and proven SAN upgrades as they become available. We have been happy with these upgrades.
Industryveteran — 27 Jan 2015 2:55 PM

Gridstore is trying to sell its old product by riding the hyperconverged bandwagon. They seem to have replaced disk with flash and added some management features for Hyper-V. Gridstore needs to prove that they have true enterprise-class data protection and a product that truly scales. All they seem to have is a Windows driver to encode and spread data across a few systems, and it is being billed as hyper-converged storage. In the past they sold the so-called "storage grid" without the essential functionality of even enterprise-class storage and called themselves grid storage. Be careful about the claims by this company.

Classic FUD: "Beware of the small company making big claims that the big company has no response to." Being a startup has the advantage of being able to innovate without a legacy. But it also means limited resources, so you have to take pragmatic steps to do something big.

Correct: our early products focused on the ability to scale out using this grid architecture, which is unique -- hence the company name Gridstore. We did not focus on adding the enterprise features of large competitors. Today, this architecture has been tested to around 90 nodes and proves the ability to scale.

During this time, we have also perfected the use of high-performance erasure coding for data protection -- more commonly found among the more innovative object storage companies, and something that goes beyond any traditional "enterprise" storage system. This provides the same protection as replicas while using 50% less infrastructure. This is critical in hyperconverged: you are not replicating cheap capacity any more, you are replicating the full infrastructure stack. Reducing that by 50% is a massive cost and TCO savings.
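The recovery principle behind erasure coding can be shown with a toy single-parity example (illustrative only -- real schemes such as Reed-Solomon tolerate multiple simultaneous failures, and this is not Gridstore's actual implementation):

```python
# Toy single-parity erasure code: XOR parity over equal-sized data chunks
# lets any one lost chunk be rebuilt from the survivors plus the parity.

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(chunks):
    """Compute the XOR parity chunk across all data chunks."""
    parity = bytes(len(chunks[0]))
    for chunk in chunks:
        parity = xor_bytes(parity, chunk)
    return parity

def recover(surviving_chunks, parity):
    """Rebuild the single missing data chunk from parity + survivors."""
    missing = parity
    for chunk in surviving_chunks:
        missing = xor_bytes(missing, chunk)
    return missing

data = [b"node", b"grid", b"flsh"]   # three data chunks on three nodes
parity = make_parity(data)           # stored on a fourth node

# Lose the middle chunk, then rebuild it:
assert recover([data[0], data[2]], parity) == b"grid"
```

The capacity advantage over replication comes from the same idea at larger stripe widths: parity chunks protect many data chunks at once instead of duplicating each one.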

It is also correct that we operate in the Windows kernel, which moves the control and data planes into the host and uses a distributed protocol to communicate with other nodes in the grid. That is a requirement for HyperConverged and the most efficient way to achieve the integration while consuming the least amount of resources and delivering the fastest I/O path. Only VMware EVO has been able to achieve the same. All other "HyperConverged" platforms run a full iSCSI or NFS server as a guest VM -- very easy to do but very inefficient. While you say it is just a driver, that is the great part about it: it simply looks like a SCSI block device to the host, making integration completely seamless and transparent to every tool and process a Windows admin could use.

Each of these three elements required new innovations and time to mature. These are architectural designs that differentiate our products, not something whipped up overnight. When you bring all three of these components together, you get a HyperConverged platform that scales using 50% less infrastructure than all other competitors while delivering higher performance.
InterestedStranger — 28 Jan 2015 7:49 AM

"Can a hyper converged system that allows you to scale out storage separately to compute really be called hyper converged?"


That's a really interesting question.

One of the biggest complaints we heard from companies using Hyperconverged was they were being forced to scale Compute and Storage together (the classic definition of HyperConverged). The reason the industry moved away from Direct Attached storage was due to the fact that historically storage grows 5-6X faster than compute. The sprawl that was common with direct attached storage is the exact same with HyperConverged. Growing your compute and licensing stack at 5-6X their normal rate is not good. The capital and operating costs will be enormous.

So we built into our architecture the ability to add storage-only nodes. They come in exactly the same form factors, but only add storage to the pool. They do not add compute, and they do not require a full OS, virtualization or management software stack on top. But they are managed together as a single resource, through a single management point.

We deliver hyperconverged nodes that contain both Compute and Storage. We can also grow that storage pool easily by adding storage nodes to the pool. And the Compute nodes simply see the pool is larger.

We can also work with existing compute -- it does not need to come in HyperConverged packaging from us. Our virtual controller software can be deployed at no cost to any of your existing hosts, which can then access the storage pools. We do not rely on a virtualization layer like all of our competitors do, so this gives us greater freedom in how to deploy. We do not create another technology silo.
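The economics of storage-only nodes can be illustrated with a toy model (all prices and per-node capacities here are hypothetical, invented purely for the sketch):

```python
# Hypothetical cost model: coupled HCI scaling buys a full compute node
# (plus hypervisor and management licensing) for every capacity increment,
# while storage-only nodes add capacity without the compute stack.

FULL_NODE_COST = 30_000     # hypothetical: compute + storage + licenses
STORAGE_NODE_COST = 12_000  # hypothetical: storage-only, no hypervisor stack
TB_PER_NODE = 6             # hypothetical capacity added per node, either type

def coupled_cost(extra_tb):
    """Cost of adding capacity when every node must be a full HCI node."""
    nodes = -(-extra_tb // TB_PER_NODE)  # ceiling division
    return nodes * FULL_NODE_COST

def decoupled_cost(extra_tb):
    """Cost of adding the same capacity via storage-only nodes."""
    nodes = -(-extra_tb // TB_PER_NODE)
    return nodes * STORAGE_NODE_COST

print(coupled_cost(30), decoupled_cost(30))  # 150000 60000
```

If storage demand really does grow several times faster than compute, the gap between the two curves widens with every refresh cycle, which is the argument being made for decoupled scaling.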

Our view is that HyperConverged is a packaging or deployment option. On its own, it has some critical downsides that we wanted to address for customers. So strictly speaking, we go beyond the classic definition of HyperConverged -- and our customers liked that.