
DataCore launches latest virtualization platform

DataCore added auto-provisioning and support for Windows Server 2003 to the latest version of its SANsymphony virtualization engine this week.

Hoping to strike a chord with its customers, DataCore Software Corp. upgraded its SANsymphony storage virtualization software platform this week by adding auto-provisioning features and support for the Windows Server 2003 operating system.

Ken Horner, vice president of marketing for the Fort Lauderdale, Fla.-based company, said SANsymphony 5.2 hits two sweet spots: It delivers auto-provisioning capabilities while taking advantage of the storage-centric benefits of Windows Server 2003.

"Version 5.2 is not only now fully Win2003 compatible but also exploits and utilizes Microsoft's new Virtual Disk and [Volume] Shadow Copy Services," Horner said.

Windows Server 2003 environments can use the Virtual Disk Service (VDS), the Volume Shadow Copy Service (VSS) and Multipath I/O (MPIO) to request functions from DataCore storage pools.

Horner said Windows is being accepted as a mainstream storage platform. "It's just a matter of time before Windows totally penetrates the NAS market and proliferates the SAN market as well," he said.

SANsymphony also takes advantage of new Intel hyper-threaded CPUs and larger network caches to gain performance boosts. New multi-port host bus adapters have also been qualified, effectively doubling the port density per node while enhancing IOPS and throughput. DataCore says the SANsymphony virtualization engine exceeds 400,000 IOPS and 2.1 gigabytes per second sustained across two nodes.

DataCore said the use of automation and group actions in SANsymphony 5.2 allows a single storage administrator to manage many terabytes of distributed disks on Windows, Unix, Linux, MacOS and NetWare hosts. Rather than loading a multitude of storage resource management (SRM) agents for each application and each operating system, SANsymphony gleans storage consumption and I/O rates directly from virtual volume statistics gathered in real time across the network.
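The agentless model described above -- reading consumption and I/O figures from the virtual volumes themselves, at the virtualization layer, rather than from per-host SRM agents -- can be sketched roughly as follows. This is a hypothetical illustration only; the class, field and function names are invented for the sketch and are not DataCore's actual API.

```python
# Hypothetical sketch of agentless capacity/IO reporting: statistics are
# read from the virtualization layer's own volume records, so no agent
# runs on the Windows, Unix, Linux, MacOS or NetWare hosts.
from dataclasses import dataclass


@dataclass
class VirtualVolume:
    name: str           # volume as presented to a host
    host_os: str        # e.g. "Windows", "Linux", "NetWare"
    capacity_gb: float  # provisioned size
    used_gb: float      # blocks actually consumed
    iops: int           # current I/O rate observed at the virtualization node


def pool_report(volumes):
    """Summarize consumption across all hosts from one vantage point."""
    total = sum(v.capacity_gb for v in volumes)
    used = sum(v.used_gb for v in volumes)
    return {
        "volumes": len(volumes),
        "capacity_gb": total,
        "used_gb": used,
        "utilization_pct": round(100.0 * used / total, 1) if total else 0.0,
        "aggregate_iops": sum(v.iops for v in volumes),
    }


vols = [
    VirtualVolume("erp_data", "Windows", 500.0, 410.0, 12000),
    VirtualVolume("web_logs", "Linux", 200.0, 60.0, 3000),
]
print(pool_report(vols))
```

The point of the design is the single vantage point: because every volume is virtual, one administrator's console can total up consumption for every host OS without touching the hosts.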

Users have been aware of the benefits of virtualization for some time, but they are waiting for clear-cut definitions and proven solutions in the open systems market.

"Having a single point of administration -- that's huge," said Stephen Serbe, technology architect for E. & J. Gallo Winery in Modesto, Calif., explaining that he wants and expects to get such a benefit from virtualization.

But, while Serbe said he has seen virtualization for years -- virtual memory is the basis of all of his company's servers -- what he hears now seems more like vendor marketing hype. Serbe wants to know the specific value that virtualization will provide. For instance, he doesn't want to have to put more agents on every server just to know what's going on with his data; he's afraid of creating server downtime.

"We're in the business of producing wine. We're not a test lab, and more often we've felt like the latter," Serbe said.

Steve Duplessie, founder and senior analyst at Enterprise Storage Group Inc., in Milford, Mass., said the biggest fear of virtualization vendors must be the arrival of IBM Corp.'s long awaited storage virtualization tools.

"Life was tough in Fort Lauderdale but is about to get a heck of a lot tougher," Duplessie said. "If IBM turns on its machine, how do you stop it, realistically?"

Earlier this week, IBM solidified the release dates for its first pair of storage virtualization products. The IBM TotalStorage SAN Volume Controller and SAN Integration Server, which will both be available July 25, are designed to give users a centralized point of control for volume management and to provide a common platform for functions like copy services, quality of service, security, and improved capacity utilization, IBM said.

IBM's design provides performance scalability by adding more I/O groups, which enable the SAN Volume Controller to scale storage capacity by adding disks to its attached storage arrays. The company said this results in performance of up to 280,000 I/Os per second (IOPS), up to 1,780 Mbps of throughput and up to 2 petabytes of pooled storage.

Let us know what you think about the story. E-mail Kevin Komiega, News Writer

