To hear some vendors talk about software-defined technology, you would think it's something new. But software-defined storage has been around nearly as long as other mainstay elements of software-defined architecture, such as servers and networking. Early file servers from the 1980s could be considered software-defined storage, since they presented storage to clients that was not necessarily internal to the server.
The simplest definition of software-defined storage (SDS) is that it is software that manages storage decoupled from the underlying hardware. However, vendors have different definitions of the term based on their own platforms.
Some platforms are software-only products that present various storage pools as a single contiguous drive, while others offer sophisticated options such as replication, snapshots and tiering using commodity hardware.
Need for flexibility spawns software-defined technology
For the past 20 years or more, big iron SANs have offered a variety of extremely useful features providing redundancy and high availability, including replication, snapshots, RAID and object-based storage, as well as increased efficiency through thin provisioning, tiering and compression. Systems administrators have moved from a server and SAN model to hyper-converged systems, cloud and virtual servers running on vSphere, Hyper-V and other hypervisors, both on premises and in the cloud.
This growing use of software-defined architecture forced storage vendors to respond by making storage available not only via Fibre Channel and iSCSI, but over LAN, WAN, HTTP and other types of connections, as well. That allowed storage to move from one piece of hardware to another with virtual servers -- whether that piece of hardware was in the local data center, remote data centers owned by the same company or the cloud.
Virtually all software requires storage. Even containers, which were originally conceived as stateless systems that wouldn't require persistent storage, have been modified to support persistent storage because data is at their root.
As applications evolved from monolithic programs running on a single PC to distributed applications running on any of a dozen servers in the local data center, the cloud or mobile devices, storage also had to change. From the original block-based storage that simulated an internal hard drive, to file shares and then object storage, layers of abstraction have been added that make it easier to present the same files to a wide variety of devices, regardless of their location.
Software-defined features make storage fast, secure and flexible
Flexibility is at the heart of software-defined storage. A software-defined architecture allows SDS to connect to applications, no matter where they reside.
In addition to providing storage to applications in various locations, a software-defined storage architecture can optimize performance and costs through auto tiering, which moves the most used data to the fastest available storage, or caching, which uses the fastest storage, generally RAM or flash, to accelerate all reads and writes. Tiering and caching -- two methods of optimizing storage in software-defined architectures -- vary in their flexibility, transparency and the amount of fast storage necessary to accelerate the slower tiers below.
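The caching approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the `ReadCache` class and its backing-store dict are hypothetical stand-ins for a fast tier (RAM or flash) in front of a slow tier (spinning disk).

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read-through cache sketch: serve hot blocks from a fast
    tier (an in-memory dict here) and fall back to a slow backing store,
    evicting the least recently used block when the fast tier is full."""

    def __init__(self, backing_store, capacity=3):
        self.backing = backing_store      # slow tier (e.g., capacity HDD)
        self.capacity = capacity          # blocks that fit in the fast tier
        self.cache = OrderedDict()        # fast tier, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:        # hit: serve from the fast tier
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing[block_id]     # miss: read the slow tier
        self.cache[block_id] = data       # promote the block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data
```

Real caching layers also have to handle writes (write-through vs. write-back) and coherence, which is where much of the complexity, and the difference between products, lies.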
The complexity of software-defined architecture systems has grown dramatically over the last few years; 15 or 20 years ago, there were far fewer potential tiers than there are today. Now, the hierarchy runs from memory-bus devices just below RAM all the way down to tape and the cloud:
- Memory-bus-based storage, such as 3D XPoint, Memory1 and magnetoresistive RAM
- PCI Express and NVM Express flash
- Standard SATA and SAS flash
- 15,000 rpm and 10,000 rpm hard drives
- 7,200 rpm high-capacity hard drives
- Tape and the cloud
Tiering software that can migrate data from the fastest memory to the slowest tape based on how long it has been since it was accessed, and then bring it back up the chain when needed, must be very sophisticated.
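The core decision such tiering software makes, which tier a piece of data belongs in based on how long it has been idle, can be sketched simply. The tier names and idle-time cutoffs below are illustrative assumptions, not figures from any real product:

```python
import time

# Hypothetical idle-time cutoffs (seconds): data untouched for longer
# than a cutoff qualifies for that slower, cheaper tier.
TIERS = [
    ("nvme_flash",    0),             # hot: accessed recently
    ("sata_flash",    3600),          # warm: idle over an hour
    ("fast_hdd",      86400),         # cool: idle over a day
    ("capacity_hdd",  7 * 86400),     # cold: idle over a week
    ("tape_or_cloud", 30 * 86400),    # frozen: idle over a month
]

def choose_tier(last_access_ts, now=None):
    """Pick the slowest tier whose idle-time cutoff the data has passed."""
    now = now if now is not None else time.time()
    idle = now - last_access_ts
    tier = TIERS[0][0]
    for name, cutoff in TIERS:
        if idle >= cutoff:
            tier = name               # keep demoting while cutoffs are met
    return tier
```

Production tiering engines weigh far more than idle time (access frequency, I/O size, cost per gigabyte, pinning policies), but the demote-and-promote loop is built around a decision like this one.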
Security and data protection have also become much more complex. Software-defined technology now often provides a variety of features in this area:
- Redundancy to prevent data from being lost.
- Consolidation applications that prevent too many different copies from propagating.
- Snapshots that allow instant images of a data set to be backed up to tape or used to start a new version of a database.
- Replication to remote locations.
- Backups to the cloud that can be brought online as needed.
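Snapshots are "instant" because they initially just record a frozen view of the volume's block map rather than copying data. The toy `Volume` class below is a simplified sketch of that idea, not how any particular array implements it: a snapshot shares the current block mappings, and later writes to the live volume replace entries without disturbing the frozen view.

```python
class Volume:
    """Toy snapshot sketch: a snapshot freezes the block map at a point
    in time; later writes to the live volume map block IDs to new data,
    leaving the snapshot's view unchanged."""

    def __init__(self):
        self.blocks = {}     # live view: block_id -> data
        self.snapshots = {}  # snapshot name -> frozen block map

    def write(self, block_id, data):
        self.blocks[block_id] = data          # overwrite the live mapping

    def snapshot(self, name):
        # Shallow copy of the map; block data itself is not duplicated.
        # Real systems share block pointers and copy-on-write on change.
        self.snapshots[name] = dict(self.blocks)

    def read_snapshot(self, name, block_id):
        return self.snapshots[name].get(block_id)
```

A snapshot taken this way can then be mounted read-only for a tape backup, or cloned to seed a new copy of a database, exactly the use cases listed above.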
The downside of software-defined architecture systems that provide some or all of these features is that the software may not be as robust as SAN systems that run on only one piece of hardware using a hardened, single-purpose operating system. In addition, administrators may find they need to run more than one software-defined storage system to get all the features they want, which may cause conflicts among these multiple systems. Setting up a software-defined storage system may also require more knowledge and time than a "drop-in" SAN system.
However, for many administrators, the ability to upgrade software without having to buy new hardware from a single vendor, along with the flexibility to fit storage to a wide variety of applications in many different locations, is causing a boom in software-defined systems.