Storage Virtualization: What are the benefits?
By   |  July 03, 2015

A fast-growing trend in recent years has been virtualization, so much so that almost every IT product family can now be virtualized: networks, storage, servers, and even desktops and workstations through VDI-like mechanisms. We have decided to focus on storage, since it pretty much underpins the other market segments.

Today, virtualization is a subtle blend of hardware and software. On the hardware side, processors are evolving dramatically, with the number of physical and logical cores constantly increasing, which is immensely helpful for virtualization workloads. The third-generation Xeon E5 processors (Haswell) offer from 4 up to 18 physical cores in the case of the Xeon E5-2699 v3 CPU. This gives a server ample capacity to run dozens of virtual machines (VMs) under optimal conditions; a two- or four-processor server can execute from dozens to hundreds of active VMs. In addition to this horsepower, dedicated instruction sets known as VT (Virtualization Technology) are built into the processors and improve VM performance by granting VMs direct addressing of the hardware resources of the host machine via the PCI Express bus.
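To make the core arithmetic above concrete, here is a minimal back-of-the-envelope sizing sketch. The 4:1 vCPU overcommit ratio and the 2-vCPU VM size are illustrative assumptions, not vendor guidance:

```python
# Hypothetical sizing sketch: how many VMs fit on a host given its
# logical core count and an assumed vCPU overcommit ratio.

def max_vms(sockets: int, cores_per_socket: int, threads_per_core: int,
            vcpus_per_vm: int, overcommit_ratio: float) -> int:
    """Return the number of VMs that fit within the host's vCPU budget."""
    logical_cores = sockets * cores_per_socket * threads_per_core
    vcpu_budget = logical_cores * overcommit_ratio
    return int(vcpu_budget // vcpus_per_vm)

# A two-socket Xeon E5-2699 v3 host: 2 x 18 cores, Hyper-Threading on,
# assumed 4:1 overcommit and 2 vCPUs per VM.
print(max_vms(sockets=2, cores_per_socket=18, threads_per_core=2,
              vcpus_per_vm=2, overcommit_ratio=4.0))  # 144
```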

On the software side, the two leading virtualization platforms are Microsoft Hyper-V and VMware vSphere (whose hypervisor is ESXi). Both are type 1 hypervisors: Hyper-V is installed as a role of Windows Server 2012 or 2012 R2, after which the hypervisor runs directly on the hardware with Windows in a parent partition, while ESXi is independent of any general-purpose OS and runs natively on the host server.


Storage virtualization delivers three key benefits:

  • independence from the hardware,
  • a manageable quality of service (QoS),
  • consolidation of input/output (I/O) operations in memory.

Managing heterogeneous interfaces
This virtualized approach allows freedom of choice with respect to hardware. No matter which brands of resources are dedicated to storage, they can all be managed from a single dashboard. The same applies to the varying interfaces that have often evolved over the years: a virtualization solution allows the seamless coexistence, and even intercommunication, of iSCSI and Fibre Channel interfaces, for instance. And this is far from the sole benefit: it also gives access to recent and innovative features such as auto-tiering (moving the most frequently accessed data from traditional hard disks to fast flash-based SSDs), CDP (continuous data protection) and thin provisioning. Added to this is the ability to connect, snapshot features included, to cloud spaces such as Amazon Web Services and the Microsoft Azure cloud platform.
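The auto-tiering idea mentioned above can be sketched in a few lines. This is an illustrative model, not any vendor's actual algorithm: blocks whose recent access count crosses a threshold are promoted to SSD, and SSD blocks that went cold are demoted back to HDD.

```python
# Minimal auto-tiering sketch (illustrative, not a product algorithm).

def retier(placement: dict, access_counts: dict, hot_threshold: int) -> dict:
    """Return a new block -> tier mapping based on recent access counts."""
    new_placement = {}
    for block, tier in placement.items():
        hits = access_counts.get(block, 0)
        if hits >= hot_threshold:
            new_placement[block] = "ssd"      # promote hot data to flash
        elif tier == "ssd" and hits == 0:
            new_placement[block] = "hdd"      # demote data that went cold
        else:
            new_placement[block] = tier
    return new_placement

placement = {"a": "hdd", "b": "hdd", "c": "ssd"}
print(retier(placement, {"a": 12, "b": 1}, hot_threshold=10))
# {'a': 'ssd', 'b': 'hdd', 'c': 'hdd'}
```

A real implementation would track access counts over a sliding window and throttle the volume of data moved per cycle, but the promote/demote decision is the heart of the feature.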

Storage rules to guarantee operations
Being able to manage QoS over the entire storage infrastructure allows input/output operations to be prioritized according to application needs. Financial or HR applications can be categorized as critical and therefore benefit from guaranteed, prioritized access regardless of the load on the storage infrastructure, while still remaining within the permissible limits.

Another example is a company’s business database, which through QoS manageability can keep a guaranteed throughput: it remains fully usable under favorable conditions and retains a minimum percentage of bandwidth even under heavy load, thus ensuring high availability. As Mr. Le Cunff states, it is not a question of data volume; he cites the case of a food operations company whose database is only 300 GB but hosts critical traceability data that must be accessible at all times with optimal throughput.
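A guaranteed-minimum bandwidth policy like the one described can be sketched as follows. The allocation model (each application gets its declared floor, and spare bandwidth is split evenly) and all numbers are assumptions for the illustration:

```python
# Sketch of a guaranteed-minimum QoS allocation (assumed model).

def allocate(total_mbps: float, guarantees: dict) -> dict:
    """guarantees: app -> guaranteed Mbps floor. Grant each application
    its floor, then split the leftover bandwidth evenly among them."""
    minimum = sum(guarantees.values())
    assert minimum <= total_mbps, "guarantees oversubscribe the link"
    spare = (total_mbps - minimum) / len(guarantees)
    return {app: floor + spare for app, floor in guarantees.items()}

# The critical finance database keeps at least 400 Mbps even under load.
print(allocate(1200, {"finance_db": 400, "hr": 100, "backup": 100}))
# {'finance_db': 600.0, 'hr': 300.0, 'backup': 300.0}
```

Whatever the load generated by the other applications, `finance_db` never drops below its 400 Mbps floor, which is exactly the high-availability guarantee the paragraph describes.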

Hardware independence to combine existing and new technologies
An enterprise’s infrastructure is almost always an accumulation of heterogeneous resources using separate connectors: iSCSI SANs, FC SANs, disks, disk arrays… And as often happens, whenever a manufacturer decides to change its product line, connection and compatibility questions arise. Take for example a client who purchased an IBM disk array three years ago, a model since removed from the vendor’s catalog: it is very probably no longer supported, on either the hardware or the software compatibility side. Another typical example is an enterprise needing to evolve an existing infrastructure started years ago but being forced to stay with the same technology, often at exorbitant cost (new equipment is expensive) and without any performance benefit. A virtualized storage architecture solves both scenarios and provides access to the latest technologies without disturbing the existing hardware infrastructure. It is also an opportunity to deploy “dumb” storage arrays at lower cost, since the intelligence is offloaded to the storage hypervisor.

Migration without the headache
Migration efforts are also eased during a consolidation phase, in which many low-capacity drives are replaced with more recent, higher-capacity ones; another benefit is that energy consumption falls sharply. Usually, data migration requires an IT department to first back up the data, and thus provision an equivalent storage space. Such a setup takes time, is cumbersome and risky, not to mention the need to keep track of data changes between the backup and restore operations. When a company has a virtualized storage infrastructure, the abstraction layer that isolates the hardware allows, through a feature known as substitution, data to be copied to new storage spaces without interruption, afterwards leaving the freedom to keep the old, less effective devices for less critical needs or to withdraw them from service altogether.
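The interruption-free copy described above can be sketched with a classic mirror-while-migrating scheme. This is an assumed model of how such a substitution layer behaves, not DataCore's actual implementation: during the migration, writes go to both devices and reads are served from whichever side already holds the current data.

```python
# Illustrative sketch of online migration behind an abstraction layer:
# the application keeps reading and writing while blocks are copied.

class MigratingVolume:
    def __init__(self, old: dict, new: dict):
        self.old, self.new = old, new
        self.copied = set()            # blocks already present on the new device

    def write(self, block, data):
        self.old[block] = data         # keep the old side consistent
        self.new[block] = data         # mirror to the new device
        self.copied.add(block)

    def read(self, block):
        return self.new[block] if block in self.copied else self.old[block]

    def background_copy_step(self, block):
        if block not in self.copied:   # skip blocks already mirrored by a write
            self.new[block] = self.old[block]
            self.copied.add(block)

vol = MigratingVolume(old={0: "a", 1: "b"}, new={})
vol.background_copy_step(0)            # background copier moves block 0
vol.write(1, "b2")                     # application writes during migration
print(vol.read(0), vol.read(1))        # a b2
```

Once `copied` covers every block, the old device can be retired or repurposed for less critical needs, exactly as the paragraph describes.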

An opening to the simplified cloud and three-speed storage
Harnessing the cloud as a third storage space for overflow purposes or for BCP/BRP (Business Continuity Plan / Business Recovery Plan) is much simpler with a virtualized infrastructure. Depending on the speed of the network link, it is possible to maintain a synchronous (high-speed link) or asynchronous (low-speed link) backup mechanism. Even better, the cloud is also integrated into the single management console, to keep the overview as clear and complete as possible. In addition to this seamless integration of cloud storage, a storage hypervisor provides visibility into the types of storage space. These are typically divided into three storage tiers set up within the company:

  • Slow storage spaces are dedicated to local backups.
  • Efficient storage spaces are reserved for office and production applications (accountancy, HR).
  • High-performance SSD- or flash-based storage is reserved for critical business applications.
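The two rules just described, sync-versus-async replication by link speed and the three-tier split, can be sketched as simple decision functions. The 100 Mbps cut-off and the tier names are assumptions for the illustration:

```python
# Illustrative decision rules from the text above (thresholds assumed).

def replication_mode(link_mbps: float, threshold_mbps: float = 100.0) -> str:
    """Fast link: synchronous backup to the cloud; slow link: asynchronous."""
    return "synchronous" if link_mbps >= threshold_mbps else "asynchronous"

def pick_tier(is_backup: bool, is_critical: bool) -> str:
    """Map a workload onto one of the three storage tiers described."""
    if is_backup:
        return "slow"                 # local backups
    if is_critical:
        return "high_performance"     # SSD/flash for critical applications
    return "efficient"                # office and production applications

print(replication_mode(1000))                          # synchronous
print(replication_mode(20))                            # asynchronous
print(pick_tier(is_backup=False, is_critical=True))    # high_performance
```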

The most efficient storage for the most critical missions
These three types of storage space can be combined with QoS rules to ensure their availability and thus respond to a company’s changing needs. For example, a business database that grows from 50 GB to 700 GB can be migrated to faster, higher-capacity storage resources. One golden rule of storage should not be forgotten: it is better to spread data across multiple disks (or drive arrays) of lower capacity than to put it all on a single disk or drive bay, for obvious reasons of redundancy and security. The other benefit is better load balancing across multiple arrays and disks (it is better to use three 300 GB disks than a single 1 TB drive). Some application contexts require the fastest available resources; the greatest constraints are those imposed by virtualized workspaces (VDI and DaaS). Software such as SANsymphony can identify the fast and slow zones across multiple storage spaces, spot potential bottlenecks and move applications accordingly onto higher-performance storage pools. Another bonus of such visibility is the statistics it yields, which feed predictive failure algorithms to avoid the risk of an operational halt.
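The "three 300 GB disks rather than one 1 TB drive" rule boils down to spreading blocks across spindles. Round-robin striping is one classic placement policy, used here purely as an illustration (the disk names are hypothetical):

```python
# Sketch of the golden rule above: spread data across several smaller
# disks for load balancing, using simple round-robin striping.

def stripe(blocks, disks):
    """Assign each block to a disk round-robin; return disk -> blocks."""
    layout = {d: [] for d in disks}
    for i, block in enumerate(blocks):
        layout[disks[i % len(disks)]].append(block)
    return layout

layout = stripe(range(9), ["disk300_1", "disk300_2", "disk300_3"])
print({d: len(bs) for d, bs in layout.items()})
# {'disk300_1': 3, 'disk300_2': 3, 'disk300_3': 3}
```

Each disk receives an equal share of the blocks, so I/O load is balanced across three spindles instead of queuing on one, and losing a single device affects only part of the data.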

A rule-oriented storage strategy
The VMware approach is to define rule-based storage: take the physical storage and add an abstraction layer on top of it to pool all the existing storage resources. Thereafter, the VMware administrator uses these resources to ensure quality of service. To achieve this, the rules are defined first (extremely fast storage, highly available or redundant storage, etc.). The VMware administrator in charge of deploying virtual machines then does not need to know which storage they will actually be physically attached to, but simply indicates the needs of the deployed application (highly available and/or high-performance). The application is therefore deployed with the defined quality-of-service rules, ensuring that the storage strategy is consistent with the real needs of the applications or VMs.
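The policy-matching step described above, declare what the application needs and let the layer find a pool whose capabilities satisfy it, can be sketched like this. Pool names and capability labels are hypothetical, not VMware terminology:

```python
# Minimal sketch of rule-based placement: pick any storage pool whose
# capabilities cover the application's declared policy.

POOLS = {
    "pool_fast": {"high_performance"},
    "pool_ha":   {"highly_available"},
    "pool_gold": {"high_performance", "highly_available"},
}

def place_vm(required: set) -> str:
    """Return the first pool whose capabilities satisfy the policy."""
    for pool, capabilities in POOLS.items():
        if required <= capabilities:     # every required capability present
            return pool
    raise RuntimeError("no pool satisfies the policy")

print(place_vm({"high_performance", "highly_available"}))  # pool_gold
```

The administrator never names a physical array: the policy set is the only input, which is exactly the decoupling the paragraph describes.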

Two solutions for the same objective
Specifically, VMware offers two solutions to implement this strategy. The first product, released last year, is called VSAN (Virtual SAN). It is a technology embedded directly into the VMware hypervisor that lets you define QoS rules using the servers’ local disks: the administrator creates a pool of servers whose physical disks are concatenated to form a logical unit. There the QoS rules are defined, and the VM administrator uses and applies the available resources based on this approach. Since deployment relies on wizards, no dedicated storage administrator is needed. A concrete, practical implementation of this concept is EVO:RAIL, a converged infrastructure consisting of servers, networking and storage built on the VSAN technology. Another new technology called Virtual Volumes has also appeared (an extension of rule-based VSAN), but it applies to external systems from partners offering SAN or NAS solutions. It is also based on rules (redundancy, performance, availability, etc.), but the administrator can use external, remote SANs or NAS just as with VSAN (without referring to LUNs or volumes).
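The "concatenate local disks into one logical unit" idea can be made concrete with a trivial capacity sketch. Host names and disk sizes are invented, and the sum deliberately ignores the replication overhead a real VSAN cluster would subtract:

```python
# Illustrative sketch of pooling local server disks into one logical
# capacity figure (raw capacity only; replication overhead ignored).

def pool_capacity_gb(servers: dict) -> int:
    """servers: host -> list of local disk sizes in GB."""
    return sum(sum(disks) for disks in servers.values())

cluster = {
    "esx01": [800, 800],
    "esx02": [800, 800],
    "esx03": [800, 800],
}
print(pool_capacity_gb(cluster))  # 4800
```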

© HPC Today 2019 - All rights reserved.
