Software Defined Datacenters: Powering the infrastructure of the future
December 07, 2015

Benefit #1: Cost – When development errors occur, it is more cost-effective to upgrade software than to replace hardware infrastructure.

Benefit #2: Simplicity – Moving intelligence into software reduces network complexity, speeds up the rollout of new services, and simplifies deployment through the software layer.

Benefit #3: Standards – The arrival of standards will allow companies to mix and match infrastructure elements, and to implement safety and security measures very early in the development cycle.

Benefit #4: Agility – Converting physical servers into virtual machines provides an operational agility that traditional physical investments cannot match.

Benefit #5: Flexibility – Once servers and resources are virtualized, creating and assigning them (both internally and externally) according to business requirements becomes far easier, with considerable gains in operations and in lead times.

Benefit #6: Monitoring – The state of the art today provides complete visibility into a company's usage and utilization rates of its IT resources.

In past issues, HPC Review has covered the unprecedented development and benefits of virtualizing workstations and storage infrastructures in a computing environment, allowing remote supercomputing to meet business, academic and research needs. But the changes are also occurring on the infrastructure side: datacenters have begun to change dramatically. It is now the servers and the network itself that are being converted into their virtualized counterparts, with additional key benefits. All industry players offer solutions for linking virtualized environments with their data center counterparts so that, together, they form a single logical whole, offering administrators more security, performance and agility. And ultimately, more competitiveness. HPC and end-to-end virtualization is a winning combination!

Network Virtualization: eliminate inconsistencies
The potential of network virtualization (or SDN) lies in its ability to facilitate organizational change and to support processes in which control of resources is entrusted to business segments outside the network or the enterprise IT infrastructure. This new mode of operation is complex, but the gains can be substantial. Why try to virtualize a network? To reduce costs, and to both simplify the network and add flexibility, which in turn opens up unprecedented usage possibilities. Virtualizing the network is synonymous with Software Defined Networking: it lets a network be modeled at a higher level, in software, rather than through physical layers and hard-wired actions. The idea of virtualizing the network is not new. Among the pioneers was Microsoft which, in order to reduce networking costs between PCs on a WAN, had the idea of overcoming hardware limits by converting some hard-wired tasks into software. Thus the Winmodem software interface was born, replacing the dedicated circuitry (usually a modem) that handled communications between PCs by sending signals over the telephone line. Upgrading the network hardware, usually quite costly, was thereby reduced to a much more affordable software upgrade. That alone was reason enough to shift some of the missions traditionally devoted to hardware to their software defined siblings.

And the network became software
Becoming software defined is certainly the future of the network, through which all digital exchanges take place. Numerous industry actors and manufacturers of network equipment (routers, switches, load balancers, firewalls, etc.) have already jumped on the bandwagon, convinced that tomorrow's network world will be mostly, if not purely, software based. The common conception is that the intelligence lies in the software, which in turn can run on almost any generic hardware, with no peculiarities whatsoever.

This is precisely what Software Defined X is based upon: abstracting away from the hardware layer while retaining its former specifics and strong points, and pulling the organization, orchestration and automation capabilities out of expensive hardware equipment. The result is a network virtualization layer with identical characteristics everywhere, on which businesses can adapt and reinvent their IT infrastructure. This transition is of paramount importance for developers and IT administrators, so that they can operate their infrastructure elements efficiently on the management, application and business sides alike.

In a software defined world, security is of utmost importance. When tomorrow's network infrastructure relies on code, any development mistake that potentially introduces a network vulnerability can have heavy consequences.

The SDN vision: one vision, multiple approaches
From the separation of the Data Plane and the Control Plane, used respectively for task execution and for operational intelligence at the heart of the physical network, three trends have emerged so far.
The Control Plane centralizes the actions to be carried out and the operational intelligence. To send commands to the Control Plane, a virtual layer connects and interacts with it through APIs (Application Programming Interfaces), which is where application developers have an important role to play. At this level, an open source project driven by the community, OpenStack, is used for the exchanges between the top services layer and the Control Plane.

The requests from these services are translated at the control layer into simple commands that are sent to the physical elements scattered across the network. This communication uses the open source OpenFlow protocol, whose adoption is promoted worldwide by Dan Pitt, executive director of the ONF (Open Networking Foundation). Beyond this, there are several ways of building a Software Defined Network architecture, because each network equipment manufacturer has its own vision, strongly influenced by its existing product portfolio.
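To make the layering concrete, here is a minimal sketch, in Python, of how an application could ask a controller's northbound API to program a forwarding rule, which the Control Plane would then translate into OpenFlow commands for the switches. The controller address, endpoint and payload fields are illustrative assumptions, not the API of any particular product.

import requests

# Hypothetical northbound API of an SDN controller (URL and schema are illustrative).
CONTROLLER = "http://sdn-controller.example.local:8080"

def install_forwarding_rule(switch_id, src_ip, dst_ip, out_port):
    # Ask the control plane to program a simple forwarding rule.
    # The controller, not this script, speaks OpenFlow to the physical switch.
    rule = {
        "switch": switch_id,  # data-plane element to program
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": 100,
    }
    resp = requests.post(f"{CONTROLLER}/flows", json=rule, timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # A business application requests connectivity; the Control Plane does the rest.
    print(install_forwarding_rule("sw-edge-01", "10.0.0.10", "10.0.1.20", out_port=3))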

Safety and Speed
According to the virtualization pure players, the hypervisor should remain at the center of an SDN architecture, which allows it to stay closer to the virtual machines. VMware's vision is in fact to have two hypervisors: one on the server, and one on the network through its dedicated NSX hypervisor. This, according to VMware, makes it possible to attain both safety and speed.

Besides open source protocols and hypervisor based environments, there are other ways to virtualize a network while keeping the existing network infrastructure. Every network manufacturer is actively working on SDN solutions able to bridge traditional network infrastructure with an increasingly software defined world. While a full-blown SDN network is yet to be achieved, the evolution is moving forward, and it is time for companies to start taking it into account in their infrastructure evolution plans.

Server virtualization: consolidate and streamline
While VMware did not invent virtualization, the company has greatly contributed to its dissemination and modernization. It is a fact that virtualization needs are skyrocketing, which is no accident given the technological developments and the business needs. Demand is soaring, and it will last! We met Marc Frentzel, Technical Director Europe, and Magdeleine Bourgoin, Technical Director France, to better understand the benefits of server virtualization.

Why virtualize servers?
Virtualization generates many immediately noticeable and quantifiable benefits, since the replacement or extension of IT assets is quickly amortized when virtualized in whole or in part. It is not the only benefit: the flexibility in resource allocation that a virtualized estate provides results in a double gain, both operational and financial. According to our interlocutors, the experimentation phase is finally over, and organizations can now push their digital transformation further with improved and robust virtualization infrastructure and solutions. Indeed, the first phase was crucial for converting the bulk of the hardware into virtualized resources. Server virtualization is unique in that it concerns all sectors of an organization, through the use of related technologies on the network (NSX) and storage (vVolumes, vCloud Air) sides.

Virtualization improves the use of IT equipment
The second step that companies are preparing to take concerns the steering and provisioning of application resources, which are already largely virtualized. Steering and controlling them requires new forms of supervision. This involves using monitoring and automation functions to better manage existing resources while maintaining a high level of SLA. This technical steering aims to help organizations keep an eye on infrastructure economics through chargeback, that is, billing resource utilization back to the business. On an operational level, let's not forget that 50% of the virtualized workloads in the world are now occupied by SAP, Microsoft and Oracle applications.
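As a rough illustration of the chargeback idea, the following sketch computes a monthly bill per business unit from metered resource utilization; the rates and the usage records are invented for the example.

# Hypothetical per-unit rates (invented for illustration).
RATES = {"vcpu_hours": 0.03, "ram_gb_hours": 0.01, "storage_gb_months": 0.10}

# Usage records as a monitoring tool might export them (sample data).
usage = [
    {"business_unit": "finance",   "vcpu_hours": 12000, "ram_gb_hours": 48000, "storage_gb_months": 2000},
    {"business_unit": "logistics", "vcpu_hours":  3000, "ram_gb_hours":  9000, "storage_gb_months":  500},
]

def chargeback(record):
    # Bill each metered resource at its rate and sum the total for the business unit.
    return sum(record[metric] * rate for metric, rate in RATES.items())

for record in usage:
    print(f"{record['business_unit']}: {chargeback(record):.2f} EUR")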

Overprovisioning dangers
Virtualization has many benefits, but also well-identified pitfalls. Deploying VMs has become so simple that administrators often lose track of how many they have deployed, and which ones, at any given point in time. The result is unused VMs that continue to eat up computing power and resources. The first step of a rearchitected infrastructure is to regularly check for and identify unused virtual machines in order to reclaim and redispatch the dormant resources on the fly. This ability is crucial in a world where the number of VMs is exploding and where monitoring them is not always easy. In this regard, VMware offers a technology (NSX) that adds an arbitration layer to enterprise networks to carry out these operations in the most transparent and effective manner.
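A minimal sketch of such a reclamation pass, assuming a monitoring backend able to report average CPU usage per VM over a period, could look like the following; the list_vms and avg_cpu_percent hooks are hypothetical placeholders for whatever inventory and metrics APIs are actually in place.

from datetime import timedelta

CPU_THRESHOLD_PERCENT = 2.0           # below this average load a VM is considered dormant
OBSERVATION_WINDOW = timedelta(days=30)

def find_dormant_vms(list_vms, avg_cpu_percent):
    # Return the VMs whose average CPU stayed under the threshold for the whole window.
    # list_vms() and avg_cpu_percent(vm, window) are hypothetical hooks into the
    # virtualization inventory and the monitoring backend.
    dormant = []
    for vm in list_vms():
        if avg_cpu_percent(vm, OBSERVATION_WINDOW) < CPU_THRESHOLD_PERCENT:
            dormant.append(vm)
    return dormant

# The resulting list can then be reviewed before the VMs are powered off and their resources reclaimed.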

A more flexible control of resources
An important concept is that of environment continuity: the need to control the allocated resources so as to preserve flexibility without the risk of interrupting operations. It is now possible to move workloads without interruption, including to partner data centers outside the company, which preserves the freedom to choose between a public cloud and a private cloud. This overflow capability allows companies to keep their costs under control by shifting them to Opex.

“Pay as you grow”: rational infrastructure growth, with costs tied to actual needs
Not so long ago, an organization needed to invest heavily in IT equipment in anticipation of its activity growth, which induced additional costs related to installation, skills, dedicated staff and specific expenses (floor space, electricity, air conditioning). With the advent of virtualized servers, these costly operations are becoming as trivial as renting a car, with two decisive advantages. On one hand, the lag between the equipment, operation and amortization stages disappears. A peak of activity, a big project, a large order? Spinning up external resources can be done in record time. Does the use of these resources need to be extended? Easy: simply extend the contract on a weekly or monthly basis. On the other hand, it has become a matter of days, if not hours, to provision capacity that was previously only achievable through physical equipment!
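To illustrate the arithmetic behind pay as you grow, the sketch below compares owning extra capacity year-round with renting the equivalent virtualized capacity only for the weeks a peak actually lasts; all figures are made-up assumptions, not real pricing.

# Made-up figures for illustration only.
CAPEX_SERVER = 12000.0        # purchase price of one extra server (EUR)
OPEX_OWNED_PER_WEEK = 60.0    # power, cooling, floor space, staff share (EUR/week)
RENTAL_PER_WEEK = 450.0       # equivalent virtualized capacity rented on demand (EUR/week)

def owned_cost(weeks_in_service=52):
    # Capex amortized over one year plus weekly running costs.
    return CAPEX_SERVER + OPEX_OWNED_PER_WEEK * weeks_in_service

def rented_cost(weeks_needed):
    # Pay only for the weeks the peak actually lasts.
    return RENTAL_PER_WEEK * weeks_needed

for weeks in (4, 12, 26):
    print(f"peak of {weeks} weeks: owned {owned_cost():.0f} EUR vs rented {rented_cost(weeks):.0f} EUR")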
