Transforming Your Data Center
May 31, 2016

Jack Welch, best known for his role as CEO of General Electric for two decades, is quoted as saying, “An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.” Starting with the principles from Chapter 5, it’s time to begin the transformation of your data center. This must be done immediately!

Look at what Jack says: the competitive advantage comes from translating knowledge into action rapidly. Data center transformation sounds like a lofty goal, but it needn’t be quite so scary. There are plenty of opportunities to begin effecting radical change without tearing everything down and starting from square one.

Once your IT organization is properly positioned to make this transformation valuable to the business as a whole, it can start picking off the low-hanging fruit. Some examples will be discussed shortly, but the possibilities are as varied as the companies in need of transformation. As work on these easy targets helps the software defined data center (SDDC) take shape, the transformation will gain momentum. By the end, the new data center will look nothing like the old.

Align the Data Center and Business Needs
Prior to beginning this transformation process, it’s important to evaluate the motives for the transformation. It’s easy to get caught up in the technical aspects of the transformation and in the exciting new tools and processes. But transforming a data center is only valuable for one reason, and it’s the same reason that the data center exists in the first place: the data center makes the business money.

With that in mind, the first step to transformation is to take a hard look at which transformation choices will affect the bottom line.

For example, a radical overhaul to turn all of the blue LEDs in the storage arrays into newer, sleeker green LEDs is not likely to be well received by the board. However, if these transformations lower operational expenses by reducing administrative complexity, they will be better received. Or do these transformations increase the accuracy of the final product, reducing the number of products that are discarded as faulty or returned? If so, that’s another way to garner support.
Perhaps the transformations make another business unit happy; that’s always a good way to find support for a project! If the finance team needs updates to their portion of the website to be completed more quickly, changing the development workflow and using automation and orchestration to increase the speed of iteration on development projects will make them happy.

Regardless of what the benefit to the business is, you must have a clear goal in mind before beginning this transformation. Implementing a hyperconverged solution to aid in building an SDDC simply for the sake of having a software defined data center is missing the point and is liable to get someone fired.

On the other hand, clearly defining business transformation objectives and achieving business growth by meeting these objectives using the principles and knowledge within this book is a surefire way to garner yourself a pat on the back, a promotion, a raise, a lead assignment on a high visibility project, or what have you.

So, what’s the best way to make sure that a project looks good from the start? Get some easy wins right out of the gate. This makes the project look good to stakeholders and increases support for the project moving forward. Let’s look at some ways to get started on the right foot.

Where to Address the Low-Hanging Fruit
No business is exactly the same as any other, so there can be no conclusive blueprint for completing the transformation to a modern data center. However, there are a number of technology use cases that apply to a great number of businesses. It’s quite likely that one of these use cases applies to your business in one way or another. Any one of these use cases can be the perfect opportunity to show the value of the software defined approach by taking a technology and business process that the organization is familiar with and streamlining it.

Typically, these types of transformations exhibit a snowball effect. As the transformation goes on, code can be reused, knowledge about the infrastructure gained in one phase can accelerate another, and so on. That’s why it’s wise to begin the data center transformation with a technology that is most familiar to the team, one that has specific value to the business, and one that is extensible into other areas of the data center: the low-hanging fruit. Because of the team’s in-depth knowledge of the technology, the project will be easier to complete than the same transformation on a product or system they’re unfamiliar with. In other words, they have a high ability to execute technically. The business value will also deliver an immediate return on the investment in the project. And ensuring that the work can be reused and extended into other areas of the data center makes the project more efficient. The Venn diagram in Figure 6-1 represents the factors in identifying low-hanging fruit for data center transformation.

Test/Dev
The software defined transformation may affect the testing and development (Test/Dev) environments of an organization in a bigger way than any other part of the business. Because the purpose of a Test/Dev environment is rapid iteration, the quicker fresh environments can be created and destroyed, the faster developers can make progress on projects.

Plus, the more accurately the environments are reproduced each time, the less error-prone the Test/Dev process becomes. And the more automated the creation and destruction of environments, the less time operations and development staff waste performing repetitive operations. Their attention can then be directed to other important tasks.
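To make that concrete, here is a minimal sketch, in Python, of what a repeatable environment lifecycle might look like. The spec fields and the provision/teardown functions are placeholders for whatever orchestration API an organization actually uses; the point is that the same declarative spec yields the same environment every time.

```python
"""Sketch: a declarative, repeatable Test/Dev environment lifecycle.

Illustration only -- provision()/teardown() stand in for calls to whatever
orchestration tooling is actually in place; all names and fields here are
hypothetical.
"""
from dataclasses import dataclass


@dataclass
class EnvironmentSpec:
    name: str
    vm_count: int = 2
    network: str = "dev-isolated"      # sandboxed network segment
    dataset: str = "prod-snapshot"     # refreshed copy of production data


def provision(spec: EnvironmentSpec) -> None:
    # In a real pipeline these would be idempotent API calls, so the same
    # spec always yields the same environment -- the accuracy benefit.
    print(f"creating {spec.vm_count} VMs on '{spec.network}' for {spec.name}")
    print(f"attaching dataset '{spec.dataset}'")


def teardown(spec: EnvironmentSpec) -> None:
    print(f"destroying environment {spec.name} and releasing its resources")


if __name__ == "__main__":
    env = EnvironmentSpec(name="feature-1234")
    provision(env)   # spun up on demand for a test run...
    teardown(env)    # ...and destroyed the moment the run completes
```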

Test/Dev environments are low-hanging fruit for many organizations because development staff can immediately see the benefit of the work being done, and are sometimes even eager to help. Getting input from the developers can help create an agile infrastructure that caters perfectly to their needs.

Software Defined Networking
Software defined networking (SDN) can be a boon to the development process in that it can enable the rapid deployment of applications that are completely isolated from the rest of the environment. It is all too common in a legacy environment for development components to get their wires crossed with production components or (more commonly) with an identical development component that another developer is using.

SDN can allow the developers to sandbox their work with no additional effort required, which leads to less frustration and quicker, more accurate testing.
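As an illustration, the following sketch carves out one segment per developer. The SdnController class is hypothetical; a real deployment would call its SDN platform's own API, but the principle is the same: isolation is baked into the provisioning code rather than left to the developer.

```python
"""Sketch: giving each developer an isolated network segment automatically.

SdnController is a hypothetical stand-in for a real SDN controller's API;
the point is that isolation comes from the provisioning code, not from
developer effort.
"""


class SdnController:
    def create_segment(self, name: str, isolated: bool = True) -> str:
        # A real call would return a segment or port-group identifier.
        print(f"segment '{name}' created (isolated={isolated})")
        return f"seg-{name}"


def sandbox_for(developer: str, controller: SdnController) -> str:
    # One segment per developer: identical components can exist side by
    # side without ever seeing each other's (or production's) traffic.
    return controller.create_segment(f"dev-{developer}", isolated=True)


if __name__ == "__main__":
    ctrl = SdnController()
    for dev in ("alice", "bob"):
        sandbox_for(dev, ctrl)
```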

Software Defined Storage
Software defined storage (SDS) can be leveraged to automate the copying of production data to a testing platform in a timely, efficient way. The more quickly and accurately the production data can be replicated in the testing environment, the more quickly deployments can be validated and approved. Also, because these environments typically contain many copies of the same data, SDS can provide deduplication mechanisms that reduce the capacity needed to store this data. As Test/Dev can be one of the most complicated environments to transform, the effort expended here will likely make other areas of transformation simpler down the road.
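A rough sketch of that refresh workflow might look like the following, with a hypothetical SdsApi standing in for whichever platform's snapshot and clone primitives are actually available.

```python
"""Sketch: refreshing a Test/Dev copy of production data through an SDS layer.

SdsApi is hypothetical; real platforms expose snapshot/clone primitives
under their own names. The value is that the clone is space-efficient and
scriptable, so refreshes are fast and repeatable.
"""
import datetime


class SdsApi:
    def snapshot(self, volume: str) -> str:
        snap = f"{volume}@{datetime.date.today()}"
        print(f"snapshot taken: {snap}")
        return snap

    def clone(self, snapshot: str, target: str) -> None:
        # A copy-on-write clone shares unchanged blocks with production,
        # so many test copies consume little additional capacity.
        print(f"clone of {snapshot} presented as {target}")


def refresh_test_data(sds: SdsApi) -> None:
    snap = sds.snapshot("prod-db-vol")
    sds.clone(snap, "test-db-vol")


if __name__ == "__main__":
    refresh_test_data(SdsApi())
```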

Remote/Branch Offices
A software defined approach in remote/branch office (ROBO) situations can really enable an IT organization to accomplish more with less. One of the challenges with remote offices is providing the level of functionality the users want and the level of availability the business needs without spending the money it takes to build a Tier 2 data center in the office. By leveraging software defined compute (SDC), SDS, and SDN, the remote office deployment can be more robust and agile at a substantially reduced price point.

Software Defined Compute
SDC leads the way. In a ROBO, physical space needs to be carefully considered. SDC allows the creation of a fairly sizable compute deployment in a (relatively speaking) small physical footprint. Fewer physical servers also ease the cooling challenge and consume less power overall. To top it off, SDC makes it easier to manage all of the remote site’s compute workloads from the IT office.

Software Defined Storage
SDS creates advantages similar to those of SDC. When used to create a hyperconverged infrastructure (HCI), a separate storage appliance can be avoided and storage can be included in the physical servers. This again reduces noise, heat, and power usage, all of which are important in a ROBO setting. The reduced footprint and increased simplicity also make it less likely that a dedicated IT resource will be needed in the branch office. SDS might also allow storage workloads to be spread between storage local to the branch office and storage residing in the main company data center. This can increase resilience and control while maintaining performance and a good user experience.

Software Defined Networking
Lastly, SDN allows the rapid creation of new networks for new offices. It can also deliver capabilities that have traditionally been complex, like stretched Layer 2, in a matter of moments by leveraging technologies such as VXLAN. SDN coupled with Network Function Virtualization (NFV) also allows the easy deployment of network functions at the remote office, such as firewalls, distributed routers, and load balancers, where deploying a physical appliance would have been challenging.
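As a simplified illustration, stretching a segment to a new branch and pushing out virtual network functions could be scripted along these lines; the function names, VNI, and site names below are hypothetical, and the real mechanics would be handled by whichever SDN/NFV platform is in use.

```python
"""Sketch: stretching a Layer 2 segment to a new branch office with VXLAN
and deploying virtual network functions instead of physical appliances."""


def stretch_segment(vni: int, sites: list[str]) -> None:
    # One VXLAN Network Identifier (VNI) presented at every site gives the
    # branch the same L2 segment as headquarters, over ordinary routed links.
    for site in sites:
        print(f"VNI {vni}: configuring VTEP at {site}")


def deploy_virtual_appliance(kind: str, site: str) -> None:
    # NFV: the firewall or load balancer is just another VM pushed from
    # code, with no physical appliance to ship and rack at the branch.
    print(f"deploying virtual {kind} at {site}")


if __name__ == "__main__":
    stretch_segment(vni=5001, sites=["hq", "branch-london"])
    for appliance in ("firewall", "load-balancer"):
        deploy_virtual_appliance(appliance, "branch-london")
```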

Server Virtualization
Server virtualization is the heart of most of today’s data centers. Therefore, it makes sense that it’s one of the most well understood use cases for SDDC transformation. Deep knowledge of the system will provide the needed leverage in the early stages of transformation. Also, because the production servers are one of the most visible aspects of the IT organization to end users, a successful transformation in this area will help create support for future projects to accomplish the same results in other areas.

The SDDC transformation has already begun in most data centers, but if it hasn’t in yours, it must begin immediately. The value of intelligent, optimized, automated server virtualization is huge and provides operational and financial benefits in almost every case. If this process has begun with some basic server virtualization, automating deployments and configurations is the next big step. Leveraging configuration management tools like Puppet or Chef to ensure consistency throughout the environment and radically increase the speed of provisioning will pay dividends.
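The toy sketch below is not Puppet or Chef code, but it captures the desired-state idea behind those tools: declare what a server should look like, and applying the same declaration repeatedly converges the server without side effects, which is what keeps a large virtualized estate consistent.

```python
"""Sketch: the desired-state idea behind tools like Puppet or Chef.

Toy illustration only; resource names and the inventory are made up.
"""

DESIRED_STATE = {
    "packages": ["ntp", "openssh-server"],
    "services": {"ntp": "running"},
}

# Pretend inventory of what is actually on the server right now.
current = {"packages": ["openssh-server"], "services": {"ntp": "stopped"}}


def converge(desired: dict, actual: dict) -> None:
    for pkg in desired["packages"]:
        if pkg not in actual["packages"]:
            print(f"installing {pkg}")            # real tool: package resource
            actual["packages"].append(pkg)
    for svc, state in desired["services"].items():
        if actual["services"].get(svc) != state:
            print(f"ensuring {svc} is {state}")   # real tool: service resource
            actual["services"][svc] = state


if __name__ == "__main__":
    converge(DESIRED_STATE, current)   # makes changes
    converge(DESIRED_STATE, current)   # second run: already converged, no-op
```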

Software Defined Storage
SDS is likely the next big frontier for many data centers today. The challenge is to abstract storage, whether it be monolithic or hyperconverged, and control it with policy and with code. SDS in the server virtualization arena is really a means to an end; creating the SDS platform exposes helpful interfaces that allow for orchestration, higher performance, higher efficiency, and potential space reduction.

To use physical space reduction as an example: an SDS solution that unifies a number of disparate storage arrays might be able to offer global deduplication. This ability alone can dramatically reduce the physical storage footprint and utility costs in the data center.
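A back-of-the-envelope illustration, with made-up block contents, shows why unifying arrays under one deduplication domain pays off: blocks are identified by a content hash, and identical blocks that live on formerly separate arrays are stored only once.

```python
"""Sketch: why *global* deduplication shrinks the physical footprint.

The block contents and figures are invented purely to show the arithmetic.
"""
import hashlib

# Two "arrays" holding overlapping data, e.g. OS images cloned everywhere.
array_a = [b"base-os-image", b"app-binaries", b"customer-db-1"]
array_b = [b"base-os-image", b"app-binaries", b"customer-db-2"]

# One deduplication domain across both arrays: store each unique block once.
unique = {hashlib.sha256(block).hexdigest() for block in array_a + array_b}

total_blocks = len(array_a) + len(array_b)
print(f"logical blocks: {total_blocks}, physically stored: {len(unique)}")
print(f"space saved: {1 - len(unique) / total_blocks:.0%}")
```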

Software Defined Networking
SDN in the server virtualization world allows an IT organization, especially service providers and high-security environments, to leverage security measures like micro-segmentation to fully isolate east-west traffic. This level of firewall protection would be problematic to build and operate with physical firewalls, but SDN makes it not only possible but relatively easy. Besides distributed firewalls, SDN might also provide distributed routing and switching. This allows a server virtualization network to scale with much less complexity than a traditional architecture.
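Conceptually, a micro-segmentation policy is a short list of allowed east-west flows with an implicit deny at the end, enforced at every virtual NIC. The tags, ports, and rule format below are hypothetical, but the sketch shows the evaluation logic.

```python
"""Sketch: a micro-segmentation policy for east-west traffic.

Rule names, tags, and ports are hypothetical; real SDN platforms express
the same idea (allow only the flows an application needs, deny the rest)
in their own policy language.
"""

POLICY = [
    # (source tag, destination tag, port, action)
    ("web-tier", "app-tier", 8443, "allow"),
    ("app-tier", "db-tier", 5432, "allow"),
    ("any",      "any",     None, "deny"),   # default deny east-west
]


def evaluate(src: str, dst: str, port: int) -> str:
    for rule_src, rule_dst, rule_port, action in POLICY:
        if (rule_src in (src, "any") and rule_dst in (dst, "any")
                and rule_port in (port, None)):
            return action
    return "deny"


if __name__ == "__main__":
    print(evaluate("web-tier", "app-tier", 8443))  # allow
    print(evaluate("web-tier", "db-tier", 5432))   # deny: web can't hit the DB
```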

Big Data/Analytics
In the big data space, scale is king. Being able to scale up and down rapidly based on the job that’s being run is critical, as idle resources are expensive resources.

Software Defined Compute
SDC is the key to achieving this. With dedicated physical compute nodes for job processing, resources sit idle between jobs and are wasted. With SDC, the orchestration platform can create and destroy job nodes on the fly to accommodate the size of the job and the availability of existing nodes.
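A minimal sketch of that scaling decision might look like the following; the add/remove messages stand in for calls to whatever orchestration platform is actually in play, and the tasks-per-node figure is an arbitrary assumption.

```python
"""Sketch: sizing the compute pool to the job instead of the other way round."""


def nodes_needed(job_tasks: int, tasks_per_node: int = 8) -> int:
    # Ceiling division: enough nodes to run every task, no more.
    return -(-job_tasks // tasks_per_node)


def scale_pool(current_nodes: int, job_tasks: int) -> int:
    target = nodes_needed(job_tasks)
    if target > current_nodes:
        print(f"adding {target - current_nodes} node(s) for this job")
    elif target < current_nodes:
        print(f"removing {current_nodes - target} idle node(s)")
    return target


if __name__ == "__main__":
    pool = 2
    pool = scale_pool(pool, job_tasks=40)  # large job: grow to 5 nodes
    pool = scale_pool(pool, job_tasks=0)   # job finished: shrink to 0
```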

While compute agility is important in big data, the name “big data” also implies there’s a lot of it, and all of it must be stored. Storage agility is also critical.

Software Defined Storage
The nature of big data environments is that the data storage requirements are always growing. The storage needs to perform faster to allow for expedient analysis of that data, and the capacity needs to grow to allow more data to be stored. SDS is the only painless way to meet these needs. With silos of storage, or even a monolithic scale-out storage array, managing the changing needs of big data storage without pain is unlikely.

Thanks to the abstraction that SDS can provide, however, scaling the performance and capacity requirements of the big data workloads is simple. SDS also allows the workloads to be controlled by policy, allowing the precise needs of the workload to be met automatically.
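For example, a policy-driven placement scheme could be as simple as the following sketch. The policy names, attributes, and aging rule are hypothetical, but the pattern is the one described above: declare the workload's needs and let the SDS layer place the data accordingly.

```python
"""Sketch: policy-driven storage for a big data workload.

Field names and thresholds are hypothetical illustrations only.
"""

STORAGE_POLICIES = {
    "hot-ingest":     {"media": "flash",    "replicas": 2, "min_iops": 20000},
    "warm-analytics": {"media": "hybrid",   "replicas": 2, "min_iops": 5000},
    "cold-archive":   {"media": "capacity", "replicas": 1, "dedup": True},
}


def assign_policy(dataset: str, age_days: int) -> str:
    # A trivial placement rule: data cools as it ages.
    if age_days < 7:
        name = "hot-ingest"
    elif age_days < 90:
        name = "warm-analytics"
    else:
        name = "cold-archive"
    print(f"{dataset}: applying policy '{name}' -> {STORAGE_POLICIES[name]}")
    return name


if __name__ == "__main__":
    assign_policy("clickstream-2024-06", age_days=2)
    assign_policy("clickstream-2023-12", age_days=200)
```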
