
Goodbye big iron production environment, hello DevOps


Find out how combining containers with network performance management solutions can bring greater velocity and portability to your DevOps activities.

The concept of containers has long been part of the IT infrastructure landscape, albeit in different guises. The mainframe LPAR is a container, the storage area network is a container, and so too is the virtual server.

Containers have a great pedigree, and their role is still evolving. The latest container innovation is Docker. The open-source project has already gained support from Microsoft, IBM, Rackspace and Google, and customer numbers are growing rapidly.

In case you’ve missed out on all the excitement, here’s a quick summary. The premise is simple: Docker uses containers instead of virtual machines to enable multiple applications to run at the same time on the same server.

In today’s application economy, that’s already a big win, but it gets better. Containers share the OS kernel’s resources while keeping applications and services separated, which means they are not as resource-hungry as their virtual machine counterparts – another big win for the CIO wanting to do more with less.
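To make that concrete, here’s a rough sketch (not an official CA example) using the Docker SDK for Python: it starts a throwaway container with a hard memory cap and tears it down again, all in a few seconds. The image and container names are just placeholders.

import docker                      # Docker SDK for Python (pip install docker)

client = docker.from_env()         # connect to the local Docker daemon

# Start a throwaway container with a hard 256 MB memory cap.
# "nginx:alpine" is simply a stand-in for any lightweight service image.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    name="container-demo",
    mem_limit="256m",
)

# Because the container shares the host kernel, it is up almost immediately.
container.reload()
print(container.status)

# Cleaning up takes seconds, not the minutes a VM teardown would need.
container.stop()
container.remove()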

A solid performance

OK, so you get it, containers rock. But what’s the link to network performance management? We need to wave goodbye to the old resource-hungry big iron production environment and say hello to the world of DevOps.

When developers are testing new business applications or network delivery technology, they need to ensure performance is not going to be impacted.

The application economy is all about performance – just a few seconds of downtime can send customers and their cash in the direction of your competitors.

With Docker, network performance management solutions can be containerized to simplify and accelerate the testing process.

For example, say a communications service provider (CSP) wants to deploy some new networking technology. It must verify that the new components will deliver the performance metrics needed for detailed customer reporting.

Thanks to Docker’s agility, developers can hook up the organization’s existing network performance management solution and see how it behaves with the new components, without waiting for a VM to be spun up or a costly test environment to be deployed.
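As a rough illustration of that workflow (the image names below are hypothetical, not CA products), the developers could drop a containerized monitoring probe and a simulated version of the new network element onto the same Docker network and let them talk to each other:

import docker                      # Docker SDK for Python

client = docker.from_env()

# An isolated bridge network for the trial, kept away from production.
net = client.networks.create("npm-trial-net", driver="bridge")

# Hypothetical image simulating the new network component under evaluation.
new_element = client.containers.run(
    "example/new-network-element:latest",
    detach=True,
    name="new-element",
    network="npm-trial-net",
)

# Hypothetical containerized build of the existing performance management probe,
# pointed at the simulated element by container name (Docker's built-in DNS).
probe = client.containers.run(
    "example/npm-probe:latest",
    detach=True,
    name="npm-probe",
    network="npm-trial-net",
    environment={"TARGET_HOST": "new-element"},
)

# Check that the probe is collecting metrics from the new element.
print(probe.logs(tail=20).decode())

# Tear everything down once the test run is finished.
for c in (probe, new_element):
    c.stop()
    c.remove()
net.remove()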

Greater velocity and portability

A containerized approach can also be used to verify whether an organization’s existing network performance management solution will work alongside a new application.

Docker allows organizations to easily package an application with all of its dependencies into a standardized unit for software development and testing.
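In practice, that packaging is little more than a Dockerfile and a build step. As a sketch (with a hypothetical application directory and tag), the same Python SDK can build the image and run it as a disposable test instance:

import docker                      # Docker SDK for Python

client = docker.from_env()

# Build an image from ./my-app, which is assumed to contain a Dockerfile
# describing the application and all of its dependencies.
image, build_logs = client.images.build(path="./my-app", tag="my-app:test")

# Run the packaged application as a disposable test instance; the image's
# default command is assumed to run the tests and exit. remove=True deletes
# the container automatically afterwards.
output = client.containers.run("my-app:test", remove=True)
print(output.decode())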

Let’s go back to our CSP. It wants to introduce a mobile app so that its engineers can access network performance data while off-site and resolve problems faster.

Using Docker, the CSP can evaluate how data from its existing performance management solution will be visualized in the new app, without any impact on its production environments or customer services.
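A sketch of that scenario might look like the following (again with hypothetical image names): a containerized snapshot of the performance data API and the new app’s backend run side by side on their own network, so the evaluation never touches the live systems.

import docker                      # Docker SDK for Python

client = docker.from_env()

# A dedicated network keeps the evaluation isolated from production services.
client.networks.create("app-eval-net", driver="bridge")

# Hypothetical container serving a snapshot of the performance management data.
data_api = client.containers.run(
    "example/npm-data-api:snapshot",
    detach=True,
    name="npm-data-api",
    network="app-eval-net",
)

# The new mobile app's backend, configured to read from the snapshot container
# instead of the live production system.
app_backend = client.containers.run(
    "example/engineer-app-backend:latest",
    detach=True,
    name="app-backend",
    network="app-eval-net",
    environment={"PERF_DATA_URL": "http://npm-data-api:8080"},
    ports={"8080/tcp": 9090},   # expose on the host so the mobile app build can point at it
)

print(app_backend.logs(tail=10).decode())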

Regardless of the use case, containers mean one thing: greater velocity and portability. Customers don’t wait in the application economy. To survive and succeed, organizations need to move fast.

 


