Increase Uptime with the Right Code Deployment Strategy

Faster is not necessarily better. The push to launch new applications and services as quickly as possible can cost you in the long run. A 2013 Gartner study predicted that, within a few years, ineffective software release management would cause 80% of production environment incidents in large organizations. If keeping your network or applications highly available and secure is important to you, then you need to implement tight controls on how your software is released.

Deployment Environment Design for Uptime

The first principle in protecting live networks from out-of-control development is to separate the development and production environments. Anyone who makes uncontrolled changes on a live network shows a significant lack of technical professionalism and is gambling with uptime. Several structures are used in the software deployment process; the simplest progression includes three separate stages: development, testing, and production.
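To make the separation concrete, here is a minimal sketch, assuming a hypothetical APP_ENV variable and placeholder settings, of an application selecting its configuration by environment instead of hard-coding production values:

```python
import os

# Hypothetical per-environment settings; real values would come from a
# secrets manager or deployment tooling, not from source control.
CONFIGS = {
    "development": {"db_url": "sqlite:///dev.db", "debug": True},
    "testing": {"db_url": "sqlite:///test.db", "debug": True},
    "production": {"db_url": "postgresql://prod-db.example.com/app", "debug": False},
}

def load_config() -> dict:
    """Pick settings based on the APP_ENV variable, defaulting to development."""
    env = os.environ.get("APP_ENV", "development")
    if env not in CONFIGS:
        raise ValueError(f"Unknown environment: {env!r}")
    return CONFIGS[env]

if __name__ == "__main__":
    print(load_config())
```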

Development may happen on an individual code writer’s workstation. Of course, the developer will need to install the necessary tools of the trade, which vary depending on the coder’s objectives and the programming languages used. At a minimum, a developer usually needs a text editor, a shell, and a compiler or interpreter, and may also need a web server or some other platform to try out programs. A developer may also have a sandbox environment set up on a remote server, or use virtual machines to try out ideas.
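As one illustration, a disposable sandbox on a workstation can be as simple as an isolated virtual environment. The sketch below uses Python’s standard venv module; the path is hypothetical:

```python
import venv
from pathlib import Path

# Hypothetical location for a disposable sandbox on the developer's machine.
sandbox = Path.home() / "sandboxes" / "idea-42"

# Build an isolated environment with its own interpreter and pip, so
# experiments never touch system-wide packages, let alone production.
venv.EnvBuilder(with_pip=True, clear=True).create(sandbox)

print(f"Sandbox ready; activate it with: source {sandbox}/bin/activate")
```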

Testing is an invaluable step in software development. Applications should be tested not only for functionality, but also for security and performance. A true test environment mirrors the production environment: it is a way to see whether the code actually does what it’s supposed to do before it is launched onto a live network. Test cases are used to ensure the finished product meets the software requirements. If it doesn’t, then it’s back to development.
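As a small illustration, a test case ties the finished product back to its requirements and flags the gap when they diverge. The function and the discount rule here are hypothetical:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: discounts are capped at 50 percent."""
    percent = min(percent, 50.0)
    return round(price * (1 - percent / 100), 2)

class DiscountRequirements(unittest.TestCase):
    def test_standard_discount(self):
        self.assertEqual(apply_discount(100.0, 20.0), 80.0)

    def test_discount_is_capped(self):
        # Requirement: no order may be discounted by more than half.
        self.assertEqual(apply_discount(100.0, 90.0), 50.0)

if __name__ == "__main__":
    unittest.main()
```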

A four-step structure for deployment uses the categories development, testing, staging, and production. In this case, the staging environment is a duplicate of the production environment. One variant of this method is to have beta users try out the software before releasing it to everyone, or to load test it by simulating the user base the production environment normally handles.
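A rough sketch of that kind of load test might fire concurrent requests at a staging URL and report failures and latency. The URL, request count, and concurrency below are placeholders:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

STAGING_URL = "https://staging.example.com/health"  # placeholder staging endpoint
REQUESTS = 200      # total requests to send
CONCURRENCY = 20    # simulated simultaneous users

def hit(url: str) -> tuple[bool, float]:
    """Send one request and return (success, latency in seconds)."""
    start = time.monotonic()
    try:
        with urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(hit, [STAGING_URL] * REQUESTS))
    failures = sum(1 for ok, _ in results if not ok)
    avg_latency = sum(latency for _, latency in results) / len(results)
    print(f"{failures} failures out of {REQUESTS}, average latency {avg_latency:.3f}s")
```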

Release Management to Control Uptime

Deploying new software requires a process; it shouldn’t be done on the fly. The trend now is for application development to be fast, agile, and flexible. One way this is achieved is by using microservices and containers to abstract programs from the physical hardware and operating systems. But new technologies are no excuse for sloppy deployments: the same precautions and care should be taken as were used in traditional software releases.
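One way to keep that discipline with containerized releases is to gate the new version on health checks before it receives any production traffic. The sketch below is illustrative only; the health endpoint, port, and promotion step are assumptions rather than a prescribed workflow:

```python
import time
from urllib.request import urlopen

NEW_VERSION_URL = "http://localhost:8081/health"  # hypothetical: the freshly started container
CHECKS = 5              # consecutive healthy probes required
INTERVAL_SECONDS = 10   # pause between probes

def healthy(url: str) -> bool:
    """One health probe; any exception or non-200 response counts as a failure."""
    try:
        with urlopen(url, timeout=3) as resp:
            return resp.status == 200
    except OSError:
        return False

def release_gate() -> bool:
    """Require several consecutive healthy probes before promoting the release."""
    for _ in range(CHECKS):
        if not healthy(NEW_VERSION_URL):
            return False
        time.sleep(INTERVAL_SECONDS)
    return True

if __name__ == "__main__":
    if release_gate():
        print("New version looks healthy: route traffic to it.")
    else:
        print("Health checks failed: keep traffic on the old version and roll back.")
```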

The key to uptime is to stick to best practices. That might mean adhering to local working instructions within the company, or it might involve complying with recognized IT guidelines. There are formal processes in the framework set out by the Information Technology Infrastructure Library (ITIL), including practices known as IT Service Management (ITSM). We couldn’t possibly cover the details of the framework here, but we recommend further investigation if you’re interested.

Some companies may want to tie their release management practices to their change management program. A more regimented process can help ensure that every aspect of availability, security, and performance is covered, especially when dealing with customer-facing applications where scheduled maintenance windows and appropriate advance notification of downtime are required. We covered this important topic in our blog post “Decrease Downtime with Change Management”.
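For example, a release script can refuse to run outside the agreed maintenance window. The window, day, and times below are purely illustrative:

```python
from datetime import datetime, time

# Hypothetical agreed window: Sundays, 02:00-04:00 local time.
WINDOW_WEEKDAY = 6          # Monday is 0, so Sunday is 6
WINDOW_START = time(2, 0)
WINDOW_END = time(4, 0)

def within_maintenance_window(now: datetime | None = None) -> bool:
    """Return True only during the scheduled, pre-announced window."""
    now = now or datetime.now()
    return now.weekday() == WINDOW_WEEKDAY and WINDOW_START <= now.time() < WINDOW_END

if __name__ == "__main__":
    if within_maintenance_window():
        print("Inside the maintenance window: proceed with the deployment.")
    else:
        print("Outside the maintenance window: abort and reschedule.")
```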

Software Release and Uptime

Coding errors on a developer’s workstation are no big deal. Sometimes the coder asks, “What if I do this?” A few taps on the keyboard and the answer appears. It’s part of developing. Errors in a testing environment can be embarrassing, especially if the team is large and some important managers are monitoring the tests. But identifying mistakes before rolling out software is a good idea, and having the chance to correct them is beneficial.

But launching software before it has been rigorously tested is a definite no-no. Depending on the size of the organization and the type of transactions involved, faulty software can upset your best customers and even create a public relations nightmare. Consider the programmers whose code used imperial units instead of metric and caused NASA’s Mars Climate Orbiter to be lost as it approached the planet. We don’t know their names, but the mistake must have weighed heavily upon them when it was discovered.

We hear about outages at big service providers fairly regularly. To find out more, have a look at “What Went Down in 2017”. The human errors involved in some big outages may be played down by clever wordsmiths and attorneys — but you can be sure that they happen.

Conclusion

Anyone involved in the deployment of software should remain vigilant and take nothing for granted. Isolating the development and test environments from production is a very good idea for a number of reasons, including uptime and security. It may seem drastic, but limiting software developers’ access to the live network can prevent both unintended mistakes and deliberate harm. And following a well-written plan that includes objectives, scheduling, version control, and other practices will provide greater control of software development and deployment.
