
Building & Releasing to Deployment Environments

The term "cowboy coding" was coined in the spirit of making arguably risky changes directly in a production environment. This means that code doesn't first make it into lower deployment environments, and instead, is directly shipped to the hands of the end-user. This of course is risky, as automated tests and manual testing are thrown out the window.

To that point, many companies have a series of pre-production environments that their development and operations teams leverage to validate changes before "full sending" them to production. In this article, we'll be discussing how you might leverage multiple environments to ensure a degree of code quality before potentially interrupting customers with bugs, outages, or other chaotic scenarios.

What Are Deployment Environments?

A deployment environment is where engineers, product owners, QA teams, and automation get a picture of how well a new code revision behaves, how it performs, and how it looks and feels from a UI perspective. The number of deployment environments per application can differ depending on the mission-criticality of the application's production environment. Additionally, for complex applications with several attached dependencies, such as AWS RDS instances, Redis clusters, or other infrastructure, it's even more common to have a set of standard environments so that changes can be tested in isolation when needed.

Some common environments we see at companies are:

  • Development
  • Staging, aka "pre-prod"
  • Production

Depending on the size of the engineering teams, the development methodology in use (Agile, Waterfall, etc.), and whether or not PR or ad-hoc feature environment automation is in place, you may see a plethora of other deployment environments, such as:

  • QA
  • UAT-1..N
  • Load, Performance, & Chaos Testing
  • Demo

The goal of these environments is to enable teams to build and test infrastructure and application code in isolation, and ultimately to deliver high-quality software that performs well in production.

Now that we've highlighted some of the environments you might use to methodically test code before moving it to production, let's dive a bit deeper into how and why these environments are used.

Purpose of the Development Environment

In most organizations, the latest version of code is found in the development environment. 

So, you've made some modifications to the code via your local development environment, tested it locally (right? right?), and submitted a PR against your upstream branch. This might be a development branch into which potentially unstable code gets merged. CI kicks off tests and hopefully gives the green light for merging.

The goal of the development environment is to allow software developers to batch code changes together and deploy them via CD to the remote dev environment. If you're an AWS shop, this might be in Amazon ECS, EKS, or on simple EC2 instances or virtual machines elsewhere. Perhaps you're leveraging Azure AKS or Azure virtual machines.

At this point, developers can see whether their code changes work as expected in a production-like environment. I say this loosely, as the scale of the environment may differ, but the underlying technology should be the same. This is to avoid the trap where code "works on my machine" but doesn't work in the actual deployment environments that colleagues and customers interact with. This is one of the key areas DevOps teams focus on, as it happens more often than we'd like, given the wealth of environment variables and config file entries that might be environment-specific.
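One common way to keep the same build working across environments is to pull anything environment-specific from environment variables at startup. The sketch below (TypeScript for Node, with hypothetical variable names like APP_ENV) fails fast if a required value is missing, so misconfiguration surfaces in dev rather than in production.

```typescript
// Minimal sketch: environment-specific values come from env vars, not code.
// The variable names (APP_ENV, DATABASE_URL, API_BASE_URL) are hypothetical.
interface AppConfig {
  environment: string;   // e.g. "development" | "staging" | "production"
  databaseUrl: string;
  apiBaseUrl: string;
}

function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup instead of surfacing a vague error later at runtime.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const config: AppConfig = {
  environment: requireEnv("APP_ENV"),
  databaseUrl: requireEnv("DATABASE_URL"),
  apiBaseUrl: requireEnv("API_BASE_URL"),
};
```

With this pattern, the same artifact can be promoted through every environment; only the injected variables change.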

Once final developer QA in this environment completes, the code may be merged into another upstream branch for deployment to the next environment, typically staging. If your team subscribes to cutting semantically versioned releases, the code would likely get merged into the main or master branch in Git next. At that point, CI may kick off another build pipeline to rebuild assets for the staging environment, or promote the previously built VM or Docker image to a release candidate. The approach here may vary depending on the application and coding languages used.
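If you take the promotion route rather than rebuilding, the release-candidate step can be as simple as computing the next version and re-tagging the existing image. A minimal sketch, assuming the semver npm package and a hypothetical current tag:

```typescript
// Minimal sketch: cut a release-candidate tag for an already-built image.
// The current tag and image name are hypothetical.
import * as semver from "semver";

const currentTag = "1.4.2";                           // hypothetical latest release
const nextVersion = semver.inc(currentTag, "minor");  // "1.5.0"
const releaseCandidate = `${nextVersion}-rc.1`;       // "1.5.0-rc.1"

// The dev-built image is re-tagged rather than rebuilt, e.g.:
//   docker tag myapp:<git-sha> myapp:1.5.0-rc.1
console.log(`Promoting existing image to ${releaseCandidate}`);
```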

For instance, if static assets need environment-specific URLs baked into the JS/HTML via Webpack, Gulp, or the like, then additional build steps may be needed. You might also need to install different npm packages, or an optional module for debugging purposes, in lower deployment environments but not in production.
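As a rough illustration of the Webpack case, here's a sketch that bakes an environment-specific URL into the bundle at compile time with DefinePlugin; the ASSET_BASE_URL variable and the example domains are hypothetical.

```typescript
// webpack.config.ts sketch (Webpack 5). Because the value is inlined into
// the emitted JavaScript, each environment may need its own asset build.
import webpack from "webpack";

const config: webpack.Configuration = {
  mode: process.env.NODE_ENV === "production" ? "production" : "development",
  plugins: [
    new webpack.DefinePlugin({
      // Replaced with a string literal at compile time (hypothetical variable).
      "process.env.ASSET_BASE_URL": JSON.stringify(
        process.env.ASSET_BASE_URL ?? "https://assets.dev.example.com"
      ),
    }),
  ],
};

export default config;
```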

Purpose of the Staging Environment

Now that we have a build artifact for staging, engineers deploy it via their deployment pipeline. A common pattern is to have a single pipeline that carries code from development, to staging, into a load test environment, and then to production. Having a unified pipeline can be helpful, as it lets you easily visualize the end-to-end process of committing, deploying, testing, and promoting new code into production.
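The unified-pipeline idea boils down to promoting the same artifact through an ordered list of environments, with a verification gate between each one. This is only an illustrative sketch, not any particular tool's pipeline syntax:

```typescript
// Illustrative sketch: promote one artifact through each environment in
// order, stopping as soon as a verification gate fails.
type Stage = "development" | "staging" | "load-test" | "production";

const stages: Stage[] = ["development", "staging", "load-test", "production"];

async function promote(
  artifact: string,
  deploy: (artifact: string, stage: Stage) => Promise<void>,
  verify: (stage: Stage) => Promise<boolean>
): Promise<void> {
  for (const stage of stages) {
    await deploy(artifact, stage);        // e.g. roll out the same image tag
    const healthy = await verify(stage);  // e.g. smoke, e2e, or load tests
    if (!healthy) {
      throw new Error(`Verification failed in ${stage}; halting promotion`);
    }
  }
}
```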

After the new code is live in staging, one might run a suite of tests against the environment. This could include end-to-end tests via Cypress, integration tests, OWASP security scans, and even load tests if there isn't a dedicated environment for load testing.
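For the end-to-end portion, a Cypress suite pointed at the staging URL after each deployment is a common choice. A minimal sketch, with a hypothetical staging domain and login flow:

```typescript
// Minimal Cypress smoke test sketch. The URL, selectors, and credentials
// are hypothetical placeholders for your own staging environment.
describe("staging smoke test", () => {
  it("loads the login page and signs in", () => {
    cy.visit("https://staging.example.com/login");
    cy.get("input[name=email]").type("qa@example.com");
    cy.get("input[name=password]").type("not-a-real-password");
    cy.get("button[type=submit]").click();
    cy.contains("Dashboard"); // assert the post-login page rendered
  });
});
```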

This is the last chance to catch bugs, performance regressions, and security issues before our code problems become our customers' problems. As a best practice, it's always wise to manually check features in staging, even if the automated tests have passed. Test confidence takes time to build, and can we ever truly be 100% certain that our automated tests cover every case? The saying "trust, but verify" applies here.

In ideal circumstances, the staging environment should be as close as possible to the production environment.

Database connection string variables should maintain parity with production; in particular, the replica/reader URL should point at an actual replica in staging. Personally, I've seen a plethora of cases where, locally, the replica database URL points at a single Postgres or MySQL instance, so writes going to the replica URL fly under the radar. This can also subvert checks in CI, as most teams aren't running multi-node setups (with a primary and a replica node) in CI. The business impact of such code reaching production varies by company and application, but in the right scenario it can result in major production disruptions.
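To make the failure mode concrete, here's a sketch of simple read/write routing using the pg npm package, with hypothetical DATABASE_URL and DATABASE_REPLICA_URL variables. When staging mirrors production with a genuinely read-only replica, a write statement that accidentally hits the reader endpoint fails in staging instead of slipping into production.

```typescript
// Minimal sketch: route reads to the replica and everything else to the
// primary. Connection string variable names are hypothetical.
import { Pool } from "pg";

const primary = new Pool({ connectionString: process.env.DATABASE_URL });
const replica = new Pool({ connectionString: process.env.DATABASE_REPLICA_URL });

export function query(sql: string, params: unknown[] = []) {
  const isRead = /^\s*select\b/i.test(sql);
  return (isRead ? replica : primary).query(sql, params);
}
```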

As much as feasible, avoid environment-specific deviations between staging and production; they can make or break the efficacy of the environment.

Conclusion

Leverage multiple deployment environments to thoroughly validate changes before pushing them to production. Make sure your pre-prod or staging environment is as similar to production as possible from an infrastructure and configuration standpoint, to narrow the chances of missing issues.

For further reading, check out our articles on CI/CD Best Practices, and our eBook on Deployment Pipeline Patterns.

