If your software is not available to someone or something else, is it even software? Rhetorical as that question is, as a software engineer, having what you built sit only on your local machine does nobody any good. In its simplest form, delivery means you need to build, test, and deploy your changes.

This article contains an excerpt from our eBook, One Developer Experience – Build, Deploy, and Experiment. If you like the content you see, stick around to the end where we’ll link the full eBook for you. It’s free – and best of all, ungated.

Overly Simple CI/CD

Your idea can traverse many environments and pieces of infrastructure before reaching the hands of users. Even with safety taken into consideration, you might have to navigate release strategies, such as a canary release, to incrementally update running systems. As more responsibilities shift left towards the developer, the breadth of knowledge required, such as infrastructure automation and DevSecOps practices, can be challenging to acquire.

Thanks to advancements in Continuous Integration and Continuous Delivery, your changes can be built, packaged, tested, and deployed safely into new or existing infrastructure. Automation and expertise can be encoded into CI/CD pipelines and beyond to help further the journey and achieve DX goals. A more complete journey to production has two parts: producing a deployable artifact, and deploying that artifact.

Going from code to artifact might look like the diagram below:

Deployable Artifact
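The code-to-artifact flow above can be sketched as a sequence of gated stages, where any failure stops the pipeline. The stage names and the `build_release_candidate` helper below are illustrative assumptions, not any particular CI tool's API:

```python
def build_release_candidate(commit, stages):
    """Run build stages in order; the first failing stage stops the pipeline.

    Each stage is a (name, step) pair, where `step` is a callable that
    returns True on success. Stage names here are illustrative of a
    typical code-to-artifact flow.
    """
    artifact = {"commit": commit, "passed": []}
    for name, step in stages:
        if not step(commit):
            artifact["status"] = f"failed: {name}"
            return artifact
        artifact["passed"].append(name)
    artifact["status"] = "release-candidate"
    return artifact

# Hypothetical stages; real steps would invoke compilers, test runners,
# and packaging tools rather than lambdas.
stages = [
    ("compile", lambda c: True),
    ("unit-test", lambda c: True),
    ("package", lambda c: True),
]
print(build_release_candidate("abc123", stages))
```

Only a commit that survives every stage becomes a release candidate, which matches the "every commit should trigger a build" model discussed later.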

Having an artifact, though, is only part of the journey. Modern software delivery needs to orchestrate confidence-building steps and safety measures. Automation is key to consistency and a good developer experience. A highly automated deployment to production, leveraging a canary release, might look like the diagram below:

DX: Automated Deployment
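A minimal sketch of the canary logic such an automated deployment might run: traffic shifts to the new version in increments, and any unhealthy check triggers a rollback. The `check_health` callable and the traffic steps are assumptions standing in for a real service-mesh or load-balancer integration:

```python
def deploy_canary(check_health, steps=(5, 25, 50, 100)):
    """Shift traffic to the canary in increments, rolling back on failure.

    `check_health` is a hypothetical callable that returns True if the
    canary's error rate is acceptable at the given traffic weight. In a
    real pipeline each step would also update a load balancer or mesh.
    """
    for weight in steps:
        if not check_health(weight):
            # Unhealthy at this weight: abandon the rollout.
            return {"status": "rolled_back", "failed_at": weight}
    return {"status": "promoted", "failed_at": None}

# A healthy canary is promoted through every traffic step.
print(deploy_canary(lambda weight: True))

# A canary that degrades under load is rolled back automatically,
# before it ever receives the majority of traffic.
print(deploy_canary(lambda weight: weight < 50))
```

The value of encoding this in the pipeline is that no human has to babysit the rollout; the same safety check runs identically on every deployment.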

Expectations Of and On Modern Software Engineers

Engineers, in general, are natural optimizers, and the ability to work more efficiently matters to them. With the rise of incremental development practices such as Agile, the velocity of work and the demand for features can feel infinite; the only real limits on software are time and resources.

Software engineers are also naturally inclined to better their craft in a field that requires constant learning. As noted in the previous section, several paradigms continue to shift left towards the developer, such as security with the DevSecOps movement and application infrastructure automation thanks to Kubernetes. The engineering burden continues to increase.

Modern software engineers expect a good DX. To buck the trend of ever-increasing ramp-up time to productivity, software engineers can find fulfillment in seeing the fruits of their labor sooner by building and deploying at the pace of innovation.

Best DX When Building Software/Continuous Integration

Extending the positive local build experience beyond a developer's own environment takes thought. Continuous Integration is build automation, focused on externalizing the build. More than just compiled source code goes into a build, though; the end product of Continuous Integration is a release candidate.

A core tenet of engineering efficiency is meeting your internal customers where they are. For software engineers, that means staying as close to their tools and projects as possible. Like many modern pieces of application infrastructure, Continuous Integration shifts left to the developer by being included in the project structure in source code management (SCM).

As the velocity of builds increases to match the mantra that “every commit should trigger a build,” development teams can generate several builds per day per team member, if not more. The firepower required to produce a modern containerized build has also grown over the years compared to traditional application packaging.

Typically, an organization’s first forays into running automated tests in a repeatable and consistent fashion end up in its Continuous Integration pipelines. This is usually an easy lift: the same code and test coverage a developer is subject to in a local build makes its way into the build pipeline, since those steps should have been executed before the commit.

A common distributed systems fallacy is that one person understands the entire system end to end. When adding new features or expanding test coverage, we are prone to the Big Ball of Mud pattern, in both development and testing. The execution time and complexity of test suites can increase with every new change. Running tests intelligently, executing only the tests relevant to the new changes, significantly combats that complexity.

Showing time saved by executing test coverage in an intelligent manner:

DX in CI - Test Optimization

Helping determine appropriate test coverage by modeling coverage and changes:

DX in CI - Test Graph
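One way to model the coverage-and-changes graph above is a simple mapping from each test to the files it exercises; intersecting that mapping with the changed files yields the impacted tests. The graph below is hypothetical, and a real pipeline would derive it from coverage data or build tooling:

```python
def select_tests(changed_files, test_dependencies):
    """Return only the tests whose dependencies overlap the changed files.

    `test_dependencies` maps each test name to the source files it
    exercises. In practice this graph would come from per-test coverage
    data or build-system dependency information, not be hand-written.
    """
    changed = set(changed_files)
    return sorted(
        test for test, deps in test_dependencies.items()
        if changed & set(deps)
    )

# Hypothetical coverage graph linking tests to the files they touch.
graph = {
    "test_login": ["auth.py", "session.py"],
    "test_billing": ["billing.py", "invoice.py"],
    "test_search": ["search.py"],
}

# A billing-only change triggers one test rather than the full suite.
print(select_tests(["billing.py"], graph))
```

As the suite grows, the time saved scales with how localized the average change is, which is why test selection pays off most on large, long-lived codebases.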

Best DX When Deploying Software/Continuous Delivery

Building software is typically done by convention, but deploying software can be very bespoke. Software is the culmination of decisions before, during, and potentially after your time on a project or product team. Navigating all of the application and infrastructure choices, especially on a live system with user traffic, can be complex and unique to each team.  

Deploying incremental changes into a development-integration environment is usually geared towards DX, since developers have full control over the environment. As the march towards production continues, though, some organizations operate under business controls that bar developers from production. This is where Continuous Delivery steps in.

The definition of Continuous Delivery from Jez Humble:

Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes, and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.

Safely means pipelines enforce quality and testing and provide a mechanism to deal with failure. Quickly implies pipelines promote code in an automated fashion, without human intervention. Finally, sustainable means pipelines can consistently achieve all of this with low effort and resources.

Good delivery is fast and repeatable; great pipelines are fast, safe, and repeatable. The book Accelerate states that elite performers have a lead time of less than one hour and a change failure rate of less than 15% for production deployments. Therefore, a great pipeline will complete in under an hour and catch 95% of anomalies and regressions before code reaches an end user.

If your code takes longer than an hour to reach production, or if more than 15% of deployments fail, you might want to reconsider your pipeline design and strategy. Because experimentation takes iteration, an all-at-once deployment may be too lengthy for targeted changes or experiments that need to happen throughout the day. Experimentation is important in the next generation of DX.

DX in CD - Kubernetes Canary Deployment

Conclusion

We hope you enjoyed this excerpt of our One Developer Experience eBook. In the next excerpt, we’ll go over the importance of experimentation for DX. If you don’t want to wait for the next post, go ahead and download the DX eBook today – it’s free and doesn’t require an email address: One Developer Experience.