August 14, 2024

Learn About Database DevOps at KubeCon

The CNCF just announced that Harness’s own Stephen Atwell will join Chris Crow from Portworx to present "Database DevOps: CD for Stateful Applications" at KubeCon in November. The session will dig into the intricacies of managing stateful applications within Kubernetes, leveraging Harness, Portworx, Liquibase, and Argo to automate data migrations and ensure consistent, repeatable deployments. We will discuss practical approaches for making database DevOps a reality and share a real-world demo, walking through a database CD process live.

For those of you who saw Chris and Stephen’s talk at Data on Kubernetes Day last year, this talk expands the topic to encompass all of Database DevOps. Instead of only discussing the power of leveraging data within the DevOps pipeline, we will cover managing changes to databases and database schemas in a safe, reliable, zero-downtime manner. These changes often slow down application delivery, and they are high risk because data loss must be avoided. Similarly, crafting database schemas that can simultaneously support both the old and the new application version is critical for every zero-downtime deployment strategy. This KubeCon talk will discuss database refactoring techniques and provide the foundational knowledge you need to benefit from Database DevOps.
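To sketch the kind of refactoring involved (table and column names here are hypothetical), a zero-downtime column rename is typically done in expand-and-contract phases rather than as a single rename, so that old and new application versions can run against the same schema during the rollout:

```sql
-- Phase 1 (expand): add the new column alongside the old one.
-- Both application versions can now write; neither breaks.
ALTER TABLE customers ADD COLUMN full_name VARCHAR(255);

-- Backfill existing rows so the new application version sees complete data.
UPDATE customers SET full_name = name WHERE full_name IS NULL;

-- Phase 2 (contract): once no running version reads the old column, drop it.
ALTER TABLE customers DROP COLUMN name;
```

The contract phase ships in a later release than the expand phase, which is exactly why schema changes need to be versioned and sequenced in the pipeline rather than applied by hand.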

Want a sneak peek?

The Challenge of Stateful Applications

Stateful applications, which maintain persistent data across sessions, present unique challenges in a Kubernetes environment. Unlike stateless applications, which can be easily scaled and redeployed without concern for data persistence, stateful applications require careful management of data integrity and consistency. Depending on how your cluster’s storage is set up, when a new application pod starts, it may not have access to the old pod’s storage volume. Care must be taken to ensure storage is highly available, and that if a node goes offline, the new pods will have access to the old pod’s data.
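One common way to give a replacement pod a stable claim on its predecessor’s volume is a StatefulSet with volumeClaimTemplates, so each pod binds to a named PersistentVolumeClaim that survives rescheduling. A minimal sketch (workload and image names are illustrative; the underlying storage class would come from a provider such as Portworx):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orders-db            # hypothetical database workload
spec:
  serviceName: orders-db
  replicas: 1
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each pod gets a PVC that outlives the pod itself
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Whether the replacement pod can actually reattach that volume after a node failure still depends on the storage backing the claim being accessible from other nodes, which is where replicated storage layers come in.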

Beyond Kubernetes, all databases, regardless of operating environment, need their structural changes managed carefully to ensure data consistency without slowing down delivery of the applications that use them. Traditional database management techniques often fall short, leading to deployment slowdowns and potential data inconsistencies.

If software development teams lack database expertise, or must wait on a manual database change-control process outside the team, it hinders a company’s ability to compete. Agile software practices result in smaller but more frequent releases, allowing companies to innovate faster. The success of agile depends on highly empowered software development teams that can deliver code independently. Unfortunately, most applications have stateful components, and many organizations have manual database change processes. These manual processes slow down application delivery.

Additionally, the behavior of stateful applications often changes depending on the data in the database. At a prior company, part of our release process involved cloning every customer’s database and comparing the application's behavior on the old and new versions to ensure consistency. We didn’t do this because we wanted to, but because it was the only way we could find a certain class of critical bugs. Back then we had to build a bespoke solution for this over several years. These days, manipulating the database in your DevOps pipelines is finally becoming easy.

Database DevOps

Database DevOps bridges the automation gap surrounding stateful applications by integrating database changes into the CD pipeline. This ensures that both application code and database schema changes are versioned and deployed together. This approach not only enhances the reliability of deployments but also aligns database management with the principles of modern software development.
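As a concrete example of versioning schema changes alongside code, Liquibase (one of the tools used in the session’s demo) lets you express a change as a changeset in a changelog file checked into source control. A minimal sketch in Liquibase’s formatted-SQL style, with an illustrative author, id, and table:

```sql
--liquibase formatted sql

--changeset demo-author:add-order-status
ALTER TABLE orders ADD COLUMN status VARCHAR(20) DEFAULT 'new';
--rollback ALTER TABLE orders DROP COLUMN status;
```

Running `liquibase update` from the CD pipeline applies any changesets not yet recorded in the target database, so every environment receives exactly the change that was tested, and the paired rollback statement gives the pipeline a defined path back if a deployment fails.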

The benefits of integrating database changes into the CD pipeline include:

  • Reducing Deployment Slowdowns: Streamlining your deployment process by managing database changes alongside application code enables your applications to ship new features faster. This increased innovation velocity provides your business with a competitive edge.
  • Improving Data Consistency: Ensuring that your database schema changes are applied consistently across all environments reduces the risk of data-related issues. When changes are applied manually, sometimes they get missed, or the DBA runs a script that differs from the one the developers originally tested with. These issues can lead to data inconsistencies and/or data loss, which often come with a steep cost to the business.

Join Us at KubeCon

Want to learn more? If you are coming to KubeCon this year, we’d love for you to join us on the journey of Database DevOps by attending the talk, or to stop by our booth to discuss your database DevOps challenges. If you aren’t coming to KubeCon, we have more in-depth blogs on the way, and you can also contact us to discuss your unique needs.

See you at KubeCon!
