It’s been about a year since we launched Harness Developer Hub [HDH] in Beta. Today, HDH is GA and serves tens of thousands of unique visitors and hundreds of thousands of pageviews every month, all across the globe. All of this while supporting hundreds of contributors with varying skill levels. Traffic and the number of contributors in the public repository continue to grow as we expand the capabilities of HDH.
Architecturally, HDH is a Docusaurus implementation. Our site embraces documentation-as-code as a paradigm and is no different from any other modern TypeScript [JavaScript] based application. We have an application that multiple contributors need to contribute to, and that needs to be built and deployed all throughout the day.
Over the past year we have made two shifts in how we build and deploy. We now treat every commit as a potential release: we build multiple times throughout the day with every git commit, and deploy multiple times throughout the day with every merge to our main branch. Let’s look at our current solution and then take a jog down memory lane to see how we evolved.
We leverage several Harness capabilities to deliver HDH to the world.
Our source code management solution is the source of truth for HDH and the genesis of every change that gets published. Webhooks fire on several SCM events, and Harness then processes those events.
Our goal is to provide preview/ephemeral builds for the changes represented in a Pull Request. To do this, we need to remotely build the Docusaurus instance, which leverages Yarn and npm to facilitate the build. We build on every net-new commit to the PR.
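As a sketch, a remote Docusaurus build in a Harness CI stage boils down to a Run step executing the Yarn commands on a Harness Cloud node. The stage and step identifiers below are hypothetical, not HDH’s actual configuration:

```yaml
# Sketch of a Harness CI build stage for a Docusaurus site.
# Names and identifiers are illustrative assumptions.
stage:
  name: Build HDH
  identifier: build_hdh
  type: CI
  spec:
    cloneCodebase: true
    platform:
      os: Linux
      arch: Amd64
    runtime:
      type: Cloud        # Harness Cloud hosted build node
      spec: {}
    execution:
      steps:
        - step:
            type: Run
            name: Build Docusaurus
            identifier: build_docusaurus
            spec:
              shell: Sh
              command: |
                yarn install
                yarn build   # emits the static site to ./build
```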
We build via a Harness Cloud [hosted] build node, so we do not have to manage build infrastructure or dependencies on the build node. For performance, we also leverage Cache Intelligence, which by a conservative estimate sped up our builds by more than 30%. Since implementing the current setup, we have run over 9,000 builds.
From a deployment standpoint, we deploy to our static host, which is Netlify. The flexibility and extensibility of Harness allows us to bring a plugin that interacts with Netlify’s APIs. We make a decision in JEXL about whether a build heads to a preview environment or gets published to production.
Preview Logic [if branch is not main]:
Production Logic [if branch is main]:
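The branch check itself is a small JEXL expression. Here is a sketch using Harness conditional-execution blocks; the exact expression names [such as `<+trigger.targetBranch>`] should be verified against your trigger payload:

```yaml
# Preview path: any branch other than main.
when:
  pipelineStatus: Success
  condition: <+trigger.targetBranch> != "main"
---
# Production path: merges to main only.
when:
  pipelineStatus: Success
  condition: <+trigger.targetBranch> == "main"
```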
To configure this Harness Trigger, here is our YAML configuration, which listens for a few events.
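A trimmed-down sketch of such a trigger follows; the connector reference, identifiers, and project names are hypothetical placeholders rather than HDH’s real values:

```yaml
# Sketch of a GitHub webhook trigger firing on Pull Request events.
# All identifiers below are assumptions for illustration.
trigger:
  name: HDH PR Trigger
  identifier: hdh_pr_trigger
  orgIdentifier: default
  projectIdentifier: hdh
  pipelineIdentifier: hdh_pipeline
  source:
    type: Webhook
    spec:
      type: Github
      spec:
        type: PullRequest
        spec:
          connectorRef: github_connector
          autoAbortPreviousExecutions: true
          actions:
            - Open
            - Synchronize
            - Reopen
```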
Based on the condition, we fire a slightly different request to the Netlify API. Once we get the results of the Netlify API call, we comment back on the GitHub PR. This allows the contributor to preview their work on a live site when the preview flow is executed. In totality, the Pipeline looks as follows in the Harness Editor:
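In spirit, the preview-versus-production decision looks like the shell sketch below. It uses the Netlify CLI rather than the raw API for brevity, and the branch value is illustrative; in the real pipeline it would come from the trigger payload:

```shell
#!/bin/sh
# Hypothetical sketch: pick a preview or production Netlify deploy by branch.
# In the actual pipeline the branch arrives via the trigger payload.
choose_deploy() {
  if [ "$1" = "main" ]; then
    # Production: publish ./build to the live site.
    echo "netlify deploy --dir=build --prod"
  else
    # Preview: ephemeral deploy; Netlify returns a unique preview URL,
    # which the pipeline comments back onto the GitHub PR.
    echo "netlify deploy --dir=build"
  fi
}

choose_deploy "feature/update-docs"
```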
For example, the Cache Intelligence step is easy to weave in during the Build Stage. An execution will look as follows in the Harness UI:
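Enabling Cache Intelligence is a small addition to the stage YAML. A sketch follows, assuming the `caching` stanza available on Harness Cloud stages; check the current Harness documentation for the exact field names:

```yaml
# Sketch: enable Cache Intelligence on a CI stage running on Harness Cloud.
# Field names are assumptions based on Harness CI YAML conventions.
stage:
  type: CI
  spec:
    caching:
      enabled: true     # caches dependency paths [e.g., yarn/npm] automatically
    runtime:
      type: Cloud
      spec: {}
```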
Pipelines are designed to evolve. We had two earlier renditions of the Pipeline, which we optimized over the year into what we are leveraging today.
We have embraced two principles as we evolved our pipelines: the KISS principle, to take a simpler approach, and the DRY principle, to cut out duplicate steps/tests. Our second rendition was Kubernetes-heavy for the static site before we optimized by calling the Netlify APIs directly for preview builds; we used to maintain our own preview environment even though Netlify provided this out of the box. Once we learned of this feature, we were able to easily modify our HDH Pipeline to leverage it.
If you would like to continuously improve your software delivery capabilities, I encourage you to sign up and use the Harness Platform to help you reach your goals.
Cheers,
-Ravi