Over the past decade, Jenkins has ushered Continuous Integration into the mainstream. As paradigms shift with the rise of Continuous Delivery, however, leveraging Jenkins for a complete CI/CD pipeline can be seen as “DevOps duct tape.” In a recent webinar with DevOps.com, two very knowledgeable engineers weighed in on where Jenkins fits in their respective ecosystems today. Chris Jowett, DevOps Architect at ABC Fitness, and Stephen Gregory, Senior Principal SRE at Lessonly, provide great insights alongside Steve Burton, CMO at Harness.

If you did not catch the webinar, feel free to watch it here:

Continuous Integration and Continuous Delivery Have Different Goals

When talking about development pipelines, a common shorthand is to say “CI/CD” pipelines. However, CI and CD have two different goals: Continuous Integration focuses on the build, while Continuous Delivery focuses on the deployment of the artifacts.

Stephen Gregory sums up CI quite well: “That’s the whole point of Continuous Integration, making sure all of your builds are consistent and everything there is working”. 

Chris Jowett adds, “There’s a lot of organizations where they’re deploying Java; it’s super common in the enterprise, they’re probably using Gradle, or Maven as a build system… And for that use case, just Jenkins works really [well] because it’s a fairly simplistic use case for a CI.”

When transitioning to Continuous Delivery, which means safely deploying and validating artifacts in production, the concerns widen.

Chris Jowett mentions goals around, “If you’re doing full CD, then [automated] testing deploys [to] the next environment on its own, [and] you have as few gates as possible”.

Jenkins was designed primarily around the rise of CI. It is not impossible to build both CI and CD functionality into Jenkins, but the level of effort can outweigh the perceived cost savings.

Build vs Buy – Focus on Your Core Competency

The build vs buy journey is different for every organization. Leveraging a free tool like Jenkins certainly has a low acquisition cost, with several free renditions available. Engineering time, however, is not free, and directing resources away from your organization’s core competencies and value drivers can have a real business impact.

Stephen Gregory poses the question when looking at build vs buy: “What are your company’s core competencies? What is it that your company does?”

Then Stephen follows up with, “Things like getting the software that we built to the production servers—our customers aren’t going to see that. Hopefully, if we’re doing it right, there will be a positive impact because we’re not going to be spending all of our time shipping software.”

It is true that your customers don’t care how you did something; the results that impact them in the end are the true business value. At some point, you reach an inflection point where it is too costly to keep resources focused on a non-core-competency problem.

Chris Jowett sums up the level of effort with, “And in those cases, you probably should build your own CI/CD pipelines, but at some point, you’ll hit this mark where you’re spending more and more time on managing and maintaining those CI/CD pipelines.”

The guardrails and ease of use baked into the Harness Platform allow you to focus on your core competencies. As Chris Jowett put it, “it’s [Harness] actually saved us money because I’ve effectively gotten an engineer back.”

Harness – Your Partner in CI/CD

No matter where you are in your CI/CD journey, Harness is here to partner with you. The transformational journeys that the two firms in the webinar are on are certainly within reach for you and your organization. Harness has the ability to run Jenkins Jobs and capture outputs to leverage in a Harness Pipeline.

Don’t just take our word for it: the webinar is filled with great tidbits that transcend tools and talk about approaches from seasoned architects. If you have not signed up for the Harness Platform yet, feel free to do so now. 


Charlene O’Hanlon  00:00
Good afternoon, good morning, or good evening— depending upon where you are in the world— and welcome to today’s DevOps.com webinar. I’m Charlene O’Hanlon; I’m the moderator for today’s event, and I welcome you.

Let’s go ahead and kick off today’s webinar, which is “Modernizing Jenkins CI/CD Pipelines.” Our speakers today are Steve Burton, who is the CMO at Harness; Stephen Gregory, who is the Senior Principal SRE at Lessonly; and Chris Jowett, who is the DevOps Architect at ABC Fitness. Gentlemen, thank you so much for joining me today. Really appreciate it.

Steve Burton  01:31
Thank you, great to be here.

Charlene O’Hanlon  01:33
Steve, I know you’re going to be driving this conversation, so I’ll put myself on mute, take myself off-camera, and I’ll see you guys on the other side.

Steve Burton  01:40
Thank you. Welcome, everyone. Before I do a quick speaker introduction, because we got some fantastic stories to share, you’re probably looking at the slide and the webcams and thinking— is that really Chris? I can assure you that’s Chris. Chris is the DevOps Architect at ABC Fitness. He previously worked at Rackspace, so he has a lot of experience in DevOps. Interesting fact: he actually has a military intelligence background and bomb—

Chris Jowett  02:10
Bomb analysis, like just collecting evidence off of IEDs.

Steve Burton  02:15
Right, and he assures me the reason for his beard is because of the 10 years in DevOps that he has endured. So that’s pretty much his excuse for why the photo looks different on this slide. Anyway, thanks for joining us, Chris.

Stephen Gregory, Senior Principal SRE at Lessonly. Stephen previously worked at Oracle and again, like Chris, has a background in DevOps. Interesting fact: he actually wrote a PHP application that saved someone’s life.

Stephen Gregory  02:48
PHP is not all bad, I promise!

Steve Burton  02:50
Yeah. Myself, I do marketing, but in a former life, I was a software engineer working on Java 1.1.2. I think that was what it was back then, and EJBs— which are not fashionably cool these days, so I’m going to leave it at that.

Anyway, the purpose of the webinar is to talk about the butler in the room, so let’s crack on. So then, guys, why do we need to modernize Jenkins CI/CD pipelines? Chris, do you want to give us your thoughts at a high-level? Why you think we need to talk about this?

Chris Jowett  03:26
Yeah, so Jenkins is pretty great for some uses, but it just has a lot of management overhead that you need to take care of and there’s a lot of development time. If you can afford to trade time for a little bit of cost, then for some organizations, that can be really beneficial.

Steve Burton  03:48
Okay. Stephen, yourself?

Stephen Gregory  03:51
So, you know, I think you can do a lot of things with Jenkins, obviously. Jenkins originally came around for the whole CI portion of the CI/CD pipeline, and people allowed it to do extra things. But the primitives aren’t really there for doing things like getting your code to the final destination. While you can make it happen, I think that having tools in place that are natively designed for that actual delivery portion really is a major benefit.

Steve Burton  04:20
Okay, and so if we take a pause and think, alright, it’s 2021. It’s 2021— and I was using Jenkins back in 2008 when it was called Hudson when I was an engineer— and so if I’m in your shoes and I’m running a DevOps team or an SRE team, what are the things I should be thinking [I should] start doing and stop doing with regards to my pipelines in Jenkins?

Stephen Gregory  04:55
So, I guess I’ll start. I think that one of the things that has worked great is, [for] the things in the pipeline, making sure that everything is consistent. That’s the whole point of Continuous Integration, making sure all of your builds are consistent and everything there is working.

I think the areas where we’ve had problems are removing flakes and removing some of those things that are less valuable from the pipeline— but then also figuring out what parts are going to change as your team grows, and making sure that you can scale those out. I think that the portion of the pipeline that’s actually delivering software to the production environment has been the part where I’ve had the most trouble. So really identifying how that delivery happens is kind of a key thing.

Steve Burton  05:49
So on that point, [it’s] looking forward 6-12 months [and saying], what is it our requirements are likely going to be then, and then trying to work backward from that?

Stephen Gregory  05:58
Right? Yes.

Steve Burton  06:00
Great. Thanks, Chris. What about yourself?

Chris Jowett  06:02
So one thing I would recommend is— a lot of us are Linux people, [and] some of us are Mac people but that’s okay— so we’re used to applications that do one thing and do one thing well. So I’d recommend working on making your pipeline be built out of small composable chunks. You should have a step that does a thing. That way, you can compose those pieces together in different ways, and you can reuse them. That also helps out a lot if you want to transition to a different system; you no longer have to do an all-or-nothing transition because you can move into individual steps from maybe using one test framework to another framework, or using one deployment tool to another deployment tool.

Steve Burton  06:44
Got it. So modularize and reuse. And how effective have you seen reuse with Jenkins?

Chris Jowett  06:51
It depends. We’ve had some pretty decent luck with reusable pipeline steps, but that’s because, in our organization, we place a lot of constraints on what developers are and are not allowed to do. I frequently use the phrase that we manage boxes, not services. We like to be able to put services into a little box and categorize it, and we expect them to work the same way as all of the other services in that box. And only because we’re so strict about that do we get a lot of reuse value out of these individual steps that we make. But in an organization where you have a lot of snowflakes—speaking to Stephen’s point— then your reusability goes right out the window.

Steve Burton  07:35
Got it. And what do you think the reason is for that, like what intrigues an engineer to create a snowflake versus reusable components?

Chris Jowett  07:46
I mean, from a software developer’s standpoint, it’s easier to just sit down and write it in a way that you know that you can make it work—as opposed to having to abide by standards. But from an overall organization point, I personally don’t know of any good reason why an organization would want 10 services that work 10 different ways— as opposed to having maybe 10 services where some of them work one way, like maybe three categories of services. It makes everything easier to manage.

So from an organizational standpoint, I don’t know a good reason why. But as a software developer, it would definitely be easier if I didn’t have to follow any rules.

Stephen Gregory  08:24
Yeah, and I think some of it, too, comes about because of independent invention. A bunch of people working in different areas all kind of realize they need the same thing, and it doesn’t really look like it’s the same thing until you get a couple of months down the line, right? Trying to identify those and correct for that, some standardizations. Always, always good.

Steve Burton  08:47
Great. So with that, Chris— can you walk us through your journey? Walk us through the beginning with Jenkins, the benefits you’ve had, and where you ended up today? And what are the key things you’ve been doing to modernize your Jenkins pipelines?

Chris Jowett  09:01
Yeah, so we’ve been working on building a new platform to replace some legacy systems, which is probably what a lot of DevOps people find themselves doing— is build this new thing to replace the old thing. And while we were doing it, we wanted to limit cost. And so Jenkins was a great option; it’s free, but you have to just build it yourself. And so that made a lot of sense for us when we were first starting out, because when you’re first building the new product, no one is paying for the new product yet. And so if you have no income, you want to keep your expenses to a minimum.

As we started to grow in size (and hopefully not grow as much in complexity, though we did grow in complexity a bit as well), you start to feel some pains of having to maintain everything yourself. That’s when we started looking into areas where we could free up engineers to have them work on other tasks that are not so easily shifted off to other systems. That’s when we started looking at moving away from Jenkins.

And that’s actually a fairly recent transition for us. I think it was only about a year ago that we got to [the point] where Jenkins was just taking up too much of our time. That’s when we started moving things over to Harness actually. And because of the fact that we had individual pieces that did individual things, and we had a very clear separation between CI and CD; it actually made that transition pretty easy. So that modular approach that I was talking about earlier really helped us out there.

Steve Burton  10:39
So just on that point, that’s interesting. You’re one of the few customers that, I think, keeps CI and CD separate, even though you were using Jenkins. When you look back at that, what was the reason behind it, as opposed to just building a single pipeline that did everything?

Chris Jowett  10:52
We did technically have a single pipeline that did everything. But because we had individual steps that did one thing and did one thing well, there was a spot in the pipeline where it very clearly shifted from being responsible for building the artifact and putting the artifacts somewhere, and [then] deploying that artifact. There was a very clear line in the pipeline where it transitioned from CI to CD. So we didn’t technically have them separated—but when we moved our CD over to Harness, we basically just got to delete the latter two-thirds of our pipelines and leave the first bit there to handle the CI part.

Steve Burton  11:31
And so you talked about that time being spent previously trying to do everything in Jenkins. Can you quantify a little bit? Was it a few hours, a few days?

Chris Jowett  11:43
So, [related to] what Stephen was talking about earlier, there are not a lot of primitives there in Jenkins. I mean, you can extend it with plugins, but because we ran into quite a few issues with bugs in plugins and things like that, it actually took up a pretty decent amount of time. At one point, we ran into a bug where if you use— so we’re running Jenkins pipelines, obviously— a parallel call in a pipeline, then after Jenkins restarted, it would fail to properly resume a previous pipeline. And at that point, your whole pipeline is just dead and stuck. So we would have a pipeline that would run partway through, deploy into a couple of environments, Jenkins gets restarted for some reason— it’s in Kubernetes, it gets restarted all the time— Jenkins gets restarted, and now that pipeline is just dead.

And it took us a lot of time to debug that and figure out what was going on— [we] eventually figured out that it was the call to parallel that made it to where the pipeline couldn’t resume properly. I think that bug is now fixed, but it took hours of time for us to figure out. And anytime that we wanted to add a new environment, we had to go in and modify the CD portion of the pipeline to add that environment. Or if we wanted to get rid of an environment, we had to go in and modify it and take that out. It’s just not nearly as flexible. You could make it more flexible, but then you’re adding even more development time to it.

You can build anything you want— the question is just how much time do you have to actually build it? Because at some point, you need to have it actually do work.

Steve Burton  13:10
Got it. And so the skeptics [are going to] say, well, why do I need two tools when I can use one? So if you think about [it], how much effort was it to move to another tool? Because we hear a lot from engineers “why do I need another tool to do what I already do?”

Chris Jowett  13:26
Yeah, and that’s a very valid concern. Tool sprawl is a problem; you don’t want to have 50 tools to deploy a platform that has like eight microservices. Whenever you add another tool into the mix, it needs to be very carefully considered as to whether or not you’re going to get a benefit out of the addition of that tool. And for us, we accepted the complexity of adding another tool because we got what has, for the most part, been a more stable platform that takes up much less of our time.

I was talking earlier about adding extra environments— adding an environment into our Harness pipelines takes us maybe at most 30 minutes, or if you’ve done it before, 20 minutes. In Jenkins, it takes a decent bit longer, and it’s more error-prone because you’re basically just going into text files and you’re copying and pasting and having to replace certain spots. Everyone’s done the copy-paste-and-find-replace maneuver in code before. And so that’s a very error-prone process: did you replace all the things that you needed to replace? Normally, especially if it’s a higher-up environment, you might not find out until something gets far enough down the pipeline to try and deploy into that new environment— and now it doesn’t work. Whereas with Harness, when we add a new environment, we know it’s going to work.

Steve Burton  14:50
Now, how long did it take to make that migration? Whenever we talk about modernizing anything, people assume months, years— especially microservices, monoliths, things like that.

Chris Jowett  15:01
Yeah, so we still have a couple of things that are deployed through Jenkins. We probably have 95% of our stuff deployed through Harness, though. And the stuff that we have migrated over to Harness, it really didn’t take that long. During our PoC period, we had a meeting scheduled to go through and like start bringing our first service over, and one of our engineers decided to go in and play around in it— six hours later, he had that service deploying just on his own just because he felt like it. Then we showed up to the next meeting: “All right, well, let’s get one of your first services. Oh, wait, you already have one in here.”

So it really did not take that long. And because we shove our services down into little boxes, once we got that one service done, every service in that category was able to be moved over to Harness really quickly and easily because we already had it all set up. We just had to add the new services and say “Use same pipeline” and we were good. It probably took us a total of a couple of weeks to get our 1.0 of “migrate everything over to Harness” done.

Steve Burton  16:11
Great, thanks for sharing. Stephen, what does your journey look like at Lessonly?

Stephen Gregory  16:18
Originally, when I joined Lessonly, the goal was to move off of a platform-as-a-service provider. At the beginning, all of our developers [were] just pushing code up— that does something that causes a build in the platform and deploys the code. We had CI running tests, [but] we didn’t really have anything that was building artifacts that we could deploy to multiple places. And we needed to start deploying to somewhere other than this platform-as-a-service provider; so we now are deploying to AWS, we have our own VPC there.

I’ve been running Jenkins and Hudson since the mid-2000s, right— 2005 or 2006, whatever. So my go-to tool for that was Jenkins because [it was] easy enough to get running. [I] got it building, got it installing our services using Helm into a Kubernetes cluster, [and] developed our new platform. And I’m basically the only person at this point who knows how to use any of this stuff, so we have this entire learning curve for the rest of the team to try to get everybody up to speed.

The whole deployment process had been a script that people were running— which is probably something you should also stop doing, having your deployment [be a] script on somebody’s local machine— but we got all of the Jenkins up and running. I realized there’s a lot of maintenance there; Jenkins has a new version every week, right? There are all the plugins, [and] the Helm script we had in place— it worked, but every time I needed to update an environment variable or change anything, it was a lot of work to get everything working.

So [for] my next step, I actually tried Spinnaker and got Spinnaker running. That was definitely a big step up from using Jenkins as a deployment tool; but as much as Jenkins is a lot to run, Spinnaker is even more. It’s just a couple of people on my team who are managing all of this stuff, right. So [from] that point, we realized we [didn’t] want to be the ones managing all of this infrastructure. There are some pieces missing for getting all of our team into this tool so that they could be the ones pushing out code and making all these changes. So that’s where we ended up with Harness.

Steve Burton  18:58
You bring up a good point in that when you built in Jenkins and it was all working, only you really understood the mechanics of how it worked. How long would it take someone on your team to onboard and get familiar with or learn that basic understanding of how to operate it?

Stephen Gregory  19:13
It definitely took a couple of days to get a new person inside of the code and looking at how everything’s working [in] the pipeline, and then also getting them to understand how something like Kubernetes is working as well. They would then have to learn the entire concept of Kubernetes all at once, whereas within Harness right now, we can give some boilerplate stuff for deployments; they don’t really have to know about how a lot of that stuff works, but they can still go in and modify things and be productive pretty quickly without having to understand ingresses and services.

Steve Burton  19:55
Right, so it’s that abstraction almost? And what were some of the benefits? Chris talked about saving engineering time and maintenance and just making it more repeatable and modular. What was the big win for your team?

Stephen Gregory  20:13
A lot of it has been, much like Chris mentioned, bringing on a new environment. When I had that Jenkins server deployed directly, we had one environment that was kind of a prototype environment because we had new customers that might be coming on to this environment. Now we’ve got three of those environments. If we had to bring on another one of those environments, it wouldn’t take any time. In doing that— bringing on new services— we have two new services where, whereas historically we have been deploying a monolith, we’re starting to bring on some new services: not microservices, but more individual, specific-purpose services.

And we’ve been able to just deploy those everywhere. We don’t have to go through and tweak a script to make sure everything works. We have the definition now, and the same way we deploy everything else, we deploy this new service.

Steve Burton  21:17
Great, thanks for sharing. So just to summarize, I guess you’ve got two options, right? You can keep Jenkins for CI, [which] I think both of you’ve done, because the cutoff point is really once you do the build and the tests, and you’ve got a package and an artifact that generally sits in a repo like JFrog or Sonatype or Docker Hub— then simply attaching your CD solution to that repo allows you to distribute it across the different environments. So you can see there, I think you talked about look[ing] at Spinnaker as well as lots of other solutions. I know Harness was mentioned, but there are lots of other tools you can evaluate and look at for your requirements.

Number two, how do you feel about modernizing CI? Is Jenkins just running out of time with CI? Is it good enough? How many years do you think it’s got left for dev teams to keep using it for CI?

Chris Jowett  22:13
I can go first on that one. I think it really depends on your organization’s requirements and what you’re building. There’s a lot of organizations where they’re deploying Java; it’s super common in the enterprise, they’re probably using Gradle, or Maven as a build system, and that’s going to handle most of your build and test to at least get your JAR files going. And then you just, for the most part, wrap it up into a Docker Image and publish it somewhere. And for that use case, just Jenkins works really [well] because it’s a fairly simplistic use case for a CI.

But there are opportunities to do far more advanced things with your CI, especially with all of the advances in machine learning and stuff that’s being added into build processes. I mean, could you do that in Jenkins? Maybe. I mean, at this point, I don’t know of any Jenkins plugins that do any machine learning stuff, but I’m sure someone will write one eventually.

The other question is not “can you?”. It’s “should you?”. Like, do you have ML experts on staff and stuff like that? If you’re looking to leverage some of that more modern functionality, then yeah, absolutely look at a CI platform that’s going to give you that functionality so you don’t have to hire a team of people that helped build AlphaGo, or something like that.

So I think it just heavily depends on what your use case is, and what you need out of your CI platform. You’re just going to have to look at your requirements and determine what kind of tooling fulfills those requirements at the price point that you want to hit. And so I think it’s going to be a very personal decision from organization to organization.

Steve Burton  24:00
Yeah, yeah.

Stephen Gregory  24:01
I think for us, we’ve just been running the same thing we’ve been running for years. It runs tests, builds stuff— but at the same time, it also takes a really long time. Some of that is because we have tests that probably aren’t worth running anymore, that take a lot of time. And we can either spend the time going through and cleaning out and rewriting all of our tests (and we should probably be deleting some of them), or maybe there are some better tools out there to help us find a better way to run them. There have been some on the rise that we’ve kind of started to look at, so [we’re] getting more curious about that.

Steve Burton  24:40
So if I understand this, there are advances in how you can optimize the build and test cycles. You can effectively reduce the size of containers, reduce the test cycle time spent, and be a bit more intelligent on when to run tests and when not.

Cool, just want to pause there for a second. [Addressing audience] We’d like this to be interactive; there’ll be a Q&A at the end if you’ve got any questions as we go through. I can see a few coming in now. We got a question from Sandeep: “What would be the best practice to set up a highly available Jenkins for VMware infrastructure?” So if you think about it, do you cluster the Jenkins servers, do you have failover? How have you architected Jenkins CI [previously]?

Chris Jowett  25:26
Yeah, so that’s definitely a challenge with Jenkins. There are things that you can do [or] can look into, like pipeline resiliency, so that pipelines can be resumed after Jenkins restarts. There are ways that you can run multiple Jenkins Masters, but I’ve had varying levels of success with all of those ways. Especially since we run most of our infrastructure in Kubernetes, you have to run on the assumption that every now and then, your stuff is going to get restarted.

And so we have actually just put a policy on our Jenkins pod, like a pod disruption budget, to tell Kubernetes to try its best to not restart that pod. Then when we run our builds inside of Kubernetes workers, inside of other Kubernetes pods, the pipelines can be resumed if needed. But we realized for the most part our pipelines are very short-lived— it runs the build, it finishes. Then if Jenkins restarts, it’s not a big deal because there are probably not any pipelines running.
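The policy Chris mentions maps to a Kubernetes PodDisruptionBudget, which guards against voluntary disruptions such as node drains (it does not prevent crashes or node failures). A hypothetical sketch for a single-replica Jenkins, with assumed names and labels, might look like:

```yaml
# Hypothetical PodDisruptionBudget for a single-replica Jenkins deployment.
# minAvailable: 1 tells Kubernetes not to voluntarily evict the only pod,
# e.g. during a node drain.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: jenkins-pdb
  namespace: ci
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: jenkins
```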

Making a proper HA Jenkins setup is definitely a challenge. Jenkins was first made back in the early 2000s— man, that makes me feel old— and anytime you’re trying to make something HA [and] resilient [that] was built when those were not major concerns in the industry, it’s going to be a challenge. And the way that you achieve that is going to highly depend on your infrastructure and its capabilities.

Steve Burton  27:05
Got it. Another question for Chris. [Reading question] “We hear people use the words toolchain or pipeline; when should you use one or the other, or do you actually care?” What do you call it— a toolchain, a pipeline? How do you describe the CI/CD mechanisms?

Chris Jowett  27:23
For me, a toolchain would be the actual tooling that you’re using to do your build. So for our builds, we have Gradle run our builds— that’s part of our toolchain; SonarQube runs our code quality analysis— that’s part of our toolchain; we have Twistlock that runs some static analysis of the code— that’s part of our toolchain. And then the pipeline is the code that weaves all of that together to give you your end result. So the toolchain is the specific tools that you’re using, and the pipeline is the code that actually invokes them. In old-school terms, the pipeline’s kind of like a Makefile.
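Chris’s toolchain/pipeline distinction can be sketched as a tiny script: each function wraps one tool from the toolchain, and the script itself is the pipeline that weaves them together. The tool invocations here are placeholders (echo) standing in for real commands such as `gradle build` or `sonar-scanner`:

```shell
#!/bin/sh
# Hypothetical pipeline script. The toolchain is the set of tools each
# function wraps; the pipeline is the order and wiring below.
set -e

build()   { echo "toolchain: build (e.g. gradle build)"; }
quality() { echo "toolchain: quality (e.g. sonar-scanner)"; }
scan()    { echo "toolchain: scan (e.g. twistcli images scan)"; }

# The pipeline: invoke the toolchain steps in order.
build
quality
scan
```

Because each step is a self-contained unit, swapping one tool (say, a different scanner) changes only that function, not the rest of the pipeline.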

Steve Burton  28:02
Got it. A question for Stephen: “Did you look at any of the solutions from cloud providers like AWS Code Pipeline, or any of the native tooling that you get?”

Stephen Gregory  28:13
Yeah, so that was definitely something we explored a little bit. I think AWS provides a platform for building your own platform, and so there’s a lot of opportunity to build something really great with it. But [along] the same lines of “you can build out anything with Jenkins”, you can build out anything with the Code Pipeline. The advantage [of] Spinnaker, Harness, those types of tools, is [that] they’re a little bit more opinionated around how things can be done; the structures that they provide are much more in line with what we’re trying to do, as opposed to trying to define our own thing and use Code Pipeline and Code Build and all of those tools to get something very specific. So I think that’s really the difference there.

Steve Burton  29:17
Thanks, we’ll leave it there, and we’ll catch up on the rest of the questions towards the end of the session.

Here’s a curveball; I’m sure many Jenkins-passionate people are screaming at me saying, “Why not Jenkins X?” So let’s discuss Jenkins X. And any thoughts, opinions on Jenkins X?

Stephen Gregory  29:39
So I had not spent a lot of time with Jenkins X. When I was bringing up Jenkins, Jenkins X was definitely one of the things I was looking at because I was trying to run Jenkins inside of a cluster. The downside there seems to be that a lot of it is very command line-oriented— which isn’t necessarily a bad thing— but when I was trying to bring on a team that is very used to having a GUI, a web interface for doing these things, that was a downside. So [it was] trying to not move everybody from this website to these command-line tools.

Chris Jowett  30:18
And for us, I’ve used Jenkins X a little bit. But for the most part, I found it to be a different skin on top of the same old dog, right. So echoing the “it’s all command line”— I’m a command-line guy, I run Gentoo Linux on my desktop; I’m about as command line as you can get. I love command-line tools. But you know who doesn’t love command-line tools? People who work on the business side of the house; they need to be able to see things. QA analysts need to be able to see things. You’d be surprised how many software developers are not command-line people. They have their IDEs, and that GUI gives them a lot of assistance.

So, going way back to the beginning, when Steve was talking about a tool that only really he and his team know how to work— Jenkins X would be great for me and my team, but then it’s a tool that only my team and I know how to work. And that’s not useful for the company. And also, it didn’t fix a lot of the problems with Jenkins: we still have to build it. We still have to maintain it. And we still have to do the care and feeding. It’s just a different animal that we’re having to care for and feed.

Steve Burton  31:34
Or, we’ve got option four: let’s just keep building things. So interesting question— what do you think are the right things DevOps engineers should be building? So obviously, you can build anything. But in 2021, what do you think are the right, valuable things to build versus potentially looking at leveraging or using as a service? Stephen, if you want to kind of start and give us your thoughts?

Stephen Gregory  31:59
Sure. I think that it’s hard to avoid building things that are core to your company. What are your company’s core competencies? What is it that your company does? Lessonly builds training software, and we try to integrate with a whole bunch of different things. Inside of a lesson, you might have video, you might have images, all that sort of stuff. We’ve tried not to build the actual handling of all [those] images and videos, but we’re finding that we may need to bring that type of thing in-house rather than use a third-party or off-the-shelf solution, because it’s so core to what we’re doing on a daily basis.

Things like getting the software that we built to the production servers— our customers aren’t going to see that. If we’re doing it right, there will hopefully be a positive impact because we’re not going to be spending all of our time shipping software. You know, running `make`, tarring up the files, SSH’ing them off, and running install scripts; that’s not going to help anybody. So anything that we can do to remove that burden on the developers, without having to really spend time on it, is definitely something that makes sense to not build.
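
The manual loop Stephen describes can be sketched roughly as the script below. This is purely illustrative — every path, hostname, and filename is invented, not taken from the webinar — but it shows the kind of build-tar-ship-install routine a CD tool automates away:

```shell
#!/bin/sh
# Hypothetical sketch of the manual deploy loop: build, tar up the
# files, SSH them off, run an install script. All paths and hosts
# below are invented for illustration.
set -e

APP_DIR=app                 # stand-in for the build output directory
RELEASE=release.tar.gz

mkdir -p "$APP_DIR"
echo "built artifact" > "$APP_DIR/artifact.txt"   # stand-in for `make`

# Tar up the files.
tar -czf "$RELEASE" "$APP_DIR"

# The shipping steps a CD tool replaces (shown as comments only):
#   scp "$RELEASE" deploy@prod-server:/tmp/
#   ssh deploy@prod-server 'tar -xzf /tmp/release.tar.gz && ./install.sh'

echo "packaged $(tar -tzf "$RELEASE" | wc -l) entries"
```

Every run of this by hand is engineer time spent on something no customer ever sees — which is exactly the burden being discussed.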

Steve Burton  33:24
Yeah, I think, Chris, you made the analogy of why do we want to build buttons?

Chris Jowett  33:29
Yeah, we were talking previously about a well-done pipeline. It’s like, I click a button and my code goes into an environment. If you’re doing full CD, then automated testing deploys the next environment on its own, [and] you have as few gates as possible. But everyone that consumes the CI/CD system just sees buttons; they click a button and things happen. The real magic comes in building that button, right? And you know, you want it to seem like it’s really easy because you want it to be really easy— but in reality, there is a lot of work that goes into building that button.

As far as should you keep building buttons, should you keep building CI/CD pipelines…. The answer is, like a lot of things I’ve said, it depends on your organization’s requirements. It especially depends on your funding. If you’re a startup and you have very little funding and [are] trying to get your business off the ground, there are probably better places for you to spend your very limited funding.

Like Stephen was talking about, spend that money on your company’s core competency [and] core product—because if you can’t get people consuming your core product, then your business is not going to stay around. And in those cases, you probably should build your own CI/CD pipelines, but at some point, you’ll hit this mark where you’re spending more and more time on managing and maintaining those CI/CD pipelines.

And there’s a crossover point on the graph of “how much does this tool cost” versus “how much does an engineer cost”. That’s the point when the tool is cheaper than an engineer, [and when] you should probably look into paying for the tool because now you don’t have to hire another engineer. Where that point falls varies from company to company.
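
That crossover is simple break-even arithmetic. Every figure below is invented for illustration — the point is the comparison, not the numbers:

```python
# Break-even sketch for the "tool cost vs engineer cost" crossover.
# All numbers are hypothetical, not from the webinar.
engineer_cost_per_year = 150_000   # fully loaded cost of one engineer
pipeline_maintenance_share = 0.5   # fraction of that engineer spent on pipelines
tool_cost_per_year = 50_000        # vendor CI/CD license

# Annual engineering cost of building and maintaining pipelines in-house.
in_house_cost = engineer_cost_per_year * pipeline_maintenance_share

# Past the crossover point, the tool costs less than the engineering
# time it replaces, so buying beats building.
buy_is_cheaper = tool_cost_per_year < in_house_cost
print(f"in-house: ${in_house_cost:,.0f}/yr, tool: ${tool_cost_per_year:,.0f}/yr, buy: {buy_is_cheaper}")
```

With these made-up figures, half an engineer costs more per year than the tool, so the crossover has already been passed; a company with cheaper engineers or a pricier tool would land on the other side of the line.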

Stephen Gregory  35:30
And then at that point, you know, you can start working on standardizing how these services go out, so that the next time we’ve got another one, it’s easy; we don’t have to think about it.

Chris Jowett  35:41
Yeah, the more self-service you can make things, the fewer engineers you have to pay. Because the most expensive part of all of this is going to be your engineers. Your software developers are expensive; DevOps engineers are expensive. That’s a major cost saving— if you can pay fewer people, then you can do more cool things, invest more in your company’s core competency, and invest more in your actual product, not the things that make your product.

Steve Burton  36:11
Got it. And so I think we addressed the free lunch question quite nicely. I think what I heard there was: if it gets to a point where building the solution takes one dedicated resource, do you want a dedicated resource building buttons, or a dedicated resource building your business and your core functionality?

So a question for both of you. Stephen, which is more important to you: your time or cost?

Stephen Gregory  36:43
I mean, there’s obviously a tradeoff there. I’ve been at a small company with two or three developers, and at that point in time, we definitely did not have any sort of budget to spend. But at Lessonly, we’ve got a team of 20 developers, and it’s definitely worth us spending our money in places where we can make everybody more productive, right? If we don’t have to have everybody waiting 20 minutes for a CI build because we can spend a little bit more on CI and get some more build machines, it’s definitely worth it. The same thing on the CD portion of it— the less we have our engineers working on the less flashy parts that nobody sees, the more time they can spend focusing on actually delivering customer value. Then it definitely makes sense to spend money on those tools.

Chris Jowett  37:40
The way that I look at it is that it’s not time versus money, because time is money. If I have an engineer that spends three hours working on something, there’s a cost associated with those three hours. Everyone knows the phrase “time is money”, so time is important to track, but you cannot ignore that time does cost you money. So that’s where you make that choice I was talking about earlier: how much time is your engineer taking to manage and maintain the system? How much is that system costing you? Even though Jenkins is free, it’s not actually free— going back to the free lunch thing. Your engineers are costing you money to manage, maintain, build, and develop all of your pipelines.

Steve Burton  38:37
Right, that’s a good point. So if we kind of summarize it, lessons learned: if you were to do this all over again, what would be your top tips to DevOps practitioners like yourself? So if you can playback the last two years, what advice would you give?

Stephen Gregory  38:57
I think the time I spent at some smaller organizations made me a little bit shy away from options that cost money, and I definitely regret the amount of time I spent trying to run my own Jenkins server and running Spinnaker by myself. But I definitely learned a lot in that process, so I’m not too upset about it. Still, I wouldn’t shy away from bringing up the budget conversation a little bit earlier.

Chris Jowett  39:29
Yeah, and I would definitely say the same thing as well. I’m very cost-averse; I like to penny-pinch wherever I can. I watch the AWS billing reports like a hawk and try to find anywhere that I can make things cheaper. Harness is probably only the second enterprise, for-cost tool that I’ve brought into our infrastructure— other than, obviously, AWS, [which] I don’t consider to be an enterprise tool; that’s almost a requirement. And I probably would have been a little bit less averse to that cost because, in reality, it’s actually saved us money; I’ve effectively gotten an engineer back.

Steve Burton  40:18
Right. Anything, you know now that you’d change? Like any hiccups, any challenges, any things you wish you’d done differently?

Chris Jowett  40:32
I mean, I would just go back to what I was saying earlier about making sure that you compartmentalize your pipelines. It’s not that I did something differently, but it’s something that worked really, really well for us. Like I was talking [about] earlier, the fact that we had a very clear delineation between where CI stopped and where CD started— that made the whole transition to a different deployment tool very, very easy for us.

There’s a lot of people that don’t really understand the difference between Continuous Integration and Continuous Deployment— and if you fall in that boat, where you’re not sure what we’re talking about with CI and CD, definitely read up on that and learn about it and know where that line is in your pipelines. It’s a pretty important distinction to make— usually the line is right around “I’ve built an artifact, I’m deploying the artifact.” That’s the line; that’s where it changes. Know where that line is, and enforce that line. Don’t let bits of your pipeline live on both sides of that line. That is a really critical bit if you want to maintain flexibility in the future.
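
One hypothetical way to picture that delineation is a Jenkins declarative pipeline that deliberately stops at publishing the artifact — stage names and the registry URL below are invented for illustration, and everything past “the artifact exists” would live in the CD tool:

```groovy
pipeline {
    agent any
    stages {
        // CI side of the line: build and test the code.
        stage('Build') {
            steps { sh 'mvn -B package' }
        }
        // CI ends here: an artifact is published to a registry.
        stage('Publish Artifact') {
            steps {
                sh 'docker build -t registry.example.com/app:${GIT_COMMIT} .'
                sh 'docker push registry.example.com/app:${GIT_COMMIT}'
            }
        }
        // Deliberately no deploy stages: everything after "the artifact
        // exists" belongs to the CD tool (Harness, Spinnaker, Argo, etc.).
    }
}
```

Keeping the deploy stages out of the Jenkinsfile entirely is what made Chris’s team’s switch of deployment tools easy: the CI side never had to change.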

Steve Burton  41:45
Alright, thanks for that. Let’s get to the questions; we’ve got lots of questions lined up. Let’s put [these] questions on a plate.

A question from Nicolas: “Don’t all tools have the same inherent flaw? If you use it wrong, then everything on it’s unmaintainable.” So I think the feeling here is: isn’t Harness just the same as Jenkins and CircleCI?

Chris Jowett  42:04
I can field that one. Yeah, any tool used incorrectly will do incorrect things. The difference between tools comes in how easy it is to use the tool correctly and, more importantly, how easy it is to use it incorrectly. It’s very easy to do really suboptimal things in Jenkins because it’s not a CD platform. The quote I used before is “when you install Jenkins, you get a web interface and an 85-year-old guy with gray hair.” And then from there, you can build a CI system and you can build a CD system— but it’s very easy to get that wrong. Whereas other tools like Spinnaker, Argo, Harness— they’re more opinionated, and therefore it’s easier to get it right and not do those incorrect things.

But yes, you’re correct: if you do bad things, bad things will happen. It’s a tautology. It’s just about how easy it is to do things right.

Stephen Gregory  43:12
Yeah, it’s about using the right tool for the right job. You don’t hammer in the screw, right? You’ve got a screwdriver for that. So try to make sure you do it using the right thing for what you’re doing.

Steve Burton  43:26
Question for Stephen: “Can you debug pipelines live in Harness? Does it give you that visibility in the console and walk through the step by step execution?”

Stephen Gregory  43:37
Yeah, so there’s definitely a lot of visibility into each step. I’ve been very impressed with the ability to handle failures at different steps and where things are happening. There’s a great visual layout of exactly how everything’s executing and all the logs coming from each of the different steps that can happen. So it’s been pretty easy to figure out why things aren’t doing what they’re supposed to be doing.

Chris Jowett  44:13
The visibility is definitely top-notch. It’s really, really good.

Steve Burton  44:19
I’ll pay you afterward for that, gentlemen. [Chuckles] Question for both of you on security: “What are you using to do security scans, vulnerability, static code analysis, anything like that in the pipeline?”

Chris Jowett  44:36
For us, we’re using a combination of SonarQube and Twistlock. SonarQube does some static code analysis, but it’s more focused on code quality; it’ll pick up some security issues. But we’re also using Twistlock, which is another paid product that does more security-oriented static analysis of our artifacts. We’re always interested in improving our security posture, though. It’s a never-ending cat-and-mouse game, right? The tools I’m using today do not necessarily reflect the tools I’ll be using in two years, because the security landscape changes on a daily, if not hourly, basis.

Stephen Gregory  45:28
Yeah, we’re using some static analysis tools. We’ve got a lot of Ruby, so things like Brakeman for static analysis and that type of thing are there. And then, outside of the pipeline, we’ve got all sorts of other scanners happening with Amazon Inspector and Tenable and all those things as well.

Steve Burton  45:50
Got a question from Simeon: “When Jenkins is CI/CD itself, why do we need additional CI/CD tools?”

Chris Jowett  46:03
So that goes back to what we were talking about earlier: using the right tool for the right job. To be really clear, Jenkins is not a CI/CD platform. They say it is, but it’s not. Jenkins is a framework with which you can build a CI/CD system, so it all comes down to how much time you want to invest into Jenkins to build a CI/CD system. Can you do it? Yes. Can it work well? Yeah, you can build a well-functioning CI/CD system in Jenkins. It’s just a matter of how long it’s going to take you to do that— and then, anytime you need to modify what you do as part of your CI and CD pipelines, how long that modification is going to take you.

So going back to what Stephen said earlier, it’s not that that one tool is bad. It’s about using the right tool for the right job. A hammer is great; a screwdriver is great. One of them is clearly better for putting in a screw.

Steve Burton  47:06
Question from Hamad: “Stephen, what process are you using to build container image artifacts inside Jenkins pods inside Kubernetes?”

Stephen Gregory  47:17
So we’re just using Docker at this point— Docker build to build the images.

Steve Burton  47:29
If money was not an issue, what would you do? Good question.

Stephen Gregory  47:36
Yeah, that’s a good question. I think right now, I’m happy with how everything’s working. I think we’ve figured out our CD solution pretty well; the next thing we’d look at is how to optimize the CI portion of it, because that’s our problem right now.

Chris Jowett  47:58
I think that question is also a little bit flawed because it kind of insinuates that you can make the situation better by just throwing money at the problem. That’s not the right way to think about it; it’s not about how I could make it the best if I had all the money in the world. It’s: I have a problem— how can I solve that problem? Very rarely does throwing money at a problem make it any better. It just makes the problem more expensive.

Steve Burton  48:30
Gentlemen, I think we’re running out of time. So I’d just like to say thank you to both Stephen and Chris for their time and knowledge. If any of you want to take a free trial of Harness, go to staging-devharnessio.kinsta.cloud. You can sign up; you don’t have to speak to anyone. We’ve got a free version as well. It’s free for life; there are no clauses in that, [so] you can deploy away.

So with that, gentlemen, thanks once again. Look forward to doing it another time.

Charlene O’Hanlon  49:05
Thank you. Great presentation, guys— lots of great information. Alright, Steve, Stephen, and Chris— thank you all so much for a great presentation. I know the audience got a lot out of it, judging from the questions that came in. Really, really good and useful information. So thank you again for your time and your expertise, all three of you. I also want to thank the audience for joining me today. This is Charlene O’Hanlon, and I am signing off. Have a great day everybody, and please stay safe!