Harness Product Update for April 2018
Given the rate of progress, I thought it would be a good idea to summarize what's new in Harness, or more specifically, what our customers have been asking for lately.
Here’s a snapshot of our own Continuous Delivery process over the past 30 days: we did 901 deployments (roughly 30 per day), of which 17% failed, meaning we’re catching lots of defects and bugs before our code is pushed into production for customers. We also deployed to over 2,700 compute instances during this period.
I’ve split this product update into three sections that mirror the core capabilities of Harness:
Smart Automation
Smart Automation allows customers to build deployment pipelines in minutes. We do this by providing native integration for technology stacks and tools that customers use to run their apps and services.
One major deliverable back in February was our Configuration-As-Code feature. Users can now code deployment pipelines in YAML and use Git for version control. Read More: Blog.
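To give a feel for what this looks like, here’s an illustrative sketch of a pipeline defined as code. To be clear, this is not Harness’s actual YAML schema; the stage, environment, and workflow names are hypothetical.

```yaml
# Illustrative pipeline-as-code sketch -- not Harness's actual YAML schema.
# Stage, environment, and workflow names here are hypothetical.
pipeline:
  name: order-service-deploy
  stages:
    - name: dev
      environment: development
      workflow: rolling-deploy
    - name: qa
      environment: qa
      workflow: rolling-deploy
      verification: true        # run automated health checks after deploying
    - name: prod
      environment: production
      workflow: blue-green-deploy
      approval: manual           # require sign-off before production
```

Because the definition lives in Git, pipeline changes get the same review, diff, and rollback treatment as application code.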
We also enhanced our Kubernetes automation with support for:
- Horizontal Pod Autoscaling
- Ingress Controllers
- Istio Rules (required for Blue/Green deployments and traffic splitting; see the sketch after this list)
- Azure Kubernetes Service
- Helm
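As one concrete example from that list, below is a minimal Istio route rule (in the v1alpha2 syntax of the era) that splits traffic between blue and green versions of a service. The service name and weights are hypothetical.

```yaml
# Example Istio route rule splitting traffic 90/10 between the blue and
# green versions of a hypothetical "order-service".
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: order-service-blue-green
spec:
  destination:
    name: order-service
  route:
    - labels:
        version: blue
      weight: 90
    - labels:
        version: green
      weight: 10
```

Shifting these weights incrementally is what turns a Blue/Green deployment into a gradual, verifiable cutover rather than an all-at-once switch.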
Regarding Amazon Web Services, we released CodeDeploy, CodePipeline and Lambda support (Read More: Blog | Video) back in November last year, and we just released support for AMI artifacts and AWS Fargate. Amazon EKS support will follow shortly.
To assist with infrastructure provisioning, we recently added HashiCorp Terraform support, and also added CI support for Helm and Microsoft Team Foundation Server (TFS).
And finally, we added a new catalog feature that allows users to “catalog” their own existing shell scripts and reuse them across Harness deployment pipelines.
Continuous Verification
Continuous Verification allows customers to automate deployment health checks using machine learning and their existing APM and log analytics tools.
In addition to supporting AppDynamics, New Relic, Splunk, ELK, Sumo Logic and Logz.io we now support Dynatrace, Datadog, Prometheus and AWS CloudWatch. Read More: Blog
We’ve also added a new Real-time Continuous Verification Dashboard to the main navigation. This dashboard summarizes all deployment verifications (tests and health checks) and lets users drill down in two clicks to the root cause of each failure. Read More: Blog | Video
Our data science team has also been experimenting with Neural Nets as a more accurate way to detect anomalies in application log files. Currently, we use KMeans clustering with Jaccard and Cosine distance measures. With the addition of Neural Nets, we now have more options for anomaly detection. Check out the comparison below: on the left is how traditional algorithms (A) detect anomalies (red), and on the right is how Neural Nets (B) interpret the same data and detect anomalies (red).
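For reference, both distance measures are the textbook definitions, applied (roughly speaking) to log messages treated as sets of tokens or as term-frequency vectors:

```latex
% Jaccard distance between the token sets A and B of two log messages
d_J(A, B) = 1 - \frac{|A \cap B|}{|A \cup B|}

% Cosine distance between term-frequency vectors x and y
d_C(x, y) = 1 - \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert}
```

Roughly, messages that sit far from every known cluster under these measures are the candidates flagged as anomalies.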
Continuous Security
Continuous Security allows customers to audit, govern, and control their deployment pipelines. We heard the need for security loud and clear from customers at the inception of Harness, and as you’d expect, you’ll see plenty more security-related capabilities in the future.
Last quarter we added Audit Trail and Secrets Management so customers can manage secrets within Harness. They can also use their own store such as HashiCorp Vault.
The first big update is our new Connected On-Premises deployment option, which allows enterprises to deploy and manage Harness within their own data center, behind their own firewall. Read More: Blog
Second, we introduced our new Role-Based Access Control (RBAC) capability, with LDAP/SAML/Okta integration for Authentication and flexible, granular permissions for Authorization. Read More: Blog | Video
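To make the permission model concrete, here’s a purely illustrative sketch of how granular authorization rules compose. This is not Harness’s actual configuration format, and the group, application, and environment names are hypothetical.

```yaml
# Purely illustrative RBAC sketch -- not Harness's actual configuration
# format. Group, application, and environment names are hypothetical.
roles:
  - name: qa-deployer
    permissions:
      - applications: [order-service, payment-service]
        environments: [dev, qa]
        actions: [deploy, read]
  - name: release-manager
    permissions:
      - applications: ["*"]
        environments: [production]
        actions: [deploy, approve, read]
groups:
  - name: qa-team              # membership resolved via LDAP/SAML groups
    roles: [qa-deployer]
```

Authentication (who you are) comes from LDAP, SAML, or Okta; authorization (what you’re allowed to do) is scoped down to specific applications, environments, and actions.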
In addition, we added IP Whitelisting to further restrict where Harness can be accessed from, and Usage Restrictions so users can limit Cloud Provider accounts and tools to specific applications and environments.
That’s it for this month’s update. A massive thank you to our engineering team for getting ship done!
Don’t forget you can request your trial of Harness right here.
Cheers,
Steve.
@BurtonSays
So you had 17% of failed deployments? This is close to one out of five. Isn’t this a bit high? How many tests do you have? What is your coverage?
The whole purpose of a deployment pipeline is to kill a release candidate. We have automated testing as part of our deployment pipeline, which is why many deployments fail during our development and QA stages. Catching failures in these environments means we prevent issues from reaching our customers in production.