November 15, 2024

Auto Remediate Security Vulnerabilities with Harness AI


Addressing vulnerabilities at each stage of the software delivery process is essential to prevent security issues from escalating into active threats. For DevOps and security teams, minimising time-to-remediation (TTR) is critical to delivering more secure applications without degrading velocity. With Harness’s Security Testing Orchestration (STO) module, teams can quickly identify and assess application security risks. At the same time, remediation often becomes a delivery bottleneck due to the time, expertise, and coordination needed to fix vulnerabilities.

With Harness AI integrated into STO, remediation becomes significantly faster and easier. Harness AI leverages generative AI to provide effective remediation guidance, reduce developer toil, manage security backlogs, address critical vulnerabilities, and even let teams make code suggestions or create pull requests directly from STO. This approach improves TTR and enhances security responsiveness throughout the software lifecycle without compromising speed.

How does the Harness AI remediation LLM work?

Security scans for container images, code repositories, and IaC can be triggered automatically through your configured pipelines, either by events or on a set schedule, and no pipeline changes are needed to use Harness AI. Once scans are complete, Harness AI in STO works directly with the scan results, analyzing them and providing clear, context-driven remediation for vulnerabilities identified throughout the software delivery lifecycle. Here's what the complete process looks like.


The results from each successful scan, detailing the specific security issues found, are examined and processed to generate a structured prompt. This prompt is then passed to the Harness ML infrastructure, which invokes LLM APIs built on top of foundational LLMs from Google and OpenAI. From STO, developers can also raise pull requests and make code suggestions directly against the source code.
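As a rough illustration of this hand-off, the sketch below sends a pre-built vulnerability prompt to a foundational model using the public OpenAI Python SDK. Harness's own ML infrastructure, prompt format, and model selection are internal; the client, model name, and prompt wording here are assumptions for illustration only.

```python
# Illustrative only: Harness's ML infrastructure and prompt format are internal.
# This sketch shows the general pattern of sending a structured remediation
# prompt to a foundational LLM (here via the public OpenAI SDK).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a security remediation assistant. Given a vulnerability report "
    "and the affected code, return a remediation plan and a fixed code snippet."
)

def request_remediation(structured_prompt: str, model: str = "gpt-4o") -> str:
    """Send a pre-built, structured vulnerability prompt to the LLM."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": structured_prompt},
        ],
        temperature=0,  # deterministic output keeps remediation guidance consistent
    )
    return response.choices[0].message.content
```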

The following sections dive into specific aspects of this workflow, detailing how Harness AI processes scan data and delivers targeted remediation guidance.

Model inputs for vulnerability processing

Effective interactions with LLMs rely heavily on prompt quality. For LLMs to generate precise, contextually accurate responses, they must receive well-structured, comprehensive prompts that cover all relevant details of the task. In the context of security vulnerability remediation, a high-quality prompt ensures that the LLM can interpret the issue accurately and provide actionable, reliable solutions.

In Harness STO, the prompt creation process involves gathering and structuring detailed data from each scan. For every identified security issue, STO collects key information (assembled into the prompt as sketched after the list), including:

  1. Vulnerable code snippet
  2. Vulnerability details
  3. Scanner-suggested remediation
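As a loose illustration of how these inputs could be captured and formatted, consider the sketch below; the data class, field names, and prompt template are hypothetical and do not reflect STO's internal prompt format.

```python
# Hypothetical sketch of how a scan finding could be assembled into a prompt.
# Field names and template are illustrative; STO's internal format is not public.
from dataclasses import dataclass

@dataclass
class ScanFinding:
    title: str                  # e.g. "SQL injection in query builder"
    description: str            # vulnerability details from the scanner
    vulnerable_code: str        # affected code snippet, if the scanner provides it
    scanner_remediation: str    # remediation advice reported by the scanner, if any

PROMPT_TEMPLATE = """\
Vulnerability: {title}

Details:
{description}

Vulnerable code:
{vulnerable_code}

Scanner-suggested remediation:
{scanner_remediation}

Provide step-by-step remediation guidance and a corrected code snippet.
"""

def build_prompt(finding: ScanFinding) -> str:
    """Format one scan finding into a structured remediation prompt."""
    return PROMPT_TEMPLATE.format(
        title=finding.title,
        description=finding.description,
        vulnerable_code=finding.vulnerable_code or "(not provided by scanner)",
        scanner_remediation=finding.scanner_remediation or "(not provided)",
    )
```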

When scanner results lack remediation steps or a specific vulnerable code snippet, STO attempts to locate the vulnerable code using contextual clues provided by the scanner, such as file names, line numbers, or other indicators. It then retrieves the relevant snippet from the source code (to which it has access) and incorporates it into the prompt, ensuring our AI has all the information it needs to provide appropriate fixes. All of this information is formatted into a structured prompt and sent to the Harness ML infrastructure for the magic to happen.
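The snippet retrieval step might look something like the sketch below; the function, its parameters, and the context window are hypothetical and simply illustrate looking up a reported file path and line range in the checked-out source.

```python
# Illustrative only: pulling a vulnerable snippet from source when the scanner
# reports a file path and line range but no code. Parameters are assumptions.
from pathlib import Path

def extract_snippet(repo_root: str, file_path: str,
                    start_line: int, end_line: int, context: int = 3) -> str:
    """Return the reported lines plus a few lines of surrounding context."""
    lines = (Path(repo_root) / file_path).read_text().splitlines()
    lo = max(start_line - 1 - context, 0)
    hi = min(end_line + context, len(lines))
    return "\n".join(lines[lo:hi])
```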

After generation, the LLM’s response is standardised for display within the STO scan results, providing developers with consistent and relevant remediation insights across various scan types and stages of the software delivery lifecycle.

Model response with structured output

The remediation suggestions generated by the LLM are carefully structured and presented in a clear, user-friendly format with detailed, actionable steps. For greater contextualisation and accuracy, these suggestions can also be edited and reprocessed. Importantly, the AI-generated code suggestions preserve the code's existing functionality and behavior.

Given the variety of scanners that Harness AI integrates with, a consistent format for remediation output is essential. This standardised structure ensures that AI-generated guidance is easy to interpret and apply. 

Harness AI delivers each response in a standardised format. The exact contents may vary with the issue being processed and the LLM's response, but a typical result includes (a sketch of this structure follows the list):

  1. Vulnerable code snippet
  2. Vulnerability description
  3. Remediation concepts
  4. Steps to remediation with examples
  5. Suggested code changes
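One way to picture that standardised output is as a simple structured record, as in the hypothetical sketch below; the class, field names, and rendering are assumptions, not STO's actual schema.

```python
# Hypothetical representation of the standardised remediation output; the actual
# schema used by Harness STO is internal, and some fields may be absent for a given issue.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RemediationResult:
    vulnerable_code: Optional[str]   # 1. vulnerable code snippet
    description: str                 # 2. vulnerability description
    concepts: str                    # 3. remediation concepts
    steps: list[str]                 # 4. steps to remediation, with examples
    suggested_code: Optional[str]    # 5. suggested code changes

def render(result: RemediationResult) -> str:
    """Render the structured result into text shown alongside scan results."""
    sections = [
        ("Vulnerable code", result.vulnerable_code),
        ("Description", result.description),
        ("Remediation concepts", result.concepts),
        ("Steps", "\n".join(f"{i}. {s}" for i, s in enumerate(result.steps, 1))),
        ("Suggested changes", result.suggested_code),
    ]
    return "\n\n".join(f"{title}:\n{body}" for title, body in sections if body)
```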

Auto Create Pull Requests and Code Suggestions

For code repository scans, specifically SAST and secret detection scans, Harness AI enables remediation directly at the source code. With AI-generated remediation suggestions, developers can either open a new pull request (PR) to address the issue or integrate fixes into an existing PR that is already in progress.

The Suggest Fix option, which makes an inline code suggestion, appears when the affected file matches a file modified in the open PR. This ensures that suggested changes are applied directly to the relevant code without requiring a separate PR, streamlining the remediation process.

The Create Pull Request option is available for both branch scans and PR scans. In the case of PR scans, this option is shown only if the remediation applies to files outside those already modified in the current PR. This allows you to address new or existing vulnerabilities in the base branch without altering the active PR’s changes directly.
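The choice between the two options, as described above, can be summarised as a small decision rule. The sketch below is only illustrative; the function and field names are hypothetical.

```python
# Hypothetical sketch of the decision described above; names are illustrative.
def remediation_actions(affected_file: str,
                        pr_modified_files: set[str] | None) -> list[str]:
    """Return which remediation actions apply for a finding."""
    if pr_modified_files is None:
        # Branch scan: no open PR context, so a new PR can always be raised.
        return ["create_pull_request"]
    if affected_file in pr_modified_files:
        # PR scan and the file is already part of the PR: suggest the fix inline.
        return ["suggest_fix"]
    # PR scan but the file lies outside the PR's changes: raise a separate PR
    # against the base branch instead of touching the active PR.
    return ["create_pull_request"]

# Example: a SAST finding in src/db/query.py during a PR that only touches src/api/handler.py
print(remediation_actions("src/db/query.py", {"src/api/handler.py"}))  # ['create_pull_request']
```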

Editable remediation and contextual customisation

To refine the AI’s recommendations, STO allows users to edit the suggested remediation. Teams can supply additional context, such as related code, helping the AI understand the codebase better and offer even more precise suggestions. Quality gates ensure that any edits remain safe, filtering out potentially harmful code and limiting inputs to safe, relevant content. Refer to our documentation for more information.
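To make the idea of a quality gate concrete, here is a deliberately simplified sketch; the size limit and blocked patterns are invented for illustration, and Harness's actual checks are internal and far more thorough.

```python
# Deliberately simplified illustration of a quality gate on user-supplied context;
# the limit and patterns below are assumptions, not Harness's actual checks.
import re

MAX_CONTEXT_CHARS = 4000
BLOCKED_PATTERNS = [
    r"ignore (all|previous) instructions",  # basic prompt-injection phrasing
    r"api[_-]?key\s*[:=]",                  # avoid echoing hard-coded secrets
]

def validate_context(user_context: str) -> bool:
    """Accept the edit only if it is reasonably sized and matches no blocked pattern."""
    if len(user_context) > MAX_CONTEXT_CHARS:
        return False
    return not any(re.search(p, user_context, re.IGNORECASE) for p in BLOCKED_PATTERNS)
```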


Security and Privacy measures

Security and privacy are top priorities for Harness, and the entire Harness AI process goes through security and privacy reviews, incorporating both standard and internal protocols to protect Harness users. Additionally, we implement hard checks to safeguard against AI-specific risks, such as prompt injection attacks and hallucination exploits. These reviews evaluate potential risks related to security, inappropriate content, and model reliability, ensuring that our AI-based solutions maintain the highest standards of safety.

To further enhance our AI-driven solutions, we track anonymised telemetry data on usage by account and issue. This anonymised data allows us to make data-driven improvements without accessing or identifying specific customer information.

Learn more

To learn more about Harness STO and auto-remediation, please visit our documentation.
