Harness AI Blogs


December 2, 2025

Today, we’re excited to announce our expanded partnership with Amazon, bringing together the power of Amazon Kiro, Amazon Q Developer, and Harness SaaS on AWS to revolutionize how your team builds, troubleshoots, secures, and deploys software. This collaboration is designed to deliver a seamless, intelligent, and scalable software delivery experience for all AWS customers.

Harness SaaS on AWS: Powering Cloud-Native Delivery

Harness SaaS on AWS empowers engineering and DevOps teams to deliver software faster, safer, and more efficiently across cloud-native environments. By leveraging the robust infrastructure of AWS, Harness provides an AI platform that scales with your business needs. With Harness SaaS on AWS, you get:

  • Accelerated, Reliable Deployments: Build and execute complex pipelines in minutes, supporting advanced deployment strategies like blue/green, canary, and rolling updates for AWS services such as EKS, ECS, Lambda, and Auto Scaling Groups.
  • Smart Automation & Continuous Verification: Harness leverages AI to run only the tests affected by code changes, detect anomalies, and automatically roll back failed deployments, minimizing downtime and risk.
  • Automated Security & Compliance: Built-in secret management, audit trails, granular RBAC, and continuous vulnerability scanning ensure every deployment is secure and compliant.
  • Cloud Cost Optimization: Harness Cloud Cost Management gives FinOps and engineering teams granular visibility and automated orchestration to maximize savings and optimize AWS consumption.

AI-Powered Software Delivery, Right in Your IDE

The Harness Platform is now even more powerful with the integration of Amazon Kiro and Q Developer. This agentic IDE revolution enables developers to manage, optimize, and resolve CI/CD pipeline, security, and testing issues directly from their development environment using natural language. No more switching between Kiro and the Harness platform. Now, you can ask Kiro for your AWS infrastructure information, along with Harness pipeline executions, in the context of your IDE. Learn more about the different use cases in this blog post.

Context is the key for any AI to be effective. Harness’s Software Delivery Knowledge Graph ensures that this automation is tailored to your AWS environment, providing actionable insights rather than generic recommendations. Whether you’re a seasoned expert or a new team member, this integration makes shipping code faster and more intuitive. Learn more about the Knowledge Graph.

Real Value, Real Results

Harness is already helping customers achieve accelerated build and deployment using AI-powered features such as Test Intelligence and Continuous Verification. Organizations such as United Airlines and Trust Bank have enhanced their software delivery experience with Harness and AWS.

For Trust Bank specifically, Harness's cloud-first capabilities enable efficient deployment of changes across its containerized applications on AWS, utilizing Amazon EKS. Trust Bank also leverages AWS services such as Amazon Athena, Amazon RDS, and Amazon Aurora, resulting in a resilient, secure, and optimized infrastructure that scales with its growth. The bank sought a solution that would integrate with its AWS environment, reduce developer workload, and offload the onerous work of the software delivery lifecycle so teams could prioritize innovation. By integrating Harness with AWS and automating compliance, security, and risk checks in the CI/CD pipeline, Trust Bank cut its deployment lead time from two weeks to just 24 hours. Automated compliance ensures all industry-standard controls are met, maintaining high security, and frees Trust Bank to focus on delivering innovative products rather than being hindered by manual processes.

Built for Shipping Software with Confidence

This partnership delivers enterprise-grade automation, contextual intelligence, and seamless cloud-native delivery, all backed by the reliability of AWS. Harness SaaS on AWS is available for purchase through the AWS Marketplace, making it easy for your organization to get started and scale as your needs grow. Through Amazon Kiro, Amazon Q Developer, and Harness SaaS on AWS, Harness and Amazon are committed to removing bottlenecks and empowering teams to ship software with confidence.

Register for the “Securing AI Velocity” webinar to learn more about how Harness and AWS can make software delivery better.

November 15, 2024

Addressing vulnerabilities at each stage of the software delivery process is essential to prevent security issues from escalating into active threats. For DevOps and security teams, minimising time-to-remediation (TTR) is critical to delivering more secure applications without degrading velocity. With Harness’s Security Testing Orchestration (STO) module, teams can quickly identify and assess application security risks. At the same time, remediation often becomes a delivery bottleneck due to the time, expertise, and coordination needed to fix vulnerabilities.

With Harness AI integrated into STO, the remediation process advances significantly. Harness AI leverages generative AI to provide effective remediation guidance, reduce developer toil, manage security backlogs, address critical vulnerabilities, and even allow teams to make direct code suggestions or create pull requests from STO. This approach improves TTR and enhances security responsiveness throughout the software lifecycle without compromising speed.

How does the Harness AI remediation LLM work?

Security scans for container images, code repositories, and IaC can be initiated by triggering the configured pipelines, either through event-based automation or on a set schedule, and you do not need to make any changes to your pipeline to use Harness AI. Once scans are complete, Harness AI in STO analyzes the scan results and provides clear, context-driven remediation for vulnerabilities identified throughout the software delivery lifecycle. Here's what the complete process looks like.


The results from each successful scan—detailing specific security issues—are examined and processed to generate a structured prompt. This prompt is then passed to the Harness ML infrastructure, which invokes LLM APIs built on top of foundational LLMs from Google and OpenAI. Harness also allows developers to raise pull requests and make code suggestions directly to the source code from STO.

The following sections dive into specific aspects of this workflow, detailing how Harness AI processes scan data and delivers targeted remediation guidance.

Model inputs for vulnerability processing

Effective interactions with LLMs rely heavily on prompt quality. For LLMs to generate precise, contextually accurate responses, they must receive well-structured, comprehensive prompts that cover all relevant details of the task. In the context of security vulnerability remediation, a high-quality prompt ensures that LLMs can interpret the issue accurately and provide actionable, reliable solutions that align with the team's needs.

In Harness STO, the prompt creation process involves gathering and structuring detailed data from each scan. For every identified security issue, STO collects key information, including:

  1. Vulnerable code snippet
  2. Vulnerability details
  3. Scanner-suggested remediation
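A minimal sketch of how such a structured prompt might be assembled from the three inputs above (the field names and prompt layout are illustrative, not the actual Harness schema):

```python
from dataclasses import dataclass


@dataclass
class ScanIssue:
    # The three key inputs STO collects per issue (illustrative names)
    code_snippet: str
    details: str
    scanner_remediation: str


def build_prompt(issue: ScanIssue) -> str:
    """Assemble a structured remediation prompt from scan data."""
    return (
        "### Vulnerable code\n" + issue.code_snippet + "\n"
        "### Vulnerability details\n" + issue.details + "\n"
        "### Scanner-suggested remediation\n" + issue.scanner_remediation + "\n"
        "Provide step-by-step remediation and a fixed code snippet."
    )
```

Keeping the prompt in a fixed, labelled layout like this is what lets the downstream response be standardised as well.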

Also, when scanner results lack remediation steps or a specific vulnerable code snippet, STO attempts to locate the vulnerable code using contextual clues provided by the scanner, such as file names, line numbers, or other indicators. It then retrieves the relevant code snippet from the source code (to which it has access) and incorporates it into the prompt, ensuring the AI has all the information it needs to provide appropriate fixes. All this information is formatted into a structured prompt and sent to the Harness ML infrastructure for processing.
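The fallback described above—recovering a snippet from the file name and line number the scanner reported—could look something like this hypothetical helper (not Harness's actual implementation):

```python
def locate_snippet(source: str, line: int, context: int = 3) -> str:
    """Extract the code around a scanner-reported line number when the
    scan result lacks an explicit vulnerable snippet (illustrative)."""
    lines = source.splitlines()
    # Clamp the window to the file boundaries; `line` is 1-indexed
    start = max(0, line - 1 - context)
    end = min(len(lines), line + context)
    return "\n".join(lines[start:end])
```

The extracted window is then embedded in the prompt exactly as a scanner-provided snippet would be.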

After generation, the LLM’s response is standardised for display within the STO scan results, providing developers with consistent and relevant remediation insights across various scan types and stages of the software delivery lifecycle.

Model response with structured output

The remediation suggestions generated by the LLM are carefully structured and presented in a clear, user-friendly format with detailed, actionable steps. For greater contextualisation and accuracy, these suggestions can also be edited and reprocessed. Importantly, the AI-generated code suggestions preserve the code's functionality and do not alter its behavior.

Given the variety of scanners that Harness AI integrates with, a consistent format for remediation output is essential. This standardised structure ensures that AI-generated guidance is easy to interpret and apply. 

Harness AI delivers each response in a standardised format; while the exact contents may vary based on the issue processed and the LLM response, it typically includes:

  1. Vulnerable code snippet
  2. Vulnerability description
  3. Remediation concepts
  4. Steps to remediation with examples
  5. Suggested code changes
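As a rough illustration, the standardisation step can be thought of as mapping whatever sections the model returned onto this fixed order, dropping anything unexpected (the section keys are invented for the sketch):

```python
# The five standardised sections, in display order (illustrative keys)
SECTIONS = [
    "vulnerable_code_snippet",
    "vulnerability_description",
    "remediation_concepts",
    "remediation_steps",
    "suggested_code_changes",
]


def standardise(raw: dict) -> dict:
    """Map a variable LLM response onto the fixed section order,
    omitting sections the model did not return and discarding extras."""
    return {key: raw[key] for key in SECTIONS if key in raw}
```

This is why results look consistent in the STO UI even though the underlying scanners and model outputs vary.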

Auto Create Pull Requests and Code Suggestions

For scan results from code repository scans (specifically, SAST and secret detection scans), Harness AI enables remediation directly at the source code. With AI-generated remediation suggestions, developers can either open a new pull request (PR) to address the issue or integrate fixes into an existing PR if one is already in progress.

The Suggest Fix option, which makes an inline code suggestion, appears when the affected file matches a file modified in the open PR. This ensures that suggested changes are applied directly to the relevant code without requiring a separate PR, streamlining the remediation process.

The Create Pull Request option is available for both branch scans and PR scans. In the case of PR scans, this option is shown only if the remediation applies to files outside those already modified in the current PR. This allows you to address new or existing vulnerabilities in the base branch without altering the active PR’s changes directly.
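The selection logic described in the last two paragraphs can be sketched as a single decision (a hypothetical function, not the actual Harness implementation):

```python
from typing import List, Optional, Set


def remediation_options(
    affected_file: str,
    pr_modified_files: Optional[Set[str]] = None,
) -> List[str]:
    """Decide which remediation action to surface.

    pr_modified_files is None for a branch scan, or the set of files
    changed in the open PR for a PR scan (illustrative sketch).
    """
    if pr_modified_files is not None and affected_file in pr_modified_files:
        # Affected file is already part of the open PR: inline suggestion
        return ["Suggest Fix"]
    # Branch scan, or file outside the PR's changes: raise a new PR
    return ["Create Pull Request"]
```

The key design point is that a fix never silently edits files the open PR did not already touch; those go through a fresh PR against the base branch.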

Editable remediation and contextual customisation

To refine the AI’s recommendations, STO allows users to edit the suggested remediation. Teams can add context, such as additional code, helping the AI understand the codebase better and offer even more precise suggestions. Quality gates are in place to ensure that any edits remain safe, filtering out potentially harmful code and limiting inputs to safe, relevant content. Refer to our documentation for more information.
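In its simplest form, a quality gate of this kind might screen user-supplied context against a blocklist of unsafe patterns before it reaches the model (the patterns below are invented for illustration; Harness's actual gates are not public):

```python
import re

# Illustrative patterns a gate might reject: prompt-injection phrasing
# and attempts to smuggle in network calls
BLOCKED_PATTERNS = [
    r"(?i)ignore (all|previous) instructions",
    r"(?i)curl\s+https?://",
]


def passes_quality_gate(user_context: str) -> bool:
    """Return True only if the edited remediation context contains
    none of the blocked patterns (simplified sketch)."""
    return not any(re.search(p, user_context) for p in BLOCKED_PATTERNS)
```

A production gate would combine pattern checks like this with model-based screening for inappropriate content and reliability risks, as the Security and Privacy section below describes.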


Security and Privacy measures

Security and privacy are top priorities for Harness, and in Harness AI the entire process goes through security and privacy reviews, incorporating both standard and internal protocols to protect Harness users. Additionally, we implement hard checks to safeguard against AI-specific risks, such as prompt injection attacks and hallucination exploits. This process evaluates potential risks related to security, inappropriate content, and model reliability, ensuring that the AI-based solutions maintain the highest standards of safety.

To further enhance our AI-driven solutions, we track anonymised telemetry data on usage by account and issue. This anonymised data allows us to make data-driven improvements without accessing or identifying specific customer information.
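One common way to make per-account telemetry anonymous, consistent with the description above, is a one-way salted hash: the same account always maps to the same opaque token, so usage can be aggregated without the raw identifier ever leaving the pipeline (this is an illustrative sketch; Harness's actual anonymisation scheme is not public):

```python
import hashlib


def anonymise_account(account_id: str, salt: str = "telemetry-salt") -> str:
    """One-way hash an account ID so usage can be counted per account
    without identifying it (illustrative; salt would be secret)."""
    digest = hashlib.sha256((salt + account_id).encode("utf-8")).hexdigest()
    return digest[:16]  # truncated token is enough for aggregation
```

Because the hash is deterministic per account but irreversible, it supports the data-driven improvements described above without exposing customer identity.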

Learn more

To learn more about Harness STO and auto-remediation, please visit our documentation.
