
TL;DR: Today, Harness is introducing the Harness Cursor Plugin, bringing the power of the Harness AI-native software delivery platform directly into Cursor. This integration, along with the Harness Secure AI Coding hook for Cursor, allows developers and AI agents to move from code changes to vulnerability detection, CI/CD execution, security validation, approvals, deployments, and operational insight without leaving the editor.
AI has completely changed how we write code. You can spin up functions, refactor entire files, and generate tests in seconds. The inner loop, writing and iterating on code, has never been faster. But the moment you try to ship that code, everything slows down. This is what we call the AI Velocity Paradox.
You are suddenly back to juggling pipelines, waiting on approvals, checking security scans, debugging failed runs, and bouncing between tools just to get a change into production.
That gap, between fast code and slow delivery, is what we kept running into. So we built something to fix it.
Today, we are introducing the Harness Cursor Plugin, a way to go from PR to production without leaving your editor.
If you are using agentic coding tools, such as Cursor, you have probably felt this.
You can:

- Spin up functions in seconds
- Refactor entire files in one pass
- Generate tests on demand

But shipping still depends on everything outside your editor:

- CI pipelines
- Security scans
- Approvals
- Deployments and rollbacks
And none of that got simpler just because AI showed up. In fact, AI makes the problem more obvious.
Now you can create changes faster than your delivery process can safely handle. And if those controls are not tight, you are introducing a whole new category of risk. Fast-moving code with fragmented governance.
AI did not break software delivery. It exposed how disconnected it already was.
Instead of jumping between tools, what if you could just tell your editor what you want to happen?
Something like:
“Deploy PR #4821 to staging once the security scan passes, and Slack me if anything fails.”
That is the idea behind the Harness Cursor Plugin.
It connects Cursor directly to Harness, so you can trigger and manage your entire delivery workflow using natural language, right inside Cursor.

No tab switching. No manual orchestration. No guessing what is happening in the pipeline.
Once connected, you can use Cursor to interact with your delivery system just as you do with your code.
For example, you can:

- Trigger CI/CD pipelines for a branch or PR
- Check the status of builds, scans, and deployments
- Promote a change to staging or production once checks pass
- Debug failed runs without leaving the editor
This builds on what we introduced last month, Secure AI Coding, which integrates directly with Cursor and scans code at the moment of generation rather than waiting for a PR review. Developers see inline vulnerability warnings with the option to send flagged code back to the agent for remediation, without leaving their workflow. Under the hood, it leverages Harness's Code Property Graph (CPG) to trace data flows across the entire codebase, surfacing complex vulnerabilities that simpler linting tools would miss.
The key thing is that you are no longer just interacting with code. You are interacting with the entire delivery system from the same place.
One of the biggest concerns with AI in delivery is obvious:
“Are we about to let agents push code to production without guardrails?”
No.
With Harness, everything runs through the controls that you can rely on:

- Pipeline policies and governance rules
- Security scans that gate promotion
- Approval workflows that still require the right sign-offs
- Audit trails of who did what, and when
Instead of being manual checkpoints spread across tools, they are enforced automatically as part of the workflow while you stay in flow.
So AI can help move things faster, but it cannot bypass the governance that matters.
Most integrations today expose APIs or bolt AI onto existing systems. That is not what we wanted to do.
We designed the Harness Cursor Plugin specifically for how AI agents actually work. Shipping software is not a single action; it is a chain of decisions across CI, CD, security, approvals, and operations. If AI is going to help here, it needs access to that full picture. That's where the Harness Software Delivery Knowledge Graph comes in: it provides the context AI needs to take actions on your behalf.
The knowledge graph models the relationships between services, pipelines, environments, policies, and operational signals in real time. Instead of treating each step in delivery as an isolated task, it creates a connected system of record that AI can reason over. This allows agents to understand not just what to do, but when and why to do it, based on dependencies, risk signals, and historical behavior.
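As a way to make this concrete, here is a minimal sketch of the kind of dependency question a delivery knowledge graph lets an agent answer. The entity names, relations, and structure below are purely illustrative; this is not the Harness schema or API.

```python
from collections import defaultdict

class DeliveryGraph:
    """Toy model of a delivery knowledge graph: nodes are entities
    (services, pipelines, environments) and edges are typed
    relationships between them."""

    def __init__(self):
        # (source node, relation name) -> set of destination nodes
        self.edges = defaultdict(set)

    def relate(self, src, relation, dst):
        self.edges[(src, relation)].add(dst)

    def neighbors(self, src, relation):
        return self.edges[(src, relation)]

    def blast_radius(self, service):
        """Environments reachable from a service via its pipelines --
        the kind of dependency question an agent asks before deploying."""
        envs = set()
        for pipeline in self.neighbors(service, "deployed_by"):
            envs |= self.neighbors(pipeline, "targets")
        return envs

g = DeliveryGraph()
g.relate("payments-svc", "deployed_by", "payments-ci-cd")
g.relate("payments-ci-cd", "targets", "staging")
g.relate("payments-ci-cd", "targets", "production")

print(sorted(g.blast_radius("payments-svc")))  # ['production', 'staging']
```

The point of the sketch is the traversal: because relationships are modeled explicitly, "what does deploying this service touch?" becomes a graph query rather than tribal knowledge.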

In practice, this means smarter automation: deployments that adapt to context, approvals that are triggered based on policy and impact, and faster root cause analysis because the system already understands how everything is connected.
This is not just about convenience. It is a shift in how software actually moves from idea to production.
Instead of:

- Writing code in one tool
- Running pipelines in another
- Checking scans in a third
- Chasing approvals over chat and email

You get a single, connected workflow:

- Code change → vulnerability detection → CI/CD execution → security validation → approval → deployment → operational insight
All accessible from your editor. Cursor accelerates the building. Harness governs the shipping. And the handoff between the two disappears.
Watch the demo:
If you want to try it, start with a simple prompt. For example:
“Run the CI pipeline for this branch, check if the security scan passed, and promote to staging if it did.”
That is it.
AI is not just changing how we write code. It is changing expectations for how fast we should be able to ship it. But speed without control does not work in real environments. What we are building toward is something simpler:
A world where every step, from PR to production, is:

- Automated where it can be
- Governed where it must be
- Visible the whole way through
Without forcing developers to leave their flow. This plugin is one step in that direction.

“We’ve been operating in a hybrid environment with both OpenTofu and Terragrunt, and Harness has made it much easier to bring those workflows together into a single, consistent platform with IaCM. The addition of Terragrunt support is a valuable step toward simplifying how we manage infrastructure at scale.”
— Lead Platform Engineer, Enterprise Customer
Infrastructure as Code is now a standard for modern cloud operations, with most enterprises using IaC to provision and manage environments. However, as adoption grows, so does complexity. Teams are no longer managing a handful of environments. They are operating across multiple regions, accounts, and services, often at massive scale.
This is where traditional approaches begin to fall short.
As organizations scale their infrastructure, Terraform alone is often not enough. Teams adopt Terragrunt to manage complex, multi-environment deployments, but they are often forced to stitch together fragmented tooling that lacks visibility, governance, and consistency.
At Harness, we are changing that.
Today, we are excited to announce native Terragrunt support in Harness IaCM, bringing it to full parity with Terraform and OpenTofu while delivering capabilities that go beyond what is available in standalone tooling. This is more than support. It is about making Terragrunt a first-class platform for enterprise infrastructure management.
With Harness IaCM, teams can now:

- Run Terragrunt natively, at full parity with Terraform and OpenTofu
- Manage complex, multi-environment deployments from a single platform
- Apply consistent governance, policies, and approvals to Terragrunt workflows
- Gain centralized visibility and auditability across their infrastructure
Terragrunt has become a critical layer for managing infrastructure at scale because it simplifies how teams structure and reuse configurations across environments. Harness builds on that foundation with deep, native integration, enabling platform teams to operate with both flexibility and control.
This is especially important for enterprises where a single deployment spans multiple environments and services. Harness abstracts that complexity while maintaining governance, auditability, and consistency.
Terragrunt is part of a broader shift toward multi-tool infrastructure strategies.
Modern teams are no longer standardized on a single IaC tool. Instead, they operate across:

- Terraform
- OpenTofu
- Terragrunt
- Developer-centric frameworks like AWS CDK
This creates challenges around consistency, visibility, and governance. Harness IaCM is built for this reality. We are evolving IaCM into a unified control plane for multi-IaC workflows, where teams can manage different frameworks with a consistent experience, shared policies, and centralized visibility.
This means teams can stop managing infrastructure in silos and instead operate from a single platform across the entire lifecycle.
The next phase of Infrastructure as Code is not just about supporting more tools. It is about making infrastructure systems more intelligent and automated.
We are investing in two key areas:
We are continuing to support modern frameworks like AWS CDK, enabling developer-centric infrastructure workflows alongside provisioning, configuration, and orchestration tools.
We are introducing intelligence into IaC workflows to simplify tasks such as drift management and optimization. This helps teams reduce manual effort and operate more efficiently at scale.
Together, these investments move IaCM toward a unified, multi-IaC platform that combines flexibility, governance, and automation. Terragrunt has become essential for managing infrastructure at scale, but until now it hasn't had a platform that truly supports it. As infrastructure continues to grow in complexity, our focus remains the same: helping teams move faster, reduce risk, and scale with confidence, no matter which IaC tools they use.
We’ve come a long way in how we build and deliver software. Continuous Integration (CI) is automated, Continuous Delivery (CD) is fast, and teams can ship code quickly and often. But environments are still messy.
Shared staging systems break when too many teams deploy at once, and developers wait on infrastructure changes. Test environments get created and forgotten, and over time what is running in the cloud stops matching what was written in code.
We have made deployments smooth and reliable, but managing environments still feels manual and unpredictable. That gap has quietly become one of the biggest slowdowns in modern software delivery.
This is the hidden bottleneck in platform engineering, and it's a challenge enterprise teams are actively working to solve.
As Steve Day, Enterprise Technology Executive at National Australia Bank, shared:
“As we’ve scaled our engineering focus, removing friction has been critical to delivering better outcomes for our customers and colleagues. Partnering with Harness has helped us give teams self-service access to environments directly within their workflow, so they can move faster and innovate safely, while still meeting the security and governance expectations of a regulated bank.”
At Harness, Environment Management is a first-class capability inside our Internal Developer Portal. It transforms environments from manual, ticket-driven assets into governed, automated systems that are fully integrated with Harness Continuous Delivery and Infrastructure as Code Management (IaCM).

This is not another self-service workflow. It is environment lifecycle management built directly into the delivery platform.
The result is faster delivery, stronger governance, and lower operational overhead without forcing teams to choose between speed and control.
Continuous Delivery answers how code gets deployed. Infrastructure as Code defines what infrastructure should look like. But the lifecycle of environments has often lived between the two.

Teams stitch together Terraform projects, custom scripts, ticket queues, and informal processes just to create and update environments. Day two operations such as resizing infrastructure, adding services, or modifying dependencies require manual coordination. Ephemeral environments multiply without cleanup. Drift accumulates unnoticed.
The outcome is familiar: slower innovation, rising cloud spend, and increased operational risk.
Environment Management closes this gap by making environments real entities within the Harness platform. Provisioning, deployment, governance, and visibility now operate within a single control plane.
Harness is the only platform that unifies environment lifecycle management, infrastructure provisioning, and application delivery under one governed system.
At the center of Environment Management are Environment Blueprints.
Platform teams define reusable, standardized templates that describe exactly what an environment contains. A blueprint includes infrastructure resources, application services, dependencies, and configurable inputs such as versions or replica counts. Role-based access control and versioning are embedded directly into the definition.

Developers consume these blueprints from the Internal Developer Portal and create production-like environments in minutes. No tickets. No manual stitching between infrastructure and pipelines. No bypassing governance to move faster.
Consistency becomes the default. Governance is built in from the start.
Environment Management handles more than initial provisioning.
Infrastructure is provisioned through Harness IaCM. Services are deployed through Harness CD. Updates, modifications, and teardown actions are versioned, auditable, and governed within the same system.
Teams can define time-to-live policies for ephemeral environments so they are automatically destroyed when no longer needed. This reduces environment sprawl and controls cloud costs without slowing experimentation.
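The time-to-live idea above can be sketched in a few lines. This is an illustrative model of TTL-based cleanup, not Harness's implementation; field names like `ttl_hours` are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

def expired_environments(environments, now=None):
    """Return names of ephemeral environments whose time-to-live has
    elapsed and which are therefore candidates for automatic teardown.
    Each environment is a dict with 'name', 'created_at', 'ttl_hours'."""
    now = now or datetime.now(timezone.utc)
    return [
        env["name"]
        for env in environments
        if now - env["created_at"] > timedelta(hours=env["ttl_hours"])
    ]

now = datetime(2026, 1, 15, 12, 0, tzinfo=timezone.utc)
envs = [
    # A preview environment past its 24-hour TTL: flagged for teardown.
    {"name": "pr-preview", "created_at": now - timedelta(hours=30), "ttl_hours": 24},
    # A longer-lived QA environment still within its 72-hour TTL: kept.
    {"name": "qa-shared", "created_at": now - timedelta(hours=30), "ttl_hours": 72},
]
print(expired_environments(envs, now=now))  # ['pr-preview']
```

Running a sweep like this on a schedule is what turns "remember to delete your environment" into a policy the platform enforces.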
Harness EM also introduces drift detection. As environments evolve, unintended changes can occur outside declared infrastructure definitions. Drift detection provides visibility into differences between the blueprint and the running environment, allowing teams to detect issues early and respond appropriately. In regulated industries, this visibility is essential for auditability and compliance.

For enterprises operating at scale, self-service without control is not viable.
Environment Management leverages Harness’s existing project and organization hierarchy, role-based access control, and policy framework. Platform teams can control who creates environments, which blueprints are available to which teams, and what approvals are required for changes. Every lifecycle action is captured in an audit trail.
This balance between autonomy and oversight is critical. Environment Management delivers that balance. Developers gain speed and independence, while enterprises maintain the governance they require.
"Our goal is to make environment creation a simple, single action for developers so they don't have to worry about underlying parameters or pipelines. By moving away from spinning up individual services and using standardized blueprints to orchestrate complete, production-like environments, we remove significant manual effort while ensuring teams only have control over the environments they own."
— Dinesh Lakkaraju, Senior Principal Software Engineer, Boomi
Environment Management represents a shift in how internal developer platforms are built.
Instead of focusing solely on discoverability or one-off self-service actions, it brings lifecycle control, cost governance, and compliance directly into the developer workflow.
Developers can create environments confidently. Platform engineers can encode standards once and reuse them everywhere. Engineering leaders gain visibility into cost, drift, and deployment velocity across the organization.
Environment sprawl and ticket-driven provisioning do not have to be the norm. With Environment Management, environments become governed systems, not manual processes. And with CD, IaCM, and IDP working together, Harness is turning environment control into a core platform capability instead of an afterthought.
This is what real environment management should look like.


Modern software delivery has evolved far beyond single-service deployments. Today's releases span dozens of services, multiple teams, and complex approval workflows—coordinated through spreadsheets, Slack channels, and manual checklists scattered across tools. When a production release involves deploying ten microservices across three environments, enabling five feature flags, running security scans, collecting approvals from four stakeholders, and coordinating with three different teams, the question isn't whether you can ship—it's whether you can track what shipped, when it shipped, and who approved it.
Release Orchestration solves this. It provides a unified framework for modeling, scheduling, automating, and tracking complex software releases across teams, tools, and environments—giving you end-to-end visibility from planning through production deployment and monitoring.
Without orchestration, enterprise releases become coordination nightmares. Status lives in spreadsheets that go stale within hours. Coordination happens through email threads spanning dozens of messages. There's no single source of truth for what was deployed, when, or by whom. Manual checklists drift out of sync. Approval workflows rely on memory and goodwill. And when something goes wrong at 2 AM, reconstructing what happened requires archaeology across multiple systems.
Release Orchestration transforms this chaos into structured, auditable, repeatable processes. Model your release blueprint once—defining phases, activities, dependencies, and approval gates—then execute it repeatedly with different configurations. Automate pipeline-backed steps while retaining manual sign-offs where governance requires them. Track activity-level status, phase-level progress, and overall release health in real time. Enforce approvals, capture sign-offs, and maintain a full audit trail linking code to deployment to business outcome.
The result? Releases that used to require days of coordination now run faster with complete visibility and zero spreadsheets.
Release Orchestration introduces a structured, visual approach to modeling and executing releases. Define Processes—reusable blueprints composed of Phases (Build, Testing, Deployment) and Activities (automated pipelines, manual approvals, or nested subprocesses). Release Groups define cadences and automatically generate releases. The Release Calendar provides unified visibility across all releases. The Activity Store and Input Store promote reusability—define once, execute many times with different configurations. And ad hoc releases let you execute any process on demand when you need flexibility outside your regular schedule.
At its core, Release Orchestration delivers the foundational capabilities enterprise teams need: process modeling with visual editors, scheduled and recurring releases through release groups, real-time execution tracking with dependency management, comprehensive audit trails for compliance, and AI-powered process creation that transforms natural language descriptions into structured workflows. These capabilities form the foundation for enterprise release management at scale.
Release Orchestration launches with a comprehensive set of capabilities designed for enterprise release management. Here's what you can do today.
Not every release fits a regular schedule. Customer-specific deployments, unscheduled maintenance, and process testing need one-off releases. Ad hoc releases let you create and execute releases on demand: select a process, configure timing, provide inputs, and optionally run immediately. Test new processes in isolation, handle customer deployments without disrupting your calendar, or orchestrate emergency maintenance with full tracking and audit capabilities.
Modern releases deploy multiple services across multiple environments. Release Orchestration's input system handles this through variable mapping—define global variables like `releaseVersion` and `targetEnvironment` once, and they flow automatically to all activities. Deploy to QA with "QA Inputs," production with "Production Inputs"—same process, different configurations. This eliminates repetitive data entry, ensures consistency, and scales from three services to thirty without growing complexity.
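The variable-mapping behavior described above boils down to a merge: globals flow to every activity, and an activity's own inputs win on conflict. A minimal sketch, with hypothetical activity and variable names (this is not the Harness input API):

```python
def resolve_inputs(global_vars, activity_overrides):
    """Merge release-level variables into each activity's inputs.
    Globals flow to every activity; an activity can still override
    a value for its own step."""
    return {
        activity: {**global_vars, **overrides}
        for activity, overrides in activity_overrides.items()
    }

global_vars = {"releaseVersion": "2.4.0", "targetEnvironment": "qa"}
activities = {
    "deploy-frontend": {},                                   # inherits everything
    "deploy-backend": {"targetEnvironment": "qa-backend"},   # per-activity override
}
resolved = resolve_inputs(global_vars, activities)
print(resolved["deploy-frontend"]["releaseVersion"])    # 2.4.0
print(resolved["deploy-backend"]["targetEnvironment"])  # qa-backend
```

Swapping `global_vars` for a "Production Inputs" set reruns the same process against production without touching any activity definition.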
Release Orchestration integrates with Harness's centralized notification framework, delivering alerts when releases start, pause for input, complete, or fail. Route notifications to Slack, email, PagerDuty, Microsoft Teams, or webhooks. Platform teams managing multiple releases shift from reactive monitoring to proactive awareness—get notified immediately when action is required.
Compliance reviews and post-mortems require detailed records. Release Orchestration provides downloadable Excel reports with complete execution history—every activity, status, timestamps, approvals, and inputs used. Generate reports for individual releases (sprint retrospectives) or release groups (quarterly audits). Activity-level detail meets compliance needs; process-level overviews serve executive summaries. All execution data is captured in the audit trail, allowing you to reconstruct exactly what happened during any release.
As releases scale, filters help you focus. Filter by source (ad hoc vs recurring), status (in progress, completed, failed), time window (this sprint, Q1 2026), environment (production, staging), or scope (specific orgs/projects). Platform teams filter to ad hoc releases for one-off deployments. Release managers filter by status for in-progress releases. Compliance teams filter by date range for audit periods. Transform an overwhelming calendar into a focused view of exactly what you need.
Production incidents don't wait for your release cadence. Release Orchestration supports hotfix workflows that fast-track emergency releases while maintaining governance. Mark releases as hotfixes to distinguish them in calendars and reports. The system detects execution conflicts—if a hotfix targets an environment where a release is running, you get visibility to coordinate decisions. Hotfixes use the same process structure, ensuring that approvals and audit trails are maintained. The hotfix designation flows through reports and logs, documenting emergency procedures for post-incident reviews. Speed meets governance.
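The conflict detection mentioned above is, at its core, an environment overlap check. Here is an illustrative sketch (release and environment names are invented; this is not the Harness conflict-detection logic):

```python
def conflicting_releases(hotfix_envs, running_releases):
    """Flag running releases that target any environment the hotfix
    also targets, so the release manager can coordinate before
    executing the hotfix."""
    hotfix_envs = set(hotfix_envs)
    return [
        release["name"]
        for release in running_releases
        if hotfix_envs & set(release["environments"])  # any overlap
    ]

running = [
    {"name": "2026.01-regular", "environments": ["staging", "production"]},
    {"name": "mobile-rollout",  "environments": ["mobile-prod"]},
]
print(conflicting_releases(["production"], running))  # ['2026.01-regular']
```

Surfacing the overlap rather than silently blocking it is the design choice: the hotfix may well be more urgent than the in-flight release, and that call belongs to a human.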
Not everything can be automated. Security reviews, architectural approvals, and stakeholder sign-offs require human judgment. Release Orchestration treats manual activities as first-class citizens with the same visibility and dependency support as automated activities. Manual activities pause execution until someone provides input—an approval, verification, or checklist confirmation. Notifications alert the responsible person; they review the context and complete the activity, optionally leaving notes. Manual activities can depend on automated activities (approval after deployment) or vice versa (deployment after approval). All completions appear in audit trails and reports for compliance documentation.
Release Orchestration provides primitives—processes, phases, activities, dependencies, inputs—that compose to match how your organization ships software. Model microservice releases with parallel deployments and end-to-end tracking. Define compliance-driven releases with approval gates at critical checkpoints. Create streamlined hotfix workflows for emergencies. Coordinate feature flag enablement with deployments. Assign phase owners for multi-team coordination with notification-driven handoffs. The system scales from simple three-phase releases to complex workflows with fifty activities and nested subprocesses.
Harness AI transforms natural language descriptions into structured processes. Describe your workflow—"Create a multi-service release with phases for build, testing, deployment, and monitoring, and assign owners for Development, QA, and DevOps"—and AI generates the complete structure with phases, activities, and dependencies. Refine the generated process by adding activities, adjusting dependencies, and configuring inputs. This reduces process modeling time from hours to minutes, making it practical to create specialized processes for different release types.
Release Orchestration provides real-time tracking at three levels: activity (running, succeeded, failed, waiting), phase (overall progress), and process (end-to-end status). The execution graph shows phases as nodes, dependencies as arrows, and color-coded status on each activity. Drill into pipeline executions from the release view with one click. See approval history for manual activities—who approved, when, and with what notes. This unified view eliminates the need to check multiple systems. Platform teams can see at a glance which releases are progressing smoothly, which are awaiting approval, and which need attention. [Learn more →](https://developer.harness.io/docs/release-orchestration/execution/activity-execution-flow)
Release Orchestration is available now in Harness. Contact Harness Support to enable the module for your account. Once enabled, explore Processes (model release blueprints), Release Calendar (schedule and track releases), Activity Store (reusable activities), and Input Store (configuration sets). The getting started guide walks you through creating your first AI-powered process, adding activities, and executing a release.
We're actively developing additional capabilities: deeper analytics and insights (release velocity metrics, phase duration trends, failure pattern analysis), advanced dependency modeling (cross-release dependencies, environment-level locking), enhanced collaboration (in-line comments, Slack-native monitoring), a template marketplace for common release patterns, and API/GitOps for managing processes as code. The roadmap prioritizes capabilities that help teams ship faster with greater confidence.
Software delivery has evolved far beyond single-service deployments, but release management tooling hasn't kept pace. Spreadsheets, email coordination, and manual checklists don't scale to modern microservice architectures, multi-team workflows, and compliance requirements. Release Orchestration provides the unified framework enterprise teams need to model, automate, and track complex releases across teams, tools, and environments.
Define reusable processes. Execute them with different inputs. Track activity-level progress. Enforce approvals and capture sign-offs. Maintain complete audit trails. All in one place, integrated with the pipelines and deployment workflows you already use.
Ready to see it in action? Explore the Release Orchestration documentation or reach out to your Harness account team to discuss how Release Orchestration can transform your release workflows.
The future of release management isn't about doing the same manual coordination faster—it's about orchestrating releases as structured, repeatable, auditable processes. That future is available today.


Welcome to our Q1 2026 Pipeline update! This quarter brings eight major enhancements that make pipeline development faster, validation easier, and governance stronger. From Git tags for immutable pipeline versions to AI-assisted policy authoring, these capabilities address the most common friction points teams encounter when scaling pipeline automation across their organizations. This update complements our Continuous Delivery & GitOps update released today, which covers expansions to the deployment platform and AI-powered verification.
Pipeline development workflows gain significant GitX improvements this quarter, bringing immutable versioning, flexible testing, and pre-commit validation directly into your Git-based workflows.
Pipelines stored in Git can now be triggered and executed from Git tags, not just branches. This unlocks release workflows where pipeline versions align with semantic versioning tags in your repository—when you tag a release as `v2.1.0` in Git, run that exact pipeline version via the UI or API. Tags provide immutable references to specific pipeline states, making it easy to replay historical pipeline configurations for compliance audits, debugging, or managing multiple product versions in parallel.
Learn more about Git tags for pipelines →
Pipeline chaining now supports branch selection for child pipelines, not just the default master branch. When configuring a Pipeline stage, specify which branch of the child pipeline to execute, enabling proper testing of parent-child pipeline integrations before merging to production. This is crucial when output variables from the child pipeline are only available in a feature branch, or when you're testing coordinated changes across multiple chained pipelines.
Learn more about pipeline chaining →
A new validation API lets you check pipeline YAML before committing changes to your repository. The API validates YAML syntax, schema conformance, entity references (Services, Environments, Connectors, Templates), RBAC permissions, OPA policy compliance, and expression syntax—all without actually running the pipeline or updating it in Harness. This closes a critical gap in GitOps workflows: changes made directly in GitHub bypass Harness validation, enabling teams to validate bulk updates in feature branches before merging and to catch configuration errors early.
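To illustrate the class of checks such a validation pass performs, here is a local sketch that validates a parsed pipeline definition: required fields are present and every referenced entity exists. This is not the Harness API or its request shape; the field names and reference types are assumptions for the example.

```python
def validate_pipeline(pipeline, known_refs):
    """Illustrative pre-commit checks on a parsed pipeline definition.
    Returns a list of human-readable errors; an empty list means the
    definition passed these checks."""
    errors = []
    # Required top-level fields must be present.
    for field in ("name", "stages"):
        if field not in pipeline:
            errors.append(f"missing required field: {field}")
    # Every entity reference must resolve to a known entity.
    for stage in pipeline.get("stages", []):
        for ref_type in ("service", "environment", "connector"):
            ref = stage.get(ref_type)
            if ref and ref not in known_refs.get(ref_type, set()):
                errors.append(f"unknown {ref_type} reference: {ref}")
    return errors

known = {"service": {"payments-svc"}, "environment": {"staging"}}
pipeline = {
    "name": "deploy",
    "stages": [{"service": "payments-svc", "environment": "prod"}],
}
print(validate_pipeline(pipeline, known))  # ['unknown environment reference: prod']
```

Wiring a check like this into a pre-commit hook or PR check is what closes the gap: a typo'd environment name fails the branch build instead of the production run.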
Directed Acyclic Graph (DAG) execution support moves to Phase 2 with full UI integration. Define complex step dependencies in which multiple steps can run in parallel but must complete before downstream steps begin, within a single stage. DAG support enables sophisticated deployment patterns, such as parallel infrastructure provisioning followed by application deployment, or concurrent test suite execution with a final aggregation step. The visual graph makes it easy to understand execution flow and identify bottlenecks, while the declarative YAML representation keeps configuration simple.
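The scheduling behavior a DAG enables can be shown with Python's standard-library `graphlib`: given each step's dependencies, group steps into "waves" where everything in a wave runs in parallel and a wave starts only once the previous one finishes. The step names are illustrative; this models the execution semantics, not Harness internals.

```python
from graphlib import TopologicalSorter

def execution_waves(dependencies):
    """Group steps into parallel waves. `dependencies` maps each step
    to the steps it depends on."""
    ts = TopologicalSorter(dependencies)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = list(ts.get_ready())  # all steps whose deps are satisfied
        waves.append(sorted(ready))
        ts.done(*ready)               # mark the wave complete
    return waves

deps = {
    "provision-db":    [],
    "provision-cache": [],
    "deploy-app":      ["provision-db", "provision-cache"],
    "smoke-tests":     ["deploy-app"],
}
print(execution_waves(deps))
# [['provision-cache', 'provision-db'], ['deploy-app'], ['smoke-tests']]
```

This mirrors the pattern described above: both provisioning steps run concurrently, the deployment waits for both, and the smoke tests aggregate at the end.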
Pipeline observability and notification capabilities expand to give platform teams better visibility into queue states and more granular control over failure alerting.
A new Account Settings page surfaces all queued pipelines across your entire account, showing queue position, org/project filters, and estimated execution order. The queue view includes bulk abort capabilities for queued pipelines and is available to Account Admins. For teams using pipeline queues to manage deployment locks or shared resource access, this visibility eliminates the mystery of why a pipeline is waiting and how long it's likely to remain queued.
Learn more about pipeline queuing →
Centralized notifications now support step-specific failure triggers, not just stage-level or pipeline-level failures. Configure notifications to fire only when a particular critical step fails—like a production deployment step or a compliance validation check—reducing alert noise and ensuring teams get notified about failures that actually matter. This granular control means you can route different failure types to different teams or channels: a failed security scan notifies the security team, while a failed deployment step notifies the on-call engineer.
Learn more about pipeline notifications →
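The routing described above is essentially a first-match rule table. A minimal sketch, with invented step names and channels (not the Harness notification configuration format):

```python
def route_failure(step_name, routing_rules, default_channel):
    """Pick the notification channel for a failed step. Rules map a
    step-name substring to a channel; the first match wins, and
    unmatched failures fall through to a default channel."""
    for pattern, channel in routing_rules:
        if pattern in step_name:
            return channel
    return default_channel

rules = [
    ("security-scan", "#security-team"),  # scan failures -> security team
    ("deploy", "#oncall-eng"),            # deploy failures -> on-call
]
print(route_failure("prod-security-scan", rules, "#ci-alerts"))  # #security-team
print(route_failure("unit-tests", rules, "#ci-alerts"))          # #ci-alerts
```

Ordering the rules matters: put the most specific patterns first so a step like `deploy-security-scan` lands with the team you intend.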
OPA policy capabilities receive significant AI-powered enhancements and full GitX integration, making governance more accessible and easier to scale across organizations.
An AI assistant helps write OPA policies, reducing the expertise barrier for policy creation. Describe your governance requirements in natural language, and the assistant generates the corresponding Rego policy with explanations of how it works. This democratizes policy authoring beyond Rego experts, enabling security teams, compliance officers, and platform engineers to codify governance requirements without deep OPA expertise.
Learn more about OPA AI Assistant →
OPA policies now support the full GitX experience, including branch switching, bidirectional sync, and package name management. Policies can be developed and tested in feature branches before rolling out to production, with PR workflows providing change review and approval. This brings the same infrastructure-as-code benefits you have for pipelines and templates to your governance layer, enabling version control, change tracking, and collaborative policy development.
Learn more about OPA GitX integration →
New APIs support evaluation by both policy set IDs and entity-type/action pairs, giving teams greater flexibility in structuring and applying policies across their organizations. This enables more sophisticated policy architectures in which different evaluation strategies can be applied to distinct workflows or organizational structures.
Learn more about OPA policies →
The features highlighted in this update are available now in Harness Platform. Ready to see them in action? We've created a comprehensive video playlist that walks through these capabilities, featuring live demos and configuration guides.
Watch the Q1 2026 Pipeline Feature Playlist →
From Git-based pipeline versioning to AI-assisted policy authoring, this quarter delivers capabilities that streamline development workflows, improve validation practices, and strengthen governance controls. Whether you're managing dozens or thousands of pipelines, these enhancements reduce configuration overhead and align with how modern platform engineering teams scale automation across their organizations.
Be sure to also check out our companion post covering [Continuous Delivery & GitOps innovations](#)—including AI-powered verification, Azure Container Apps support, Windows deployment enhancements, and more.
Explore the documentation links throughout this post to dive deeper into each feature, or reach out to your Harness account team to discuss how these capabilities can accelerate your pipeline development and governance workflows.
What's coming next? Q2 2026 will bring advanced pipeline debugging capabilities, expanded expression engine functionality, and continued investment in GitX experience improvements. Stay tuned for more updates. We're just getting started.


Welcome back to the quarterly update series! If you've been following along, you've seen how Q3 2025 brought [deeper control and strengthened integrations], while Q4 2025 [closed the year strong] with platform upgrades and quality-of-life improvements. The first quarter of 2026 builds on these foundations with AI-powered continuous verification that eliminates configuration overhead, expanded deployment platform support, and GitOps workflow enhancements that align with how teams actually ship software.
Native support for Azure Container Apps brings serverless container orchestration to your Azure workloads with the full Harness deployment experience. Azure Container Apps provides a fully managed platform for running microservices and containerized applications with automatic scaling based on HTTP traffic or events, and now you can deploy to it with the same confidence and control you have for Kubernetes, ECS, and other platforms.
Harness gives you two deployment strategies designed for Azure Container Apps' architecture. Choose Basic deployments for immediate traffic cutover when you need speed, or leverage Canary deployments with progressive traffic shifting (20% → 70% → 100%) using Azure Container Apps' built-in revision management to validate new versions under real production load. The platform includes an automated rollback that captures container app state before deployment, enabling instant recovery if issues arise. Authentication is flexible—support for both Azure OIDC (keyless authentication) and Service Principal methods means you can deploy across subscriptions using a single connector, with full support for Azure Container Registry (ACR) and Docker Hub as artifact sources.
Learn more about Azure Container Apps deployments →
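The progressive traffic-shifting flow described above can be sketched as a loop that advances through the canary stages and rolls back to a pre-deployment snapshot on the first failed health check. The function names and health-check hook below are hypothetical, not the Harness implementation.

```python
# Illustrative canary loop: shift traffic in stages, roll back on failure.
# `shift_traffic`, `healthy`, and `rollback` are caller-supplied hooks.

from typing import Callable

CANARY_STEPS = [20, 70, 100]  # percent of traffic routed to the new revision

def run_canary(shift_traffic: Callable[[int], None],
               healthy: Callable[[], bool],
               rollback: Callable[[], None]) -> bool:
    """Shift traffic progressively; roll back on the first failed check."""
    for percent in CANARY_STEPS:
        shift_traffic(percent)
        if not healthy():
            rollback()  # restore the state captured before deployment
            return False
    return True
```

If every stage passes its health check, the new revision ends up with 100% of traffic; otherwise the captured pre-deployment state is restored.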
This year, we're focusing heavily on Windows deployments to address the performance and scalability challenges that enterprise Windows teams face every day. The two enhancements shipping this quarter are just the beginning—we're bringing the same innovation velocity to Windows deployments that you've come to expect across all Harness platforms. Stay tuned for more Windows Deployment capabilities throughout 2026 that will continue to streamline your deployment processes and eliminate friction in enterprise Windows environments.
Learn more about Windows deployments →
Windows Session Reuse eliminates redundant connection overhead by enabling delegate-wide session pooling, cutting connection setup time from 30-60 seconds to instant reuse in JEA environments. When a command step executes, Harness checks the pool for an existing idle session to the target host with matching credentials and reuses it immediately, dramatically reducing pipeline execution time for workflows with multiple command steps.
Learn more about Windows Session Reuse →
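The pooling behavior described above can be modeled as a map of idle sessions keyed by host and credential: an acquire checks the pool before paying the connection-setup cost, and a release returns the session for reuse. The class below is an illustrative model, not the delegate's actual session manager.

```python
# Illustrative model of delegate-wide session pooling. Sessions are keyed
# by (host, credential) so only matching requests reuse a connection.

class SessionPool:
    def __init__(self, connect):
        self._connect = connect  # factory that opens a new session (slow path)
        self._idle = {}          # (host, credential_id) -> idle session

    def acquire(self, host: str, credential_id: str):
        """Reuse an idle session if one matches; otherwise open a new one."""
        key = (host, credential_id)
        session = self._idle.pop(key, None)
        if session is None:
            session = self._connect(host, credential_id)  # ~30-60s in JEA setups
        return session

    def release(self, host: str, credential_id: str, session) -> None:
        """Return the session to the pool for the next matching command step."""
        self._idle[(host, credential_id)] = session
```

A pipeline with five command steps against the same host would pay the connection cost once instead of five times.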
Multi-Host Deployment with Dynamic Targeting extends Windows Deployment credential setup to dynamically target different hosts, enabling true parallel execution across multiple Windows servers. Configure multiple host groups within a single credential configuration, and Harness automatically routes commands to the appropriate servers based on your deployment strategy. This unlocks centralized credential management while maintaining the security boundaries required in JEA environments, enabling teams managing large Windows server fleets to deploy faster with reduced credential sprawl.
Learn more about Multi-Host Windows Deployments →
Amazon ECS deployments get two powerful new capabilities that bring operational flexibility and automation to your container workloads.
Standalone ECS Scaling lets you scale services up or down without running a full deployment, enabling operators to respond to real-time demand without invoking change management processes. The new ECS Scale step lets you modify desired task counts on demand—whether you're responding to traffic spikes, performing maintenance windows, or testing capacity limits—without redeploying your application.
Learn more about ECS scaling →
ECS Scheduled Actions enable time-based scaling policies directly within your ECS service deployments, eliminating the need to manage scheduled actions separately in the AWS console while keeping your entire ECS configuration under version control. Define scheduled actions to automatically adjust desired task counts at specific times—scale up services before anticipated morning traffic, scale down during off-peak hours, or align capacity with predictable business patterns.
Learn more about ECS scheduled actions →
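The time-based scaling pattern described above amounts to mapping time windows to desired task counts. The schedule, window boundaries, and counts below are hypothetical examples of that logic, not Harness or AWS configuration.

```python
# Hypothetical time-of-day scaling rules: each rule maps an hour window
# [start, end) to a desired task count. Values are illustrative only.

SCHEDULE = [
    (7, 19, 10),   # business hours: scale up for anticipated traffic
    (19, 24, 3),   # evening: scale down during off-peak hours
    (0, 7, 3),     # overnight: stay scaled down
]

def desired_count(hour: int, default: int = 3) -> int:
    """Return the desired task count for a given hour of the day (0-23)."""
    for start, end, count in SCHEDULE:
        if start <= hour < end:
            return count
    return default
```

Keeping rules like these alongside the service definition is what puts the scaling schedule under version control rather than in a separately managed console setting.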
Terraform deployments now include automatic security protections that prevent accidental exposure of sensitive data throughout your pipeline workflows.
Terraform outputs marked as `sensitive = true` are now automatically masked in the Harness UI, preventing accidental exposure of credentials, API keys, and other secrets in pipeline execution logs and output tabs. When Terraform outputs are marked as sensitive, Harness respects that designation and redacts the values wherever they appear—you can still reference these outputs in downstream steps using expressions, but the actual values remain encrypted and hidden from view.
Learn more about masking sensitive outputs →
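The masking behavior can be sketched as a display-layer transform: any output flagged sensitive is rendered as a mask, while non-sensitive values pass through. The function and data shape below (mirroring `terraform output -json`) are an illustrative sketch, not the Harness implementation.

```python
# Illustrative redaction: outputs marked sensitive are masked for display,
# while the raw values remain available for downstream expression resolution.

MASK = "**************"

def render_outputs(outputs: dict) -> dict:
    """Return a display-safe copy of Terraform-style outputs.

    `outputs` maps names to {"value": ..., "sensitive": bool} entries,
    matching the shape of `terraform output -json`.
    """
    return {
        name: MASK if entry.get("sensitive") else entry["value"]
        for name, entry in outputs.items()
    }
```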
This quarter's focus on continuous verification centers on eliminating configuration overhead through AI automation and expanding observability platform integrations. From zero-config deployment health analysis to Git-based configuration management, these capabilities make verification accessible to more teams while reducing the time to production-ready monitoring.
Alongside AI Verify, AI-assisted health source configuration makes traditional verification setup effortless through a guided workflow that discovers available signals from your observability platform, classifies them by deployment impact, and generates verification-ready configurations. Describe your service and monitoring goals in natural language, and the Configuration Agent automatically discovers relevant metrics, organizes them into intelligent categories, and generates the queries and thresholds for you—with human checkpoints for selection and refinement at every stage.
Fine-tune configurations with simple natural language inputs or create custom composite metrics on the fly. What used to take hours now takes minutes.
AI Verify eliminates the manual setup complexity that has traditionally slowed the adoption of continuous verification. No more baseline configuration, threshold tuning, or monitored service management. AI Verify deploys lightweight data-collection plugins into your Kubernetes cluster that collect, aggregate, and provide observability data while stripping personally identifiable information before it leaves your environment.
The plugins gather logs and metrics from your observability platforms and perform statistical and algorithmic anomaly detection. Large language models then contextualize these anomalies against your deployment verification criteria, filter false positives based on business-criticality, and synthesize natural-language root-cause insights with actionable remediation suggestions—all without requiring explicit baseline data. This shifts continuous verification from weeks of configuration work to immediate, intelligent monitoring that understands your services from day one.
Learn more about AI-powered verification →
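As a minimal sketch of the statistical layer in that pipeline (the real system combines multiple detectors with LLM contextualization), a z-score filter over a metric window might look like this. The threshold is an assumption for illustration.

```python
# Minimal z-score anomaly filter: flag samples that deviate strongly
# from the window mean. Threshold is illustrative, not AI Verify's.

from statistics import mean, stdev

def anomalies(samples: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of samples more than `threshold` std-devs from the mean."""
    if len(samples) < 2:
        return []
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # flat signal: nothing deviates
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]
```

Flagged indices would then be handed to the contextualization stage to be confirmed or dismissed as false positives.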
Harness Continuous Verification now supports Dynatrace Query Language (DQL) for querying timeseries metrics from Dynatrace Grail, their next-generation data lakehouse. Craft sophisticated metric analysis using aggregation functions, enable dimension-based data splitting for per-instance continuous verification, and combine multiple data sources in a single query. This extends beyond the traditional Full Stack Observability model, giving you direct access to custom metric queries rather than relying solely on predefined metric packs.
Learn more about Dynatrace DQL support →
GitOps workflows gain AI-powered intelligence, unified notifications, and enhanced PR capabilities this quarter. These improvements streamline application management, improve operational visibility, and align GitOps workflows with how teams naturally collaborate through pull requests.
AI-powered operations management brings natural language queries and intelligent automation to GitOps applications, AppSets, and clusters. Ask questions like "What applications are out of sync?" or "Which syncs failed in the past 24 hours?" and get instant answers drawn from your entire GitOps deployment landscape. The AI agent can also trigger operations—such as syncing all applications managing non-prod services with a single command or generating pipeline snippets for common GitOps workflows. This transforms dashboards and manual queries into conversational operations management, making GitOps accessible to platform teams, developers, and operators alike.
[Learn more about AI-powered GitOps →]
GitOps applications now integrate with Harness's centralized notification framework, bringing the same notification capabilities available for pipelines to your GitOps workflows. Track application sync events—start, complete, success, and failure—alongside ApplicationSet creation, sync, and error events through Slack, email, PagerDuty, Microsoft Teams, or any webhook-compatible system. Configure notification rules at the account, organization, or project level using the same interface you already use for pipeline notifications.
Learn more about GitOps notifications →
GitOps PR-based workflows get two key improvements. The Update Release Repo step can now block until the raised PR is merged, eliminating the need for separate Merge PR steps and manual approval stage coordination—the step creates the PR, waits for review, and proceeds once merged. Squash and Merge Support brings native squash-and-merge strategies to the Merge PR step, working with GitHub App tokens and following your repository's configured merge strategies to maintain a clean, linear repository history.
[Learn more about PR pipelines →]
The features highlighted in this update are available now in Harness CD and GitOps. Ready to see them in action? We've created a comprehensive video playlist that walks through these capabilities, featuring live demos and configuration guides.
Watch the Q1 2026 Feature Playlist →
From AI-powered verification that understands your deployments from day one to Windows performance breakthroughs and GitOps workflow enhancements, this quarter delivers capabilities that eliminate configuration overhead, expand platform coverage, and align with how modern teams ship software.
Explore the documentation links throughout this post to dive deeper into each feature, or reach out to your Harness account team to discuss how these capabilities can accelerate your delivery workflows.
What's coming next? Q2 2026 will bring deeper integrations with cloud-native platforms, expanded AI capabilities across the deployment lifecycle, and continued investment in developer experience improvements. Stay tuned for more updates—we're just getting started.


AWS re:Invent 2025 made one thing very clear: enterprise interest in AI is no longer theoretical. The conversation has moved beyond curiosity. Teams are actively experimenting, leaders are looking for production-ready use cases, and engineering organizations are trying to figure out where AI can create real leverage across software delivery, security, platform engineering, and operations.
That part is real. But after five interviews at the event, I came away with a more important takeaway: AI is not removing the need for engineering discipline. It is increasing it. Many of the challenges organizations are now running into with AI are not really AI problems at all. They are governance problems, process problems, data problems, platform problems, and measurement problems. AI is just making them harder to ignore.
A lot of the market conversation still centers on speed. Faster code generation, faster documentation, faster testing support, faster issue resolution, faster delivery. And there is truth in that. Across the interviews, there was broad agreement that AI is already creating meaningful value across the software development lifecycle, especially by helping teams move faster through repetitive work.
But speed by itself is not the breakthrough. What matters is whether the system around that speed is strong enough to absorb it.
Tim Knapp, who leads Slalom's product engineering capability in Chicago, put it bluntly: you cannot layer AI on top of broken processes and expect transformation. Many enterprises are still operating from a waterfall mindset dressed up in modern tooling, and that mismatch becomes more expensive the faster AI pushes output through the pipeline. As Tim described it, every team is feeling the AI imperative right now, but the people and the processes are still trying to figure out how to adapt to a technology side that just changed drastically.
If engineering organizations already struggle with inconsistent processes, weak standards, poor documentation, siloed ownership, or unclear governance, then AI does not solve those issues. It amplifies them. The question is no longer just, "Can AI help teams move faster?" The better question is, "Can our engineering system handle what AI is about to accelerate?"
One of the more grounded themes I heard was that AI's immediate value is often less glamorous than the headlines suggest. It is not that developers disappear. It is that developers spend less time on drag.
Eric Baran, who manages Amazon's global financial services developer platform business, described this clearly. The teams he works with are finding the most impact not from AI-generated features, but from offloading all the surrounding work (test creation, pipeline configuration, infrastructure templating, deployment patterns) that was eating into developers' days. As Eric put it, many developers were only writing lines of code on actual features for an hour or two out of their day before AI entered the picture. The rest was operational weight. When AI reduces that weight, developers get meaningful time back to focus on what actually differentiates the product.
This is where some of the loudest AI narratives miss the mark. The most valuable use of AI in engineering may not be autonomous software creation. It may be freeing teams from the work that keeps real innovation from happening.
It is easy to celebrate increased output. It is harder to govern it. That tension came up again and again.
Ron Miller, editor of the Fast Forward newsletter, shared a story that captured this perfectly. He overheard two developers in a Miami coffee shop. One of them was bleary-eyed, telling his friend he had been up all night reading 10,000 lines of code. His buddy suggested using AI to review it. The response: no, I have to know my code. Ron's point was sharp. Even though AI is generating code faster, someone still has to understand what is moving into production, and that responsibility does not shrink just because the volume grows.
Eric Baran echoed this from the enterprise side. His financial services clients are seeing an explosion of code coming off AI engines, but the regulatory and governance requirements have not changed. Teams that already struggled to keep up with compliance before AI are now moving even faster into territory they cannot fully audit. As Eric described it, customers keep coming back to the table saying they need to get better at actually getting this code out, and making sure their audit and governance teams can confirm it was built to spec.
This is where many organizations will get stuck. They will assume the bottleneck is still code creation, when in reality the bottleneck is shifting toward validation, governance, and operational trust. The winners in the next phase of AI adoption will not just be the teams that can generate faster. They will be the teams that can govern faster.
Speed without trust is not transformation. It is just a faster risk.
If AI increases the pace of software creation, platform engineering becomes more important, not less. Someone still has to create the paved road: make the secure path the easy path, reduce cognitive load, standardize workflows without creating more friction, and design systems where governance is built in rather than layered on after the fact.
Hasif Calp, a technology leader with over 16 years at Cisco who holds both platform engineering and security leadership roles, used a phrase that stuck with me: frictionless security. His argument is that most security friction comes not from the controls themselves, but from how they are implemented. When organizations rely on slow approval chains and human gates that overwhelm the people in them, the result is not better security. It is a clicking exercise where nobody fully understands the implications of what they are approving. Hasif advocated instead for making it easy to do the right thing through platform engineering, so that good security and fast delivery are not in conflict.
The more power AI gives teams, the more important it becomes to have strong internal platforms, clear golden paths, embedded controls, and systems that guide good behavior by default. That is the right goal. Not security by slowdown, not security by ticket queue, but security that fits naturally into how software is built and delivered.
That model was valuable before AI. It becomes essential with AI.
One of the more underrated themes from these interviews was that AI does not just raise the value of technical infrastructure. It also raises the value of human clarity.
Ron Miller made a compelling case for this. As a writer, he has watched the narrative around AI and communication skills closely, and he pushed back hard on the idea that writing becomes less important in an AI-driven world. His argument is exactly the opposite. If you are building an agent or designing a prompt that will drive real automation, the quality of how you articulate intent matters enormously. You have to understand a process deeply, and then you have to be able to communicate it in a way that models can act on. That is a writing skill, and it is becoming more important, not less.
Tim Knapp built on this from the engineering side with the concept of context engineering. His framing was vivid. Every time you make a call to an LLM or invoke an agent in your IDE, you might as well be pulling somebody brand new off the street who knows nothing about what you are trying to do. If organizations do not invest in structuring and maintaining layers of context alongside their codebases, the AI will not perform. Tim described this as an emerging discipline that teams are just beginning to take seriously, and one that could become a real differentiator.
The future is not just model-driven. It is context-driven.
AI may be new. Enterprise dysfunction is not. Another pattern that came through clearly is that many organizations are still carrying old habits that were expensive before AI and become even more expensive after it. Slow approval chains, waterfall thinking in modern clothes, measurement that creates theater instead of clarity, security models that rely too heavily on human gates, buying tools without changing operating rhythms, and mistaking experimentation for operational adoption.
Ron Miller captured the broader landscape well. Most of the CIOs and CTOs he talks to know they have to move toward AI, but many are still stuck in experimentation mode rather than production deployment. The knowing and the doing remain far apart.
Piyush Dewan, a director of software engineering at BridgeBio Medicines, offered a concrete example of how process rigidity undermines delivery. He described how over-engineered agile methodologies (the capacity planning rituals, the strict sprint predictability expectations) often cause teams to lose sight of what they actually need to deliver and how they should innovate. In his view, the emphasis on process compliance can become its own form of waste.
Tim Knapp reinforced this point by recommending that organizations remove agile project metrics from QBRs entirely. Story points and defect counts are useful internal signals for delivery teams, but they need heavy context to mean anything, and at the QBR level, they often create more theater than clarity. What matters at that altitude is whether timelines were met, how scope was managed, and what outcomes were actually delivered.
This is part of why so many leaders are now talking about centers of excellence, shared governance, and tighter collaboration between platform, security, and engineering leadership. AI does not just require new tools. It requires a more mature operating model around those tools.
Trying a model is easy. Running the business differently is the hard part.
This is what all five interviews ultimately pointed to. The AI era is not just about adopting new capabilities. It is about whether the organization itself is ready to operate differently. That includes better internal platforms, clearer governance, stronger delivery controls, more usable system context, tighter alignment between engineering, security, and operations, and a more disciplined way of measuring what is actually improving.
The organizations that benefit most from AI will not just be the ones that experiment the fastest. They will be the ones that modernize the system around the experimentation. That is the difference between a promising demo and durable advantage.
If I had to summarize what I heard at AWS re:Invent in one sentence, it would be this: AI is not bypassing engineering excellence. It is making it more necessary.
Yes, the AI tools are getting better. Yes, the use cases are becoming more tangible. Yes, teams are finding real value. But none of that removes the need for strong platforms, clear governance, trustworthy delivery systems, and disciplined operating models. If anything, it raises the bar.
The next wave of competitive advantage will not come from using AI in isolated ways. It will come from building an engineering organization that can turn AI into reliable, scalable, governed outcomes. That is a much harder challenge than generating more code. And it is the one that matters.


Application security & engineering teams are under pressure to move faster, cover more, and reduce the operational drag that often comes with security testing. But in practice, two problems keep slowing teams down and adding friction.
Today, we’re introducing several important enhancements to Harness API Testing that are designed to solve these exact issues and make API scans easier to configure, more reliable, and more efficient.
The new scan configuration experience is built to reduce friction from the moment a user clicks “Create Scan.” It simplifies the setup flow, improves validation, and provides users with more guidance directly in context, rather than forcing them to guess or leave the page for help.
The highlights include:
The new reachability validations in XAST Replay and DAST help you confirm whether APIs are actually reachable and properly authenticated, so scan execution stays focused on targets that can produce real results.
The highlights include:
These launches address two persistent sources of friction in API security testing: configuration complexity and execution inefficiency. Both slow teams down, create avoidable rework, and make it harder to get to meaningful security outcomes.
The problem is not just that the setup takes time. It is that the tooling experience has often lacked the structure, validation, and in-context explanation users need to get it right.
When configuration is too complex, users are far more likely to:
Security teams have long dealt with incorrect or incomplete configurations, unclear field usage, and long lead times to a first successful scan.
Without strong validation at the point of initial setup, users can move forward thinking a scan is correctly configured, only to discover later that something was malformed, missing, or misunderstood.
That creates a chain reaction:
Without validations, helper text, tooltips, and field-specific guidance, it’s easy to enter wrong inputs or make incorrect selections.
Context switching creates another major issue. If users need to leave the scan flow to create a policy, configure authentication, or add a runner, the API test setup experience becomes fragmented.
That fragmentation leads to:
Without inline workflows, teams waste time bouncing between multiple pages and increase the likelihood of mistakes.
On the execution end of the equation, teams may encounter cases where tests are generated even when the target APIs are unreachable or not properly authenticated.
That leads to several downstream problems:
When API endpoint targets aren’t validated upfront, the result is unnecessary test generation and low-quality output.
Large numbers of generated tests can look impressive on release dashboards, but if those tests are tied to unreachable APIs, they fail to create real security value.
Teams are left with:
Improper scan configurations that produce high volumes of poor results skew metrics that are critical to application security programs, which can create a false sense of confidence in security posture.
These enhancements improve two critical parts of the API testing experience: how scans are configured and how test execution readiness is validated.
Rather than spreading configuration across a larger set of steps, the new flow reduces the experience to three main sections:




That reorganization does more than simplify the UI. It separates required setup from optional tuning, helping you complete scan creation with more confidence and less guesswork.
With these enhancements, you can now more easily:
This enhancement is especially important for teams that want to move quickly without sacrificing correctness. Keeping these dependent tasks in a single flow reduces interruptions and lowers the risk of setup errors.
The advanced settings experience also adds more clarity around complex configuration options, where you can now work with:
These details matter because they turn a complex setup from opaque to guided and actionable. You can find more technical documentation here.
For every running or completed scan, you will now see a Validation Summary tab that highlights critical details and the overall health of the configured API test. Information here includes:

The Reachability Test enhancement brings that same philosophy to execution: validate earlier, execute smarter. Before generating tests, the Harness platform now provides clearer visibility into whether APIs are actually ready to be tested.
The new Reachability Test tab gives you a dedicated place to inspect endpoint readiness before test generation begins. It surfaces:

This enhancement turns what was previously harder to diagnose into something visible and actionable.
The Harness platform now uses reachability and authentication readiness as part of test generation control.
That means that no test cases are generated when:
The reachability tests help ensure execution resources are spent on APIs that can actually produce meaningful results. For security teams, this creates a more efficient and trustworthy scan lifecycle with:
You can read more technical details here.
Taken together, these enhancements make API security testing more usable at the front end and more efficient at the back end. Teams can configure scans faster, with fewer errors and less dependency on expert intervention, while also improving the quality of what gets executed once a scan runs.
These Harness API Testing features are available immediately with your existing Harness subscription. There is no additional cost or setup required.


Summary: Google Cloud Next ’26 focused on the future of software delivery, emphasizing that AI, platform consolidation, and an urgent push toward efficiency are reshaping the Software Development Life Cycle (SDLC). The key takeaway from the event was that organizations are moving from AI experimentation to operationalization, actively consolidating fragmented tools onto end-to-end platforms that embed AI for control, intelligence, and speed.
Google Cloud Next 2026 made one thing clear: the future of software delivery is being reshaped in real time by AI, platform consolidation, and an urgent push toward efficiency. From the show floor to executive roundtables, the conversations we had reinforced a consistent theme: teams are looking to tackle the AI Velocity Paradox by simplifying, modernizing, and intelligently automating every stage of the SDLC.
We had hundreds of meaningful conversations with engineering, platform, and cloud leaders. The patterns were unmistakable.

Across industries, organizations are grappling with:
These challenges mapped directly to Harness’ core solution areas:
And the urgency is real. We spoke with teams:
A recurring thread:
“We’re already experimenting with AI, but we need a platform that brings it all together.”
If 2025 was about AI experimentation, 2026 is about operationalization.
We saw a sharp increase in:
Multiple attendees explicitly mentioned:
This shift aligns perfectly with Harness’ vision of AI-native software delivery, where intelligence is embedded, not bolted on. In fact, at Next, we announced a major step forward in making that vision real through our expanded partnership with Google Cloud. By integrating Google Cloud Developer Connect with the Harness Software Delivery Knowledge Graph, we’re enabling a unified layer of AI intelligence across the entire SDLC.
This means AI in Harness isn’t operating in silos. It has full, real-time context across code, pipelines, infrastructure, and runtime signals. The result is smarter automation, faster root cause analysis, and AI agents that can act with confidence, not guesswork. It’s a foundational step in moving from AI-assisted workflows to truly AI-native delivery systems, which is exactly what attendees told us they’re looking for.
A great example of this is Keller Williams. Keller Williams leveraged the Harness platform to transform their software delivery, increasing deployment frequency from a few times a year to more than 20 annual releases. By automating manual pipelines, the platform eliminates operational bottlenecks and allows its developers to focus on rapid innovation rather than deployment logistics.
Harness’s Martin Reynolds joined leaders from Atlassian, Datadog, LangChain, and Google to explore what’s next in a session titled “The Future of Developer Experience is Frictionless.”
The takeaway?
The next leap in productivity won’t come from isolated tools. It will come from connected, intelligent systems that remove friction entirely. With 150+ attendees on the final day, it was clear this message resonated.
Across every conversation, one strategic shift stood out:
Teams want fewer tools and smarter ones.
Organizations are actively:
Jenkins modernization alone came up repeatedly. Not as a question of if, but when.
The event kicked off with Google Cloud recognizing its partners. We were proud to be named Google Cloud’s 2026 Technology Partner of the Year, a reflection of the innovation and impact we’re delivering together with GCP.

Google Cloud Next ’26 wasn’t just about cloud. It was about control, intelligence, and speed.
The organizations moving fastest right now are:
Harness is uniquely positioned at that intersection.
And based on what we saw in Vegas, the demand for that future is only accelerating. Here’s the event recap video.

If you connected with us at the event or want to continue the conversation, we’d love to dive deeper.
It’s becoming increasingly clear that AI-generated code can create real challenges once it reaches production. At Harness, we’ve been focused on innovating fast and solving those problems, so teams can move quickly without sacrificing reliability.
In the past 30 days, we delivered 70+ new features. These features enable our users to ship fast, not by cutting corners, but by sharpening the feedback loops: faster builds, integrated security checks within the pipeline, deeper visibility into AI across discovery and testing, and deployment tools that are intuitive enough to use without a runbook.
Here’s a look at everything we shipped.
Cursor Plugin
Harness is introducing the Cursor Plugin, bringing AI-native software delivery directly into the Cursor editor. Developers and AI agents move from code changes to vulnerability detection, CI/CD execution, security validation, approvals, deployments, and operational insight without leaving the IDE. The integration includes the Harness Secure AI Coding hook for Cursor. Download the plugin.
Google Cloud Partnership for a unified AI for Software Delivery
We partnered with Google Cloud to integrate Developer Connect with our Software Delivery Knowledge Graph, giving teams a unified, AI-ready view of the entire software delivery lifecycle.
This enhanced context enables smarter, faster AI-driven decisions, helping engineering teams troubleshoot issues, improve accuracy, and deliver software with greater confidence and efficiency. Learn more.
Harness MCP Server Updates
The biggest additions to this version of the MCP server are pipeline YAML support so agent-driven pipelines work with the current schema, OSS vulnerability lookup for supply chain security with anti-fabrication extractors, and added resilience support. Download and get started.
SLSA Provenance for Non-Container Artifacts
Supply chain attestation via SLSA now covers Helm charts, JAR/WAR files, standalone binaries, and other artifact types, not just container images. Generate and verify provenance across the full artifact portfolio. Learn more.

OSS Remediation for Code Repositories
Automated and manual remediation for vulnerable open-source components now runs directly against code repositories. When a dependency has a fix available, the tooling can apply it.

API Security Scan Configuration Revamp
The scan creation flow has been simplified into three logical groups: General, Source and Attacks, and Advanced Settings. Every field now has a tooltip and step-level documentation. Field-level validation catches misconfigurations before a scan runs.

Reachability Test for DAST and API Security Scans
Before generating test cases, a new Reachability Test validates that each API endpoint is actually reachable. Endpoints that don't respond don't generate test cases. Reduces wasted scan time against dead endpoints.
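The filtering logic is simple to picture. Here is a minimal sketch, assuming a hypothetical probe function (illustrative names, not the actual Harness implementation):

```python
# Sketch of the reachability-first flow: probe each endpoint once, then
# generate test cases only for endpoints that actually responded.

def filter_reachable(endpoints, probe):
    """Return only endpoints whose probe call reports a response."""
    return [ep for ep in endpoints if probe(ep)]

def generate_test_cases(endpoints, probe):
    reachable = filter_reachable(endpoints, probe)
    # One placeholder test case per reachable endpoint; dead endpoints are skipped.
    return {ep: [f"fuzz:{ep}"] for ep in reachable}

# Example with a stubbed probe: /health and /orders respond, /legacy does not.
alive = {"/health", "/orders"}
cases = generate_test_cases(["/health", "/orders", "/legacy"], lambda ep: ep in alive)
```

The unreachable endpoint never enters test-case generation, which is where the scan-time savings come from.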

Posture Events: Sensitive Data Evidence
When a posture event involves sensitive data exposure, the finding now shows exactly where: which parameter, in the request or response, with the classification and dataset inline. Previously required navigating across modules to get this context.

All Occurrences Dashboard
A new account-level dashboard surfaces every raw vulnerability finding across all pipelines, not just the rolled-up view. Filter, export, and drill into file paths, line numbers, and repos. Useful when you need to understand whether a scanner finding is one instance or fifty. Release notes.
Prisma Cloud Scan Result Enhancements
Prisma Cloud (formerly Twistlock) scan results now include File Name, Distro, and Distro Release fields. The file name is derived from packagePath to improve traceability when the same vulnerability appears across multiple package locations.
Third-Party MCP Discovery
Extends AI asset discovery beyond your own application ecosystem. Harness now surfaces external MCP servers and the AI assets they expose, giving security and platform teams visibility into AI interactions that originate outside their direct control.

Behavioral Insights Extended to MCP Tools
Internet exposure, encryption status, and authentication usage were previously available for APIs only. Those same behavioral signals now apply to MCP tools. View them via the info tooltip on any MCP tool in the inventory. Helps identify high-risk tools based on actual usage patterns, not configuration alone.

Risk Score Enhancements for APIs and MCP Tools
Two changes in one release: API risk now shows a unified view with contributing factors, the Likelihood vs. Impact calculation, and direct links to underlying issues. MCP tools now have their own dedicated risk scores using the same model. Side-sheet editing means you can act on a finding without leaving context.

AI Assets Tab and Licensing Visibility
A dedicated AI Assets tab provides a single view of all AI-related assets discovered in customer environments: AI APIs, MCP tools, models, and their usage patterns. Licensing visibility is included so teams can track AI consumption against entitlements.

Improved Pipeline Execution Layout
The pipeline execution listing page now uses a card-based layout. The Service and Environment columns are replaced by an Update Summary column showing service-to-environment mappings for CD stages and schema-to-instance mappings for Database DevOps stages, giving more signal per row.
AWS Connector Validation Without ec2:DescribeRegions
AWS connector validation now uses sts:GetCallerIdentity instead of ec2:DescribeRegions. The new call requires no IAM permissions, which means tighter least-privilege configurations no longer block connector setup.
ApplicationSet TemplatePatch Support
TemplatePatch configuration in GitOps ApplicationSets is now preserved in the Manifest Edit panel. Previously, setting TemplatePatch in the UI and saving caused the configuration to disappear.
Cache Storage Connector Override in YAML
Self-hosted builds can now specify a stage-level connector override for cache storage in YAML. If not set, the connector from Default Settings is used. Useful when different stages need to read from different cache backends.
Containerless Step Binary Path
Containerless CI steps now use app.harness.io as the default download path for step binaries. This reduces egress dependencies on external sources.
Warehouse Native Experimentation
Run experiments directly in your data warehouse using your own assignment and metric data. No exporting, no duplicating data outside your analytics source of truth. Supports Snowflake, Amazon Redshift, and Google BigQuery. Release notes.

Reallocate Traffic API
A new Reallocate Traffic endpoint lets you reset the bucketing seed for a feature flag in a specific environment via the API. Useful when you need to re-randomize user assignments without changing the flag configuration.
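To see why resetting the seed re-randomizes assignments, consider how seed-based bucketing typically works. This is an illustrative sketch, not the internals of Harness Feature Management & Experimentation: a user's bucket is a stable hash of the seed and the user ID, so changing only the seed reshuffles users across treatments without touching the flag's rules.

```python
# Seed-based bucketing sketch (hypothetical, for illustration only).
import hashlib

def bucket(seed: int, user_id: str, buckets: int = 100) -> int:
    """Deterministically map (seed, user) to one of `buckets` buckets."""
    digest = hashlib.sha256(f"{seed}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def treatment(seed: int, user_id: str, rollout_pct: int = 50) -> str:
    """Users in the first rollout_pct buckets get the 'on' treatment."""
    return "on" if bucket(seed, user_id) < rollout_pct else "off"

# Same user, same seed: the assignment is stable across calls.
assert treatment(1, "user-42") == treatment(1, "user-42")
```

With a new seed, the same flag configuration produces a fresh randomization, which is exactly what the Reallocate Traffic endpoint enables.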
Native Terragrunt Support and Multi-IaC Orchestration
Teams can now orchestrate complex deployments across Terraform, OpenTofu, and Terragrunt in a single platform. A unified multi-IaC control plane eliminates fragmented tooling, standardizes workflows, and covers provisioning, configuration, and deployment consistently. Read the blog post.

AWS CDK Support (Beta)
Define AWS infrastructure in TypeScript or Python using the AWS Cloud Development Kit and let Harness handle provisioning, state, and pipeline integration. Engineers who already write CDK don't need to learn HCL or adopt a separate tool.
Module Registry 2.0
Store IaC modules as artifacts natively in Harness, auto-sync new versions as they're published, and run module onboarding directly on Harness pipelines. A single place to manage the full module lifecycle: publishing, versioning, and consumption, without stitching together a registry, a pipeline tool, and a version tracker.
Terraform Sensitive Output Masking (Beta)
Output fields marked sensitive = true in your main.tf are now automatically masked in the pipeline Output tab during Terraform Apply step execution. Sensitive outputs remain accessible in downstream steps via Harness expressions, but don't appear in plain text in the UI.
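The behavior is easiest to see side by side: the displayed view masks the value, while downstream steps still resolve the real one. A minimal sketch (hypothetical names, not the Harness implementation):

```python
# Masking sketch: outputs declared sensitive are hidden in the rendered view,
# but the raw values remain available for downstream expression access.

SENSITIVE = {"db_password"}  # e.g. an output declared with sensitive = true

def masked_view(outputs: dict) -> dict:
    """What the pipeline Output tab would render."""
    return {k: ("***" if k in SENSITIVE else v) for k, v in outputs.items()}

outputs = {"db_host": "db.internal", "db_password": "s3cret"}
display = masked_view(outputs)       # UI view: password hidden
downstream = outputs["db_password"]  # expression access: real value intact
```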
Swift Package Registry
Artifact Registry now supports Swift packages with full SwiftPM compatibility. Authenticate, publish, and resolve dependencies using the registry URL directly. Existing SwiftPM workflows work without changes. Release notes.
Raw File Registry
A new Raw File registry stores and retrieves arbitrary files by path: archives, reports, configuration files, binaries, anything that doesn't belong to a package manager. Upload and download via HTTP and curl. No specialized client required.
Copy Version Between Registries
Promote a specific package version from one Harness registry to another directly from the UI. No re-pushing from your machine, no scripts to move artifacts between project or organization registries.
Soft Delete for Artifacts and Versions
Deleting a package or version now moves it to a Deleted view where it remains recoverable until the retention window expires. Permanent delete is available from the same dialog when that's the intent.
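The lifecycle is worth spelling out: deletion stamps a timestamp, recovery clears it, and a purge only touches versions whose retention window has lapsed. A toy model under an assumed 30-day window (not the Harness data model):

```python
# Soft-delete sketch with a retention window (illustrative only).
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed retention window

class Version:
    def __init__(self, name: str):
        self.name = name
        self.deleted_at = None  # None means the version is live

    def soft_delete(self, now: datetime) -> None:
        self.deleted_at = now

    def recover(self) -> None:
        self.deleted_at = None

    def purgeable(self, now: datetime) -> bool:
        """Eligible for permanent deletion only after the window expires."""
        return self.deleted_at is not None and now - self.deleted_at > RETENTION

now = datetime.now(timezone.utc)
v = Version("app:1.2.3")
v.soft_delete(now)
recoverable = not v.purgeable(now)               # inside the window: recoverable
expired = v.purgeable(now + timedelta(days=31))  # after the window: purgeable
```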
Artifact Registry Audit Dashboard
An out-of-the-box dashboard records every artifact upload and download across all Harness Artifact Registries. Provisioned and maintained automatically for accounts with Artifact Registry enabled. No setup required.
Webhooks Extended to Python, Maven, and NuGet
Artifact Registry webhooks now cover Maven, NuGet, and Python (PyPI) in addition to existing package types. Use artifact events to trigger CI/CD, security scans, or notifications for more of your package ecosystem.
IBM DB2 Support
Database schema changes and migrations now work across all DB2 variants: DB2 LUW, DB2 for iSeries, and DB2 for z/OS. Mainframe and midrange databases now fit in the same pipeline workflow as everything else. Release notes.
Google BigQuery Support
Deploy database changes to BigQuery using the same Liquibase-based workflow used for relational databases. No separate tooling or custom scripting required.
Percona Toolkit for MySQL
Use Percona Toolkit natively in Harness Database DevOps to make MySQL schema changes safer and virtually downtime-free. Read the blog post.
ECS Support for Database Jobs (Early Access)
Database DevOps can now run deployment jobs on ECS Fargate instead of Kubernetes. For teams not running Kubernetes, this removes the requirement to stand up a cluster just to run database migrations. Contact Harness to enable. Read the docs.
Keyless Authentication for Google CloudSQL
Authenticate to CloudSQL (Postgres and MySQL variants) using the delegate's service account. No credentials to rotate, no secrets to manage. Read the docs.
OIDC Authentication for Google Cloud Databases
Authenticate to CloudSQL (Postgres and MySQL), Google Spanner, and Google BigQuery using OIDC. Works with any OIDC-compatible identity provider already in use for the rest of your Google Cloud infrastructure. Read the docs.
Environment Management
Developers can now self-serve dev, test, staging, and production environments directly from the developer portal. Platform teams configure the governance rules; developers provision within those bounds without opening a ticket. Read the blog post.
ServiceNow Integration for Engineering Metrics (Beta)
DORA metrics in the Efficiency Insights dashboard can now be calculated from ServiceNow incident and change management data. Deployment Frequency, Change Failure Rate, and MTTR all supported. Useful for teams where ServiceNow is the system of record for incidents, not a secondary tool.
Custom Dashboards in Engineering Metrics (Beta)
A new Canvas page (being renamed to Studio) lets teams build custom Insights dashboards using HQL queries across all data sources. Dashboards support Draft and Published states. Query Variables allow dashboards to adapt dynamically per team or environment.
Custom Entity Kinds in Developer Portal
Platform engineers can now define entity kinds beyond the built-in set (Component, API, Resource, Environment, System). Model domain-specific software components that don't fit existing kinds, with their own name, icon, and JSON Schema for validation. Release notes.
SonarQube Integration in Developer Portal
Harness connects to SonarQube Server (self-hosted) or SonarQube Cloud and brings projects into the developer portal catalog as catalog entities. Code quality data surfaces alongside the rest of your software catalog.
Scorecard Aggregation
Scorecard data can now be aggregated across multiple catalog entities. Roll up compliance and health metrics from individual components to systems or domains without manually combining reports.
Custom Dashboard Data Retention Extended to 12 Months
The data retention period for custom dashboards increased from 3 months to 12 months. Longer historical windows for trend analysis and compliance reporting.
Code Repository Language Breakdown
Developers can now see the language composition of any repository directly in the list view and repo detail page. Particularly useful when migrating off other source control systems and auditing what you're moving.

Code Repository Tags
Repositories can now be tagged with metadata like team, intent, or domain, consistent with how pipelines, connectors, and other Harness entities are tagged. Useful for filtering, governance, and search at scale.
AI-Generated Post-Mortems with Action Item Detection
When an incident closes, AI SRE automatically generates a structured six-section retrospective: Summary, Impact, Root Cause, Resolution, Insights, and Lessons Learned. The AI synthesizes the full incident context: timeline events, Slack conversations, RCA theories, and responder actions. What typically takes a lead engineer 2-4 hours to write comes out in seconds. Action items are detected in real time from Slack conversations and meeting transcripts during the incident, with each item including a description and the responsible person extracted from context. They carry forward into the post-mortem automatically, so nothing gets reconstructed from memory days later. Release notes.
ServiceNow Change Record Correlation in RCA
When an incident fires, the AI Investigator automatically pulls recent ServiceNow change records and correlates them to the incident timeline. If your organization already has a Harness ServiceNow connector configured for pipelines or approvals, change data flows into root cause analysis immediately with zero additional setup. Change records appear alongside deploy events and code changes in the AI's correlation engine, reducing manual cross-referencing between tools. Documentation.
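At its core, this kind of correlation is a time-window match between the incident and recent changes. A simplified sketch with an assumed two-hour lookback window (illustrative, not the AI Investigator's actual logic):

```python
# Time-window correlation sketch: surface change records that landed in the
# run-up to the incident, alongside deploys and code changes.
from datetime import datetime, timedelta

LOOKBACK = timedelta(hours=2)  # assumed correlation window

def correlate(incident_start, changes):
    """Return changes applied within the lookback window before the incident."""
    return [c for c in changes
            if incident_start - LOOKBACK <= c["applied_at"] <= incident_start]

incident = datetime(2026, 4, 10, 14, 0)
changes = [
    {"id": "CHG001", "applied_at": datetime(2026, 4, 10, 13, 30)},  # in window
    {"id": "CHG002", "applied_at": datetime(2026, 4, 9, 9, 0)},     # too old
]
suspects = correlate(incident, changes)
```

This is the manual cross-referencing the feature removes: a responder no longer has to pull the change calendar and scan it against the incident timeline by hand.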
Stakeholder Status Updates
Incident commanders can now broadcast structured status updates to subscribed stakeholders (executives, customer support, dependent teams) without flooding the war room. Stakeholders subscribe to the services they care about and receive updates triggered by the Incident Lead. The system pre-populates a branded email with incident ID, title, summary, impacted services, and current status. The sender reviews, edits if needed, and sends. Eliminates the "what's the status?" interruptions that pull responders out of active response. Release notes.
Google Chat Integration
Teams running Google Workspace can now run incident response directly from Google Chat: create incident channels, post updates, receive notifications, and collaborate in real time. Uses a Pub/Sub-based architecture for reliable message delivery. Bring incident collaboration to Google Chat on par with the existing Slack integration. One-time admin setup per organization.
Runbook Slug Commands in Slack
On-call responders can now trigger runbook automations from Slack using short slug commands: /harness run <slug>. No UI navigation during high-pressure response. Common actions like restart-pods or scale-up become muscle memory. Removes a context switch from the critical path during active incidents. Release notes.
MCP support for Resilience Testing
MCP support for Resilience Testing improves extensibility across chaos and resilience workflows.
Pipeline Integration with Chaos Step Templates
Any experiment template can now be referenced and used from any scope in a pipeline. Makes it easier to standardize chaos execution across delivery workflows.
Probes and Observability
Splunk Enterprise and Datadog APM Probes
APM probes now support Splunk Enterprise and Datadog. Teams can validate system behavior during experiments using the observability tools they already rely on.
Namespace Label Filters in ChaosGuard
ChaosGuard conditions now support namespace label filters, giving teams finer-grained control over which namespaces chaos experiments can target. Release notes.
Experiment Run Reports
Experiment run reports are now available in the UI and accessible via a new API endpoint that returns report data as JSON. Useful for integrating chaos results into external dashboards or compliance workflows.
Docker Labels-Based Chaos Injection on ECS
Added support for targeting ECS in-VM SSM chaos injection using Docker labels. Expands targeting flexibility for teams running mixed ECS workloads.
70+ features in 30 days. The teams using AI to accelerate code generation are now running into the same reality we tracked in March: the bottleneck isn't writing code, it's everything downstream. Artifact management, security posture, deployment reliability, incident response, and AI asset governance. April's releases push the feedback loop tighter at each of those stages. Post-mortems that took 4 hours now take seconds. The change record correlation that required manual cross-referencing now happens automatically.
The velocity compounds when the whole software delivery lifecycle moves together, not just the part where the AI writes code.
See you in May!


TLDR: Today, Harness is introducing the Harness Cursor Plugin, bringing the power of the Harness AI-native software delivery platform directly into Cursor. This integration, along with the Harness Secure AI Coding hook for Cursor, allows developers and AI agents to move from code changes to vulnerability detection, CI/CD execution, security validation, approvals, deployments, and operational insight without leaving the editor.
AI has completely changed how we write code. You can spin up functions, refactor entire files, and generate tests in seconds. The inner loop, writing and iterating on code, has never been faster. But the moment you try to ship that code, everything slows down. This is what we call the AI Velocity Paradox.
You are suddenly back to juggling pipelines, waiting on approvals, checking security scans, debugging failed runs, and bouncing between tools just to get a change into production.
That gap, between fast code and slow delivery, is what we kept running into. So we built something to fix it.
Today, we are introducing the Harness Plugin for Cursor, a way to go from PR to production without leaving your editor.
If you are using agentic coding tools, such as Cursor, you have probably felt this.
You can:
But shipping still depends on everything outside your editor:
And none of that got simpler just because AI showed up. In fact, AI makes the problem more obvious.
Now you can create changes faster than your delivery process can safely handle. And if those controls are not tight, you are introducing a whole new category of risk. Fast-moving code with fragmented governance.
AI did not break software delivery. It exposed how disconnected it already was.
Instead of jumping between tools, what if you could just tell your editor what you want to happen?
Something like:
“Deploy PR #4821 to staging once the security scan passes, and Slack me if anything fails.”
That is the idea behind the Harness Cursor Plugin.
It connects Cursor directly to Harness, so you can trigger and manage your entire delivery workflow using natural language, right inside Cursor.

No tab switching. No manual orchestration. No guessing what is happening in the pipeline.
Once connected, you can use Cursor to interact with your delivery system just as you do with your code.
For example, you can:

This builds on what we introduced last month, Secure AI Coding, which integrates directly with Cursor and scans code at the moment of generation rather than waiting for a PR review. Developers see inline vulnerability warnings with the option to send flagged code back to the agent for remediation, without leaving their workflow. Under the hood, it leverages Harness's Code Property Graph (CPG) to trace data flows across the entire codebase, surfacing complex vulnerabilities that simpler linting tools would miss.
The key thing is that you are no longer just interacting with code. You are interacting with the entire delivery system from the same place.
One of the biggest concerns with AI in delivery is obvious:
“Are we about to let agents push code to production without guardrails?”
No.
With Harness, everything runs through the controls that you can rely on:

Instead of being manual checkpoints spread across tools, they are enforced automatically as part of the workflow while you stay in flow.
So AI can help move things faster, but it cannot bypass the governance that matters.
Most integrations today expose APIs or bolt AI onto existing systems. That is not what we wanted to do.
We designed the Harness Cursor Plugin specifically for how AI agents actually work:
Because shipping software is not a single action. It is a chain of decisions across CI, CD, security, approvals, and operations. If AI is going to help here, it needs access to that full picture. That’s where the Harness Software Delivery Knowledge Graph comes into play. It provides the necessary context for AI to take actions for you.
The knowledge graph models the relationships between services, pipelines, environments, policies, and operational signals in real time. Instead of treating each step in delivery as an isolated task, it creates a connected system of record that AI can reason over. This allows agents to understand not just what to do, but when and why to do it, based on dependencies, risk signals, and historical behavior.

In practice, this means smarter automation: deployments that adapt to context, approvals that are triggered based on policy and impact, and faster root cause analysis because the system already understands how everything is connected.
This is not just about convenience. It is a shift in how software actually moves from idea to production.
Instead of:
You get a single, connected workflow:
All accessible from your editor. Cursor accelerates the building. Harness governs the shipping. And the handoff between the two disappears.
Watch the demo:
If you want to try it:
For example:
“Run the CI pipeline for this branch, check if the security scan passed, and promote to staging if it did.”
That is it.
AI is not just changing how we write code. It is changing expectations for how fast we should be able to ship it. But speed without control does not work in real environments. What we are building toward is something simpler:
A world where every step, from PR to production, is:
Without forcing developers to leave their flow. This plugin is one step in that direction.


The question for enterprise AI in 2026 is no longer just which model. It’s which harness.
An agent harness is the system around the model. It decides what the agent remembers, what context it sees, what tools it can call, what it is allowed to do, and what happens when it is wrong.
The model provides intelligence. The harness provides control.
This is where the real engineering is happening. When Claude Code's source was accidentally exposed earlier this year, reports put it at more than half a million lines. None of that was the model. All of it was the system around the model.
The model gets you started. The harness gets you to production.
Software engineering is one of the first places this plays out. AI coding tools are writing and editing code. Autonomous agents are starting to deploy, operate, and respond to incidents. These are not suggestions anymore. They are changes to running software, made by agents acting on their own.
And one harness is not enough.
Software engineering has two halves at the level that matters for agent harness design. Software development, where code gets written. Software delivery, where code becomes running software.
The inner loop is software development. Code gets written, edited, tested, and reviewed. Coding agents work here, close to the developer and bounded by the repository. Whether they live in an IDE, a terminal, a background session, or a web workspace doesn’t change what they do. They help one person write better code faster.
The outer loop is software delivery. Code becomes software that is built, tested, secured, deployed, verified, operated, and sometimes rolled back. That includes CI, security scans, deployments, infrastructure, feature flags, incidents, and approvals.
The two loops are different. The inner loop is about individual productivity. The outer loop is about organizational execution under risk. It crosses teams, touches production, uses secrets, enforces policy, and leaves an audit trail.
An agent delivering software can’t be a coding assistant with API access. It has to run inside a system that enforces the organization’s rules.
The stakes are easier to see by starting with what breaks.
Security. An agent with broad access to deploy, provision, and push config changes is a new attack surface. Prompt injection through a PR description, a poisoned dependency, or a malicious issue comment can turn an autonomous agent into the most privileged insider threat in the company. It acts under its own identity, with its own scoped credentials, doing exactly what it’s authorized to do. The attacker just redirects the authorization. Without an identity model and governed execution, every action the agent can take becomes a potential action path for an attacker.
Compliance. An agent that ships code without the same policy gates, approvals, and audit trails humans use creates a parallel path that regulators and auditors will challenge. A single deployment that skipped EU data residency review can trigger a finding that takes quarters to close. Cyber insurers are starting to scrutinize AI governance, and some are exploring exclusions or tighter terms for poorly governed AI. Within a year or two, “we have autonomous agents deploying code without an evidence trail” will be impossible to defend. Autonomous delivery without verification is autonomous liability.
Confident bad decisions. An agent with partial context looks like it’s working. It deploys during a change freeze. It rolls out a config change that breaks an upstream service. It enables a feature flag during an incident. Each failure is locally reasonable and globally wrong. Without the full knowledge graph, the agent keeps making the wrong call.
AI-specific failure modes. Autonomous agents fail in ways that deterministic automation doesn’t. They hallucinate actions, generating and deploying a Kubernetes manifest that doesn’t match reality. They get stuck in loops, rolling back and redeploying the same change until a human kills the process. They’re confidently wrong, proposing a fix that passes a weak policy gate and breaks production an hour later. No attacker involved. Without verification strong enough to catch them, errors reach production.
All of this has happened with deterministic automation, one mistake at a time. With autonomous agents, errors happen in parallel. A coding agent with bad context can push 10 broken PRs in 10 minutes. A delivery agent without verification can deploy 20 services before anyone notices.
Speed used to be the feature. With autonomous agents, speed is also the damage multiplier.
A software delivery agent needs four things: memory, context, tools, and verification. The shape and stakes of each element are distinct.
Suppose a team is shipping a new version of a retailer’s checkout service on Thursday. Checkout depends on payments, inventory, fraud, and identity.
A Software Delivery Knowledge Graph is a connected map of services, teams, pipelines, deployments, incidents, policies, scorecards, and artifacts. Nodes and edges show how they all relate.
To answer “Is checkout safe to ship Thursday?”, the agent has to know which services checkout depends on, what their scorecards look like, whether any have open critical CVEs, whether there’s a change freeze, and who’s on call Thursday night.
That is a graph query. If the agent doesn’t have the graph, it’s guessing.
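A toy version makes the point concrete. Assume a tiny dependency graph and some risk signals per service (all hypothetical data, nothing like the real knowledge graph's scale):

```python
# "Is checkout safe to ship?" as a graph walk over dependencies and signals.

GRAPH = {"checkout": ["payments", "inventory", "fraud", "identity"]}
SIGNALS = {
    "payments":  {"critical_cves": 0, "change_freeze": False},
    "inventory": {"critical_cves": 0, "change_freeze": False},
    "fraud":     {"critical_cves": 1, "change_freeze": False},  # open critical CVE
    "identity":  {"critical_cves": 0, "change_freeze": False},
}

def safe_to_ship(service):
    """Walk the service's dependency edges and collect any blockers."""
    blockers = []
    for dep in GRAPH.get(service, []):
        sig = SIGNALS[dep]
        if sig["critical_cves"] > 0:
            blockers.append(f"{dep}: open critical CVE")
        if sig["change_freeze"]:
            blockers.append(f"{dep}: change freeze")
    return (len(blockers) == 0, blockers)

ok, blockers = safe_to_ship("checkout")  # blocked by the fraud service's CVE
```

An agent without this structure would have to infer the dependency edges from nothing, which is exactly the guessing the graph exists to prevent.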
Memory is the durable map. Context is the live signal. Memory tells the agent how the delivery system is connected. Context tells it what’s happening now.
Back to checkout. The agent sees that a chaos experiment last week showed payments fail when its Redis cache is unavailable. It sees that yesterday’s security scan flagged a critical CVE in a library fraud detection depends on. It sees that the new version changes the same config flag that caused an incident two weeks ago.
None of this is in the pull request. All of it matters.
Context isn’t something you assemble from scratch at runtime. It accumulates in the harness long before the agent is asked to act.
People often assume “tools” means function calls to APIs. For a software delivery agent, it means something different. The agent can deploy to Kubernetes, run a database migration, apply a feature flag, trigger a security scan, run a chaos experiment, open and close an incident. Real actions, inside your network, using your credentials, under your policies, with full audit logging.
At Harness, every action runs through a Delegate: a lightweight worker inside your environment. Your VPC, your Kubernetes cluster, your data center. The agent issues an instruction. The Delegate executes it inside your perimeter and returns the result.
Secrets are decrypted inside the Delegate. Never in the agent’s context window, never in a model provider's memory, never in an audit log.
An agent with arbitrary production access is dangerous. An agent constrained by governed execution is governable.
This is the pillar coding and personal productivity agents don’t need at this depth. Software delivery agents do.
Three mechanisms make it concrete:
For checkout, the Thursday release is blocked unless the scorecard passes, no critical CVEs are open, no change freeze applies, and an EU compliance approver signs off. If any of those fail, the agent cannot deploy. If they all pass, the deployment runs through a Delegate and an evidence record is written.
The rules of the organization are enforced in the harness. The agent operates inside them.
I mentioned that an agent needs memory, context, tools, and verification. The good news: a modern software delivery platform like Harness already has the foundations, because truly automated delivery has always needed those four things.
A note on our name. We called the company Harness in 2017 because the original thesis was a safety harness for code: let developers move fast without breaking things. Pipelines, policies, approvals, rollbacks, evidence. The scaffolding that lets speed and safety coexist.
That thesis hasn’t changed. The mover has. Developers are still moving fast. AI agents are moving fast too, often faster. The harness has to hold both.
Pipelines aren’t agents. Pipelines are the harness that lets agents safely act. They’re the control plane where agent actions are evaluated, constrained, and executed under policy.
The word “pipeline” carries baggage. Many people hear “script runner.” That isn’t what we mean. Harness pipelines are production orchestration engines: loops, matrix runs, parallel stages, conditions, approvals, OPA gates, rollback, retries, and deterministic-plus-agentic step-chaining.
An agent step can run inside a loop. A deterministic step can pass output to an agent, then to a policy gate, an approval, another agent, and a deployment. The agent isn’t replacing the pipeline. The agent is one kind of step the pipeline already knows how to run.
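That chaining can be pictured as pipeline YAML. The layout below follows Harness pipeline structure in spirit only; the `AIAgent` step type and the field names shown are hypothetical, used to illustrate an agent as one step among deterministic ones.

```yaml
# Illustrative sketch, not a valid Harness pipeline definition.
pipeline:
  name: checkout-release
  stages:
    - stage:
        name: deploy
        spec:
          execution:
            steps:
              - step:
                  name: build-and-test        # deterministic step
                  type: Run
              - step:
                  name: triage-scan-results   # agent step consuming prior output
                  type: AIAgent               # hypothetical step type
              - step:
                  name: policy-gate           # OPA evaluation
                  type: Policy
              - step:
                  name: eu-compliance-signoff # human approval
                  type: HarnessApproval
              - step:
                  name: rollout               # governed deployment
                  type: K8sRollingDeploy
```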
Harness pipelines execute hundreds of millions of runs a year across enterprise production systems. That isn’t a theoretical runtime for agents. It’s a runtime already hardened at scale, on real delivery, under real policy, with real rollback. That’s the difference between a script runner and a production harness for autonomous action.
The rest of the foundation maps the same way. The Delegate is how actions reach your infrastructure. The Software Delivery Knowledge Graph is the memory. Our platform modules are the tools. Scorecards, policy gates, and signed evidence are the verification. Harness AI, the intelligence layer on top, uses all four of these elements.
We didn’t set out to build an agent harness. We set out to build a software delivery platform with AI at its core. It turns out those two things are the same.
Coding agents (IDE copilots, background agents, terminal-based assistants, cloud coding sessions) are built for a different job. They know your codebase, your style, your recent commits. That’s a real harness, bounded by the repository and the developer. A software delivery harness has different scope, memory, risks, and accountability.
A coding agent’s memory is the repository. A software delivery agent’s memory is the organization.
The context gap. Ask your coding assistant: “Is it safe to deploy this checkout change to production tonight?” It can’t answer. It doesn’t know the current scorecard, the change freeze status, last week’s chaos test results, or who’s on call. None of that lives inside the developer's workspace. A coding agent can write a change. It can’t know if the change is safe to ship.
The blast radius gap. A coding agent’s bad change usually gets caught before it hurts anything: in review, in CI, in a security scan, on a policy gate. Fifteen minutes wasted, not a production incident. A software delivery agent’s worst day is customer data exposure, a production outage, or a regulatory incident. Same agent paradigm, radically different blast radius.
The safety-net gap. Both kinds of agents are moving toward less human oversight. The difference is what catches them when they’re wrong. A coding agent mistake gets caught downstream: by CI, by security scans, by policy gates, by the delivery harness itself. A delivery agent mistake has nothing downstream. It is the downstream.
The control-plane gap. Could a coding agent call Harness as a backend? Of course. It should. But the caller isn’t the control plane. The software delivery harness decides whether the request is allowed, how it executes, and what evidence is retained.
The preference gap. Developers are going to pick their own coding agents. Most enterprises already run two or three: Cursor on some teams, Claude Code on others, Copilot on others, whatever ships next year on yet other teams. That’s healthy. Software development is distributed by design. Software delivery is the opposite: it’s centralized. One company, one delivery control plane. One set of policies, one audit trail, one source of evidence, one place where credentials are held.
The winning pattern is the two meeting cleanly: whichever coding agent the developer picks, the deployment passes through the same delivery harness.
Managed agents. Stateful APIs. Server-side memory. Model providers are extending into harness territory, and for many use cases, that works. For software delivery specifically, the architecture runs into a different set of constraints.
The credentials problem. Every software delivery action requires production credentials: cloud admin roles, Kubernetes service accounts, database passwords, secrets manager keys. The most sensitive assets in the company. Enterprises spend years building the controls around them: vaults, rotation, scoped access, audit trails. A model-provider-hosted agent loop would require those credentials to flow through the model provider’s infrastructure on every action. Few CISOs will approve it. Few auditors will sign off. In regulated industries, it’s often a non-starter.
The inversion. A model can be hosted anywhere. Any provider, any cloud. Execution has to happen inside the enterprise, using credentials that never leave. The model stays outside. The control plane runs inside. Intelligence can live anywhere. The control plane can’t.
The live-state problem. A software delivery agent’s answer to “Is this safe to ship?” depends on a state that changes every minute. The current change freeze. The latest incident. The newest CVE. Who’s on call right now. Whether the deployment window just closed. A model provider can reason about what you put in the prompt. It doesn’t naturally own the current state of your delivery system. A model provider knows the world. The harness has to know your world, right now.
The accountability problem. When a delivery agent does something wrong, the model provider isn’t on the incident bridge. The on-call engineer is. The platform lead is. The CTO is. The company is the one that has to explain the outage to customers, the finding to regulators, the miss to the board. Accountability can’t be outsourced. The harness that constrains the agent can’t be either.
A model provider can be the brain. It can’t be the harness for delivery.
More and more code will be written by AI. The bottleneck is shifting from code generation to safe delivery.
Coding agents help developers write code. Software delivery agents help teams safely deliver and operate it. Two harnesses. Two categories. Two sets of winners.
The foundation for software delivery is ready. The agents that need it are arriving now. The category now has a name.
We’ve always called it Harness. The idea just got bigger.


“We’ve been operating in a hybrid environment with both OpenTofu and Terragrunt, and Harness has made it much easier to bring those workflows together into a single, consistent platform with IaCM. The addition of Terragrunt support is a valuable step toward simplifying how we manage infrastructure at scale.”
— Lead Platform Engineer, Enterprise Customer
Infrastructure as Code is now a standard for modern cloud operations, with most enterprises using IaC to provision and manage environments. However, as adoption grows, so does complexity. Teams are no longer managing a handful of environments. They are operating across multiple regions, accounts, and services, often at massive scale.
This is where traditional approaches begin to fall short.
As organizations scale their infrastructure, Terraform alone is often not enough. Teams adopt Terragrunt to manage complex, multi-environment deployments, but they are often forced to stitch together fragmented tooling that lacks visibility, governance, and consistency.
At Harness, we are changing that.
Today, we are excited to announce native Terragrunt support in Harness IaCM, bringing it to full parity with Terraform and OpenTofu while delivering capabilities that go beyond what is available in standalone tooling. This is more than support. It is about making Terragrunt a first-class citizen in enterprise infrastructure management.
With Harness IaCM, teams can now:

Terragrunt has become a critical layer for managing infrastructure at scale because it simplifies how teams structure and reuse configurations across environments. Harness builds on that foundation with deep, native integration, enabling platform teams to operate with both flexibility and control.
This is especially important for enterprises where a single deployment spans multiple environments and services. Harness abstracts that complexity while maintaining governance, auditability, and consistency.
Terragrunt is part of a broader shift toward multi-tool infrastructure strategies.
Modern teams are no longer standardized on a single IaC tool. Instead, they operate across:

This creates challenges around consistency, visibility, and governance. Harness IaCM is built for this reality. We are evolving IaCM into a unified control plane for multi-IaC workflows, where teams can manage different frameworks with a consistent experience, shared policies, and centralized visibility.
This means:
Instead of managing infrastructure in silos, teams can now operate from a single platform across the entire lifecycle.
The next phase of Infrastructure as Code is not just about supporting more tools. It is about making infrastructure systems more intelligent and automated.
We are investing in two key areas:
We are continuing to support modern frameworks like AWS CDK, enabling developer-centric infrastructure workflows alongside provisioning, configuration, and orchestration tools.
We are introducing intelligence into IaC workflows to simplify tasks such as drift management and optimization. This helps teams reduce manual effort and operate more efficiently at scale.
Together, these investments move IaCM toward a unified, multi-IaC platform that combines flexibility, governance, and automation. Terragrunt has become essential for managing infrastructure at scale, but until now it hasn’t had a platform that truly supports it. As infrastructure continues to grow in complexity, our focus remains the same: helping teams move faster, reduce risk, and scale with confidence, no matter which IaC tools they use.


The release of Anthropic Mythos and Project Glasswing marks an exciting and pivotal new chapter in software development. As the industry advances, the speed and economics of vulnerability exploitation have fundamentally shifted. What once took weeks of manual reconnaissance can now be scaled rapidly through automated models. However, this is not just a security problem to solve. It is a massive engineering opportunity to build cleaner, more robust systems. By leaning into AI-accelerated defense, engineering teams are uniquely positioned to lead the charge and redesign the landscape of modern software architecture.
To succeed in this new era, the traditional silos separating security and engineering must fall. Defense at machine speed requires a unified front.
The foundation of AI-accelerated defense relies on sound, proactive engineering practices. Developers must take ownership of architectural hygiene from the ground up.
Even with the best architecture, unexpected friction will occur. Resilient engineering means planning comprehensively for your ecosystem.
To keep pace with the increased velocity of engineering teams, security teams must also evolve their operational models.
Engineering leaders and developers are in the perfect position to navigate this industry inflection point. By taking ownership of these structural changes today, you ensure the long-term viability of your products and the enduring strength of your codebase. Bring your security, infrastructure, and engineering teams together into the same room and start building your shared roadmap today.


What happens when your Infrastructure as Code management strategy works perfectly in dev, scales reasonably well in staging, and then quietly fractures across seventeen production workspaces because nobody documented which Terragrunt wrapper goes with which AWS account? You spend Friday afternoon reverse-engineering DRY patterns that made sense six months ago, wondering why your team is managing three different IaC execution engines with four incompatible workflow philosophies.
This scenario isn't hypothetical. It's the reality of organizations that adopted IaC incrementally, layer by layer, without a unified management approach. One team standardized on OpenTofu for new infrastructure. Another maintained legacy Terraform configurations because migration felt risky. A third discovered Terragrunt and used it to wrangle complexity across AWS regions, but now those wrappers exist outside any centralized governance model. Each decision was rational in isolation. Together, they created an orchestration problem masquerading as a tooling problem.
The actual challenge isn't choosing between Terraform, OpenTofu, or Terragrunt. It's managing their outputs, enforcing policy consistently across execution contexts, and ensuring that infrastructure changes don't outpace your ability to understand what's deployed.
Most platform teams don't set out to run multiple IaC tools simultaneously. They inherit Terraform state from acquisitions, adopt OpenTofu for licensing predictability, and introduce Terragrunt because someone needed to stop copying backend configurations across 40 AWS accounts. The tools themselves aren't the problem. The problem is that each tool introduces its own state management assumptions, module resolution logic, and workflow expectations.
Terragrunt, for instance, exists specifically to solve Terraform's verbosity problem. It lets you define backend configurations once and reference them across environments. It supports dependency graphs so you can deploy a VPC before attempting to create subnets. These capabilities are valuable, but they also mean your actual infrastructure logic now spans two layers: the Terraform or OpenTofu code that defines resources, and the Terragrunt configuration that orchestrates execution.
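A minimal sketch of those two layers (bucket, paths, and module names here are made up; the block syntax follows standard Terragrunt):

```hcl
# root terragrunt.hcl -- backend defined once, inherited by every unit.
remote_state {
  backend = "s3"
  config = {
    bucket = "acme-tf-state"   # hypothetical state bucket
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}

# envs/prod/subnets/terragrunt.hcl -- one unit, referencing the root.
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules/subnets"   # hypothetical module path
}

# The VPC must be applied before subnets; Terragrunt orders this for you.
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
}
```

The resource logic lives in the Terraform or OpenTofu module; the backend wiring, ordering, and inputs live in Terragrunt. Those are exactly the two layers that can drift apart without centralized management.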
When you lack centralized Infrastructure as Code management, those layers drift independently. Someone updates a Terragrunt dependency graph without realizing it breaks a downstream workspace. Another engineer modifies an OpenTofu module but forgets that three different Terragrunt configurations depend on its output structure. You don't discover these issues until a deployment fails in production, and the postmortem reveals that nobody had visibility into the full dependency chain.
The typical response to multi-IaC complexity is to standardize on one tool and deprecate the others. That works if you're early in your IaC journey. It's impractical if you're managing hundreds of workspaces across regulated environments where compliance audits expect immutable infrastructure definitions and audit trails for every state change.
Here's what actually happens: platform teams create custom CI/CD pipelines for each tool. Terraform runs in Jenkins. OpenTofu runs in GitHub Actions. Terragrunt configurations use a shell script someone wrote during an incident. Each pipeline implements drift detection differently. Policy enforcement exists as scattered OPA rules that don't share a common evaluation context. When an auditor asks, "How do you prevent unapproved infrastructure changes?", the honest answer is, "We run some checks in some places, and we hope teams remember to use them."
This isn't negligence. It's what emerges when Infrastructure as Code management tooling doesn't natively support the reality of polyglot IaC environments. Teams need a system that treats OpenTofu, Terraform, and Terragrunt as execution details, not architectural boundaries. The workflow layer—plan generation, policy evaluation, approval gates, state locking—should remain consistent regardless of which engine interprets the configuration.
Running `terragrunt apply` successfully doesn't mean your infrastructure is well-managed. It means Terragrunt successfully invoked OpenTofu or Terraform and applied a configuration. The actual management work—validating inputs, enforcing cost policies, detecting drift, promoting changes through environments—exists outside the execution layer.
This is where most homegrown solutions collapse under their own weight. You build a wrapper script that runs Terragrunt with the right flags. Then you add pre-commit hooks for policy checks. Then you integrate Sentinel or OPA, but only for workspaces that someone remembered to configure. Then you add Slack notifications so people know when drift occurs, but the notifications don't include enough context to act on them. Eventually, you have a Rube Goldberg machine that works until it doesn't, and debugging requires institutional knowledge that exists in one person's head.
The fundamental issue is that IaC workflow optimization requires thinking beyond execution engines. You need orchestration that understands module dependencies, workspace relationships, and policy boundaries. You need variable management that doesn't require copying YAML files between repositories. You need drift detection that runs automatically and surfaces meaningful deltas, not raw Terraform output dumped into a log file.
Treating Terragrunt as an afterthought—something teams bolt onto existing Terraform or OpenTofu pipelines—misses its architectural intent. Terragrunt exists because managing backend configurations, passing outputs between modules, and orchestrating multi-account deployments shouldn't require copying boilerplate across dozens of directories. When Infrastructure as Code management platforms support Terragrunt natively, they acknowledge this reality: the DRY principle applies to infrastructure orchestration, not just resource definitions.
Native Terragrunt support means the platform understands dependency graphs without requiring custom parsing logic. It means workspace templates can reference Terragrunt configurations directly, rather than forcing teams to flatten everything into monolithic Terraform modules. It means policy enforcement applies before Terragrunt invokes the underlying execution engine, catching invalid configurations before they generate failed plans.
This matters most in organizations running multi-region or multi-cloud architectures. A typical pattern: one Terragrunt configuration defines networking across AWS regions, another manages Kubernetes clusters, a third provisions databases. Each configuration depends on outputs from the others. Without native orchestration, teams either write brittle shell scripts to sequence these dependencies or accept that deployments sometimes fail halfway through because someone applied changes out of order.
The real test of an Infrastructure as Code management platform isn't whether it runs OpenTofu or Terraform. It's whether it provides consistent state visibility, policy enforcement, and audit trails across both. If your platform requires separate workflows for each execution engine, you've automated the mechanics but not the governance.
Consider policy evaluation. A reasonable security requirement: no S3 buckets should allow public read access. With fragmented tooling, you implement this rule multiple times. Once for Terraform workspaces using Sentinel. Again for OpenTofu configurations using OPA. A third time for Terragrunt-managed infrastructure, where you're not sure which policy engine applies because Terragrunt is just orchestrating calls to Terraform or OpenTofu. When an audit occurs, you can't prove consistent enforcement because there's no unified policy evaluation layer.
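With a unified policy layer, that S3 rule is written once. A hedged sketch in Rego, assuming the input is a `terraform show -json` plan document (the same shape Terragrunt-invoked Terraform or OpenTofu produces):

```rego
package terraform.s3

import rego.v1

# Deny any planned bucket ACL that grants public read access,
# regardless of which execution engine produced the plan.
deny contains msg if {
    some rc in input.resource_changes
    rc.type == "aws_s3_bucket_acl"
    rc.change.after.acl == "public-read"
    msg := sprintf("bucket ACL %q must not be public-read", [rc.address])
}
```

Because the rule evaluates plan JSON rather than engine-specific state, the same policy applies to Terraform, OpenTofu, and Terragrunt-orchestrated runs alike.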
The same fragmentation affects drift detection. Terraform Cloud detects drift for Terraform-managed resources. Your OpenTofu workspaces might run scheduled reconciliation jobs, or they might not—it depends on whether someone configured them. Terragrunt configurations drift silently unless you've built custom tooling to periodically run `terragrunt plan` and parse the output. The result: partial visibility across your infrastructure estate, where "managed by IaC" becomes aspirational rather than descriptive.
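The kind of custom tooling teams end up writing usually reduces plan output to an actionable summary. A minimal sketch, assuming the input follows Terraform's `terraform show -json` plan format:

```python
import json

def summarize_drift(plan_json: str) -> list[str]:
    """Reduce a `terraform show -json` plan to human-readable drift lines.

    Returns one "<address>: <actions>" entry per resource whose planned
    actions are anything other than no-op.
    """
    plan = json.loads(plan_json)
    drifted = []
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        if actions and actions != ["no-op"]:
            drifted.append(f"{rc['address']}: {'/'.join(actions)}")
    return drifted

# Example: a plan where one bucket would be updated in place.
example = json.dumps({
    "resource_changes": [
        {"address": "aws_s3_bucket.logs",
         "change": {"actions": ["update"]}},
        {"address": "aws_vpc.main",
         "change": {"actions": ["no-op"]}},
    ]
})
print(summarize_drift(example))  # ['aws_s3_bucket.logs: update']
```

A platform with built-in drift detection does this continuously and attaches the context (workspace, owner, last approved change) that a raw log line lacks.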
Organizations exploring Terraform alternatives often focus on licensing or community governance. Those considerations matter, but they don't address the operational question: how do you manage infrastructure deployed with multiple execution engines without creating parallel workflow systems?
OpenTofu integration means more than "we can run OpenTofu commands." It means workspaces provisioned for OpenTofu behave identically to Terraform workspaces at the orchestration layer. Variable sets apply consistently. Policy evaluation uses the same rule sets. Drift detection runs on the same schedule. Approval workflows follow the same governance model. The execution engine becomes an implementation detail, not a workflow boundary.
This distinction matters during migrations. Teams don't flip entire infrastructure estates from Terraform to OpenTofu overnight. They migrate incrementally, starting with non-critical workspaces and expanding as confidence grows. If your Infrastructure as Code management platform treats each engine as a separate silo, you're managing two parallel systems during the transition. If the platform abstracts execution details behind a unified orchestration layer, the migration becomes a configuration change, not an architectural overhaul.
The hard problems in infrastructure management aren't technical; they're organizational. How do you ensure that 40 engineers across six teams follow the same approval process for production changes? How do you enforce cost policies without blocking legitimate deployments? How do you maintain audit trails that satisfy compliance requirements without turning every infrastructure change into a bureaucratic ordeal?
IaC orchestration platforms solve these problems by decoupling policy from execution. Instead of embedding governance rules in CI/CD pipelines—where they're invisible, untestable, and easy to bypass—you define them once at the platform level. Instead of writing custom scripts to sequence Terragrunt dependencies, you describe the dependency graph declaratively and let the platform handle execution order. Instead of building bespoke drift detection logic, you configure detection schedules and let the platform surface meaningful deltas.
This approach doesn't eliminate complexity. It consolidates complexity into a layer designed to manage it. Your IaC configurations remain simple: modules that define resources, Terragrunt wrappers that eliminate boilerplate, workspace configurations that specify execution context. The orchestration platform handles everything else: state locking, policy evaluation, approval workflows, audit logging, drift remediation.
Harness Infrastructure as Code Management approaches these challenges by treating the execution engine as a deployment detail, not an architectural constraint. Whether you're running OpenTofu, Terraform, or Terragrunt, the orchestration layer remains consistent: standardized pipelines for plan generation and apply operations, unified policy enforcement across all workspaces, centralized drift detection that surfaces actionable insights.
For teams managing infrastructure across multiple clouds, regions, or execution engines, Harness IaCM provides the orchestration layer that makes polyglot IaC environments manageable. The platform doesn't force you to standardize on a single tool. It provides governance, visibility, and workflow consistency regardless of which engine interprets your configurations.
The promise of Infrastructure as Code—reproducible deployments, version-controlled infrastructure, collaborative development—only materializes when you have consistent orchestration across execution engines. Running Terraform in one pipeline, OpenTofu in another, and Terragrunt through a shell script doesn't scale. It creates workflow fragmentation that defeats governance and slows teams down.
Effective Infrastructure as Code management platforms abstract execution details behind unified workflows. They treat Terragrunt as a first-class orchestration primitive, not an afterthought. They provide native support for OpenTofu alongside Terraform, recognizing that organizations migrate gradually, not overnight. Most importantly, they enforce policy, detect drift, and maintain audit trails consistently across all workspaces, regardless of which engine runs the actual infrastructure changes.
The technical lesson: orchestration complexity belongs in platforms designed to manage it, not scattered across custom scripts and fragmented CI/CD pipelines. The operational lesson: governance doesn't slow teams down when it's embedded in the workflow rather than bolted on afterward. Multi-IaC environments are manageable when you have the right orchestration layer. Without it, you're just running tools in parallel and hoping they don't conflict.
Explore how Harness Infrastructure as Code Management handles multi-IaC orchestration, or review the technical documentation for implementation details. The product roadmap outlines upcoming capabilities for workflow optimization and policy enforcement.




Most development teams today build everything around Git and deploy with GitOps principles.
Code lives in version-controlled repositories, changes go through PRs, and deployments are handled through modern CI/CD. That part is pretty standard at this point, especially when using a modern DevOps platform like Harness.
MongoDB fits into that developer world and workflow pretty naturally. Data is stored in documents that look a lot like JSON, the format many developers already use in application code and APIs. Under the hood, MongoDB stores those documents as BSON, which is essentially a binary form of JSON that supports additional data types like dates, object IDs, and binary data. That means developers get a familiar model to work with, while MongoDB gets a format that is efficient for storing and querying application data.
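For example, here is what a user document might look like in mongosh (the collection and field values are made up):

```javascript
// A document in a hypothetical users collection, as displayed by mongosh.
// ObjectId and ISODate are BSON types -- richer than plain JSON allows.
db.users.findOne()
{
  _id: ObjectId("665f1c2ab4e9a31d8c0f77aa"),
  email: "dana@example.com",
  createdAt: ISODate("2025-11-02T14:31:07Z"),
  preferences: { theme: "dark", locale: "en-US" }
}
```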

Looks just like JSON, with native types like ObjectId and dates powered by BSON.
The tradeoff is that structure isn’t always defined upfront. Schemas change over time, and not always in a clean or consistent way.
Collections can contain documents with different shapes. Index changes can directly impact performance. These aren’t problems on their own, but they require discipline to manage safely.
MongoDB changes are often handled outside the standard development workflow, whether that’s by developers, platform teams, or database teams.
Teams rely on application-level updates or one-off scripts to backfill data, modify structures, or create indexes. These approaches work, but they’re not always consistently versioned in Git. Execution can vary across environments, and review or validation is often informal.
The result is limited visibility into what changed, when it changed, and how it was applied. Over time, that leads to inconsistencies between environments and increased risk during deployment.
Flexibility is powerful, but without proper controls it introduces risk.
To solve this, teams need to bring MongoDB changes into the same workflow they already trust for application code: Git-driven, reviewable, and automated.
GitOps for MongoDB isn’t about changing how Mongo works. It’s about changing how changes are managed.
Instead of handling updates through scripts or application logic alone, database changes are treated like application code. Index creation, schema validation rules, and migration scripts are all defined in Git and tracked over time. This includes MongoDB’s native schema validation rules, which can be versioned and applied consistently across environments.
Changes need to go through pull requests, just like any other code change. This allows developers, platform teams, and DBAs to review what’s being modified before anything runs in an environment.
From there, pipelines handle the validation and deployment. Changes are applied consistently across environments, rather than being run manually and potentially differently each time.
In practice, this means a new field, an index, or a backfill isn’t just a script someone runs once. It’s a versioned change that can be reviewed, tested, and repeated.
This isn’t about forcing rigid schemas onto MongoDB. It’s about making changes visible, consistent, and easier to manage as systems grow.
Harness DB DevOps provides the structure to do this. With Harness, we define changes as changesets, store them in Git, and deploy them through pipelines with built-in validation and policy checks.
To demonstrate how this works, we will walk through a practical MongoDB change from start to finish.
Here’s a simple example: A team needs to add a new userPreferences field to the users collection and create an index to support a new query.
Instead of writing a script and running it manually, we define the change and commit it to Git.

1. Define the change in Git
A developer creates the update as a changeset. That includes the logic to add or backfill the new field, along with the createIndex operation needed for performance. The change is committed alongside application code, like any other update.
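A sketch of what that changeset might look like in a Liquibase-style YAML changelog. The change types mirror the Liquibase MongoDB extension, but the ids, file name, and exact keys here are illustrative, not copied from Harness documentation:

```yaml
# changelog/add-user-preferences.yaml -- illustrative sketch.
databaseChangeLog:
  - changeSet:
      id: add-user-preferences-and-index
      author: dev-team
      changes:
        # Backfill the new field with an empty default for existing docs.
        - runCommand:
            command: >
              { update: "users",
                updates: [ { q: {}, u: { $set: { userPreferences: {} } },
                             multi: true } ] }
        # Index to support the new query path.
        - createIndex:
            collectionName: users
            keys: '{ "userPreferences.theme": 1 }'
            options: '{ "name": "idx_users_pref_theme" }'
```

The point is less the exact syntax than the shape: the backfill and the index live in one reviewable, versioned unit instead of a script on someone's laptop.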
2. Open a pull request
From there, the change goes through a pull request. Other developers or DBAs can review what’s being changed before anything runs. If something looks off, it gets caught here instead of in production.
3. Let the pipeline take over
Once the change is approved, the pipeline takes over.
The Pipeline

Before anything gets applied, the change is validated and previewed against the target environment. This helps catch issues early, whether it’s a conflict, a bad query pattern, or something that could impact performance.
This is especially important for heavy operations like index creation on massive collections, where resource contention and performance degradation are real risks. Instead of running those changes manually, pipelines can enforce safe rollout strategies like rolling index creations across replica sets, without manual intervention.
Policies are enforced as part of that same process, with required approvals, environment rules, and other guardrails checked automatically so teams aren’t relying on someone to manually verify every step.
Once everything passes, the change is deployed through the pipeline and applied consistently across environments, moving from dev to staging to production in a controlled way. No one is logging into a database to run scripts by hand.
Now, everything is tracked. You can see what was applied, where it was deployed, when it happened, and who approved it, with a full history available if something needs to be reviewed or rolled back later.
Sound familiar? This workflow should sound a lot like application delivery, where changes are versioned, reviewed, validated before deployment, and visible after.
Traditionally, database changes have been tightly controlled by DBAs. They review scripts, approve changes, and sometimes execute them manually in each environment. That model helps reduce risk, but it doesn’t scale as teams grow and release more frequently.
With a GitOps approach, that control doesn’t disappear; it moves earlier in the process.
Instead of reviewing every individual change, database teams define policies and standards up front. Those rules are then enforced automatically through pipelines. Every change must pass the same checks before it reaches an environment, without requiring manual intervention each time.
In practice, this means:
The role of the database team evolves from gatekeeper to system designer. Rather than being involved in every deployment, they define the guardrails that ensure every deployment is safe.
Developers still move quickly, but now within a controlled, repeatable system.
Bringing MongoDB into a Git-driven workflow changes how teams ship.
MongoDB's flexibility doesn't eliminate the need for structure; it just shifts the responsibility for maintaining consistency from the database itself to your development processes.
If your application is managed through Git, your database should be too.


If you've ever run an ALTER TABLE on a busy MySQL table in production, you know the feeling. The change is small. The risk isn't. Long-running table locks, queued writes, application timeouts, replication lag, a five-minute migration that turns into a half-hour incident review.
We're shipping an integration that takes that anxiety out of the loop. Harness Database DevOps now supports Percona Toolkit for MySQL as part of Liquibase-based schema management. Flip a checkbox at schema creation, and eligible changes execute through pt-online-schema-change instead of native MySQL DDL.
Native ALTER TABLE on MySQL can lock tables for as long as the change takes to apply. On a large or hot table, that means writes pile up, dependent services start timing out, and replicas fall behind.
Percona Toolkit handles the same change very differently. pt-online-schema-change creates a shadow table with the new schema, copies your data over in small chunks, uses triggers to keep the original and shadow tables in sync, then performs an atomic swap with minimal lock time. The practical upside: schema changes you can run during business hours, not at 2 AM with a runbook open.
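For reference, this is roughly what a `pt-online-schema-change` run looks like at the command line for a hypothetical `ADD COLUMN` change. Harness drives this for you when the checkbox is enabled; the database, table, and column names here are placeholders:

```shell
# Dry run: validates the plan and the shadow-table creation
# without copying any data
pt-online-schema-change \
  --alter "ADD COLUMN loyalty_tier VARCHAR(20) NOT NULL DEFAULT 'basic'" \
  --chunk-size 1000 \
  --max-lag 1 \
  --dry-run \
  D=app_db,t=orders

# The same command with --execute copies rows in 1000-row chunks,
# pauses if replicas lag more than 1 second, keeps the original and
# shadow tables in sync via triggers, and finishes with an atomic swap
pt-online-schema-change \
  --alter "ADD COLUMN loyalty_tier VARCHAR(20) NOT NULL DEFAULT 'basic'" \
  --chunk-size 1000 \
  --max-lag 1 \
  --execute \
  D=app_db,t=orders
```

The `--chunk-size` and `--max-lag` throttles are what make the migration safe to run under live traffic: the copy yields to replication health instead of racing it.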
The integration is enabled per schema: when you create a Database Schema in Harness DB DevOps, a single checkbox controls the behavior. That's it. With the box unchecked (the default), Harness DB DevOps applies your changelogs using native MySQL operations through Liquibase, exactly as before. Check it, and eligible changes route through Percona Toolkit instead.
Percona Toolkit isn't a silver bullet for every DDL. A few cases need extra thought.
Adding or dropping foreign keys can break during the table swap, so plan those changes carefully or apply them outside the toolkit. Tables without a primary key or unique index won't migrate safely either, since pt-online-schema-change needs one to chunk data deterministically. And a handful of specific operations sit outside the safe-change envelope: dropping a primary key, complex column reordering, and some storage engine swaps.
You'll also want to give the database user the right privileges: ALTER, SELECT, INSERT, and UPDATE on the target table, plus CREATE and DROP on the database for shadow table management.
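Mirroring that list, the grants might look like the following, with `app_db`, `orders`, and the `db_devops` user as placeholders for your own names:

```sql
-- Table-level privileges needed on the target table
GRANT ALTER, SELECT, INSERT, UPDATE ON app_db.orders TO 'db_devops'@'%';

-- Database-level privileges for creating and dropping the shadow table
GRANT CREATE, DROP ON app_db.* TO 'db_devops'@'%';
```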
The full list of supported patterns, edge cases, and required permissions is in the Harness DB DevOps docs.
If you're already running Harness DB DevOps for MySQL, the next schema you create is a good place to try this. Turn it on against a non-critical environment first, watch how it behaves on your workload, and the path to using it in production gets a lot shorter.
For teams running MySQL at scale, that's one fewer reason to schedule schema changes around your customers' sleep.
If you aren't already using Database DevOps, speak with our experts to discuss how you can achieve zero-downtime database schema migrations.
Need more info? Contact Sales