Intelligent Cloud AutoStopping for Google Compute Engine

AutoStopping strikes again! Here's how to use it for Google Compute Engine. This article contains step-by-step directions. Try it now!

Cloud spending is on the rise, and if not managed properly, it can become a burden on a company's resources. We've seen that a significant portion of it can be saved by intelligently managing idle resources in non-production workloads. Idle resources are resources (such as VMs) that are running but not actively receiving traffic; running them is considered cloud waste.

We have seen that companies can save 70%+ of their monthly non-production cloud compute costs if these resources are managed properly. We covered how to do this for Kubernetes in one of our previous blogs, AutoStopping Rules for Kubernetes Clusters.

Harness Intelligent Cloud AutoStopping is a smart, automated way of orchestrating non-production workloads: it dynamically optimizes cloud resource usage by managing idle time effectively. AutoStopping currently supports AWS, Azure, GCP, and Kubernetes clusters.

Advantages of AutoStopping Rules

  • Automatically detect idle time and shut down idle resources.
  • Configure Rules to schedule resources and avoid wastage during non-peak hours.
  • Resources are automatically turned on when traffic is detected, saving engineering hours as well as cloud costs.
  • Create dependencies between the Rules to manage infrastructure serving a specific purpose.

How to Create AutoStopping Rules for Google Compute Engine VMs

This involves two major steps:

  1. Create a load balancer that will be used to route and detect traffic to the virtual machines to be managed.
  2. Create a Rule that will map the virtual machines to be managed with the load balancer created in the previous step.
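As a rough sketch, the two steps can be modeled in Python. All class and field names below are illustrative, not Harness's actual API; the validation mirrors the constraints described in this article (the VMs must share the load balancer's VPC, and the VMs in a single Rule must be in the same zone):

```python
from dataclasses import dataclass

@dataclass
class LoadBalancer:
    """Step 1: a custom load balancer, tied to a domain and a VPC."""
    domain: str
    vpc: str

@dataclass
class VirtualMachine:
    name: str
    vpc: str
    zone: str

@dataclass
class AutoStoppingRule:
    """Step 2: maps the managed VMs onto the load balancer."""
    name: str
    load_balancer: LoadBalancer
    vms: list
    idle_minutes: int = 15

    def validate(self) -> bool:
        # The load balancer must be in the same VPC as every managed VM...
        if any(vm.vpc != self.load_balancer.vpc for vm in self.vms):
            raise ValueError("all VMs must be in the load balancer's VPC")
        # ...and the VMs managed by one Rule must be in the same zone.
        if len({vm.zone for vm in self.vms}) > 1:
            raise ValueError("all VMs in a Rule must be in the same zone")
        return True
```

A single `LoadBalancer` instance can be shared by many `AutoStoppingRule`s, as long as every Rule's VMs sit in that load balancer's VPC.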

Custom Load Balancer

This involves the following steps:

  • Provide the Domain name. This will be used to route traffic to the load balancer. A first-class integration with Cloud DNS is on the way to help you further in mapping DNS to the custom load balancer. The custom load balancer can also be placed in the path of traffic within your infrastructure.
  • Configure a Custom Load Balancer. Since the custom load balancer is created and managed by Harness, you can choose an instance type of your preference for it. The load balancer needs to be in the same VPC as the virtual machines it will manage. A single load balancer can be utilized by multiple AutoStopping Rules to manage virtual machines across different regions and zones, as long as they are within the same VPC.
Create New Load Balancer
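Until the Cloud DNS integration lands, you can point the chosen domain at the custom load balancer yourself. One way, sketched here with placeholder values (the zone name, domain, and IP are hypothetical), is an A record created via `gcloud`:

```shell
# Hypothetical example: map the Rule's domain to the custom load
# balancer's IP address in an existing Cloud DNS managed zone.
gcloud dns record-sets create app.dev.example.com. \
  --zone=dev-zone --type=A --ttl=300 --rrdatas=203.0.113.10
```

Any DNS provider works the same way: the record just needs to resolve the configured domain to the custom load balancer.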

AutoStopping Rule

This involves the following 3 steps:

  • Configure AutoStopping Rule. Set the idle time after which the virtual machines should be shut down automatically, then choose the virtual machines you want to manage. They must all be in the same zone.
AutoStopping for Google Compute Engine: Configure AutoStopping Rule
  • Set Up DNS Link. Select the load balancer to use for the Rule. Also configure routing and health check on the load balancer. You can also input a custom domain to be used for routing.
AutoStopping for Google Compute Engine: Set Up DNS Link
  • Review the Configuration. Verify that all the details are correct and then confirm to create the Rule.
AutoStopping for Google Compute Engine: Create Rule

Once the Rule is saved and created successfully, you can leverage AutoStopping to manage these Compute Engine VMs. Whenever they are idle beyond the configured idle time, AutoStopping will automatically shut them down. When the VM is next accessed using the DNS link configured, AutoStopping will bring it up for the user in real time.  
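The lifecycle described above can be sketched as a small state machine. This is a simplified model, not Harness's implementation; the `ManagedVM` class, the injectable clock, and the 15-minute idle time are all illustrative:

```python
import time

IDLE_MINUTES = 15  # example value for the configured idle time

class ManagedVM:
    """Toy model of how AutoStopping treats a managed VM: stop it
    after sustained idleness, wake it on the next request."""

    def __init__(self, clock=time.time):
        self.clock = clock          # injectable for testing
        self.running = True
        self.last_traffic = clock()

    def on_request(self):
        # Traffic arrived via the DNS link: wake the VM if needed.
        if not self.running:
            self.running = True     # "warm up" the stopped VM
        self.last_traffic = self.clock()

    def tick(self):
        # Periodic check: stop the VM once idle beyond the limit.
        idle_seconds = self.clock() - self.last_traffic
        if self.running and idle_seconds > IDLE_MINUTES * 60:
            self.running = False
```

In the real system the stop/start calls go to the Compute Engine API and the traffic signal comes from the custom load balancer, but the decision logic follows this shape.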

How AutoStopping for Google Compute Engine Works

Solving the idle-resource problem with AutoStopping involves two actions:

  1. Start/stop host VMs.
  2. Detect traffic to the hosts.

Both are best performed using a custom load balancer, which routes traffic to the configured VMs. When the VMs are in a stopped state, the user sees a default progress page while the VMs are brought back up, or as we call it, "warmed up."
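The request path through the load balancer can be sketched as follows. This is a simplified model under assumed behavior, not the actual implementation; `start_vm` and `forward` stand in for the real Compute Engine and proxying calls:

```python
def handle_request(vm_running: bool, start_vm, forward):
    """Simplified model of the custom load balancer's request path."""
    if not vm_running:
        start_vm()              # begin booting the stopped VM
        return "progress-page"  # shown to the user while it warms up
    return forward()            # normal path: proxy to the running VM
```

The key point is that the load balancer sits in the traffic path, so it can both observe traffic (for idle detection) and intercept requests to stopped VMs (for warm-up).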

AutoStopping is implemented with the help of the following components:

  • Custom Load Balancer
  • Envoy
  • Proprietary services
  • Harness SaaS

Custom Load Balancer

A single custom load balancer can be configured to serve multiple AutoStopping Rules within the same VPC. Since VPCs span multiple regions, one custom load balancer can be reused across many Rules, which saves on costs. A custom implementation was chosen over a native load balancer because the native GCP HTTP(S) load balancer is in beta. A custom load balancer is also cost-effective, since it is not limited in the number of Rules it can serve.

This custom load balancer is composed of Envoy and other proprietary services.

Envoy Proxy

Envoy is an L7 proxy designed for large, modern service-oriented architectures. It is typically run as a sidecar alongside each application, abstracting away the network topology; this deployment pattern is aptly called an Envoy mesh. Envoy comes with built-in support for filter chains, which enable complex operations. Simply put, filter chains are like a chain of middleware in an API service.

While AutoStopping does not use this exact configuration, below is an example static configuration that listens on port 80 and forwards to a service running on port 8080:

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 127.0.0.1, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: some_service }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 8080

Envoy configuration is composed of Clusters and Listeners. Clusters are roughly the equivalent of AWS ALB target groups. Listeners define the port on which incoming requests arrive; requests are then routed to clusters based on route configurations and route path matches. Filters can be configured on Listeners to perform certain actions.
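The listener-to-cluster flow can be illustrated with a toy route matcher. This mirrors the first-prefix-match behavior of the route configuration above; the `/api` route and cluster names beyond `some_service` are hypothetical additions for illustration:

```python
# Ordered (prefix, cluster) pairs, like Envoy's route_config entries.
# "/api" -> "api_service" is a hypothetical extra route; the catch-all
# "/" -> "some_service" matches the example configuration above.
ROUTES = [("/api", "api_service"), ("/", "some_service")]

def pick_cluster(path: str) -> str:
    # Envoy evaluates routes in order and takes the first prefix match.
    for prefix, cluster in ROUTES:
        if path.startswith(prefix):
            return cluster
    raise LookupError("no matching route")
```

Because the catch-all `/` prefix comes last, more specific prefixes get a chance to match first, just as in an Envoy virtual host.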

Harness Intelligent Cloud AutoStopping uses Envoy and other proprietary services for the custom load balancer that routes traffic to configured Google Compute Engine VMs.

Get Started With AutoStopping for Google Compute Engine

  • Sign up for a demo and a Harness specialist will help you get started with Harness Cloud Cost Management. 
  • Check out these docs to learn more about creating AutoStopping Rules for Google Compute Engine.
