Harness has a lot of customers with very different Artifact sources. One of the most popular is using an S3 Bucket as the home for all kinds of artifacts.
Our Engineers are constantly working to enable S3 across the many other integrations we have available. S3 and GCS are first-class citizens in our world, right?
In the AWS realm, I know that a lot of Solutions Architects will recommend a multi-account approach. Sometimes, depending on the size of the business, four accounts will be suggested for each squad under a given AWS Organization.
If you follow good DevOps practices, you’ll have a single source of truth for your Artifacts. And that’s where cross-account access comes into play!
Let’s see how Harness can work around the current cross-account limitations of the S3 SDK/CLI. It’s super easy.
Buckle up!
We’ll work in a scenario with two accounts:
- Account A: the S3 Bucket Owner account, where the Artifacts live.
- Account B: the account our Harness AWS Cloud Provider will use.
Let’s define a good Artifact source. I’ll store a couple of dummy Helm Charts (.tgz) in the S3 Bucket.
I could use GitHub Pages to start a free HTTP Server for my Charts, but that’s not the case today.
All files are only for learning purposes - they are dummies:
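If you want to follow along without real charts, here’s one way to fabricate a dummy archive and push it to the bucket. This is just a sketch: the upload command is commented out, and the bucket path is a placeholder to replace with your own.

```shell
# Fabricate a minimal dummy "chart" archive for testing:
mkdir -p mychart
printf 'name: mychart\nversion: 0.1.0\n' > mychart/Chart.yaml
tar -czf mychart-0.1.0.tgz mychart

# Upload it with server-side encryption, so the bucket policy's
# "DenyUnEncryptedObjectUploads" statement won't reject it:
# aws s3 cp mychart-0.1.0.tgz s3://<your_bucket>/charts/ --sse AES256
```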
It’s time to define a good S3 Bucket Policy, using the principle of least privilege.
It’s important to mention that you can make the Harness Delegate assume a role in another account.
Or, even better: if you run your Delegate in EKS, you can use IRSA to map the Harness Delegate Pod’s Service Account to an IAM role.
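For reference, IRSA works by giving the IAM role a trust policy that federates with the cluster’s OIDC provider. A sketch of what that trust policy can look like is below; the account ID, OIDC provider ID, region, namespace, and Service Account name are all hypothetical placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B7:sub": "system:serviceaccount:harness-delegate:harness-delegate-sa"
        }
      }
    }
  ]
}
```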
But for S3, there’s no need for overkill, right?
In our documentation, we suggest this policy.
Connect Harness to your AWS account using the IAM roles and policies needed by Harness.
But currently, a more restrictive policy also works. Please keep in mind that least privilege is a continuous effort in an ever-changing world.
I will use a condition to allow principals from Account B’s Organization to reach that bucket, granting all the actions that Harness currently needs to perform the task. Here it is:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your_bucket>/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "s3:DeleteBucket",
        "s3:DeleteBucketPolicy"
      ],
      "Resource": "arn:aws:s3:::<your_bucket>"
    },
    {
      "Sid": "AllowListAccessFromTheOrg",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetBucketVersioning",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<your_bucket>/*",
        "arn:aws:s3:::<your_bucket>"
      ],
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "<your_org>"
        }
      }
    }
  ]
}
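To attach the policy, you can save it to a file and apply it with the AWS CLI from Account A. The sketch below writes a condensed version (just the cross-account Allow statement) and validates the JSON locally before touching AWS; `<your_bucket>` and `<your_org>` remain placeholders to fill in, and the actual `put-bucket-policy` call is commented out.

```shell
# Condensed policy with only the cross-account Allow statement:
cat > bucket-policy.json <<'EOF'
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowListAccessFromTheOrg",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetBucketLocation", "s3:GetBucketVersioning",
                 "s3:ListBucket", "s3:ListBucketVersions", "s3:GetObject"],
      "Resource": ["arn:aws:s3:::<your_bucket>/*", "arn:aws:s3:::<your_bucket>"],
      "Condition": {"StringEquals": {"aws:PrincipalOrgID": "<your_org>"}}
    }
  ]
}
EOF

# Validate the JSON locally before applying it:
python3 -m json.tool bucket-policy.json > /dev/null && echo "policy JSON OK"

# Attach it from Account A (requires s3:PutBucketPolicy):
# aws s3api put-bucket-policy --bucket <your_bucket> --policy file://bucket-policy.json
```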
Alright, let’s check it out! I have created this Service as an example, since its type supports S3 as an Artifact source:
Very important: currently, neither Harness nor the AWS CLI will list cross-account buckets - but that does not mean the API can’t retrieve the objects. We can explicitly define the bucket in the Artifact Source Config step.
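You can see the same distinction with the AWS CLI from Account B. The commands that hit AWS are commented out, and the bucket name below is a hypothetical placeholder for Account A’s bucket.

```shell
BUCKET="my-artifact-bucket"   # hypothetical bucket owned by Account A

# `aws s3 ls` with no arguments lists only the buckets owned by the
# calling account, so a cross-account bucket never shows up from
# Account B. Authorization is evaluated per request against the bucket
# policy, though, so addressing the bucket explicitly still works:
#
#   aws s3 ls "s3://${BUCKET}/"
#   aws s3 cp "s3://${BUCKET}/charts/mychart-0.1.0.tgz" .
echo "explicit target: s3://${BUCKET}/"
```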
Naturally, the Cloud Provider I’m picking is from Account B (matching the policy’s Organization condition), not from the S3 Bucket Owner account:
Now let’s wait for the Harness async task to grab our beloved Artifacts.
import time
time.sleep(90)
Just kidding! Here it goes:
Now you are not afraid of cross-account S3 strategies, I hope!
Any questions or comments? Let me know - I'm always happy to help.
Gabriel