Tutorial: AWS KMS S3 replication

Have you ever wondered how to replicate an S3 object encrypted with KMS from one S3 bucket to another bucket in a different account and region, without losing encryption at rest? Since this feature is not yet supported by AWS, I propose a solution for accomplishing this task.

This tutorial is based on a blog post published in the AWS documentation.

 

Solution overview

This solution requires access to two different AWS accounts: one S3 source bucket and one destination bucket in different accounts (although you can also do this within a single account). The buckets can reside in the same region or in different regions. You need to create an SNS topic that publishes notifications from the source bucket. The SNS topic then triggers a Lambda function that copies the object from the source bucket to the destination bucket. The source code for the function is part of this post, as are all the policies needed for this solution to work properly.

 

Required IAM permissions

The Lambda function needs an IAM role assigned to it in order to copy an object to the destination bucket with KMS encryption.

The Lambda function requires the following permissions:

  • s3:GetObject (on the source bucket)
  • sns:Receive (on the SNS topic)
  • s3:PutObject (on the destination bucket)
  • kms:Decrypt (on the key used for objects in the source bucket)
  • kms:Encrypt (on the key in the destination account and region, used for the destination bucket)

In the source account, set a trust relationship with the destination account on the KMS key used for the source S3 objects.

In this post, I provide the full source code for the IAM policies.

 

Solution walkthrough 

Create the SNS topic to fan out

Important NOTE: Create the SNS topic in the destination account.

  1. In the SNS console, create a new SNS topic. Note the topic name for later. A topic is created once per source S3 bucket, so consider naming the topic as follows: [source-bucket-name]-fanout
  2. Note the SNS topic’s ARN string and then choose Other topic actions, Edit topic policy, and Advanced View.
  3. Replace the contents of the default policy with the following: 
{
  "Version": "2008-10-17",
  "Id": "",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "SNS:Publish"
      ],
      "Resource": "arn:aws:sns:us-east-1:123123123123:s3-source-bucket-fanout",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:*:*:s3-source-bucket-name"
        }
      }
    }
  ]
}
  4. Make the following changes in the policy:
    • For Resource, change to the ARN value for the SNS topic.
      arn:aws:sns:us-east-1:123123123123:s3-source-bucket-fanout
    • For AWS:SourceArn, change to the ARN value for the S3 source bucket.
      arn:aws:s3:*:*:s3-source-bucket-name
  5. Choose Update policy.
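If you prefer to script this step, the topic policy above can also be built and applied programmatically. The sketch below constructs the same policy document; the topic ARN and bucket ARN are the example placeholders from the steps above, and the boto3 call that would apply it is shown as a comment since it requires credentials.

```python
import json

def build_fanout_topic_policy(topic_arn, source_bucket_arn):
    """Build the SNS topic policy that lets the source S3 bucket publish."""
    return {
        "Version": "2008-10-17",
        "Id": "",
        "Statement": [
            {
                "Sid": "",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": ["SNS:Publish"],
                "Resource": topic_arn,
                # Only events originating from the source bucket are allowed
                "Condition": {"ArnLike": {"AWS:SourceArn": source_bucket_arn}},
            }
        ],
    }

policy = build_fanout_topic_policy(
    "arn:aws:sns:us-east-1:123123123123:s3-source-bucket-fanout",
    "arn:aws:s3:*:*:s3-source-bucket-name",
)
policy_json = json.dumps(policy)
# To apply it (requires boto3 and AWS credentials):
# boto3.client('sns').set_topic_attributes(
#     TopicArn=policy["Statement"][0]["Resource"],
#     AttributeName='Policy', AttributeValue=policy_json)
```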

Important NOTE: Make sure that the source bucket has permission to publish notifications to the SNS topic.

 

Configure the source bucket

  1. In the S3 console, edit the source bucket configuration.
  2. Expand the Events section and provide a name for the new event. For example: S3 replication to dst buckets: dstbucket1 dstbucket2
  3. For Events, choose ObjectCreated(All).
  4. For Send to, choose SNS topic.
  5. For SNS topic, select Add SNS topic ARN, and put the ARN of the SNS Topic.
  6. Choose Save.
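The console steps above correspond to a bucket notification configuration that can also be set through the API. A minimal sketch, with the example topic and bucket names as placeholders:

```python
def build_notification_config(topic_arn):
    """S3 bucket notification: send all ObjectCreated events to the SNS topic."""
    return {
        "TopicConfigurations": [
            {
                "TopicArn": topic_arn,
                # Equivalent of choosing ObjectCreated(All) in the console
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }

config = build_notification_config(
    "arn:aws:sns:us-east-1:123123123123:s3-source-bucket-fanout")
# To apply it (requires boto3 and AWS credentials):
# boto3.client('s3').put_bucket_notification_configuration(
#     Bucket='s3-source-bucket-name', NotificationConfiguration=config)
```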

 

Create the Lambda function, IAM policy and KMS Key

Important NOTE: Create the Lambda function in the destination account.

  1. In the Lambda console, choose Create a Lambda function.
  2. Choose Skip to skip the blueprint selection.
  3. For Runtime, choose Python 2.7.
  4. For Name, enter a function name. The function name should match the name of the S3 destination bucket exactly.
  5. Enter a description that notes the source bucket and destination bucket used.
  6. For Code entry type, choose Edit code inline.
  7. Paste the following into the code editor:
import urllib
import json
import boto3
from botocore.client import Config

print('Loading function')

def lambda_handler(event, context):
    # Signature Version 4 is required for KMS-encrypted objects
    s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
    # The S3 event is delivered as a JSON string inside the SNS message
    sns_message = json.loads(event['Records'][0]['Sns']['Message'])
    # The function name doubles as the destination bucket name
    target_bucket = context.function_name
    source_bucket = str(sns_message['Records'][0]['s3']['bucket']['name'])
    key = str(urllib.unquote_plus(sns_message['Records'][0]['s3']['object']['key']).decode('utf8'))
    copy_source = {'Bucket': source_bucket, 'Key': key}
    print "Copying %s from bucket %s to bucket %s ..." % (key, source_bucket, target_bucket)
    s3.copy_object(Bucket=target_bucket, Key=key, CopySource=copy_source,
                   StorageClass='STANDARD', ServerSideEncryption='aws:kms',
                   SSEKMSKeyId='arn:aws:kms:eu-central-1:123123123123:key/992da83f-f7f1-428a-a70d-041652ib43bs')
 

Change the KMS key ARN in the code to the ARN of a key you generated in KMS. Make sure that the KMS key is created in the region where the destination bucket is located. If you want to use the default aws/s3 key from KMS, remove the SSEKMSKeyId argument from the source code and keep ServerSideEncryption set to "aws:kms"; setting ServerSideEncryption to "AES256" instead uses S3-managed keys (SSE-S3).

arn:aws:kms:eu-central-1:123123123123:key/992da83f-f7f1-428a-a70d-041652ib43bs
  8. For Handler, leave the default value: lambda_function.lambda_handler
  9. For Role, choose Create new role, basic execution role.
  10. In the IAM dialog box, create a new IAM execution role for Lambda.
  11. For Role Name, enter a value that includes the destination bucket name. For example: s3replicationexecutionroletobucket[dstbucketname]
  12. Expand View Policy Document and choose Edit the policy.
  13. Choose OK to confirm that you’ve read the documentation.
  14. Replace the contents of the default policy with the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::source-bucket-name/*"
            ]
        },
	  {
            "Effect": "Allow",
            "Action": [
                "sns:Receive"
            ],
            "Resource": [
                "arn:aws:sns:eu-west-1:321321321321:source-bucket-name-fanout"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::destination-bucket-name/*"
            ]
        },
	  {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"            
            ],
            "Resource": [
                "arn:aws:kms:eu-west-1:123123123:key/38e744bf-66a5-4762-6d85-gd4a78d7349c"
            ]
        },
	  {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt"
            ],
            "Resource": [
                "arn:aws:kms:eu-central-1:321321321321:key/99a3c83f-d731-438i-a70b-041542ad19cd"
            ]
        }
    ]
}

 

  15. Make the following changes in the policy:
    • Under the s3:GetObject action, change to the ARN value for the source bucket.
      arn:aws:s3:::source-bucket-name/*
    • Under the sns:Receive action, change to the ARN value for the SNS topic.
      arn:aws:sns:eu-west-1:321321321321:source-bucket-name-fanout
    • Under the s3:PutObject action, change to the ARN value for the destination bucket.
      arn:aws:s3:::destination-bucket-name/*
    • Under the kms:Decrypt action, change to the ARN value of the KMS key used for encryption on the source bucket.
      arn:aws:kms:eu-west-1:123123123:key/38e744bf-66a5-4762-6d85-gd4a78d7349c
    • Under the kms:Encrypt action, change to the ARN value of the KMS key you want to use to encrypt objects in the destination bucket.
      arn:aws:kms:eu-central-1:321321321321:key/99a3c83f-d731-438i-a70b-041542ad19cd
  16. Choose Allow to save the policy and close the window.
  17. For Timeout, keep the default value of 5 minutes.
  18. For VPC, leave the default value No VPC.
  19. Choose Next.
  20. Review the configuration and choose Create Function.

Important NOTE: Make sure that the source bucket's KMS key has the destination account set as a key user under External Accounts.
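For reference, granting the destination account use of the source key adds a statement like the following to the source key's policy. Treat this as a sketch of the standard cross-account key-user grant; the account ID is the example placeholder used elsewhere in this post.

```json
{
  "Sid": "Allow use of the key by the destination account",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::321321321321:root"
  },
  "Action": [
    "kms:Decrypt",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```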

 

Create SNS Topic subscription

  1. In the SNS console, choose Topics and select the fan-out topic [source-bucket-name]-fanout created earlier, then open the topic’s details page.
  2. Choose Create Subscription.
  3. For Protocol, choose AWS Lambda.
  4. For Endpoint, select the function ARN that represents the destination bucket.
  5. Choose Create Subscription.
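The subscription can also be created through the API. The sketch below builds the request parameters (both ARNs are hypothetical placeholders); note that when subscribing via the API, unlike the console, you must also grant SNS permission to invoke the function, shown in the comments.

```python
def build_subscription_request(topic_arn, function_arn):
    """Parameters for subscribing the Lambda function to the fan-out topic."""
    return {
        "TopicArn": topic_arn,
        "Protocol": "lambda",
        "Endpoint": function_arn,
    }

request = build_subscription_request(
    "arn:aws:sns:us-east-1:123123123123:s3-source-bucket-fanout",
    "arn:aws:lambda:eu-central-1:321321321321:function:destination-bucket-name",
)
# To apply it (requires boto3 and AWS credentials):
# boto3.client('sns').subscribe(**request)
# SNS also needs permission to invoke the function:
# boto3.client('lambda').add_permission(
#     FunctionName=request["Endpoint"], StatementId='sns-invoke',
#     Action='lambda:InvokeFunction', Principal='sns.amazonaws.com',
#     SourceArn=request["TopicArn"])
```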

 

Validate the subscription

  1. Upload an object to the source bucket with KMS encryption.
  2. Verify that the object was copied successfully to the destination buckets and the encryption is correctly set.
  3. Optional: view the CloudWatch logs entry for the Lambda function execution.

For a successful execution, the log entry should contain the "Copying [key] from bucket [source] to bucket [destination] ..." message printed by the function.
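Step 2 above can be scripted as well. The sketch below checks a HeadObject response for the expected KMS settings; the sample response dict stands in for a real boto3 call (shown in the comments), and the bucket, key, and key ARN are the example placeholders.

```python
def encryption_matches(head_object_response, expected_key_arn):
    """Check a HeadObject response for the expected KMS encryption settings."""
    return (head_object_response.get("ServerSideEncryption") == "aws:kms"
            and head_object_response.get("SSEKMSKeyId") == expected_key_arn)

# With credentials, the response would come from:
# response = boto3.client('s3').head_object(
#     Bucket='destination-bucket-name', Key='test-object')
sample_response = {
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:eu-central-1:123123123123:key/992da83f-f7f1-428a-a70d-041652ib43bs",
}
print(encryption_matches(
    sample_response,
    "arn:aws:kms:eu-central-1:123123123123:key/992da83f-f7f1-428a-a70d-041652ib43bs"))
# → True
```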

 

Additional info

Do not forget to grant the Lambda role access to the destination bucket's KMS key in the IAM KMS console.


"Statement": [
        {
            "Sid": "Stmt1493285959480",
            "Effect": "Allow",
            "Principal": {
} ]

    "AWS": "arn:aws:iam::321321321321:role/[nameOfTheLambdaRole]"
},
"Action": "s3:GetObject",
"Resource": [
    "arn:aws:s3:::source-bucket-name",
    "arn:aws:s3:::source-bucket-name/*"
]

Bucket policy for the destination bucket:

    "Statement": [
        {
            "Sid": "Stmt1493285959480",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::321321321:role/[nameOfTheLambdaRole]"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::destination-bucket-name",
                "arn:aws:s3:::destination-bucket-name/*"
            ]
        }
    ]

 

Published: 12 June 2017, 8:10
  • Jozef Čajkovič

    Cloud Architect

    Dodo is a former Java developer who moved through testing and administration before finally settling into automated software deployment to the cloud. Along the way, he tries to coordinate a few people who do similar work to his own.
