aws-cdk multi-account bucket policy

I have the following:
const accessLogsBucket: Bucket = new Bucket(this, 'LogsBucket', {
  bucketName: `logs-${account}-${region}`,
  versioned: true,
  encryption: BucketEncryption.S3_MANAGED,
  blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
  accessControl: BucketAccessControl.LOG_DELIVERY_WRITE,
  removalPolicy: RemovalPolicy.RETAIN,
});
This adds an S3 bucket to which I would like to attach a policy so that other AWS accounts are able to write to it. For example, I would like to add this policy:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCombinedBucket",
  "Statement": [
    {
      "Sid": "Set permissions for objects",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["{PayerAccountA}", "{PayerAccountB}"]
      },
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
      ],
      "Resource": "arn:aws:s3:::{BucketName}/*"
    }
  ]
}
So I tried:
accessLogsBucket.addToResourcePolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    actions: [
      "s3:ReplicateObject",
      "s3:ReplicateDelete"
    ],
    principals: [new AnyPrincipal()],
    resources: [
      accessLogsBucket.arnForObjects("*")
    ]
  })
);
How can I achieve this?

Well, you're pretty close, but I think you have some of the wrong actions and perhaps have the policies backwards.
See this link for cross-account access to S3 buckets; you can replicate these policies (or similar ones) in CDK. Also, because you do have block public access set, your S3 bucket may need to be reachable from a VPC that has cross-account access to the VPC in the other account (through access points and VPC sharing).
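If the goal is cross-account replication (which the s3:Replicate* actions in your example suggest), a common pattern is to grant those actions on the destination bucket to the replication role defined in the source account rather than to the whole account. A minimal CDK sketch under that assumption follows; the role ARN is a placeholder, and ArnPrincipal, PolicyStatement and Effect come from aws-cdk-lib/aws-iam:

accessLogsBucket.addToResourcePolicy(
  new PolicyStatement({
    sid: 'Set permissions for objects',
    effect: Effect.ALLOW,
    // Placeholder ARN: the replication role created in the source (payer) account
    principals: [new ArnPrincipal('arn:aws:iam::111111111111:role/my-replication-role')],
    actions: ['s3:ReplicateObject', 's3:ReplicateDelete'],
    resources: [accessLogsBucket.arnForObjects('*')],
  })
);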

You're on the right track. The only thing is you don't want to use new AnyPrincipal(); you probably want:
accessLogsBucket.addToResourcePolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    actions: [
      "s3:ReplicateObject",
      "s3:ReplicateDelete"
    ],
    principals: [
      new AccountPrincipal(payerAccountA),
      new AccountPrincipal(payerAccountB),
    ],
    resources: [
      accessLogsBucket.arnForObjects("*")
    ]
  })
);
payerAccountA and payerAccountB can be set any way you see fit.
I can't say whether your actions are correct for your use case.
I'd also add that you could try using cdk-iam-floyd, which allows you to create these PolicyStatement objects in a better-defined fashion.
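For completeness, here is a minimal sketch of the statement above together with the imports it relies on (assuming aws-cdk-lib v2; the account IDs are placeholders):

import { AccountPrincipal, Effect, PolicyStatement } from 'aws-cdk-lib/aws-iam';

// Placeholder payer account IDs; set these however suits your setup
const payerAccountA = '111111111111';
const payerAccountB = '222222222222';

accessLogsBucket.addToResourcePolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    actions: ['s3:ReplicateObject', 's3:ReplicateDelete'],
    principals: [
      new AccountPrincipal(payerAccountA),
      new AccountPrincipal(payerAccountB),
    ],
    resources: [accessLogsBucket.arnForObjects('*')],
  })
);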

Related

ActiveStorage can't move file to S3 - Aws::S3::Errors::AccessDenied: Access Denied

The (Simple) Problem
I'm attempting a simple heroku run rake db:seeds. It attempts to use Active Storage to move a few images from app/assets/images to AWS S3.
Here's a portion that fails:
user = User.last
file_name = "steve.png"
file_path = Rails.root.join("app", "assets", "images", "seeds")
user.user_primary_image.attach(io: File.open(file_path + file_name), filename: file_name)
Aws::S3::Errors::AccessDenied: Access Denied
from /app/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.104.3/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
What I know so far
rake db:seeds runs locally with no problems
These are all set correctly:
ENV['AWS_ACCESS_KEY_ID']
ENV['AWS_SECRET_ACCESS_KEY']
ENV['AWS_REGION']
ENV['S3_BUCKET']
The IAM user for the bucket has a policy which includes:
allowed actions: All S3 actions (s3:*)
a resource ARN of the bucket name
The bucket has the following CORS:
(example is replaced with the actual domain)
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "POST",
      "DELETE"
    ],
    "AllowedOrigins": [
      "https://www.example.com"
    ],
    "ExposeHeaders": []
  },
  {
    "AllowedHeaders": [],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
That's all so far, but I can't think of anything else to try to debug this...
UPDATE
I created a new IAM user with S3 Full Access (every permission on every bucket), and it works. So the trouble is with the more refined IAM policy, which of course must be implemented (it would not be smart to leave it wide open).
So I took the working implementation, added write, list, and read permissions, narrowed it to one bucket (the one it needs), and now I can reproduce the error:
Aws::S3::Errors::AccessDenied: Access Denied
from /app/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.104.3/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
So I know the error has something to do with either the permissions or the resource. I cannot tell what. Perhaps a bug in S3?
UPDATE 2
With all permissions granted, simply limiting S3 access to one resource (i.e. one bucket) causes the process to go from working to not working; the accesspoint, job, object, and storagelensconfig resources are all left unchanged.
After hours of toggling and testing to narrow down which particular setting was causing it to work or not, I finally arrived at this policy, which is just enough to work with Active Storage and is limited to one bucket (i.e. not just giving access to everything in S3):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::mybucket"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}
Just replace mybucket with your bucket name in both places.
I found it here
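If you happen to manage the IAM side with CDK (as in the main question on this page), a rough sketch of the same policy expressed as a managed policy might look like this; the bucket name and construct ID are placeholders, and this code is assumed to run inside a CDK stack:

import * as iam from 'aws-cdk-lib/aws-iam';

// Equivalent of the JSON policy above, expressed as a CDK ManagedPolicy
new iam.ManagedPolicy(this, 'ActiveStorageS3Policy', {
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['s3:ListBucket'],
      resources: ['arn:aws:s3:::mybucket'],
    }),
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['s3:PutObject', 's3:GetObject', 's3:DeleteObject'],
      resources: ['arn:aws:s3:::mybucket/*'],
    }),
  ],
});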

Access Denied S3 with Paperclip

I'm getting acquainted with using S3 with Ruby to upload files to Amazon Web Services. I was recently confronted with the following error: AWS::S3::Errors::AccessDenied Access Denied. In poking around on Google, I found this post on the error. It claims that the bucket policies aren't sufficient to allow access via the web app and that the user must be given "Administrator Access" as well.
I've given this a try and it works fine but I feel like this is an indication that I'm not doing it right, given that administrator access isn't mentioned in any other documentation I've read. I'm using the aws-sdk gem. Could anyone weigh in on whether admin access is necessary? Many thanks!
None of the existing answers actually state which permissions you need to grant, so here they are: s3:PutObject, s3:DeleteObject, and s3:PutObjectAcl.
Here's the complete S3 bucket policy I'm using to allow Paperclip to put objects with the :public_read permission:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::IAM_USER_ID:user/IAM_USER_NAME"
      },
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    }
  ]
}
As explained in the accepted answer, you should not need "Admin Access". However, the typical policy for giving access to a bucket, as documented in some examples given by Amazon, may not be enough for Paperclip.
The following policy worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you/*"
      ]
    }
  ]
}
You should not really need Admin Access to achieve this.
Make sure you have your AWS access_key_id and secret_access_key set up in your Heroku config. You would also need to make sure your user account has an access policy set in the AWS IAM console.
See this post for some more info.
The default permission for Paperclip is :public_read unless you specify the bucket to be private.
See this for information about Module: Paperclip::Storage::S3

IAM Policy for S3 folder access based on Cognito ID

I have created an IAM policy to allow Cognito users to write to my S3 bucket, but I would like to restrict them to folders based on their Cognito ID. I've followed Amazon's instructions here and created a policy that looks like this:
{
  "Effect": "Allow",
  "Action": ["s3:PutObject", "s3:GetObject"],
  "Resource": [
    "arn:aws:s3:::mybucket/myappfolder/${cognito-identity.amazonaws.com:sub}*"
  ]
}
But when I try to upload using the v2 of the AWS iOS SDK I get an access denied error.
If I modify the last path component of the resource to replace ${cognito-identity.amazonaws.com:sub} with the explicit identityId value I am getting from the SDK's AWSCognitoCredentialsProvider, it works.
{
  "Effect": "Allow",
  "Action": ["s3:PutObject", "s3:GetObject"],
  "Resource": [
    "arn:aws:s3:::mybucket/myappfolder/us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx*"
  ]
}
My understanding was that these should equate to the same thing. Am I missing something in my policy, or should I be using a different path in my upload request?
** Update **
I originally had this problem in iOS, so tonight I tried doing the same thing in node.js and the result is identical. Here is the simple code I am using in node:
var s3 = new AWS.S3();
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials(AWSParams);
AWS.config.credentials.get(function (err) {
  if (!err) {
    console.log("Cognito Identity Id: " + AWS.config.credentials.identityId);
    var bucketName = 'ch123_test_bucket';
    var keyName = AWS.config.credentials.identityId + '.txt';
    var params = {Bucket: bucketName, Key: keyName, Body: 'Hello World!'};
    s3.putObject(params, function (err, data) {
      if (err)
        console.log(err);
      else
        console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
    });
  }
});
And I get the same result that I get with iOS: unless I supply an explicit Cognito ID in the IAM policy, the API responds with 403.
I've stripped my IAM policy down to the very bare minimum. This doesn't work:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::ch123_test_bucket/${cognito-identity.amazonaws.com:sub}*"
      ]
    }
  ]
}
This does:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::ch123_test_bucket/us-east-1:68a5dc49-6cc7-4289-8257-d3d5636f7034*"
      ]
    }
  ]
}
I don't see what I'm missing here...the only documentation I've been able to find always shows the same example Resource value that I've been using.
Unfortunately there is currently an issue with the roles generated via the Cognito console in combination with policy variables. Please update your roles' access policy to include the following to ensure policy variables are evaluated correctly:
"Version": "2012-10-17"
2014-09-16 Update: We have updated the Amazon Cognito console to correct this issue for new roles created via the Identity Pool creation wizard. Existing roles will still need to make the modification noted above.
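Putting that together with the stripped-down policy from the question, the corrected version would look roughly like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::ch123_test_bucket/${cognito-identity.amazonaws.com:sub}*"
      ]
    }
  ]
}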
You are missing the last slash.
{
  "Effect": "Allow",
  "Action": ["s3:PutObject", "s3:GetObject"],
  "Resource": [
    "arn:aws:s3:::mybucket/cognito/myappfolder/${cognito-identity.amazonaws.com:sub}/*"
  ]
}
Also consider this article.

Paperclip is reporting access_denied when trying to upload files to S3

UPDATE: It works when I remove the explicit deny block from the bucket policy, but I need that in there to prevent people outside the site (including bots) from accessing the content.
--
I'm trying to figure out a way to set access control on my S3 content such that:
Users with HTTP referrer mydomain.com can view the files
Paperclip gem can upload files
Google bots can't crawl and index the files
Viewing existing files from the site works fine, but uploading a file gives this error in the console:
[AWS S3 403 0.094338 0 retries] put_object(:acl=>:public_read,
:bucket_name=>"mybucket",:content_length=>879394,
:content_type=>"image/jpeg",:data=>Paperclip::FileAdapter: Chrysanthemum.jpg,
:key=>"ckeditor_assets/pictures/6/original_chrysanthemum.jpg",
:referer=>"mydomain.com/")
AWS::S3::Errors::AccessDenied Access Denied
Here's the bucket policy I have:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    }
  ]
}
The error message is odd, because I explicitly set the referrer to mydomain.com in Paperclip settings:
production.rb:
:s3_headers => {
'Referer' => 'mydomain.com'
}
And Paperclip does indeed use it, as shown on the second to last line of the error message.
So why does it still give Access Denied?
After fiddling with it for hours, I revised my approach in light of the original three requirements I listed. I'm now explicitly denying only GetObject (as opposed to everything via "*"), and I also placed a robots.txt file at the root of my bucket and made it public. Therefore:
Users can access bucket content only when my site is the referer (maybe this header can be spoofed, but I'm not too worried at this point). I tested this by copying a resource's link and emailing it to myself, and opening it from within the email. I got access denied, which confirmed that it cannot be hotlinked on other sites.
Paperclip can upload and delete files
Google can't index the bucket contents (hopefully robots.txt will be sufficient)
My final bucket policy for those who arrive at this via Google in the future:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    }
  ]
}
I do have a theory on why it didn't work. I was reading the S3 documentation on Specifying Conditions in a Policy, and I noticed this warning:
Important:
Not all conditions make sense for all actions. For example, it makes sense to include an s3:LocationConstraint condition on a policy that grants the s3:PutBucket Amazon S3 permission, but not for the s3:GetObject permission. S3 can test for semantic errors of this type that involve Amazon S3–specific conditions. However, if you are creating a policy for an IAM user and you include a semantically invalid S3 condition, no error is reported, because IAM cannot validate S3 conditions.
So maybe the Referer condition did not make sense for the PutObject action. I figured I'd include this in case someone decides to pick this issue up from here and pursue it to the end. :)

Disallowing unencrypted PUTs to S3

I have a working direct browser upload form that uploads files from the browser to S3 directly, including enabling server-side encryption for each upload via the upload policy and form fields for x-amz-server-side-encryption.
The problem is with the bucket policy on the S3 side to enforce server side encryption on uploaded files. With no bucket policy in place, my files are uploaded perfectly, with server side encryption enabled. When I add the bucket policy, uploads are rejected with an HTTP 403 (AccessDenied) error.
Here is the complete bucket policy I am using (lifted from the Amazon documentation):
{
  "Version": "2008-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MY-BUCKET-NAME/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
Here is an example of the policy (unencoded and formatted) that is signed and used in the upload form:
{
  "expiration": "2013-04-14T14:29:56.000Z",
  "conditions": [
    { "bucket": "MY-BUCKET-NAME" },
    { "acl": "private" },
    { "x-amz-server-side-encryption": "AES256" },
    ["starts-with", "$key", "uploads/"],
    { "success_action_status": "201" }
  ]
}
I'll omit the upload form details for brevity, but suffice it to say, it sets the corresponding hidden form field for x-amz-server-side-encryption. Again, this works without the bucket policy in place, so I believe the client-side aspects of this are in good shape. My guess at this point is that maybe I should specifically allow a certain class of PUTs, but I'm not sure what to add.
Any ideas?
