AWS S3 Access Policy - web browser vs API - ruby-on-rails

UPDATE: I eventually answered my own question. See the Answers section for a tutorial that solves this problem.
The question:
What exactly is the policy that is needed for an external source to access an AWS S3 bucket through the API controls?
Details:
I'm following the Rails Tutorial by Michael Hartl, and I reached the end of lesson 11 where we use CarrierWave to store image files in an AWS S3 bucket. I was able to get it to work (had to add a region ENV variable) but only with a user who has full admin privileges. Obviously that's not ideal. I created a User account specifically for the purpose, but all the walkthroughs only seem to be concerned with web browser access. In fact, I was able to create policies that would allow the user to only be able to read, write, and delete in the specific bucket, but that only worked through a web browser and not through the API. The API access only worked when I attached the AdministratorAccess policy.
Here's what I have so far:
Policy: AllowRootLevelListingOfMyBucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "AllowRootLevelListingOfMyBucket",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket"
      ],
      "Condition": {
        "StringEquals": {
          "s3:prefix": [
            ""
          ],
          "s3:delimiter": [
            "/"
          ]
        }
      }
    }
  ]
}
Policy: AllowUserToReadWriteObjectDataInMyBucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}
As I said, this allows web browser access, but API access attempts return an "AccessDenied" error: Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden))
What do I need to add for API access?
Update: I have narrowed down the problem a bit. There is some "Action" that I need to give permission for, but I haven't been able to identify which action exactly. Using a wildcard works, though, and I've been able to lock down the user account to only be able to access one bucket. Here's the change I made:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}

I eventually answered my own question, and created a tutorial that others might want to follow:
The first thing you need to do is go back over the code that Hartl provided. Make sure you typed it (or copy/pasted it) in exactly as shown. Out of all the code in this section, there is only one small addition you might need to make: the "region" environment variable. This is needed if you create a bucket that is not in the default US region. More on this later. Here is the code for /config/initializers/carrier_wave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => ENV['S3_REGION']
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
That line :region => ENV['S3_REGION'] is a problem for a lot of people. As you continue this tutorial you will learn what it's for.
You should be using that block of code exactly as shown. Do NOT put your actual keys in there. We'll send them to Heroku separately.
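If you want to fail fast instead of debugging a cryptic 403 later, a small boot-time sanity check can confirm the environment variables are actually present. This is an optional addition of mine, not part of Hartl's tutorial; the variable names match the initializer above:

```ruby
# Optional sanity check: warn early if any S3 environment variable is
# unset, rather than failing later with an AccessDenied error from AWS.
REQUIRED_S3_KEYS = %w[S3_ACCESS_KEY S3_SECRET_KEY S3_REGION S3_BUCKET].freeze

def missing_s3_keys(env = ENV)
  # Keep only the keys whose value is absent or blank.
  REQUIRED_S3_KEYS.reject { |k| env[k].to_s.strip.length > 0 }
end

missing = missing_s3_keys
warn "Missing S3 config: #{missing.join(', ')}" unless missing.empty?
```

You could drop this into the same initializer, guarded by `Rails.env.production?`, so a misconfigured Heroku deploy announces itself at boot.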
Now let's move on to your AWS account and security.
First of all, create your AWS account. For the most part, it is like signing up for any web site. Make a nice long password and store it someplace secure, like an encrypted password manager. When you make your account, you will be given your first set of AWS keys. You will not be using those in this tutorial, but you might need them at some point in the future so save those somewhere safe as well.
Go to the S3 section and make a bucket. It has to have a unique name, so I usually just put the date on the end and that does it. For example, you might name it "my-sample-app-bucket-20160126". Once you have created your bucket, click on the name, then click on Properties. It's important for you to know what "Region" your bucket is in. Find it, and make a note of it. You'll use it later.
Your main account probably has full permissions to everything, so let's not use that for transmitting random data between two web services. This could cost you a lot of money if it got out. We'll make a limited user instead. Make a new User in the IAM section. I named it "fog", because that's the cloud service software that handles the sending and receiving. When you create it, you will have the option of displaying and/or downloading the keys associated with the new user. It's important you keep these in a safe and secure place. They do NOT go into your code, because that will probably end up in a repository where other people can see it. Also, don't give this new user a password, since it will not be logging into the AWS dashboard.
Make a new Group. I called mine "s3railsbucket". This is where the permissions will be assigned. Add "fog" to this group.
Go to the Policies section. Click "Create Policy", then select "Create Your Own Policy". Give it a name that starts with "Allow" so it will show up near the top of the list of policies. It's a huge list. Here's what I did:
Policy Name: AllowFullAccessToMySampleAppBucket20160126
Description: Allows remote write/delete access to S3 bucket named my-sample-app-bucket-20160126.
Policy Document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-sample-app-bucket-20160126",
        "arn:aws:s3:::my-sample-app-bucket-20160126/*"
      ]
    }
  ]
}
Go back to the Group section, select the group you made, then add your new policy to the group.
That's it for AWS configuration. I didn't need to make a policy to allow "fog" to list the contents of the bucket, even though most tutorials I tried said that was necessary. I think it's only necessary when you want a user that can log in through the dashboard.
Now for the Heroku configuration. This stuff gets entered at your command prompt, just like 'heroku run rake db:migrate' and such. This is where you enter the actual Access Key and Secret Key you got from the "fog" user you created earlier.
$ heroku config:set S3_ACCESS_KEY=THERANDOMKEYYOUGOT
$ heroku config:set S3_SECRET_KEY=an0tHeRstRing0frAnDomjUnK
$ heroku config:set S3_REGION=us-west-2
$ heroku config:set S3_BUCKET=my-sample-app-bucket-20160126
Look again at that last one. Remember when you looked at the Properties of your S3 bucket? This is where you enter the code associated with your region. If your bucket is not in Oregon, you will have to change us-west-2 to your actual region code. This link worked when this tutorial was written:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
If that doesn't work, Google "AWS S3 region codes".
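For quick reference, here are a few of the commonly used S3 region codes, expressed as a small Ruby lookup table. The list is illustrative, not exhaustive; always verify against the AWS endpoints page linked above:

```ruby
# A few common S3 region codes (illustrative; check the AWS endpoints
# documentation for the full, current list).
S3_REGION_CODES = {
  "US East (N. Virginia)"   => "us-east-1",
  "US West (Oregon)"        => "us-west-2",
  "US West (N. California)" => "us-west-1",
  "EU (Ireland)"            => "eu-west-1",
  "Asia Pacific (Tokyo)"    => "ap-northeast-1"
}.freeze

puts S3_REGION_CODES["US West (Oregon)"]  # => us-west-2
```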
After doing all this and double-checking for mistakes in the code, I got Heroku to work with AWS for storage of pictures!

Related

ActiveStorage can't move file to S3 - Aws::S3::Errors::AccessDenied: Access Denied

The (Simple) Problem
I'm attempting a simple heroku run rake db:seed. It attempts to use Active Storage to move a few images from app/assets/images to AWS S3.
Here's a portion that fails:
user = User.last
file_name = "steve.png"
file_path = Rails.root.join("app", "assets", "images", "seeds")
user.user_primary_image.attach(io: File.open(file_path + file_name), filename: file_name)
Aws::S3::Errors::AccessDenied: Access Denied
from /app/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.104.3/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
What I know so far
rake db:seed runs locally with no problems
These are all set correctly:
ENV['AWS_ACCESS_KEY_ID']
ENV['AWS_SECRET_ACCESS_KEY']
ENV['AWS_REGION']
ENV['S3_BUCKET']
The IAM user for the bucket has a policy which has:
allowed actions: All S3 actions (s3:*)
a resource ARN of the bucket name
The bucket has the following CORS:
(example.com stands in for the actual domain)
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "POST",
      "DELETE"
    ],
    "AllowedOrigins": [
      "https://www.example.com"
    ],
    "ExposeHeaders": []
  },
  {
    "AllowedHeaders": [],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
That's all so far, but I can't think of anything else to try to debug this...
UPDATE
I created a new IAM with S3 Full Access (every permission on every bucket), and it works. So the trouble is with the more refined IAM, which of course must be implemented (it would not be smart to leave it wide open).
So I took the working implementation, then added write, list and read permissions, and narrowed to one bucket (the one it needs), and now I reproduce the error:
Aws::S3::Errors::AccessDenied: Access Denied
from /app/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.104.3/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
So I know the error has something to do with either the permissions or the resource. I cannot tell what. Perhaps a bug in S3?
UPDATE 2
With all permissions, simply limiting S3 access to one resource (i.e. one bucket) causes the process to go from working to not working. In the bucket resource setting (screenshot omitted), only the bucket ARN was restricted; the accesspoint, job, object, and storagelensconfig resources were all left unchanged.
So after hours of toggling and testing, trying to narrow down which particular setting was causing it to work and not work, I finally arrived at this policy being just enough to work with Active Storage, and it's limited to one bucket (i.e. not just giving access to everything in S3):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::mybucket"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}
Just replace mybucket with your bucket name in both places.
I found it here
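If you prefer to generate the document rather than hand-edit it, the same policy can be built with a short stdlib-only Ruby sketch, which keeps the two ARNs in sync with the bucket name (the function name is mine, not an AWS or Rails API):

```ruby
require "json"

# Build the minimal Active Storage policy for a given bucket:
# ListBucket on the bucket itself, object read/write/delete on its contents.
def active_storage_policy(bucket)
  {
    "Version" => "2012-10-17",
    "Statement" => [
      {
        "Effect" => "Allow",
        "Action" => ["s3:ListBucket"],
        "Resource" => ["arn:aws:s3:::#{bucket}"]
      },
      {
        "Effect" => "Allow",
        "Action" => ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
        "Resource" => ["arn:aws:s3:::#{bucket}/*"]
      }
    ]
  }
end

puts JSON.pretty_generate(active_storage_policy("mybucket"))
```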

Access Denied S3 with Paperclip

I'm getting acquainted with using S3 with Ruby to upload files to Amazon Web Services. I was recently confronted with the following error: AWS::S3::Errors::AccessDenied Access Denied. Poking around on Google, I found this post on the error. It claims that the bucket policies aren't sufficient to allow access via the web app, and that the user must be given "Administrator Access" as well.
I've given this a try and it works fine but I feel like this is an indication that I'm not doing it right, given that administrator access isn't mentioned in any other documentation I've read. I'm using the aws-sdk gem. Could anyone weigh in on whether admin access is necessary? Many thanks!
None of the existing answers actually state which policies you need to grant, so here they are: s3:PutObject, s3:DeleteObject, and s3:PutObjectAcl.
Here's the complete S3 bucket policy I'm using to allow Paperclip to put objects with the :public_read permission:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::IAM_USER_ID:user/IAM_USER_NAME"
      },
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    }
  ]
}
As explained in the accepted answer, you should not need "Admin Access". However, the typical policy for giving access to a bucket, as documented in some examples given by Amazon, may not be enough for Paperclip.
The following policy worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you/*"
      ]
    }
  ]
}
You should not really need Admin Access to achieve this.
Make sure you have your AWS access_key_id and secret_access_key set up in your Heroku config. You also need to make sure your user account has an Access Policy set in the AWS IAM Console.
See this post for some more info.
The default permission for Paperclip is :public_read unless you specify the bucket to be private.
See the documentation for the Paperclip::Storage::S3 module for more information.
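For example, a hypothetical model configuration that opts out of the :public_read default. The :avatar attachment name and the s3.yml path are placeholders; the option names (storage, s3_credentials, s3_permissions) are from the Paperclip S3 storage documentation of that era:

```ruby
# Hypothetical model sketch: store uploads privately instead of the
# default :public_read (names below are placeholders for your app).
class User < ActiveRecord::Base
  has_attached_file :avatar,
    :storage        => :s3,
    :s3_credentials => "#{Rails.root}/config/s3.yml",
    :s3_permissions => :private
end
```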

Paperclip is reporting access_denied when trying to upload files to S3

UPDATE: It works when I remove the explicit deny block from the bucket policy, but I need that in there to prevent people outside the site (including bots) from accessing the content.
--
I'm trying to figure out a way to set access control on my S3 content such that:
Users with HTTP referrer mydomain.com can view the files
Paperclip gem can upload files
Google bots can't crawl and index the files
Viewing existing files from the site works fine, but uploading a file gives this error in the console:
[AWS S3 403 0.094338 0 retries] put_object(:acl=>:public_read,
:bucket_name=>"mybucket",:content_length=>879394,
:content_type=>"image/jpeg",:data=>Paperclip::FileAdapter: Chrysanthemum.jpg,
:key=>"ckeditor_assets/pictures/6/original_chrysanthemum.jpg",
:referer=>"mydomain.com/")
AWS::S3::Errors::AccessDenied Access Denied
Here's the bucket policy I have:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    }
  ]
}
The error message is odd, because I explicitly set the referrer to mydomain.com in Paperclip settings:
production.rb:
:s3_headers => {
  'Referer' => 'mydomain.com'
}
And Paperclip does indeed use it, as shown on the second to last line of the error message.
So why does it still give Access Denied?
After fiddling with it for hours, I revised my approach in light of the original three requirements I listed. I'm now explicitly denying only GetObject (as opposed to everything via "*"), and I also placed a robots.txt file at the root of my bucket and made it public. Therefore:
Users can access bucket content only when my site is the referer (maybe this header can be spoofed, but I'm not too worried at this point). I tested this by copying a resource's link and emailing it to myself, and opening it from within the email. I got access denied, which confirmed that it cannot be hotlinked on other sites.
Paperclip can upload and delete files
Google can't index the bucket contents (hopefully robots.txt will be sufficient)
My final bucket policy for those who arrive at this via Google in the future:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    }
  ]
}
I do have a theory on why it didn't work. I was reading the S3 documentation on Specifying Conditions in a Policy, and I noticed this warning:
Important:
Not all conditions make sense for all actions. For example, it makes sense to include an s3:LocationConstraint condition on a policy that grants the s3:PutBucket Amazon S3 permission, but not for the s3:GetObject permission. S3 can test for semantic errors of this type that involve Amazon S3–specific conditions. However, if you are creating a policy for an IAM user and you include a semantically invalid S3 condition, no error is reported, because IAM cannot validate S3 conditions.
So maybe the Referer condition did not make sense for the PutObject action. I figured I'd include this in case someone decides to pick this issue up from here and pursue it to the end. :)

Disallowing unencrypted PUTs to S3

I have a working direct browser upload form to upload files from the browser to S3 directly, including enabling server side encryption for each upload via the upload policy and form fields for x-amz-server-side-encryption.
The problem is with the bucket policy on the S3 side to enforce server side encryption on uploaded files. With no bucket policy in place, my files are uploaded perfectly, with server side encryption enabled. When I add the bucket policy, uploads are rejected with an HTTP 403 (AccessDenied) error.
Here is the complete bucket policy I am using (lifted from the Amazon documentation):
{
  "Version": "2008-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MY-BUCKET-NAME/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
Here is an example of the policy (unencoded and formatted) that is signed and used in the upload form:
{
  "expiration": "2013-04-14T14:29:56.000Z",
  "conditions": [
    { "bucket": "MY-BUCKET-NAME" },
    { "acl": "private" },
    { "x-amz-server-side-encryption": "AES256" },
    ["starts-with", "$key", "uploads/"],
    { "success_action_status": "201" }
  ]
}
I'll omit the upload form details for brevity, but suffice it to say, it sets the corresponding hidden form field for x-amz-server-side-encryption. Again, this works without the bucket policy in place, so I believe the client-side aspects of this are in good shape. My guess at this point is that maybe I should be specifically allowing a certain class of PUTs, but I'm not sure what to add.
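For context, the policy document above doesn't go into the form as raw JSON: it gets base64-encoded and signed with the AWS secret key, and those two strings become the policy and signature form fields. A minimal stdlib-only Ruby sketch of that (legacy, pre-Signature-Version-4) signing step, with a placeholder secret key:

```ruby
require "base64"
require "json"
require "openssl"

# Encode an S3 POST policy document and sign it with the AWS secret key,
# per the legacy browser-upload signature scheme in use at the time.
def sign_post_policy(policy_hash, secret_key)
  encoded   = Base64.strict_encode64(JSON.generate(policy_hash))
  signature = Base64.strict_encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), secret_key, encoded)
  )
  { policy: encoded, signature: signature }
end

policy = {
  "expiration" => "2013-04-14T14:29:56.000Z",
  "conditions" => [
    { "bucket" => "MY-BUCKET-NAME" },
    { "acl" => "private" },
    { "x-amz-server-side-encryption" => "AES256" },
    ["starts-with", "$key", "uploads/"],
    { "success_action_status" => "201" }
  ]
}

signed = sign_post_policy(policy, "FAKE-SECRET-KEY")  # placeholder secret
```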
Any ideas?

Prevent image hotlinking; only allow referer and redirected requests?

I have some images in a bucket on S3. My app uses these images. What I want is the following:
Only allow the image to be accessed if:
The referer is my site - This I can already do with a bucket policy
The user was redirected from my site
The problem is the redirect here, because, when redirected, no referer is sent to Amazon S3.
Is there a way to limit access to my S3 files in the way I described above?
My current bucket policy looks like this:
{
  "Version": "2008-10-17",
  "Id": "e9c9be4d-cdfc-470c-8582-1d5a9e4d04be",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://myapp.com/*"
        }
      }
    }
  ]
}
Have your files be private.
Use signed URLs in the links/redirects to your images.
The signed URLs include an expiration; Amazon will not show your image past the expiration.
The signed URLs cannot be forged; Amazon will not show your image if the signature is missing or invalid.
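The signed-URL approach can be sketched from scratch. Note this shows the legacy query-string signature scheme for illustration only; current AWS SDKs generate presigned URLs for you (with Signature Version 4), which you should prefer. The bucket, key, and credentials below are placeholders:

```ruby
require "base64"
require "cgi"
require "openssl"

# Build a time-limited signed GET URL using the legacy S3 query-string
# signature scheme (illustrative; modern SDKs provide presigned_url
# helpers that use Signature Version 4 instead).
def signed_url(bucket, key, access_key, secret_key, expires)
  # The canonical string for a simple GET: method, blank headers,
  # expiry timestamp, and the resource path.
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}/#{key}"
  signature = Base64.strict_encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), secret_key, string_to_sign)
  )
  "https://#{bucket}.s3.amazonaws.com/#{key}" \
    "?AWSAccessKeyId=#{CGI.escape(access_key)}" \
    "&Expires=#{expires}" \
    "&Signature=#{CGI.escape(signature)}"
end

url = signed_url("mybucket", "photo.jpg", "AKIDEXAMPLE", "FAKE-SECRET", 1893456000)
```

Past the Expires timestamp, S3 rejects the request; tampering with any query parameter invalidates the signature.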
This guy appears to have solved the problem:
http://www.powercram.com/2010/07/s3-bucket-policy-to-restrict-access-by.html
