Access Denied S3 with Paperclip - ruby-on-rails

I'm getting acquainted with using S3 with Ruby to upload files to Amazon Web Services. I recently ran into the following error: AWS::S3::Errors::AccessDenied Access Denied. Poking around on Google, I found this post on the error. It claims that bucket policies aren't sufficient to allow access via the web app, and that the user must be given "Administrator Access" as well.
I've given this a try and it works fine, but I feel like this is an indication that I'm not doing it right, given that administrator access isn't mentioned in any other documentation I've read. I'm using the aws-sdk gem. Could anyone weigh in on whether admin access is necessary? Many thanks!

None of the existing answers actually state which permissions you need to grant, so here they are: s3:PutObject, s3:DeleteObject, and s3:PutObjectAcl.
Here's the complete S3 bucket policy I'm using to allow Paperclip to put objects with the :public_read permission:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::IAM_USER_ID:user/IAM_USER_NAME"
      },
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    }
  ]
}
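For reference, a minimal sketch of the matching Paperclip side (the model, attachment, and credential placeholders are illustrative, not from the question). The :public_read permission is what makes Paperclip issue the s3:PutObjectAcl call, which is why that action has to appear in the policy above:

class Photo < ActiveRecord::Base
  has_attached_file :image,
    :storage => :s3,
    :bucket => 'S3_BUCKET_NAME',
    :s3_credentials => {
      :access_key_id     => 'IAM_USER_ACCESS_KEY',
      :secret_access_key => 'IAM_USER_SECRET_KEY'
    },
    # :public_read is the default; it triggers the s3:PutObjectAcl call
    # that the bucket policy above has to allow.
    :s3_permissions => :public_read
end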

As explained in the accepted answer, you should not need "Administrator Access". However, the typical policy for granting access to a bucket, as documented in some of Amazon's examples, may not be enough for Paperclip.
The following policy worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name-to-be-set-by-you/*"
      ]
    }
  ]
}

You should not really need Admin Access to achieve this.
Make sure you have the AWS access_key_id and secret_access_key set up in your Heroku config, and also make sure your user account has an access policy set in the AWS IAM console.
See this post for some more info.
The default permission for Paperclip is :public_read unless you specify the bucket to be private; a sketch of the private setup follows below.
See this for information about Module: Paperclip::Storage::S3
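A minimal sketch of that setup, assuming the usual Heroku convention of exposing config vars (set with heroku config:set) through ENV; the attachment and variable names here are illustrative:

# Inside an ActiveRecord model:
has_attached_file :image,
  :storage => :s3,
  :bucket => ENV['S3_BUCKET_NAME'],
  :s3_credentials => {
    # Heroku config vars surface as environment variables at runtime.
    :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  },
  :s3_permissions => :private   # overrides Paperclip's :public_read default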

Related

AWS Polly Policy json for polly:SynthesizeSpeech IAM

I am trying to set up the proper policy JSON for IAM programmatic access, but the following runs into the below error ... I have tried it also without Sid and with Resource as an array.
"Version": "2012-10-17",
"Statement":
{
"Effect": "Allow",
"Sid": "AllowAllPollyActions",
"Action": [
"polly:*"
],
"Resource": "*"
}}
is not authorized to perform: polly:SynthesizeSpeech with an explicit deny in an identity-based policy

AWS::S3::Errors::InvalidAccessKeyId Security Token Service Credentials are not working

I am using Security Token Service to generate the credentials (access_key_id and secret_access_key) to pass to the client side. I am able to generate the credentials successfully, but when I use the dynamically generated credentials I get this error:
AWS::S3::Errors::InvalidAccessKeyId The AWS Access Key Id you provided does not exist in our records
Following are the steps I have used to generate credentials with Security Token Service.
Create Role
I have created a role on AWS with the following policy (to list and upload files to an AWS S3 bucket):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket_name>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}
Edit Trust Relationship
I have edited the trust relationship to assign the role to the IAM user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account_id>:user/<user_name>"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
How I am generating credentials
sts = Aws::STS::Client.new
session = sts.assume_role(role_arn: 'arn:aws:iam::<account_id>:role/<role_name>', role_session_name: 'test_session')
@aws_access_key_id = session.credentials.access_key_id
@aws_secret_access_key = session.credentials.secret_access_key
@aws_security_token = session.credentials.session_token
@expires_at = session.credentials.expiration
I am able to generate the credentials, but the credentials are invalid.
Thanks
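One likely cause worth noting: temporary STS credentials are only valid when the session token is presented together with the key pair, so any client built from them needs all three values. A minimal sketch of constructing an S3 client from the session above (the client construction and region are assumptions, not part of the original question):

# Omitting session_token makes S3 reject the temporary key id as unknown.
s3 = Aws::S3::Client.new(
  region:            'us-east-1',   # assumed region
  access_key_id:     session.credentials.access_key_id,
  secret_access_key: session.credentials.secret_access_key,
  session_token:     session.credentials.session_token
)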

Restrict direct access to s3 files from website only it should open

I am trying to restrict direct access to files in an AWS S3 bucket: the files should be visible only from my website. I have tried different policies, but nothing is working for me.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ex-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.example.com/*",
            "https://example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::ex-bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "00.00.000.00"
        },
        "IpAddress": {
          "aws:SourceIp": "00.00.000.00"
        }
      }
    }
  ]
}
Any suggestions on this?
If you want to allow access to the bucket only from a specific domain, try this.
Below is an example of how to set www.example.com and example.com as valid referers.
Add the following policy in your “Add bucket policy” field:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::s3-foo-bar/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
By default, accounts are restricted from accessing S3 unless they have been given access via a policy. However, once access is granted, S3 allows it from any IP address. So to block IPs you have to specify explicit denies in the policy instead of allows.
{
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::s3-foo-bar/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "Your IP Address CIDR Notation"
        }
      }
    }
  ]
}
If you want to disable direct access to the bucket, you have a few options (a sketch of the signed-URL option follows this list):
You can make the files private and generate signed URLs to allow someone temporary access to the files: Share an Object with Others
Another method: you can serve private content via CloudFront.
Or you can set a policy, as mentioned above, that restricts people from accessing the files directly, only allowing them to access a file if it was linked from your website.
Or you can serve those files through a script of your own, i.e. download the file to your server and return the file contents from there. In that case you can make those files private in S3, or even move them to a separate bucket that is completely private. I would not recommend this method, because it puts more load on your server.
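For the signed-URL option, a minimal sketch using Paperclip's built-in helper (this assumes an attachment configured with :s3_permissions => :private; the model and attachment names are placeholders):

# Returns a time-limited S3 URL for a private attachment.
user = User.find(params[:id])
url  = user.avatar.expiring_url(600)   # the link stops working after 10 minutes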
Hope this helps.

Paperclip is reporting access_denied when trying to upload files to S3

UPDATE: It works when I remove the explicit deny block from the bucket policy, but I need that in there to prevent people outside the site (including bots) from accessing the content.
--
I'm trying to figure out a way to set access control on my S3 content such that:
Users with HTTP referrer mydomain.com can view the files
Paperclip gem can upload files
Google bots can't crawl and index the files
Viewing existing files from the site works fine, but uploading a file gives this error in the console:
[AWS S3 403 0.094338 0 retries] put_object(:acl=>:public_read,
:bucket_name=>"mybucket",:content_length=>879394,
:content_type=>"image/jpeg",:data=>Paperclip::FileAdapter: Chrysanthemum.jpg,
:key=>"ckeditor_assets/pictures/6/original_chrysanthemum.jpg",
:referer=>"mydomain.com/")
AWS::S3::Errors::AccessDenied Access Denied
Here's the bucket policy I have:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "mydomain.com/*"
        }
      }
    }
  ]
}
The error message is odd, because I explicitly set the referrer to mydomain.com in Paperclip settings:
production.rb:
:s3_headers => {
  'Referer' => 'mydomain.com'
}
And Paperclip does indeed use it, as shown on the second to last line of the error message.
So why does it still give Access Denied?
After fiddling with it for hours, I revised my approach in light of the original three requirements I listed. I'm now explicitly denying only GetObject (as opposed to everything via "*"), and I also placed a robots.txt file at the root of my bucket and made it public. Therefore:
Users can access bucket content only when my site is the referer (this header can probably be spoofed, but I'm not too worried at this point). I tested this by copying a resource's link, emailing it to myself, and opening it from within the email. I got access denied, which confirmed that it cannot be hotlinked from other sites.
Paperclip can upload and delete files.
Google can't index the bucket contents (hopefully the robots.txt shown below will be sufficient).
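For reference, a minimal robots.txt that asks compliant crawlers to stay out of the whole bucket (the post doesn't show its contents, so this is the standard convention rather than a quote):

User-agent: *
Disallow: /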
My final bucket policy for those who arrive at this via Google in the future:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by mydomain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": "https://mydomain.com/*"
        }
      }
    }
  ]
}
I do have a theory on why it didn't work. I was reading the S3 documentation on Specifying Conditions in a Policy, and I noticed this warning:
Important:
Not all conditions make sense for all actions. For example, it makes sense to include an s3:LocationConstraint condition on a policy that grants the s3:PutBucket Amazon S3 permission, but not for the s3:GetObject permission. S3 can test for semantic errors of this type that involve Amazon S3–specific conditions. However, if you are creating a policy for an IAM user and you include a semantically invalid S3 condition, no error is reported, because IAM cannot validate S3 conditions.
So maybe the Referer condition did not make sense for the PutObject action. I figured I'd include this in case someone decides to pick this issue up from here and pursue it to the end. :)

Prevent image hotlinking; only allow referer and redirected requests?

I have some images in a bucket on S3. My app uses these images. What I want is the following:
Only allow the image to be accessed if:
The referer is my site - This I can already do with a bucket policy
The user was redirected from my site
The problem is the redirect, because when the user is redirected, no referer is sent to Amazon S3.
Is there a way to limit access to my S3 files in the way I described above?
My current bucket policy looks like this:
{
  "Version": "2008-10-17",
  "Id": "e9c9be4d-cdfc-470c-8582-1d5a9e4d04be",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://myapp.com/*"
        }
      }
    }
  ]
}
Make your files private.
Use signed URLs in the links/redirects to your images.
The signed URLs include an expiration; Amazon will not serve your image past the expiration.
The signed URLs cannot be forged; Amazon will not serve your image if the signature is missing or invalid.
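A minimal sketch of generating such a URL with the v1 aws-sdk gem (the bucket name, key, and credentials are placeholders; later SDK versions expose the same idea as presigned URLs):

require 'aws-sdk'   # v1

s3     = AWS::S3.new(:access_key_id => 'KEY', :secret_access_key => 'SECRET')
object = s3.buckets['mybucket'].objects['images/photo.jpg']

# A signed GET URL; after 10 minutes S3 answers with Access Denied.
url = object.url_for(:read, :expires => 600)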
This guy appears to have solved the problem:
http://www.powercram.com/2010/07/s3-bucket-policy-to-restrict-access-by.html
