Aws::S3::Errors::AccessDenied ROLLBACK 500 Internal Server Error - ruby-on-rails

I'm trying to upload an image to an S3 bucket using Ruby on Rails and Paperclip, but it's not working. I know there are many questions about this, and I have tried most of the suggested fixes; please review the question, because I list everything I tried below.
I set up an IAM user, and the user has the AmazonS3FullAccess policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
I set this policy on the bucket:
{
    "Version": "2012-10-17",
    "Id": "Policy1557294263403",
    "Statement": [
        {
            "Sid": "Stmt1557294241958",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::726051891502:user/borroup-admin"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::borroup",
                "arn:aws:s3:::borroup/*"
            ]
        }
    ]
}
I set this CORS configuration on the bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
I made sure all of the public access block settings for this bucket are set to false.
This is the Ruby on Rails config. Note: I'm using the user's access_key_id and secret_access_key here:
config.paperclip_defaults = {
    storage: :s3,
    path: ':class/:attachment/:id/:style/:filename',
    s3_host_name: 's3.us-east-2.amazonaws.com',
    s3_credentials: {
        bucket: 'borroup',
        access_key_id: '************',
        secret_access_key: '***************************',
        s3_region: 'us-east-2'
    }
}
I get this error when I try to upload the image:
Aws::S3::Errors::AccessDenied in PhotosController#create
When I check the bucket log I get this

Are the bucket and the IAM user in different accounts? If so, the bucket policy is incorrect; the correct bucket policy is:
{
    "Version": "2012-10-17",
    "Id": "Policy1557294263403",
    "Statement": [
        {
            "Sid": "Stmt1557294241958",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::726051891502:user/borroup-admin"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::borroup",
                "arn:aws:s3:::borroup/*"
            ]
        }
    ]
}
If the user and the bucket are in the same account, the bucket policy doesn't matter, because the IAM user already has full permission.

Related

Push image to ECR works for private repositories only

I'm trying to push an image to a public repository in ECR. To be able to do so, I created a policy that gives push permissions and attached this policy to my user. The policy in JSON format is the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:*"
            ],
            "Resource": "arn:aws:ecr:us-east-1:*:repository/my-app"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:*"
            ],
            "Resource": "arn:aws:ecr:us-east-1:*:repository/my-app-public"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        }
    ]
}
The push works fine for the private repository, but when I try to push the image to the public repo it fails with denied: Not Authorized. How can I push an image to an ECR public repo?
ECR Public is its own service, with its own ecr-public:* actions. To push images to ECR Public, the policy statement needs a set of ecr-public actions; the plain ecr:* actions above don't cover it.
The first example here should get you on the right track: https://docs.aws.amazon.com/AmazonECR/latest/public/public-repository-policy-examples.html
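As a sketch of the shape such a policy might take (the action names come from the ECR Public docs linked above; the repository ARN is illustrative, and ECR Public ARNs carry no region, so adjust both to your account):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr-public:BatchCheckLayerAvailability",
                "ecr-public:InitiateLayerUpload",
                "ecr-public:UploadLayerPart",
                "ecr-public:CompleteLayerUpload",
                "ecr-public:PutImage"
            ],
            "Resource": "arn:aws:ecr-public::*:repository/my-app-public"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr-public:GetAuthorizationToken",
                "sts:GetServiceBearerToken"
            ],
            "Resource": "*"
        }
    ]
}
```

Note that authenticating to ECR Public requires sts:GetServiceBearerToken alongside ecr-public:GetAuthorizationToken.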

NS-Vue/Rails Presigned PUT request to S3 bucket giving 403

My front end is a NativeScript-Vue app; the back end is Rails. I get a presigned URL from the Rails server and use it on the client side to make a PUT request that uploads an image. I generate the presigned URL in Rails like so, following this:
def create_presigned_url
  filename = "#{self.id}.jpg"
  Aws.config[:credentials] = Aws::Credentials.new(
    "secret_id",
    "secret_key")
  s3 = Aws::S3::Resource.new(region: 'ap-southeast-1')
  bucket = 'bucket_name'
  obj = s3.bucket(bucket).object(filename)
  self.presigned_url = obj.presigned_url(:put, { acl: 'public-read' })
  self.update_column(:image_url, obj.public_url)
end
Long story short, the above code generates a presigned URL, and I use it in a PUT request on the client side with the NativeScript-background-http plugin:
var session = bghttp.session("image-upload");

UploadFile(session, file, url) {
  var request = {
    url: url,
    method: "PUT",
    headers: {
      "Content-Type": "application/octet-stream"
    },
    description: `Uploading ${file.substr(file.lastIndexOf("/") + 1)}`
  };
  var task = session.uploadFile(file, request);
}
The upload itself appears to run fine; the log shows:
LOG from device Nexus 6P: 'currentBytes: 4096'
LOG from device Nexus 6P: 'totalBytes: 622121'
LOG from device Nexus 6P: 'eventName: progress'
LOG from device Nexus 6P: 'currentBytes: 323584'
LOG from device Nexus 6P: 'totalBytes: 622121'
LOG from device Nexus 6P: 'eventName: progress'
LOG from device Nexus 6P: 'currentBytes: 606208'
LOG from device Nexus 6P: 'eventName: progress'
LOG from device Nexus 6P: 'totalBytes: 622121'
LOG from device Nexus 6P: 'currentBytes: 622121'
LOG from device Nexus 6P: 'totalBytes: 622121'
LOG from device Nexus 6P: 'eventName: progress'
LOG from device Nexus 6P: 'eventName: error'
LOG from device Nexus 6P: 'eventName: 403'
LOG from device Nexus 6P: 'eventName: {}'
But there's a 403 error, and the response is:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
...
I've googled the error, and the answers on SO all point to incorrect AWS keys; however, I have made sure I have the correct AWS credentials on the Rails side. I suspect it may have something to do with the content type used when generating the presigned URL, but I'm not sure. My bucket permissions seem correct, though I could have missed something there. I've set both the policy and CORS.
This is the bucket policy:
{
    "Version": "2012-10-17",
    "Id": "my-policy-id",
    "Statement": [
        {
            "Sid": "my-sid",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::my-id:user/my-user"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
This is the CORS:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
and my IAM user has the necessary policy as well.
Any insight would be appreciated.
EDIT: I've even deleted the bucket policy and granted full public access to the bucket, but I'm still seeing the 403, and it's still the signature error.
I had to change

self.presigned_url = obj.presigned_url(:put, { acl: 'public-read' })

to

self.presigned_url = obj.presigned_url(:put, expires_in: 10*60, content_type: 'application/octet-stream')

set the bucket ACL for Everyone to Public List / Private Write, and change the bucket policy to:
{
    "Version": "2012-10-17",
    "Id": "policy_id",
    "Statement": [
        {
            "Sid": "my_statement_id",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::user_id:user/iam_user"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my_bucket/*"
        },
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}
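Why did adding content_type fix it? A presigned URL's signature covers the request parameters it was generated with, so an upload whose headers differ from those parameters fails with SignatureDoesNotMatch. A toy Ruby sketch of the idea (a plain HMAC for illustration, not the real SigV4 algorithm; toy_signature and all values here are made up):

```ruby
require "openssl"

# Toy model: the signature is an HMAC over the request details that were
# signed. If the uploading client sends a different Content-Type than the
# one used at signing time, the digests no longer match.
def toy_signature(secret_key, method, path, content_type)
  canonical = [method, path, "content-type:#{content_type}"].join("\n")
  OpenSSL::HMAC.hexdigest("SHA256", secret_key, canonical)
end

signed   = toy_signature("secret_key", "PUT", "/bucket/1.jpg", "application/octet-stream")
uploaded = toy_signature("secret_key", "PUT", "/bucket/1.jpg", "image/jpeg")
puts signed == uploaded  # prints "false": the mismatched header breaks the match
```

This is why the fixed server code pins content_type to the same value the NativeScript client sends in its Content-Type header.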

Why does browser upload to S3 bucket return 403 forbidden?

I've read dozens of proposed answers to this question over the past few days, so I'm guessing the answer is pretty case-specific. I have a bucket for uploads from a Rails development app and have CORS set up like this (for use with localhost):
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
and I have a bucket policy set up like this:
{
    "Version": "2012-10-17",
    "Id": ID,
    "Statement": [
        {
            "Sid": SID,
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}
I'm using dropzone with the s3_direct_upload gem, and that all seems to be working fine. I've double-checked key, secret, bucket name, and region. Do I also need an IAM policy in addition to the bucket policy?

How can I set S3 ACL parameters with Refile gem integration with rails?

I am using the Refile gem to upload images to S3 with Rails 4. With my current settings, I can only view the images through the S3 URL after updating the ACL manually. Is there any way to configure the Refile gem to set the ACL to public_read?
I am able to access the images now after updating the S3 bucket policy to this:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::MY_BUCKET_NAME/*"
        }
    ]
}

'Permission Denied' when getting images on S3 from Grails

I have a Grails 2.1.1 app that accesses images stored in an S3 bucket via the Grails-AWS plugin.
Everything works fine when I use "grails run-app" and the server is localhost:8080/myApp: I can put and get files with no problem.
But when I deploy the war file to Amazon Elastic Beanstalk I get the following error when trying to get an image:
java.io.FileNotFoundException: 90916.png (Permission denied)
at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
at com.sommelier.domain.core.MyDomainObject.getPicture(MyDomainObject.groovy:145)
Here is my code for getting the image, which is what triggers the error:
File getPicture() {
    def url = aws.s3().on("mybucket").url(image, "myfoldername")
    File imageFile = new File(image)
    def fileOutputStream = new FileOutputStream(imageFile)
    def out = new BufferedOutputStream(fileOutputStream)
    out << new URL(url).openStream()
    out.close()
    return imageFile
}
I have set the permissions on my s3 bucket as wide open as I can. I have used the "Add more permissions" button and added every possible option.
Here is my bucket policy:
{
    "Version": "2008-10-17",
    "Id": "Policy1355414697022",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}
And my CORS configuration:
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Any thoughts? Is this a S3 permissions problem, or is there something else?
It seems you're trying to create the file somewhere you don't have write permission: new File(image) resolves against the process's working directory, which is writable when you run locally with "grails run-app" but typically not on Elastic Beanstalk.
It's better practice not to save a copy on the app server at all; if you can, I suggest returning the content from the object in memory.
But if you really do need the file locally for some reason, you should have write permission in /tmp.
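The same idea sketched in Ruby (the main language in this document) rather than Groovy: stream the downloaded bytes into a Tempfile, which lives under the system temp dir (typically /tmp) instead of the working directory. save_to_tempfile is a made-up helper name, a sketch of the pattern rather than a drop-in fix for the Grails code above:

```ruby
require "tempfile"

# Copy any readable IO (e.g. the open stream of the S3 URL) into a Tempfile.
# Tempfile picks a path in the system temp dir, which the app server can
# usually write to even when its working directory is read-only.
def save_to_tempfile(io, basename = "image", ext = ".png")
  tmp = Tempfile.new([basename, ext])
  tmp.binmode
  IO.copy_stream(io, tmp)
  tmp.flush
  tmp.rewind
  tmp
end
```

The caller is responsible for closing (and optionally unlinking) the returned Tempfile when done.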
