Push image to ECR works for private repositories only - docker

I'm trying to push an image to a public repository in ECR. To be able to do so, I created a policy that gives push permissions and attached this policy to my user. The policy in JSON format is the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:*"
            ],
            "Resource": "arn:aws:ecr:us-east-1:*:repository/my-app"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:*"
            ],
            "Resource": "arn:aws:ecr:us-east-1:*:repository/my-app-public"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        }
    ]
}
The push works fine for the private repository but gives the error denied: Not Authorized when I try to push the image to the public repo. How can I push an image to an ECR public repo?

ECR Public is its own service with its own ecr-public:* IAM actions; the ecr:* actions in your policy only cover private ECR. To push images to ECR Public, the statement needs the corresponding ecr-public actions.
The first example here should get you on the right track: https://docs.aws.amazon.com/AmazonECR/latest/public/public-repository-policy-examples.html
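As a minimal sketch (using Resource "*" for simplicity; the repository-level actions can be scoped down to the public repository's ARN), the extra statement would look something like:
{
    "Effect": "Allow",
    "Action": [
        "ecr-public:GetAuthorizationToken",
        "sts:GetServiceBearerToken",
        "ecr-public:BatchCheckLayerAvailability",
        "ecr-public:InitiateLayerUpload",
        "ecr-public:UploadLayerPart",
        "ecr-public:CompleteLayerUpload",
        "ecr-public:PutImage"
    ],
    "Resource": "*"
}
The push itself then authenticates against the public.ecr.aws registry, which always uses us-east-1 for authorization (<your-alias> below is a placeholder for your registry alias):
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
docker tag my-app:latest public.ecr.aws/<your-alias>/my-app-public:latest
docker push public.ecr.aws/<your-alias>/my-app-public:latest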

Related

Amazon-s3: An error occurred (InvalidRequest) when calling the PutObject operation. Specifying both Canned ACLs and Header Grants is not allowed

I'm trying to copy files from a Jenkins server to S3, but I get the error An error occurred (InvalidRequest) when calling the PutObject operation.
This is the IAM policy attached to the user:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::hhhh-backups/*"
        }
    ]
}
The command with which I try to copy:
aws s3 cp allure-report/ s3://hhhh-backups --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers --recursive
I also added the flag --acl bucket-owner-full-control and got another error:
An error occurred (InvalidRequest) when calling the PutObject operation: Specifying both Canned ACLs and Header Grants is not allowed
How do I resolve this? In general, I need to copy reports from Jenkins to S3. I can't do this through the UI, I can't do it from code (since I don't have an access key), and I can't do it with a script from the AWS CLI.
Also, I can't change the AWS permissions myself, but I can ask someone else to do it.
You can try adding the --acl flag set to bucket-owner-full-control and dropping the --grants option.
Your modified command will look like:
aws s3 cp allure-report/ s3://hhhh-backups --acl bucket-owner-full-control --recursive
For your reference:
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
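The second error comes from mixing the two styles: --grants sends explicit header grants while --acl sends a canned ACL, and S3 rejects a PutObject request that contains both, so only one of them should appear on the command line. If the intent of the AllUsers grant was to make the reports publicly readable, a rough equivalent using a canned ACL alone (assuming the bucket allows public ACLs) would be:
aws s3 cp allure-report/ s3://hhhh-backups --recursive --acl public-read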

Aws::S3::Errors::AccessDenied ROLLBACK 500 Internal Server Error

I'm trying to upload an image to an S3 bucket using Ruby on Rails and Paperclip, but it's not working. I have tried almost everything on the web.
I know there are many questions about this, but I have tried most of the suggestions and nothing worked. Please review the question, because I listed what I tried below.
I set up an IAM user, and the user has the AmazonS3FullAccess policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
I set this policy on the bucket:
{
    "Version": "2012-10-17",
    "Id": "Policy1557294263403",
    "Statement": [
        {
            "Sid": "Stmt1557294241958",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::726051891502:user/borroup-admin"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::borroup",
                "arn:aws:s3:::borroup/*"
            ]
        }
    ]
}
I set this CORS configuration on the bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
I made sure all of the public access block settings for this bucket are set to false.
This is the Ruby on Rails config.
Note: I'm using the user's access_key_id and secret_access_key for this.
config.paperclip_defaults = {
  storage: :s3,
  path: ':class/:attachment/:id/:style/:filename',
  s3_host_name: 's3.us-east-2.amazonaws.com',
  s3_credentials: {
    bucket: 'borroup',
    access_key_id: '************',
    secret_access_key: '***************************',
    s3_region: 'us-east-2'
  }
}
I get this error when I try to upload the image:
Aws::S3::Errors::AccessDenied in PhotosController#create
When I check the bucket log I get this
Are the bucket and the IAM user in different accounts?
If so, the bucket policy is incorrect; the correct bucket policy is:
{
    "Version": "2012-10-17",
    "Id": "Policy1557294263403",
    "Statement": [
        {
            "Sid": "Stmt1557294241958",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::726051891502:user/borroup-admin"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::borroup",
                "arn:aws:s3:::borroup/*"
            ]
        }
    ]
}
If the user and the bucket are in the same account, the bucket policy doesn't matter, because the IAM user already has full S3 permissions.
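One quick way to narrow this down (a diagnostic sketch, not part of the fix) is to try the same upload with the AWS CLI using the same access key outside of Rails; if this also fails with AccessDenied, the problem is in IAM or the bucket policy rather than in the Paperclip config. The key name below is just an example:
aws s3 cp test.txt s3://borroup/test.txt --region us-east-2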

AWS CodeDeploy + Elastic Beanstalk using Bitbucket

I have one repository in Bitbucket, and for deploying to Elastic Beanstalk I am using CodeDeploy (I couldn't find any better solution). I created elsticbeanServceRole in IAM, and the policies I attached to that role are:
AmazonEC2FullAccess
AdministratorAccess
AmazonAPIGatewayAdministrator
codedeployServiceRolePolicy (Custom Policy)
This is the content of codedeployServiceRolePolicy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "autoscaling:PutLifecycleHook",
                "autoscaling:DeleteLifecycleHook",
                "autoscaling:RecordLifecycleActionHeartbeat",
                "autoscaling:CompleteLifecycleAction",
                "autoscaling:DescribeAutoscalingGroups",
                "autoscaling:PutInstanceInStandby",
                "autoscaling:PutInstanceInService",
                "ec2:Describe*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
Then I copied the role ARN and pasted it into the CodeDeploy application settings.
I have successfully configured the Bitbucket CodeDeploy settings; for that, I created one role called bitbucketRole with a custom policy. The policy content is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:*",
                "codedeploy:*",
                "ec2:*",
                "elasticloadbalancing:*",
                "iam:AddRoleToInstanceProfile",
                "iam:CreateInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteInstanceProfile",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:GetInstanceProfile",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:ListInstanceProfilesForRole",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}
Problem
Now when I click on "Deploy to AWS" in Bitbucket, the deployment from Bitbucket to CodeDeploy is triggered, but I am getting this error in the CodeDeploy console:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS).
Please help me.
Is the CodeDeploy agent running on your instances? Also, can you paste the error info from one of the failed instances?
HEALTH_CONSTRAINTS usually means CodeDeploy was unable to continue the deployment due to the health constraints set in the deployment config. Too many instances already failed and CodeDeploy can't take down any more instances.
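A rough way to check both of those things, assuming Amazon Linux instances and that you have the deployment ID at hand:
# on one of the instances in the deployment group
sudo service codedeploy-agent status
# from a machine with the AWS CLI: list the instances in the deployment and pull per-instance error details
aws deploy list-deployment-instances --deployment-id <deployment-id>
aws deploy get-deployment-instance --deployment-id <deployment-id> --instance-id <instance-id>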

How can I set S3 ACL parameters with Refile gem integration with rails?

I am using the Refile gem to upload images to S3 with Rails 4. With my current settings, I am able to view the images through the S3 URL only after updating the ACL manually.
Is there any way to configure the Refile gem to set the ACL params to public_read?
I am able to access the images now by updating the S3 bucket policy to this:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::MY_BUCKET_NAME/*"
        }
    ]
}
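For objects that were uploaded before the policy change, the ACL can also be fixed per object from the AWS CLI instead of through the console (the bucket name and key below are placeholders):
aws s3api put-object-acl --bucket MY_BUCKET_NAME --key path/to/image.jpg --acl public-read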

'Permission Denied' when getting images on S3 from Grails

I have a Grails 2.1.1 app that accesses images stored in an S3 bucket via the Grails-AWS plugin.
Everything works fine when I use "grails run-app" and the server is localhost:8080/myApp. I can put and get files with no problem.
But when I deploy the war file to Amazon Elastic Beanstalk I get the following error when trying to get an image:
java.io.FileNotFoundException: 90916.png (Permission denied)
at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
at com.sommelier.domain.core.MyDomainObject.getPicture(MyDomainObject.groovy:145)
Here is the code that gets the image and triggers the error:
File getPicture() {
    def url = aws.s3().on("mybucket").url(image, "myfoldername")
    File imageFile = new File(image)
    def fileOutputStream = new FileOutputStream(imageFile)
    def out = new BufferedOutputStream(fileOutputStream)
    out << new URL(url).openStream()
    out.close()
    return imageFile
}
I have set the permissions on my s3 bucket as wide open as I can. I have used the "Add more permissions" button and added every possible option.
Here is my bucket policy:
{
    "Version": "2008-10-17",
    "Id": "Policy1355414697022",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}
And my CORS configuration:
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Any thoughts? Is this a S3 permissions problem, or is there something else?
It seems you're trying to create the file where you don't have write permission.
It's better practice not to save a copy on the app server. If you can, I suggest returning the manipulated content (or whatever you need) straight from the object in memory.
But if you really do need the file locally for some reason, you should have write permission in /tmp.
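A minimal sketch of that approach, keeping the rest of the method as in the question but creating the local copy with File.createTempFile so it lands in the JVM's temp directory (normally writable on the Beanstalk instance) instead of the working directory:
File getPicture() {
    def url = aws.s3().on("mybucket").url(image, "myfoldername")
    // write to the temp dir instead of the app's (read-only) working directory
    File imageFile = File.createTempFile("s3-image-", "-" + image)
    imageFile.withOutputStream { out ->
        out << new URL(url).openStream()
    }
    return imageFile
}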
