I recently created a new repository in AWS ECR, and I'm attempting to push an image. I'm copy/pasting the directions provided via the "View push commands" button on the repository page. I'll copy those here for reference:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com
("Login succeeded")
docker build -t myorg/myapp .
docker tag myorg/myapp:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
However, when I get to the docker push step, I see:
> docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
The push refers to repository [123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp]
a53c8ed5f326: Retrying in 1 second
78e16537476e: Retrying in 1 second
b7e38d172e62: Retrying in 1 second
f1ff72b2b1ca: Retrying in 1 second
33b67aceeff0: Retrying in 1 second
c3a550784113: Waiting
83fc4b4db427: Waiting
e8ade0d39f19: Waiting
487d5f9ec63f: Waiting
b24e42eb9639: Waiting
9262398ff7bf: Waiting
804aae047b71: Waiting
5d33f5d87bf5: Waiting
4e38024e7e09: Waiting
EOF
I'm wondering if this has something to do with the permissions/policies associated with this repository. Right now there are no statements attached to this repository. Is that the missing part? If so, what would that statement look like? I've tried this, but it had no effect:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowPutImage",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789:root"
},
"Action": "ecr:PutImage"
}
]
}
Bonus Points:
I eventually want to use this in a CDK CodeBuildAction. I was getting the same error as above there, so I checked whether I'd get the same result in my local terminal, and I do. So if the policy statement needs to be different for use in a CDK CodeBuildAction, those details would be appreciated as well.
Thank you in advance for any advice.
I was having the same problem when trying to upload the image manually using the AWS and Docker CLIs. I was able to fix it by going into ECR -> Repositories -> Permissions and adding a new policy statement with principal `*` and the following actions:
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
Be sure to switch to more restrictive principals afterwards. I was just trying to see whether permissions were the problem in this case, and sure enough they were.
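If you prefer the CLI over the console, the same test policy can be applied with `aws ecr set-repository-policy`. A minimal sketch, assuming the repository name `myorg/myapp` from the question; the wide-open `*` principal is for diagnosis only:

```shell
# Write the test policy to a file (principal "*" is for diagnosis only -- restrict it afterwards).
cat > /tmp/ecr-policy.json <<'EOF'
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPushPull",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
EOF
# Sanity-check the JSON locally before applying it:
python3 -m json.tool /tmp/ecr-policy.json > /dev/null && echo "policy JSON is valid"
# Apply it (requires credentials for the account that owns the repository):
# aws ecr set-repository-policy --repository-name myorg/myapp --policy-text file:///tmp/ecr-policy.json
```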
The accepted answer works correctly in resolving the issue. However, as mentioned in that answer, allowing principal `*` is risky and can get your ECR repository compromised.
Be sure to add specific principals, i.e. IAM users/roles, so that only those users/roles are allowed to execute the listed actions. The following JSON policy can be added under Amazon ECR >> Repositories >> (select the required repository) >> Permissions >> Edit policy JSON to get this resolved quickly:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AccountNumber>:role/<RoleName>"
},
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
]
}
]
}
I had this issue when the repository didn't exist in ECR - I assumed that pushing would create it, but it didn't.
Creating it before pushing solved the problem.
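A quick way to check is to ask ECR for the repository before pushing. A sketch using the repository name and region from the question (these calls need valid AWS credentials):

```shell
# Create the repository only if describe-repositories reports it missing.
aws ecr describe-repositories --repository-names myorg/myapp --region us-west-2 \
  || aws ecr create-repository --repository-name myorg/myapp --region us-west-2
```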
It turns out it was a missing/misconfigured policy. I was able to get it working within CodeBuild by adding a role with the AmazonEC2ContainerRegistryPowerUser managed policy:
new CodeBuildAction({
actionName: "ApplicationBuildAction",
input: this.applicationSourceOutput,
outputs: [this.applicationBuildOutput],
project: new PipelineProject(this, "ApplicationBuildProject", {
vpc: this.codeBuildVpc,
securityGroups: [this.codeBuildSecurityGroup],
environment: {
buildImage: LinuxBuildImage.STANDARD_5_0,
privileged: true,
},
environmentVariables: {
ECR_REPO_URI: {
value: ECR_REPO_URI,
},
ECR_REPO_NAME: {
value: ECR_REPO_NAME,
},
AWS_REGION: {
value: this.region,
}
},
buildSpec: BuildSpec.fromObject({
version: "0.2",
phases: {
pre_build: {
commands: [
"echo 'Logging into Amazon ECR...'",
"aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI",
"COMMIT_HASH=$(echo \"$CODEBUILD_RESOLVED_SOURCE_VERSION\" | head -c 8)"
]
},
build: {
commands: [
"docker build -t $ECR_REPO_NAME:latest ."
]
},
post_build: {
commands: [
"docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:latest",
"docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
"docker push $ECR_REPO_URI/$ECR_REPO_NAME:latest",
"docker push $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
]
}
}
}),
// * * ADDED THIS ROLE HERE * *
role: new Role(this, "application-build-project-role", {
assumedBy: new ServicePrincipal("codebuild.amazonaws.com"),
managedPolicies: [ManagedPolicy.fromAwsManagedPolicyName("AmazonEC2ContainerRegistryPowerUser")]
})
}),
});
In my case, the repo was not created on ECR. Creating it fixed it.
The same message ("Retrying in ... seconds" in a loop) may be seen when running "docker push" without first creating the corresponding repo in ECR ("myorg/myapp" in your example). Run:
aws ecr create-repository --repository-name myorg/myapp --region us-west-2
The problem is that your IAM user does not have full access to ECR, so attach an ECR access policy (for example the AmazonEC2ContainerRegistryFullAccess managed policy) to your IAM user.
For anyone running into this issue, my problem was having the wrong AWS profile/account configured in my AWS CLI.
Run aws configure and add the keys of the account that has access to the ECR repository.
If you have multiple AWS accounts using the cli, then check out this solution.
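If you juggle multiple accounts, a named profile avoids clobbering your default credentials. A sketch, assuming a hypothetical profile name `ecr-push`:

```shell
# One-time: store the keys for the account that owns the ECR repo under a named profile.
# aws configure --profile ecr-push
# Point subsequent aws / docker login commands at that profile:
export AWS_PROFILE=ecr-push
echo "active profile: $AWS_PROFILE"
# Verify which identity you are authenticated as (requires valid keys):
# aws sts get-caller-identity
```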
Just had this problem. It was permission related. In my case I was using CDKv2, which assumes a specific role in order to upload assets. Because the user I was deploying as did not have permission to assume that role, it failed. The hint was these warning messages that appeared during the deploy:
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-image-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-file-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
Yes, updating the permissions on your ECR repo would fix it, but since CDK is supposed to maintain this for you, the proper solution is to allow your user to assume the CDK role so you don't need to mess with ECR permissions yourself.
In my case I did this by granting the sts:AssumeRole permission for the resource arn:aws:iam::*:role/cdk-*. This allowed my user to assume both the file upload role and the image upload role.
After granting this permission, the CDK errors about being unable to assume the role went away, and I was able to deploy successfully.
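A minimal sketch of that grant (the sts:AssumeRole permission on arn:aws:iam::*:role/cdk-* described above), validated locally before attaching it; the user and policy names in the commented command are hypothetical:

```shell
cat > /tmp/cdk-assume-role-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAssumingCdkRoles",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/cdk-*"
    }
  ]
}
EOF
# Sanity-check the JSON locally:
python3 -m json.tool /tmp/cdk-assume-role-policy.json > /dev/null && echo "policy JSON is valid"
# Attach it as an inline policy (requires IAM permissions; names are placeholders):
# aws iam put-user-policy --user-name my-deploy-user \
#   --policy-name AllowAssumeCdkRoles \
#   --policy-document file:///tmp/cdk-assume-role-policy.json
```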
For me, the problem was that the repository name on ECR had to be the same as the name of the app/repository I was pushing. I tried all the fixes here; they didn't work. This did!
Browse ECR -> Repositories -> Permissions
Edit JSON Policy.
Add these actions.
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
And add "*" in Resources.
Save it.
You're good to go; now you can push the image to ECR.
If you have an MFA enforcement policy on your account, that might be the problem, because you need a session token before performing any action. Take a look at this AWS document on getting a token via the CLI.
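The flow is roughly: call sts get-session-token with your MFA device and a current code, then export the temporary credentials it returns. A sketch with placeholder values:

```shell
# The MFA serial ARN and token code are placeholders -- use your own device's values.
# aws sts get-session-token \
#   --serial-number arn:aws:iam::123456789:mfa/my-user \
#   --token-code 123456
# Export the temporary credentials from the response so docker login / ECR calls use them:
export AWS_ACCESS_KEY_ID="AccessKeyId-from-response"
export AWS_SECRET_ACCESS_KEY="SecretAccessKey-from-response"
export AWS_SESSION_TOKEN="SessionToken-from-response"
```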
I was uploading from an EC2 instance and had not specified the region for my AWS CLI; the login was successful, but the docker push command kept retrying even though I had set the correct permissions on the ECR repo side.
This line fixed the issue for me:
aws configure set default.region us-west-1
In my case I had used the wrong AWS credentials, and running aws configure with the correct credentials resolved the issue.
I logged in to Docker normally, and the authentication information also checks out, but the jib build fails.
docker login
cat ~/.docker/config.json
{
"auths": {
"https://index.docker.io/v1/": {}
},
"credsStore": "desktop"
}
Docker login is successful.
// build.gradle
jib {
from {
image = "eclipse-temurin:17"
}
to {
image = "username/${project.name}:${project.version}"
tags = ["latest"]
}
}
Then I run the command ./gradlew jib and get this error message:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':jib-test:jib'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Build image failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/eclipse-temurin' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help
Looks like a duplicate of these:
How to setup Jib container to authenticate with docker remote registry to pull images?
401 Unauthorized when using jib to create docker image
https://github.com/GoogleContainerTools/jib/issues/3677
Try emptying config.json entirely, or just delete the file. In particular, remove the entry for "https://index.docker.io/v1/" and the credsStore key.
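A sketch of that cleanup, demonstrated on a copy of the config shown above (for real use, point CONFIG at ~/.docker/config.json after backing it up):

```shell
CONFIG=/tmp/docker-config.json
# Recreate the problematic config from the question, for demonstration:
cat > "$CONFIG" <<'EOF'
{
  "auths": { "https://index.docker.io/v1/": {} },
  "credsStore": "desktop"
}
EOF
python3 - "$CONFIG" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg.pop("credsStore", None)  # stop delegating to the desktop credential helper
cfg.get("auths", {}).pop("https://index.docker.io/v1/", None)  # drop the empty Docker Hub stub
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
cat "$CONFIG"   # now contains only an empty "auths" map; run `docker login` again afterwards
```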
I have Jenkins running in Kubernetes along with Kaniko to build an image, and I want to push it to GCR.
As for the service account, I use an "owner"-level service account (just for a PoC).
My pipeline:
podTemplate(
containers: [
containerTemplate (
name: 'kaniko',
image: 'gcr.io/kaniko-project/executor:debug-v1.3.0',
ttyEnabled: true,
command: 'sleep 1000000',
args: '',
resourceRequestCpu: '0.5',
resourceRequestMemory: '500Mi'
)
],
serviceAccount: 'jenkins-service-account'
) {
node(POD_LABEL) {
try {
stage('Prepare') {
git([
url: 'https://myrepo.example.com/example-kaniko.git',
branch: 'master',
credentialsId: 'jenkins-github'
])
}
container('kaniko') {
stage ('Build image') {
sh '/kaniko/executor -c `pwd` --cache=true --skip-unused-stages=true --single-snapshot --destination=asia.gcr.io/[MY_PROJECT_ID]/testing-1:v1'
}
}
} catch (e) {
throw e
} finally {
echo "Done"
}
}
But still, I got an error:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "asia.gcr.io/[MY_PROJECT_ID]/testing-1:v1": resolving authorization for asia.gcr.io failed: error getting credentials - err: exit status 1, out: docker-credential-gcr/helper: could not retrieve GCR's access token: compute: Received 403 Unable to generate access token; IAM returned 403 Forbidden: The caller does not have permission
This error could be caused by a missing IAM policy binding on the target IAM service account.
How do I solve this problem?
Or am I using the wrong method?
Please help, thank you!
Take a look at this document and make sure you have a proper authentication method set up.
Additionally, you can check your container registry service account.
There's also a similar question here.
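If the missing piece is the IAM policy binding mentioned in the error, it can be added with gcloud. A sketch with placeholder project and service-account values (roles/storage.admin on the project is broad; a binding scoped to the GCR artifacts bucket also works):

```shell
# Placeholders: replace MY_PROJECT_ID and the service-account email with your own.
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member "serviceAccount:jenkins-sa@MY_PROJECT_ID.iam.gserviceaccount.com" \
  --role roles/storage.admin
```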
I am trying to copy files from the Jenkins server to S3, but I get the error An error occurred (InvalidRequest) when calling the PutObject operation.
These are the AWS policy options:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::hhhh-backups/*"
}
]
}
The command with which I try to copy:
aws s3 cp allure-report/ s3://hhhh-backups --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers --recursive
I also added the flag --acl bucket-owner-full-control and got another error:
An error occurred (InvalidRequest) when calling the PutObject operation: Specifying both Canned ACLs and Header Grants is not allowed
How do I resolve it? In general, I need to copy reports to S3 from Jenkins. I can't do this with the UI, I can't do it from code (since I don't have an access key), and finally I can't do it with a script from the AWS CLI.
Also, I can't manage the AWS options myself, but I can ask someone else to do it.
You can try replacing the --grants option with the --acl flag set to bucket-owner-full-control; as the error message says, the two cannot be combined.
Your modified command will look like:
aws s3 cp allure-report/ s3://hhhh-backups --acl bucket-owner-full-control --recursive
for your reference :
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The steps I followed are:
Ansible logs in as the root user
Update server packages
Create a user called deploy
Clone a Git repository from bitbucket.org
I want to clone the repository as the deploy user into his home directory using the SSH agent forwarding method.
But the issue is that I cannot get access even through SSH forwarding, and the error returned is: Doesn't have rights to access the repository.
My inventory file:
[production]
rails ansible_host=(my host ip) ansible_user=ubuntu
My ansible.cfg file looks like this:
[ssh_connection]
pipelining=True
ssh_args = -o ForwardAgent=true
My playbook looks like this:
---
- hosts: production
remote_user: root
become: yes
tasks:
- name: Update all packages to latest version
apt:
upgrade: dist
- add deploy user tasks here
(deploy user add task)
- name: APP | Clone repo
git:
repo: git@github.com:e911/Nepali-POS-Tagger.git
dest: /home/deploy/myproject
accept_hostkey: true
force: true
become: yes
become_user: deploy
tags: app
My deploy user is created, but for some reason I cannot clone the repository as the deploy user: it does not have access rights. I have researched this, and it seems to be because the SSH keys are not attached. When I log in as ubuntu and switch to the deploy user, the forwarded agent keys are not available to deploy. But I don't have a solution for this.
How do you solve this? Or what am I doing wrong here?
Here is the error snippet:
fatal: [rails]: FAILED! => {
"changed": false,
"cmd": "/usr/bin/git clone --origin origin '' /home/deploy/myproject",
"invocation": {
"module_args": {
"accept_hostkey": true,
"archive": null,
"bare": false,
"clone": true,
"depth": null,
"dest": "/home/deploy/myproject",
"executable": null,
"force": true,
"gpg_whitelist": [],
"key_file": null,
"recursive": true,
"reference": null,
"refspec": null,
"remote": "origin",
"repo": "git@github.com:e911/Nepali-POS-Tagger.git",
"separate_git_dir": null,
"ssh_opts": null,
"track_submodules": false,
"umask": null,
"update": true,
"verify_commit": false,
"version": "HEAD"
}
},
"msg": "",
"rc": 128,
"stderr": "Cloning into '/home/deploy/myproject'...\ngit@github.com: Permission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n",
"stderr_lines": [
"Cloning into '/home/deploy/myproject'...",
"git@github.com: Permission denied (publickey).",
"fatal: Could not read from remote repository.",
"",
"Please make sure you have the correct access rights",
"and the repository exists."
],
"stdout": "",
"stdout_lines": []
}
I have tried the solutions here: Ansible and Git Permission denied (publickey) at Git Clone, but they were of no help.
We have an alternative solution, using HTTPS instead of SSH:
For GitHub:
Generate a Token from link: https://github.com/settings/tokens
Give permission with scope: repo (full control of private repositories)
Use that token: git+https://<TOKEN>:x-oauth-basic@github.com/<ORGANIZATION>/<REPO>.git#<BRANCH>
For BitBucket:
Generate a random Password for your repo from link: https://bitbucket.org/account/settings/app-passwords
Give permission with scope Repositories: Read
Use that password to clone your repo as: git clone https://<USERNAME>:<GENERATED_PASSWORD>@bitbucket.org/<ORGANIZATION>/<REPO>.git
Hope this could be an alternative for the solution.
I am trying to push a docker container image to the Google Container Engine registry:
$ sudo gcloud docker push gcr.io/<my_project>/node
The push refers to a repository [gcr.io/<my_project>/node] (len: 1)
24c43271ae0b: Image already exists
70a0868daa56: Image already exists
1d86f57ee56d: Image already exists
a46b473f6491: Image already exists
a1ac8265769f: Image already exists
290bb858e1c3: Image already exists
d6fc4b5b4917: Image already exists
3842411e5c4c: Image already exists
7a01cc5f27b1: Image already exists
dbacfa057b30: Image already exists
latest: digest: sha256:02be2e66ad2fe022f433e228aa43f32d969433402820035ac8826880dbc325e4 size: 17236
Received unexpected HTTP status: 500 Internal Server Error
I cannot make the command more verbose. Neither with:
$ sudo gcloud docker push gcr.io/<my_project>/node --verbosity info
nor with this command, which should work:
$ sudo gcloud docker --log-level=info push gcr.io/sigma-cairn-99810/node
usage: gcloud docker [EXTRA_ARGS ...] [optional flags]
ERROR: (gcloud.docker) unrecognized arguments: --log-level=info
according to the documentation (see EXTRA_ARGS), and --log-level=info is a valid docker option:
SYNOPSIS
gcloud docker [EXTRA_ARGS ...] [--authorize-only, -a]
[--docker-host DOCKER_HOST]
[--server SERVER,[SERVER,...], -s SERVER,[SERVER,...]; default="gcr.io,us.gcr.io,eu.gcr.io,asia.gcr.io,b.gcr.io,bucket.gcr.io,appengine.gcr.io"]
[GLOBAL-FLAG ...]
...
POSITIONAL ARGUMENTS
[EXTRA_ARGS ...]
Arguments to pass to docker.
I am using the default service account that GCP installs on my container-vm machine instance. I have also given it Owner permissions to all resources in <my_project>.
UPDATE:
Running sudo gsutil ls -bL gs://artifacts.<my_project>.appspot.com I get:
gs://artifacts.<my_project>.appspot.com/ :
Storage class: STANDARD
Location constraint: US
Versioning enabled: None
Logging configuration: None
Website configuration: None
CORS configuration: None
Lifecycle configuration: None
ACL: []
Default ACL: []
If I do the same thing after authenticating with my non-service account, I get both ACL and Default ACL:
ACL:
[
{
"entity": "project-owners-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "owners"
},
"role": "OWNER"
},
{
"entity": "project-editors-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "editors"
},
"role": "OWNER"
},
{
"entity": "project-viewers-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "viewers"
},
"role": "READER"
}
]
Default ACL:
[
{
"entity": "project-owners-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "owners"
},
"role": "OWNER"
},
{
"entity": "project-editors-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "editors"
},
"role": "OWNER"
},
{
"entity": "project-viewers-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "viewers"
},
"role": "READER"
}
]
Can you run sudo gsutil ls -bL gs://artifacts.<my_project>.appspot.com and see if you have access to the GCS bucket? This will verify the storage permissions for the docker image.
While I think you should have permission by being added as an owner, this will verify whether you do or not.
As for the EXTRA_ARGS, I think --log-level="info" is only valid for the docker daemon command; docker push does not recognize --log-level="info".
UPDATE
From reviewing the logs again: you are pushing a mostly existing image, as the "Image already exists" log entries indicate. It failed on the first new write step. That suggests the instance you originally started with only had a read-only scope.
Can you please run this command and share the output.
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/instance/service-accounts/default/scopes
We are looking for the scope https://www.googleapis.com/auth/devstorage.read_write.
What might have happened is that the instance was not originally created with this scope, and since the scopes on an instance cannot be modified, it remains able only to read.
If this is the case, the solution would likely be to create a new instance.
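Since scopes are fixed at instance creation time, the replacement instance would need the read-write Storage scope mentioned above. A sketch with placeholder instance name and zone:

```shell
# Placeholders: the instance name and zone are examples -- use your own.
gcloud compute instances create my-builder-vm \
  --zone us-central1-a \
  --scopes https://www.googleapis.com/auth/devstorage.read_write
```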
We will file a bug to ensure better messaging is provided in situations like this.