Jenkins Kaniko Failed to Push to GCR

I have Jenkins running in Kubernetes along with Kaniko to build images, and I want to push to GCR.
For the service account, I use an "owner"-level service account (just for a PoC).
My pipeline:
podTemplate(
    containers: [
        containerTemplate(
            name: 'kaniko',
            image: 'gcr.io/kaniko-project/executor:debug-v1.3.0',
            ttyEnabled: true,
            command: 'sleep 1000000',
            args: '',
            resourceRequestCpu: '0.5',
            resourceRequestMemory: '500Mi'
        )
    ],
    serviceAccount: 'jenkins-service-account'
) {
    node(POD_LABEL) {
        try {
            stage('Prepare') {
                git([
                    url: 'https://myrepo.example.com/example-kaniko.git',
                    branch: 'master',
                    credentialsId: 'jenkins-github'
                ])
            }
            container('kaniko') {
                stage('Build image') {
                    sh '/kaniko/executor -c `pwd` --cache=true --skip-unused-stages=true --single-snapshot --destination=asia.gcr.io/[MY_PROJECT_ID]/testing-1:v1'
                }
            }
        } catch (e) {
            throw e
        } finally {
            echo "Done"
        }
    }
}
But I still get an error:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "asia.gcr.io/[MY_PROJECT_ID]/testing-1:v1": resolving authorization for asia.gcr.io failed: error getting credentials - err: exit status 1, out: docker-credential-gcr/helper: could not retrieve GCR's access token: compute: Received 403 Unable to generate access token; IAM returned 403 Forbidden: The caller does not have permission
This error could be caused by a missing IAM policy binding on the target IAM service account.
How do I solve this problem?
Or am I using the wrong method?
Please help, thank you!

Take a look at this document on kaniko authentication methods and make sure you have a proper one set up.
Additionally, check the service account that Container Registry is actually using.
There's also a similar question here.
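Since the error mentions a missing IAM policy binding, one common setup on GKE is Workload Identity: bind the Kubernetes service account the kaniko pod runs as to a Google service account that is allowed to push to GCR. A minimal sketch, assuming a project my-project, a Google service account gcr-pusher, and the jenkins-service-account KSA in the jenkins namespace (all names here are placeholders):

# Grant the Google service account push access to the GCR storage bucket
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:gcr-pusher@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# Allow the Kubernetes service account to impersonate it (Workload Identity)
gcloud iam service-accounts add-iam-policy-binding \
  gcr-pusher@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:my-project.svc.id.goog[jenkins/jenkins-service-account]"

# Annotate the KSA so pods using it pick up the Google identity
kubectl annotate serviceaccount jenkins-service-account \
  --namespace jenkins \
  iam.gke.io/gcp-service-account=gcr-pusher@my-project.iam.gserviceaccount.com

Alternatively, you can mount a service-account key as a Kubernetes secret into the kaniko container and point GOOGLE_APPLICATION_CREDENTIALS at it.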

Related

Authentication problem of my pipeline with my gitlab project

I am using the multibranch option with my Jenkins and I have an authentication problem with GitLab. Here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        registry = "*****@gmail.com/test"
        registryCredential = 'test'
        dockerImage = ''
    }
    stages {
        stage('Cloning our Git') {
            steps {
                git 'https://gitlab.com/**********/*************/************.git'
            }
        }
        stage('Build docker image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy our image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Cleaning up') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
This is the error I got:
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --force --progress -- https://gitlab.com/************/*******/***************.git +refs/heads/:refs/remotes/origin/" returned status code 128:
stdout:
stderr: remote: HTTP Basic: Access denied. The provided password or token is incorrect or your account has 2FA enabled and you must use a personal access token instead of a password. See https://gitlab.com/help/topics/git/troubleshooting_git#error-on-git-fetch-http-basic-access-denied
I would like to know how to authenticate to GitLab from the Jenkinsfile, or if you have a better solution for me I am interested. Thanks.
If you follow the link provided in the error message, you end up here:
https://docs.gitlab.com/ee/user/profile/account/two_factor_authentication.html#troubleshooting
You need to create a Personal Access Token, which is a special credential that delegates access to part of your account's rights.
The documentation for PAT is here:
https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html
In the GitLab repository interface, it is under Settings > Access Tokens.
Since you are reading the repository over HTTPS, you need to create a token with the read_repository scope.
Then you should be able to access the repository with:
https://<my-user-id>:<my-pat>@gitlab.com/<my-account>/<my-project-name>.git
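For example (hypothetical user and token values), the clone over HTTPS would look like:

# The token takes the place of the account password over HTTPS.
git clone https://my-user-id:<my-pat>@gitlab.com/my-account/my-project.git

In Jenkins, rather than embedding the token in the URL, it is cleaner to store it as a username/password credential and reference it from the git step with credentialsId.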

Terraform docker_registry_image error: 'unable to get digest: Got bad response from registry: 400 Bad Request'

I am trying to use CDK for Terraform (CDKTF) to build and push a Docker image to AWS ECR. I have decided to use the Terraform Docker provider for it. Here is my code:
class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    const usProvider = new aws.AwsProvider(this, "us-provider", {
      region: "us-east-1",
      defaultTags: {
        tags: {
          Project: "CV",
          Name: "CV",
        },
      },
    });

    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      provider: usProvider,
      repositoryName: "cv",
      forceDestroy: true,
    });

    const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(
      this,
      "auth-token",
      {
        provider: usProvider,
      }
    );

    new docker.DockerProvider(this, "docker-provider", {
      registryAuth: [
        {
          address: repo.repositoryUri,
          username: authToken.userName,
          password: authToken.password,
        },
      ],
    });

    new docker.RegistryImage(this, "image-on-public-ecr", {
      name: repo.repositoryUri,
      buildAttribute: {
        context: __dirname,
      },
    });
  }
}
But during deployment I get this error: Unable to create image, image not found: unable to get digest: Got bad response from registry: 400 Bad Request. Yet it is still able to push to the registry; I can see the image in the AWS console.
I can't seem to find any mistake in my code, and I don't understand the error. I hope you can help.
Terraform's execution model is built so that Terraform first gathers all the information it needs about the current state of your infrastructure, and then in a second step calculates the plan of changes that need to be applied to bring the current state to the one you described in your configuration.
This poses a problem here: the provider you declare uses information that is only available once the plan is being put into action; there is no repo URL / auth token before the ECR repo has been created.
There are different ways to solve this problem: you can make use of the cross-stack references / multi-stack feature and split the ECR repo creation into a separate TerraformStack that is deployed beforehand. You can pass a value from that stack into your other stack and use it to configure the provider.
Another way to solve this is by building and pushing your image outside of the Docker provider, through the null provider with a local provisioner, as is done in the docker-aws-ecs E2E example.
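As a rough sketch of that second approach, the commands such a local provisioner would run (outside Terraform's plan/apply cycle) look like this; the registry alias and image name are placeholders:

# ECR Public authentication always goes through us-east-1
aws ecr-public get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin public.ecr.aws

# Build and push; the repository URI would come from the ECR stack's output
docker build -t public.ecr.aws/<alias>/cv:latest .
docker push public.ecr.aws/<alias>/cv:latest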

Jib authentication not working when using Docker

I logged in to Docker normally and verified the authentication information, but the Jib build fails.
docker login
cat ~/.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "credsStore": "desktop"
}
Docker login is successful.
// build.gradle
jib {
    from {
        image = "eclipse-temurin:17"
    }
    to {
        image = "username/${project.name}:${project.version}"
        tags = ["latest"]
    }
}
Then I run the command ./gradlew jib.
Error message:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':jib-test:jib'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Build image failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/eclipse-temurin' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help
Looks like a duplicate of these:
How to setup Jib container to authenticate with docker remote registry to pull images?
401 Unauthorized when using jib to create docker image
https://github.com/GoogleContainerTools/jib/issues/3677
Try emptying config.json entirely or just delete the file. Particularly, remove the entry for "https://index.docker.io/v1/" and credsStore.
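A quick way to test that hypothesis, assuming the file can be safely recreated by a later docker login:

# Move the config aside so Jib stops consulting the "desktop"
# credential helper, then retry the build.
mv ~/.docker/config.json ~/.docker/config.json.bak
./gradlew jib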

Pushing an image to ECR, getting "Retrying in ... seconds"

I recently created a new repository in AWS ECR, and I'm attempting to push an image. I'm copy/pasting the directions provided via the "View push commands" button on the repository page. I'll copy those here for reference:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com
("Login succeeded")
docker build -t myorg/myapp .
docker tag myorg/myapp:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
However, when I get to the docker push step, I see:
> docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp:latest
The push refers to repository [123456789.dkr.ecr.us-west-2.amazonaws.com/myorg/myapp]
a53c8ed5f326: Retrying in 1 second
78e16537476e: Retrying in 1 second
b7e38d172e62: Retrying in 1 second
f1ff72b2b1ca: Retrying in 1 second
33b67aceeff0: Retrying in 1 second
c3a550784113: Waiting
83fc4b4db427: Waiting
e8ade0d39f19: Waiting
487d5f9ec63f: Waiting
b24e42eb9639: Waiting
9262398ff7bf: Waiting
804aae047b71: Waiting
5d33f5d87bf5: Waiting
4e38024e7e09: Waiting
EOF
I'm wondering if this has something to do with the permissions/policies associated with this repository. Right now there are no statements attached to this repository. Is that the missing part? If so, what would that statement look like? I've tried this, but it had no effect:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPutImage",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:root"
      },
      "Action": "ecr:PutImage"
    }
  ]
}
Bonus Points:
I eventually want to use this in a CDK CodeBuildAction. I was getting the same error as above, so I checked whether I got the same result in my local terminal, which I do. So if the policy statement needs to be different for use in the CDK CodeBuildAction, those details would be appreciated as well.
Thank you in advance for any advice.
I was having the same problem when trying to upload the image manually using the AWS and Docker CLI. I was able to fix it by going into ECR -> Repositories -> Permissions then adding a new policy statement with principal:* and the following actions:
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
Be sure to add more restrictive principals. I was just trying to see if permissions were the problem in this case and sure enough they were.
The accepted answer works correctly in resolving the issue. However, as has been mentioned in the answer, allowing principal:* is risky and can get your ECR compromised.
Be sure to add specific principal(s), i.e. IAM users/roles, so that only those users/roles are allowed to execute the mentioned actions. The following JSON policy can be added under Amazon ECR >> Repositories >> Select Required Repository >> Permissions >> Edit policy JSON to get this resolved quickly:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AccountNumber>:role/<RoleName>"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
I had this issue when the repository didn't exist in ECR - I assumed that pushing would create it, but it didn't.
Creating it before pushing solved the problem.
It turns out it was a missing/misconfigured policy. I was able to get it working within CodeBuild by adding a role with the AmazonEC2ContainerRegistryPowerUser managed policy:
new CodeBuildAction({
  actionName: "ApplicationBuildAction",
  input: this.applicationSourceOutput,
  outputs: [this.applicationBuildOutput],
  project: new PipelineProject(this, "ApplicationBuildProject", {
    vpc: this.codeBuildVpc,
    securityGroups: [this.codeBuildSecurityGroup],
    environment: {
      buildImage: LinuxBuildImage.STANDARD_5_0,
      privileged: true,
    },
    environmentVariables: {
      ECR_REPO_URI: {
        value: ECR_REPO_URI,
      },
      ECR_REPO_NAME: {
        value: ECR_REPO_NAME,
      },
      AWS_REGION: {
        value: this.region,
      },
    },
    buildSpec: BuildSpec.fromObject({
      version: "0.2",
      phases: {
        pre_build: {
          commands: [
            "echo 'Logging into Amazon ECR...'",
            "aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI",
            "COMMIT_HASH=$(echo \"$CODEBUILD_RESOLVED_SOURCE_VERSION\" | head -c 8)",
          ],
        },
        build: {
          commands: ["docker build -t $ECR_REPO_NAME:latest ."],
        },
        post_build: {
          commands: [
            "docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:latest",
            "docker tag $ECR_REPO_NAME:latest $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
            "docker push $ECR_REPO_URI/$ECR_REPO_NAME:latest",
            "docker push $ECR_REPO_URI/$ECR_REPO_NAME:$COMMIT_HASH",
          ],
        },
      },
    }),
    // * * ADDED THIS ROLE HERE * *
    role: new Role(this, "application-build-project-role", {
      assumedBy: new ServicePrincipal("codebuild.amazonaws.com"),
      managedPolicies: [
        ManagedPolicy.fromAwsManagedPolicyName("AmazonEC2ContainerRegistryPowerUser"),
      ],
    }),
  }),
});
In my case, the repo was not created on ECR. Creating it fixed it.
The same message ("Retrying in ... seconds" in a loop) may be seen when running docker push without first creating the corresponding repo in ECR ("myorg/myapp" in your example). Run:
aws ecr create-repository --repository-name myorg/myapp --region us-west-2
The problem is that your IAM user does not have permission for full access to ECR, so attach such a policy (for example, the AmazonEC2ContainerRegistryFullAccess managed policy) to your IAM user.
For anyone running into this issue: my problem was having the wrong AWS profile/account configured in my AWS CLI.
Run aws configure and add the keys of the account that has access to the ECR repository.
If you have multiple AWS accounts using the cli, then check out this solution.
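For instance, with named profiles (the profile name here is a placeholder), you can keep the ECR-capable credentials separate and select them per command:

# Store the keys of the account that owns the ECR repo under a profile
aws configure --profile ecr-account

# Use that profile for the login step
aws ecr get-login-password --region us-west-2 --profile ecr-account \
  | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com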
Just had this problem. It was permission related. In my case I was using CDKv2, which assumes a specific role in order to upload assets. Because the user I was deploying as did not have permission to assume that role, it failed. The hint was these warning messages that appeared during the deploy:
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-image-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
current credentials could not be used to assume 'arn:aws:iam::12345:role/cdk-abcde1234-file-publishing-role-12345-ap-southeast-2', but are for the right account. Proceeding anyway.
Yes, updating the permissions on your ECR repo would fix it, but since CDK is supposed to maintain this for you, the proper solution is to allow your user to assume the CDK role so you don't need to mess with ECR permissions yourself.
In my case I did this by granting the sts:AssumeRole permission for the resource arn:aws:iam::*:role/cdk-*. This allowed my user to assume both the file upload role and the image upload role.
After granting this permission, the CDK errors about being unable to assume the role went away, and I was able to deploy successfully.
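A sketch of that grant as an inline IAM policy attached via the CLI (the user name is a placeholder):

# Allow the deploying user to assume the CDK bootstrap roles
aws iam put-user-policy \
  --user-name my-deploy-user \
  --policy-name AllowAssumeCdkRoles \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/cdk-*"
    }]
  }'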
For me, the problem was that the repository name on ECR had to be the same as the name of the app/repository I was pushing. Tried all fixes here, didn't work. This did!
Browse ECR -> Repositories -> Permissions
Edit JSON Policy.
Add these actions.
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
And add "*" in Resources.
Save it.
You're good to go; now you can push the image to ECR.
If you have an MFA enforcement policy on your account, that might be the problem, because you must obtain a session token before performing these actions. Take a look at this AWS document on getting a token in the CLI.
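For reference, a sketch of fetching temporary credentials with an MFA code (the device ARN and code are placeholders):

aws sts get-session-token \
  --serial-number arn:aws:iam::123456789:mfa/my-user \
  --token-code 123456
# Export the returned AccessKeyId, SecretAccessKey and SessionToken as
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN
# before retrying docker login.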
I was uploading from an EC2 instance and had forgotten to specify the region for my AWS CLI. The login was successful, but the docker push command kept retrying even though I had set the correct permissions on the ECR repo side.
This line fixed the issue for me:
aws configure set default.region us-west-1
In my case I had used the wrong AWS credentials, and running aws configure with the correct credentials resolved the issue.

Azure DevOps pipeline REST API: how to pass a variable with SourceFolder for the CopyFiles task, self-hosted agent in a container

My setup is as follows:
I have a hosted agent whose first job copies files from the self-hosted agent, which is started as a Docker container.
The hosted pipeline is triggered with the pipeline "run" REST API:
https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run%20pipeline?view=azure-devops-rest-6.0
This is how the body looks now:
"resources": {
"repositories:": {
"self": {
"refName": "refs/heads/my_branch"
}
}
}
It is working great.
Now the relevant part of the hosted pipeline looks like this:
- job: self_hosted_connect
  timeoutInMinutes: 10
  pool: Default
  steps:
  - task: CopyFiles@2
    inputs:
      SourceFolder: '/home/copy_dir'
      Contents: '**'
      TargetFolder: '$(build.artifactstagingdirectory)'
This also works great.
My questions are:
1. I would like to send another parameter in the "run" REST API that contains the SourceFolder path, so that the CopyFiles task is dynamic and does not have a hardcoded SourceFolder path.
2. When I run the self-hosted agent from Docker, how do I tell the agent to include a directory outside its working dir, so that the pipeline does not fail with the error:
#[error]Unhandled: Not found SourceFolder: /home/copy_dir
UPDATE
I updated the request to:
{
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/my_branch"
      }
    }
  },
  "templateParameters": {
    "Folderpath": "{/home/foo/my_dir}"
  }
}
but I'm getting an error:
{
  "$id": "1",
  "innerException": null,
  "message": "Unexpected parameter 'Folderpath'",
  "typeName": "Microsoft.Azure.Pipelines.WebApi.PipelineValidationException, Microsoft.Azure.Pipelines.WebApi",
  "typeKey": "PipelineValidationException",
  "errorCode": 0,
  "eventId": 3000
}
send another parameter in the "run" REST API that contains the SourceFolder path
We can use runtime parameters in the pipeline.
YAML sample:
parameters:
- name: Folderpath
  displayName: 'configure Folder path'
  type: string
  default: {SourceFolder path}

steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '${{ parameters.Folderpath }}'
    Contents: '**'
    TargetFolder: '$(build.artifactstagingdirectory)'
Request URL:
POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0-preview.1
Request Body:
{
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/{my_branch}"
      }
    }
  },
  "templateParameters": {
    "Folderpath": "{SourceFolder path}"
  }
}
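For reference, a sketch of that request with curl, authenticating with a personal access token (organization, project, pipeline id, folder path, and the PAT are placeholders):

# Azure DevOps accepts basic auth with an empty username and a PAT
curl -X POST \
  -u ":$AZURE_DEVOPS_PAT" \
  -H "Content-Type: application/json" \
  -d '{
        "resources": { "repositories": { "self": { "refName": "refs/heads/my_branch" } } },
        "templateParameters": { "Folderpath": "/home/copy_dir" }
      }' \
  "https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0-preview.1"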
how do I tell the self-hosted agent to include the directory outside its working dir?
We can copy from a local folder or use Azure DevOps predefined variables to define the source folder.
Update 1
We have to define the parameter in the YAML build; if not, we will get the error Unexpected parameter 'Folderpath'.
UPDATE 2
I would like it to take the real path (the one I pass in the request) on the disk where the self-hosted Docker container is running, not a path relative to the Docker working dir. Now it gives me this error:
[error]Unhandled: Not found SourceFolder: /azp/agent/_work/1/s/{/home/copy_dir}
where /azp is the Docker working dir.
I configured Docker from this link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
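One common Docker-level approach (not covered in the thread, so treat it as an assumption) is to bind-mount the host directory into the agent container when starting it, so the path exists inside the container at the same location; the image name and PAT follow the linked walkthrough and are placeholders:

# Make /home/copy_dir on the host visible at the same path inside
# the containerized agent, so the CopyFiles task can find it.
docker run -e AZP_URL="https://dev.azure.com/{organization}" \
  -e AZP_TOKEN="<pat>" \
  -e AZP_POOL="Default" \
  -v /home/copy_dir:/home/copy_dir \
  dockeragent:latest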
