I'm building a Docker image with Spotify's Maven plugin and trying to push it to ECR afterwards.
This happens using the CloudBees Docker Build and Publish plugin, after logging in with the Amazon ECR plugin.
This works like a charm on the Jenkins master.
But on the slave I get:
no basic auth credentials
Build step 'Docker Build and Publish' marked build as failure
Is pushing from slaves out of scope for the ECR Plugin or did I miss something?
The answers here didn't work for my pipeline. I found the following solution to work, and it is also clean:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'myCreds', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    sh '''
        aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY}
        ..
    '''
}
This solution doesn't require AWS CLI v2 (get-login-password is also available in recent v1 releases).
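For completeness, here is a rough sketch of what the rest of that sh block can look like; AWS_REGION and REGISTRY are assumed to be environment variables you define, and my-image is just a placeholder name:
# log in to ECR with the bound credentials
aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY}
# build, tag and push a placeholder image
docker build -t my-image:latest .
docker tag my-image:latest ${REGISTRY}/my-image:latest
docker push ${REGISTRY}/my-image:latest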
You might be falling foul of the bug reported in the ECR plugin here: https://issues.jenkins-ci.org/browse/JENKINS-44143
Various people in that thread are describing slightly different symptoms, but the common theme is that docker was failing to use the auth details that had been correctly generated by the ECR plugin.
In my case I found this was because the ECR plugin was saving to one Docker config and the docker-commons plugin (which handles the actual work of the Docker API) was reading from another. Docker changed its config format and location in an earlier version, which caused the conflict.
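As a purely illustrative check on the agent, you can look at both locations to see which one actually holds the generated auth entry:
# old-style config some plugin versions write to
cat ~/.dockercfg 2>/dev/null
# new-style config the docker CLI reads
cat ~/.docker/config.json 2>/dev/null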
The plugin author offers a workaround which is to essentially just nuke both config files first:
node {
    // clean up the current user's Docker credentials
    sh 'rm ~/.dockercfg || true'
    sh 'rm ~/.docker/config.json || true'
    // configure the registry
    docker.withRegistry('https://ID.ecr.eu-west-1.amazonaws.com', 'ecr:eu-west-1:86c8f5ec-1ce1-4e94-80c2-18e23bbd724a') {
        // build the image
        def customImage = docker.build("my-image:${env.BUILD_ID}")
        // push the image
        customImage.push()
    }
}
You might want to try that purely as a debugging step and quick fix (if it works you can be confident this bug is your issue).
My permanent fix was simply to create the new-style Docker config manually with a sensible default, and then set the DOCKER_CONFIG environment variable to point to it.
I did this in the Dockerfile that builds my Jenkins instance, like so:
RUN mkdir -p $JENKINS_HOME/.docker/ && \
    echo '{"auths":{}}' > $JENKINS_HOME/.docker/config.json
ENV DOCKER_CONFIG $JENKINS_HOME/.docker
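If rebuilding the Jenkins image is not an option, the same idea can be sketched as a shell step on the agent; the paths here are assumptions, adjust them to your setup:
# point Docker at a known config location and seed it with an empty auths block
export DOCKER_CONFIG="$JENKINS_HOME/.docker"
mkdir -p "$DOCKER_CONFIG"
[ -f "$DOCKER_CONFIG/config.json" ] || echo '{"auths":{}}' > "$DOCKER_CONFIG/config.json"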
You don't have credentials on the slave; that is the problem. I fixed this by injecting the credentials into every pipeline that runs on the on-demand slaves.
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWS_EC2_key', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    sh "aws configure set aws_access_key_id ${AWS_ACCESS_KEY_ID}"
    sh "aws configure set aws_secret_access_key ${AWS_SECRET_ACCESS_KEY}"
    sh '$(aws ecr get-login --no-include-email --region eu-central-1)'
    sh "docker push ${your_ec2_repo}/${di_name}:image_name${newVersion}"
}
Of course, you need to have the AWS CLI installed on the slave.
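For example, a minimal sketch of installing it with pip (note that aws ecr get-login only exists in AWS CLI v1; v2 replaced it with get-login-password):
# install AWS CLI v1 on the slave and verify
pip install --upgrade awscli
aws --version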
I have created a Jenkins Credential (Secret File Type) and want to use this credential in my Ansible playbook.
Ansible is able to identify the Jenkins credential when it is running on localhost.
But, when the playbook runs on the remote host, it is unable to identify the value of my Jenkins credential.
My Jenkinsfile stage for Ansible looks like this:
stage('Build Image') {
    steps {
        withCredentials([file(credentialsId: 'private.key', variable: 'PRIVATE_KEY')]) {
            runAnsible('playbook.yml', [
                env: dev,
                key: PRIVATE_KEY
                .
                .
            ])
        }
    }
}
Error:
"No such file or directory"
Edit: runAnsible is part of our Jenkins shared library (jenkins-shared-libs). Basically, it is just like calling
ansible-playbook playbook.yaml --extra-vars '.....'
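Purely for illustration, the expanded call presumably amounts to something like this (variable names taken from the snippet above, remaining extra-vars omitted):
# PRIVATE_KEY expands to a temporary file path on the Jenkins agent, not on the remote host
ansible-playbook playbook.yml --extra-vars "env=dev key=${PRIVATE_KEY}"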
I am trying to deploy a container using Helm from a Jenkins pipeline. I have installed the Kubernetes plugin for Jenkins and provided it the URL of my locally running Kubernetes cluster and the config file in credentials. When I do 'Test connection', it shows 'Connected to Kubernetes 1.16+'.
But when I run the helm install command from the pipeline, it gives this error:
Error: Kubernetes cluster unreachable: the server could not find the requested resource
Note: I am able to do all operations using the CLI, and also from the Jenkins pipeline by using withCredentials and passing the credential file variable name (created in Jenkins credentials). I just want to do this without wrapping it inside 'withCredentials'.
Both Jenkins and Kubernetes are running separately on Windows 10.
Helm uses the kubectl config file. I'm using a step like this:
steps {
    container('helm') {
        withCredentials([file(credentialsId: 'project-credentials', variable: 'PULL_KEYFILE')]) {
            sh """
                gcloud auth activate-service-account --key-file=${PULL_KEYFILE} --project project-name
                gcloud container clusters get-credentials cluster-name --zone us-east1
                kubectl create namespace ${NAMESPACE} --dry-run -o yaml | kubectl apply -f -
                helm upgrade --install release-name .
            """
        }
    }
}
Currently I am trying to add a version number or build number to the Docker image I deploy on the Kubernetes cluster. Previously I was working only with the :latest tag, but with the latest tag I ran into problems pulling the image from the Docker Hub registry. So now I want to use the build number with my Docker image, like <image-name>:{build-number}.
Application Structure
In Kubernetes I am using a Deployment and a Service. I define the image in my deployment file like the following:
containers:
  - name: test-kube-deployment-container
    image: samplekubernetes020/testimage:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
Here, instead of the latest tag, I want to put the build number in my deployment YAML.
Can I use an environment variable to hold the build number, so I can reference the image like <image-name>:${buildnumber}?
If I use an environment variable for this, how do I generate the build number and get it into that variable?
Updates On Image Version Implementation
My modified Jenkinsfile contains a step like the following to assign the version number to the image, but I am still not getting the updated result after changes to the repository.
I created a step like the following in my Jenkinsfile:
stage('imagebuild') {
    steps {
        sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes /var/lib/jenkins/workspace/jpipeline/pipeline'
        sh 'docker login --username=my-username --password=my-password'
        sh "docker tag spacestudymilletech010/spacestudykubernetes:latest spacestudymilletech010/spacestudykubernetes:${VERSION}"
        sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
    }
}
And my deployment YAML file contains the following:
containers:
  - name: test-kube-deployment-container
    image: spacestudymilletech010/spacestudykubernetes:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
Confusions
NB: When I check the Docker Hub repository, it shows a fresh push every time.
So my confusions are:
Is there any problem with pulling the latest image in my deployment.yaml file?
Or is the problem in how I tag and push the image on the machine where I build it?
The standard way, or at least the way that has worked for most of us, is to create versioned (tagged) images. For example:
samplekubernetes020/testimage:1
samplekubernetes020/testimage:2
samplekubernetes020/testimage:3
...
...
Now I will try to answer your actual question, which is: how do I update the image in my deployment when the image tag changes?
Enter Solution
When you compile and build a new image from the latest version of the code, tag it with an incremental, unique version. This tag can be anything unique: a build number, etc.
Then push this tagged image to the Docker registry.
Once the image is uploaded, you can use kubectl or the Kubernetes API to update the deployment with the new container image:
kubectl set image deployment/my-deployment test-kube-deployment-container=samplekubernetes020/testimage:1 --record
The above steps generally take place in your CI pipeline, where you store the image version (or image:version) in an environment variable.
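As a rough shell sketch of those steps (image and deployment names taken from your YAML; BUILD_NUMBER stands in for whatever unique value your CI provides):
# build and push a uniquely tagged image
docker build -t samplekubernetes020/testimage:${BUILD_NUMBER} .
docker push samplekubernetes020/testimage:${BUILD_NUMBER}
# point the existing deployment at the new tag
kubectl set image deployment/my-deployment test-kube-deployment-container=samplekubernetes020/testimage:${BUILD_NUMBER} --record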
Update Post comment
Since you are using Jenkins, you can get the current build number, commit ID, and many other variables in the Jenkinsfile itself, as Jenkins injects these values at build time. This works for me; it is just a reference.
environment {
    NAME = "myapp"
    VERSION = "${env.BUILD_ID}-${env.GIT_COMMIT}"
    IMAGE = "${NAME}:${VERSION}"
}
stages {
    stage('Build') {
        steps {
            echo "Running ${VERSION} on ${env.JENKINS_URL}"
            git branch: "${BRANCH}", .....
            echo "for branch ${env.BRANCH_NAME}"
            sh "docker build -t ${NAME} ."
            sh "docker tag ${NAME}:latest ${IMAGE_REPO}/${NAME}:${VERSION}"
        }
    }
}
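The push itself is then just a matter of using the composed tag; roughly, assuming the variables above are exposed to the shell and IMAGE_REPO points at your registry:
# push the uniquely versioned tag built above (IMAGE_REPO is an assumption, not defined in the snippet)
docker push ${IMAGE_REPO}/${NAME}:${VERSION}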
This Jenkins pipeline approach worked for me.
I am using the Jenkins build number as the tag for the Docker image and pushing it to Docker Hub, then applying the YAML file to the Kubernetes cluster and updating the image in the deployment with the same tag.
A sample pipeline script snippet is below:
stage('Build Docker Image'){
    sh 'docker build -t {dockerId}/{projectName}:${BUILD_NUMBER} .'
}
stage('Push Docker Image'){
    withCredentials([string(credentialsId: 'DOKCER_HUB_PASSWORD', variable: 'DOKCER_HUB_PASSWORD')]) {
        sh "docker login -u {dockerId} -p ${DOKCER_HUB_PASSWORD}"
    }
    sh 'docker push {dockerId}/{projectName}:${BUILD_NUMBER}'
}
stage('Deploy To Kubernetes Cluster'){
    sh 'kubectl apply -f {yaml file name}.yaml'
    sh 'kubectl set image deployments/{deploymentName} {container name given in deployment yaml file}={dockerId}/{projectName}:${BUILD_NUMBER}'
}
Jenkins Version - 2.164.1
Jenkins Docker Plugin Version – 1.1.6
Docker Version - 18.09.3, build 774a1f4
Problem:
I have the below code in my Jenkins scripted pipeline. I have added my private Docker registry URL and credentials under Manage Jenkins --> Configure System, but the pipeline job fails at docker login.
Error from Jenkins: ERROR: docker login failed
Code:
stage('Build') {
    withDockerRegistry(credentialsId: 'docker-reg-credentails', url: 'http://registryhub:8081/nexus/') {
        image = docker.image('registryhub:8085/ubuntu-16:1')
        image.pull()
        docker.image('registryhub:8085/ubuntu-16:1').inside {
            sh 'cat /etc/issue'
        }
    }
}
Inside the stage, do something like below:
script {
    def server = Nexus.server 'docker-reg-credentails'
    def buildRegistry = [url: 'http://registryhub:8081/nexus/', credentialsId: 'docker-reg-credentails']
    def rtDocker = Nexus.docker server: server
    withDockerRegistry(registry: buildRegistry) {
        sh 'docker pull hello-world'
        sh 'docker tag hello-world:latest hello-world:latest2'
        rtDocker.addProperty("status", "stable")
        def buildInfo = rtDocker.push 'hello-world:latest', 'docker-local'
        // Step 4: publish the build-info to Nexus
        server.publishBuildInfo buildInfo
    }
}
If you run docker login explicitly in sh, you can get more information about the cause of the failure. The most probable cause is access being denied when connecting to the Docker daemon, so you need to add the Jenkins account to the docker group, e.g.:
sudo usermod -a -G docker jenkins
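As a follow-up (assuming a Linux agent with a systemd-managed Jenkins service; adjust to however your agent is started), the group change only takes effect for new sessions:
# verify the membership, then restart Jenkins so the new group is picked up
id -nG jenkins
sudo systemctl restart jenkins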
The following is in my .gitlab-ci.yml:
stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build-image:
  image: docker:stable
  stage: build
  script:
    - docker build --no-cache -t repo/myimage:$CI_JOB_ID .
    - docker push repo/myimage:$CI_JOB_ID
I've set up DOCKER_AUTH_CONFIG in GitLab like the following (to cover all possible ways the registry could be matched):
{
  "auths": {
    "https://index.docker.io": {
      "auth": "...."
    },
    "https://index.docker.io/v1/": {
      "auth": "..."
    },
    "https://index.docker.io/v2/": {
      "auth": "..."
    },
    "index.docker.io/v1/": {
      "auth": "..."
    },
    "index.docker.io/v2/": {
      "auth": "..."
    },
    "docker.io/repo/myimage": {
      "auth": "..."
    }
  }
}
However, whenever I try to push the image, the following error occurs:
$ docker push repo/myimage:$CI_JOB_ID
The push refers to repository [docker.io/repo/myimage]
ce6466f43b11: Preparing
719d45669b35: Preparing
3b10514a95be: Preparing
63dcf81c7ca7: Waiting
3b10514a95be: Waiting
denied: requested access to the resource is denied
ERROR: Job failed: exit code 1
It works when I use docker login with a username/password. Can anyone please show me what I did wrong in getting this to work with DOCKER_AUTH_CONFIG?
To use the content of DOCKER_AUTH_CONFIG for the docker login, just store it in $HOME/.docker/config.json, e.g. as follows:
before_script:
  - mkdir -p $HOME/.docker
  - echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
Ref: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#option-3-use-docker_auth_config
This lets you use a single configuration source both to pull images for build containers and to access the registry from inside the build.
Note: this replaces running docker login.
see also: https://docs.docker.com/engine/reference/commandline/login/#privileged-user-requirement
DOCKER_AUTH_CONFIG works when the runner is pulling the image for your job from your private repository. Here is the function that uses that config variable; it is only used by the getDockerImage function.
So whenever you need to push your image inside your job's script section, you need a docker login step before that.
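For example, the push part of the job could look roughly like this, where DOCKER_USER and DOCKER_PASS are assumed to be CI/CD variables you define yourself:
# log in explicitly before pushing; DOCKER_AUTH_CONFIG alone only covers pulling the job's image
echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
docker push repo/myimage:$CI_JOB_ID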
The documentation describing DOCKER_AUTH_CONFIG doesn't show any example with several credentials. The documented syntax is:
{
  "auths": {
    "registry.hub.docker.com": {
      "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" // base64-encoded username:password
    }
  }
}
Still, as you said, you can use before_script at the beginning of the gitlab-ci.yml file, or inside each job if you need several authentications:
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin
Here $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD would be secret variables.
And after each script or at the beginning of the whole file:
after_script:
  - docker logout
I wrote an answer about using GitLab CI and Docker to build Docker images:
How to build, push and pull multiple docker containers with gitlab ci?
Using --password-stdin and secrets instead of a plain -p <password> is a better alternative.
EDIT: The example syntax in my post is taken from this awesome answer from @Ruwanka Madhushan: Can't Access Private MySQL Docker Image From Gitlab CI. You should go see it for yourself.
SECOND EDIT: You should protect your secret variables only if you want to make them available to pipelines running on protected branches or tags. If you didn't set up any protected branch or tag, do not use protected variables.
From the doc: Variables could be protected. Whenever a variable is protected, it would only be securely passed to pipelines running on the protected branches or protected tags. The other pipelines would not get any protected variables.
https://docs.gitlab.com/ee/ci/variables/#protected-variables
In my case the issue was following these docs blindly:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry
They tell you to do the following if you need to manually generate the token:
# The use of "-n" - prevents encoding a newline in the password.
echo -n "my_username:my_password" | base64
# Example output to copy
bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
My password had spaces in it, so...
# Correct encoding
> echo "username:password with spaces in it" | base64
dXNlcm5hbWU6cGFzc3dvcmQgd2l0aCBzcGFjZXMgaW4gaXQK
# Encoding as in docs
> echo -n "username:password with spaces in it" | base64
dXNlcm5hbWU6cGFzc3dvcmQgd2l0aCBzcGFjZXMgaW4gaXQ=
If you, like us, have a lot of pipelines and don't want to edit every GitLab CI config, you can also configure this once per runner.
In /etc/gitlab-runner/config.toml add a pre_build_script:
[[runners]]
environment = ["DOCKER_AUTH_CONFIG={\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"YOUR TOKEN\"}}}"]
pre_build_script = "mkdir ~/.docker -p && echo $DOCKER_AUTH_CONFIG > ~/.docker/config.json"
A little more information can be found in Gitlab's docs.
Since I wanted to access my Gitlab container registry from a Gitlab pipeline, I really wanted to use the $CI_JOB_TOKEN. I've achieved this in a relatively clean and clear manner I think.
I've defined the following variables:
variables:
  DOCKER_AUTH: echo -n "gitlab-ci-token:$CI_JOB_TOKEN" | base64
  DOCKER_AUTH_CONFIG: echo {\"auths\":{\"gitlab.container.registry.url\":{\"auth\":\"$(eval $DOCKER_AUTH)\"}}}
And the following before_script, which evaluates the above-mentioned variables and creates the Docker config.json:
before_script:
  - mkdir -p $HOME/.docker
  - eval $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
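With that config.json in place, pushes to the project registry should work in the job script without an explicit docker login; a hypothetical usage with GitLab's predefined CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA variables:
# build and push, authenticated via the generated config.json
docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA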