I have created a Jenkins Credential (Secret File Type) and want to use this credential in my Ansible playbook.
Ansible is able to identify the Jenkins credential when the playbook runs on localhost.
But, when the playbook runs on the remote host, it is unable to identify the value of my Jenkins credential.
My Jenkinsfile stage for Ansible looks like this:
stage('Build Image') {
    steps {
        withCredentials([file(credentialsId: 'private.key', variable: 'PRIVATE_KEY')]) {
            runAnsible('playbook.yml', [
                env: dev,
                key: PRIVATE_KEY
                .
                .
            ])
        }
    }
}
Error:
"No such file or directory"
Edit: runAnsible is part of our jenkins-shared-libs. Basically, it boils down to calling
ansible-playbook playbook.yaml --extra-vars '.....'
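For context, a minimal sketch of what such a shared-library step might boil down to (the real runAnsible source is not shown here, and the flag handling is an assumption):

// vars/runAnsible.groovy -- hypothetical sketch, not the actual shared-library source
def call(String playbook, Map vars) {
    // Turn the map into key=value pairs for --extra-vars and shell out to ansible-playbook
    def extraVars = vars.collect { k, v -> "${k}=${v}" }.join(' ')
    sh "ansible-playbook ${playbook} --extra-vars '${extraVars}'"
}

Note that PRIVATE_KEY resolves to a temporary file on the agent where the withCredentials step runs, so the value passed through --extra-vars is a local path on that agent.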
Related question:
I am running a CI pipeline for a repo in Jenkins using declarative pipeline.
The repo now contains its own Dockerfile at .docker/php/Dockerfile, which I want to use to build a container and run the pipeline in.
Normally, I get the code in the container using a volume in docker-compose.yaml:
volumes:
- .:/home/wwwroot/comms-and-push
...So I set up my Jenkinsfile like this:
pipeline {
    agent {
        dockerfile {
            dir ".docker/php"
            args "-v .:/home/wwwroot/comms-and-push"
        }
    }
    stages {
        ...
However, this results in an error when running the pipeline:
Error: docker: Error response from daemon: create .: volume name is too short, names should be at least two alphanumeric characters.
I cannot specify the full path because I don't know it in this context -- it's running in some Jenkins workspace.
What I've tried so far:
Using the WORKSPACE variable
args "-v ${WORKSPACE}:/home/wwwroot/comms-and-push"
results in error:
No such property: WORKSPACE for class: groovy.lang.Binding
Setting an environment variable before the pipeline:
environment {
    def WORKSPACE = pwd()
}
pipeline {
    agent {
        dockerfile {
            dir '.docker/php'
            args "-v ${env.WORKSPACE}/:/home/wwwroot/comms-and-push"
        }
    }
    ...
results in ${env.WORKSPACE} resolving to null.
The standard Jenkins Docker integration already knows how to mount the workspace directory into a container. The workspace has the same filesystem path inside the container as it does directly on the worker outside the container, so you don't need to supply a docker run -v argument yourself.
agent {
    dockerfile {
        dir ".docker/php"
        // No args
    }
}
stages {
    stage('Diagnostics') {
        steps {
            sh "pwd"  // Prints the WORKSPACE
            sh "ls"   // Shows the build tree contents
            sh "ls /" // Shows the image's root directory
        }
    }
}
If you look at the extended Jenkins logs, you'll see that it provides the -v option itself.
I would suggest using the docker.image(...).inside(...) method; in your case it would be something like docker.image(...).inside("-v /home/wwwroot/comms-and-push:/home/wwwroot/comms-and-push:rw") { ... }.
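As a minimal sketch in a scripted pipeline (the image name my-image is a placeholder, and the explicit mount is only needed if you really want the code at that exact path rather than in the workspace):

node {
    checkout scm
    // Build the image from the repo's Dockerfile; 'my-image' is a placeholder name
    def img = docker.build('my-image', '-f .docker/php/Dockerfile .')
    // Run steps inside the container, bind-mounting the workspace to the path the app expects
    img.inside("-v ${env.WORKSPACE}:/home/wwwroot/comms-and-push:rw") {
        sh 'ls /home/wwwroot/comms-and-push'
    }
}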
When I try to run a docker container inside a Jenkins pipeline it fails (log). Jenkins is local. Since there is a
Jenkins does not seem to be running inside a container
line in the console output, I assume it might be necessary to run a containerized Jenkins?
Dockerfile
FROM ubuntu
ENV customEnvVar="test."
Jenkinsfile
#!groovy
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'echo customEnvVar = $customEnvVar'
            }
        }
    }
}
Jenkins Version - 2.164.1
Jenkins Docker Plugin Version – 1.1.6
Docker Version - 18.09.3, build 774a1f4
Problem:
I have the below code in my Jenkins scripted pipeline. I have added my private Docker registry URL and credentials under Manage Jenkins --> Configure System. But the pipeline job is failing at docker login.
Error from Jenkins - ERROR: docker login failed
Code:
stage('Build') {
    withDockerRegistry(credentialsId: 'docker-reg-credentails', url: 'http://registryhub:8081/nexus/') {
        image = docker.image('registryhub:8085/ubuntu-16:1')
        image.pull()
        docker.image('registryhub:8085/ubuntu-16:1').inside {
            sh 'cat /etc/issue'
        }
    }
}
Inside the stage, do something like below:
script {
    def server = Nexus.server 'docker-reg-credentails'
    def buildRegistry = [ url: 'http://registryhub:8081/nexus/', credentialsId: 'docker-reg-credentails' ]
    def rtDocker = Nexus.docker server: server
    withDockerRegistry(registry: buildRegistry) {
        sh 'docker pull hello-world'
        sh 'docker tag hello-world:latest hello-world:latest2'
        rtDocker.addProperty("status", "stable")
        def buildInfo = rtDocker.push 'hello-world:latest', 'docker-local'
        // Step 4: Publish the build-info to Nexus
        server.publishBuildInfo buildInfo
    }
}
If you try to run docker login explicitly in an sh step you can get more information about the cause of the failure. The most probable cause is access being denied when connecting to the docker daemon, so you need to add the Jenkins account to the docker group, e.g.:
sudo usermod -a -G docker jenkins
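For example, a hedged debugging sketch (it assumes 'docker-reg-credentails' is a username/password credential, and the registry host registryhub:8085 is taken from the image reference in the question):

stage('Debug docker login') {
    // Bind the same registry credential and run docker login by hand to surface the real error
    withCredentials([usernamePassword(credentialsId: 'docker-reg-credentails',
                                      usernameVariable: 'REG_USER',
                                      passwordVariable: 'REG_PASS')]) {
        sh 'echo "$REG_PASS" | docker login --username "$REG_USER" --password-stdin registryhub:8085'
    }
}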
Why is docker not found when I use docker as an agent in a Jenkins pipeline?
+ docker inspect -f . node:7-alpine
/var/jenkins_home/workspace/poobao-aws-services#tmp/durable-13f890b0/script.sh: 2: /var/jenkins_home/workspace/project-name#tmp/durable-13f890b0/script.sh: docker: not found
In Global Tool Configuration, I have docker set to install automatically.
I have docker set to install automatically in Global Tool Configuration, and I use it from a declarative pipeline.
My Jenkinsfile then has this initialization stage (amended from here):
stage('Install dependencies') {
    steps {
        script {
            def dockerTool = tool name: 'docker', type: 'org.jenkinsci.plugins.docker.commons.tools.DockerTool'
            withEnv(["DOCKER=${dockerTool}/bin"]) {
                // stages
                // here we can trigger: sh "sudo ${DOCKER}/docker ..."
            }
        }
    }
}
When the job is built, the docker tool is then installed automatically...
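As a hedged usage sketch, a later step inside that withEnv block could invoke the installed binary via the environment variable (my-image is a placeholder tag, and whether sudo is needed depends on your agent setup):

// Inside the withEnv(["DOCKER=${dockerTool}/bin"]) block above.
// Single quotes so the shell, not Groovy, expands $DOCKER.
sh 'sudo $DOCKER/docker version'
sh 'sudo $DOCKER/docker build -t my-image .'   // 'my-image' is a placeholder tag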
I'm building a docker image with Spotify's Maven plugin and trying to push it to ECR afterwards.
This happens using the CloudBees Build and Publish plugin, after managing to log in with the Amazon ECR plugin.
This works like a charm on the Jenkins master.
But on the slave I get:
no basic auth credentials
Build step 'Docker Build and Publish' marked build as failure
Is pushing from slaves out of scope for the ECR plugin, or did I miss something?
The other answers here didn't work for my pipeline. I found this solution to work, and it is also clean:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'myCreds', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    sh '''
        aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY}
        ..
    '''
}
This solution doesn't require AWS CLI v2; get-login-password is also available in recent v1 releases.
You might be falling foul of the bug reported in the ECR plugin here: https://issues.jenkins-ci.org/browse/JENKINS-44143
Various people in that thread are describing slightly different symptoms, but the common theme is that docker was failing to use the auth details that had been correctly generated by the ECR plugin.
In my case, I found this was because the ECR plugin was saving to one docker config file and the docker-commons plugin (which handles the actual work of the docker API) was reading from another. Docker changed its config format and location in an earlier version, which caused the conflict.
The plugin author offers a workaround, which is essentially to just nuke both config files first:
node {
    // cleanup current user docker credentials
    sh 'rm ~/.dockercfg || true'
    sh 'rm ~/.docker/config.json || true'
    // configure registry
    docker.withRegistry('https://ID.ecr.eu-west-1.amazonaws.com', 'ecr:eu-west-1:86c8f5ec-1ce1-4e94-80c2-18e23bbd724a') {
        // build image
        def customImage = docker.build("my-image:${env.BUILD_ID}")
        // push image
        customImage.push()
    }
}
You might want to try that purely as a debugging step and quick fix (if it works you can be confident this bug is your issue).
My permanent fix was simply to create the new-style docker config manually with a sensible default, and then set the DOCKER_CONFIG environment variable to point to it.
I did this in the Dockerfile that creates my Jenkins instance, like so:
RUN mkdir -p $JENKINS_HOME/.docker/ && \
echo '{"auths":{}}' > $JENKINS_HOME/.docker/config.json
ENV DOCKER_CONFIG $JENKINS_HOME/.docker
You don't have credentials on the slave; that is the problem. I fix this by injecting the credentials into every pipeline that runs on the on-demand slaves.
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWS_EC2_key', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    sh "aws configure set aws_access_key_id ${AWS_ACCESS_KEY_ID}"
    sh "aws configure set aws_secret_access_key ${AWS_SECRET_ACCESS_KEY}"
    sh '$(aws ecr get-login --no-include-email --region eu-central-1)'
    sh "docker push ${your_ec2_repo}/${di_name}:image_name${newVersion}"
}
Of course, you need to have the aws-cli installed on the slave.