Jenkins Ansible plugin can't find executable

I'm running Jenkins on Amazon EC2: the master in a Docker container and an agent on a separate box. My pipeline runs an Ansible playbook using the Jenkins Ansible plugin.
I had to install a new version of Ansible on the agent. I installed Ansible from git using the Running from Source instructions, installing it to /home/ec2-user/ansible. If I ssh to the agent and run which ansible I get ~/ansible/bin/ansible, as expected. I entered /home/ec2-user/ansible/bin as the 'Ansible executables directory' for the new installation on the Manage Jenkins > Global Tool Configuration page.
When I run my Jenkins pipeline, however, I get this:
Running on docker-agent-1 in /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline
[Pipeline] {
[Pipeline] pwd
[Pipeline] stage
[Pipeline] { (Download source and capture commit ID)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ which ansible
which: no ansible in (/usr/local/bin:/bin:/usr/bin)
It says it's running on docker-agent-1 (which is the name of my agent), and I can see Ansible if I ssh there. Why can't Jenkins find the ansible executable?
UPDATE: After adding PATH as an environment variable, it can find Ansible, but now something else breaks. Here's the new output:
Running on docker-agent-1 in /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline
[Pipeline] {
[Pipeline] pwd
[Pipeline] stage
[Pipeline] { (Download source and capture commit ID)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ which ansible
/home/ec2-user/ansible/bin/ansible
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ ansible --version
ansible 2.2.0 (devel 1975a545bd) last updated 2016/09/20 16:19:06 (GMT +000)
lib/ansible/modules/core: (detached HEAD 70d4ff8e38) last updated 2016/09/20 16:19:08 (GMT +000)
lib/ansible/modules/extras: (detached HEAD db7a3f48e1) last updated 2016/09/20 16:19:09 (GMT +000)
config file = /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline/ansible.cfg
configured module search path = Default w/o overrides
[Pipeline] git
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git@bitbucket.org:planetgroup/planethealthcareportal.git # timeout=10
Fetching upstream changes from git@bitbucket.org:planetgroup/planethealthcareportal.git
> git --version # timeout=10
using GIT_SSH to set credentials Deployment key for Planet Healthcare Portal
> git fetch --tags --progress git@bitbucket.org:planetgroup/planethealthcareportal.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/develop^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/develop^{commit} # timeout=10
Checking out Revision e69608a15c9d433e2a22824c7e607048332a4160 (refs/remotes/origin/develop)
> git config core.sparsecheckout # timeout=10
> git checkout -f e69608a15c9d433e2a22824c7e607048332a4160
> git branch -a -v --no-abbrev # timeout=10
> git branch -D develop # timeout=10
> git checkout -b develop e69608a15c9d433e2a22824c7e607048332a4160
> git rev-list e69608a15c9d433e2a22824c7e607048332a4160 # timeout=10
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ git rev-parse --verify HEAD
[Pipeline] readFile
[Pipeline] echo
Current commit ID: e69608a
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Copy application.yml to environment)
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ sudo cp **** config/application.yml
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker build -t planethealthcare/portal_app .
Sending build context to Docker daemon 557.1 kB
Sending build context to Docker daemon 1.114 MB
Sending build context to Docker daemon 1.671 MB
Sending build context to Docker daemon 2.228 MB
Sending build context to Docker daemon 2.785 MB
Sending build context to Docker daemon 3.342 MB
Sending build context to Docker daemon 3.398 MB
Step 1 : FROM ruby:2.3
---> 7b66156f376c
Step 2 : MAINTAINER David Ham <dham@uxfactory.com>
---> Using cache
---> 47f6f577f049
Step 3 : RUN apt-get update && apt-get install -y build-essential curl gstreamer1.0-plugins-base gstreamer1.0-tools gstreamer1.0-x libqt5webkit5-dev qt5-default xvfb && apt-get clean && rm -rf /var/lib/apt/lists/* && mkdir -p /app
---> Using cache
---> 38c1313e574d
Step 4 : WORKDIR /app
---> Using cache
---> 75a023d99fce
Step 5 : COPY Gemfile Gemfile.lock ./
---> Using cache
---> c39c81496a6b
Step 6 : ENV QMAKE /usr/bin/qmake
---> Using cache
---> 3226bf5f4e63
Step 7 : RUN bundle install --retry 20
---> Using cache
---> 91cb9908d53a
Step 8 : COPY . ./
---> 7330a8f5ba7c
Removing intermediate container bd55b7deddaf
Step 9 : EXPOSE 3000
---> Running in 76e6418e2b3f
---> 81427ffb31f5
Removing intermediate container 76e6418e2b3f
Step 10 : CMD bundle exec rails server
---> Running in c2a90c3c59f6
---> 15ab02b3ab8d
Removing intermediate container c2a90c3c59f6
Successfully built 15ab02b3ab8d
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run test suite)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=phc_portal_test postgres:9.5
[Pipeline] dockerFingerprintRun
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker inspect -f . planethealthcare/portal_app
.
[Pipeline] withDockerContainer
$ docker run -t -d -u 500:500 --link 85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e:postgres -p 3000:3000 -e RAILS_ENV=test -w /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline -v /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline:/home/ec2-user/jenkins/workspace/planet-healthcare-pipeline:rw -v /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp:/home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat planethealthcare/portal_app
[Pipeline] {
[Pipeline] echo
running tests...
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ rails db:migrate
/home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp/durable-32785ba4/script.sh: 2: /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp/durable-32785ba4/script.sh: rails: not found
[Pipeline] }
$ docker stop 3acf37726ce1061d2e0f6e8d0cec882c707b42e710916636b17aaece4f516f2d
$ docker rm -f 3acf37726ce1061d2e0f6e8d0cec882c707b42e710916636b17aaece4f516f2d
[Pipeline] // withDockerContainer
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker stop 85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
+ docker rm -f 85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
[Pipeline] }
[Pipeline] // stage
[Pipeline] mail
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
And here's the pipeline:
node('docker') {
    currentBuild.result = "SUCCESS"
    try {
        def git_commit = ""
        def workspace = pwd()
        def APPLICATION_YML
        def image
        stage("Download source and capture commit ID") {
            sh "which ansible"
            sh "ansible --version"
            // Download source
            git branch: 'develop', credentialsId: 'b96345a1-543c-4ccd-9a86-deca7203625c', url: 'git@bitbucket.org:planetgroup/planethealthcareportal.git'
            // Get the commit ID
            sh 'git rev-parse --verify HEAD > GIT_COMMIT'
            git_commit = readFile('GIT_COMMIT').take(7)
            echo "Current commit ID: ${git_commit}"
        }
        stage("Copy application.yml to environment") {
            // write the application.yml to a file
            withCredentials([[$class: 'FileBinding', credentialsId: '67dbd2e7-008f-4463-89a6-9645060e8ec8', variable: 'APPLICATION_YML']]) {
                sh "sudo cp ${env.APPLICATION_YML} config/application.yml"
            }
        }
        stage("Build image") {
            image = docker.build "planethealthcare/portal_app"
        }
        stage("Run test suite") {
            // start postgres
            def postgres95 = docker.image('postgres:9.5')
            postgres95.withRun("-p 5432:5432 -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=phc_portal_test") { postgres ->
                image.inside("--link ${postgres.id}:postgres -p 3000:3000 -e RAILS_ENV=test") {
                    echo "running tests..."
                    sh "rails db:migrate"
                    sh "rspec --tag ~pending"
                    sh "cucumber"
                }
            }
        }
        stage("Push to ECR registry") {
            docker.withRegistry('https://0000000000.dkr.ecr.us-east-1.amazonaws.com', 'ecr:dham') {
                image.push "${git_commit}"
                image.push 'latest'
            }
        }
        stage("Deploy app") {
            // run the playbook
            ansiblePlaybook([
                colorized: true,
                credentialsId: 'planet-healthcare',
                installation: 'ansible-2-2-0',
                inventory: 'staging',
                playbook: 'deploy.yml',
                extras: "--extra-vars 'app_build_id=${git_commit}'"
            ])
        }
    }
    catch (err) {
        currentBuild.result = "FAILURE"
        mail body: "project build error: ${err}\n\n\n ${currentBuild.description}",
             subject: 'project build failed',
             to: 'me@example.com'
        throw err
    }
}
It's failing in the "Run test suite" stage: it can't find rails to run rails db:migrate, even though I know it's installed in the container.
Why would setting PATH on the agent affect a script that happens inside a Docker container?

Do you execute which ansible in your script? It only searches the directories listed in PATH, and it seems /home/ec2-user/ansible/bin is not in /usr/local/bin:/bin:/usr/bin (from your output).
You can go to the agent node's settings in Jenkins and add a PATH environment variable with the value $PATH:/home/ec2-user/ansible/bin.
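If you would rather not change the node configuration globally, the same effect can be scoped inside the pipeline with withEnv. A minimal sketch (only the Ansible path comes from the question; the rest is illustrative):
node('docker') {
    stage("Download source and capture commit ID") {
        // Prepend the Ansible install directory to PATH, but only for the
        // steps that actually need it (the PATH+XYZ=... form prepends to PATH).
        withEnv(['PATH+ANSIBLE=/home/ec2-user/ansible/bin']) {
            sh "which ansible"
            sh "ansible --version"
        }
        // ... remaining steps unchanged ...
    }
}
Scoping the override this way also keeps the agent-level PATH from being forwarded into containers started with image.inside, which is one plausible reason the rails: not found failure shows up in the update once PATH is set globally on the node.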

Related

docker trust sign not working inside jenkins pipeline

I am trying to sign images that are built in my Jenkins pipeline. I generated and added a signer with the following commands after logging in to the Jenkins server manually; the details about the keys are shown in the logs below.
I ran these commands inside the /var/jenkins_home folder.
Step 1 (Create a public and private key)
docker trust key generate jeff // This created jeff.pub in the same directory.
Step 2 (Add a signer)
docker trust signer add --key jeff.pub jeff jchand3/backend-test // This adds the signer to the image
Step 3
sh "export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=password" // To avoid passphrase prompt
After the above steps, if I run this command manually after logging in to the Jenkins server, it works fine. But when I run the same command from the pipeline, I get an error.
docker trust sign jchand3/backend-test:latest
I am running Jenkins on docker container
FROM jenkins/jenkins:lts
USER root
RUN mkdir -p /tmp/download && \
curl -L https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz | tar -xz -C /tmp/download && \
rm -rf /tmp/download/docker/dockerd && \
mv /tmp/download/docker/docker* /usr/local/bin/ && \
rm -rf /tmp/download && \
groupadd -g 999 docker && \
usermod -aG staff,docker,daemon jenkins
RUN gpasswd -a jenkins staff
#RUN chown jenkins:jenkins /var/run/docker.sock
USER jenkins
Pipeline code
pipeline {
    agent any
    environment {
        imageName = "jchand3/backend-test"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    stages {
        stage('Git checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/jitenderchand1/node-app.git'
            }
        }
        stage('Docker build & publish') {
            steps {
                sh "docker build -t $imageName:$BUILD_NUMBER ."
                script {
                    docker.withRegistry('', registryCredential) {
                        sh "docker trust inspect $imageName:$BUILD_NUMBER"
                        sh "ls -l ~/.docker/trust/private"
                        sh "export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=password"
                        sh "docker trust sign $imageName:$BUILD_NUMBER"
                    }
                }
            }
        }
    }
}
Started by user jitender chand
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/declarative-pipeline-backend
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Git checkout)
[Pipeline] git
The recommended git tool is: NONE
No credentials specified
> git rev-parse --resolve-git-dir /var/jenkins_home/workspace/declarative-pipeline-backend/.git # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/jitenderchand1/node-app.git # timeout=10
Fetching upstream changes from https://github.com/jitenderchand1/node-app.git
> git --version # timeout=10
> git --version # 'git version 2.30.2'
> git fetch --tags --force --progress -- https://github.com/jitenderchand1/node-app.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse refs/remotes/origin/main^{commit} # timeout=10
Checking out Revision cfe15aaa9e25b6d78b4486cde740fea4e93a3ebd (refs/remotes/origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f cfe15aaa9e25b6d78b4486cde740fea4e93a3ebd # timeout=10
> git branch -a -v --no-abbrev # timeout=10
> git branch -D main # timeout=10
> git checkout -b main cfe15aaa9e25b6d78b4486cde740fea4e93a3ebd # timeout=10
Commit message: "Update README.md"
> git rev-list --no-walk cfe15aaa9e25b6d78b4486cde740fea4e93a3ebd # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Docker build & publish)
[Pipeline] sh
+ docker build -t jchand3/backend-test:83 .
Sending build context to Docker daemon 84.99kB
Step 1/9 : FROM node
---> 57fb6bbb2edf
Step 2/9 : WORKDIR /app
---> Using cache
---> 84213de7b60d
Step 3/9 : COPY package.json .
---> Using cache
---> 146e6ea31489
Step 4/9 : RUN npm install
---> Using cache
---> 243d25f3e1c6
Step 5/9 : COPY . .
---> Using cache
---> 46c0b5241727
Step 6/9 : EXPOSE 80
---> Using cache
---> 9290ee6aebcc
Step 7/9 : ENV MONGODB_USERNAME=root
---> Using cache
---> 04a667a24acd
Step 8/9 : ENV MONGODB_PASSWORD=secret
---> Using cache
---> 206e15f7f85f
Step 9/9 : CMD ["npm", "start"]
---> Using cache
---> cb2de8b73a2b
Successfully built cb2de8b73a2b
Successfully tagged jchand3/backend-test:83
[Pipeline] script
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
Using the existing docker config file. Removing blacklisted property: auths
$ docker login -u jchand3 -p ******** https://index.docker.io/v1/
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
[Pipeline] {
[Pipeline] sh
+ docker trust inspect jchand3/backend-test:83
[
{
"Name": "jchand3/backend-test:83",
"SignedTags": [],
"Signers": [
{
"Name": "jeff",
"Keys": [
{
"ID": "33ef31f00726af8e2e09ba75e6c56f2395c35813bd6b2f5533683865dfe1f108"
}
]
}
],
"AdminstrativeKeys": [
{
"Name": "Root",
"Keys": [
{
"ID": "a1b34513aaf27d8d6b92e2864833432c562cbcbb4913919d79c70708d4a1802a"
}
]
},
{
"Name": "Repository",
"Keys": [
{
"ID": "98f50403950134193ecbb3585c96dba1bb74332732156ef290211d4940719770"
}
]
}
]
}
]
[Pipeline] sh
+ ls -l /var/jenkins_home/.docker/trust/private
total 12
-rw------- 1 jenkins jenkins 416 Aug 23 07:30 33ef31f00726af8e2e09ba75e6c56f2395c35813bd6b2f5533683865dfe1f108.key
-rw------- 1 jenkins jenkins 455 Aug 23 07:31 98f50403950134193ecbb3585c96dba1bb74332732156ef290211d4940719770.key
-rw------- 1 jenkins jenkins 416 Aug 23 07:31 a470d6ea202282cee7f141628ba3adc071b6125663d2c6ec75b5f0fa80e6d3b9.key
[Pipeline] sh
+ export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=password
[Pipeline] sh
+ docker trust sign jchand3/backend-test:83
Signing and pushing trust data for local image jchand3/backend-test:83, may overwrite remote trust data
The push refers to repository [docker.io/jchand3/backend-test]
12782e858bc1: Preparing
105c63a15d50: Preparing
267f7e4e00b8: Preparing
0f43320c4359: Preparing
14d2bb1782b2: Preparing
804ccdfedc4e: Preparing
6645aae7d038: Preparing
82d42de1648b: Preparing
54acb5a6fa0b: Preparing
8d51c618126f: Preparing
9ff6e4d46744: Preparing
a89d1d47b5a1: Preparing
655ed1b7a428: Preparing
804ccdfedc4e: Waiting
6645aae7d038: Waiting
82d42de1648b: Waiting
54acb5a6fa0b: Waiting
8d51c618126f: Waiting
a89d1d47b5a1: Waiting
655ed1b7a428: Waiting
9ff6e4d46744: Waiting
12782e858bc1: Layer already exists
267f7e4e00b8: Layer already exists
0f43320c4359: Layer already exists
14d2bb1782b2: Layer already exists
105c63a15d50: Layer already exists
804ccdfedc4e: Layer already exists
6645aae7d038: Layer already exists
82d42de1648b: Layer already exists
54acb5a6fa0b: Layer already exists
8d51c618126f: Layer already exists
9ff6e4d46744: Layer already exists
a89d1d47b5a1: Layer already exists
655ed1b7a428: Layer already exists
83: digest: sha256:8bac5b293f90c71fcfbceb5ae47d032e2b710150ad26aeec91157b290c796e8b size: 3049
Signing and pushing trust metadata
failed to sign docker.io/jchand3/backend-test:83: no valid signing keys for delegation roles
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
Edit:
I have found that other people have reported the same issue and identified the root cause. Can someone guide me on how to implement the solution in the pipeline?
https://groups.google.com/g/jenkinsci-users/c/qYFBEd0M4pU
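For what it's worth, here is a sketch of how a fix could look in the pipeline, assuming the root cause is that each sh step starts its own shell, so the export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE from one sh step is already gone when the next sh step runs docker trust sign (the image name, credential ID and passphrase are the ones from the question):
stage('Docker build & publish') {
    steps {
        sh "docker build -t $imageName:$BUILD_NUMBER ."
        script {
            docker.withRegistry('', registryCredential) {
                // Keep the passphrase and the sign command in ONE sh step so the
                // variable is still set when docker trust sign needs it.
                // (A real setup would read the passphrase from Jenkins credentials
                // rather than hard-coding it.)
                sh """
                    export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=password
                    docker trust sign $imageName:$BUILD_NUMBER
                """
            }
        }
    }
}
An equivalent approach is to wrap the sh steps in withEnv(['DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE=password']) { ... }, which makes the variable visible to every step inside the block.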

./jmeter: not found error when running Jmeter on Jenkins scripted pipeline

I have a Jenkins pipeline for a .NET Core REST API, and I am getting an error on the command that executes the JMeter tests:
[Pipeline] { (Performance Test)
[Pipeline] sh
+ docker exec 884627942e26 bash
[Pipeline] sh
+ /bin/sh -c cd /opt/apache-jmeter-5.4.1/bin
[Pipeline] sh
+ /bin/sh -c ./jmeter -n -t /home/getaccountperftest.jmx -l /home/golide/Reports/LoadTestReport.csv -e -o /home/golide/Reports/PerfHtmlReport
-n: 1: -n: ./jmeter: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Performance Test Report)
Stage "Performance Test Report" skipped due to earlier failure(s)
I have JMeter running as a Docker container on the server, as per this guide (Jmeter On Linux), and I am able to extract the reports, but the same command fails when I run it in the Jenkins context:
/bin/sh -c ./jmeter -n -t /home/getaccountperftest.jmx -l /home/golide/Reports/LoadTestReport.csv -e -o /home/golide/Reports/PerfHtmlReport
This is my pipeline :
pipeline {
    agent any
    triggers {
        githubPush()
    }
    environment {
        NAME = "cassavagateway"
        REGISTRYUSERNAME = "golide"
        WORKSPACE = "/var/lib/jenkins/workspace/OnlineRemit_main"
        VERSION = "${env.BUILD_ID}-${env.GIT_COMMIT}"
        IMAGE = "${NAME}:${VERSION}"
    }
    stages {
        .....
        .....
        stage ("Publish Test Report") {
            steps {
                publishHTML target: [
                    allowMissing: false,
                    alwaysLinkToLastBuild: true,
                    keepAll: true,
                    reportDir: '/var/lib/jenkins/workspace/OnlineRemit_main/IntegrationTests/BuildReports/Coverage',
                    reportFiles: 'index.html',
                    reportName: 'Code Coverage'
                ]
                archiveArtifacts artifacts: 'IntegrationTests/BuildReports/Coverage/*.*'
            }
        }
        stage ("Performance Test") {
            steps {
                sh 'docker exec 884627942e26 bash'
                sh '/bin/sh -c cd /opt/apache-jmeter-5.4.1/bin'
                sh '/bin/sh -c ./jmeter -n -t /home/getaccountperftest.jmx -l /home/golide/Reports/LoadTestReport.csv -e -o /home/Reports/HtmlReport'
                sh 'docker cp 884627942e26:/home/Reports/HtmlReport /var/lib/jenkins/workspace/FlexToEcocash_main/IntegrationTests/BuildReports/Coverage bash'
            }
        }
        stage ("Publish Performance Test Report") {
            steps {
                step([$class: 'ArtifactArchiver', artifacts: '**/*.jtl, **/jmeter.log'])
            }
        }
        stage ("Docker Build") {
            steps {
                sh 'cd /var/lib/jenkins/workspace/OnlineRemit_main/OnlineRemit'
                echo "Running ${VERSION} on ${env.JENKINS_URL}"
                sh "docker build -t ${NAME} /var/lib/jenkins/workspace/OnlineRemit_main/OnlineRemit"
                sh "docker tag ${NAME}:latest ${REGISTRYUSERNAME}/${NAME}:${VERSION}"
            }
        }
        stage("Deploy To K8S") {
            sh 'kubectl apply -f {yaml file name}.yaml'
            sh 'kubectl set image deployments/{deploymentName} {container name given in deployment yaml file}={dockerId}/{projectName}:${BUILD_NUMBER}'
        }
    }
}
My issues:
What do I need to change for that command to execute?
How can I incorporate a condition to break the pipeline if the tests fail?
Jenkins environment: Debian 10
Platform: .NET Core 3.1
The Shift-Left.jtl is a results file which JMeter will generate after executing the Shift-Left.jmx test plan.
By default it will be in CSV format; depending on what you're trying to achieve, you can:
Generate charts from the .csv file
Generate the HTML Reporting Dashboard
If you have the Jenkins Performance Plugin you can get performance trend graphs, the possibility to automatically fail the build depending on various criteria, etc.
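To tie that back to the pipeline in the question, here is a rough sketch of a Performance Test stage that runs JMeter and feeds the results to the Performance Plugin. The container ID and JMeter paths are adapted from the question; the perfReport step and its threshold values are illustrative and assume the Performance Plugin is installed. The cd and the jmeter call are combined into a single shell, because each sh step (and each /bin/sh -c) starts a fresh shell:
stage ("Performance Test") {
    steps {
        // Run everything in ONE shell inside the JMeter container; a separate
        // sh '/bin/sh -c cd ...' step does not change the directory for the next step.
        sh 'docker exec 884627942e26 /bin/sh -c "cd /opt/apache-jmeter-5.4.1/bin && ./jmeter -n -t /home/getaccountperftest.jmx -l /home/Reports/LoadTestReport.jtl -e -o /home/Reports/HtmlReport"'
        // Copy the results file and the HTML dashboard back into the workspace.
        sh 'docker cp 884627942e26:/home/Reports/LoadTestReport.jtl .'
        sh 'docker cp 884627942e26:/home/Reports/HtmlReport ./HtmlReport'
        // Performance Plugin: trend graphs plus error-percentage thresholds that
        // mark the build unstable or failed.
        perfReport sourceDataFiles: 'LoadTestReport.jtl', errorUnstableThreshold: 1, errorFailedThreshold: 5
    }
}
The HTML dashboard produced by -e -o could then be published with publishHTML, just like the coverage report in the earlier stage.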

Running ssh-agent within Docker on Jenkins doesn't work

I am trying to use a container within my Jenkins pipeline, but I can't get ssh-agent to work inside it. I am on v1.19 of the plugin; when I run the code below I get:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
However, if I run the code outside the image it works perfectly, proving that the user has the correct permissions.
node('nodeName') {
    cleanWs()
    ws("short") {
        withDockerRegistry([credentialsId: 'token', url: "https://private.repo.com"]) {
            docker.image("img:1.0.0").inside("-u root:root --network=host") {
                sshagent(credentials: ["bitbucket_token"]) {
                    sh "mkdir ~/.ssh"
                    sh 'ssh-keyscan bitbucket.company.com >> ~/.ssh/known_hosts'
                    sh 'git clone ssh://git@bitbucket.company.com:PORT/repo.git'
                }
            }
        }
    }
}
Here is the output:
[Pipeline] sshagent
[ssh-agent] Using credentials jenkins (bitbucket_token)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ docker exec abcdef123456 ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-qwertyu/agent.15
SSH_AGENT_PID=22
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/short@tmp/private_key_8675309.key (/home/jenkins/short@tmp/private_key_8675309.key)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ mkdir /root/.ssh
[Pipeline] sh
+ ssh-keyscan bitbucket.company.com
# bitbucket.company.com:22 SSH-2.0-OpenSSH_6.6.1
# bitbucket.company.com:22 SSH-2.0-OpenSSH_6.6.1
# bitbucket.company.com:22 SSH-2.0-OpenSSH_6.6.1
[Pipeline] sh
+ git clone ssh://git@bitbucket.company.com:PORT/repo.git
Cloning into 'repo'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[Pipeline] }
$ docker exec --env ******** --env ******** abcdef123456 ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 22 killed;
[ssh-agent] Stopped.
[Pipeline] // sshagent
I'm completely stumped by this.

How to create a Jenkins pipeline that builds a Docker image

I'm new to Docker and Jenkins and I'm trying to create a Jenkins Pipeline that builds a Docker image.
I'm stuck when trying to build and keep receiving this error:
/var/jenkins_home/workspace/Docker-Pipeline@tmp/durable-a11b32f8/script.sh: line 1: docker: command not found
I've installed Ubuntu on a VM.
Installed Docker.
Installed jenkins/jenkins from Docker Hub.
I followed this tutorial for the rest:
https://www.youtube.com/watch?v=z32yzy4TrKM&t=147s
I'm doing the exact same thing as him but it keeps failing.
Started by user admin
Obtained Jenkinsfile from git https://github.com/naorca/NodeApp.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/Docker-Pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Clone repository)
[Pipeline] checkout
No credentials specified
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/naorca/NodeApp.git # timeout=10
Fetching upstream changes from https://github.com/naorca/NodeApp.git
> git --version # timeout=10
> git fetch --tags --progress https://github.com/naorca/NodeApp.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision b74538f2f34b6c28306fcca8119e215b87124e5e (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f b74538f2f34b6c28306fcca8119e215b87124e5e
Commit message: "Update Jenkinsfile"
> git rev-list --no-walk b74538f2f34b6c28306fcca8119e215b87124e5e # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] sh
+ docker build -t naorca/nodeapp .
/var/jenkins_home/workspace/Docker-Pipeline@tmp/durable-a11b32f8/script.sh: line 1: docker: command not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
TL;DR: You must have Docker within your Jenkins Agent.
Following the process you described above, I got Jenkins up and running using the latest jenkins/jenkins image from Docker Hub. After looking over the container's file system, I confirmed what I had speculated about in my comment on your question: Docker is not installed within the Jenkins container. Assuming you are using the Jenkins Master server as your Agent for the pipeline job, a couple of options come to mind:
Extend the existing Docker image (using something like FROM jenkins/jenkins in a new Dockerfile) to include your dependencies; a sketch of this appears at the end of this answer.
Bind your existing Docker daemon from the host into the runtime of the Jenkins container.
While I am partial to the first solution, I found an implementation of the second solution on the Docker Forums: "Using docker in a dockerized Jenkins container". I then gave that solution a try and can confirm that Docker is present for me in the Jenkins Master container after launching the Jenkins container with the following command:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):$(which docker) \
-p 8000:8080 \
-p 50000:50000 \
jenkins/jenkins
I am not sure, but I would imagine there could be some negative security implications for the host of the Jenkins Master due to mounting its own Docker socket and Docker executable into the container; however, I would leave that up to someone more knowledgeable about Docker's internals to determine. Regardless, I can confirm that the solution above does work.
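For the first option, here is a minimal Dockerfile sketch along the lines described above. The Docker version and download URL are assumptions (any recent static-binary release should do), and it only adds the Docker client, so the host's daemon socket still has to be mounted into the container:
# Hypothetical sketch: extend the official image with a Docker CLI so that
# the docker command exists inside the Jenkins container.
FROM jenkins/jenkins:lts
USER root
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar -xz -C /tmp \
    && mv /tmp/docker/docker /usr/local/bin/docker \
    && rm -rf /tmp/docker
USER jenkins
An image built from this would then be started with -v /var/run/docker.sock:/var/run/docker.sock (plus suitable permissions on the socket), but without bind-mounting the host's docker binary.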

Jenkins pipeline with Docker and Artifactory

How does one use Artifactory in a Pipeline Jenkins job with Conan, and run everything in a Docker container?
I have this Jenkinsfile right now:
def LINUX_DOCKER_IMAGE = "<docker_image>"
def ARTIFACTORY_NAME = "<server-name>"
def ARTIFACTORY_REPO = "<repo-name>"
String setup_conan = "config install <git url>"
node('linux') {
    stage("Get Sources") {
        checkout scm
    }
    docker.image(LINUX_DOCKER_IMAGE).inside {
        def server = Artifactory.server ARTIFACTORY_NAME
        def client = Artifactory.newConanClient userHome: "/tmp/conan_home"
        def serverName = client.remote.add server: server, repo: ARTIFACTORY_REPO
        stage("Setup Conan") {
            client.run(command: setup_conan)
        }
        stage("Build package") {
            client.run(command: "create --profile Linux-Release . foo/bar")
        }
        stage("Upload package") {
            String command = "upload -r ${serverName} --all --check --confirm \"myproject/*\""
            def b = client.run(command: command)
            server.publishBuildInfo b
        }
    }
}
But the Artifactory.newConanClient() function fails:
[...]
[Pipeline] InitConanClient
[myproject] $ docker exec --env BUILD_DISPLAY_NAME=#19 ... <container sha> sh -c 'conan config set log.trace_file="/tmp/conan-home/conan_log.log" '
[Pipeline] ConanAddRemote
[myproject] $ docker exec --env BUILD_DISPLAY_NAME=#19 ... <container sha> sh -c "conan remote add <server ID> <repo url> "
WARN: Remotes registry file missing, creating default one in /tmp/conan-home/.conan/registry.txt
[Pipeline] ConanAddUser
Adding conan user '<username>', server '<server ID>'
[myproject] $ docker exec --env BUILD_DISPLAY_NAME=#19 ... <container sha> sh -c ********
sh: -c: line 0: unexpected EOF while looking for matching `''
sh: -c: line 1: syntax error: unexpected end of file
[Pipeline] }
[...]
Can I set up the client differently? I can run the Conan commands in a regular sh step, but then how would I tell Artifactory about it?
This is an escaping issue: the Jenkins Artifactory plugin runs the Conan executable with /bin/sh.
There is a Jira issue for this, where you can find a snapshot build that resolves the problem.
The fix will be included in the next Jenkins Artifactory plugin release; in the meantime, you can download the snapshot version.
