How does one use Artifactory in a Jenkins Pipeline job with Conan, and run everything in a Docker container?
I have this Jenkinsfile right now:
def LINUX_DOCKER_IMAGE = "<docker_image>"
def ARTIFACTORY_NAME = "<server-name>"
def ARTIFACTORY_REPO = "<repo-name>"
String setup_conan = "config install <git url>"
node('linux') {
stage("Get Sources"){
checkout scm
}
docker.image(LINUX_DOCKER_IMAGE).inside {
def server = Artifactory.server ARTIFACTORY_NAME
def client = Artifactory.newConanClient userHome: "/tmp/conan_home"
def serverName = client.remote.add server: server, repo: ARTIFACTORY_REPO
stage("Setup Conan") {
client.run(command: setup_conan)
}
stage("Build package") {
client.run(command: "create --profile Linux-Release . foo/bar")
}
stage("Upload package") {
String command = "upload -r ${serverName} --all --check --confirm \"myproject/*\""
def b = client.run(command: command)
server.publishBuildInfo b
}
}
}
But the Artifactory.newConanClient() function fails:
[...]
[Pipeline] InitConanClient
[myproject] $ docker exec --env BUILD_DISPLAY_NAME=#19 ... <container sha> sh -c 'conan config set log.trace_file="/tmp/conan-home/conan_log.log" '
[Pipeline] ConanAddRemote
[myproject] $ docker exec --env BUILD_DISPLAY_NAME=#19 ... <container sha> sh -c "conan remote add <server ID> <repo url> "
WARN: Remotes registry file missing, creating default one in /tmp/conan-home/.conan/registry.txt
[Pipeline] ConanAddUser
Adding conan user '<username>', server '<server ID>'
[myproject] $ docker exec --env BUILD_DISPLAY_NAME=#19 ... <container sha> sh -c ********
sh: -c: line 0: unexpected EOF while looking for matching `''
sh: -c: line 1: syntax error: unexpected end of file
[Pipeline] }
[...]
Can I set up the client differently? I can run the Conan commands in a regular sh {} step, but then how would I tell Artifactory about it?
This is an escaping issue: the Jenkins Artifactory plugin runs the Conan executable with /bin/sh.
There is a Jira issue for that, where you can find a snapshot build which resolves the problem.
The fix will be included in the next Jenkins Artifactory plugin release. In the meantime, you can download the snapshot version.
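Until the fixed release is available, a workaround is to drive Conan with plain sh steps inside the container instead of Artifactory.newConanClient(). This is only a minimal sketch, not the plugin's API: the credentials ID 'artifactory-creds', the remote name 'my-remote', and the repository URL are placeholder assumptions. You lose the plugin's automatic build-info collection, but packages still reach Artifactory:
node('linux') {
    checkout scm
    docker.image(LINUX_DOCKER_IMAGE).inside {
        // Keep the Conan cache in a writable location inside the container
        withEnv(['CONAN_USER_HOME=/tmp/conan_home']) {
            withCredentials([usernamePassword(credentialsId: 'artifactory-creds', // placeholder ID
                    usernameVariable: 'ART_USER', passwordVariable: 'ART_PASS')]) {
                sh 'conan remote add my-remote https://<server>/api/conan/<repo-name>'
                sh 'conan user -p "$ART_PASS" -r my-remote "$ART_USER"'
                sh 'conan create --profile Linux-Release . foo/bar'
                sh 'conan upload -r my-remote --all --confirm "myproject/*"'
            }
        }
    }
}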
Related
Does anybody know why:
…
steps
{
script
{
sshagent(credentials: ['jenk'])
{
sh "git remote show …" //This does not work !
bat "git remote show …" //This works ??
}
}
}
...
The 'jenk' credentials are managed via Jenkins->credentials->System->global credentials
EDIT:
Sorry forgot the error msg:
Host key verification failed
fatal: Could not read from remote repository
Jenkins was configured using CYGWIN_NT-6.3-WOW (i686 Cygwin) for the sh commands.
In the end, these commands cleared everything up:
if (isUnix())
{
echo "Jenkins runs on Linux"
}
else
{
echo "Jenkins runs on Windows"
}
echo "show shell kernel version (uname -a) : "
def res = sh (script: "uname -a", returnStdout: true)
echo "${res}" //=>CYGWIN_NT-6.3-WOW...
res2 = sh (script: "ls -al ~/.ssh", returnStdout: true)
echo "${res2}"
So the solution to the problem above is adding the SSH keys to Cygwin.
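For the "Host key verification failed" part specifically, a minimal sketch (where <git-host> is a placeholder for your actual Git server's hostname) is to register the host key in Cygwin's known_hosts before fetching:
sshagent(credentials: ['jenk']) {
    sh '''
        mkdir -p ~/.ssh
        # <git-host> is a placeholder for your Git server's hostname
        ssh-keyscan <git-host> >> ~/.ssh/known_hosts
        git remote show origin
    '''
}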
If you need to access your credentials, you could do this:
https://codurance.com/2019/05/30/accessing-and-dumping-jenkins-credentials/
So I have this setup
stage('Build') {
steps {
sh """ docker-compose -f docker-compose.yml up -d """
sh """ docker-compose -f docker-compose.yml exec -T app buildApp """
}
}
stage('Start UI server') {
steps {
script { env.NETWORK_ID = get network id with some script }
sh """ docker-compose -f docker-compose.yml exec -d -T app startUiServer """
}
}
stage('UI Smoke Testing') {
agent {
docker {
alwaysPull true
image 'some custom image'
registryUrl 'some custom registry'
registryCredentialsId 'some credentials'
args "-u root --network ${env.NETWORK_ID}"
}
}
steps { sh """ run the tests """ }
}
And for some reason the pipeline fails with this error most of the time (though not all the time):
java.io.IOException: Failed to run image 'my image'. Error: docker: Error response from daemon: network 3c5b5b45ca0e not found.
So the Network ID is the right one. I've checked.
Any ideas why this is failing?
I really appreciate any help.
I am trying to SSH into a remote host and then execute certain commands in the remote host's shell. Following is my pipeline code.
pipeline {
agent any
environment {
// comment added
APPLICATION = 'app'
ENVIRONMENT = 'dev'
MAINTAINER_NAME = 'jenkins'
MAINTAINER_EMAIL = 'jenkins@email.com'
}
stages {
stage('clone repository') {
steps {
// cloning repo
checkout scm
}
}
stage('Build Image') {
steps {
script {
sshagent(credentials : ['jenkins-pem']) {
sh "echo pwd"
sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no'
sh "echo pwd"
sh 'sudo -i -u root'
sh 'cd /opt/docker/web'
sh 'echo pwd'
}
}
}
}
}
}
But upon running this job, it executes sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no' successfully but stops there and does not execute any further commands. I want the commands written after the ssh command to be executed in the remote host's shell. Any help is appreciated.
I would try something like this:
sshagent(credentials : ['jenkins-pem']) {
sh "echo pwd"
sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no "echo pwd && sudo -i -u root && cd /opt/docker/web && echo pwd"'
}
I resolved this issue like this:
script
{
sh """ssh -tt login#host << EOF
your command
exit
EOF"""
}
stage("DEPLOY CONTAINER"){
steps {
script {
sh """
#!/bin/bash
sudo ssh -i /path/path/keyname.pem username@serverip << EOF
sudo bash /opt/filename.sh
exit 0
EOF
"""
}
}
}
There is a better way to run commands on a remote host over SSH. I know this is a late answer, but I just explored this and would like to share it, since it may help others resolve this problem easily.
I just found this link helpful on how to run multiple commands on a remote host over SSH. As mentioned in the blog above, we can also run multiple commands conditionally.
By going through it, I found the syntax:
ssh username@hostname "command1; command2; commandN"
Now, how do you run commands on a remote host over SSH in a Jenkins pipeline?
Here is the solution:
pipeline {
agent any
environment {
/*
define your command in variable
*/
remoteCommands =
"""java --version;
java --version;
java --version """
}
stages {
stage('Login to remote host') {
steps {
sshagent(['ubnt-creds']) {
/*
Provide variable as argument in ssh command
*/
sh 'ssh -tt username@hostname $remoteCommands'
}
}
}
}
}
First, and optionally, you can define a variable that holds all the commands separated by ; (semicolon) and then pass it as a parameter to the ssh command.
Alternatively, you can pass your commands directly to the ssh command:
sh "ssh -tt username#hostanem 'command1;command2;commandN'"
I have used it in my code and it's working great!
Happy Learning :)
I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
step([$class: 'WsCleanup'])
docker.image('node').inside {
stage('SSH') {
sshagent (credentials: [ 'MY_KEY_UUID' ]) {
sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu#example.org uname -a"
}
}
}
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
step([$class: 'WsCleanup'])
docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
stage('SSH') {
sshagent (credentials: [ 'MY_KEY_UUID' ]) {
sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu#example.org uname -a"
}
}
}
}
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! That means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggested some users are logged into the host using an identity provider like LDAP.
The solution was finding a way to add the proper line to the passwd file on the container. Calling getent passwd $USER on the host will provide the passwd line for the Jenkins user running the container.
I added a step running on the node (and not the docker agent) to get the line and save it in a file. Then in the next step I mounted the generated passwd to the container:
stages {
stage('Create passwd') {
steps {
sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
"""
}
}
stage('Test') {
agent {
docker {
image '*******'
args '***** -v /tmp/tmp_passwd:/etc/passwd'
reuseNode true
registryUrl '*****'
registryCredentialsId '*****'
}
}
steps {
sh """ssh -i ********
"""
}
}
}
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline in one agent, instead of a separate one per stage.
The trick is, instead of directly using an image, to refer to a Dockerfile (which may be built FROM the original) and then add the user:
# Dockerfile
FROM node
ARG jenkinsUserId=
RUN if ! id $jenkinsUserId; then \
usermod -u ${jenkinsUserId} jenkins; \
groupmod -g ${jenkinsUserId} jenkins; \
fi
// Jenkinsfile
pipeline {
agent {
dockerfile {
additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins)"
}
}
}
agent {
docker {
image 'node:14.10.1-buster-slim'
args '-u root:root'
}
}
environment {
SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
steps {
sh '''#!/bin/bash
eval $(ssh-agent -s)
cat $SSH_deploy | tr -d '\r' | ssh-add -
touch .env
echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
yarn install
CI=false npm run build
ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
'''
}
Building on the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container which runs inside a Jenkins Docker slave (Docker in Docker):
if (validated_parameters.custom_gradle_image){
docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ "){
sshagent(['jenkins-git-io']){
sh "${gradleCommand}"
}
}
}
I'm running Jenkins on Amazon EC2--the master in a Docker container and an agent on a separate box. My pipeline executes an Ansible playbook, using the Jenkins Ansible plugin.
I had to install a new version of Ansible on the agent. I installed Ansible from git using the Running from Source instructions, and installed to /home/ec2-user/ansible. If I ssh to the agent and run which ansible I get ~/ansible/bin/ansible as expected. I entered /home/ec2-user/ansible/bin in the 'Ansible executables directory' for my new install, at the Manage Jenkins > Global Tool Configuration page.
When I run my Jenkins pipeline, however, I get this:
Running on docker-agent-1 in /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline
[Pipeline] {
[Pipeline] pwd
[Pipeline] stage
[Pipeline] { (Download source and capture commit ID)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ which ansible
which: no ansible in (/usr/local/bin:/bin:/usr/bin)
It says it's running on docker-agent-1 (which is the name of my agent), and I can see Ansible if I ssh there. Why can't Jenkins find the ansible executable?
UPDATE: After adding PATH as an environment variable, it can find Ansible, but now something else breaks. Here's the new output:
Running on docker-agent-1 in /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline
[Pipeline] {
[Pipeline] pwd
[Pipeline] stage
[Pipeline] { (Download source and capture commit ID)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ which ansible
/home/ec2-user/ansible/bin/ansible
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ ansible --version
ansible 2.2.0 (devel 1975a545bd) last updated 2016/09/20 16:19:06 (GMT +000)
lib/ansible/modules/core: (detached HEAD 70d4ff8e38) last updated 2016/09/20 16:19:08 (GMT +000)
lib/ansible/modules/extras: (detached HEAD db7a3f48e1) last updated 2016/09/20 16:19:09 (GMT +000)
config file = /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline/ansible.cfg
configured module search path = Default w/o overrides
[Pipeline] git
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git@bitbucket.org:planetgroup/planethealthcareportal.git # timeout=10
Fetching upstream changes from git@bitbucket.org:planetgroup/planethealthcareportal.git
> git --version # timeout=10
using GIT_SSH to set credentials Deployment key for Planet Healthcare Portal
> git fetch --tags --progress git@bitbucket.org:planetgroup/planethealthcareportal.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/develop^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/develop^{commit} # timeout=10
Checking out Revision e69608a15c9d433e2a22824c7e607048332a4160 (refs/remotes/origin/develop)
> git config core.sparsecheckout # timeout=10
> git checkout -f e69608a15c9d433e2a22824c7e607048332a4160
> git branch -a -v --no-abbrev # timeout=10
> git branch -D develop # timeout=10
> git checkout -b develop e69608a15c9d433e2a22824c7e607048332a4160
> git rev-list e69608a15c9d433e2a22824c7e607048332a4160 # timeout=10
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ git rev-parse --verify HEAD
[Pipeline] readFile
[Pipeline] echo
Current commit ID: e69608a
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Copy application.yml to environment)
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ sudo cp **** config/application.yml
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker build -t planethealthcare/portal_app .
Sending build context to Docker daemon 557.1 kB
Sending build context to Docker daemon 1.114 MB
Sending build context to Docker daemon 1.671 MB
Sending build context to Docker daemon 2.228 MB
Sending build context to Docker daemon 2.785 MB
Sending build context to Docker daemon 3.342 MB
Sending build context to Docker daemon 3.398 MB
Step 1 : FROM ruby:2.3
---> 7b66156f376c
Step 2 : MAINTAINER David Ham <dham@uxfactory.com>
---> Using cache
---> 47f6f577f049
Step 3 : RUN apt-get update && apt-get install -y build-essential curl gstreamer1.0-plugins-base gstreamer1.0-tools gstreamer1.0-x libqt5webkit5-dev qt5-default xvfb && apt-get clean && rm -rf /var/lib/apt/lists/* && mkdir -p /app
---> Using cache
---> 38c1313e574d
Step 4 : WORKDIR /app
---> Using cache
---> 75a023d99fce
Step 5 : COPY Gemfile Gemfile.lock ./
---> Using cache
---> c39c81496a6b
Step 6 : ENV QMAKE /usr/bin/qmake
---> Using cache
---> 3226bf5f4e63
Step 7 : RUN bundle install --retry 20
---> Using cache
---> 91cb9908d53a
Step 8 : COPY . ./
---> 7330a8f5ba7c
Removing intermediate container bd55b7deddaf
Step 9 : EXPOSE 3000
---> Running in 76e6418e2b3f
---> 81427ffb31f5
Removing intermediate container 76e6418e2b3f
Step 10 : CMD bundle exec rails server
---> Running in c2a90c3c59f6
---> 15ab02b3ab8d
Removing intermediate container c2a90c3c59f6
Successfully built 15ab02b3ab8d
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run test suite)
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=phc_portal_test postgres:9.5
[Pipeline] dockerFingerprintRun
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker inspect -f . planethealthcare/portal_app
.
[Pipeline] withDockerContainer
$ docker run -t -d -u 500:500 --link 85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e:postgres -p 3000:3000 -e RAILS_ENV=test -w /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline -v /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline:/home/ec2-user/jenkins/workspace/planet-healthcare-pipeline:rw -v /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp:/home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat planethealthcare/portal_app
[Pipeline] {
[Pipeline] echo
running tests...
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ rails db:migrate
/home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp/durable-32785ba4/script.sh: 2: /home/ec2-user/jenkins/workspace/planet-healthcare-pipeline@tmp/durable-32785ba4/script.sh: rails: not found
[Pipeline] }
$ docker stop 3acf37726ce1061d2e0f6e8d0cec882c707b42e710916636b17aaece4f516f2d
$ docker rm -f 3acf37726ce1061d2e0f6e8d0cec882c707b42e710916636b17aaece4f516f2d
[Pipeline] // withDockerContainer
[Pipeline] sh
[planet-healthcare-pipeline] Running shell script
+ docker stop 85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
+ docker rm -f 85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
85511ce90ce11c24818ae63bbbf7ab47745be7d96807d450b4adebd4c3196c5e
[Pipeline] }
[Pipeline] // stage
[Pipeline] mail
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
And here's the pipeline:
node('docker') {
currentBuild.result = "SUCCESS"
try{
def git_commit = ""
def workspace = pwd()
def APPLICATION_YML
def image
stage("Download source and capture commit ID") {
sh "which ansible"
sh "ansible --version"
// Download source
git branch: 'develop', credentialsId: 'b96345a1-543c-4ccd-9a86-deca7203625c', url: 'git@bitbucket.org:planetgroup/planethealthcareportal.git'
// Get the commit ID
sh 'git rev-parse --verify HEAD > GIT_COMMIT'
git_commit = readFile('GIT_COMMIT').take(7)
echo "Current commit ID: ${git_commit}"
}
stage("Copy application.yml to environment"){
// write the application.yml to a file
withCredentials([[$class: 'FileBinding', credentialsId: '67dbd2e7-008f-4463-89a6-9645060e8ec8', variable: 'APPLICATION_YML']]) {
sh "sudo cp ${env.APPLICATION_YML} config/application.yml"
}
}
stage("Build image"){
image = docker.build "planethealthcare/portal_app"
}
stage("Run test suite"){
// start postgres
def postgres95 = docker.image('postgres:9.5')
postgres95.withRun("-p 5432:5432 -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=phc_portal_test"){ postgres ->
image.inside("--link ${postgres.id}:postgres -p 3000:3000 -e RAILS_ENV=test") {
echo "running tests..."
sh "rails db:migrate"
sh "rspec --tag ~pending"
sh "cucumber"
}
}
}
stage("Push to ECR registry"){
docker.withRegistry('https://0000000000.dkr.ecr.us-east-1.amazonaws.com', 'ecr:dham'){
image.push "${git_commit}"
image.push 'latest'
}
}
stage("Deploy app"){
// run the playbook
ansiblePlaybook([
colorized: true,
credentialsId: 'planet-healthcare',
installation: 'ansible-2-2-0',
inventory: 'staging',
playbook: 'deploy.yml',
extras: "--extra-vars 'app_build_id=${git_commit}''"
])
}
}
catch(err) {
currentBuild.result = "FAILURE"
mail body: "project build error: ${err}\n\n\n ${currentBuild.description}" ,
subject: 'project build failed',
to: 'me@example.com'
throw err
}
}
It's failing in the "Run test suite" stage--it can't find rails to run rails db:migrate, even though I know it's in the container.
Why would setting PATH on the agent affect a script that happens inside a Docker container?
Do you execute which ansible in your script? It searches only the directories on PATH.
And it seems /home/ec2-user/ansible/bin is not in /usr/local/bin:/bin:/usr/bin (from your output).
You can go to the agent node's settings in Jenkins and add a PATH environment variable with the value $PATH:/home/ec2-user/ansible/bin.
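Note that an agent-level PATH variable is also injected into sh steps that run inside docker.image(...).inside containers, replacing the image's own PATH; that is most likely why rails stopped being found after your update. A safer sketch, assuming only the Ansible install path from the question, is to extend PATH just for the steps that need it, using the PATH+SOMETHING syntax that withEnv supports:
node('docker') {
    // Prepend the Ansible bin dir only where it is needed
    withEnv(['PATH+ANSIBLE=/home/ec2-user/ansible/bin']) {
        sh 'which ansible'
        sh 'ansible --version'
    }
    // Steps outside withEnv, including those inside a container,
    // keep their default PATH, so tools like rails stay resolvable.
}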