I am trying to execute a curl command on a remote server from a Jenkins declarative pipeline; however, it runs on the Jenkins node instead of on the remote server.
pipeline {
    agent {
        label 'my-node' // label value omitted in the original; placeholder
    }
    stages {
        stage('Test ssh') {
            steps {
                script {
                    sh '''
ssh -t user@test << ENDSSH
echo "ssh to server"
cd /opt/apps
url=$(curl -H 'X-JFrog-Art-Api: Artifactory_token' 'Artifactory_url' | jq -r '.uri')
echo $url
ENDSSH
                    '''
                }
            }
        }
    }
}
I am getting "curl: command not found". Can anyone suggest a solution?
Just put the full path to curl in the command, e.g. /bin/curl? Or change the command to run set and inspect the output, then figure out why /bin is not in the PATH, or whether curl is even installed there.
Note: a remote ssh login is not the same login sequence as a remote interactive shell, so the PATH can differ between the two.
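Separately, note that an unquoted heredoc delimiter (<< ENDSSH) lets the shell on the Jenkins node expand $(curl ...) locally before ssh ever runs, which would produce exactly this symptom when curl is missing on the node. A minimal diagnostic sketch, assuming a POSIX shell on both ends (user@test reused from the question):
sh '''
# Quoting the delimiter defers all expansion to the remote host
ssh -t user@test << 'ENDSSH'
echo "PATH on remote: $PATH"
command -v curl || echo "curl not found on remote PATH"
ENDSSH
'''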
My Jenkins runs in Docker. I wrote a demo to reach my remote server with ssh-agent.
Here is my pipeline
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                sshagent (credentials: ['hehu']) {
                    sh 'ssh -o StrictHostKeyChecking=no -l yunwei xxx.xxx.xx.25 -a'
                    sh 'pwd'
                    sh 'whoami'
                }
            }
        }
    }
}
Output
It looks like the pwd and whoami commands still run in the Jenkins Docker container, not on my server. I have no idea how to use this plugin; I can't find any usage examples in the ssh-agent documentation.
You should use:
sh 'ssh -o StrictHostKeyChecking=no -l yunwei x.x.x.x "pwd && whoami && cmd..."'
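Without the quotes, the && chain is interpreted by the shell on the Jenkins node, so only the first command actually runs remotely. A minimal sketch of the corrected stage (credentials id and host reused from the question above):
stage('Hello') {
    steps {
        sshagent (credentials: ['hehu']) {
            // The quoted string is executed as a single command line on the remote host
            sh 'ssh -o StrictHostKeyChecking=no -l yunwei xxx.xxx.xx.25 "pwd && whoami"'
        }
    }
}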
Does anybody know why:
…
steps
{
    script
    {
        sshagent(credentials: ['jenk'])
        {
            sh "git remote show …"  // This does not work!
            bat "git remote show …" // This works??
        }
    }
}
...
The 'jenk' credentials are managed via Jenkins -> Credentials -> System -> Global credentials.
EDIT:
Sorry, I forgot the error message:
Host key verification failed
fatal: Could not read from remote repository
Jenkins was configured using CYGWIN_NT-6.3-WOW (i686 Cygwin) for the sh commands.
In the end, these commands cleared everything up:
if (isUnix())
{
echo "Jenkins runs on Linux"
}
else
{
echo "Jenkins runs on Windows"
}
echo "show shell kernel version (uname -a) : "
def res = sh (script: "uname -a", returnStdout: true)
echo "${res}" //=>CYGWIN_NT-6.3-WOW...
def res2 = sh (script: "ls -al ~/.ssh", returnStdout: true)
echo "${res2}"
So the solution to the problem above was adding the SSH keys to Cygwin.
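For the "Host key verification failed" part specifically, one way to seed the known_hosts file in the Cygwin home that sh uses is ssh-keyscan. A minimal sketch, with github.com as a stand-in for whatever Git host the job actually talks to:
sh '''
mkdir -p ~/.ssh
# github.com is a placeholder; substitute your actual Git host
ssh-keyscan github.com >> ~/.ssh/known_hosts
'''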
If you need your credentials you could do this:
https://codurance.com/2019/05/30/accessing-and-dumping-jenkins-credentials/
I am trying to remove the junit directory in the workspace of my Jenkins job, using a scripted pipeline that looks somewhat like this:
node {
    stage('Build') {
        checkout scm
        app = docker.build("...")
    }
    stage('Test') {
        app.withRun("--name=${CONTAINER_ID} ...") {
            // sh "mkdir -p junit"
            // sh "rm -rf junit/"
            dir "junit" {
                deleteDir
            }
            sh "docker exec ${CONTAINER_ID} /bin/bash -c 'source venv/bin/activate && python run.py test -x junit'"
            sh "docker cp ${CONTAINER_ID}:/home/foo/junit junit"
        }
    }
    junit 'junit/*.xml'
}
However, I am getting the following (red herring?) error:
java.lang.ClassCastException: hudson.tasks.junit.pipeline.JUnitResultsStep.testResults expects class java.lang.String but received class org.jenkinsci.plugins.workflow.cps.CpsClosure2
However, when I use the shell steps:
sh "mkdir -p junit"
sh "rm -rf junit/"
It works as expected. What am I doing wrong?
Try using parentheses:
dir ("junit") {
deleteDir()
}
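The likely reason: a bare deleteDir is just a Groovy property access, not a step invocation, and without parentheses Groovy does not unambiguously parse the string and the closure as the two arguments of dir; judging by the ClassCastException, the closure ends up where the junit step expects its String argument. With explicit parentheses the parse is unambiguous. In the context of the pipeline above:
stage('Test') {
    app.withRun("--name=${CONTAINER_ID} ...") {
        // Explicit calls: dir('junit') takes the path, deleteDir() is actually invoked
        dir('junit') {
            deleteDir()
        }
        ...
    }
}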
I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline plugin automatically tells the container to run as the same user that is logged in on the host, by passing a UID as a command-line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check which users existed on the host and in the container by running cat /etc/passwd in each environment. Sure enough, the lists of users were different. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host into the container when spinning it up:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
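One small addition of mine to that answer: mounting the file read-only keeps the container from ever writing to the host's /etc/passwd:
docker.image('node').inside('-v /etc/passwd:/etc/passwd:ro') {
    // same stages as above
}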
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! That means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggests some users are logged into the host using an identity provider like LDAP.
The solution was to find a way to add the proper line to the passwd file in the container. Calling getent passwd $USER on the host provides the passwd line for the Jenkins user running the container.
I added a step running on the node (not on the Docker agent) to fetch that line and save it to a file. Then, in the next stage, I mounted the generated passwd file into the container:
stages {
    stage('Create passwd') {
        steps {
            sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
            """
        }
    }
    stage('Test') {
        agent {
            docker {
                image '*******'
                args '***** -v /tmp/tmp_passwd:/etc/passwd'
                reuseNode true
                registryUrl '*****'
                registryCredentialsId '*****'
            }
        }
        steps {
            sh """ssh -i ********
            """
        }
    }
}
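For reference, getent emits a standard seven-field passwd line, which is exactly what the mounted file needs to contain; the UID/GID values below are only illustrative:
$ getent passwd jenkins
jenkins:x:1005:1005:Jenkins,,,:/home/jenkins:/bin/bash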
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline on one agent, instead of one per stage.
The trick is, instead of directly using an image, to refer to a Dockerfile (which may be built FROM the original image) and then add the user:
# Dockerfile
FROM node
ARG jenkinsUserId=
ARG jenkinsGroupId=
# Remap the container user's UID/GID to match the host's jenkins user
RUN if ! id ${jenkinsUserId}; then \
        usermod -u ${jenkinsUserId} jenkins; \
        groupmod -g ${jenkinsGroupId} jenkins; \
    fi
// Jenkinsfile
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins) --build-arg jenkinsGroupId=\$(id -g jenkins)"
        }
    }
}
agent {
    docker {
        image 'node:14.10.1-buster-slim'
        args '-u root:root'
    }
}
environment {
    SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
    steps {
        sh '''#!/bin/bash
            eval $(ssh-agent -s)
            cat $SSH_deploy | tr -d '\r' | ssh-add -
            touch .env
            echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
            echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
            yarn install
            CI=false npm run build
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
            scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
        '''
    }
}
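For context: credentials() on an "SSH Username with private key" credential exposes the path to a temporary file holding the key, which is why cat $SSH_deploy works. An equivalent sketch using the sshUserPrivateKey binding from the Credentials Binding plugin (same credential id as above):
withCredentials([sshUserPrivateKey(credentialsId: 'e99988ea-6bdc-45fc-b9e1-536b875bcac7', keyFileVariable: 'SSH_KEY')]) {
    sh '''
        eval $(ssh-agent -s)
        ssh-add "$SSH_KEY"
    '''
}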
Building on the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container which runs inside a Jenkins Docker slave (Docker in Docker):
if (validated_parameters.custom_gradle_image) {
    docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ ") {
        sshagent(['jenkins-git-io']) {
            sh "${gradleCommand}"
        }
    }
}