I would like to execute a Jenkins pipeline:
stage('Deploy watchers') {
ansiblePlaybook(
playbook: "watcher-manage.yml",
extraVars: [
target: 'dev-dp-manager-1'
]
)
}
This produces ansible-playbook watcher-manage.yml -e target=dev-dp-manager-1.
This execution leads to:
fatal: [dev-dp-manager-1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
According to the documentation I need to add sudo: true to make the ansible command execute with root privileges. If I do so:
stage('Deploy watchers') {
ansiblePlaybook(
sudo: true,
playbook: "watcher-manage.yml",
extraVars: [
target: 'dev-dp-manager-1'
]
)
}
This produces ansible-playbook watcher-manage.yml -s -U root -e target=dev-dp-manager-1. Nevertheless I get the same error.
If I run sudo ansible-playbook ... by hand, the command succeeds.
My question is whether I can achieve the desired execution using the plugin, or whether I have to write the ansible command by hand?
Thanks!
What worked for me was:
stage('Deploy watchers') {
sh 'sudo ansible-playbook watcher-manage.yml --extra-vars="target=dev-dp-manager-1"'
}
Since the Jenkins Linux user doesn't need to have access to the SSH keys, simply lifting its permissions for this one command does the job.
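If you would rather stay with the plugin instead of shelling out, the ansiblePlaybook step also accepts a credentialsId pointing at an SSH private-key credential stored in Jenkins, which lets Ansible connect without the Jenkins user owning the key files. A sketch, assuming a credential with the hypothetical id 'ansible-ssh-key' exists:
stage('Deploy watchers') {
    ansiblePlaybook(
        playbook: "watcher-manage.yml",
        credentialsId: 'ansible-ssh-key', // hypothetical SSH credential id
        extraVars: [
            target: 'dev-dp-manager-1'
        ]
    )
}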
I am trying to execute a curl command from a Jenkins declarative pipeline on a remote server; however, it runs on the Jenkins node instead of on the remote server.
pipeline {
agent {
label '<node label>'
}
stages {
stage('TEst ssh') {
steps {
script {
sh '''
ssh -t user@test << ENDSSH
echo "ssh to server"
cd /opt/apps
url=$(curl -H 'X-JFrog-Art-Api: Artifactory_token' 'Artifactory_url' |jq -r '.uri')
echo $url
ENDSSH
'''
}
}
}
}
}
I am getting "Curl command not found". Can anyone suggest a solution?
Just put the full path to curl in the command, e.g. /bin/curl? Or change the command to set and check the PATH, then figure out why /bin is not in the PATH, or whether curl is even there.
Note: a remote SSH login is not the same login sequence as a remote shell.
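A quick way to see why curl isn't found is to print what the non-interactive SSH session actually has on its PATH. A minimal diagnostic sketch, assuming the same user@test host as in the question:
sh '''
ssh user@test 'echo $PATH; command -v curl || echo "curl not on PATH"'
'''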
I am trying to SSH into a remote host and then execute certain commands in the remote host's shell. Following is my pipeline code.
pipeline {
agent any
environment {
// comment added
APPLICATION = 'app'
ENVIRONMENT = 'dev'
MAINTAINER_NAME = 'jenkins'
MAINTAINER_EMAIL = 'jenkins@email.com'
}
stages {
stage('clone repository') {
steps {
// cloning repo
checkout scm
}
}
stage('Build Image') {
steps {
script {
sshagent(credentials : ['jenkins-pem']) {
sh "echo pwd"
sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no'
sh "echo pwd"
sh 'sudo -i -u root'
sh 'cd /opt/docker/web'
sh 'echo pwd'
}
}
}
}
}
}
But upon running this job, it executes sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no' successfully, but it stops there and does not execute any further commands. I want the commands written after the ssh command to execute inside the remote host's shell. Any help is appreciated.
I would try something like this:
sshagent(credentials : ['jenkins-pem']) {
sh "echo pwd"
sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no "echo pwd && sudo -i -u root && cd /opt/docker/web && echo pwd"'
}
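One caveat with chaining this way: sudo -i -u root starts an interactive root shell, so the cd and echo after the && only run once that shell exits, not inside it. A variant that runs the whole sequence under sudo instead (a sketch with the same placeholder host, assuming passwordless sudo on the remote side):
sshagent(credentials : ['jenkins-pem']) {
    sh 'ssh -tt -o StrictHostKeyChecking=no ubuntu@xx.xxx.xx.xx "sudo sh -c \'cd /opt/docker/web && pwd\'"'
}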
I resolved this issue like so:
script
{
sh """ssh -tt login#host << EOF
your command
exit
EOF"""
}
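One quoting detail with these heredocs: inside a Groovy """...""" string with an unquoted << EOF delimiter, $variables are expanded by Groovy and/or the local shell before anything reaches the remote host. Quoting the delimiter sends the body literally; a sketch:
script {
    sh '''ssh -tt login@host << 'EOF'
echo "$HOSTNAME"   # expands on the remote host, not locally
exit
EOF'''
}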
stage("DEPLOY CONTAINER"){
steps {
script {
sh """
#!/bin/bash
sudo ssh -i /path/path/keyname.pem username@serverip << EOF
sudo bash /opt/filename.sh
exit 0
EOF
"""
}
}
}
There is a better way to run commands on a remote host using SSH. I know this is a late answer, but I just explored this, so I would like to share it; it should help others resolve this problem easily.
I found this link helpful on how to run multiple commands on a remote host using SSH. We can also run multiple commands conditionally, as mentioned in the linked blog.
By going through it, I found the syntax:
ssh username@hostname "command1; command2; commandN"
Now, how do you run commands on a remote host over SSH in a Jenkins pipeline?
Here is the solution:
pipeline {
agent any
environment {
/*
define your command in variable
*/
remoteCommands =
"""java --version;
java --version;
java --version """
}
stages {
stage('Login to remote host') {
steps {
sshagent(['ubnt-creds']) {
/*
Provide variable as argument in ssh command
*/
sh 'ssh -tt username@hostname "$remoteCommands"'
}
}
}
}
}
First (and optionally), you can define a variable that holds all the commands separated by ; (semicolon) and then pass it as a parameter to the ssh command.
Alternatively, you can pass your commands directly to the ssh command:
sh "ssh -tt username#hostanem 'command1;command2;commandN'"
I have used it in my code and it's working great!
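For the conditional execution mentioned above, replace ; with && so that each command runs only if the previous one succeeded, e.g.:
sh 'ssh -tt username@hostname "command1 && command2 && commandN"'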
Happy Learning :)
I am running a mysql sidecar like the following :
docker.image("mysql:5.6").withRun("-e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e", '--lower_case_table_names=1') { c ->
docker.image("mysql:5.6").inside("--link ${c.id}:mysql") {
/* Wait until MySQL service is up */
sh "while ! mysqladmin ping -u root -h mysql -p ; do sleep 1; done"
sh "mysql -u root -h mysql -p --batch -e 'show databases;'"
}
dockerRunArgs.add("--link ${c.id}:mysql")
docker.build(image, dockerBuildArgs.join(' ')).inside(dockerRunArgs.join(' ')) {
// the actual building, archiving, deployment, etc, stages go here
withCredentials([string(credentialsId: 'CREDENTIALID', variable: 'VARIABLE')]) {
stage('Build') {
sh 'chmod 777 ./build.sh'
sh "./build.sh"
}
stage('DB migrations checkout ') {
checkout([
$class: 'GitSCM',
branches: [[name: 'develop']],
userRemoteConfigs: [[
credentialsId: 'TOKEN',
url: 'mygithuburl.git'
]]
])
sh 'composer install --prefer-dist --no-interaction --no-dev --no-progress'
sh 'php artisan migrate:refresh --seed'
}
}
}
}
This is as shown in the Jenkins documentation. Now I need to run some other services like Redis, Elasticsearch, Memcached and Beanstalkd. So where do I need to add these Docker images?
Right now I am building the Docker image inside the MySQL container block. Is it possible to run each of the sidecar containers in one stage and then do the migrations and run the tests in the next stage?
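A pattern worth trying, based on the nested-sidecar examples in the Jenkins Docker Pipeline documentation (a sketch with illustrative image tags, not tested against your setup): start one withRun per service and pass a --link for each into the build container:
docker.image('redis:5').withRun { redis ->
    docker.image('memcached:1.6').withRun { memcached ->
        docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes', '--lower_case_table_names=1') { c ->
            // link every sidecar into the build container
            dockerRunArgs.add("--link ${c.id}:mysql")
            dockerRunArgs.add("--link ${redis.id}:redis")
            dockerRunArgs.add("--link ${memcached.id}:memcached")
            docker.build(image, dockerBuildArgs.join(' ')).inside(dockerRunArgs.join(' ')) {
                // migration and test stages go here, as in the original snippet
            }
        }
    }
}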
I am using a Jenkins Pipeline and a Groovy script to download a zip on a Jenkins slave machine.
Following is my code:
pipeline {
agent { label '<my slave label>' }
stages {
stage('Download') {
steps {
script {
def url = "<server url>"
def processDownload = ['bash', '-c', "curl -g -k --noproxy \"*\" -o <output-dir> \"${url}\""].execute()
processDownload.waitFor()
def processUnzip = ['bash', '-c', "7z e lwbs.zip"].execute()
processUnzip.waitFor()
}
}
}
}
}
I am getting following error:
Warning: Failed to create the file Warning:
output-dir/newFile.zip: No such file or directory
I have checked the following:
- When I use the same curl command from the command prompt, it runs successfully.
- I have ensured that proper user permissions are granted to allow Jenkins to write to this directory.
- The directory exists and there are no spaces in the directory URL.
- There is enough disk space available on the slave.
- The server URL and certificates are correct.
- Many SO answers exist, but none mentions this issue on a Jenkins slave.
Is there anything I am missing?
Any help is appreciated. Thank you.
After long hours of research, I found out that a bash command triggered through Groovy's execute(), as in the following, is always executed by Jenkins on the master:
def processDownload = ['bash', '-c', "curl -g -k --noproxy \"*\" -o <output-dir> \"${url}\""].execute()
When I changed it to a normal shell step, Jenkins correctly executed it on the slave machine. Moreover, to unzip, I used the Pipeline Utility Steps plugin, which provides an unzip step that gets executed on the slave. Following is my working code:
stage('Download') {
steps {
script {
url = "<server url>"
sh "curl -k --noproxy \"*\" -o \"<output-dir>\" \"${url}\""
unzip(dir: '', glob: '', zipFile: 'fileName.zip')
}
}
}
I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
step([$class: 'WsCleanup'])
docker.image('node').inside {
stage('SSH') {
sshagent (credentials: [ 'MY_KEY_UUID' ]) {
sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu#example.org uname -a"
}
}
}
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu#example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
step([$class: 'WsCleanup'])
docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
stage('SSH') {
sshagent (credentials: [ 'MY_KEY_UUID' ]) {
sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu#example.org uname -a"
}
}
}
}
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! It means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggested that some users are logged in to the host using an identity provider like LDAP.
The solution was finding a way to add the proper line to the passwd file on the container. Calling getent passwd $USER on the host will provide the passwd line for the Jenkins user running the container.
I added a step running on the node (and not the docker agent) to get the line and save it in a file. Then in the next step I mounted the generated passwd to the container:
stages {
stage('Create passwd') {
steps {
sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
"""
}
}
stage('Test') {
agent {
docker {
image '*******'
args '***** -v /tmp/tmp_passwd:/etc/passwd'
reuseNode true
registryUrl '*****'
registryCredentialsId '*****'
}
}
steps {
sh """ssh -i ********
"""
}
}
}
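For reference, getent resolves the account through NSS, which is why it also finds users that come from LDAP and are missing from the local /etc/passwd; the output is a standard passwd line. An illustrative example (values made up):
$ getent passwd jenkins
jenkins:x:1005:1005:Jenkins:/var/lib/jenkins:/bin/bash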
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline in one agent, instead of one per stage.
The trick is, instead of directly using an image, to refer to a Dockerfile (which may be built FROM the original) and then add the user:
# Dockerfile
FROM node
ARG jenkinsUserId=
RUN if ! id $jenkinsUserId; then \
usermod -u ${jenkinsUserId} jenkins; \
groupmod -g ${jenkinsUserId} jenkins; \
fi
// Jenkinsfile
pipeline {
agent {
dockerfile {
additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins)"
}
}
}
agent {
docker {
image 'node:14.10.1-buster-slim'
args '-u root:root'
}
}
environment {
SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
steps {
sh '''#!/bin/bash
eval $(ssh-agent -s)
cat $SSH_deploy | tr -d '\r' | ssh-add -
touch .env
echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
yarn install
CI=false npm run build
ssh -t -o StrictHostKeyChecking=no root#172.22.132.115 'rm -rf /usr/local/src/build'
scp -r -o StrictHostKeyChecking=no build root#172.22.132.115:/usr/local/src/
ssh -t -o StrictHostKeyChecking=no root#172.22.132.115 'systemctl restart nginx'
'''
}
}
Building on the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container which runs inside a Jenkins Docker slave (Docker in Docker):
if (validated_parameters.custom_gradle_image){
docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ "){
sshagent(['jenkins-git-io']){
sh "${gradleCommand}"
}
}
}