My Jenkins runs in Docker, and I wrote a demo to run commands on my remote server with ssh-agent.
Here is my pipeline:
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                sshagent (credentials: ['hehu']) {
                    sh 'ssh -o StrictHostKeyChecking=no -l yunwei xxx.xxx.xx.25 -a'
                    sh 'pwd'
                    sh 'whoami'
                }
            }
        }
    }
}
Output (console screenshot omitted)
It looks like the pwd and whoami commands still run in the Jenkins Docker container, not on my server. I have no idea how to use this plugin, and I can't find any usage examples in the ssh-agent documentation.
You should pass the commands to ssh itself, quoting the remote command so the && chain is interpreted on the remote host rather than by the local shell:
sh 'ssh -o StrictHostKeyChecking=no -l yunwei x.x.x.x "pwd && whoami && cmd..."'
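Applied to the pipeline above, the stage might look like this (a minimal sketch; the host, user, and credential ID are the ones from the question):

pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                sshagent (credentials: ['hehu']) {
                    // Each sh step starts a fresh local shell; only the quoted
                    // command passed to ssh actually runs on the remote server.
                    sh 'ssh -o StrictHostKeyChecking=no -l yunwei xxx.xxx.xx.25 "pwd && whoami"'
                }
            }
        }
    }
}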
Related
I have 2 AWS Ubuntu instances: 1st-server and 2nd-server.
Below is my Jenkins pipeline script, which creates a Docker image, runs a container on 1st-server, and pushes the image to a Docker Hub repo. That's working fine.
I want to pull the image and deploy it on 2nd-server.
When I ssh to the 2nd server through the pipeline script below, it logs in to 1st-server instead, even though the ssh credential ('my-ssh-key') belongs to 2nd-server. I'm confused how it is logging in to 1st-server; I checked with touch commands, and the file is created on 1st-server.
pipeline {
    environment {
        registry = "docker-user/docker-repo"
        registryCredential = 'docker-cred'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git url: 'https://github.com/git-user/jenkins-flask-tutorial.git/'
            }
        }
        stage('Building image') {
            steps {
                script {
                    sh "sudo docker build -t flask-app-one ."
                    sh "sudo docker run -p 5000:5000 --name flask-app-one -d flask-app-one"
                    sh "docker tag flask-app-one:latest docker-user/myrepo:flask-app-push-test"
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry( '', registryCredential ) {
                        sh "docker push docker-user/docker-repo:flask-app-push-test"
                        sshagent(['my-ssh-key']) {
                            sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver && cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test'
                        }
                    }
                }
            }
        }
    }
}
My question is: how do I log in to the 2nd server and pull the Docker image there through the Jenkins pipeline script? Help me out; where am I going wrong?
This is more of an alternative than a solution. You can pass the remote commands as arguments to ssh. This executes the command on the server and then disconnects:
ssh name@ip "ls -la /home/ubuntu/"
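Applied to the question above, the deploy step could quote the whole chain so that it runs on 2ndserver (a sketch; without the quotes, only ssh itself runs remotely and the rest of the && chain executes on the Jenkins agent):

sshagent(['my-ssh-key']) {
    // Everything inside the double quotes executes on 2ndserver
    sh 'ssh -o StrictHostKeyChecking=no ubuntu@2ndserver "cd /home/ubuntu/ && sudo touch test-file && docker pull docker-user/docker-repo:flask-app-push-test"'
}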
I am trying to ssh into a remote host and then execute certain commands on the remote host's shell. Following is my pipeline code:
pipeline {
    agent any
    environment {
        // comment added
        APPLICATION = 'app'
        ENVIRONMENT = 'dev'
        MAINTAINER_NAME = 'jenkins'
        MAINTAINER_EMAIL = 'jenkins@email.com'
    }
    stages {
        stage('clone repository') {
            steps {
                // cloning repo
                checkout scm
            }
        }
        stage('Build Image') {
            steps {
                script {
                    sshagent(credentials : ['jenkins-pem']) {
                        sh "echo pwd"
                        sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no'
                        sh "echo pwd"
                        sh 'sudo -i -u root'
                        sh 'cd /opt/docker/web'
                        sh 'echo pwd'
                    }
                }
            }
        }
    }
}
But upon running this job, it executes sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no' successfully but stops there and does not execute any further commands. I want the commands written after the ssh command to execute inside the remote host's shell. Any help is appreciated.
I would try something like this:
sshagent(credentials : ['jenkins-pem']) {
    sh "echo pwd"
    sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no "echo pwd && sudo -i -u root && cd /opt/docker/web && echo pwd"'
}
I resolved this issue with a heredoc:
script {
    sh """ssh -tt login@host << EOF
your command
exit
EOF"""
}
stage("DEPLOY CONTAINER"){
steps {
script {
sh """
#!/bin/bash
sudo ssh -i /path/path/keyname.pem username#serverip << EOF
sudo bash /opt/filename.sh
exit 0
<< EOF
"""
}
}
}
There is a better way to run commands on a remote host over SSH. I know this is a late answer, but I just explored this, so I'd like to share it; it should help others resolve this problem easily.
I found a link helpful on how to run multiple commands on a remote host over SSH. We can also run multiple commands conditionally, as mentioned in that blog.
By going through it, I found the syntax:
ssh username@hostname "command1; command2; commandN"
Now, how do you run commands on a remote host over SSH in a Jenkins pipeline?
Here is the solution:
pipeline {
    agent any
    environment {
        /*
        define your command in variable
        */
        remoteCommands =
            """java --version;
            java --version;
            java --version """
    }
    stages {
        stage('Login to remote host') {
            steps {
                sshagent(['ubnt-creds']) {
                    /*
                    Provide variable as argument in ssh command
                    */
                    sh 'ssh -tt username@hostname $remoteCommands'
                }
            }
        }
    }
}
Firstly, and optionally, you can define a variable that holds all the commands separated by ; (semicolon) and then pass it as a parameter to the command.
Alternatively, you can pass your commands directly to the ssh command:
sh "ssh -tt username@hostname 'command1;command2;commandN'"
I have used it in my code and it's working great!
Happy Learning :)
I am trying to remove the directory junit located in the workspace of my Jenkins job using a scripted Pipeline, which looks somewhat like this:
node {
    stage('Build') {
        checkout scm
        app = docker.build("...")
    }
    stage('Test') {
        app.withRun("--name = ${CONTAINER_ID} ...") {
            // sh "mkdir -p junit"
            // sh "rm -rf junit/"
            dir "junit" {
                deleteDir
            }
            sh "docker exec ${CONTAINER_ID} /bin/bash -c 'source venv/bin/activate && python run.py test -x junit'"
            sh "docker cp ${CONTAINER_ID}:/home/foo/junit junit"
        }
    }
    junit 'junit/*.xml'
}
However, I am getting the following (red herring?) error:
java.lang.ClassCastException: hudson.tasks.junit.pipeline.JUnitResultsStep.testResults expects class java.lang.String but received class org.jenkinsci.plugins.workflow.cps.CpsClosure2
However, when I use the shell steps:
sh "mkdir -p junit"
sh "rm -rf junit/"
it works as expected. What am I doing wrong?
Try using parentheses:
dir ("junit") {
    deleteDir()
}
Without them, Groovy parses dir "junit" { deleteDir } as dir("junit"({ deleteDir })): the string "junit" becomes a dynamic method call that resolves to the junit step, which receives the closure as its testResults argument, hence the ClassCastException.
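Applied to the scripted pipeline in the question, the Test stage becomes (a sketch that keeps the question's placeholders):

stage('Test') {
    app.withRun("--name = ${CONTAINER_ID} ...") {
        // deleteDir is a Pipeline step and must be invoked with parentheses
        dir ("junit") {
            deleteDir()
        }
        sh "docker exec ${CONTAINER_ID} /bin/bash -c 'source venv/bin/activate && python run.py test -x junit'"
        sh "docker cp ${CONTAINER_ID}:/home/foo/junit junit"
    }
}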
This is the Groovy script for a simple build pipeline that uses the Docker image of SQL Server on Linux:
def PowerShell(psCmd) {
    bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}

node {
    stage('git checkout') {
        git 'file:///C:/Projects/SsdtDevOpsDemo'
    }
    stage('build dacpac') {
        bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
        stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
    }
    stage('start container') {
        sh 'docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P@ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux'
    }
    stage('deploy dacpac') {
        unstash 'theDacpac'
        bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P@ssword1\""
    }
    stage('run tests') {
        PowerShell('Start-Sleep -s 5')
    }
    stage('cleanup') {
        sh 'docker stop SQLLinuxLocal2'
        sh 'docker rm SQLLinuxLocal2'
    }
}
I got to this point with some help on a question I posted a day or so ago. This was my attempt (with some help) at doing the same thing, but with the Docker plugin:
def PowerShell(psCmd) {
    bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}

node {
    stage('git checkout') {
        git 'file:///C:/Projects/SsdtDevOpsDemo'
    }
    stage('Build Dacpac from SQLProj') {
        bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
        stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
    }
    stage('start container') {
        docker.image('-e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P@ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux').withRun() {
            unstash 'theDacpac'
            bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P@ssword1\""
        }
        sh 'docker run -d --name SQLLinuxLocal2 microsoft/mssql-server-linux'
    }
    stage('sleep') {
        PowerShell('Start-Sleep -s 30')
    }
    stage('cleanup') {
        sh 'docker stop SQLLinuxLocal2'
        sh 'docker rm SQLLinuxLocal2'
    }
}
The problem with this is that although it works, the docker run -d line spins up a different incarnation of the container. Could someone please point me in the right direction for getting the same result as the first pipeline, but using the Docker plugin?
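For what it's worth, the Docker Pipeline plugin expects the image name alone in docker.image() and the docker run flags as the argument to withRun(). A sketch of the container stage along those lines (flags taken from the first pipeline; note withRun stops and removes the container when the closure exits, which would also replace the separate cleanup stage):

stage('start container and deploy dacpac') {
    // withRun('<run args>') starts the container, runs the closure, then cleans up
    docker.image('microsoft/mssql-server-linux').withRun('-e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P@ssword1" -p 15566:1433') { c ->
        unstash 'theDacpac'
        bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P@ssword1\""
    }
}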
I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! That means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggested some users are logged in to the host using an identity provider like LDAP.
The solution was finding a way to add the proper line to the container's passwd file. Calling getent passwd $USER on the host provides the passwd line for the Jenkins user running the container.
I added a step running on the node (not the Docker agent) to fetch the line and save it to a file. Then, in the next step, I mounted the generated passwd into the container:
stages {
    stage('Create passwd') {
        steps {
            sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
            """
        }
    }
    stage('Test') {
        agent {
            docker {
                image '*******'
                args '***** -v /tmp/tmp_passwd:/etc/passwd'
                reuseNode true
                registryUrl '*****'
                registryCredentialsId '*****'
            }
        }
        steps {
            sh """ssh -i ********
            """
        }
    }
}
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline on one agent, instead of per stage.
The trick is, instead of directly using an image, to refer to a Dockerfile (which may be built FROM the original) and then add the user:
# Dockerfile
FROM node

ARG jenkinsUserId=

RUN if ! id $jenkinsUserId; then \
        usermod -u ${jenkinsUserId} jenkins; \
        groupmod -g ${jenkinsUserId} jenkins; \
    fi
// Jenkinsfile
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins)"
        }
    }
}
agent {
    docker {
        image 'node:14.10.1-buster-slim'
        args '-u root:root'
    }
}
environment {
    SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
    steps {
        sh '''#!/bin/bash
            eval $(ssh-agent -s)
            cat $SSH_deploy | tr -d '\r' | ssh-add -
            touch .env
            echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
            echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
            yarn install
            CI=false npm run build
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
            scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
        '''
    }
}
From the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container which runs inside a Jenkins Docker slave (Docker in Docker):
if (validated_parameters.custom_gradle_image) {
    docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ ") {
        sshagent(['jenkins-git-io']) {
            sh "${gradleCommand}"
        }
    }
}