I got stuck on an ssh command in my Jenkins Pipeline. The error is:
+ ssh
/var/lib/jenkins/workspace/test-docker-jenkins@tmp/durable-2c3c7fb4/script.sh: line 1: ssh: not found
script returned exit code 127
My Jenkinsfile is:
pipeline {
    agent {
        docker {
            image 'node:15.12.0-alpine'
        }
    }
    stages {
        stage("Prepare") {
            steps {
                sh "yarn"
            }
        }
        stage("Build") {
            steps {
                sh "yarn build"
            }
        }
        stage("Deploy") {
            steps {
                sh "ssh"
            }
        }
    }
}
Does anyone know how to resolve this problem? Or is there any way to ssh to a remote server from a Jenkins Pipeline? Thanks in advance. Have a good day!
You are trying to run ssh from a Docker container based on the node:15.12.0-alpine image, and that image does not contain an ssh client. From Jenkins you can of course do SSH; here is the SSH Pipeline Steps plugin and its documentation: https://www.jenkins.io/doc/pipeline/steps/ssh-steps/
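To make that concrete, here is a minimal sketch of what the 'Deploy' stage could look like with that plugin's sshCommand step, following the pattern in the plugin documentation. The credentials ID 'deploy-key' and the host are placeholders for your own values, and the SSH Pipeline Steps plugin must be installed; since these steps use a Java SSH client rather than the ssh binary, the missing ssh in the Alpine image should not matter:
stage("Deploy") {
    steps {
        withCredentials([sshUserPrivateKey(credentialsId: 'deploy-key', keyFileVariable: 'identity', usernameVariable: 'userName')]) {
            script {
                // Remote definition for the SSH Pipeline Steps plugin;
                // host, name and credentials ID are placeholders.
                def remote = [:]
                remote.name = 'production'
                remote.host = 'example.com'
                remote.user = env.userName
                remote.identityFile = env.identity
                remote.allowAnyHosts = true
                sshCommand remote: remote, command: 'echo "connected"'
            }
        }
    }
}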
Related
I want to connect to my server, clone the project from my repo onto that server, and do a docker build there. Could you show me a template? With my own code it only does the build on Jenkins; it doesn't connect to the server to do the pull.
pipeline {
    agent any
    stages {
        stage('Pulling our project') {
            steps {
                withCredentials([gitUsernamePassword(credentialsId: 'GitlabCred')]) {
                    sh 'git pull origin jks'
                }
            }
        }
        stage('Building our project') {
            agent any
            steps {
                sh 'docker compose up -d --build'
            }
        }
    }
}
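For what it's worth, one possible template reuses the ssh-heredoc pattern that appears further down this page. In the sketch below, the credentials ID 'ssh-cred' and the SERVER and APP_DIR values are placeholders; it assumes key-based SSH access from the Jenkins agent and that the repository is already cloned on the server:
pipeline {
    agent any
    environment {
        SERVER  = 'deploy@example.com'   // placeholder user@host of the target server
        APP_DIR = '/srv/project'         // placeholder path of the project on the server
    }
    stages {
        stage('Deploy on remote server') {
            steps {
                sshagent(['ssh-cred']) {   // placeholder SSH credentials ID
                    // Run the pull and the build on the server itself, not on Jenkins.
                    sh '''ssh -o StrictHostKeyChecking=no ${SERVER} <<EOF
cd ${APP_DIR}
git pull origin jks
docker compose up -d --build
EOF'''
                }
            }
        }
    }
}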
I'm running Jenkins as a container and for some reason I'm having issues :D
When the pipeline runs docker build -t testwebapp:latest . I get docker: Exec format error in the 'Build image' stage.
The pipeline command docker.build seems to do what it should, so is something wrong with my environment?
The Jenkins docker-compose file includes docker.sock, so the running Jenkins should be allowed to piggyback off the host Docker, right?
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Pipeline script defined in Jenkins:
pipeline {
agent any
stages {
stage('Initialize Docker') {
steps {
script {
def dockerHome = tool 'myDocker'
env.PATH = "${dockerHome}/bin:${env.PATH}"
}
}
}
stage('Checkout') {
steps {
git branch: 'main', url: 'github url'
}
}
stage('Build image') {
steps {
script {
docker.build("testwebapp:latest")
}
}
}
}
post {
failure {
script {
currentBuild.result = 'FAILURE'
}
}
}
}
The global tool configuration is pretty standard (screenshot: Jenkins global tool config).
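As a diagnostic sketch (not a fix), a stage like the following could help narrow down whether the problem is the docker CLI installed by the 'myDocker' tool or the daemon behind the mounted socket; it only reuses the tool name from the pipeline above, and the file utility may not be installed on the agent:
stage('Inspect Docker setup') {
    steps {
        script {
            // Same tool lookup as in the 'Initialize Docker' stage.
            def dockerHome = tool 'myDocker'
            env.PATH = "${dockerHome}/bin:${env.PATH}"
        }
        // Which docker binary is on the PATH, and for which architecture was it built?
        sh 'which docker && (file "$(which docker)" || true)'
        // Agent architecture, plus client/server versions reported over the mounted socket.
        sh 'uname -m && docker version'
    }
}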
I've created my Jenkinsfile for building my project in production and the pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
cd ${SOURCE_FOLDER}/project
git pull
git status
EOF'''
            }
        }
        stage('Composer') {
            parallel {
                stage('Composer') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
EOF'''
                    }
                }
                stage('Composer 2') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
EOF'''
                    }
                }
            }
        }
    }
}
Is there a way to run all the stages over one single SSH connection, to minimise the overhead and the number of connections?
I've done all the SSH key setup manually, creating the keys and pasting the public key onto the production machine.
You can create a function for the connection and pass SSH_USER and SERVER_ADDRESS as input parameters to that function, then call this function from all your stages.
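A minimal sketch of that idea is below. It reuses the heredoc pattern from the question and assumes SSH_USER, SERVER_ADDRESS and SOURCE_FOLDER are defined as environment variables or job parameters, as in the original pipeline; the parallel Composer stages are collapsed into one here for brevity. Note that each call still opens its own SSH connection, so to genuinely reduce the connection count you would group more commands into fewer calls:
// Hypothetical helper defined above the pipeline block in the Jenkinsfile.
def runOnServer(String sshUser, String serverAddress, String commands) {
    // One SSH connection per call, running the given commands on the remote host.
    sh """ssh ${sshUser}@${serverAddress} <<EOF
${commands}
EOF"""
}

pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                runOnServer(env.SSH_USER, env.SERVER_ADDRESS,
                    "cd ${env.SOURCE_FOLDER}/project && git pull && git status")
            }
        }
        stage('Composer') {
            steps {
                runOnServer(env.SSH_USER, env.SERVER_ADDRESS,
                    "docker run --rm -v ${env.SOURCE_FOLDER}/project:/app composer/composer:latest install")
            }
        }
    }
}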
Our environment:
Jenkins version: 2.138.3
Kubernetes plugin: 1.13.5
SSH Agent plugin: 1.17
I have a job that runs OK on an AWS machine (sshagent works as it should), but when I run the same job on our Kubernetes cluster it fails with an SSH error.
Attached the working pipeline:
pipeline {
    agent {
        label 'deploy-test'
    }
    stages {
        stage('sshagent') {
            steps {
                script {
                    sshagent(['deploy_user']) {
                        sh 'ssh -o StrictHostKeyChecking=no 99.99.999.99 ls'
                    }
                }
            }
        }
    }
}
If I change the label to 'k8s-slave', it fails with:
+ ssh -o StrictHostKeyChecking=no 99.99.999.99 ls
Warning: Permanently added '99.99.999.99' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Any idea?
I just added my Kubernetes configuration in Jenkins.
I am attempting to mount a volume for my Docker agent in a Jenkins pipeline. The following is my Jenkinsfile:
pipeline {
    agent none
    environment {
        DOCKER_ARGS = '-v /tmp/my-cache:/home/my-cache'
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-image:latest'
                    args '$DOCKER_ARGS'
                }
            }
            steps {
                sh 'ls -la /home'
            }
        }
    }
}
Sadly it fails to run, and the following can be seen from the pipeline.log file.
java.io.IOException: Failed to run image 'my-image:latest'. Error: docker: Error response from daemon: create /tmp/my-cache: " /tmp/my-cache" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
However, the following Jenkinsfile does work:
pipeline {
    agent none
    environment {
        DOCKER_ARGS = '/tmp/my-cache:/home/my-cache'
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-image:latest'
                    args '-v $DOCKER_ARGS'
                }
            }
            steps {
                sh 'ls -la /home'
            }
        }
    }
}
The only difference is that the -v flag is hardcoded outside of the environment variable.
I am new to Jenkins and have struggled to find any documentation on this behaviour. Could somebody please explain why I can't define my Docker agent args entirely in an environment variable?