I'm trying to use the sshagent plugin to deploy to a remote server. When I use the syntax below:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // some block
                    sh "ssh -o StrictHostKeyChecking=no ubuntu@<host_ip>"
                    sh "whoami"
                }
            }
        }
    }
}
I get this output:
[Pipeline] sh
+ whoami
jenkins
while I expect the script to run on the remote server using the provided credentials!
So I had to run it this way:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // some block
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <host_ip> 'whoami && \
                        sudo apt update && sudo apt install -y docker.io && \
                        sudo usermod -aG docker ubuntu && \
                        source .bashrc && \
                        docker run -d nginx'"
                }
            }
        }
    }
}
Is there any "clean" way to run a script on the remote server as the ubuntu user instead of the jenkins user?
Edit:
I understand I need to run the commands under the ssh command, not as a separate sh step; otherwise they run as jenkins. I'm able to do it the scripted way, as below. That's why I'm asking if there's a better way to write it declaratively.
node {
    stage('Deploy') {
        def dockerRun = "whoami && \
            sudo apt update && sudo apt install -y docker.io && \
            sudo usermod -aG docker ubuntu && \
            source .bashrc && \
            docker run -d nginx"
        sshagent(['nginx-ec2']) {
            sh "ssh -o StrictHostKeyChecking=no ubuntu@<host_ip> '${dockerRun}'"
        }
    }
}
Thanks,
As noted, you should select a credential that references the right remote username, as shown in the SSH Agent Jenkins plugin documentation:
node {
    sshagent(credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}
Also, I would execute a single script containing the full sequence of commands you want to run remotely, rather than one ssh call per command.
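For instance, a minimal declarative sketch, assuming a deploy.sh committed to the repo (the script name and remote path are hypothetical) and the same nginx-ec2 credential mapped to the ubuntu user:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // copy the script once, then run everything remotely in a single ssh call
                    sh "scp -o StrictHostKeyChecking=no deploy.sh ubuntu@<host_ip>:/tmp/deploy.sh"
                    sh "ssh -o StrictHostKeyChecking=no ubuntu@<host_ip> 'bash /tmp/deploy.sh'"
                }
            }
        }
    }
}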
Well, so far this is the best way I have found to do it, in spite of the repetition!
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // some block
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'whoami'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'sudo apt update && sudo apt install -y docker.io'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'sudo usermod -aG docker ubuntu'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'source .bashrc'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'docker run -d nginx'"
                }
            }
        }
    }
}
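The repetition can also be factored into a small helper defined alongside the pipeline; a sketch, where remoteSh is a hypothetical name:
def remoteSh(String cmd) {
    // every remote command goes through the same ssh invocation
    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> '${cmd}'"
}
Inside the sshagent block the steps then shrink to remoteSh('whoami'), remoteSh('sudo apt update && sudo apt install -y docker.io'), and so on.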
Below is the error from Jenkins console output:
+ sonar-scanner -Dsonar.login=**** -Dsonar.projectBaseDir=.
/var/lib/jenkins/workspace/Mtr-Pipeline_develop@2@tmp/durable-0080bcff/script.sh: 1: /var/lib/jenkins/workspace/Mtr-Pipeline_develop@2@tmp/durable-0080bcff/script.sh: sonar-scanner: Permission denied
I have set up the token and pasted the key into the t-m-sonar-login variable in Jenkins global credentials, but I don't think the keys would cause a permission denied error. Can someone provide some pointers to look into the issue?
stage('SonarQube scan') {
    agent {
        dockerfile { filename 'sonar/Dockerfile' }
    }
    steps {
        withCredentials([string(credentialsId: 't-m-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                unstash 'coverage'
                unstash 'testResults'
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
sonar/Dockerfile:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install openjdk-11-jre-headless && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
Please check whether Java is available on the system where the SonarQube scanner runs.
Another thing you can try is to make the scanner binary executable:
Go to the SonarQube scanner directory, then into bin, and run chmod +x sonar-scanner.
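In the Dockerfile above that would be one extra RUN line; a sketch using the install path implied by the WORKDIR:
RUN chmod +x /root/sonar-scanner/bin/sonar-scanner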
So I want to start using Jenkins to build my app, then test it and push my image to a local repo.
Because I have 2 images to push, I would like to use docker-compose, but docker-compose is missing from the container.
I installed Jenkins through Portainer, and I'm using the jenkins/jenkins:lts image.
Is there a way to install docker-compose into the container without having to create my own Dockerfile for it?
My Jenkins pipeline so far is:
node {
    stage('Clone repository') {
        checkout([$class: 'GitSCM',
            branches: [[name: '*/master']],
            extensions: scm.extensions,
            userRemoteConfigs: [[
                url: 'repo-link',
                credentialsId: 'credentials'
            ]]
        ])
    }
    stage('Build image') {
        sh 'cd src/ && docker-compose build'
    }
    stage('Push image') {
        sh 'docker-compose push'
    }
}
You can either install docker-compose at image build time (via a Dockerfile):
FROM jenkins/jenkins
USER root
RUN curl -L \
    "https://github.com/docker/compose/releases/download/1.25.3/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose
USER jenkins
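Then build and run this image in place of the stock one; a sketch with a hypothetical tag:
$ docker build -t jenkins-compose .
$ docker run -d -p 8080:8080 -p 50000:50000 jenkins-compose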
Or you can install docker-compose after the Jenkins container is already running, via the same curl command:
$ sudo curl -L https://github.com/docker/compose/releases/download/1.25.3/run.sh -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
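Note that for the second option the commands must run inside the container, not on the host; a sketch via docker exec, assuming the container is named jenkins:
$ docker exec -u root jenkins sh -c 'curl -L https://github.com/docker/compose/releases/download/1.25.3/run.sh -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose'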
We have several Jenkins agents running as physical machines. So far I ran my Jenkins pipeline on the agents themselves; I am now trying to move the building and test execution into docker containers using the Jenkins docker plugin. Below you can find a simplified version of our Jenkinsfile, which uses Gradle to build, test, and package a Java Spring Boot application.
node {
    stage('Preparation') {
        cleanWs()
        checkout scm
        notifyBitbucket()
    }
}
pipeline {
    agent {
        docker {
            image "our-custom-registry.com/jenkins-build:latest"
            registryUrl 'https://our-custom-registry.com'
            registryCredentialsId '...'
            alwaysPull true
            args "-u jenkins -v /var/run/docker.sock:/var/run/docker.sock" // the pipeline itself requires docker
        }
    }
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble classes testClasses'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    when { expression { return build_params.ENABLE_UNITTEST } }
                    steps {
                        sh './gradlew test'
                        junit UNIT_TEST_RESULT_DIR
                    }
                }
                stage('Integration Tests') {
                    when { expression { return build_params.ENABLE_INTEGRATION } }
                    steps {
                        sh './gradlew integration'
                        junit INTEGRATION_TEST_RESULT_DIR
                    }
                }
            }
        }
        stage('Finalize') {
            stages {
                stage('Docker Push') {
                    when { expression { return build_params.ENABLE_DOCKER_PUSH } }
                    steps {
                        sh './gradlew pushDockerImage'
                    }
                }
            }
        }
    }
    post {
        cleanup {
            cleanWs()
        }
        always {
            script {
                node {
                    currentBuild.result = currentBuild.result ?: 'SUCCESS'
                    notifyBitbucket()
                }
            }
        }
    }
}
Below is the Dockerfile I use for the build image. As you can see, I manually create a jenkins user and add them to the docker groups (unfortunately the GID is 998 or 999 depending on the Jenkins agent).
FROM openjdk:8-jdk-stretch
USER root
# prerequisites:
# - a user and a group Jenkins with UID/GID=1001 exist
# - the user home is /var/jenkins
# - the user is in the docker group
# - on some agents docker has the gid 998, on some it is 999
RUN apt-get update \
    && apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common rsync tree \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/debian \
        $(lsb_release -cs) \
        stable" \
    && apt-get update \
    && apt-get -y install docker-ce docker-ce-cli containerd.io \
    && groupadd -g 1001 jenkins \
    && groupadd -f -g 998 docker1 \
    && groupadd -f -g 999 docker2 \
    && useradd -d "/var/jenkins" -u 1001 -g 1001 -m -s /bin/bash jenkins \
    && usermod -a -G 998 jenkins \
    && usermod -a -G 999 jenkins
USER jenkins
Jenkins then executes the following command:
docker run -t -d -u 1001:1001 -u jenkins -v /var/run/docker.sock:/var/run/docker.sock -w /var/jenkins/workspace/JOB_NAME -v /var/jenkins/workspace/JOB_NAME:/var/jenkins/workspace/JOB_NAME:rw,z -v /var/jenkins/workspace/JOB_NAME@tmp:/var/jenkins/workspace/JOB_NAME@tmp:rw,z -e ******** ... our-custom-registry.com/base/jenkins-build:latest cat
This pipeline works just fine... sometimes!
Most of the time, however, some files get mysteriously lost.
For example, my build.gradle includes multiple other files.
At some point during the build, one of these included files appears to be missing.
+ ./gradlew pushDockerImage
FAILURE: Build failed with an exception.
* Where:
Build file '/var/****/workspace/JOB_NAME/build.gradle' line: 35
* What went wrong:
A problem occurred evaluating root project 'foo'.
> Could not read script '/var/****/workspace/JOB_NAME/build/gradle/scripts/springboot-plugin.gradle' as it does not exist.
It is always a different file that goes missing.
I started running tree just before the ./gradlew step, just to make sure the file is not actually removed.
Does anybody have an idea what might be going on here?
Update
Forget everything I said about docker; this is a pure Gradle and Jenkins problem.
When I replace the docker agent with a plain Jenkins agent, the same problems occur.
The problem seems to be that you cannot run multiple Gradle tasks in parallel in the same directory.
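If that is the cause, one workaround (a sketch; the when conditions from the original Jenkinsfile are omitted) is to let a single Gradle invocation run both task sets instead of using Jenkins-level parallel stages:
stage('Test') {
    steps {
        // one Gradle build, one working directory, no concurrent access
        sh './gradlew test integration'
        junit UNIT_TEST_RESULT_DIR
        junit INTEGRATION_TEST_RESULT_DIR
    }
}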
I am trying to create a Jenkins pipeline where I need to execute multiline shell commands.
stage('Test') {
    def myserver = "myserver"
    sh '''
        "ssh -o StrictHostKeyChecking=no ${myserver} 'rm -rf temp && mkdir -p temp && mkdir -p real'"
    '''
}
But it always returns a "command not found" error. It works if I run the same thing with:
sh "ssh -o StrictHostKeyChecking=no ${myserver} 'rm -rf temp && mkdir -p temp && mkdir -p real' "
Is there a different way to access variables in a multiline shell block?
You need to use triple double quotes (""") so Groovy interpolates ${myserver}, and drop the inner double quotes so the shell sees ssh as the command:
sh """
    ssh -o StrictHostKeyChecking=no ${myserver} 'rm -rf temp && mkdir -p temp && mkdir -p real'
"""
I've created a docker image to be able to run node >= 7.9.0 and mongodb for testing in Jenkins. Some might argue that testing with mongodb is not the correct approach, but the app uses it extensively and I have some complex updates and deletes, so I need it there.
The Dockerfile is under dockerfiles/test/Dockerfile in my github repo. When using the pipeline syntax, the docker image is built successfully, but I can't do sh 'npm install' or sh 'npm -v' in the steps of the pipeline. The docker image is tested, and if I build it locally and run it I can do the npm install there. sh 'node -v' runs successfully in the pipeline, and so does sh 'ls'.
Here is the pipeline syntax.
pipeline {
    agent { dockerfile { dir 'dockerfiles/test' } }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
        }
    }
}
I get this error: ERROR: script returned exit code -1. I can't see anything wrong here. I've also tested with other node images, with the same result. If I run it on a node slave I can do the installation, but I do not want to maintain many different slaves with a lot of setup for integration tests.
And here is the Dockerfile:
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y \
    curl && \
    curl -sL https://deb.nodesource.com/setup_7.x | bash - && \
    apt-get install -y nodejs && \
    apt-get install -y mongodb-org
RUN mkdir -p /data/db
RUN export LC_ALL=C
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins
EXPOSE 27017
CMD ["/usr/bin/mongod"]
Found a workaround to a similar problem.
Problem:
Jenkins running a pipeline job
The job is running commands inside a debian slim container
All commands fail instantly with no error output, only an ERROR: script returned exit code -1
Running the container outside Jenkins and executing the same commands with the same user works as it should
Extract from the Jenkinsfile:
androidImage = docker.build("android")
androidImage.inside('-u root') {
    stage('Install') {
        sh 'npm install' // fails with a generic error and no output
    }
}
Solution
Found the answer in the Jenkins bug tracker: https://issues.jenkins-ci.org/browse/JENKINS-35370 and in Jenkins Docker Pipeline Exit Code -1.
My problem was solved by installing the procps package in my debian Dockerfile (the Jenkins durable-task wrapper apparently relies on the ps binary, which procps provides, to track the running script):
apt-get install -y procps
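In Dockerfile form that is a single extra line; a sketch:
RUN apt-get update && apt-get install -y procps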
I replicated your setup as faithfully as I could. I used your Dockerfile and Jenkinsfile, and here's my package.json:
{
    "name": "minimal",
    "description": "Minimal package.json",
    "version": "0.0.1",
    "devDependencies": {
        "mocha": "*"
    }
}
It failed like this for me during npm install:
npm ERR! Error: EACCES: permission denied, mkdir '/home/jenkins'
I updated one line in your Dockerfile to add --create-home:
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins --create-home
And the build passed. Kudos to @mkobit for keying in on the issue and linking to the Jenkins issue that will make this cleaner in the future.