Execute a Jenkins pipeline on remote host without a slave - jenkins

My Jenkins is on host1 and I wish to trigger Ansible, which is on host2, using a Jenkins pipeline. This can be done by creating a slave node on host2 and specifying that agent in the Jenkins pipeline.
However, I do not have a Jenkins slave on host2.
Instead, Jenkins has connectivity to host2 by means of Server Groups Center, which can be found under Jenkins Global Configuration.
Do I need a Jenkins slave on host2? If not, how can I use Server Groups Center in a Jenkins pipeline to trigger Ansible on host2? Sample code please…

Do I need a Jenkins slave on host2?
No. If host2 runs Linux, you can simply run any command on it over SSH.
How can I use Server Groups Center in Jenkins pipeline to trigger Ansible on host2?
The Server Groups Center block comes from the SSH2 Easy plugin, which is very old and doesn't support Jenkins pipelines, so you can't use the settings from that block in your pipeline.
But there are other plugins for SSH; try the Publish over SSH plugin, for example. It adds a Publish over SSH block to Jenkins Global Configuration, where you can specify the connection parameters for host2.
You can then write a pipeline step as follows ({HOST2} is the name you gave host2 in the Publish over SSH block in Jenkins Global Configuration):
steps {
    sshPublisher(
        failOnError: true,
        publishers: [
            sshPublisherDesc(
                configName: "{HOST2}",
                transfers: [
                    sshTransfer(execCommand: "ansible -m ping all -i inventory_file", execTimeout: 120000)
                ]
            )
        ]
    )
}

Related

How to configure Jenkins job to build with Docker?

I'm attempting to build a branch using Jenkins and a 'Docker in Docker' container to build a container from src.
I define the Docker cloud instance here:
Should an extra tab be available that enables the job to use the Docker cloud instance set up above?
The job is a multi-branch pipeline:
But when I attempt to configure a job that uses the Docker cloud instance configured above, the option to build with Docker is not available:
The build log contains:
time="2021-04-04T14:27:16Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: no such file or directory"
error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&session=vgpahcarinxfh05klhxyk02gg&shmsize=0&t=ron%2Fml-services&target=&ulimits=null&version=1: context canceled
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
ERROR: script returned exit code 1
Finished: FAILURE
This suggests the build is looking for Docker on the same host as Jenkins, but I'm attempting to build with Docker on a different host.
Have I configured Docker with Jenkins correctly?
My Jenkinsfile contains:
node {
    def app
    stage('Clone repository') {
        checkout scm
    }
    stage('Build image') {
        app = docker.build("ron/services")
    }
    stage('Push image') {
        docker.withRegistry('https://registry.hub.docker.com', 'git') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}
Update:
After clicking the checkmark at Expose DOCKER_HOST and rebuilding, the log contains this error:
+ docker build -t ron/services .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
ERROR: script returned exit code 1
Finished: FAILURE
The Docker CLI tries to connect using the Docker socket in /var/run. This means that no external daemon is configured, for example via the environment variable DOCKER_HOST.
Try clicking the checkmark at Expose DOCKER_HOST.
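For illustration only, a minimal sketch of pointing the Docker CLI at a remote daemon from a pipeline by setting DOCKER_HOST; the TCP address and port below are placeholders, not values confirmed by the question:
node {
    // Assumption: the remote Docker daemon is exposed on a TCP socket (address/port are hypothetical).
    withEnv(['DOCKER_HOST=tcp://10.241.0.198:2376']) {
        // The Docker CLI now talks to the remote daemon instead of /var/run/docker.sock.
        sh 'docker info'
    }
}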
Not clear if this is what you are trying to do, but configuring a Docker cloud will tell your Jenkins to launch a container on 10.241.0.198 (the client) and run your Jenkins job in that container. To make this work, there are a couple of things to check:
ensure that the jenkins user on the Jenkins server can access port 2371 on the client, i.e. 'Test Connection' returns success
turn on 'Expose DOCKER_HOST' if you want to use Docker in the container
configure ssh so that the jenkins user on the Jenkins server can ssh to the container when it's running on the client (CMD ["/usr/sbin/sshd", "-D"] in the Dockerfile)
in Docker Agent Template: configure a label; turn on 'enabled'; configure a Docker image to run in the container; set Remote filesystem Root: /home/jenkins
in Container Settings: (very important!!) add /var/run/docker.sock:/var/run/docker.sock to Volumes
To get your Pipeline job to run on the Docker image, set the agent label to the label you provided in step 4, as sketched below.
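A minimal declarative pipeline sketch under these assumptions; the label name 'docker-agent' is hypothetical and stands for whatever label you configured in the Docker Agent Template:
pipeline {
    // Assumption: 'docker-agent' is the label set in the Docker Agent Template above.
    agent { label 'docker-agent' }
    stages {
        stage('Build image') {
            steps {
                // This works only because /var/run/docker.sock is mounted into the agent container.
                sh 'docker build -t ron/services .'
            }
        }
    }
}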
A couple of gotchas when creating the image to run in the container:
install both openssh-clients and openssh-server
install java
install any other build tools you might need, e.g. git
install Docker if you want Docker-in-Docker support
configure sftp in /etc/ssh/sshd_config, e.g. add:
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
Match group sftp
X11Forwarding no
AllowTCPForwarding no
ForceCommand internal-sftp

How to run docker container in a remote machine

I am trying to run this Jenkins pipeline code via Docker. I am using an AWS EC2 instance (ec2-user) as a VM here. This code is working fine, but...
node {
    stage('SCM CHECKOUT') {
        git 'https://bitbucket.org/rajesh212/myapp.git'
    }
    stage('MVN BUILD') {
        def mvnHome = tool name: 'maven', type: 'maven'
        sh "${mvnHome}/bin/mvn clean package"
    }
    stage('DEPLOYMENT VIA DOCKER') {
        def customImage = docker.build("image:${env.BUILD_ID}")
        docker.image("image:${env.BUILD_ID}").withRun('-p 9090:8080') { sleep 10000 }
    }
}
If I don't include the sleep command, the job runs successfully, but my Docker container starts and stops immediately, i.e. I am not able to see the output. How do I solve this problem?
I also want to run this Docker image on a remote machine. How do I do that?
In order to run on a remote server, you must use the docker.withServer command.
As for the container stopping, try changing the withRun command to withRun('-d -p 9090:8080')
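Put together, a hedged sketch of how those two suggestions could look in a scripted pipeline; the daemon URI and the credentials ID 'remote-docker' are placeholders, not values from the question:
// Assumption: the remote Docker daemon is reachable over TCP and 'remote-docker' is a Jenkins credentials ID.
docker.withServer('tcp://REMOTE_HOST:2376', 'remote-docker') {
    def customImage = docker.build("image:${env.BUILD_ID}")
    // The container runs on the remote host for the duration of this block.
    customImage.withRun('-p 9090:8080') {
        sleep 60
    }
}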
If you are using declarative pipelines, try this ssh command. As a prerequisite, you need to set up a key pair to allow Jenkins to ssh into the remote server. A dedicated ssh key pair for deployment is recommended for security reasons:
stage('Deploy to Production') {
    steps {
        sh 'ssh -i path/to/deploy_private_key user@DNS_REMOTE_SERVER "docker run -d REGISTRY/YOUR_DOCKER_IMAGE:TAG"'
    }
}
Use the -d parameter to run the container in detached mode.
Hope it helps.

Jenkins: How do I lint Jenkins pipelines from the command line?

I would like to be able to perform linting on Jenkins pipelines and it seems that Groovy linting is not enough.
How can I do this?
HTTP without crumb.
If you want to use HTTP and don't want to use a crumb, just add your username and password using the --user parameter. Replace <username> and <password> with the username and password of your user. Also check that the URL of the Jenkins server is correct.
curl --user <username>:<password> -X POST -F "jenkinsfile=<Jenkinsfile" http://localhost:8080/pipeline-model-converter/validate
If for some reason you can't use the Jenkins server linter, you can use npm-groovy-lint (works with declarative or scripted Jenkinsfiles, and also with Groovy shared libraries):
https://github.com/nvuillam/npm-groovy-lint
npm install -g npm-groovy-lint
npm-groovy-lint   # run in the root directory containing the Jenkinsfile
Looks like there are two options for linting pipeline scripts: via the CLI on the leader, or via an HTTP POST call.
Linting via the CLI with SSH
# ssh (Jenkins CLI)
# JENKINS_SSHD_PORT=[sshd port on master]
# JENKINS_HOSTNAME=[Jenkins master hostname]
ssh -p $JENKINS_SSHD_PORT $JENKINS_HOSTNAME declarative-linter < Jenkinsfile
Linting via HTTP POST using curl
# curl (REST API)
# Assuming "anonymous read access" has been enabled on your Jenkins instance.
# JENKINS_URL=[root URL of Jenkins master]
# JENKINS_CRUMB is needed if your Jenkins master has CSRF protection enabled, as it should
JENKINS_CRUMB=`curl "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)"`
curl -X POST -H $JENKINS_CRUMB -F "jenkinsfile=<Jenkinsfile" $JENKINS_URL/pipeline-model-converter/validate
https://jenkins.io/doc/book/pipeline/development/#linter
In addition to kongkoro's answer, there is a tool to lint Jenkinsfile.
https://www.npmjs.com/package/jflint
# install
$ npm install -g jflint
# usage
# JENKINS_URL=[root URL of Jenkins master]
$ jflint -j $JENKINS_URL Jenkinsfile
What jflint does is the same as the curl call in the official documentation, and jflint also works only with declarative pipelines. But it's easier to use.
SSH
Using the Jenkins SSH interface to run the linter:
Enable the SSH service on the Configure Global Security page and assign a port (e.g. 2222).
Add your Public SSH Key in your user's profile in Jenkins (JENKINS_URL/user/USER/configure).
Confirm SSH access by SSHing to Jenkins and running:
ssh -l admin -p 2222 localhost help
Validate your local Jenkinsfile using the following command on the Jenkins box:
ssh -l admin -p 2222 localhost declarative-linter < ./Jenkinsfile
For further details, read Pipeline Development Tools.
Furthermore, to simplify, you can add the following section to your ~/.ssh/config:
Host jenkins-cli
HostName localhost
User admin
Port 2222
ProxyJump jenkins-host.example.com
Then run: ssh jenkins-cli declarative-linter < ./Jenkinsfile.
You can also consider creating the following shell alias (e.g. to your startup files):
alias jenkins-lint="ssh jenkins-cli declarative-linter < ./Jenkinsfile"
Then just run: jenkins-lint.
POST
Validate a Jenkinsfile by using the following curl command:
curl --user username:password -X POST -F "jenkinsfile=<Jenkinsfile" http://jenkins-url:8080/pipeline-model-converter/validate
For details, please read How to validate a Jenkinsfile page.
VS Code plugin
Using the VS Code editor, you can install the Jenkins Pipeline Linter Connector plugin and configure it according to the instructions, so that it can post your Jenkinsfile to your Jenkins server via a POST request.
If you want to lint Jenkins pipelines, which can be scripted or declarative, the best solution is to lint using jenkins-cli.jar.
I tried whatever I could get my hands on, but this really looks like the best and most convenient option.
The only requirement is Java.
Download the cli jar
$ curl -O https://<jenkins-server>/jnlpJars/jenkins-cli.jar
Lint the Jenkins pipeline script - either Scripted or Declarative
$ java -jar jenkins-cli.jar -s '<jenkins-server-url>' -auth <username>:<password> declarative-linter < Jenkinsfile
It's always best to use the URL of the Jenkins server where the pipeline will actually run, as that takes care of checking that the necessary plugins, etc. are in place for the pipeline to function correctly.

how to run docker commands inside jenkins pipeline jobs

In my Manage Jenkins > Global Tool Configuration, I have already configured a tool called "docker" as follows:
name: docker
install automatically: CHECKED
docker version: latest
Then all I have in my Jenkinsfile is the following and nothing else:
node {
    DOCKER_HOME = tool "docker"
    sh """
        echo $DOCKER_HOME
        ls $DOCKER_HOME/bin/
        $DOCKER_HOME/bin/docker images
        $DOCKER_HOME/bin/docker ps -a
    """
}
I get an error like this "Cannot connect to the Docker daemon. Is the docker daemon running on this host?".
Following is the full console log:
Started by user Syed Rakib Al Hasan
[Pipeline] node
Running on master in /var/jenkins_home/workspace/helloDocker
[Pipeline] {
[Pipeline] tool
[Pipeline] sh
[helloDocker] Running shell script
+ echo /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker
/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker
+ ls /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/
docker
+ /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
How do I ensure that the Docker daemon/service is running before my pipeline reaches the line that runs Docker commands?
Is there any other native docker-build-step plugin way to achieve what I am doing here, such as docker ps -a, docker images, or docker build -t?
Some assumptions:
Let's say my chosen node does not already have docker/docker-engine installed or running on the host machine. That's the purpose of the tool command: to automatically install Docker on the node if it is not already there.
This Jenkins plugin is only for the Docker client; I'd solve (or rather work around) this by:
setting up Jenkins slaves where the Docker daemon is reachable, and adding a label to them
setting up a housekeeping job which fails if the Docker daemon is not reachable, so we can notify the infra team without having QA figure out and escalate the problem (see the sketch after this list)
assigning jobs which assume the Docker daemon to be reachable to this label
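A minimal sketch of such a housekeeping job, assuming the hypothetical label 'docker' marks slaves where the daemon should be reachable:
pipeline {
    // Assumption: slaves with a reachable Docker daemon carry the hypothetical label 'docker'.
    agent { label 'docker' }
    stages {
        stage('Check Docker daemon') {
            steps {
                // 'docker info' exits non-zero when the daemon is unreachable, which fails the build.
                sh 'docker info'
            }
        }
    }
}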
I hope it helps, and I'm curious if any of you have a better solution!

How to have all Jenkins slave tasks executed with nice?

We have a number of Jenkins jobs which may get executed on Jenkins slaves. Is it possible to globally set the nice level of Jenkins tasks, to make sure that all Jenkins tasks get executed with a higher nice level?
Yes, that's possible. The "trick" is to start the slave agent with the proper nice level already; all Jenkins processes running on that slave will inherit that.
Jenkins starts the slave agent via ssh, effectively running a command like
cd /path/to/slave/root/dir && java -jar slave.jar
On the Jenkins node config page, you can define a "Prefix Start Slave Command" and a "Suffix Start Slave Command" to have this nice-d. Set as follows:
Prefix Start Slave Command: nice -n -10 sh -c '
Suffix Start Slave Command: '
With that, the slave startup command becomes
nice -n -10 sh -c 'cd "/path/to/slave/root/dir" && java -jar slave.jar'
This assumes that your login shell is a bourne shell. For csh, you will need a different syntax. Also note that this may fail if your slave root path contains blanks.
I usually prefer to "Launch slave via execution of command on the Master" and invoke ssh myself from within a shell wrapper. Then you can select the cipher and client of your choice, and the niceness can also be set without Prefix/Suffix kludges and without whitespace pitfalls.
