Pipeline script to execute jobs on different nodes (slaves) configured in Jenkins - jenkins

I have a Jenkins server (version 2.34.2) installed in a VM running Ubuntu 16.04. I would like to execute different jobs on two different slaves with a pipeline script. I have already added my slave by configuring nodes in Jenkins.
I tried this code:
node('MASTER'){
    stage 'Initialisation Socket MASTER'
    echo "INITIALISATION SOCKET MASTER....."
    sh 'javac /home/user/TCPEchoServer.java'
    sh 'cd /home/user && java TCPEchoServer'
}
node('slave'){
    stage 'Initialisation Socket SLAVE'
    echo "INITIALISATION SOCKET SLAVE....."
    sh 'javac /home/user/TCPEchoClient.java'
    sh 'cd /home/user && java TCPEchoClient 160.98.34.85 2007'
}
Here is the configuration of my node "slave"
Node - Slave - configuration
But I can't execute any job on this node. Can someone help me, please?
Thanks!
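For reference, node('LABEL') schedules the enclosed steps on an agent whose name or assigned label matches the string, and the build simply waits in the queue if no matching agent is online. A minimal sketch for checking that the agent labelled slave is actually reachable (label taken from the question):
node('slave') {
    stage('Check agent') {
        // Prints which agent actually picked up the work.
        echo "Running on ${env.NODE_NAME}"
        sh 'hostname'
    }
}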

Related

Terraform ignore parallelism flag when running through Jenkins

I am running a Terraform job using a Jenkins pipeline. Terraform refresh is taking too long (~10m); running terraform locally with -parallelism=60 is much faster (~2.5m).
When running the same config through a Jenkins slave with parallelism, I don't see any improvement in running time.
Jenkins ver. 2.154
Jenkins Docker plugin 1.1.6
SSH Agent plugin 1.19
Flow: Jenkins master creates job -> Jenkins slave running Docker image of terraform
def terraformRun(String envName, String terraformAction, String dirName = 'env') {
    sshagent(['xxxxxxx-xxx-xxx-xxx-xxxxxxxx']) {
        withEnv(["ENV_NAME=${envName}", "TERRAFORM_ACTION=${terraformAction}", "DIR_NAME=${dirName}"]) {
            sh '''
                #!/bin/bash
                set -e
                ssh-keyscan -H "bitbucket.org" >> ~/.ssh/known_hosts
                AUTO_APPROVE=""
                echo terraform "${TERRAFORM_ACTION}" on "${ENV_NAME}"
                cd "${DIR_NAME}"
                export TF_WORKSPACE="${ENV_NAME}"
                echo "terraform init"
                terraform init -input=false
                echo "terraform refresh"
                terraform apply -refresh-only -auto-approve -parallelism=60 -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars # Refresh is working but it seems to ignore parallelism
                echo "terraform ${TERRAFORM_ACTION}"
                if [ ${TERRAFORM_ACTION} = "apply" ]; then
                    AUTO_APPROVE="-auto-approve"
                fi
                terraform ${TERRAFORM_ACTION} -refresh=false -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars ${AUTO_APPROVE}
                echo "terraform ${TERRAFORM_ACTION} on ${ENV_NAME} finished successfully."
            '''
        }
    }
}
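One way to rule out the wrapper script dropping the flag is to pass parallelism through Terraform's TF_CLI_ARGS_* environment variables instead of on the command line, so every terraform plan/apply picks it up. A minimal sketch, meant to sit inside the same withEnv/sshagent block as the script above (the value 60 and the var-files come from the question):
// Sketch only: set parallelism via environment so it cannot be lost in shell quoting.
withEnv(['TF_CLI_ARGS_plan=-parallelism=60',
         'TF_CLI_ARGS_apply=-parallelism=60']) {
    sh 'terraform apply -refresh-only -auto-approve -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars'
}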

Unable to run multiple commands in parallel in the background on a remote host using Jenkins

I have a Jenkins job DSL that runs on a remote node (Linux OS) using the "Restrict where this project can be run" label.
It has a "Build" step -> "Execute shell".
In the execute shell I have mentioned:
sh /app/runalljobs.sh &
On the remote node host runalljobs.sh looks like below:
cat runalljobs.sh
ansible-playbook /app/test.yml -e argu=arg1
ansible-playbook /app/test.yml -e argu=arg2
.....
.....
ansible-playbook /app/test.yml -e argu=arg16
runalljobs.sh is supposed to start 16 ansible processes in the background when it is executed.
This works fine when the script is executed manually from the remote node's putty shell.
However, I want the script to start the ansible processes in the background on the remote node when it is invoked by the Jenkins job, which is not happening.
I also tried commenting out sh /app/runalljobs.sh &
and adding the individual ansible commands in the "Execute shell" as below:
ansible-playbook /app/test.yml -e argu=arg1 &
ansible-playbook /app/test.yml -e argu=arg2 &
.....
.....
ansible-playbook /app/test.yml -e argu=arg16 &
But this too did not trigger the ansible processes on the target node.
It works if I remove the "&", but then all the ansible commands run serially one after the other on the remote host.
However, I want all the ansible commands to be triggered in parallel in the background, and the Jenkins executor should proceed to the other execute-shell tasks.
Can you please suggest how I can achieve this requirement?
Jenkins allows you to perform tasks in parallel, but there's a catch: it requires switching to Jenkins Pipeline and using parallel. Your build script would then look like:
pipeline {
    agent { label 'my-remote-machine' }
    stages {
        ...
        stage('Ansible stuff') {
            parallel {
                stage('arg1') {
                    steps {
                        sh 'ansible-playbook /app/test.yml -e argu=arg1'
                    }
                }
                stage('arg2') {
                    steps {
                        sh 'ansible-playbook /app/test.yml -e argu=arg2'
                    }
                }
                ...
            }
        }
    }
}
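If you would rather stay with scripted pipeline, the same fan-out can be expressed by passing a map of branch names to closures to the parallel step; a minimal sketch (label and playbook path taken from the example above):
node('my-remote-machine') {
    stage('Ansible stuff') {
        // Build one branch per argument, then run them all concurrently.
        def branches = [:]
        for (arg in ['arg1', 'arg2', 'arg16']) {
            def a = arg   // capture the loop variable for the closure
            branches[a] = {
                sh "ansible-playbook /app/test.yml -e argu=${a}"
            }
        }
        parallel branches
    }
}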
If your command lines are quite similar (as in your example), you could use the matrix section to simplify the code:
matrix {
    axes {
        axis {
            name 'ARG'
            values 'arg1', 'arg2', 'arg3'
        }
    }
    stages {
        stage('test') {
            steps {
                sh 'ansible-playbook /app/test.yml -e argu=${ARG}'
            }
        }
    }
}
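Note that matrix is a declarative directive, so the fragment above has to sit inside a stage of a full pipeline block, roughly like this (reusing the agent label from the first example):
pipeline {
    agent { label 'my-remote-machine' }
    stages {
        stage('Ansible stuff') {
            matrix {
                axes {
                    axis {
                        name 'ARG'
                        values 'arg1', 'arg2', 'arg3'
                    }
                }
                stages {
                    stage('test') {
                        steps {
                            // Each axis value is exposed as the ARG environment variable.
                            sh 'ansible-playbook /app/test.yml -e argu=${ARG}'
                        }
                    }
                }
            }
        }
    }
}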
I know that this solution is a radical change - Jenkins Pipeline is a whole new world of CI. But it may be worth the effort, because Pipelines are heavily promoted by the Jenkins authors and a lot of plugins are being rewritten to work with them.

Jenkins Kubernetes Plugin - sshagent to git clone terraform module

I am attempting to use sshagent in Jenkins to pass my private key into the terraform container to allow terraform to source a module in a private repo.
stage('TF Plan') {
    steps {
        container('terraform') {
            sshagent (credentials: ['6c92998a-bbc4-4f27-b925-b50c861ef113']) {
                sh 'ssh-add -L'
                sh 'terraform init'
                sh 'terraform plan -out myplan'
            }
        }
    }
}
When running the job it fails with the following output:
[ssh-agent] Using credentials (id_rsa_jenkins)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent"
exit
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20; export SSH_AUTH_SOCK;
SSH_AGENT_PID=21; export SSH_AGENT_PID;
echo Agent pid 21;
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20
SSH_AGENT_PID=21
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/agent/workspace/demo@tmp/private_key_2729797926.key (user@workstation.local)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ ssh-add -L
ssh-rsa REDACTED user@workstation.local
[Pipeline] sh
+ terraform init
Initializing modules...
- module.demo_proj
  Getting source "git::ssh://git@bitbucket.org/company/terraform-module"
Error downloading modules: Error loading modules: error downloading 'ssh://git@bitbucket.org/company/deploy-kickstart-project': /usr/bin/git exited with 128: Cloning into '.terraform/modules/e11a22f40c64344133a98e564940d3e4'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[Pipeline] }
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent" "-k"
exit
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 21 killed;
[ssh-agent] Stopped.
I've triple-checked and I am definitely using the correct key pair. I am able to git clone from my Mac to the repo with no issues.
An important note is that this Jenkins deployment is running within Kubernetes. The master stays up and uses the Kubernetes plugin to spawn agents.
What does the "Host key verification failed." error mean? From my research it can be due to known_hosts not being set properly. Is ssh-agent responsible for that?
It turns out it was an issue with known_hosts not being set. As a workaround, we added this to our Jenkinsfile:
environment {
    GIT_SSH_COMMAND = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
}
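Setting StrictHostKeyChecking=no works but turns host key checking off entirely; an alternative sketch is to pin bitbucket.org's key with ssh-keyscan before terraform init, the same trick the Terraform wrapper script further up this page uses (this assumes ssh-keyscan exists in the terraform container; the credentials id is the one from the question):
sshagent (credentials: ['6c92998a-bbc4-4f27-b925-b50c861ef113']) {
    // Record bitbucket.org's host key so the git clone done by terraform init is trusted.
    sh 'mkdir -p ~/.ssh && ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts'
    sh 'terraform init'
    sh 'terraform plan -out myplan'
}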

SSH from jenkins to same host

I have a Jenkins instance running on my Raspberry Pi 3, and I also have my (simple) Apache web server running on the same Raspberry Pi.
I've got a pipeline in Jenkins to fetch a git repo, build it and put (via scp) the build files on my web server.
I have an SSH private/public key setup, but it's a bit stupid (?) to have an SSH key when Jenkins is hosted on the same 'machine' with the same IP address, no?
Anyway, on my Raspberry Pi I have set up the authorized_keys file and the known_hosts file with the public key, and I've added the private key to Jenkins via the ssh-agent plugin.
Here is my Jenkinsfile that is used by Jenkins to define my pipeline:
node {
    stage('Checkout') {
        checkout scm
    }
    stage('install') {
        nodejs(nodeJSInstallationName: 'nodeJS10.5.0') {
            sh "npm install"
        }
    }
    stage('build') {
        nodejs(nodeJSInstallationName: 'nodeJS10.5.0') {
            sh "npm run build"
        }
    }
    stage('connect ssh and remove files') {
        sshagent (credentials: ["0527982f-7794-45d0-99b0-135c868c5b36"]) {
            sh "ssh pi@123.456.789.123 -p 330 rm -rf /var/www/html/*"
        }
    }
    stage('upload new files') {
        sshagent (credentials: ["0527982f-7794-45d0-99b0-135c868c5b36"]) {
            sh "scp -P 330 -r ./build/* pi@123.456.789.123:/var/www/html"
        }
    }
}
Here is the output of the second-to-last stage, which is failing:
[Pipeline] }
[Pipeline] // nodejs
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (connect ssh and remove files)
[Pipeline] sh
[Deploy_To_Production] Running shell script
+ ssh pi@123.456.789.123 -p 330 rm -rf /var/www/html/asset-manifest.json /var/www/html/css /var/www/html/favicon.ico /var/www/html/fonts /var/www/html/images /var/www/html/index.html /var/www/html/manifest.json /var/www/html/service-worker.js /var/www/html/static /var/www/html/vendor
Host key verification failed.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
Note: I've changed my IP address and my SSH port for security reasons.
Manually I can SSH to my Raspberry Pi and execute the commands from my laptop (both from the same and from another domain work).
I've also port-forwarded the local IP so that I can connect to it via SSH when I'm not at home.
I guess I'm doing something wrong with the SSH keys etc., but I'm no expert whatsoever!
Can anyone help?
I need 4 more reputation points to comment, so I must write an answer :)
Try using -v to debug the ssh connection:
stage('connect ssh and remove files') {
    sshagent (credentials: ["0527982f-7794-45d0-99b0-135c868c5b36"]) {
        sh "ssh -v pi@123.456.789.123 -p 330 rm -rf /var/www/html/*"
    }
}
On the other hand, "Host key verification failed" means that the host key of the remote host was changed, or that you don't have the host key of the remote host at all. So first try just ssh -v pi@123.456.789.123 as the Jenkins user, from the Jenkins host.
The issue was indeed that the host key verification was failing. I think this was due to the host not being trusted.
But the real issue was pointed out by @3sky (see the other answer). I needed to log in as the jenkins user and try to SSH to my Raspberry Pi (which are both on the same machine).
So these are the steps I did:
Log in via SSH to my Raspberry Pi:
ssh -v pi@123.456.789.123 -p 330
Then I switched user to the jenkins user. After some Google searching I found out how:
sudo su -s /bin/bash jenkins
Then I SSH'ed again to my own machine (where I already was SSH'ed in), so that I got the pop-up for trusting this host once and for all:
ssh -v pi@123.456.789.123 -p 330
This solved my issue! Big thanks to 3sky for helping out!
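If you would rather not rely on the interactive prompt, the host key can also be recorded non-interactively with ssh-keyscan from a pipeline step; a minimal sketch using the port and (redacted) address from the question:
stage('trust host key') {
    // Appends the Pi's host key to the jenkins user's known_hosts once,
    // so the later ssh/scp stages no longer fail host key verification.
    sh 'mkdir -p ~/.ssh && ssh-keyscan -p 330 123.456.789.123 >> ~/.ssh/known_hosts'
}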

Jenkins inside Docker: how to configure the path for HP Fortify sourceanalyzer

I am running my Jenkins instance inside Docker.
I am trying to do a Fortify scan as a post-build step.
I have the HPE Security Fortify Jenkins Plugin installed.
Now when I try to do something like
def call(String maven_version) {
    withMaven(maven: maven_version) {
        script {
            sh "sourceanalyzer -b %JOB_NAME% -jdk 1.7 -extdirs %WORKSPACE%/target/deps/libs/ %WORKSPACE%/target/deps/src/**/* -source target/%JOB_NAME%.fpr"
        }
    }
}
But I get
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Fortify Analysis)
[Pipeline] withMaven
[withMaven] Options: []
[withMaven] Available options:
[withMaven] using JDK installation provided by the build agent
[withMaven] using Maven installation 'Maven 3.3.9'
[Pipeline] {
[Pipeline] script
[Pipeline] {
[Pipeline] sh
Running shell script
+ sourceanalyzer -b %JOB_NAME% -jdk 1.7 -extdirs %WORKSPACE%/target/deps/libs/ %WORKSPACE%/target/deps/src/**/* -source target/%JOB_NAME%.fpr
script.sh: sourceanalyzer: not found
I think all I need to do is create an environment variable for sourceanalyzer, but how do I see where that plugin is, since this is a Docker container and not really an operating system running? That's the source of my confusion.
It is not looking for an environment variable.
sourceanalyzer is an executable, and it's not available on the PATH.
Additionally: you can consider a Docker container as an operating system (multiple things and layers aggregated together before starting).
If you want to get into the running instance of your Jenkins image, then launch the following command (ensure your container is running):
#> docker exec -it <container-id> sh
The container id is available when you run:
#> docker ps
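Once you know where sourceanalyzer lives inside the container (for example by running which sourceanalyzer or find / -name sourceanalyzer in that shell), you can prepend that directory to PATH for the build step. A sketch, assuming a hypothetical install location of /opt/Fortify/bin:
// '/opt/Fortify/bin' is a hypothetical path; replace it with wherever the tool is installed in your image.
withEnv(['PATH+FORTIFY=/opt/Fortify/bin']) {
    // Note: in a Linux sh step the variables are $JOB_NAME / $WORKSPACE, not the Windows-style %JOB_NAME%.
    sh 'sourceanalyzer -b "$JOB_NAME" -jdk 1.7 -extdirs "$WORKSPACE/target/deps/libs/" "$WORKSPACE"/target/deps/src/**/* -source "target/$JOB_NAME.fpr"'
}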
