Can we run a Jenkins build against a different machine? My script lives on one remote machine and my Jenkins setup is on a different remote machine.
I have multiple JMX scripts created on different machines, but I want to run them all from Jenkins on a single machine.
Yes, there are multiple ways to do this. One way is to use the Jenkins SSH Pipeline Steps plugin. The following is an example of executing a command remotely.
node {
    // connection details for the remote machine
    def remote = [:]
    remote.name = 'test'
    remote.host = 'test.domain.com'
    remote.user = 'root'
    remote.password = 'password'
    remote.allowAnyHosts = true

    stage('Remote SSH') {
        sshCommand remote: remote, command: "ls -lrt"
        sshCommand remote: remote, command: "for i in {1..5}; do echo -n \"Loop \$i \"; date ; sleep 1; done"
    }
}
Either connect these machines to Jenkins as build agents and set your project or pipeline to execute on the remote machine(s), or install JMeter slaves on those machines and run your test from Jenkins, which then hosts the JMeter master.
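For the build-agent approach, a minimal scripted sketch (the 'jmeter-agent' label, the JMeter install path and the JMX path are assumptions; point them at the agent and files you actually have):

node('jmeter-agent') {
    stage('Run JMeter test') {
        // run the test plan in non-GUI mode on the agent that already holds the JMX script
        sh '/opt/jmeter/bin/jmeter -n -t /path/to/test.jmx -l results.jtl'
    }
    stage('Archive results') {
        // keep the results file on the Jenkins master
        archiveArtifacts artifacts: 'results.jtl'
    }
}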
More information:
Jenkins : Distributed builds
How to Perform Distributed Testing in JMeter
I am trying to migrate a TeamCity pipeline to a Jenkins one.
I am used to TeamCity and am still finding my way around Jenkins, but I haven't found a way to do this.
I want to have a pipeline with 2 steps, one for compiling an application, and a second step to send it to a server using SSH with private keys.
I can easily do that in TeamCity (after some research), but with Jenkins I have researched it and haven't found a way to get there, nor any examples on the web.
I have tried a freestyle project, where I can configure an SSH agent but can't seem to find where to place the rest of the logic, and a multibranch pipeline, which uses a Jenkinsfile within the repo to compile but offers nowhere to set the details for sending files over.
Should I use a multibranch pipeline and "send" the files from within the Jenkinsfile? If so, how do I tell Jenkins to make the key available to the Docker container?
Or should I use the freestyle project, and if so, how do I tell it to use the Jenkinsfile first and then send the resulting file or files to the destination server?
Or should I use something completely different?
You can do that in a pipeline:
1/ Add your private key to Jenkins by creating a new credential (SSH Username with private key).
2/ In a Jenkinsfile:
node {
    try {
        stage ("build") {
            dir('Build') {
                // make the node/npm tooling available on the PATH
                env.NODEJS_HOME = "${tool 'Node 12.12'}"
                env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
                sh 'npm install'
                sh 'npm pack'
            }
        }
        stage ("Deploy") {
            // connection details for the target server
            def remote = [:]
            remote.name = 'name of your server'
            remote.host = 'ip of your server'
            remote.allowAnyHosts = true
            dir ('Build') {
                withCredentials([sshUserPrivateKey(
                    credentialsId: 'id_of_your_previously_created_credential',
                    keyFileVariable: 'identityKey',
                    passphraseVariable: 'passphraseV',
                    usernameVariable: 'userR')]) {
                    remote.user = userR
                    remote.passphrase = passphraseV
                    remote.identityFile = identityKey
                    // upload the packed archive to the server
                    sshPut remote: remote, from: "yourArchive.tgz", into: '.'
                }
            }
        }
    } catch (e) {
        println "Caught: ${e}"
        throw e
    }
}
I've used npm for the example; 'npm pack' creates an archive that we then upload to the server.
You might also need to install the following plugins (if not already installed):
SSH Credentials Plugin
SSH Pipeline Steps
Credentials Binding Plugin
Credentials Plugin
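If you also need to run something on the target after the upload (for example unpacking the archive), the same SSH Pipeline Steps plugin provides sshCommand. A minimal sketch reusing the remote map from the Jenkinsfile above (the archive name and target directory are assumptions):

sshCommand remote: remote, command: "mkdir -p /var/www/my-app && tar -xzf yourArchive.tgz -C /var/www/my-app"

Note that this line has to stay inside the withCredentials block, since the key file bound to remote.identityFile only exists for the duration of that block.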
I have a user-interactive shell script that runs successfully on my Linux server, but when I try to run it via Jenkins, it doesn't run.
I have created a Jenkinsfile.
Jenkinsfile
node('slaves') {
    try {
        def app
        stage('Remove Docker service') {
            sh 'sshpass ssh docusr@10.26.13.12 "/path/to/shell/script"'
        }
    } catch (e) {
        // a try block needs a catch or finally to compile; rethrow so the build still fails on error
        throw e
    }
}
Shell Script
#!/bin/bash
read -p "Hi kindly enter the api name : " api
docker service logs $api --raw
The shell script runs successfully on my local server, but when I run it on Jenkins using the Jenkinsfile, the interactive prompt for the $api variable in my shell script doesn't work.
If I understood correctly, what you are trying to achieve defeats the purpose of automating your job with Jenkins: the job actually waits for user input, so it is in your best interest to make it a parameterized Jenkins build.
For your case, you can still pass $api as an argument on the sshpass/ssh command line and have it read from the Jenkins environment, or, better, make your Jenkins build parameterized and use the user input $api as the parameter, for example:
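A minimal sketch of the parameterized variant, assuming the script is changed to take the API name as its first argument instead of prompting for it (the parameter name API_NAME is an assumption; the host and script path come from your Jenkinsfile):

Jenkinsfile:
node('slaves') {
    stage('Docker service logs') {
        // API_NAME is a string parameter defined on the job ("This project is parameterized")
        sh "sshpass ssh docusr@10.26.13.12 '/path/to/shell/script ${params.API_NAME}'"
    }
}

Shell script:
#!/bin/bash
# take the api name from the first argument instead of prompting for it
api="$1"
docker service logs "$api" --raw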
I'm trying to create a Jenkins pipeline, or a group of items, to help me build a custom CI/CD setup for my projects, and right now I'm stuck at the deploy part. I want to deploy on the same server that my Jenkins is running on (Windows Server/IIS). I would also like to know how to deploy to another server (Windows Server/IIS); this second one would be my production environment.
I have managed to clone, build and archive using two approaches with Jenkins:
Pipelines
I have managed to create a pipeline that will clone my project, execute my build, and then archive the artifacts from the build. The problem is: how do I deploy the artifact now?
This is my pipeline script:
node {
    stage('git clone') {
        // Get some code from a GitHub repository
        git 'my-git-url'
    }
    stage('npm install') {
        bat label: 'npm install',
            script: '''cd app
npm install'''
    }
    stage('gulp install') {
        bat label: 'gulp install',
            script: '''cd app
npm i gulp'''
    }
    stage('gulp production') {
        bat label: 'gulp production',
            script: '''cd app
gulp production'''
    }
    stage('create artifact') {
        archiveArtifacts artifacts: 'app/dist/**',
            onlyIfSuccessful: true
    }
}
Freestyle projects
I have managed to create a project that will build and then archive the artifact, using an Execute shell build step and the Archive the artifacts post-build action. How can I deploy the artifact using this approach? In this case I'm trying to trigger a second freestyle project to execute the deploy.
Based on your question, "I want to deploy on the same server that my Jenkins is running on (Windows Server/IIS)", and the comments, I will suggest some approaches.
Windows
Using Windows as the operating system for production environments is not recommended. Linux is the best choice.
IIS
I don't recommend IIS for serving static assets. You need something lighter and more scalable. You could use:
nodejs with pm2 (https://expressjs.com/en/starter/static-files.html)
nginx (https://medium.com/#jgefroh/a-guide-to-using-nginx-for-static-websites-d96a9d034940)
apache (http://book.seaside.st/book/advanced/deployment/deployment-apache/serving-files)
docker
Deploy on IIS
Deploying static assets on IIS is just copying the files to some folder and pointing the IIS configuration at that folder:
https://www.atlantic.net/hipaa-compliant-hosting/how-to-build-static-website-iis/
Basic deploy on IIS using Jenkins
After your build commands, you just need to copy the build results (css, js, html, etc.) into a folder like c:/webapps/my-app (pre-configured in IIS).
You can do this with a simple shell execution in a freestyle project or in a pipeline script, as in https://stackoverflow.com/a/53159387/3957754.
You could use this approach to deploy your static assets on the same server that your Jenkins is running on, for example:
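A hedged sketch of that copy step as an extra stage in the pipeline from the question (c:\webapps\my-app is the pre-configured IIS folder mentioned above; adjust paths to your layout):

stage('deploy to IIS folder') {
    // copy the build output into the folder IIS already serves
    bat 'xcopy app\\dist c:\\webapps\\my-app /E /Y /I'
}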
Advanced deploy on IIS using Jenkins
Microsoft has a tool called MSDeploy. Basically it is a command-line tool to deploy apps to remote IIS servers:
msdeploy.exe -verb:sync -source:contentPath="" -dest:contentPath=""
More details here:
https://stackoverflow.com/a/12032030/3957754
Note: you can't run MSDeploy commands that talk to the MSDeploy service on the same machine.
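A hedged sketch of calling it from a Jenkins bat step against a remote IIS, assuming msdeploy.exe is on the PATH (the site name, server URL and credentials are placeholders, and the Web Management Service must be enabled and reachable on the target):

bat '''msdeploy.exe -verb:sync ^
 -source:contentPath="%WORKSPACE%\\app\\dist" ^
 -dest:contentPath="Default Web Site/my-app",computerName="https://target-server:8172/msdeploy.axd",userName="deploy-user",password="%DEPLOY_PASSWORD%",authType="Basic" ^
 -allowUntrusted'''

In a real job the password would be injected via a credentials binding rather than referenced as a plain environment variable.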
Jenkins Agent
A Jenkins agent is an application that runs on a remote server, not where the Jenkins master node runs.
https://wiki.jenkins.io/display/JENKINS/Step+by+step+guide+to+set+up+master+and+agent+machines+on+Windows
Your Jenkins master could use an agent on the remote (or local) IIS host and execute Jenkins jobs there with the same copy-and-paste approach.
I have a Jenkins job that executes with 'Publish over SSH'. The job connects to the remote server, transfers files, and runs an Ansible playbook.
The playbook runs as intended, as confirmed by the logs. However, at the end of the job an error is returned, failing the job. This is causing problems, as it prevents the pipeline from working correctly.
SSH: EXEC: completed after 402,593 ms
SSH: Disconnecting configuration [server] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [2]]
Build step 'Send files or execute commands over SSH' changed build result to UNSTABLE
[Run Playbook] $ /bin/sh -xe /tmp/jenkins1528195779014969962.sh
+ echo Finished
Finished
Finished: UNSTABLE
Is there a setting missing to allow this to pass?
I have never used the 'Publish over SSH' step you are referring to, but I can recommend the Jenkins Ansible plugin. I am running several playbooks in pipeline stages here successfully, from labelled build slaves (I have one dedicated slave that has Ansible installed), targeting Linux hosts on cloud infrastructure via SSH.
Especially in combination with the AnsiColor plugin, the output is very readable.
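A hedged sketch of what such a stage can look like with those two plugins installed, running on a slave labelled 'ansible' as described above (the playbook path, inventory and credentials id are assumptions):

node('ansible') {
    stage('Run playbook') {
        ansiColor('xterm') {
            ansiblePlaybook(
                playbook: 'playbooks/site.yml',
                inventory: 'inventories/prod/hosts',
                credentialsId: 'ssh-deploy-key',
                colorized: true
            )
        }
    }
}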
If you cannot try that plugin, check what the return code of the playbook's shell call is.
I use a Jenkins master-slave configuration for capturing performance metrics of a product. We have observed that the Jenkins slave tends to accumulate memory and thus influences the performance metrics being captured.
To ensure consistency of the metrics being captured, we are thinking of restarting the Jenkins slave every day from the master, when there are no jobs running on the slave. Is this feasible?
How can we accomplish it?
Note: running the Jenkins slave as a service is not an option because we are having other security/access issues with it.
I know this answer is coming in a bit late.
This is how I did the same thing, for the same reasons. I'm not sure it is the best way to achieve this, but it solved many of our problems.
For Windows machines:
Create a job that simply runs "shutdown -r -f" on the Windows machines; it will restart them.
Now for the bringing-it-back-online part: for similar reasons as yours, I didn't run the jenkins-slave as a service. Instead I configured the nodes to connect via the JNLP client and added the slave.jar command for each node to the Windows Task Scheduler (to run on startup).
The job now restarts the machine, and the Windows machine brings itself back online in Jenkins right after the restart.
For Mac machines:
The process is comparatively easier on Mac. First, make a job that runs "shutdown -r now" on the Mac node.
The node should simply be set up to connect via SSH; that takes care of bringing it back online in Jenkins.
This was the "execute shell" part of my script to restart all the machines used for our automation :
distro=`uname`
if [ "$distro" = "Windows_NT" ] || [ "$distro" = "WindowsNT" ]; then
    echo "Restarting Windows Machine...."
    shutdown -r -f
else
    echo "Restarting Mac Machine...."
    sudo shutdown -r now
fi
PS:
It's not exactly related to the question, but it may be useful for the situation you described: it can be a good idea to add a batch script that cleans temp files on startup of the Windows machines.
Add the following to a batch script (say, cleanTemp.bat) in the Startup folder of your Windows machine
(for Windows 10: C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup):
rmdir %temp% /s /q
md %temp%
If you still need an answer:
https://wiki.apache.org/general/Jenkins#How_do_I_restart_a_Jenkins_Unix_Slave.3F
That said, I just did a disconnect and then saw that the processes died on the slave; I did not have to kill them manually.
Then launch the slave again and that's it.
This works fine from the web UI; I have not looked for a CLI equivalent yet.
Create a job, e.g. "Reboot-Slave", set its shell step to "shutdown -r -t 0", and take the target slave name as a parameter. (This way, the restart command is executed directly on the target slave you want to restart.)
Create another job, e.g. "Reboot-Check-Slave-Online". In this job, call the first job and pass the target slave name as a parameter; in addition, you should write some logic to determine whether the slave has finished restarting and reconnected to the Jenkins server. You can implement that by adding an "Execute system groovy script" step to the job with code like the following:
import hudson.model.*

// name of the job parameter that holds the target slave's node name
def target_slave_param = "target_slave"
def resolver = build.buildVariableResolver
def target_slave = resolver.resolve(target_slave_param)
println "target_slave is: ${target_slave}"

def status = 0
//do {
println "Searching for ${target_slave}"
slave = Hudson.instance.slaves.find({ it.name == target_slave })
if (slave != null) {
    computer = slave.getComputer()
    if (computer.isOffline()) {
        println "Error! $target_slave is offline."
        status = 1
    } else {
        println "OK: $target_slave is online"
    }
} else {
    println "Slave $target_slave not found!"
    status = 1
}
//}
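If you want the check to wait for the node to come back instead of reporting immediately, the commented-out do/while above hints at a polling variant; a sketch with assumed timings (30-second interval, about 10 minutes total):

// poll until the slave reconnects, or give up after 20 attempts
def attempts = 0
def online = false
while (!online && attempts < 20) {
    def s = Hudson.instance.slaves.find({ it.name == target_slave })
    if (s != null && !s.getComputer().isOffline()) {
        online = true
        println "OK: ${target_slave} is back online"
    } else {
        attempts++
        sleep(30 * 1000)   // milliseconds
    }
}
if (!online) {
    println "Error! ${target_slave} did not come back online"
    status = 1
}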
Steps:
Install the Node and Label parameter plugin.
Check the "This project is parameterized" option.
Use the following command in the "Execute shell" field:
(sudo bash -c "(sleep 30 && sudo shutdown -r now) &") &
The Jenkins job detaches correctly and shows a successful execution.