We have a number of Jenkins jobs which may get executed on Jenkins slaves. Is it possible to globally set the nice level of Jenkins tasks, to make sure that all Jenkins tasks get executed at a higher nice level?
Yes, that's possible. The "trick" is to start the slave agent with the proper nice level already; all Jenkins processes running on that slave will inherit that.
Jenkins starts the slave agent via ssh, effectively running a command like
cd /path/to/slave/root/dir && java -jar slave.jar
On the Jenkins node config page, you can define a "Prefix Start Slave Command" and a "Suffix Start Slave Command" to have this nice-d. Set as follows:
Prefix Start Slave Command: nice -n -10 sh -c '
Suffix Start Slave Command: '
With that, the slave startup command becomes
nice -n -10 sh -c 'cd "/path/to/slave/root/dir" && java -jar slave.jar'
This assumes that your login shell is a Bourne shell. For csh you will need a different syntax. Also note that this may fail if your slave root path contains blanks.
I usually prefer to "Launch slave via execution of command on the Master" and invoke ssh myself from within a shell wrapper. Then you can select the cipher and SSH client of your choice, and setting the niceness can be done without Prefix/Suffix kludges and without whitespace pitfalls; see the sketch below.
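A minimal sketch of such a wrapper, assuming a Bourne-compatible shell; the script path, host name and slave root dir are placeholders to adjust for your environment:
#!/bin/sh
# Hypothetical wrapper kept on the master, e.g. /usr/local/bin/start-niced-slave.sh
# The slave channel runs over this ssh connection's stdin/stdout.
exec ssh jenkins@slavehost \
    'cd "/path/to/slave/root/dir" && exec nice -n -10 java -jar slave.jar'
On the node's configuration page, select "Launch slave via execution of command on the Master" and point it at that script.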
I'm writing a Jenkins pipeline which, as its last stage, triggers a Java process on a remote host. Currently this last stage looks like:
stage('end') {
sh '''
ssh jenkins@xxx.xxx.xxx.xxx java -jar /opt/stat/stat.jar
'''
}
The process starts successfully on the remote machine, but the Jenkins job never ends. Is there any flag to tell Jenkins that the job is complete?
It seems your java command does not exit but keeps running, and that is probably the desired behavior. Try putting the process in the background on the remote machine:
stage('end') {
sh '''
ssh jenkins#xxx.xxx.xxx.xxx "java -jar /opt/stat/stat.jar &>/dev/null &"
'''
}
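If the job still hangs because ssh waits on the remote process's output, a common variant (sketched here with the same placeholder host and path as the question) is to detach the process with nohup and redirect all streams:
stage('end') {
    sh '''
        ssh jenkins@xxx.xxx.xxx.xxx "nohup java -jar /opt/stat/stat.jar > /dev/null 2>&1 &"
    '''
}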
I have a master and multiple slave machines. The slaves are configured to start via ssh from the master. Initially, there is a single java process running slave.jar. Occasionally I'll log in to a slave and find that there are 2, or sometimes even 3, java processes running slave.jar. This is while no jobs are running.
How many slave processes should be running when the slave is idle?
tomcat 54054 53913 0 Sep02 ? 00:00:00 bash -c cd "/var/hudson" && java -jar slave.jar
tomcat 54055 53914 0 Sep02 ? 00:00:00 bash -c cd "/var/hudson" && java -jar slave.jar
tomcat 54080 54054 1 Sep02 ? 01:11:45 java -jar slave.jar
tomcat 54081 54055 2 Sep02 ? 01:44:17 java -jar slave.jar
I had the same problem... and realized after hours(!) of investigation that our backup (master) system was online and connected to the same slaves (a backup had been installed there for testing).
Each time-triggered build started at nearly the same time on both masters and failed completely at random in about a third of the jobs executed.
So perhaps there's really another master connecting to your slaves?
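To check, you can look on an affected slave at what launched the extra processes and which hosts hold SSH connections to it. A rough sketch (tool availability and the need for root for the -p output vary by distro):
# Show each slave.jar process together with its parent (the launching bash/sshd)
pgrep -f 'java -jar slave.jar' | while read pid; do
    ps -o pid=,ppid=,cmd= -p "$pid"
done
# List established SSH connections to see which master(s) are talking to this slave
ss -tnp | grep ':22'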
I am running Jenkins on Ubuntu 14.04 (Trusty Tahr) with slave nodes connected via SSH. We're able to communicate with the nodes to run most commands, but when a command requires TTY input, we get the classic
the input device is not a TTY
error. In our case, it's a docker exec -it command.
So I'm searching through loads of information about Jenkins, trying to figure out how to configure the connection to the slave node to enable the -t option to force tty instances, and I'm coming up empty. How do we make this happen?
As far as I know, you cannot give -t to the ssh that Jenkins fires up (which makes sense, as Jenkins is inherently detached). From the documentation:
When the SSH slaves plugin connects to a slave, it does not run an interactive shell. Instead it does the equivalent of your running "ssh slavehost command..." a few times...
However, you can defeat this in your build scripts by...
looping back to yourself: ssh -t localhost command
using a local PTY generator: script --return -c "command" /dev/null
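For the docker exec case in the question, a sketch of the second workaround (the container name and command are placeholders):
# 'script' from util-linux allocates a pseudo-TTY for the wrapped command,
# so docker exec -it no longer complains about the missing TTY.
script --return -c "docker exec -it mycontainer ./run-tests.sh" /dev/null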
As per the title: in Jenkins, how can I add new slave nodes to my build cluster using the CLI? Or, if there is no CLI option, is there another scriptable approach that can be used?
Basic CLI instructions can be found here.
The following CLI command expects the new node's configuration XML on stdin:
java -jar jenkins-cli.jar -s [JENKINS_URL] create-node [NewNodeName]
For example, if you want to copy an existing node, you can use:
java -jar jenkins-cli.jar -s [JENKINS_URL] get-node [NodeToCopyFrom] | java -jar jenkins-cli.jar -s [JENKINS_URL] create-node [NewNodeName]
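If you're creating a node from scratch rather than copying one, you can feed a hand-written config XML to create-node. A rough sketch only: the element names follow the usual node XML layout but may differ across Jenkins versions, and the node name, remote FS path and label are placeholders:
cat <<'EOF' | java -jar jenkins-cli.jar -s [JENKINS_URL] create-node NewNodeName
<slave>
  <name>NewNodeName</name>
  <remoteFS>/home/jenkins</remoteFS>
  <numExecutors>1</numExecutors>
  <mode>NORMAL</mode>
  <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
  <launcher class="hudson.slaves.JNLPLauncher"/>
  <label>linux</label>
</slave>
EOF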
Many people use the Swarm Plugin to eliminate the need to add slaves manually. You would need to script the install of the swarm agent, of course, but that should be pretty straightforward.
I'm trying to start a Cassandra instance (0.8.10) from Jenkins (latest version, 1.463).
Inside a "free-style project" job, I have a "Execute shell" build step, where I have tried a couple of approaches:
.../tools/apache-cassandra-0.8.10/bin/cassandra -f
and
.../tools/apache-cassandra-0.8.10/bin/cassandra
The first approach starts Cassandra fine, but Jenkins never finishes the build and keeps on building. If I stop the build, the Cassandra process dies as well.
The second approach fails because the Cassandra process dies as soon as the build finishes.
I have also tried:
.../tools/apache-cassandra-0.8.10/bin/cassandra -f &
that is kind of lame, and doesn't work anyway.
Any ideas on how to start Cassandra from Jenkins?
Try using nohup with &. Also pipe stdout and stderr to a file or /dev/null:
nohup .../tools/apache-cassandra-0.8.10/bin/cassandra -f > /dev/null 2>/dev/null &
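Note that even a nohup'ed background process can still be killed by Jenkins' ProcessTreeKiller when the build finishes, since Jenkins tracks spawned processes via the BUILD_ID environment variable. A common workaround is to override BUILD_ID for that process, for example:
BUILD_ID=dontKillMe nohup .../tools/apache-cassandra-0.8.10/bin/cassandra -f > /dev/null 2>&1 &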