I have a Jenkins master setup with 2 Linux slaves and a Windows slave. All boxes are switched off at night and restarted in the morning. In the morning the Jenkins master shows the 2 Linux nodes, but the Windows slave does not appear at all (it just disappears; it is not even shown as offline). The Jenkins version I am using is 2.73.
The problem was related to the Swarm client configuration. It was resolved by putting the correct configuration files together and enabling the Swarm client on machine startup (so the node re-registers itself if the machine goes down and comes back).
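For reference, a minimal sketch of what that startup launch can look like, assuming the Swarm plugin is installed on the master; the master URL, credentials, node name, and labels below are placeholders for your own setup:

```shell
#!/bin/sh
# Run the Swarm client on machine startup (e.g. from a startup script,
# or a Scheduled Task on Windows) so the node re-registers itself with
# the master after every reboot. All values are placeholders.
java -jar swarm-client.jar \
  -master http://jenkins-master:8080 \
  -username build-agent \
  -password secret \
  -name windows-slave-1 \
  -executors 2 \
  -labels windows
```

Because the Swarm client registers the node itself, the node entry reappears on the master automatically after a reboot instead of silently vanishing.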
I have Jenkins installed on Windows, with two Linux nodes added to it.
For some reason I needed to restart the Jenkins service, and now my Linux nodes are not coming online.
When I go to these nodes I can see the option to mark them offline, which I believe only appears when a node is actually online.
How do I bring these nodes back online? Before the Jenkins restart everything was fine, and I have not made any changes to the slaves.
**Changes I made to jenkins.xml before the restart:**
Changed the Java version:
<executable>C:\Program Files\Java\jdk-17.0.5\bin\java.exe</executable>
Added keystore values to the pre-existing arguments:
<arguments>-Xrs -Xmx512m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -Dhudson.model.DirectoryBrowserSupport.CSP="" -jar "%BASE%\jenkins.war" --httpPort=-1 --httpsPort=8000 -httpsKeyStore="jenkins.jks" --httpsKeyStorePassword=xxxxx -webroot="%BASE%\war"</arguments>
I don't think your changes had anything to do with the error. Try these options:
Go into the Linux VMs and make sure they are online
Ping the Linux VMs from the master VM to see if they are reachable
Edit the slave node configuration, check everything is as it was before, and save
Finally, remove the slave nodes and add them again
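If re-saving the configuration doesn't help, it can also be useful to relaunch the agent manually on a Linux node so you see the connection error directly. A sketch for a JNLP-launched agent; the master URL, node name, and secret are placeholders:

```shell
# On the Linux node: fetch the agent jar from your master and launch it.
# Replace the URL, node name, and secret with your own values.
curl -sO http://jenkins-master:8080/jnlpJars/slave.jar
java -jar slave.jar \
  -jnlpUrl http://jenkins-master:8080/computer/linux-node-1/slave-agent.jnlp \
  -secret 1234abcd
```

Any handshake failure (wrong port, stale secret, version mismatch after the restart) is then printed to the terminal instead of being hidden in the node's log page.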
The on-demand slaves are being created successfully from Jenkins. The first build on a slave succeeds, but the subsequent builds fail. Restarting the slave, or restarting the WinRM service, allows builds to proceed again.
The tcpdump shows no errors, so I can't figure out what the issue is. It looks like a problem with Jenkins communicating with the on-demand slaves over WinRM.
Has anybody faced a similar issue?
The on-demand slaves are Windows slaves.
The issue was with the "MaxMemoryPerShellMB" parameter of WinRM, which was set too low. When npm or git performed a checkout, it exhausted this memory limit in the WinRM shell.
I increased it to 1 GB and it is working fine now.
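For reference, the limit can be inspected and raised from an elevated PowerShell prompt on the slave; 1024 MB is the value that worked here, but tune it to your builds:

```shell
# Show the current WinRM shell settings, including MaxMemoryPerShellMB
winrm get winrm/config/winrs

# Raise the per-shell memory limit to 1 GB (run as Administrator)
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="1024"}'
```

After changing the setting, restart the WinRM service (`Restart-Service WinRM`) so running shells pick up the new limit.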
I have two different Jenkins masters. A single Windows machine acts as a slave for both, with a different workspace for each. I used "Launch agent via Java Web Start" (slave.jar). I want to know whether this will cause any problems.
In short: two different masters share a common slave; will it cause any issue?
It will not cause an issue as long as the working directories of the slaves are different. I have used this arrangement as part of a large CI setup with fewer than 8 machines hosting 20+ slave configurations.
The same holds for different masters: they can share a remote machine as long as the slaves' working directories on that machine are not the same.
This morning we noticed that all PuTTY sessions running Jenkins had been closed due to a network issue. Once the network was back up, we restarted Jenkins and observed that the Jenkins dashboard was not showing ANY jobs. We had around 80 jobs on the dashboard. We are using VM servers in a master/slave setup, and config.xml is fine. What do we do? How do we get back on track?
All Jenkins jobs are basically XML config files kept under the Jenkins home directory.
If your Jenkins is not showing these jobs, then it is not using the same home directory.
Check the Jenkins process to see which directory it is pointing to.
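A quick way to check this on the master; the paths below are common package defaults, not necessarily yours:

```shell
# Show the running Jenkins process and the options it was started with
# (JENKINS_HOME often appears as a -D flag or environment setting).
ps -ef | grep -i '[j]enkins'

# On Debian/Ubuntu package installs, the home is set in the service config:
grep JENKINS_HOME /etc/default/jenkins
```

If the directory shown differs from the one holding your 80 `jobs/*/config.xml` folders, point JENKINS_HOME back at the original directory and restart Jenkins; the jobs will reappear.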
We have a Jenkins master running on a Linux system. The same machine is attached as a node using "Launch slave via execution of a command on the master", with the same FS root as JENKINS_HOME. The command is: ssh "machine_name" "shell_script"
The shell script gets the latest slave.jar and runs it.
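That script might look roughly like the sketch below. With the "execution of a command on the master" launcher, the agent talks to the master over the command's stdin/stdout, so slave.jar needs no extra arguments; the master URL is a placeholder:

```shell
#!/bin/sh
# Hypothetical launch script run on the remote machine via ssh.
# Fetch the slave.jar matching the master's version, then run it;
# it communicates with the master over this command's stdin/stdout.
curl -sO http://jenkins-master:8080/jnlpJars/slave.jar
exec java -jar slave.jar
```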
The master has 0 executors; the node has been given 7. I'm seeing weird behavior in the builds, such as workspaces being deleted once a day, and I'm not sure whether this is related to the way the master/slave is configured.
Is this a supported configuration?