The on-demand slaves are being created successfully from Jenkins. The first build on a slave succeeds, but subsequent builds fail. Restarting the slave, or restarting the WinRM service, allows builds to proceed again.
tcpdump shows no errors, and I can't figure out what the issue is. It looks like a problem with Jenkins communicating with the on-demand slaves over WinRM.
Has anybody faced a similar issue?
The on-demand slaves are Windows slaves.
The issue was with the "MaxMemoryPerShellMB" setting of WinRM, which was set too low. When npm or git performed a checkout, it exhausted this memory limit in the WinRM shell.
I have increased it to 1 GB and it is working fine now.
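For reference, this limit can be raised on the slave from an elevated command prompt. A minimal sketch, assuming the same 1 GB cap:

    winrm set winrm/config/winrs @{MaxMemoryPerShellMB="1024"}

Newly created remote shells should pick up the raised limit.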
I am running a Jenkins multibranch job. Suddenly it no longer lets me save configuration changes; the page keeps loading indefinitely, without ever timing out.
Can someone please help me with this?
You could have a look at the Jenkins master machine's CPU and memory and check what is consuming them. I have seen this happen when CPU usage is near 100%; in that case, restarting the Jenkins process or the Jenkins master machine can help.
Try to remember, or ask colleagues, whether there have been any recent changes to the Jenkins master machine. We had similar issues after installing plugins.
Avoid executing jobs on the Jenkins master; use slave agents instead.
You may also need to clean up old builds if you are not doing this already (see the sketch below).
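For the build cleanup, the job-level "Discard Old Builds" option is the safer route; as a one-off on the master, something like the following could reclaim space. A sketch only, assuming a default JENKINS_HOME of /var/lib/jenkins; back up before deleting anything:

    JENKINS_HOME=/var/lib/jenkins
    # remove build records older than 30 days from every job
    find "$JENKINS_HOME"/jobs/*/builds -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +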
In my case, after disabling/enabling all plugins one by one, the culprit turned out to be the "AWS SQS Build Trigger Plugin", which caused the "Save"/"Apply" buttons to move and stop working.
We recently tried moving our Windows Jenkins slaves to run as a service instead of just running the slave agent JNLP file.
According to the Mercurial Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Mercurial+Plugin):
The default installation runs the Windows service with the "local system" account, which does not seem to have enough privileges for hg to execute, so you could try running the Jenkins service with the same account as TortoiseHG, which will allow it to complete.
This we did, and it worked. For a while.
But sometimes, after a disconnect between the Jenkins slave and master, it would stop working: Jenkins would call Mercurial and it would hang, just as it did when the service ran under the "local system" account.
I could sometimes get it working again by restarting the Jenkins service on the slave. But sometimes I had to go back in and reset the service to run under an elevated account.
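For reference, re-pointing the service at a specific account can be scripted from an administrative prompt, which at least avoids clicking through services.msc each time. The service and account names below are placeholders:

    rem service and account names are placeholders for your own
    sc config jenkinsslave obj= ".\jenkinsbuild" password= "secret"
    sc stop jenkinsslave
    sc start jenkinsslave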
Has anybody else experienced anything like this? Is there any way to keep the Jenkins service running with elevated privileges?
In the job's post-build actions, I am archiving the artifacts. 90% of the time, when the Jenkins job reaches this step, either the slave it is running on hangs or goes offline, or the job hangs, and if I kill the job it throws a "Caused by: java.lang.OutOfMemoryError: Java heap space" error.
I am running Jenkins ver. 1.560.
Has anyone seen this or is aware of a fix for this? Any help is appreciated.
Thanks
It looks like you're running into https://issues.jenkins-ci.org/browse/JENKINS-22734, which appeared in version 1.560 and will be fixed in 1.563.
It's always a good idea to browse the Jenkins change log, especially the Community Ratings section, when you install a new version.
Whenever the Hudson master runs out of disk space, slaves will disconnect and have to be restarted.
You need to check the Hudson master box and see how much space is allocated to the drive Hudson is running on.
Another thing to note: even if a job runs on a slave, artifacts are always archived on the master, so space on the master should be allocated accordingly.
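To check, something along these lines on the master works; the path is an assumption, so adjust it to your actual HUDSON_HOME:

    df -h /var/lib/hudson                            # free space on the volume holding Hudson
    du -sh /var/lib/hudson/jobs/* | sort -h | tail   # biggest jobs by archived data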
I ran into this issue with version 1.560 of Jenkins. For now I have disabled archiving of the Maven artifacts in the "Build" section.
We have a Jenkins master running on a Linux system. The same master machine is also attached as a node using "Launch slave via execution of a command on the master", with the same FS root as JENKINS_HOME. The command is ssh "machine_name" "shell_script".
The shell script gets the latest slave.jar and runs it.
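For illustration, such a script might look like the sketch below; the master URL is an assumption. With the "execution of a command on the master" launcher, the agent talks to the master over the command's stdin/stdout, so slave.jar needs no extra arguments:

    #!/bin/sh
    # fetch the slave.jar matching the master's version, then run it
    curl -sSfO http://jenkins-master:8080/jnlpJars/slave.jar
    exec java -jar slave.jar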
The master has 0 executors; the node has been given 7. I'm seeing weird behavior in the builds, such as workspaces being deleted once a day. I'm not sure whether this is related to the way the Jenkins master/slave is configured.
Any ideas if this is a supported configuration?
We have a Jenkins project. The use case:
1. Jenkins triggers the build.
2. The slave agent builds the application.
3. The server hosting the slave agent reboots (for any reason: a power problem, somebody rebooted it, a resource shortage, and so on).
4. Jenkins then reports a failed build.
How can we automatically relaunch the application build in Jenkins once the slave agent has recovered from the failure?
There are two aspects to this issue:
1. The Jenkins server needs to reschedule the build that failed when the slave machine crashed:
   - Install the Naginator Plugin.
   - Set it to rebuild whatever job you have running on the problematic slave.
2. The Jenkins slave needs to restart automatically as soon as its host is up again:
   - On Windows, for example, install the slave agent as a service that starts automatically (see the sketch below).
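A sketch of that, from an administrative prompt; the service name is a placeholder for whatever your slave service is registered as:

    rem start the agent service automatically at boot, and restart it if it dies
    sc config jenkinsslave start= auto
    sc failure jenkinsslave reset= 86400 actions= restart/60000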
Note that the Naginator Plugin doesn't know what caused the build to fail, so it will try to rebuild any build that fails. To solve this, scan the log for an indication that the slave crashed and set a regular expression (in the Naginator) to catch it.
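For example, Naginator's option to rerun only when the build log matches a regular expression could be pointed at slave-disconnect messages. The exact message text varies across Jenkins versions, so treat these patterns as assumptions to verify against your own failure logs:

    Slave went offline during the build|ChannelClosedException|Connection was broken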
Cheers