I build an OpenEmbedded image with a Jenkins pipeline. The pipeline ends successfully (according to the logs) and the finished workspace is about 40 GB. The problem is that even though the pipeline finishes, the job freezes for at least several hours. I am not sure if it ever recovers, as I have always killed it (I cannot afford to have Jenkins blocked for several days).
I don't observe this when building something small (~1 GB). I also don't observe it when I wipe out the entire build as the last step. And I don't have any deployment explicitly set up.
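For reference, the wipe-out variant is roughly the following sketch (deleteDir() is the built-in step; cleanWs() from the Workspace Cleanup plugin would do the same, and the bitbake command is only a placeholder):

pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // placeholder for the actual OpenEmbedded build command
                sh 'bitbake core-image-minimal'
            }
        }
    }
    post {
        always {
            // remove the ~40 GB workspace at the end of the run
            deleteDir()
        }
    }
}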
What could be wrong?
Fixed. The problem was caused by the Cppcheck plugin, which was left activated for all jobs on Jenkins. After disabling the plugin, the big jobs end normally.
There was a question about how I figured it out. I realized that there was an active plugin performing post-processing on the build output, and that the build output was really, really big. So I tried disabling the plugin, and it helped.
Lesson learned: don't put any post-processing plugins under the "Overview and statistics of all builds" view, as they are activated for all builds, even the ones where you don't want them.
My organization just updated our Jenkins instance and we are now seeing problems with our pipeline jobs. All of our declarative pipeline jobs now just stop at "[Pipeline] Start of Pipeline" and hang there indefinitely.
When looking at what the job is doing in terms of executors, it seems to be just sitting idle on the controller node and is not even in the build queue. We have 4 executors on the built-in node.
Background info: we are running our Jenkins instance on Windows Server 2012 on premises. Our recent update was to 2.362.2. I can't remember exactly which version we updated from, but it was over 2 years old.
Does anyone have any troubleshooting steps we could try? We tried downgrading the relevant pipeline plugins, but that did not seem to work. We have also tried looking at the logs, but I am unsure if I have found any relevant information. If anyone needs more context or info, I'd be happy to provide it. I just don't know where to even begin troubleshooting this problem.
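One generic way to see what the controller thinks is happening is to dump the queue and executor states from Manage Jenkins > Script Console (a sketch using only standard core API calls, nothing plugin-specific):

import jenkins.model.Jenkins

// Dump the build queue and what each executor is doing
def jenkins = Jenkins.get()
println "Queue: " + jenkins.queue.items.collect { it.task.name }
jenkins.computers.each { computer ->
    computer.executors.each { executor ->
        println "${computer.name} executor #${executor.number}: " +
                (executor.busy ? executor.currentExecutable : 'idle')
    }
}

A Java thread dump of the controller (the built-in <jenkins-url>/threadDump page) is another common way to see where a stuck pipeline run is waiting.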
I am relatively new to Jenkins.
I created a declarative pipeline in Jenkins where users are asked to enter their branch name and then Jenkins builds that specific branch (for example, origin/mybranch).
This allows me to run a quick set of tests for specific branches.
The developers can run the pipeline multiple times, and today I block multiple such pipelines from running simultaneously, because if they do, one overwrites the other.
This happens because the first pipeline writes to c:\Jenkins\workspace\QuickBuild, and when another such job runs, it writes to that exact same folder, killing the original run.
Blocking was the solution I found to prevent this, but I would like it so that when one run is finishing up (using fewer than 8 cores), the next run in the queue already starts running with whatever cores are freed up.
I would have thought this would be a basic concept in Jenkins.
Am I missing something? Am I doing it wrong?
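For reference, the setup is roughly shaped like the sketch below (the repository URL and test command are illustrative; the blocking shown here uses the built-in disableConcurrentBuilds() option, which is one common way to do it):

pipeline {
    agent any
    options {
        // one common way to block concurrent runs of this pipeline
        disableConcurrentBuilds()
    }
    parameters {
        string(name: 'Branch', defaultValue: 'master', description: 'Branch to test')
    }
    stages {
        stage('Checkout') {
            steps {
                // illustrative repository URL
                git url: 'https://example.com/repo.git', branch: params.Branch
            }
        }
        stage('Quick tests') {
            steps {
                // placeholder for the quick test set
                bat 'run_tests.bat'
            }
        }
    }
}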
Following MaratC and Zett42's suggestions, I ended up adding this to my script:
agent {
    node {
        customWorkspace "${params.Branch}"
    }
}
This causes Jenkins to create each build in a different folder, so they don't step on each other's toes.
The only downside is that you can't build the same branch simultaneously but that's a corner case.
Also, I could add a random number to the workspace to enable this as well.
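If simultaneous builds of the same branch ever become a real need, one option is to fold the build number into the workspace path instead of a random number, keeping the rest of the agent block as above (a sketch, assuming BUILD_NUMBER is resolvable at that point):

agent {
    node {
        // each run gets its own folder, even for the same branch
        customWorkspace "${params.Branch}_${env.BUILD_NUMBER}"
    }
}

The trade-off is that per-build workspaces pile up on disk unless they are cleaned afterwards.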
I am running a Jenkins multibranch job, and suddenly it does not let me save configuration changes; the page keeps loading without ever timing out.
Can someone please help me with this?
You could have a look at the Jenkins master machine's CPU and memory and see what is consuming them. I have seen this happen when the CPU is nearly at 100%. In that case, restarting the Jenkins process or the Jenkins master machine could help.
Try to remember (or ask colleagues) whether there have been any recent changes to the Jenkins master machine. We had similar issues after installing plugins.
Avoid executing jobs on the Jenkins master; use slave agents instead.
You may need to clean up old builds if you are not doing this already.
In my case, after disabling/enabling all plugins one by one, it turned out to be the "AWS SQS Build Trigger Plugin" causing the "Save"/"Apply" buttons to move around and not be functional.
We are using Jenkins (on Linux) to manage our Maven builds.
Our users can create their own jobs, and sometimes (3 or 4 times per year) they make a mistake and the job generates a huge log file (79 GB the last time...).
I had a look at existing plugins and I didn't find anything to monitor the Jenkins build log size.
For example, if the log size exceeds 200 MB while the job is running, I would like the build to be stopped automatically.
If you have developed such shell scripts or Jenkins plugins, can you share your solution?
Thanks :)
You can use the Logfilesizechecker Plugin:
This plugin monitors the size of the output file of a build and aborts the build if the log file gets too big.
Or, if this also has an impact on the runtime, the Build-timeout Plugin:
This plugin allows you to automatically abort a build if it's taking too long. Once the timeout is reached, Jenkins behaves as if an invisible hand has clicked the "abort build" button.
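For pipeline jobs, the same runtime cut-off can also be declared directly in the Jenkinsfile instead of through the plugin's job configuration (a sketch; the two-hour limit and the Maven command are only examples):

pipeline {
    agent any
    options {
        // abort the whole run if it takes longer than this
        timeout(time: 2, unit: 'HOURS')
    }
    stages {
        stage('Build') {
            steps {
                // illustrative Maven build
                sh 'mvn -B verify'
            }
        }
    }
}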
How would someone clean up a Jenkins job such that the build stability rating is reset and not affected by previous builds? I created a build job and through trial and error, I finally got the job to compile/build correctly. However, I don't want all the previous test builds to affect the build stability rating. I tried deleting all the builds and restarting Jenkins but it still says 20 of the last 25 builds failed. I looked in the $JENKINS_HOME directory (~/.jenkins) and couldn't find anything regarding build stability. Thanks.
When you configure the job, you can tell it how long to keep builds for, either in days or in number of builds. Set this to one build to clear out the history, then set it back again.
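If the job happens to be a pipeline, the same retention setting can be kept in the Jenkinsfile so it survives reconfiguration (a sketch; '1' keeps only the most recent build, and the stage content is a placeholder):

pipeline {
    agent any
    options {
        // keep only the most recent build; daysToKeepStr works the same way for days
        buildDiscarder(logRotator(numToKeepStr: '1'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps here'
            }
        }
    }
}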