We are using Jenkins (on Linux) to manage our Maven builds.
Our users can create their own jobs, and occasionally (3 or 4 times per year) someone makes a mistake and the job generates a huge log file (79 GB the last time...).
I had a look at the existing plugins and didn't find anything that monitors the Jenkins log size.
For example, if the log size exceeds 200 MB while the job is running, I would like to stop the build automatically.
If you developed such shell scripts or Jenkins plugins, can you share your solution?
Thanks :)
You can use the Logfilesizechecker Plugin:
This plugin monitors the size of the output file of a build and aborts the build if the log file gets too big.
Or, if this also has an impact on the runtime, the Build-timeout Plugin:
This plugin allows you to automatically abort a build if it's taking too long. Once the timeout is reached, Jenkins behaves as if an invisible hand has clicked the "abort build" button.
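For context, here is a minimal sketch of the check the Logfilesizechecker plugin performs, assuming a standard Linux layout where build logs live under $JENKINS_HOME/jobs/&lt;job&gt;/builds/&lt;n&gt;/log (the plugin itself is configured per job in the UI; the helper name and paths here are illustrative assumptions):

```shell
check_logs() {
    # $1 = directory to scan, $2 = size limit in KB.
    # Flags every build log file exceeding the limit.
    find "$1" -type f -name log 2>/dev/null | while read -r f; do
        size_kb=$(du -k "$f" | cut -f1)
        [ "$size_kb" -gt "$2" ] && echo "oversized: $f (${size_kb} KB)"
    done
}

# Example: flag any build log over 200 MB under JENKINS_HOME.
# To actually stop a build you would then call the Jenkins REST API, e.g.:
#   curl -X POST "$JENKINS_URL/job/<name>/<build>/stop" --user user:token
check_logs "${JENKINS_HOME:-/var/lib/jenkins}/jobs" $((200 * 1024))
```

The plugin does this natively and aborts the build for you; a cron-driven script like this is only worth it if you need custom behavior.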
I have set up around 10 Jenkins jobs on a Windows server, and each takes roughly 1 hour to finish.
But one of the jobs, which takes more than 1.5 hours, frequently stops with a StackOverflowError.
I tried increasing the heap size to 1024m, as Jenkins throwing java.lang.StackOverflowError -- Not just on unstash suggests.
I even reduced the number of executors from 10 to 5.
I also set up the JavaMelody plugin in Jenkins to get a basic report and to see whether the heap overloads.
I haven't checked JavaMelody while the jobs are running, but when I check the status afterwards I don't see any heap overload issues.
I checked the VM as well; it has more than enough space.
But still, either the job suddenly stops without any error, or it shows this StackOverflowError.
The job configuration is not complex either:
First I delete the files in the workspace, then I run Test.exe (a tool that was developed to run API testing) with its parameters, redirect its standard output to a log file, and finally echo that log back into the Jenkins console (because I wanted to save the Jenkins console output to a separate log file as well).
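As an aside on that last step: instead of writing the tool's output to a file and then echoing the whole file back to the console, you can duplicate the stream in one step with tee, which halves the console traffic. Test.exe and its parameters are placeholders from the question, and the helper name run_and_log is made up for illustration:

```shell
run_and_log() {
    # Run a command, streaming its output to the console while also
    # saving it to a log file ($LOGFILE, defaulting to test-output.log).
    "$@" 2>&1 | tee "${LOGFILE:-test-output.log}"
}

# In the Jenkins build step this would be something like (placeholders):
# run_and_log ./Test.exe /param1 /param2
```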
Are there any workarounds or solutions to resolve this?
We started using Jenkins a few months ago, and the size of the home directory is now about 50 GB. I noticed that the jobs and workspace directories are about 20 GB each. How can I clean them? What kind of strategy should I use?
Consider the various Jenkins areas that tend to grow excessively. The key areas are: system logs, job logs, artifact storage and job workspaces. The options to best manage each of these are detailed below.
System logs
System logs may be found in <JENKINS_HOME>/logs or /var/log/jenkins/jenkins.log, depending on your installation. By default, Jenkins does not always include log rotation (logrotate), especially if running straight from the war. The solution is to add logrotate. This CloudBees post and my S/O response add details.
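A minimal logrotate sketch, assuming the log lives at /var/log/jenkins/jenkins.log; drop it into /etc/logrotate.d/jenkins (the rotation counts and size threshold are examples to tune for your installation):

```
/var/log/jenkins/jenkins.log {
    weekly
    rotate 8
    size 100M
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

copytruncate avoids having to signal Jenkins to reopen its log file after rotation.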
You can also set the Jenkins System Property hudson.triggers.SafeTimerTask.logsTargetDir to relocate the logs outside the <JENKINS_HOME>. Why is answered later.
Job Logs
Each job has an option to [ X ] Discard old builds. As of LTS 2.222.1, Jenkins introduced a Global Build discarder (pull #4368) with similar options and default actions; this is a global setting. Prior to that, job logs (and artifacts) were retained forever by default (not good).
Advanced options can manage artifact retention (from the post-build action "Archive the artifacts") separately.
What's in Jobs directory?
The Jobs directory contains a directory for every job (and folders, if you use them). Inside each is the job config.xml (a few KB in size), plus a builds directory. builds has a numbered directory for each retained build, holding the build log, a copy of the config.xml at runtime, and possibly some additional files of record (changelog.xml, injectedEnvVars.txt). If you chose the Archive the artifacts option, there's also an archive directory, which contains the artifacts from that build.
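To see which jobs are actually consuming the space, a quick disk-usage pass over that directory helps. A sketch (the helper name and the 20-entry cutoff are arbitrary choices):

```shell
biggest_dirs() {
    # Print the largest immediate subdirectories of $1, biggest first,
    # limited to the top $2 entries (sizes in KB).
    du -sk "$1"/* 2>/dev/null | sort -rn | head -n "$2"
}

# Example: the 20 largest job directories under JENKINS_HOME
biggest_dirs "${JENKINS_HOME:-/var/lib/jenkins}/jobs" 20
```

Running the same check against the workspace directory tells you whether logs/artifacts or checkouts are the real culprit.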
The Jenkins System Property jenkins.model.Jenkins.buildsDir lets you relocate the builds outside the <JENKINS_HOME>.
Why Relocate logs outside <JENKINS_HOME>?
I would strongly recommend relocating both the system logs and the job / build logs (and artifacts). By moving the system logs and build logs (and artifacts, if ticked) outside of <JENKINS_HOME>, what's left is the really important stuff to back up and restore Jenkins and jobs in the event of disaster or migration. Carefully read and understand the steps "to support migration of existing build records" to avoid build-related errors. It also makes it much easier to analyze which job logs are consuming all the space and why (ie: logs vs artifacts).
Workspaces
Workspaces are where the source code is checked out and the job (build) is executed. Workspaces should be ephemeral. Best practice is to start with an empty workspace and clean up when you are done - use the Workspace Cleanup plugin ( cleanWs() ) unless you have a reason not to.
The OP's mention of workspaces on the Jenkins controller suggests jobs are being run on the master. That's not a good (or secure) practice, though lightweight pipeline checkouts always execute on the master, and misconfigured pipelines can also fall back to it (will try to find the reference). You can set up a node physically running on the same server as the master for better security.
You can use cleanWs() EXCLUDE and INCLUDE patterns to selectively clean the workspace if deleting everything is not viable.
There are two Jenkins System Properties to control the location of the workspace directory. For the master: jenkins.model.Jenkins.workspacesDir and for the nodes/agents: hudson.model.Slave.workspaceRoot. Again, as these are ephemeral, get them out of <JENKINS_HOME> so you can better manage and monitor.
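The relocation properties mentioned above are passed as JVM options when starting Jenkins, e.g. in /etc/default/jenkins or your systemd unit. A sketch (the /data/... paths are examples; ${ITEM_FULL_NAME} is a placeholder Jenkins expands per job, so it must reach Jenkins unexpanded by the shell, hence the single quotes):

```
java -Dhudson.triggers.SafeTimerTask.logsTargetDir=/data/jenkins-logs \
     '-Djenkins.model.Jenkins.buildsDir=/data/jenkins-builds/${ITEM_FULL_NAME}' \
     '-Djenkins.model.Jenkins.workspacesDir=/data/jenkins-ws/${ITEM_FULL_NAME}' \
     -Dhudson.model.Slave.workspaceRoot=workspace \
     -jar jenkins.war
```

Note that hudson.model.Slave.workspaceRoot is a directory name relative to the agent's remote root, not an absolute path.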
Finally, one more space consideration...
Both Maven and npm cache artifacts in a local repository, typically located in the user's $HOME directory. If you increment versions often, that content will get stale and bloated. It's a cache, so take a time hit every once in a while and purge it, or otherwise manage the content.
However, it's possible to relocate the cache elsewhere through Maven and npm settings. Also, if running a Maven step, every step has an Advanced option to use a private repository, located within the job's workspace. The benefit is that you know exactly what your build is using; no contamination. The downside, though, is massive duplication and wasted space if all jobs have private repos and you never clean them out or delete the workspaces, or longer build times every time if you do clean. Consider using cleanWs() or a separate job to purge as needed.
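Both relocations are one-liners (the /data/... paths are examples; -Dmaven.repo.local is a per-invocation override, while setting <localRepository> in ~/.m2/settings.xml makes it permanent):

```
mvn -Dmaven.repo.local=/data/maven-repo clean install
npm config set cache /data/npm-cache
```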
Workspaces can be cleaned before and/or after any execution; I recommend doing both. After the build, clean only on successful builds - in case of errors, you can enter the workspace and look for clues there. In a pipeline you do this with the cleanWs() step.
For the jobs directory, you can configure on each job how long / how many executions to store. This is more complicated because it depends on what you want to keep. For example, if there are a lot of builds and you don't mind deleting that information, you could keep 10 builds for 30 days. That configuration is in the job configuration under "Discard old builds": "Days to keep builds" and "Max # of builds to keep".
My suggestion is to start with larger numbers and then test how it behaves.
I want to have one Jenkins job control the build number of another job but without the inconvenience of reloading the entire project configuration from disk. I have seen that it's easily possible to directly update the nextBuildNumber file of the target job (I can do this as a build step of Job A) but this does not take effect immediately. Restarting Jenkins or even reloading the Jenkins configs from disk takes way too long and can only be done when there are no builds in progress.
I have tried the groovy script mentioned in the below post by running it from the Manage Jenkins > Script Console. The same post also suggests the script can be saved as a file and run from the CLI. Can it be run from a build step?
I want Job A to determine Job B's next build number and set it so that Job B can run (later in the same day) with the desired build number.
https://stackoverflow.com/a/20077362/4306857
Perhaps I should clarify. I'm not familiar with Groovy, so I'm looking at the various build step options like "Execute Windows batch command", which I have a lot of experience with. I can see an "Invoke Gradle script" option, so I was wondering whether there might be a plugin that can run Groovy scripts?
The reason this requirement has arisen is that we are compiling a product for two different platforms. We want to compile the codebase almost simultaneously for both platforms with two jobs (A & B) which will both update the JIRA cases included in the builds. We feel it will be less confusing to have both these jobs running with the same build number so that when we talk about a particular issue being addressed in build #75, say, we won't have to qualify that by stating the full job name. If JOB-A build #75 and JOB-B build #75 are both compiled on the same day from the same codebase we can test and compare the results of both builds with far less confusion than if the build numbers go out of sync.
Obviously, in the short term we will use the Set Next Build Number plugin to manually keep the build numbers in step but we want to automate this if possible.
Depends on whether or not you are using Version Number plugin:
[ X ] Create a formatted version number
Build Display Name [ X ] Use the formatted version number for build display name.
Assuming you are NOT, this groovy script will do:
def nextNumber = 42
def job = Jenkins.instance.getItemByFullName('path/to/jobName')
job.nextBuildNumber = nextNumber
job.save()
You will need the Groovy plugin for that. Place the script in an "Execute system Groovy script" build step, and make sure to choose system Groovy: that executes on the master, where the job config and metadata are stored, so you have access to the Jenkins internals and data.
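To run the same script outside the UI (the CLI route mentioned in the linked post), an invocation like this works; the URL, credentials and script filename are assumptions for illustration:

```
java -jar jenkins-cli.jar -s http://jenkins.example.com/ -auth user:apitoken \
    groovy = < setNextBuildNumber.groovy
```

The `groovy =` form tells the CLI to read the script from standard input.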
I'd suggest you should really be using the above options rather than relying on keeping both jobs in sync via a script or manually. You can then pass the label to be used from the first job as a parameter to the second job. That would also require the Parameterized Trigger plugin as well as the Version Number plugin.
You can even use ${BUILD_DATE_FORMATTED} or ${BUILD_TIMESTAMP}, etc.
Postdate: thinking about the problem space from a different perspective, that of running two or more builds on different platforms (simultaneously), there's a plugin for that: Matrix Project. You can run it as a freestyle job on multiple nodes, or use matrix building in a scripted pipeline, which is excellently described elsewhere. Not sure how that would tie in to JIRA.
I build an openembedded image with a Jenkins pipeline. The pipeline ends successfully (according to the logs). The finished workspace is about 40 GB. The problem is that even though the pipeline finishes, the job freezes for at least several hours. I am not sure it ever recovers, as I have always killed it (I cannot afford to have Jenkins blocked for several days).
I don't observe this when building something small (~ 1GB). I also don't observe it when I wipe out the entire build as the last step. And I don't have any deployment explicitly set up.
What could be wrong?
Fixed. The problem was caused by cppcheck plugin, which was left activated for all jobs on Jenkins. After disabling the plugin the big jobs end normally.
There was a question about how I figured it out. I realized that there is an active plugin that performs a post-processing on the build output, and that the build output is really, really big. So I tried disabling the plugin and it helped.
Lesson learned: don't enable post-processing plugins for all builds (the Overview and statistics of all builds view), as they are then activated for every build, even the undesirable ones.
Recently our organization decided to move from using Maven/Cargo-plugin to deploy our applications to using Puppet. We still have all of our builds and test jobs in Jenkins. So what I'm trying to figure out is how do I trigger a specific Jenkins job based on a specific line being changed in a puppet manifest? We are using a manifest that has all of our deployed components and their versions. If I change the version of one of the components, I want a specific test job to be triggered based on which component was changed. And eventually I will want to rollback the puppet change if the test fails. Has anyone done something related?
I haven't used it for your specific use case, but for a "pull" scenario where you want to monitor the contents of the Puppet manifest for changes, the Jenkins FSTrigger plugin should work for you as long as your Jenkins job can access the Puppet manifest file. You can set it up to look for changes in the entire file content, or just in a particular part of the contents.
If you want a "push" scenario to trigger a build as soon as the Puppet manifest is changed, you could write a script that runs after the changes are saved, checks which components have been changed, and triggers a build via the Jenkins CLI.
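A sketch of that push-side script, assuming manifest lines of the form component_name => '1.2.3', and per-component job names like test-&lt;component&gt; (the manifest format, job naming scheme, and trigger call are all assumptions to adapt):

```shell
changed_components() {
    # $1 = previous manifest, $2 = current manifest; prints the names of
    # components whose version lines changed, assuming lines like:
    #   component_name => '1.2.3',
    diff "$1" "$2" 2>/dev/null | grep '^>' | sed -E 's/^> *([A-Za-z0-9_]+) *=>.*/\1/'
}

# Trigger the matching Jenkins job for each changed component (the job
# naming and the curl call are assumptions - the Jenkins CLI works too):
for comp in $(changed_components manifest.pp.prev manifest.pp); do
    echo "triggering test-$comp"
    # curl -X POST "$JENKINS_URL/job/test-$comp/build" --user "$CLI_USER:$API_TOKEN"
done
```

For the eventual rollback requirement, the same script could record the previous version line per component so the test job can restore it on failure.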