I'm having trouble getting consistent benefit from ccache in my jenkins pipeline builds. I'm setting CCACHE_BASEDIR to the parent directory of my current build directory (this works out to something like /opt/jenkins/workspace). Given this basedir, I would expect all PR/branch builds that share this common parent to be able to find hits in the cache, but alas they do not. I do see cache hits for subsequent builds in a given directory (if I manually rebuild a particular PR, for example), which implies that CCACHE_BASEDIR is not working like I would expect.
To further diagnose, I've tried setting CCACHE_LOGFILE and although that file is produced by the build, it is effectively empty (it contains only two lines indicating the version of ccache).
Can anyone suggest specific settings or techniques that have worked to get maximum benefit from ccache in jenkins pipelines, or other things to try to diagnose the problem? What might cause the empty ccache log file?
I'm running ccache 3.3.4.
The solution to the first part of the question is probably to set hash_dir = false (or CCACHE_NOHASHDIR=1 if using environment variables), or to pass -fdebug-prefix-map=old=new to relocate debug info to a common prefix (e.g. -fdebug-prefix-map=$PWD=.). More details can be found in the "Compiling in different directories" section of the ccache manual.
Regarding CCACHE_LOGFILE: I've never heard about that problem before (I'm the ccache maintainer, BTW), but if you set CCACHE_LOGFILE to a relative file path, try setting it to an absolute path instead.
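For completeness, here is an untested declarative pipeline sketch combining both suggestions; the paths are illustrative, and CCACHE_NOHASHDIR=1 is the environment equivalent of hash_dir = false:

pipeline {
    agent any
    environment {
        // Common parent of all PR/branch workspaces (illustrative path)
        CCACHE_BASEDIR = '/opt/jenkins/workspace'
        // Equivalent to hash_dir = false in ccache.conf
        CCACHE_NOHASHDIR = '1'
        // An absolute path, as suggested above (illustrative)
        CCACHE_LOGFILE = '/opt/jenkins/ccache.log'
    }
    stages {
        stage('Build') {
            steps {
                // Zero the statistics, build, then print hit/miss counters
                sh 'ccache -z && make && ccache -s'
            }
        }
    }
}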
I want to display the non-code differences between the current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables, including Jenkins parameters (set), possibly with some filter
Version of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the part that is not clear is how to generate the diff/change report of the differences between the current build and the last successful build.
Please note that I am looking for a pipeline-compatible solution, and ideally I would prefer to make this report easily accessible in the Jenkins UI, like we currently have with SCM changelogs.
Or to rephrase this: how do I create a build manifest and diff it against the last known successful one? If anyone knows a standard manifest format that could easily combine all this information, that would be great.
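To make the goal concrete, here is a rough sketch of the direction I have in mind, assuming the Copy Artifact plugin is installed (the file names are made up):

node {
    // Collect the manifest pieces described above into one file
    sh '''
        set | sort > manifest.txt
        rpm -qa | sort >> manifest.txt
        pip freeze >> manifest.txt
    '''
    // Pull the manifest archived by the last successful run, if there is one
    try {
        copyArtifacts projectName: env.JOB_NAME, selector: lastSuccessful(),
                      filter: 'manifest.txt', target: 'last-good'
        // diff exits non-zero when the files differ, so tolerate that
        sh 'diff -u last-good/manifest.txt manifest.txt > manifest-diff.txt || true'
        archiveArtifacts artifacts: 'manifest-diff.txt'
    } catch (ignored) {
        echo 'No previous successful build to compare against'
    }
    archiveArtifacts artifacts: 'manifest.txt'
}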
You always ask the most baller questions, nice work. :)
We always try to push as many things into code as possible, because of the same sort of lack of traceability you're describing with non-code configuration. We start by using Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control). For system tool packages, we get those into the app by using Docker and inheriting from a specific tag of the Docker base image. So even if we want to change system packages, or even the Python version, that manifests as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by Docker, to address your other example. There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
There will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so I hope someone pipes up with a clean solution to your question.
I'm slowly replacing traditional jobs with Jenkins pipelines. We've got some jobs which I've previously optimised by deleting only some key files from the workspace of the previous build, so that we end up with incremental builds rather than full ones. For the record, this makes our basic builds 3–4 times faster, and I'm keen to preserve that.
I need to delete the files (simplifying the real scenario) whose paths contain "cache". I currently use "**/cache" as an include parameter to the Delete Workspace build step. Question: is there something similar already in the pipeline steps? I could probably do it using find or similar, but this has to work on Windows too, and that has portability implications.
You could use the cleanWs step to clean up certain parts of the workspace. Note that it is provided by a plugin: the Workspace Cleanup Plugin.
You can generate the exact syntax for this step with the Snippet Generator at your-jenkins-url/pipeline-syntax/
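For example, something like this untested sketch should limit the cleanup to the cache files from the question:

// Delete only paths matching the include pattern, leaving the
// rest of the workspace in place for the incremental build
cleanWs deleteDirs: true,
        patterns: [[pattern: '**/cache', type: 'INCLUDE']]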
Having used cleanWs, I've since switched away from it; instead I use the file operations steps to explicitly delete the files concerned.
The file operations act there and then, whereas cleanWs acts at the end of a run and can't be relied upon if that run went wrong and did not finish (e.g. due to a syntax error) or was running a different script.
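In pipeline syntax that looks roughly like this (File Operations plugin; untested sketch):

// Delete the cache files immediately, at this point in the run
fileOperations([fileDeleteOperation(includes: '**/cache')])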
I'm having a very hard time finding any information about this. I've just created a Build-Deploy-Test build definition for one of our main projects, but when the workflow runs, it reports a wrong value for the "$(BuildLocation)" macro, which breaks everything from the deployment phase onwards (the tests also try to run against this wrong path).
I know what is causing the problem, but I don't know how to fix it. The build definition our lab one points to is configured to build the 'Release' configuration of our solutions. The drop folder is "\\outputServer\drops". I expected the BuildLocation macro to then return "\\outputServer\drops\<BuildName>\<BuildNameFormat>", but the macro is returning "\\outputServer\drops\<BuildName>\<BuildNameFormat>\Release" instead.
I initially thought this was an incompatibility between the LabDefaultTemplate.11.xaml template (which is the one I'm trying to use) and the old DefaultTemplate.xaml, on which I based our custom template. I tried updating our custom template to take the new default (DefaultTemplate.11.1.xaml) as a base, but after converting the template the problem persists.
Even after looking at the code of DefaultTemplate.11.1, I still don't see it filter the output by configuration name at all. The only processing in there is based on the solution or project name, and it is disabled by default (controlled by the 'Solution Specific Build Outputs' option under the Advanced category on the build definition configuration).
Why is it assuming that the drop folder ends with 'Release' when the dropped outputs are not placed in that folder at all? I managed to get the deployment scripts to run fine by appending ".." to the path, like this: $(BuildLocation)\..\myScript. But when the workflow tries to run the automated tests, it seems to use this same macro and obviously doesn't find the test dlls.
It would be possible to work around this by not specifying a build configuration on the 'Items to Build' element in the definition options (thus letting it choose the default ones), but specifying the configuration was a conscious decision on our part, because there are differences in the files and some configs are transformed differently when the project is built in Release mode.
I'm currently using VS2012 Update 3/TFS 2012 Update 2, if this helps any.
Update:
OK, I found where it is doing this, inside the template itself. The fact that the lab workflow is very simple helped here.
Inside the 'Compute build location needed' If statement, there is an assignment that seems to be doing this weird concatenation. Here is the code:
If(LabWorkflowParameters.BuildDetails.Configuration Is Nothing,
   BuildLocation,
   If(LabWorkflowParameters.BuildDetails.Configuration.IsEmpty
      Or (SelectedBuildDetail.Information.GetNodesByType(Microsoft.TeamFoundation.Build.Common.InformationTypes.ConfigurationSummary, True)).Count = 1,
      BuildLocation,
      If(LabWorkflowParameters.BuildDetails.Configuration.IsPlatformEmptyOrAnyCpu,
         BuildLocation + "\" + LabWorkflowParameters.BuildDetails.Configuration.Configuration,
         BuildLocation + "\" + LabWorkflowParameters.BuildDetails.Configuration.Platform + "\" + LabWorkflowParameters.BuildDetails.Configuration.Configuration)))
As best I can read it, it appends the configuration name (and possibly the platform) to the build location whenever a specific configuration was requested and there is more than one configuration summary. This behavior seems to be a bug to me, since the build template itself (not the lab one) does NOT do this concatenation. How can the LabTemplate assume this kind of thing?
Just removing the activity from the LabDefaultTemplate build process template seems to work.
I'm not sure what the meaning or purpose is of that Assign activity, but it seems to work fine for us without it.
Given that:
There seems to be no easy way to get a list of "changed" files in Jenkins (see here and here)
There seems to be no fast way to get a list of files changed since label xxxx
How can I go about optimising our build so that, when we run PMD, it only runs against files that have been modified since the last green build?
Backing up a bit… our PMD takes 3–4 minutes to run against ~1.5 million lines of code, and if it finds a problem the report invariably runs out of memory before it completes. I'd love to trim a couple of minutes off of our build time and get a good report on failures. My original approach was that I'd:
get the list of changes from Jenkins
run PMD against a union of that list and the contents of pmd_failures.txt
if PMD fails, include a list of failing files in pmd_failures.txt
More complicated than I'd like, but worth it for a build that is faster but still reliable.
Once I realised that Jenkins was not going to easily give me what I wanted, I realised that there was another possible approach. We label every green build. I could simply get the list of files changed since the label and then I could do away with the pmd_failures.txt entirely.
No dice. The idea of getting a list of files changed since label xxxx from Perforce seems to have never been streamlined from:
$ p4 files //path/to/branch/...#label > label.out
$ p4 files //path/to/branch/...#now > now.out
$ diff label.out now.out
Annoying, but more importantly even slower for our many thousands of files than simply running PMD.
So now I'm looking into running PMD in parallel with other build stuff, which still wastes time and resources and makes our build more complex. It seems daft to me that I can't easily get a list of changed files from Jenkins or from Perforce. Has anyone else found a reasonable workaround for these problems?
I think I've found the answer, and I'll mark my answer as correct if it works.
It's a bit more complex than I'd like, but I think it's worth the 3–4 minutes saved (and the potential memory issues avoided).
At the end of a good build, save the good changelist number as a Perforce counter (post-build task). It looks like this:
$ p4 counter last_green_trunk_cl %P4_CHANGELIST%
When running PMD, read the counter into the property last.green.cl and get the list of files from:
$ p4 files //path/to/my/branch/...#${last.green.cl},now
//path/to/my/branch/myfile.txt#123 - edit change 123456 (text)
//path/to/my/branch/myotherfile.txt#123 - add change 123457 (text)
etc...
(have to parse the output)
Run PMD against those files.
That way we don't need the pmd_failures.txt and we only run PMD against files that have changed since the last green build.
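In scripted pipeline form, the whole scheme might look roughly like this sketch; the branch path and counter name are taken from above, the PMD invocation is left abstract, and P4_CHANGELIST is assumed to be provided by the Perforce plugin:

node {
    // Read the changelist of the last green build from the counter
    def lastGreenCl = sh(script: 'p4 counter last_green_trunk_cl', returnStdout: true).trim()
    // List files changed since then, using the same range spec as above,
    // then strip everything from '#' onwards to keep just the depot paths
    def out = sh(script: "p4 files //path/to/my/branch/...#${lastGreenCl},now",
                 returnStdout: true).trim()
    def changedFiles = out.readLines().collect { it.split('#')[0] }
    // ... run PMD against changedFiles here ...
    // On success, advance the counter to the changelist that was just built
    sh "p4 counter last_green_trunk_cl ${env.P4_CHANGELIST}"
}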
[EDIT: changed it to use p4 counter, which is way faster than checking in a file. Also, this was very successful so I will mark it as answered]
I'm not 100% sure, since I've never used Perforce with Jenkins, but I believe Perforce passes the changelist number through the environment variable $P4_CHANGELIST. With that, you can run p4 filelog -c $P4_CHANGELIST, which should give you the files from that particular changelist. From there, it shouldn't be hard to script something up to feed just the changed files (plus the old failures) into PMD.
I haven't used Perforce in a long time, but I believe the -Ztag parameter makes p4 output easier to parse from the various scripting languages.
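If you go that route, pulling the file list out of the tagged output from a pipeline could look something like this sketch (the branch path is made up; the field name follows the usual -Ztag conventions):

// -Ztag prints one '... field value' line per field; 'depotFile' carries the path
def out = sh(script: "p4 -Ztag filelog -c ${env.P4_CHANGELIST} //path/to/branch/...",
             returnStdout: true)
def files = out.readLines()
               .findAll { it.startsWith('... depotFile ') }
               .collect { it.substring('... depotFile '.length()) }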
Have you thought about using automatic labels? They're basically just an alias for a changelist number, so it's easier to get the set of files that differ between two automatic labels.
I've recently been trying to set up PHPUnderControl, a continuous integration server based on CruiseControl. Part of the checks I'd like to run is PHP_CodeSniffer (PHPCS), to detect "code smell". However, letting this run on my codebase results in an extreme number of problems being detected. Most of these are found in libraries that I've included in my SVN repository through an svn:externals directive, and hence aren't under my control.
Is it possible to tell PHP_CodeSniffer to ignore part of my SVN repository, while still validating other parts?
Found the solution: one can add the --ignore switch to the set of arguments passed to phpcs, e.g. something like --ignore=*/externals/* to skip the libraries pulled in via svn:externals.
[--ignore=<patterns>]
Use
$ phpcs --help
to display all information about command-line usage.