I'm currently refactoring old Packer code, switching from JSON config files plus a bunch of shell scripts to HCL files that use HCL's features (default variable values, expressions, and so on).
With my old code, I initialized most of the internal variables inside the shell script, and I also used it to display a message containing all those variables just before the Packer build itself.
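Roughly, the old wrapper looked like this (a trimmed sketch; the variable names and values here are only illustrative):

    #!/bin/sh
    # Compute the "internal" variables...
    VM_NAME="debian-$(date +%Y%m%d)"
    ISO_URL="https://cdimage.debian.org/..."   # placeholder value

    # ...print them all before anything starts...
    echo "Building ${VM_NAME}"
    echo "  iso_url = ${ISO_URL}"

    # ...then hand them to Packer.
    packer build \
      -var "vm_name=${VM_NAME}" \
      -var "iso_url=${ISO_URL}" \
      template.json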
Now I'm migrating to HCL (which I think will be easier to maintain than the previous shell scripts), but I miss being able to display a message with the computed locals and the variables the user defined at runtime.
I'm currently trying to do that with a shell or shell-local provisioner, but even if I succeed, the message still appears in the middle or at the end of the build, not before it (in my case, booting the Debian ISO and installing the OS still happen first).
Does anyone have an idea how to do this?
I'm having trouble getting consistent benefit from ccache in my Jenkins pipeline builds. I'm setting CCACHE_BASEDIR to the parent directory of my current build directory (this works out to something like /opt/jenkins/workspace). Given this basedir, I would expect all PR/branch builds that share this common parent to be able to find hits in the cache, but alas they do not. I do see cache hits for subsequent builds in a given directory (if I manually rebuild a particular PR, for example), which implies that CCACHE_BASEDIR is not working as I would expect.
To further diagnose, I've tried setting CCACHE_LOGFILE and although that file is produced by the build, it is effectively empty (it contains only two lines indicating the version of ccache).
Can anyone suggest specific settings or techniques that have worked to get maximum benefit from ccache in jenkins pipelines, or other things to try to diagnose the problem? What might cause the empty ccache log file?
I'm running ccache 3.3.4.
The solution to the first part of the question is probably to set hash_dir = false (CCACHE_NOHASHDIR=1 if using environment variables) or to pass -fdebug-prefix-map=old=new to relocate debug info to a common prefix (e.g. -fdebug-prefix-map=$PWD=.). More details can be found in the "Compiling in different directories" section of the ccache manual.
Regarding CCACHE_LOGFILE: I've never heard of that problem before (I'm the ccache maintainer, BTW), but if you set CCACHE_LOGFILE to a relative file path, try setting it to an absolute path instead.
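Putting both suggestions in one place, the build environment would look something like this (a sketch; the paths are illustrative and the basedir is the one from your question):

    # Equivalent of hash_dir = false: stop hashing the current working
    # directory into the results.
    export CCACHE_NOHASHDIR=1

    # The basedir you already set, used for relative-path rewriting.
    export CCACHE_BASEDIR=/opt/jenkins/workspace

    # Use an absolute path for the log file, not a relative one.
    export CCACHE_LOGFILE=/var/log/ccache.log

    # Alternative to CCACHE_NOHASHDIR: relocate debug info to a common
    # prefix so objects compiled in different directories can match.
    #export CFLAGS="-fdebug-prefix-map=$PWD=. $CFLAGS"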
I want to display non-code differences between current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables, including Jenkins parameters (set), maybe with some filter
Version of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the part that is not clear is how to generate the diff/change report describing the differences between the current build and the last successful build.
Please note that I am looking for a pipeline-compatible solution, and ideally I would prefer to make this report easily accessible in the Jenkins UI, like we currently have with SCM changelogs.
Or to rephrase this: how do I create a build manifest and diff it against the last known successful one? If anyone knows a standard manifest format that can easily combine all this information, that would be great.
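To make this concrete, the capture side and the diff I am after would look something like this sketch (the file names are arbitrary, and fetching the previous manifest into last-manifest/, e.g. via the Copy Artifact plugin, is an assumption rather than something I have working):

    #!/bin/sh
    # Capture a manifest of the current build environment.
    mkdir -p manifest
    env | sort      > manifest/env.txt   # filter sensitive variables as needed
    rpm -qa | sort  > manifest/rpm.txt
    pip freeze      > manifest/pip.txt

    # Assuming the last successful build's manifest has been copied into
    # last-manifest/, the change report is a plain diff. diff exits
    # non-zero when files differ, so don't let that fail the build.
    diff -ru last-manifest manifest > manifest-diff.txt || true

Archiving manifest/ and manifest-diff.txt would at least make the report reachable from the build page, though not as nicely integrated as the SCM changelog.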
you always ask the most baller questions, nice work. :)
we always try to push as many things into code as possible because of the same sort of lack of traceability you're describing with non-code configuration. we start with using Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control). for system tool packages, we get that into the app by using docker and by inheriting from a specific tag of the docker base image. so even if we want to change system packages or even the python version, for example, that would manifest as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by docker, to address your other example. There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
there will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so i hope someone pipes up with a clean solution to your question.
In Jenkins, is it possible to export Windows batch variables as build parameters? I know using build parameters inside Windows batch blocks is possible; I use it a lot.
For example, I have a Windows batch block that creates a variable, say A, like
SET A="MyVar"
Is it possible to use it when running MSBuild, passing it as if it were a build parameter, using the syntax that works for build parameters, i.e. /p:AssemblyName=%A% or /p:AssemblyName=${A}?
Neither of these seem to work (my variable is always empty).
Update: @Tuffwer suggested using the EnvInject plugin. I have been trying it, but so far without success. Here's a sample I created to illustrate my original intent:
I want to create a variable whose contents are determined by a condition applied to one of the build parameters. Then I want to use that variable as a parameter to the MSBuild command line, using the /p:[Key]=[Value] syntax (which requires the Jenkins MSBuild plugin, if I am not wrong).
I still can't get this to work, now using EnvInject. I need to reference the value of a Windows batch variable inside a further build step.
Update II: I turned to the Environment Script Plugin, which did the job for me.
@Tuffwer suggested using the EnvInject plugin; I tried it but did not succeed in attaining what I intended. I edited my post to include my EnvInject attempt, but in the meantime went searching again for other solutions.
This time I came across the Environment Script Plugin, which did the job for me.
Steps:
Tick the Generate environment variables from script option
For each variable you want to "export", you need to issue an echo [varName]=[value] statement.
That's all.
My build then creates an assembly named either TRUE.exe or FALSE.exe, depending on the build parameter MyBool value.
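For reference, the generator script behind that sample can be as simple as the following Windows batch sketch (ASSEMBLY_NAME is an illustrative name; MyBool is the boolean build parameter):

    @echo off
    rem Runs under "Generate environment variables from script".
    rem Every name=value line echoed here becomes a build environment variable.
    if /i "%MyBool%"=="true" (
        echo ASSEMBLY_NAME=TRUE
    ) else (
        echo ASSEMBLY_NAME=FALSE
    )

The MSBuild step can then take /p:AssemblyName=%ASSEMBLY_NAME% in its command-line arguments, because the variable is now a real build environment variable rather than one local to a single batch block.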
I have a significant set of Groovy pipeline scripts for our Jenkins build process. I am in the process of moving those scripts onto another instance, and would like to replicate the set of approved signatures that were not originally whitelisted.
Is it possible to export the list of approved signatures and import them into another instance?
The only other solution I have is to constantly run and rerun the scripts, approving each signature as it breaks the build. Since the scripts are quite complex, and not every run is guaranteed to hit each line, this is not going to be a quick process.
The other option would be to create a master 'white list' script which runs all the currently non-approved scripts again and again until every signature has been approved.
Neither of these options is great, so I'm hoping for a simple import/export to avoid having to do this work altogether, but I certainly can't see an option available to me in the UI.
Cheers
I do not believe there is import/export functionality by default but maybe there's a plugin that'll do it.
If you have access to the directory Jenkins is installed in (or runs from), you should be able to find the scriptApproval.xml file.
If you explore that file you'll find approvedScriptHashes, approvedSignatures, etc. You can lift this file entirely and paste it into the new instance, or copy across the specifics you need (either way you'll need a restart).
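For example (the paths are the usual defaults; adjust JENKINS_HOME for your installs):

    # Copy the approvals from the old instance to the new one.
    scp old-jenkins:/var/lib/jenkins/scriptApproval.xml /var/lib/jenkins/scriptApproval.xml

    # Either way a restart is needed, e.g.:
    sudo systemctl restart jenkins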
Looks like there's an open request for this sort of functionality here.
I'm having a very hard time finding any information about this. I've just created a Build-Deploy-Test build definition for one of our main projects, but when the workflow runs it reports a wrong value for the $(BuildLocation) macro, which breaks everything from the deployment phase onwards (the tests also try to run from this wrong path).
I know what is causing the problem, but I don't know how to fix it. The build definition we are redirecting the lab one to is configured to build the 'Release' configuration of our solutions. The drop folder is "\\outputServer\drops". I expected the BuildLocation macro to return "\\outputServer\drops\<BuildName>\<BuildNameFormat>", but the macro returns "\\outputServer\drops\<BuildName>\<BuildNameFormat>\Release" instead.
I initially thought that this was an incompatibility between the LabDefaultTemplate.11.xaml template (which is the one I'm trying to use) and the old DefaultTemplate.xaml, on which I based our custom template. I tried updating our custom template to take the new default (DefaultTemplate.11.1.xaml) as a base, but after converting the template the problem persists.
Even after looking at the code in DefaultTemplate.11.1, I still don't see it filter the output by configuration name at all. The only processing in there is based on the solution or project name, and it is disabled by default (controlled by the 'Solution Specific Build Outputs' option under the Advanced category of the build definition configuration).
Why is it assuming that the drop folder ends with 'Release' when the dropped outputs are not placed in that folder at all? I managed to make the deployment scripts run fine by appending ".." to the path, like this: $(BuildLocation)\..\myScript, but when the workflow tries to run the automated tests it seems to use this same macro and obviously doesn't find the test DLLs.
It would be possible to work around this by not specifying a build configuration on the 'Items to Build' element in the definition options (thus letting it choose the default ones), but specifying the configuration was a conscious decision on our part, because there are differences in the files and some configs are transformed differently when the project is built in Release mode.
I'm currently using VS2012 Update 3/TFS 2012 Update 2, if this helps any.
Update:
OK, I found where it is doing this, inside the template itself. The fact that the lab workflow is very simple helped here.
Inside the 'Compute build location needed' If statement, there is an assignment that seems to be doing this weird concatenation. Here is the code:
If(LabWorkflowParameters.BuildDetails.Configuration Is Nothing,
   BuildLocation,
   If(LabWorkflowParameters.BuildDetails.Configuration.IsEmpty Or
      (SelectedBuildDetail.Information.GetNodesByType(Microsoft.TeamFoundation.Build.Common.InformationTypes.ConfigurationSummary, True)).Count = 1,
      BuildLocation,
      If(LabWorkflowParameters.BuildDetails.Configuration.IsPlatformEmptyOrAnyCpu,
         BuildLocation + "\" + LabWorkflowParameters.BuildDetails.Configuration.Configuration,
         BuildLocation + "\" + LabWorkflowParameters.BuildDetails.Configuration.Platform + "\" +
            LabWorkflowParameters.BuildDetails.Configuration.Configuration)))
I'm not even sure what this is supposed to mean. This behavior seems to be a bug to me, since the build template itself (not the lab one) does NOT do this concatenation. How can the LabTemplate assume this type of thing?
Just removing the activity from the LabDefaultTemplate build process template seems to work.
I'm not sure what the meaning or purpose is of that Assign activity, but it seems to work fine for us without it.