I want to display non-code differences between current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables (including Jenkins parameters), as captured by set, possibly with some filtering
Version of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the only part that is not clear is how to generate the diff/change report showing the differences between the current build and the last successful build.
Please note that I am looking for a pipeline compatible solution and ideally I would prefer to make this report easily accessible on Jenkins UI, like we currently have with SCM changelogs.
Or to rephrase this: how do I create a build manifest and diff it against the last known successful one? If anyone knows a standard manifest format that can easily combine all this information, that would be great.
you always ask the most baller questions, nice work. :)
we always try to push as many things into code as possible because of the same sort of lack of traceability you're describing with non-code configuration. we start by using Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control). for system tool packages, we get those into the app by using docker and by inheriting from a specific tag of the docker base image. so even if we want to change system packages or even the python version, for example, that would manifest as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by docker, to address your other example. There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
there will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so i hope someone pipes up with a clean solution to your question.
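in the meantime, here's a rough, pipeline-compatible sketch of the diff part. jenkins serves archived artifacts over HTTP at a stable lastSuccessfulBuild URL, so a shell step can pull the previous manifest and compare. this assumes the manifest is archived as manifest.txt and that the job can read itself via $JENKINS_URL without credentials; the job name my-job is hypothetical:

    # build the current manifest from the examples in the question
    { set | sort; rpm -qa | sort; pip freeze; } > manifest.txt
    # fetch the manifest archived by the last successful build, if any
    curl -fsS "$JENKINS_URL/job/my-job/lastSuccessfulBuild/artifact/manifest.txt" \
      -o last-good.txt || : > last-good.txt
    # diff exits non-zero when files differ, so don't let that fail the build
    diff -u last-good.txt manifest.txt > manifest.diff || true

archiving manifest.diff next to manifest.txt makes it one click away on the build page; the HTML Publisher plugin could render a prettier report if you want something closer to the SCM changelog view.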
I am preparing to make a minor bug fix to Bazel's Java code. I am working on a Linux distribution.
Following the instructions in https://bazel.build/contributing.html but I encounter problems with two of the test instructions:
In the section about "Compiling Bazel", the third paragraph states: "In addition to the Bazel binary, you might want to build the various tools Bazel uses. They are located in //src/java_tools/..., //src/objc_tools/... and //src/tools/... and their directories contain README files describing their respective utility." If I follow this, the //src/tools/... build fails because there is no xcrun command in the Linux environment I am using. I suppose these are macOS-specific tests?
The next paragraph instructs you to build a distribution package, unpack it in a new directory, and then run "bazel test //src/... //third_party/ijar/...". I now get an error that windows.h is missing, which I suppose comes from Windows-specific tests.
Some questions:
So is there an easy way to run tests only for the current platform?
Are the instructions good enough?
If the instructions should be updated, what is the best way to notify the ones managing that documentation page?
Thanks for your interest in contributing to Bazel! The bazel-dev mailing list is a better avenue for these questions.
The tests that you want to run largely depend on the changes you make, but when you make a pull request, the Bazel CI will run all of Bazel's tests to make sure that nothing breaks.
So is there an easy way to run tests only for the current platform?
It depends, and this is still a work in progress where we want to make Bazel more aware of platforms and toolchains without specifying additional flags.
In general, you don't need to modify or worry about the //src/*_tools packages unless you're making direct changes to them.
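As a rough illustration, Bazel's negative target patterns can be used to skip the platform-specific tool packages; treat the exact exclusions below as examples rather than an official recipe:

    # run the main suite but skip packages that need xcrun or windows.h
    bazel test -- //src/... //third_party/ijar/... \
      -//src/tools/... -//src/objc_tools/...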
Are the instructions good enough?
The instructions will never be perfect, and we're always looking for ways to make them clearer and more concise.
If the instructions should be updated, what is the best way to notify the ones managing that documentation page?
Please file an issue on the GitHub repository or email the bazel-dev mailing list for further discussion.
I have multiple environments. They are debug, dev, and prod. I'd like to refer to an image by, for example, dev (latest), dev (version 1.1), or prod (latest). How would I go about tagging builds and pushes?
My first thought was to create separate repositories for each environment: debug, dev, and prod. But I am starting to wonder if I can do this with just one repository. If it's possible with one repository, what would the syntax be when building and pushing?
This is what has worked best for me and my team and I recommend it:
I recommend having a single repo per project for all environments; it is easier to manage. Especially if you have microservices, your project is composed of multiple microservices, and managing one repo per environment per project is a pain.
For example, I have a users API. The docker repo is users, and this repo is used by alpha, dev, and beta.
We create an env variable called $DOCKER_TAG in our CI/CD service and set it at the time the build is created, like this:
    DOCKER_TAG="$(date +%Y%m%d).$BUILD_NUMBER"   # bash
Where $BUILD_NUMBER is set by the CI/CD service when the run is triggered. For example, when we merge a PR, a build is triggered as build no. 1, so $BUILD_NUMBER is 1.
The resulting tag looks like this when used: 20170612.1
so our docker image is: users:20170612.1
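Put together, the build-and-push step looks roughly like this (the users image name comes from the example above; prepend your registry as needed):

    # $BUILD_NUMBER is provided by the CI/CD service
    DOCKER_TAG="$(date +%Y%m%d).$BUILD_NUMBER"
    docker build -t users:"$DOCKER_TAG" .
    docker push users:"$DOCKER_TAG"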
Why this format?
It allows us to deploy the same tag to different environments with a run task.
It helps us keep track of when an image was created and what build it belongs to.
Through the build number, we can find the commit information and map it all together as needed, which is nice for troubleshooting.
It allows us to use the same docker repo per project.
It is nice to know from the tag itself when we created the image.
So, when we merge, we create a single build. Then that build is deployed as needed to the different environments. We don't create an independent build per environment, and we keep track of what's deployed where.
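If you also want environment-style tags like the dev and prod from the question, you can layer them on top as stable tags pointing at the same image (a sketch; tag names are illustrative):

    # the unique build tag stays immutable; env tags move as you promote
    docker tag users:20170612.1 users:dev
    docker push users:dev
    docker tag users:20170612.1 users:prod
    docker push users:prod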
If there's a bug in an environment with a certain tag, we pull that tag, run it, and reproduce the issue under the same conditions. If we find an issue, we have the build number in the tag 20170612.1, so we know build no. 1 has the issue. We check our CI/CD service, and that tells us which commit the build was made from. We check out that commit hash from git, debug, and fix the issue. Then we deploy it as a hotfix, for example.
If you don't have CI/CD yet and you are doing this manually, just set the tag in that format manually (pretty much type the full string as is), and instead of a build number, use a short git commit hash (if you are using git):
20170612.ed73d4f
That way you know the most current commit, so you can troubleshoot issues with a specific image and map back to the code to create fixes as needed.
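A minimal sketch of that manual flow (again with the hypothetical users image):

    TAG="$(date +%Y%m%d).$(git rev-parse --short HEAD)"
    docker build -t users:"$TAG" .
    docker push users:"$TAG"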
You can also define any other suffix for your tag that maps to the code version, so you can easily troubleshoot (e.g. map to git tags if you are using those).
Try it, adjust it as needed, and do what works best for you and your team. There are many ways to go about tagging. We tried many, and this one is our favorite so far.
Hope this is of help.
There are two schools of thought: stable tagging, where you update a single tag, and unique tagging. Each has its pros and cons. Stable tags can create instability when deploying to self-healing clusters, as a new node might pull a new version while the rest of the cluster is running a slightly older one. Unique tagging is a best practice for deployment. However, to manage base-image updates for OS and framework patching, you'll want to build upon stable tags in your Dockerfile and enable automatic container builds. For a more detailed walkthrough, with visuals, here's a post:
https://blogs.msdn.microsoft.com/stevelasker/2018/03/01/docker-tagging-best-practices-for-tagging-and-versioning-docker-images/
I think "lastest" is for the last productive image. It is what I expect in the docker hub, although there are no images in development.
On the other hand you can use tags like for example 0.0.1-dev. When this image is finished you can do the tag again and push, and then the repository will detect that the layers are already in the repository.
Now when you are about a candidate version to come out to production, you have to only have the semantic version despite not being in an environment pruduccion. That's what I would do.
I have a significant set of Groovy pipeline scripts for our Jenkins build process. I am in the process of moving those scripts onto another instance, and would like to replicate the set of approved signatures that were built up on the original instance.
Is it possible to export the list of approved signatures and import them into another instance?
The only other solution I have is to run and rerun the scripts, approving each signature as it breaks the build. Since the scripts are quite complex, and not every run is guaranteed to hit each line, this is not going to be a quick process.
The other option would be to create a master 'white list' script which runs all the currently non-approved scripts again and again until every signature has been approved.
Neither of these options is great, so I'm hoping for a simple import/export to avoid this work altogether, but I certainly can't see such an option in the UI.
Cheers
I do not believe there is import/export functionality by default, but maybe there's a plugin that'll do it.
If you have access to the directory Jenkins is installed in or runs from (JENKINS_HOME), you should be able to find the scriptApproval.xml file.
If you explore that, you'll find approvedScriptHashes and approvedSignatures etc. You can lift this file entirely and drop it into the new instance, or copy across the specific entries you need (either way you'll need a restart).
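For example, something along these lines (the hostnames and the /var/lib/jenkins path are assumptions; adjust to your install):

    # copy the approval list wholesale, then restart the target Jenkins
    scp old-jenkins:/var/lib/jenkins/scriptApproval.xml \
        new-jenkins:/var/lib/jenkins/scriptApproval.xml
    ssh new-jenkins sudo systemctl restart jenkins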
Looks like there's an open request for this sort of functionality here
I am using Jenkins for a project and would like to know if the following is possible. I have four separate SVN modules which are checked out as part of the job. Each SVN module is added to a separate directory. Depending on which module is updated during the SCM polling, I would like to only build certain directories.
With Cruise Control, I was able to set a variable for each module that was updated and passed those variables to the ant build script to control the build.
Has anyone done anything similar or have any ideas?
Thanks,
Sean
This question is pretty complex. You are touching several different parts of the CI build server, and some tasks outside of it.
Basically, feeding a Jenkins job/project information that controls the behavior of the build itself is not the best approach, but if you have no other option, well, then you have no other option.
The build itself should be sufficiently agnostic: it should contain all the parameters it needs to succeed both in CI and on a workstation (from cmd.exe, for example).
Depending on which module is updated during the SCM polling, I would like to only build certain directories.
So basically you want something like the Maven build system, which provides a model/module-based conditional build, rather than building one single project the way Ant does.
With Cruise Control, I was able to set a variable for each module that was updated and passed those variables to the ant build script to control the build.
Here you want a similar build-triggering capability. Without a more detailed explanation of your requirements, the only thing I can suggest is to check out the Parameterized Trigger plugin, which allows you to trigger builds with parameters you set.
Has anyone done anything similar or have any ideas?
Finally, you can also check out the Conditional BuildStep plugin: https://wiki.jenkins-ci.org/display/JENKINS/Conditional+BuildStep+Plugin
In conclusion, some features are provided by Jenkins out of the box, so if you use Ant, you can easily use environment variables and start building the behavior you need. Investing some time in thinking about how to achieve something without help from tons of Jenkins plugins usually makes you really understand the core of what you want to achieve.
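To make that concrete, here is a rough shell-step sketch of the "build only the directories that changed" idea. The module names, ant files, and marker-file convention are all illustrative, and it assumes the workspace is not wiped between builds:

    # for each module directory, see if its last-changed revision is newer
    # than the one we last built (a marker file remembers the previous rev)
    for module in module1 module2 module3 module4; do
      rev=$(svn info "$module" | awk '/^Last Changed Rev:/{print $4}')
      last=$(cat "$module/.last-built-rev" 2>/dev/null || echo 0)
      if [ "$rev" -gt "$last" ]; then
        ant -f "$module/build.xml"
        echo "$rev" > "$module/.last-built-rev"
      fi
    done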
Hope I helped somewhat. Cheers, mate.
We have an automated build and QA process for our software, using tfs/teambuild and msbuild, and we want to be able to know (for audit purposes) whether a component has gone through that process or not.
For example, if a library is installed on a user's machine, I'd like to be able to inspect it in some way to tell that it went through the build. In particular, I want to be able to distinguish it from components built directly on a developer's machine, and then manually installed.
What is the best way to do this? Code signing as part of the build process seems closest to these requirements, but presumably this would not cover any third-party libraries that might be used? I also read about the ILMerge tool for merging all assemblies into one, but I don't know enough to work out whether the result can then be signed or not.
I'm sure we're not the first people to have this requirement, so I'm casting around for any ideas or hints from others who might have done such a thing.
Thanks!
Our developer builds are set to keep the version at "0.0.0.0", but our build server stamps the build based on a pre-configured version and an automagically generated build string, e.g. "1.0.3.xxx". Does your build server not allow for this?
Your build process should be updating each of your projects' AssemblyInfo.cs files (or a globally linked equivalent). You can do this with the TFS changeset number, so, as the previous poster indicated, you end up with a version property on each dll of 1.0.changeset.buildno or something similar. You can do this easily in MSBuild.
You could have the values of each assembly info file set in source control to be something obvious like 0 or 999.
A lot of what you're asking about is process and training as well, though.
If you're using installers or zips to package your deliverables, then you can also label them with the build number as part of your build process.
But if you have the changeset number, you have the link from dll to code, so it's traceable, coupled with links to third-party dll references as defined in each csproj.