Jenkins Promoted Build

I don't understand what a Promoted Build really is and how it works. Can someone please explain it to me like I'm a 10-year-old kid? Some sample examples would help me a lot.
Thanks

In a typical software development organization with a CI system, there are tens or hundreds of continuous builds daily. Only one of those builds (usually the latest stable one) is selected and "promoted" to be a Release Candidate (RC), which goes to the next quality gate - usually the QA department. Then they select one of those RCs (the others are dropped) and again "promote" it to the next level - either to a staging environment, validation, etc. Finally, one of these builds is again "promoted" to be an official release.
Why is that important?
Visibility: You want to distinguish the many "regular" continuous builds from the few selected "RC" builds.
Retention: If you commit often (which is the best practice), you will likely get a lot of daily builds and will want to implement a retention policy (e.g. keep only the last 100 builds, or only builds from the last 7 days). You will then want to make sure promoted builds (RCs) are locked against retention. This is mostly important if you deploy binaries to customers and may need the exact binary to reproduce an escaped bug in the future (you still have the source code in the repository, but I've seen cases where escaped bugs relate to the build process rather than the source code - due to rapid changes to the build process, or time-of-build-sensitive data like digital signatures).
Permissions: you may want to prevent non-developers from accessing builds with "half-baked" features.
Binary Repositories: you may want to publish only meaningful builds to an external binary repository.
Builds in Jenkins can be "promoted" either manually or automatically, using plugins like the Promoted Builds Plugin. You can also create your entire "promotion" workflow using pipeline scripts. Here's an example (a pipeline sketch follows the list):
a "Continuous" job that polls SCM and builds on every change. It has a retention policy to keep only the last 50 builds. Access is restricted only to developers;
a "Release Candidates" job that copies artifacts from a manually selected build (using parameters). Access is allowed to QA testers;
a "Releases" jobs that copies artifacts from a manually selected RC. Access is allowed to the entire organization. Binaries are released to external/public repository.
I hope this answers your question :-)

Related

In Jenkins, can we delete build artifacts for older builds, but keep build details/logs?

As is good practice, I've got Jenkins set up at work to automatically build everything for continuous integration, pulling files from our Git repositories. On our development branches, builds get kicked off automatically whenever anyone commits a change. When we want to do formal testing, we pull the build from Jenkins and use that; and when we want to sign off a change request, we quote the Jenkins build number where the change went in. So far, so good.
The problem we have is that builds are a significant size. For our SDK, we have to build across multiple platforms so that we can check it works on all of them. At maybe 50 MB per build, this starts to mount up! Short term I can keep asking IT to give me more storage space, but longer term I'd like a more strategic solution.
The obvious answer in Jenkins is to set up deletion rules, whether deleting after some time or after some number of builds. The problem then though is that if we delete that older development build, we lose the traceability of what we tested. I'm sure most engineers at one time or another have had to do a binary chop through older builds to find an obscure bug/regression which was only spotted some time later. For me, it is unacceptable to lose that history.
The important feature of build history though is not the binary build artifacts, but the build log recording what Git commits (or anything else; toolchain versions for example) went into each build. That's what lets us go back to investigate older builds and recreate them if required. The build log is relatively small (and highly compressible, being a text file). We do still need to keep build artifacts for recent builds though, so that testers can use them. So I'm thinking a better alternative would be to preserve the build log in Jenkins for all builds, but to have Jenkins automatically delete build artifacts after some time.
Does anyone know of a way in Jenkins (perhaps a plugin?) which would let us automatically delete/archive build artifacts from older builds, but still keep the build details and log for those builds? I'm happy to do a Jenkins upgrade if necessary to get this feature. And of course this needs to be only for selected development build jobs - all release build jobs need their build artifacts to be preserved forever, as do any builds which have the "keep forever" button ticked.
If it's absolutely necessary, I could set up a separate cron job to do this on the Jenkins file area. That's a nasty hack though, and I suspect it's likely to cause some issues with Jenkins, so I'd rather not do something that brute-force if there's a better alternative.
I think you need this option in your Jenkinsfile:
buildDiscarder(logRotator(artifactNumToKeepStr: '10'))
artifactNumToKeepStr: only this many of the most recent builds have their artifacts kept; older builds keep their details and logs, but their archived artifacts are deleted.
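For context, here is a minimal sketch of where that option lives in a declarative Jenkinsfile; the numbers are only examples - build details and logs are kept for the last 200 builds, while archived artifacts are kept only for the last 10.

pipeline {
    agent any
    options {
        // Keep build details/logs for the last 200 builds,
        // but keep archived artifacts only for the last 10.
        buildDiscarder(logRotator(numToKeepStr: '200', artifactNumToKeepStr: '10'))
    }
    stages {
        stage('Build') {
            steps {
                sh 'make all'
                archiveArtifacts artifacts: 'build/**', fingerprint: true
            }
        }
    }
}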

Jenkins CI workflow with separate build and automated test both in source control

I am trying to improve our continuous integration process using Jenkins and our source control system (currently SVN, but Git soon).
Maybe I am overcomplicating this, or maybe I have not yet seen the right hints.
The process I envisioned has three steps and associated roles:
one or more developers would do their job and ultimately submit the code changes for the actual software ("main software") as well as unit tests into source control (git, or something else). Jenkins shall build the software, run unit tests and perhaps some other steps (e.g. static code analysis). If none of this fails, the work of the developers is done. As part of the build, the build number is baked into the main software itself as part of the version number.
one or more test engineers will subsequently pickup the build and perform tests. Some of them may be manual, most of them are desired to be automated/scripted tests. These shall ultimately be submitted into source control as well and be executed through the build server. However, this shall not trigger a new build of the main software (since there is nothing changed). If none of this fails, the test engineers are done. Note that our automated tests currently take several hours to complete.
As a last step, a project manager authorizes release of the software, which executes whatever delivery/deployment steps are needed. Also, the source of the main software, the unit tests, the automated test scripts, the Jenkins build script - and ideally all build artifacts ("binaries") - are archived (tagged) in the source control system.
Ideally, developers are able to also manually trigger execution of the automated tests to "preview" the outcome of their build.
I have been unable to figure out how to do this with Jenkins and Git - or any other source control system.
Jenkins pipelines seem to assume that all steps are carried out in sequence automatically. They also seem to assume that committing code into source control starts the process from the beginning (which I believe should not be the case if the commit was "merely" automated test scripts). Triggering an unnecessary build of the main software really hurts our process, as it basically invalidates any manual testing and documentation, since it results in a new build number baked into the software.
If my approach is so uncommon, please direct me how to do this correctly. Otherwise I would appreciate pointers how to get this done (conceptually).
I will try to reply with some points. This is indeed a conceptual approach; there are a lot of details and different approaches too, and this is only one.
You need Git :)
You need to set up a Git branching strategy which will allow multiple developers to work simultaneously, pushing code and validating it against the static code analysis. I would suggest that you start with Git Flow; it is widely used and can be adapted to whatever reality you have - you do not need to use it in its pure state, so give some thought to how to adapt it. Fundamentally, it allows each feature to be tested. Then, each developer can merge it into the develop branch - from this point on, you have your features validated and you can start to deploy and test.
Have a look at multibranch pipelines. These will allow you to test the several feature branches that you might have and to have different flows for the other branches - develop, release and master - depending on your deployment needs. So, when you have a merge on the develop branch, you can trigger testing or just use it to run static code analysis.
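As a rough sketch of that idea, a multibranch Jenkinsfile can gate stages on the branch being built; the stage contents and script names here are just placeholders.

pipeline {
    agent any
    stages {
        stage('Static code analysis') {
            steps { sh './run-static-analysis.sh' }     // runs for every branch
        }
        stage('Integration tests') {
            when { branch 'develop' }                   // only after merges to develop
            steps { sh './run-integration-tests.sh' }
        }
        stage('Deploy to staging') {
            when { branch 'release/*' }                 // only on release/* branches
            steps { sh './deploy.sh staging' }
        }
    }
}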
To overcome the problem that you mention in your second point, there are ways to read the change sets in the pipeline, and if the changes only touch test scripts, you can skip building the software - check here how to read changes, and here for an example of how to read changes and prevent the pipeline from building all the stages, according to the changes being pushed to Git.
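One possible sketch of that approach: inspect currentBuild.changeSets and skip the build stage when only files under a test directory changed. The tests/ path and the script names are assumptions for the example.

// Returns true if every changed file lives under tests/ (hypothetical layout)
def onlyTestsChanged() {
    def files = []
    currentBuild.changeSets.each { changeSet ->
        changeSet.items.each { entry ->
            entry.affectedFiles.each { file -> files << file.path }
        }
    }
    return !files.isEmpty() && files.every { it.startsWith('tests/') }
}

pipeline {
    agent any
    stages {
        stage('Build main software') {
            // Skip the main build if only test scripts changed
            when { expression { !onlyTestsChanged() } }
            steps { sh './build-main-software.sh' }
        }
        stage('Automated tests') {
            steps { sh './run-automated-tests.sh' }
        }
    }
}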
In case you still have manual testing taking place, pipelines are pausable, which means you can pause the pipeline and ask for approval to proceed. Before approving, testers do whatever they have to, and whenever they are ready, they just approve the build to proceed to the next steps.
Regarding deployment authorization, it is done the same way as in the last point, with approvals, but in this case you can specify which users/roles are allowed to approve that step.
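A minimal sketch of such an approval gate, using the built-in input step; the stage names and the 'qa-leads' submitter group are assumptions.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }
        }
        stage('Manual test sign-off') {
            steps {
                // Pause the pipeline until an authorized user approves
                input message: 'Manual testing finished - promote this build?',
                      submitter: 'qa-leads'     // only these users/groups may approve
            }
        }
        stage('Deploy') {
            steps { sh './deploy.sh production' }
        }
    }
}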
Whatever you need to keep from your builds, Jenkins has an archive artifacts utility. Let me just note that ideally you would look into a proper artifact repository such as Nexus.
To manually trigger a set of tests, you can have a manually triggered job in Jenkins, apart from your CI/CD pipeline, that only executes the automated tests. You can even trigger this same job from a pipeline stage - see how to trigger other jobs.
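For instance, a stage in the main pipeline could kick off such a separate test job with the built-in build step (the job and parameter names are made up).

pipeline {
    agent any
    stages {
        stage('Run automated tests') {
            steps {
                // Trigger the standalone test job and wait for its result
                build job: 'automated-tests',
                      parameters: [string(name: 'VERSION_UNDER_TEST', value: env.BUILD_NUMBER)],
                      wait: true
            }
        }
    }
}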
Lastly, let me say that the branching strategy is the starting point.
Think about your big picture, what SDLC flows you need to have, and set up those flows in your multibranch pipeline. Different Git branches will facilitate whatever flows you need within the same Jenkinsfile - your pipeline definition. It really depends on how many environments you have to deploy to and what kind of steps you need.

Is there a standard way to delete successful vnext builds (PR) just after their completion?

The most aggressive build retention policy one can set for pull request builds is described in "Clean up pull request builds"
a policy that keeps a minimum of 0 builds
Still, it means that successful PR builds (with artifacts no one will ever need) will be deleted only after the next automatic retention cleanup - usually the next day, but in reality it results in nearly two days' worth of no-longer-needed builds.
In our particular case it seems desirable to find a way to clean up successful PR builds ASAP, due to their frequency and the sheer size of their artifacts, which may periodically strain our not-yet-fully-organized infrastructure dedicated to PR handling (it will be significantly improved, but not as soon as we'd like, and those successful PR builds would still remain dead weight).
And as far as I can see, the only way to do it would be to delete builds manually.
While it is not too difficult to implement, I'd still like to check whether there is a simpler standard way to delete successful PR builds automatically.
P.S.: There is one particularity in our heavily customized build process - we have multiple dependent artifacts: create A, use it to build B, create C to test B... So skipping the Publish Artifacts step on an overall successful build with a custom condition, as suggested below, is not exactly feasible.
Let's look at the problem from a different perspective: The problem isn't that builds are retained, the problem is that your PR builds are publishing artifacts.
You can make the Publish Artifacts steps conditional so that they don't run during PRs. Something like and(succeeded(), ne(variables['Build.Reason'], 'PullRequest')) will make the task only run if it's not a PR.

TFS and storing binary files

Our project group stored binary files of the project that we are working on in an SVN repository for over a year. In the end our repository grew out of control; taking backups of the SVN repo became impossible at one point, since each binary that is checked in is around 20 MB.
Now we have switched to TFS. We are not responsible for backing the repository up - our IT team takes care of it, and we have more network and storage capacity for backups because of that - but we want to decide what to do with the binaries. As far as I know TFS stores deltas, but for binary files the deltas will be huge, and we might end up reaching our disk space quota one day. So I would like to plan things better from the start; I don't want to get caught in a bad situation when it's too late to fix the problem.
I would prefer not to keep builds in source control, but our project group insists on keeping a copy of every binary for reproducing problems that we see in the production system. I can't get them to get the source code from TFS, build it and create the binary, because according to them it is not straightforward.
Does TFS offer a better build versioning method? If someone can share some insight I'd really be grateful.
As a general rule you should not be storing build output in TFS. Occasionally you may want to store binaries for common libraries used by many applications, but tools such as NuGet get around that.
Build output has a few phases in its life, and each phase should be stored in a separate place, e.g.:
Build output: When code is built (by TFS / Jenkins / Hudson etc.) the output is stored in a drop location. This storage should be considered volatile as you'll be producing a lot of builds, many of which will be discarded.
Builds that have been passed to testers: These are builds that have passed some very basic QA, e.g. the code compiles, static code analysis tools are happy, unit tests pass. Once a build has been deemed good enough to be given to test, it should be moved from the drop location to another area. This could be a network share (non-production, as the build can be reproduced). There may be a number of builds that get promoted during the lifetime of a project, and you will want to keep track of which versions the testers are using in each environment.
Builds that have passed test and are in production: Your test team deem the build to be of a high enough quality to ship. As part of your go live process, you should take the build that has been signed off by test and store it in a 3rd location. In ITIL speak this is a Definitive Media Library. This can be a simple file share, but it should be considered to be "production" and have the same backup and resilience criteria as any other production system.
The DML is the place where you store the binaries that are in production (and associated configuration items such as install instructions, symbol files etc.) The tool producing the build should also have labelled the source in TFS so that you can work out what code was used to produce the binary. Your branching strategy will also help with being able to connect the binary to the code.
It's also a good idea to have a "live-like" environment; this should be separate from your regular dev and test environments. As the name suggests, it contains only the code that has been released to production. This enables you to quickly reproduce production bugs.
Two methods that may help you:
Use Team Foundation Build System. One of the advantages is that you can set up retention periods for finished builds. For example, you can order TFS to store the 10 latest successful builds, and the two latest failed ones. You can also tell TFS to store certain builds (e.g. "production builds"/final releases) indefinitely. These binary folders can of course also be backed up externally, if needed.
Use a different collection for your binaries, with another (less frequent) backup schedule. TFS needs to back up whole collections, but by separating data that doesn't change as frequently as the source you can lower the backup cost. This of course depends on how frequently you are required to have the binaries backed up.
You might want to look into creating build definitions in TFS to give your project group an easy 'one button' push to grab the source code from a particular branch and then build it and drop it to a location. That way they get to have their binaries, and you don't have to source control them.
If you are using a branching strategy where you create Release or RTM branches when you push something to production, then you can point your build definitions at those branches and they can manually trigger them from the TFS portal or from within Visual Studio.

TFS 2012: Correlating binaries to builds and source code

I'm starting to dive into TFS 2012 and I have a basic understanding of the tiers and how build servers, controllers and agents work and how different build scripts can have different configurations and projects.
However, one of the things I'm struggling with is a requirement for our source control solution that says that I need to be able to prove a particular changeset or shelfset produced a particular build. That is, given a particular binary, I can point to a release changeset that generated that binary. I should also be able to point to the test changeset that was merged into the release branch. The idea here is not just a separation of duty, but validating that because the release and test changesets are identical, no code was injected into a project by a code reviewer.
I've read one blog post that talks about "Binary promotions" -- would that concept be useful in my situation? I'm having a hard time finding how this binary promotion is set up in TFS.
Deployment
Out of the box TFS doesn't really support deployments; it can deploy to one location on build, which is often a test server (think Lab Management). TFS 2012 has built-in support for Azure deployments, but they still happen at the end of a build, and the build artifacts cannot be automatically deployed to a new location.
You could modify the build template to allow releasing to different locations, but that would still be a fresh build for every environment and not true binary promotion.
TFS does, however, have a concept of build quality, and it actually fires off events when this quality is changed. TFS Deployer is a third-party tool that hooks into the quality change event and can execute PowerShell scripts. This means that with a simple change of a dropdown value you can automatically kick off a script that releases to any environment you want. You can customize the build quality list (per team project collection) to be a list of environments (dev, UAT, staging, production, etc.), from which the script then figures out where to release the specific build.
VS2012 also has some nice improvements to Web Deploy, which means deployment configurations are stored in source control with the project, so in theory they'll be available in the drop folder for TFS Deployer to make use of.
I don't believe TFS keeps a history of build qualities, which means you can't really use the build quality history to maintain a list of what is deployed to which environment. You could fairly easily record this information as part of the deployment script though. Or at the very least add a custom summary node to the build with information about the release.
TFS 2012 does have the ability to mark a build as deployed as part of the Azure deployment functionality; you can mark TFS Deployer builds as deployed using a script, but it doesn't feel very useful.
Octopus Deploy is another project that's worth checking out, and could be used instead of TFS Deployer if your build template creates NuGet packages. It requires a bit more control over the production hardware as you need to install agents on each environment to handle releases, but it solves a lot of other issues with deployment.
Versioning
Once you have a nice consistent way of automatically releasing that people don't bypass, you can look at enhancing the build template to inject the build version or changeset number as the assembly version for anything built as part of that automated build. There are a number of different ways to do it and plenty of blog posts and tools to help you achieve that.
Alternatively you could just use automatic assembly versioning ([assembly: AssemblyVersion("1.0.*")]) to give you the date/time the build occurred, which ends up like 1.0.1234.123, where 1234 is the number of days since Jan 1st 2000 and 123 is half the number of seconds since midnight.
If you're deploying websites, then I highly recommend injecting the current build version into the HTML somewhere. This way you can check what version a website is running without needing access to the bin directory. It can also be appended as a query string to CSS/JS file imports to ensure no stale browser caching occurs between versions.
Thoughts
Personally I'm hoping Microsoft realises that the XAML build workflows are trying to do too much, and that they split the different concerns (build, test, deployment...) into different scriptable parts. Of course that would not be until the next major release of TFS, which is years away. Although with Team Foundation Service they are trying to iterate a lot quicker, so they may actually extend the Azure deployment stuff into something more useful in the nearer future.
