I'm getting started with Jenkins at my workplace. We use semantic versioning with TeamCity and I want to implement the same on Jenkins. My problem appears when I store the artifacts in the builds folder ($JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_NUMBER), because Jenkins uses only the build number to create the folder for a build, so when I reset the build number, future artifacts are stored in the folders of previous builds.
For example:
Build 1.3.1_develop.1 is already stored. When I reset the build number, the next build becomes 1.3.2_develop.1, and its artifacts end up in folder 1, which already holds the artifacts of 1.3.1_develop.1.
Could someone explain how to handle automatic semantic versioning on Jenkins? We reset the build number whenever we increase the major, minor, or patch number.
Jenkins Version: 2.89.4
Jobs: we use jobs to compile the Vue.js front end and to deploy the Python back end (if this helps).
Thanks for any help.
The first thing I notice is that you are not using semantic versioning correctly. 1.3.1_develop.1 should be 1.3.1-develop+1. Build metadata must be preceded by the plus '+' symbol and is not factored into the SemVer sort order.
Second, a build number is never "reset"; it might roll over eventually, but it should never be reset. A build number that does not identify the machine performing the build is also generally useless, unless there can only ever be one.
Basically, there's no concept of a build number in semantic versioning. The standard specifies the syntax for build metadata but is completely neutral on what it might include. Build numbers are generally useless at the level of semantic versioning. They have their uses for preventing directory collisions in CI build systems, and they even provide a unique identifier for some product lines (Windows, for instance), particularly where semantic versioning is not in use (Windows again).
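To illustrate the precedence rules, a compliant tool orders these versions as follows (the hashes and dates are made-up examples); note that anything after '+' never affects the comparison:

    1.3.1-develop.1 < 1.3.1-develop.2 < 1.3.1    (a prerelease sorts below its release)
    1.3.1+20180301 == 1.3.1+f3a9c21              (build metadata is ignored)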
I recommend using a SHA-1 or better hash of the build inputs (the Git commit id, for instance) in the build metadata, in addition to any build counter, and using that for your output directory name. You can still use a monotonic counter in your prerelease tags as well, but then you would have to include the entire SemVer string in the build output directory name in order to maintain uniqueness.
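As a minimal Jenkins Pipeline sketch of this idea, assuming a Linux agent with git available (BASE_VERSION is a hypothetical job parameter carrying the major.minor.patch part):

    pipeline {
        agent any
        stages {
            stage('Version') {
                steps {
                    script {
                        // Short hash of the commit being built (the "build inputs")
                        def sha = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
                        // e.g. 1.3.2-develop.7+f3a9c21: prerelease counter plus commit id
                        env.SEMVER = "${params.BASE_VERSION}-develop.${env.BUILD_NUMBER}+${sha}"
                        // Using the full SemVer string keeps output directories unique
                        sh "mkdir -p artifacts/${env.SEMVER}"
                    }
                }
            }
        }
    }

Because the directory name embeds both the prerelease counter and the commit hash, resetting or reusing a counter can no longer collide with an earlier build's output.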
Third, your build machine is the worst place to archive your build artifacts! Build automation can and does go horribly wrong from time to time. Your build system should not have access to your archive of build artifacts. When your build and initial smoke testing are complete, it should signal a process running on a completely different machine to move the artifacts off the build machine to a more secure location. No process running on the build system should have write access to your archive of build artifacts.
There is a tool called GitVersion that does what you want, and it can be integrated with Jenkins and other CI providers.
https://gitversion.net/docs/reference/build-servers/jenkins
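For example, a minimal Pipeline sketch, assuming the GitVersion CLI is installed on the agent (readJSON comes from the Pipeline Utility Steps plugin):

    pipeline {
        agent any
        stages {
            stage('Version') {
                steps {
                    script {
                        // GitVersion derives the SemVer from branches, tags and commits
                        def out = sh(script: 'gitversion /output json', returnStdout: true)
                        env.SEMVER = readJSON(text: out).FullSemVer
                        echo "Building version ${env.SEMVER}"
                    }
                }
            }
        }
    }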
I have been searching far and wide for information on Jenkins incremental pipeline builds that does not involve Maven.
The general idea is that I want to build a generic project and run specific steps of the pipeline if the underlying code has changed. If the code did not change, I want to re-use the results from a previous build.
The reason why I want to do this, is to drastically reduce build times for huge projects.
Imagine that you only need to fix one line in a SCSS file, but the whole project needs to be rebuilt, repackaged, etc. because of it. In the meantime, the site is live and broken, waiting 15 minutes for the fix.
Can someone give a basic example of how such a build can be created, or point me to more information on incremental building?
The only thing I have been able to find is incremental building for Maven projects, but this is not applicable for me.
The standard solution is to create modules that depend on each other.
Publish the built artifacts of your modules to a binary repository like Sonatype Nexus (you can easily create a private npm repo as well as a proxy npm repo).
During the build, download the dependencies instead of building them.
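As a sketch, the publishing side of that in a Jenkins Pipeline could look like this (the registry URL is a placeholder for your own Nexus instance):

    pipeline {
        agent any
        stages {
            stage('Publish module') {
                steps {
                    // Publish the built module so downstream jobs can
                    // 'npm install' it instead of rebuilding it from source
                    sh 'npm publish --registry https://nexus.example.com/repository/npm-private/'
                }
            }
        }
    }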
If this solution is not the one you want to take, you will have a hard time hacking one together. To persist the state of your steps, an easy solution is to create files in the job workspace and read them in the next build.
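A minimal sketch of that state-file approach, assuming the front-end sources live under a hypothetical src/frontend directory and the job reuses the same workspace between builds:

    pipeline {
        agent any
        stages {
            stage('Front end') {
                steps {
                    script {
                        // Checksum of the front-end tree, used as the persisted "state"
                        def hash = sh(
                            script: "find src/frontend -type f | sort | xargs cat | sha256sum | cut -d' ' -f1",
                            returnStdout: true
                        ).trim()
                        def last = fileExists('.frontend.hash') ? readFile('.frontend.hash').trim() : ''
                        if (hash == last) {
                            echo 'Front end unchanged, reusing previous build output'
                        } else {
                            sh 'npm ci && npm run build'
                            writeFile(file: '.frontend.hash', text: hash)
                        }
                    }
                }
            }
        }
    }

Note that this only works as long as the job always runs in the same workspace; a clean checkout wipes both the state file and the cached output.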
I am new to the world of scripting with TFS 2015. I created a script that builds all of the projects within my solution (it is a rather large solution) and puts the output in a shared folder (where each project has its own subfolder).
I would like to create a separate script for each project that simply copies the bin folder from the share to my Test environment. I rarely need to deploy everything, so the idea is one build... multiple deploys.
However, when I run my deploy script using the Copy Files step, it does another build. Although it copies the files that I expect, that happens only after a full build that creates the folder structure for the build.
Am I able to make the Copy Files step NOT do a Build?
Here are the steps that my script is currently doing:
As you can see, there is only one step (Copy Files), but it still does Get Sources and copies everything into a new folder on the build box like so (where the number keeps incrementing with each run of the script):
I just want to copy the files from the Source to the Target and not do a build or Get Sources.
It looks like you're still on TFS 2015 RTM or Update 1, which is already pretty old technology if you compare it to the lifetime of the new build system introduced with that version.
TFS 2015 Update 2 introduced a system similar to the Build pipelines for orchestrating Releases. It doesn't require you to map any workspaces or Git repositories, and it can act on the artifacts of your builds or simply on the contents of file shares.
It makes sense that a Build has to build something, and in order to build something it has to get the things to build. If you're actually not building anything, then you're probably deploying, releasing, or packaging something else; hence the distinction between Build and Release pipelines.
TFS 2017+ has an option to disable the syncing of sources, primarily to allow people to get the sources themselves in creative ways (e.g., a custom PowerShell script that invokes git.exe).
My primary advice would be to upgrade to TFS 2018 Update 3, or at least TFS 2017 Update 3.1, worst case TFS 2015 Update 4.1. The fact that versions older than 2015 Update 4.1 have a known XSS security bug may be reason enough to convince your organisation to perform this update.
Barring that option you're left with one solution:
Link your build definition either to a Git repository with only a single commit (if I remember correctly, the 2015 agent still crashes when syncing an empty Git repo), or link it to a TFVC repository and set the workspace settings to cloak everything. This essentially causes the build to sync an empty folder, which it can cache, before calling your PowerShell script.
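The "get the sources yourself" script mentioned above could be a PowerShell sketch along these lines (the repo URL and branch are placeholders; BUILD_SOURCESDIRECTORY is the agent's sources directory variable):

    # Reuse a cached clone if present, otherwise clone from scratch
    $repo   = "https://example.com/project/repo.git"
    $branch = "master"
    $src    = Join-Path $env:BUILD_SOURCESDIRECTORY "src"

    if (Test-Path (Join-Path $src ".git")) {
        git -C $src fetch origin $branch
        git -C $src checkout -f "origin/$branch"
    } else {
        git clone --branch $branch $repo $src
    }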
I have a TeamCity build project that parameterizes a docker-compose.yml template with the build versions of a dozen Docker containers, so in order to get the build counter from each container, I have them set as snapshot dependencies in the docker-compose build job. Each container's Dockerfile and other files are in their own BitBucket repo, with triggers for the appropriate files. In the snapshot dependencies of the docker-compose build I have "Do not run new build if there is a suitable one" ticked, but it still tries to run all of the dependent builds even though there aren't any changes in their respective repos.
This turns what should be a very simple and quick build into a very long one. Often, one of the dependent builds will also fail with "could not collect changes: connection refused", and I suspect that has to do with TeamCity trying to hit all of these different repos at once.
Is there something I can do to not trigger a build of every dependency every time the docker-compose build is run?
Edit:
Here's an example of what our docker-compose.yml.j2 looks like: http://termbin.com/b2xy
Obviously, I've sanitized it for sharing, and our real docker-compose template has about a dozen services listed.
Here is an example Dockerfile for one of the services: http://termbin.com/upins
Rather than changing the source code of your build (the parameterized docker-compose.yml) and brute-forcing the build every time, you could consider building the containers independently, tagging them with a version increment and labels. After the build, store the images in a local registry. Then use docker-compose to suit your runtime needs: docker-compose can use multiple YAML files, so if you need other images for a particular build, just pull the ones you need, and for production use another YAML file that composes the system to run. Add LABEL instructions to your Dockerfile; see http://label-schema.org//rc1/ for a set of labels that may suit your needs.
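As an illustration, label-schema style labels can be baked in at build time; the base image and build args below are placeholders supplied by the CI job:

    FROM alpine:3.7
    ARG VERSION
    ARG VCS_REF
    LABEL org.label-schema.version="$VERSION" \
          org.label-schema.vcs-ref="$VCS_REF" \
          org.label-schema.schema-version="1.0"

At compose time you can then layer files, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d, where the second file only overrides the image tags that differ for that environment.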
I know this is an old question, but I have come across this issue, and you can't do what sounds reasonable, i.e. get recent green builds without rebuilding. This is partly because of what snapshot dependencies are designed to do by JetBrains.
The basic idea is that dependencies are for synchronized revisions of code: if you build Compose at a certain time, it needs to use not just its own source code from that point in time, but also the code of all its dependencies from that same point, regardless of whether anything significant has changed.
In my example, there were always changes, because the same repo was used for lots of projects and had unrelated changes that would not trigger a build but would make the project appear behind and cause a build.
If your dependencies have NOT changed and show no pending changes, then they shouldn't build. In this case, you need to tick "Do not run new build if there is a suitable one". "Enforce revisions synchronization" is slightly confusing: if ticked, it will find older builds that match the first build after your build was triggered; if unticked, it can use newer builds.
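In TeamCity's Kotlin DSL the same two settings look roughly like this (Service_Build is a placeholder build configuration id, and the DSL package version depends on your TeamCity version):

    import jetbrains.buildServer.configs.kotlin.v2019_2.*

    object ComposeBuild : BuildType({
        name = "docker-compose build"
        dependencies {
            snapshot(RelativeId("Service_Build")) {
                // "Do not run new build if there is a suitable one"
                reuseBuilds = ReuseBuilds.SUCCESSFUL
                // Unticked "Enforce revisions synchronization":
                // newer suitable builds may be reused
                synchronizeRevisions = false
            }
        }
    })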
I'm using TFS 2013 on premises. I have four build agents configured on a build machine. Several build definitions compile ASP.NET websites. I configured the MSBuild parameters to deploy the IIS application to the integration server, which sits out there in Rackspace.
By default, WebDeploy does differential deployments by comparing file dates. In my case that's a big plus, because copying files from our network to Rackspace takes quite some time. Now, in order to preserve file dates, the build agent has to compile the same base set of source code. On every build, only the changed source code yields new DLLs, minimizing the number of files deployed.
All of that works fine, with a caveat: a given build definition has to be assigned to a specific build agent (by agent name or tag). The problem is that this creates a lot of contention when all the builds assigned to the same agent are queued up; they wait in line until the previous build is done.
In an ideal world any agent should be able to take care of any build, but the source code being compiled has to be the same, regardless of the agent.
I tried changing the working folder of all agents to point to the same location but I get an error because two agents can't be mapped to the same folder. I guess there is one workspace per agent.
Any ideas?
Finally I found a way to do this. Here are all the changes you need to make:
By default the working folder of each agent is $(SystemDrive)\Builds\$(BuildAgentId)\$(BuildDefinitionPath). That means there's one working folder per BuildAgentId. I changed it so that all Agents share the same folder: $(SystemDrive)\Builds\WorkingFolder\$(BuildDefinitionPath)
By default, at runtime the workflow creates a workspace named like "[BuildDefinitionId][AgentId][MachineName]". Because all agents share the same working folder, there's an error when trying to create each separate workspace. The solution is in the build definition: edit the XAML and look for an activity called "Get sources from Team Foundation Version Control". There's a property called WorkspaceName. Since I want one workspace per build definition, I set that property to BuildDetail.BuildDefinition.Name.
Save your customized build template and create a build that uses it.
Make sure the option "1. TF VersionControl/1. Clean workspace" is set to False. Otherwise the build will wipe out all the source code on every build.
Make sure the option "2. Build/3. Clean build" is set to false. Otherwise the build will wipeout the output binaries on every build.
With this setup you can queue the same build on any agent, and all of them will point to the same source code and bin output. When the source code changes, only the affected binaries are recompiled. I have a custom step in the template that deploys the output files to IIS, to all the servers in our web farm, using msdeploy.exe. Now my builds plus deployments take one or two minutes, because only the DLLs or content that changed during the build are synchronized to the servers.
You can't run two build agents in the same folder. The point of build agents is to run multiple builds in parallel, usually on separate PCs. If you try to run them on the same source code, then (a) it's pointless, as two builds of exactly the same source should produce identical results, and (b) they are almost certainly going to trip over each other and cause the builds to fail or produce unexpected results.
If you want to be able to build and then deploy a series of versions of your codebase, then there are two options:
If you queue up multiple builds, the last one will "win", so the intermediate builds are of no real value. If you check in new code before your first build completes, you may as well stop the active build and start a new one. You should be asking yourself why the build is so slow, or why you are checking in changes so often that this is necessary.
If each build produces an incremental update to the deployed result, then you need to pass the output of your builds to some deployment agent that is able to diff it against the deployed version and send only the changes to be deployed. This could be set up to gather results from multiple build agents if that would be beneficial.
But I wonder if perhaps your build is slow because you are doing a complete build each time (which cleans the build folder, gets all the sources, and does a full rebuild), when what you want is an incremental build (which gets only the latest changes, compiles only what is affected, and completes quickly). Perhaps you should investigate making your build incremental.
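For example, with MSBuild the difference is just the target (a sketch; the solution name is a placeholder):

    rem Incremental: recompiles only projects whose inputs changed
    msbuild MySolution.sln /t:Build /p:Configuration=Release
    rem Full: cleans first, then rebuilds everything
    msbuild MySolution.sln /t:Rebuild /p:Configuration=Release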
If you follow some of MS's recommended branching strategies you can easily end up with a project structure such as:
$PROJECT\
    DEV\
        MyProject
    STAGE\
        MyProject
    PROD\
        MyProject
Now let's say I have three different build definitions, one each for DEV, STAGE, and PROD. This should be common, considering that the build definition defines the exact solutions to build.
If I turn on CI for each of them, STAGE will be built even though the check-in occurred in DEV...
Now my question: how can I limit the build definition to execute only when a check-in occurs in a path or a solution that is part of the build definition?
When defining the working folder on the configuration screen, have it start at the root of the branch you want to build.
For example, your DEV branch would be configured so that $/TEAMPROJECT/DEV/MyProject was mapped to $(SourceDir), rather than the default mapping, which would have been set to just $/TEAMPROJECT.
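That is, the workspace mapping would look like this rather than covering the whole team project:

    $/TEAMPROJECT/DEV/MyProject  ->  $(SourceDir)

With that mapping, check-ins under STAGE or PROD no longer fall inside the DEV definition's workspace, so its CI trigger ignores them.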
FYI: personally, I only have CI builds on DEV branches and queue manual builds for a push to QA. I also normally don't rebuild for production, but just push the build binaries that were QA'd. I keep my build configuration folder inside the branch, i.e. $/TEAMPROJECT/DEV/TeamBuild rather than the default $/TEAMPROJECT/TeamBuildTypes, so changes to the build configuration are also pushed up through the branches. That said, you have to stick with the default if you want the build configuration to be visible to VS2005 clients.
Hope that helps,
Martin.