Jenkins fails to pick a valid build number

I have an "official" Jenkins running on ServerA, and a copy (copy of the .jenkins folder) that runs on ServerB and is slated to become the new official Jenkins, so we are testing it. Both installations are active.
I found today that, when I try to start a build on the new server, it complains that build number 704 already exists, and indeed it does. So I wonder why it didn't pick a fresh number. The number it picks (and fails on) was indeed used on this new server nine days ago; that build ran to success.
As a test, I deleted build 704 and retried. It then picked 705 as a number, and is now running. However, a previous, conflicting build may be significant and should not be deleted, so this is not really a workaround...
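In case it helps diagnose: as far as I understand, the per-job counter lives in the job's nextBuildNumber file, so a Script Console sketch like this (the job name is a placeholder) should show whether the counter on the copied server fell behind its existing builds:

    // Script Console sketch: compare a job's build counter with its newest run.
    // 'MyJob' is a placeholder for the real job name.
    def job = Jenkins.instance.getItemByFullName('MyJob')
    println "nextBuildNumber: ${job.nextBuildNumber}"
    println "last build:      ${job.lastBuild?.number}"
    // If the counter fell behind, bump it past the last existing build:
    // job.updateNextBuildNumber(job.lastBuild.number + 1)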
Thanks for any clues!

Jenkins pipeline checking out to new workspace when previous build has been aborted

So I'm running into a specific issue: I have a Jenkins Declarative Pipeline (from an SVN-hosted Jenkinsfile) that is configured to not run concurrent builds and to abort the previous build when a new build is triggered.
This works perfectly fine. However, the problem I am running into is that Jenkins re-checks out the whole repository to a @2-suffixed workspace directory for the subsequent build (this ONLY happens when a build is automatically aborted after a new one is triggered; if the first build ends successfully, it re-uses the same directory).
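For context, the relevant part of the Jenkinsfile looks roughly like this (a simplified sketch; I'm expressing the abort-previous behaviour via the abortPrevious flag of disableConcurrentBuilds, which needs a reasonably recent Declarative Pipeline plugin, and the stage contents are placeholders for the real build steps):

    pipeline {
        agent any
        options {
            // No concurrent builds; a newly triggered build aborts the running one.
            disableConcurrentBuilds(abortPrevious: true)
        }
        stages {
            stage('Build') {
                steps {
                    checkout scm          // SVN checkout of the large repo
                    // ... compile, cook, upload ...
                }
            }
        }
    }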
I've seen a ton of threads stating that this is by design, but from what I can see that only applies when concurrent builds are enabled; since they're disabled here, I'm confused as to what could cause Jenkins not to re-use the same workspace directory.
If the "why" behind this requirement matters: I have a few large repositories (for Unreal Engine games specifically) that I need to build, and as an optimization for compile, cook and upload times it makes perfect sense to cancel old builds. Instead, Jenkins decides to clean-checkout 10+GB of game code and assets (20+GB in the case of some other games) into another folder because it can't reuse a folder that doesn't already have a job/build executing in it 😅.
Happy to accept all possible solutions/suggestions as I'm getting a lil' tired of pulling my hair out.
I was facing the same issue with my pipeline. I tried deleting the aborted builds and restarted Jenkins. I also deleted the @2 directories in my workspace and kept only the main directory. After this, I didn't face the issue again. This could happen because of the Jenkins cache. Make sure that your workspace correctly reflects the directory name referenced in your Jenkinsfile.
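If it helps, here is a small Script Console sketch (assuming the default workspace layout under $JENKINS_HOME/workspace on the built-in node; adjust the path for your agents) to spot leftover @2-style directories before deleting them:

    // Script Console sketch: list leftover "@2"-style workspace directories.
    // Default layout on the built-in node is assumed.
    def wsRoot = new File(Jenkins.instance.rootDir, 'workspace')
    def stray = wsRoot.listFiles()?.findAll { it.isDirectory() && it.name.contains('@') }
    stray?.each { println it }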

Missing build from jenkins

I ran a build yesterday, hoping I would read some logs today.
I came in today and got a 404 error when trying to access the build. Strange.
Running another build shows my build actually did run, but it is unreachable.
Is there a way to get my hands on the logs?
Notice build #10 is missing, even though it did start.
A Windows update is probably to blame for this.
The broken link is http://192.168.80.10:8080/job/Dev_git/10
More information on a run can usually be found using the context menu under Console Output. This is only accessible if you have the correct permissions set in Jenkins.
This, of course, does not work if a build is missing. One reason could be that your Jenkins is configured to keep only a certain number of historic builds; see Build History Missing in Jenkins for an explanation of how to deal with that.
However, your case seems to be different, because a build in the middle of the history is missing. For this, I suggest looking around in the jobs directory of your Jenkins installation, where it stores all the configuration and run data.
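A minimal Script Console sketch (using the job name from the URL in the question) to list the runs Jenkins still has on record and where each run's directory, including its log file, lives on disk:

    // Script Console sketch: list the runs Jenkins has on record for the job,
    // with the on-disk directory that holds each run's 'log' file.
    def job = Jenkins.instance.getItemByFullName('Dev_git')
    job.builds.each { b ->
        println "#${b.number}  ${b.result}  ${b.rootDir}"
    }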
References
https://wiki.jenkins.io/display/JENKINS/Administering+Jenkins
Where does Jenkins store configuration files for the jobs it runs?

How to stop TeamCity from rebuilding docker dependencies every time?

I have a TeamCity build project that parameterizes a docker-compose.yml template with the build versions of a dozen Docker containers. In order to get the build_counter from each container, I have them set as snapshot dependencies in the docker-compose build job. Each container's Dockerfile and other files live in their own BitBucket repo, with triggers for the appropriate files. In the snapshot dependencies of the docker-compose build I have set "Do not run new build if there is a suitable one", but it still tries to run all of the dependent builds even though there aren't any changes in their respective repos.
This turns what should be a very simple, quick build into a very long one. And oftentimes one of the dependent builds fails with "could not collect changes: connection refused", which I suspect has to do with TC trying to hit all of these different repos at once.
Is there something I can do to not trigger a build of every dependency every time the docker-compose build is run?
Edit:
Here's an example of what our docker-compose.yml.j2 looks like: http://termbin.com/b2xy
Obviously, I've sanitized it for sharing, and our real docker-compose template has about a dozen services listed.
Here is an example Dockerfile for one of the services: http://termbin.com/upins
Rather than changing the source code of your build (the parameterized docker-compose.yml) and brute-forcing your build every time, you could consider building the containers independently, tagging them with a version increment and labels. After the build, store the images in a local registry. Use docker-compose to suit your runtime needs: docker-compose can use multiple YAML files, so if you need other images for a particular build, just pull the other images you need. For production, use another YAML file that composes the system to run. Add LABEL to your Dockerfile; see http://label-schema.org//rc1/ for a set of labels that suit your needs.
I know this is an old question, but I have come across this issue, and you can't do what sounds reasonable, i.e. reuse recent green builds without rebuilding. This is partly because of what snapshot dependencies are designed to do, by JetBrains' intent.
The basic idea is that dependencies are for synchronized revisions of code: if you build Compose at a certain time, it needs to use not just its own source code as of that point in time, but also each dependency's code from the same point in time, regardless of whether anything significant has changed.
In my example, there were always changes because the same repo was used for lots of projects and had unrelated changes that would not trigger a build but would make the project appear behind and cause a build.
If your dependencies have NOT changed and show no changes pending, then they shouldn't build; in that case, make sure "Do not run new build if there is a suitable one" is ticked. "Enforce Revisions Synchronization" is slightly confusing: if ticked, it will find older builds that match the first build after your build was triggered; if unticked, it can use newer builds.

Jenkins: Keep older build running if new build fails to deploy

I'm new to Jenkins.
To that end I installed the latest version of Jenkins, i.e. 1.632, on my Ubuntu machine and deployed a WAR using post-build actions in the configuration. That worked fine for me.
Then I changed a few things in the build to make sure it fails on deployment, and it effectively did; now I'm not able to access the application due to the deployment failure.
But I'm curious: I have heard that in case of a build failure, Jenkins makes sure the previous build remains deployed so that the application is always up and running. Please clarify if I'm wrong or doing anything wrong in my deployment steps.
I searched a lot about this but couldn't find a valuable answer.
Haven't done much with the deploy plugin, but it states this in the docs:
Now when you trigger this job you can enter the build number (or use any other available selector) to select which build to redeploy.
So you can set up a build-on-failure step which redeploys the last stable version. Here is also an example of how to get the last stable build number:
http://<JENKINS>/job/<JOB_NAME>/lastStableBuild/buildNumber
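In modern Pipeline syntax, a rough equivalent (a sketch, assuming the Copy Artifact plugin; the artifact path and the deploy step itself are placeholders) would fetch and redeploy the last successful WAR whenever the current deployment fails:

    // Hedged sketch: on failure, copy the WAR archived by the last successful
    // run of this job and redeploy it. Assumes the Copy Artifact plugin and
    // that the job archives target/app.war; the deploy step is a placeholder.
    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    // ... normal deployment of the freshly built WAR ...
                }
            }
        }
        post {
            failure {
                copyArtifacts projectName: env.JOB_NAME,
                              selector: lastSuccessful(),
                              filter: 'target/app.war'
                // redeploy 'target/app.war' here with your usual deploy step
            }
        }
    }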

TFS 2013 build agents sharing common build folder

I'm using TFS 2013 on premises. I have four build agents configured on a build machine. Several build definitions compile ASP.NET websites. I configured the MSBuild parameters to deploy the IIS application to the integration server, which sits out in Rackspace.
By default, Web Deploy does differential deployments by comparing file dates. In my case that's a big plus, because copying files from our network to Rackspace takes quite some time. Now, in order to preserve file dates, the build agent has to compile the same base set of source code; on every build, only the changed source code yields new DLLs, minimizing the number of files deployed.
All of that works fine, with one caveat: a given build definition has to be assigned to a specific build agent (by agent name or tag). The problem is that this creates a lot of contention when all builds assigned to the same agent are queued up; they wait in line until the previous build is done.
In an ideal world any agent should be able to take care of any build, but the source code being compiled has to be the same, regardless of the agent.
I tried changing the working folder of all agents to point to the same location but I get an error because two agents can't be mapped to the same folder. I guess there is one workspace per agent.
Any ideas?
Finally I found a way to do this. Here are all the changes you need to do:
By default the working folder of each agent is $(SystemDrive)\Builds\$(BuildAgentId)\$(BuildDefinitionPath). That means there's one working folder per BuildAgentId. I changed it so that all Agents share the same folder: $(SystemDrive)\Builds\WorkingFolder\$(BuildDefinitionPath)
By default, at runtime the workflow creates a workspace named "[BuildDefinitionId][AgentId][MachineName]". Because all agents share the same working folder, there's an error when trying to create each separate workspace. The solution to this is in the build definition: edit the XAML and look for an activity called "Get sources from Team Foundation Version Control". It has a property called WorkspaceName. Since I want to have one workspace per build definition, I set that property to BuildDetail.BuildDefinition.Name.
Save your customized build template and create a build that uses it.
Make sure the option "1. TF VersionControl/1. Clean workspace" is set to False. Otherwise the build will wipe out all the source code on every build.
Make sure the option "2. Build/3. Clean build" is set to false. Otherwise the build will wipeout the output binaries on every build.
With this setup you can queue up the same build on any agent, and all of them will point to the same source code and bin output. When the source code changes, only the affected binaries are recompiled. I have a custom step in the template that deploys the output files to IIS, to all the servers in our web farm, using msdeploy.exe. Now my builds plus deployments take one or two minutes, because only the DLLs or content that changed during the build are synchronized to the servers.
You can't run two build agents in the same folder. The point of build agents is to run multiple builds in parallel, usually on separate PCs. If you try to run them on the same source code, then (a) it's pointless, as two builds of exactly the same source should produce identical results, and (b) they are almost certainly going to trip over each other and cause the builds to fail or produce unexpected results.
If you want to be able to build and then deploy a series of versions of your codebase, then there are two options:
if you queue up multiple builds, then the last one will "win", so the intermediate builds are of no real value. So if you check in new code before your first build completes, you may as well stop the active build and start a new one. You should also ask yourself why the build is so slow, or why you are checking in changes so often that this is necessary.
if each build produces an incremental update to the deployed result, then you need to pass the output of your builds to some deployment agent that is able to diff it against the deployed version and send only the changes to be deployed. This could be set up to gather results from multiple build agents if that would be beneficial.
But I wonder if perhaps your build is slow because you are doing a complete build each time (which cleans the build folder, gets all the sources, and does a full rebuild), when what you want is an incremental build (which gets the latest changes, compiles only what is affected, and completes quickly). Perhaps you should investigate making your build incremental.
