Jenkins pipeline checking out to new workspace when previous build has been aborted

So I'm running into a specific issue: I have a Jenkins Declarative Pipeline (from an SVN-hosted Jenkinsfile) that is configured to not run concurrent builds and to abort the previous build when a new build is triggered.
This works perfectly fine. The problem I am running into is that Jenkins re-checks out the whole repository into an @2-suffixed workspace directory for the subsequent build (this ONLY happens when a build is automatically aborted after a new one is triggered; if the first build ends successfully, the same directory is reused).
I've seen a ton of threads stating that this is by design, but from what I can see that only applies when concurrent builds are enabled. Since they're not, I'm confused as to what could cause Jenkins not to reuse the same workspace directory.
If the "why" I require this is necessary, I have a few large repositories (for Unreal Engine games specifically), that I need to build and as an optimization measure for the time in compiling, cooking and uploading the game, it makes perfect sense to cancel old builds but instead Jenkins decides to clean checkout 10+GB of game code and assets (20+ in the case of some other games) in another folder becuase it can't reuse a folder that's not having a job/build executed in it already 😅.
Happy to accept all possible solutions/suggestions as I'm getting a lil' tired of pulling my hair out.
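For context, a minimal sketch of the configuration being described, assuming a recent Pipeline (workflow-job) plugin where disableConcurrentBuilds accepts abortPrevious:

```groovy
pipeline {
    agent any
    options {
        // Don't run builds concurrently; abort the running build
        // when a new one is triggered.
        disableConcurrentBuilds(abortPrevious: true)
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo building...'   // placeholder for the real build
            }
        }
    }
}
```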

I was facing the same issue with my pipeline. I tried deleting the aborted builds and restarted Jenkins. I also deleted the @2 directories in my workspace and kept only the main directory. After this, I didn't face the same issue. This can happen because of the Jenkins cache. Make sure that your workspace correctly reflects the directory name mentioned in your Jenkinsfile.
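If pinning the checkout location helps in your case, here is a sketch of doing that explicitly with customWorkspace; the label and path are illustrative:

```groovy
pipeline {
    agent {
        node {
            label 'build'                              // hypothetical agent label
            customWorkspace '/var/jenkins/ws/my-game'  // hypothetical fixed path
        }
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }   // every build reuses the same directory
        }
    }
}
```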

Related

Jenkins Deleting Workspace

I have Jenkins pipeline projects, and everything works fine as long as I run the project at least once per month. If I wait more than a month Jenkins will delete the workspace for that pipeline project, causing the project to do a brand new git checkout and compile. This results in a super slow build, since all of the intermediate object files/etc are regenerated from scratch.
I cannot find what setting in Jenkins is causing it to clean up these older workspaces. If I modify the pipeline to check out to a custom directory instead of the workspace directory then it works fine, so it doesn't appear to be the git plugin itself, or anything like that.
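(For the record, a minimal sketch of that custom-directory workaround using the ws step, with an illustrative path and build command:)

```groovy
node {
    // Check out to a fixed directory outside the default workspace,
    // so the heavy checkout survives whatever is cleaning old workspaces.
    ws('/var/builds/myproject') {
        checkout scm
        sh 'make'   // placeholder for the real build step
    }
}
```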
'Discard old builds' is disabled in the General settings for these projects.
Can someone point me to the setting that is causing 'older' workspaces to get cleaned up for some reason?

Jenkins - Deleting artifacts automatically

I am noticing that every time I run one of my jobs in Jenkins, two files are created in the /workspace/build/distributions dir, with the extensions .tar and .tgz. Every time I run the job, another set of these files is created, so if I run the job 3 times, there will be 6 files altogether. I have noticed that during the dependency-check phase these artifacts slow things down, so I want to remove them automatically before each run of this job. I have attempted the configs in the image below. In addition, I have tried the workspace cleanup plugin, and that deleted the workspace completely. That is definitely not what I wanted.
So, what would be the best way to go about this?
Which SCM plugin are you using? Some SCM plugins allow you to clean the workspace before an update (e.g. SVN's "Emulate clean checkout" and Git's "Clean before checkout" options).
If you're not using an SCM plugin, can you remove the files in a batch/shell script during the first build step?
Or perhaps you can go about it from the reverse direction. Can you get rid of the files as the last build step of the job? That way, they are gone when the next build comes along.
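A sketch of the first suggestion in pipeline form: remove the stale archives in a shell step before the build runs. The glob paths are taken from the question; the build command is a placeholder.

```groovy
pipeline {
    agent any
    stages {
        stage('Clean stale archives') {
            steps {
                // Delete leftover archives so the dependency check
                // only ever sees the current build's output.
                sh 'rm -f build/distributions/*.tar build/distributions/*.tgz'
            }
        }
        stage('Build') {
            steps {
                sh './gradlew assemble'   // placeholder for the real build
            }
        }
    }
}
```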

Revert to original configuration in Jenkins

I have a Jenkins server hosted, which has a master node and a couple of other slave configurations. Last night, the job that triggers the matrix-based build configuration failed. I did a restart and performed clean-up jobs via Jenkins, but none of that fixed the issue. The initial error that was logged was:
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
Following which I performed a reload of the configuration from disk, followed by a manual restart via <jenkins_job_url>/restart, which made the build system even worse. The master went offline due to lack of space in the /tmp folder, which I fixed by cleaning up the space. After that I observed that the original slave server configuration was no longer there: I still had slave-0 and slave-1, but slave-2 was gone and got replaced with a slave-3 configuration. The slave-0 and slave-1 builds seem to be working fine; however, slave-3's builds are failing with "Failed to mkdirs". Is there a way I could revert to the original configuration from where I started? The steps I performed seemed to make sense initially, but I had no idea they would have so many repercussions. Any help is appreciated.
UPDATE1: I guess I should have used one of the configuration backup plugins available for Jenkins, but is there some specific directory other than $JENKINS_HOME where these configurations get stored?
You should always back up ${JENKINS_HOME} before making major changes.
Even better is to have a job based on a time trigger that does this for you once in a while.
Other than that, only physically restoring the hard drive to a previous state will get back your old configs. Once a config is overwritten in Jenkins, it is gone, except when you are using the Job Config History plugin. Though keeping manually created backups is better in my opinion: where's the insurance that JobConfigHistory won't disappear along with the job configs? :)
Aside from that, the mentioned plugin tracks system config too.
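A sketch of such a time-triggered backup job, assuming a Unix controller and an existing /backups directory (label, schedule and paths are all illustrative):

```groovy
pipeline {
    agent { label 'built-in' }        // hypothetical label for the controller
    triggers { cron('H 2 * * *') }    // run once a night
    stages {
        stage('Backup config') {
            steps {
                // Archive the system config and every job's config.xml.
                sh '''
                    cd "$JENKINS_HOME"
                    tar czf /backups/jenkins-config-$(date +%F).tar.gz \
                        *.xml jobs/*/config.xml
                '''
            }
        }
    }
}
```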
As mentioned by @Zloj, there is no easy way to repair things once the changes get overwritten. I ended up fixing the issues by deleting the slaves that were not working, remapping the existing builds to newer slaves that I created as copies of the working ones, reducing the number of builds (by removing the ones from the matrix that aren't required) and, finally, taking a backup via the https://wiki.jenkins-ci.org/display/JENKINS/thinBackup plugin and backing up the configuration in Stash :)
For Windows, just delete the .jenkins folder in your home directory. This will revert you to the original settings.
We have been using the SCM Sync Configuration plugin and it has saved our butts many times. It stores all job configuration, including the global config, in Bitbucket. The latest plugin will say that it's no longer maintained, but we were able to pull the source code from GitHub and rebuild it ourselves.
One word of caution: don't use global variables for storing passwords and keys, because this plugin will sync them all to GitHub. Strictly use Jenkins Credentials.
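In pipeline terms, that advice might look like this sketch: bind the secret at the point of use instead of storing it in a global variable. 'deploy-token' is a hypothetical credentials ID and the curl call is illustrative.

```groovy
node {
    withCredentials([string(credentialsId: 'deploy-token', variable: 'TOKEN')]) {
        // Single-quoted so the secret is expanded by the shell, not Groovy.
        sh 'curl -H "Authorization: Bearer $TOKEN" https://example.com/deploy'
    }
}
```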

TFS 2013 build agents sharing common build folder

I'm using TFS 2013 on premises. I have four build agents configured on a Build machine. Several build definitions compile ASP .NET websites. I configured the msbuild parameters to deploy the IIS application to the integration server, which sits out there in Rackspace.
By default webdeploy does differential deployments by comparing file dates. In my case that's a big plus, because copying files from our network to Rackspace takes quite some time. Now, in order to preserve file dates, the build agent has to compile the same base set of source code; on every build only the sources that changed yield new DLLs, minimizing the number of files deployed.
All of that works fine, with a caveat: a given build definition has to be assigned to a build agent (by agent name or tag). The problem is that this creates a lot of contention when all builds assigned to the same agent are queued up: they wait in line until the previous build is done.
In an ideal world any agent should be able to take care of any build, but the source code being compiled has to be the same, regardless of the agent.
I tried changing the working folder of all agents to point to the same location but I get an error because two agents can't be mapped to the same folder. I guess there is one workspace per agent.
Any ideas?
Finally I found a way to do this. Here are all the changes you need to make:
1. By default the working folder of each agent is $(SystemDrive)\Builds\$(BuildAgentId)\$(BuildDefinitionPath), i.e. one working folder per BuildAgentId. I changed it so that all agents share the same folder: $(SystemDrive)\Builds\WorkingFolder\$(BuildDefinitionPath).
2. By default, at runtime the workflow creates a workspace named "[BuildDefinitionId][AgentId][MachineName]". Because all agents share the same working folder, creating each separate workspace fails. The solution is in the build definition: edit the XAML and look for the activity called "Get sources from Team Foundation Version Control". It has a property called WorkspaceName. Since I want one workspace per build definition, I set that property to BuildDetail.BuildDefinition.Name.
3. Save your customized build template and create a build that uses it.
4. Make sure the option "1. TF VersionControl/1. Clean workspace" is set to False. Otherwise the build will wipe out all the source code on every build.
5. Make sure the option "2. Build/3. Clean build" is set to False. Otherwise the build will wipe out the output binaries on every build.
With this setup you can queue up the same build on any agent, and all of them will point to the same source code and bin output. When the source code changes only the affected binaries are recompiled. I have a custom step in the template that deploys the output files to IIS, to all the servers in our webfarm, using msdeploy.exe. Now my builds+deployments take one or two minutes, because only the dlls or content that changed during the build are synchronized to the servers.
You can't run two build agents in the same folder. The point of build agents is to run multiple builds in parallel, usually on separate PCs. If you try to run them on the same source code, then (a) it's pointless, as two builds of exactly the same source should produce identical results, and (b) they are almost certainly going to trip over each other and cause the builds to fail or produce unexpected results.
If you want to be able to build and then deploy a series of versions of your codebase, then there are two options:
If you queue up multiple builds, the last one will "win", so the intermediate builds are of no real value. So if you check in new code before your first build completes, you may as well stop the active build and start a new one. You should be asking yourself why the build is so slow, or why you are checking in changes so often that this is necessary.
if each build produces an incremental update to the deployed result, then you need to pass the output of your builds to some deployment agent that is able to diff it against the deployed version and send only the changes to be deployed. This could be set up to gather results from multiple build agents if that would be beneficial.
But I wonder if perhaps your build is slow because you are doing a complete build each time (which cleans the build folder, gets all the sources, and does a full rebuild), when what you want is an incremental build (which gets only the latest changes, compiles only what is affected, and completes quickly). Perhaps you should investigate making your build incremental.

Hudson build scripts location - recommendation?

I'm already finishing my project build automation :) with Hudson and NAnt.
My project structure is something like
$/Project
    build.scripts
        script1.build
        script2.build
        build.properties.xml
    Code
        Project1
        Project2
So Hudson downloads from the root $/Project to the workspace folder.
And everything is OK: since the build scripts are in the workspace, I can run them very easily. However, what is bugging me is that, because the build scripts are inside the workspace, I can't set Hudson to run automatically, either on a schedule or on changes, because it will always detect changes to the files (note build.properties.xml, which I check out and check in at build time to store some stats).
Where do you recommend putting these files so that they still get the advantage of being source-controlled?
What I ended up doing was NOT checking in changes to those files. I changed my CI workflow to write the changes to another file that is local to the workspace only.
This way, I still get the last-build info written somewhere I can pick it up, and I avoid the issue of Jenkins detecting the change.
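In today's Jenkins Pipeline terms, that workaround might look like this sketch; 'build.stats.local' is a hypothetical file name for the workspace-local stats file that is never checked in:

```groovy
node {
    // Record build stats in a local file instead of committing
    // build.properties.xml back to SVN, so SCM polling sees no change.
    writeFile file: 'build.stats.local',
              text: "lastBuild=${env.BUILD_NUMBER}\n"
}
```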
PS: I changed from Hudson to Jenkins since I saw that most plugins ran away from the former. The transition was too smooth to be true.
