I have set up a CI build that also executes some tests.
In the GetSources step Clean is set to true.
I use a git repo.
When I run the build on a hosted agent, "getSources" takes about 20 seconds. When I run the build on an on-premises agent, "getSources" takes about 20 minutes!
I can see that the on-premises agent is mostly idle in terms of CPU and memory. I also verified that the network speed is around 50 Mbit/s.
Why does getSources take so long?
The Clean option has no effect for the Hosted agent.
No matter what you set for the Clean option (false, or true with "clean sources"/"all build directories"/"output directory", etc.), when you queue a build on a Hosted agent it only ever downloads the sources.
Suppose you set Clean to true with "all build directories". A private agent will then delete the entire working folder, which contains the sources folder, binaries folder, artifact folder, and so on. A Hosted agent, by contrast, only downloads the sources each time. So the execution time on a Hosted agent usually differs from that on a private agent.
To speed up builds on a private agent, you can try either of the following:
Set Clean to false (more efficient). Since you are queuing a CI build, if you set Clean to false the private agent will only update the files that have been modified/created/deleted in your local sources folder.
If you still need Clean set to true, clean sources only. That only cleans the files and subfolders of the sources folder (s/). But if your project is large, you are better off setting Clean to false (see the sketch below).
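For a git repo, the effect of the Clean setting corresponds roughly to the following git commands (this is a sketch of the behavior, not the agent's exact implementation; <commit> is a placeholder for the commit being built):

    # Clean = true ("clean sources"): discard local edits and untracked files,
    # so every file under s/ may have to be rewritten on the next checkout.
    git clean -ffdx
    git reset --hard HEAD
    git fetch origin && git checkout <commit>

    # Clean = false: incremental; only files that actually changed are updated.
    git fetch origin && git checkout <commit>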
To answer the "why?": it's the fact that you're copying all those files between your system and Microsoft's data center. I've experienced the same and bought some additional pipelines to accommodate.
I have a Jenkins setup running in production, and I want to automate the Jenkins setup (installation) along with all the jobs that are set up in Jenkins.
One crude way I can think of is to copy the whole jobs directory to the new Jenkins setup.
I want to know how other people in the industry deal with this problem.
I have used the Thinbackup plugin to move jobs, users, and plugins. You can make a full backup and restore it on the new server. The plugin is not perfect and is up for adoption. I had issues with the restore, so I ended up using the plugin only to create the archive, and then I manually copied the folders (users, jobs, plugins, nodes, email-templates, secrets, and the JENKINS_HOME files) from the archive to the new server.
Before creating the archive or copying the jobs, ensure that no more than 30 builds per job are kept; this keeps your archive small. I have seen 5000+ builds per job, which were totally unnecessary and blocked the creation of the archive.
When you create or restore the archive, or copy files, the server should be in quiet mode, so that no builds are executing:
http://<jenkins.server>/quietDown
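If you prefer to script the migration, quiet mode can be toggled with plain HTTP calls; a sketch, assuming an admin user and API token (both placeholders):

    # Enter quiet mode: running builds finish, no new builds start.
    curl -X POST -u admin:API_TOKEN "http://<jenkins.server>/quietDown"

    # Leave quiet mode once the migration is done.
    curl -X POST -u admin:API_TOKEN "http://<jenkins.server>/cancelQuietDown"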
After you copy the files or restore the archive, you should restart Jenkins, or better yet, restart the server.
Another option is to use rsync, as mentioned here. I am not sure what OS your Jenkins server runs; if it is Linux, you can check out this guide that I have written.
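A minimal rsync sketch for a Linux server (the JENKINS_HOME path /var/lib/jenkins and the host name newserver are assumptions; run it while the server is in quiet mode):

    # Copy the folders listed above from the old JENKINS_HOME to the new server.
    for d in jobs users plugins nodes email-templates secrets; do
      rsync -az --delete "/var/lib/jenkins/$d/" "newserver:/var/lib/jenkins/$d/"
    done
    # Top-level JENKINS_HOME config files (config.xml and friends).
    rsync -az /var/lib/jenkins/*.xml newserver:/var/lib/jenkins/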
yarn takes a lot of time on a VSTS hosted agent because we have more than a few dependencies.
Our monorepo contains three largely similar but separate apps which share a lot of node dependencies.
Each app is huge and takes considerable time to build, so we build each app individually based on a path filter.
The release contains artifacts from all three builds.
What I need:
download node modules once
use the same downloaded dependencies in three different conditional builds
release the app after all (or any) of the builds, with the latest artifacts from each
Any pointers on how to configure this?
There isn't any way to do this with the Hosted Agent. The Hosted Agent pool is a group of virtual machines hosted on Azure. Every time you queue a new build, an available agent from this pool is initialized with a clean environment, so the build machine you get may be different for every build. And when the build finishes, the files downloaded/generated during the build are also cleared. So there isn't any way to share files between builds.
I am noticing that every time I run one of my jobs in Jenkins, two files are created in the /workspace/build/distributions dir, with the extensions .tar and .tgz. Every time I run the job, another set of these files is created, so if I run the job 3 times, there will be 6 files altogether. I have noticed that during the dependency-check phase these artifacts slow things down, so I want to remove them automatically each time before the job runs. I have attempted the configs in the image below. I have also tried the Workspace Cleanup plugin, and that deleted the workspace completely, which is definitely not what I wanted.
What would be the best way to go about this?
Which SCM plugin are you using? Some SCM plugins allow you to clean the workspace before an update (e.g., SVN's "Emulate clean checkout" and Git's "Clean before checkout" options).
If you're not using an SCM plugin, can you remove the files in a batch/shell script during the first build step? (See the sketch below.)
Or perhaps you can approach it from the reverse direction: can you get rid of the files as the last build step of the job? That way they are gone when the next build comes along.
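For example, a minimal shell build step (the path assumes the build/distributions layout from the question and Jenkins's standard $WORKSPACE variable):

    # Remove archives left over from previous runs before the build proper.
    rm -f "$WORKSPACE/build/distributions/"*.tar \
          "$WORKSPACE/build/distributions/"*.tgz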
I'm trying to reduce build times, and right now Source Indexing and Symbol Publishing with TFS 2015 takes about an hour. Maybe indexing sources and publishing symbols is just heavy on disk I/O and bottlenecked there; I'm unsure. I want sources to continue to be indexed and symbols to continue to be published for this particular build, as it makes debugging dramatically easier.
Are there any ways to make source indexing and symbol publishing with TFS 2015 faster?
It's hard to reduce the time of the "Source Indexing/Symbol Publishing" task itself.
However, there are other ways to reduce the overall build time, such as setting Clean workspace to None. Changing the workspace setting from recreating a fresh workspace every time to an incremental one means the agent only downloads changed sources into the build workspace.
During the build process, the build agent compiles and does other work with your source files. Before the build agent can do this work, it downloads the files from folders on your version control server into a local working directory. To facilitate downloading these files, the build agent creates a version control workspace, which maps the folders on the server to local folders in the agent's working directory. If you set Clean workspace, the agent deletes the old files and re-downloads the sources on every triggered build, so setting Clean workspace to None reduces the build time.
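To make the mapping concrete, this is roughly what a build workspace looks like if created by hand with tf.exe (all names here are illustrative, not what the build agent literally runs):

    # Create a workspace and map a server folder to a local build folder.
    tf workspace /new BuildWs1 /collection:http://tfs:8080/tfs/DefaultCollection
    tf workfold /map "$/MyProject/Main" "C:\Builds\1\MyProject\src" /workspace:BuildWs1
    # Incremental get: downloads only files that changed since the last get.
    tf get /recursive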
Build time is also related to your server's hardware; improving the server's performance will reduce your build times as well.
I'm using TFS 2013 on premises. I have four build agents configured on a Build machine. Several build definitions compile ASP .NET websites. I configured the msbuild parameters to deploy the IIS application to the integration server, which sits out there in Rackspace.
By default, Web Deploy does differential deployments by comparing file dates. In my case that's a big plus, because copying files from our network to Rackspace takes quite some time. Now, in order to preserve file dates, the build agent has to compile the same base set of source code. On every build, only the changed source code yields new DLLs, minimizing the number of files deployed.
All of that works fine, with a caveat: a given build definition has to be assigned to a specific build agent (by agent name or tag). The problem is that this creates a lot of contention when all the builds assigned to the same agent are queued up; they wait in line until the previous build is done.
In an ideal world any agent should be able to take care of any build, but the source code being compiled has to be the same, regardless of the agent.
I tried changing the working folder of all agents to point to the same location, but I get an error because two agents can't be mapped to the same folder. I guess there is one workspace per agent.
Any ideas?
Finally, I found a way to do this. Here are all the changes you need to make:
By default the working folder of each agent is $(SystemDrive)\Builds\$(BuildAgentId)\$(BuildDefinitionPath), which means there is one working folder per BuildAgentId. I changed it so that all agents share the same folder: $(SystemDrive)\Builds\WorkingFolder\$(BuildDefinitionPath).
By default, at runtime the workflow creates a workspace named like "[BuildDefinitionId][AgentId][MachineName]". Because all agents share the same working folder, there is an error when each separate workspace is created. The solution is in the build definition: edit the XAML and look for an activity called "Get sources from Team Foundation Version Control". It has a property called WorkspaceName. Since I want one workspace per build definition, I set that property to BuildDetail.BuildDefinition.Name.
Save your customized build template and create a build that uses it.
Make sure the option "1. TF VersionControl/1. Clean workspace" is set to False. Otherwise the build will wipe out all the source code on every build.
Make sure the option "2. Build/3. Clean build" is set to false. Otherwise the build will wipeout the output binaries on every build.
With this setup you can queue the same build on any agent, and all of them will point to the same source code and bin output. When the source code changes, only the affected binaries are recompiled. I have a custom step in the template that deploys the output files to IIS, to all the servers in our web farm, using msdeploy.exe (see the sketch below). Now my builds plus deployments take one or two minutes, because only the DLLs or content that changed during the build are synchronized to the servers.
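For reference, a deployment step along those lines might look like the following msdeploy call (the site name, local path, and server name are placeholders; the actual template step is custom):

    # Sync the build output to one web-farm node; msdeploy compares the
    # source and destination and transfers only files that differ.
    msdeploy.exe -verb:sync \
      -source:contentPath="C:\Builds\WorkingFolder\MyApp\_PublishedWebsites\MyApp" \
      -dest:contentPath="MyApp",computerName=web01.example.com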
You can't run two build agents in the same folder. The point of build agents is to run multiple builds in parallel, usually on separate machines. If you try to run them on the same source code, then (a) it's pointless, as two builds of exactly the same source should produce identical results, and (b) they are almost certainly going to trip over each other and cause the builds to fail or produce unexpected results.
If you want to be able to build and then deploy a series of versions of your codebase, then there are two options:
If you queue up multiple builds, the last one will "win", so the intermediate builds are of no real value. If you check in new code before your first build completes, you may as well stop the active build and start a new one. You should be asking yourself why the build is so slow, or why you are checking in changes so often that this is necessary.
If each build produces an incremental update to the deployed result, then you need to pass the output of your builds to some deployment agent that can diff it against the deployed version and send only the changes to be deployed. This could be set up to gather results from multiple build agents if that would be beneficial.
But I wonder if your build is slow because you are doing a complete build each time (which cleans the build folder, gets all the sources, and does a full rebuild), when what you want is an incremental build (which gets the latest changes, compiles only what is affected, and completes quickly). Perhaps you should investigate making your build incremental.
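The difference is essentially the Rebuild vs. Build target in MSBuild (the solution name here is a placeholder):

    # Full rebuild: cleans outputs first, then compiles everything.
    msbuild MySolution.sln /t:Rebuild /p:Configuration=Release

    # Incremental build: compiles only projects whose inputs changed.
    msbuild MySolution.sln /t:Build /p:Configuration=Release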