I've been trying to make my Jenkins builds a one-stop shop for everyone, to make things easy for QA and for on-boarding new team members, whether they're business, QA, dev, etc...
I currently use HTML reports to publish links to various documents about how to deploy a build, how to test a build, design docs, etc...
Unfortunately, many of our builds are 1-2 gigs in size. For each HTML report you publish, that plugin copies the build repo into a report folder under html reports.
So for a product whose build uses 1.7 gigs... when I publish 21 reports, that's an additional 22 gigs of disk space. And that is per build: if you keep 10 builds and have "keep old reports" enabled it gets even worse. I've turned that option off, but the disk space usage is still a problem...
Then multiply that by 20 different jobs, and suddenly disk space becomes a serious problem.
So... how do I publish external links, HTML reports, etc. without using so much extra space?
We have tried using the build description before; we had trouble making it readable, but it did work for the static links without the disk space problem. For the HTML report we generate during the build, that did not work in our initial attempts. If that really is the best option, we can give it another go.
A couple of solutions come to mind.
Create and deploy a single page that has links to where the other documentation can be found (a sketch of this follows below).
Check out the Sidebar-Link Plugin. It allows you to put links in the left menu bar (sidebar).
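If you go with the first suggestion, the page only needs to contain links, so it stays a few kilobytes instead of gigabytes. Here is a minimal sketch of generating such a page in a shell build step - the file name and URLs are placeholders, not real ones - which you can then publish as your single, tiny HTML report (or hang off a sidebar link):
# generate a small links-only page during the build; publish only this file
cat > docs-index.html <<'EOF'
<html><body>
<h1>Build documentation</h1>
<ul>
  <li><a href="https://wiki.example.com/deploy-guide">How to deploy this build</a></li>
  <li><a href="https://wiki.example.com/test-plan">How to test this build</a></li>
  <li><a href="https://wiki.example.com/design-docs">Design documents</a></li>
</ul>
</body></html>
EOF
Since the report folder then contains only this one small file, the per-build copy the HTML Publisher makes shrinks to almost nothing.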
I don't understand what a promoted build really is and how it works. Can someone please explain it to me like I'm a 10-year-old? Some sample examples would help me a lot.
Thanks
In a typical software development organization with a CI system, there are tens or hundreds of continuous builds daily. Only one of those builds (usually the latest stable one) is selected and "promoted" to be a Release Candidate (RC), which goes to the next quality gate - usually the QA department. They then select one of those RCs (the others are dropped) and again "promote" it to the next level - either to a staging environment, validation, etc. Then, finally... one of these builds is again "promoted" to be an official release.
Why is that important?
Visibility: You would want to distinguish many "regular", continuous builds from few, selected "RC" builds.
Retention: If you commit often (which is the best practice), you will likely get a lot of daily builds and will want to implement a retention policy (e.g. only keep the last 100 builds, or only builds from the last 7 days). You will then want to make sure promoted builds (RCs) are locked against retention. This is mostly important if you deploy binaries to customers and may need the exact binary to reproduce an escaped bug in the future (even though you still have the source code in the repository, I've seen cases where escaped bugs relate to the build process rather than the source code - due to rapid changes to the build process, or time-of-build sensitive data like digital signatures).
Permissions: you may want to prevent non-developers from accessing builds with "half-baked" features.
Binary Repositories: you may want to publish only meaningful builds to an external binary repository.
Builds in Jenkins can be "promoted" either manually or automatically, using plugins like Promoted Builds Plugin. You can also create your entire "promotion" workflow using pipeline scripts. Here's an example:
a "Continuous" job that polls SCM and builds on every change. It has a retention policy to keep only the last 50 builds. Access is restricted only to developers;
a "Release Candidates" job that copies artifacts from a manually selected build (using parameters). Access is allowed to QA testers;
a "Releases" jobs that copies artifacts from a manually selected RC. Access is allowed to the entire organization. Binaries are released to external/public repository.
I hope this answers your question :-)
Is there a Jenkins plugin to ZIP old builds? I don't want to package the generated archive (I am deleting those); I want to ZIP just the log data and the data used by tools like FindBugs, Checkstyle, Surefire, Cobertura, etc.
Currently I am running out of disk space due to Jenkins. There are some build log files that reach 50 MB due to running 3000+ unit tests (most of these are severely broken builds full of stack traces in which everything fails). This happens frequently in some of my projects, so I get this for every bad build. Good builds are milder and may reach around 15 MB, but that is still a bit costly.
The Surefire XML files for these are huge too. Since they tend to contain very repetitive data, I could save a lot of disk space by zipping them. But I know of no Jenkins plugin for this.
Note: I am already deleting old builds not needed anymore.
The 'Compress Buildlog' plugin does pretty much exactly what you're asking for, for the logs themselves at least.
https://github.com/daniel-beck/compress-buildlog-plugin
For everything else, you'll probably want an unconditional step after your build completes that manually applies compression to other generated files that will stick around.
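For the Surefire XML specifically, one low-tech option (a sketch, assuming the usual Maven target/surefire-reports layout) is to bundle the raw report files into a single compressed archive at the end of the build and archive that tarball instead of the raw XML - the JUnit publisher still parses the uncompressed copies from the workspace as usual:
# bundle bulky test-report XML into one gzipped tarball and archive that, not the raw files
tar czf "surefire-reports-${BUILD_NUMBER}.tar.gz" -C target surefire-reports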
The Administering Jenkins guide gives some guidance on how to do this manually. There are also links to the following plugins:
Shelve Project
thinBackup
The last one is really designed to back up the Jenkins configuration, but there are also options for build results.
Although this question was asked nearly 3 years ago, other people may be searching for the same thing, so here is my answer.
If you want to compress the current build job's log, use this Jenkins plugin.
If you want to compress logs from old Jenkins jobs, use the following script (-mtime +5 means the file was last modified more than 5 days ago):
cd "$JENKINS_HOME/jobs"
find * -name "log" -mtime +5|xargs gzip -9v '{}'
I am in the process of setting up continuous integration in our TFS system. One major part of our system is the development of about 50 DotNetNuke modules to support our CMS infrastructure. Right now, each of those projects has its own solution, since their code bases are mostly siloed (with common code in 1 or 2 shared projects). Keeping them in their own solutions is done because it makes the development process faster (loading, compiling, etc.).
However, this has proven difficult to maintain when setting up TFS Team Build, as each solution has to be manually added to the build definition, and MSBuild seems unable to take advantage of parallel compilation because each project is in its own solution. This results in full build times of about 5 minutes, which, while not horrible, isn't ideal. Mostly, though, it's not ideal from a build definition maintenance standpoint.
To solve this I created a global solution that includes all the projects. The idea is that if you want your project to be automatically compiled and deployed by TFS, you have to include your project in the global solution. This seems to work well: it's easy to maintain from a build definition standpoint and brings the total build time down to 70 seconds.
The one problem is that the displayed TFS build log groups all warnings and errors together under the solution instead of separating them out by project. This makes it difficult to quickly see what project caused which errors and warnings.
Is there a good way to see project level error/warning messages in the build log summary view without delving into the cluttered build log?
To answer your direct question, I believe the answer is no (at least not without some heavy customization).
For me this is never a big concern as I am pretty aggressive about getting my teams to bring errors/warnings down to zero, then enforcing it via TFS Build (/p:TreatWarningsAsErrors=true). This means you should never have to wade through hundreds of warnings in the build summary.
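For reference, the switches involved on a plain MSBuild invocation look something like the line below; Global.sln is just a stand-in for your all-projects solution, and in a TFS build definition you would put the property in the MSBuild arguments field. /m turns on parallel project builds and the property makes every compiler warning fail the build:
msbuild Global.sln /m /p:TreatWarningsAsErrors=true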
If you add all your individual solutions to the build definition, you can always use the TFS Power Tools to "clone" a build def to make maintenance easier. You could also modify the Build Template to build the solutions/projects in parallel, although this runs the risk of having file contention issues.
When we build our MVC app, we have a build process that pushes the site onto our UAT box.
Once published, we would like to run an automated tool that crawls all the links in the app and checks for broken links and any other issues (such as usability/accessibility, etc.).
What tool(s) exist that will crawl a site and generate a report on broken links, etc.?
Can it be integrated in to our CI (TFS) Build?
In the absence of any alternative advice, we went with this solution:
http://wummel.github.com/linkchecker/
It's working well for us. One of the key advantages of this system is that it has a command-line mode with lots of options AND it can produce multiple format reports (HTML, Sitemap, CSV, XML) in a single crawl. Other tools I've used in the past had to run multiple times, consuming more bandwidth and time.
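As a rough illustration, a single run can emit several report formats at once - treat this as a sketch, since flag spellings vary between LinkChecker versions (check linkchecker --help) and the UAT URL is a placeholder:
linkchecker --file-output=html --file-output=xml --file-output=csv http://uat.example.com/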
The beauty of this is that we can add it to our build process and then automate the output. When the build is pushed to UAT, LinkChecker runs. When it has finished, the HTML output is emailed to us and the XML output is parsed. All 404 errors are raised as bugs in TFS.
Quite a nice solution.
Our project group has stored binary files of the project we are working on in an SVN repository for over a year. In the end our repository grew out of control; taking backups of the SVN repo became impractical at one point, since each binary that is checked in is around 20 MB.
Now we have switched to TFS. We are not responsible for backing up the repository; our IT team takes care of it, and we have more network and storage capacity for backups because of that. But we still want to decide what to do with the binaries. As far as I know TFS stores deltas, but for binary files those deltas will be huge, and we might end up reaching our disk space quota one day. I would like to plan things better from the start; I don't want to get caught in a bad situation when it's too late to fix the problem.
I would prefer not to keep builds in source control, but our project group insists on keeping a copy of every binary for reproducing the problems that we see in the production system. I can't get them to pull the source code from TFS, build it, and create the binary, because according to them it is not straightforward.
Does TFS offer a better build versioning method? If someone can share some insight I'd really be grateful.
As a general rule you should not be storing build output in TFS. Occasionally you may want to store binaries for common libraries used by many applications, but tools such as NuGet get around that.
Build output has a few phases of its life and each phase should be stored in a separate place. e.g.
Build output: When code is built (by TFS / Jenkins / Hudson etc.) the output is stored in a drop location. This storage should be considered volatile as you'll be producing a lot of builds, many of which will be discarded.
Builds that have been passed to testers: These are builds that have passed some very basic QA, e.g. the code compiles, the static analysis tools are happy, and the unit tests pass. Once a build has been deemed good enough to be given to test, it should be moved from the drop location to another area. This could be a network share (non-production, as the build can be reproduced). There may be a number of builds that get promoted during the lifetime of a project, and you will want to keep track of which versions the testers are using in each environment.
Builds that have passed test and are in production: Your test team deems the build to be of a high enough quality to ship. As part of your go-live process, you should take the build that has been signed off by test and store it in a third location. In ITIL speak this is a Definitive Media Library (DML). This can be a simple file share, but it should be considered "production" and have the same backup and resilience criteria as any other production system.
The DML is the place where you store the binaries that are in production (and associated configuration items such as install instructions, symbol files etc.) The tool producing the build should also have labelled the source in TFS so that you can work out what code was used to produce the binary. Your branching strategy will also help with being able to connect the binary to the code.
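As a concrete sketch of that hand-off (all paths and the version number here are hypothetical), the go-live step can be as simple as copying the signed-off drop into a version-stamped, read-only folder on the DML share:
VERSION=2.3.0.117                        # the build/label number that test signed off on
DROP="/mnt/builds/MyApp/$VERSION"        # volatile build drop location
DML="/mnt/dml/MyApp/$VERSION"            # Definitive Media Library share, backed up like production
mkdir -p "$DML"
cp -r "$DROP"/. "$DML"/
chmod -R a-w "$DML"                      # guard against accidental modification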
It's also a good idea to have a "live-like" environment, separate from your regular dev and test environments. As the name suggests, it contains only the code that has been released to production. This enables you to quickly reproduce production bugs.
Two methods that may help you:
Use the Team Foundation Build system. One of the advantages is that you can set up retention policies for finished builds. For example, you can tell TFS to store the 10 latest successful builds and the two latest failed ones. You can also tell TFS to keep certain builds (e.g. "production builds"/final releases) indefinitely. These binary folders can of course also be backed up externally, if needed.
Use a different collection for your binaries, with a different (less frequent) backup schedule. TFS needs to back up whole collections, but by separating out data that doesn't change as frequently as the source you can lower the backup cost. This of course depends on how frequently you are required to back up the binaries.
You might want to look into creating build definitions in TFS to give your project group an easy 'one button' push to grab the source code from a particular branch and then build it and drop it to a location. That way they get to have their binaries, and you don't have to source control them.
If you are using a branching strategy where you create Release or RTM branches when you push something to production, then you can point your build definitions at those branches and they can manually trigger them from the TFS portal or from within Visual Studio.