I have a Jenkins pipeline build that reports on Cucumber tests using the "CucumberReportPublisher". When I delete tests or refactor a whole feature, the old tests often hang around in the Jenkins test report, showing as "skipped".
Is there a way to make Jenkins/CucumberReportPublisher forget about these old tests and stop reporting them as skipped?
It sounds like you don't have a clean environment when you build your project.
I would make sure that Jenkins deletes the workspace for the job and checks out the entire project from scratch for every build. I don't have a Jenkins instance to look at here, but there are different checkout options available for the job near the version control settings. Choose one that deletes the workspace before checkout.
Another option is to clean as the first step in your build. Assuming that you use Maven, it could look like this:
mvn clean deploy
This may solve your problem with ghost tests hanging around after deletion. But it may not solve your problem with a dirty workspace.
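If the reports are generated as JSON files in the workspace, stale result files from earlier runs can explain the ghost tests, since the publisher keeps picking them up. A minimal sketch of a cleanup step to run before the tests, assuming the reports land in target/cucumber-reports (an assumption; adjust the path to wherever your runner writes them):

# Remove stale Cucumber JSON results so the publisher only sees this run's tests.
rm -f target/cucumber-reports/*.json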
Related
I have been searching far and wide for information on Jenkins incremental pipeline builds that do not involve Maven.
The general idea is that I want to build a generic project and run specific steps of the pipeline only if the underlying code has changed. If the code did not change, I want to reuse the results from a previous build.
The reason I want to do this is to drastically reduce build times for huge projects.
Imagine that you only need to fix one line in an SCSS file, but the whole project needs to be rebuilt, repackaged, etc. because of it. In the meantime, the site is live and broken, waiting 15 minutes for the fix.
Can someone give a basic example of how such a build can be created or where I can find more information on incremental building?
The only thing I have been able to find is incremental building for Maven projects, but that is not applicable in my case.
The standard solution is to create modules that depend on each other.
Publish the built artifacts of your modules to a binary repository like Sonatype Nexus (you can easily create a private npm repository as well as a proxy npm repository).
During the build, download the dependencies instead of building them.
If this solution is not the one you want to take, you will have a hard time hacking something together. To persist the state of your steps, an easy approach is to create files in the job workspace and read them back at the next build, as sketched below.
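A minimal sketch of that file-based approach as a shell build step, assuming a checksum over the source tree is a good enough change signal (the src path, the .last_build_sum state file, and build.sh are all illustrative names):

# Hash the sources and compare with the hash the previous build stored.
# Requires the job workspace to persist between builds.
NEW_SUM=$(find src -type f -print0 | sort -z | xargs -0 sha1sum | sha1sum | cut -d' ' -f1)
OLD_SUM=$(cat .last_build_sum 2>/dev/null || true)
if [ "$NEW_SUM" = "$OLD_SUM" ]; then
    echo "Sources unchanged, reusing output of previous build"
else
    ./build.sh                        # hypothetical expensive build step
    echo "$NEW_SUM" > .last_build_sum
fi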
Problem:
We are running Selenium tests using release pipelines. If the environment deployment that runs those tests is cancelled, the drivers might not be killed, which locks the working folder. So when the deployment happens again on the same environment within the release definition (it does not matter whether it is a new release or a redeployment), the release agent throws an error that the working folder is locked.
We do have a PowerShell task with an inline script that does the cleanup (it is inline so there are no dependencies), but unfortunately the TFS release pipeline tries to download the artifacts into the locked folder before running the mentioned PowerShell script.
Is there a way to execute an inline PowerShell script before the release pipeline downloads the artifacts?
We do have a partial solution that uses multiple phases, but this only works as long as the deployment queue is not busy, and we are getting to the point where it will be. When the queue is busy, TFS might pick different agents for different phases of a specific environment deployment, so this approach falls apart. Hence a bonus question: alternatively, is it possible to lock the agent to a specific environment deployment so that the agent does not change between phases?
I searched for both solutions and it looks like there are no out-of-the-box options, or did I miss one? If not, is there some creative way to achieve either of these?
You're approaching this from the wrong end. If the process failed, it needs to clean up. Thus, add a task at the end of the release with a condition of canceled() (or perhaps not(succeeded())) to perform your cleanup operations.
Also, you didn't specify what language you're doing your Selenium testing in, but in C# you can wrap your webdriver creation in a using block to ensure it properly disposes of the driver. There are presumably similar constructs or patterns in other languages. Basically, "if the web driver goes out of scope, clean it up, period".
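As an illustration of what such a cleanup task can do, here is a shell sketch; it assumes a Linux agent for readability (on a Windows agent the same idea would be an inline PowerShell script), and the process names depend on which drivers you actually launch:

# Kill any WebDriver processes the cancelled run left behind,
# so they stop locking the working folder; ignore errors if none exist.
pkill -f chromedriver || true
pkill -f geckodriver || true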
I had a similar issue with downloading artifacts. You can disable that step by clicking on the environment name, expanding Additional options, and selecting "Skip artifact download".
JENKINS
I am noticing that every time I run one of my jobs in Jenkins, two files are created in the /workspace/build/distributions dir, with the extensions .tar and .tgz. Every time I run the job, another set of these files is created. So if I run the job three times, there will be six files all together. I have noticed that during the dependency check phase, these artifacts slow things down. Therefore, I want to remove them automatically each time before this job runs. I have attempted various configs for this. In addition, I have tried the workspace cleanup plugin, and that completely deleted the workspace, which is definitely not what I wanted.
What would be the best way to go about this?
What SCM plugin are you using? Some SCM plugins allow you to clean the workspace before an update (e.g. SVN's "Emulate clean checkout" and Git's "Clean before checkout" options).
If you're not using an SCM plugin, can you remove the files in a batch/shell script during the first build step?
Or perhaps you can go about it from the reverse direction: get rid of the files in the last build step of the job, so they are gone when the next build comes along.
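Either way, the step itself can be a one-liner (the path matches the one from the question; adjust it if your workspace layout differs):

# Remove archives left over from previous runs so the dependency
# check doesn't scan stale artifacts.
rm -f build/distributions/*.tar build/distributions/*.tgz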
I have checked out an SVN repository into my directory. The project takes a lot of time to check out completely. So while creating my Hudson job, I need these steps in this order:
Clean up the directory (this resolves some ambiguous problems such as "Hudson workspace locked while building")
Revert the changes
Update
The choices that I have for Check-out Strategy, in the Hudson job creation form, are:
Use svn update as much as possible, with 'svn revert' before update
Use 'svn switch' as much as possible
Use 'svn update' as much as possible
Clean checkout folders and then checkout
Emulate clean checkout by first deleting unversioned/ignored files, then 'svn update'
Clean workspace and then checkout (Eliminated)
What is the right option for my case?
Thanks a lot!
If your build is done correctly, you should be able to simply use 'svn update' as much as possible. This is the fastest way to update your files. It means not modifying committed files, and not placing build artifacts in directories where they will interfere with the build process. In a Java shop, simply keep all built objects in a subdirectory (we use target to match Maven, but others use build or dist) and out of the way of the rest of the process.
Most people do a clean as part of their build step. In theory, this should not be necessary, and doing so lengthens build times. The idea of the build is not to do any unnecessary work: if a source file isn't changed, the corresponding object file should not need to be rebuilt. However, Java compiles quickly enough that most Java projects simply wipe the build directory clean. In C projects, not deleting old objects is better since it really reduces build time.
If there is a problem with your build process where 'Use svn update as much as possible' can't work, you should fix your build process. However, there are a couple of projects on our old Jenkins server that do have problems, and they simply aren't updated often enough to worry about it. For those, I always check out a fresh copy. This takes the longest, but if you're having problems with your build process, I wouldn't bother with 'Emulate clean checkout by first deleting unversioned/ignored files' or 'svn revert' before update. These can cause update conflicts and problems with your build. Either get the build working correctly, or do a clean checkout.
I would go with "Use svn update as much as possible, with 'svn revert' before update". If that is not sufficient, check out the EnvInject Plugin. It can run a script before the SCM checkout happens. You can use it to run an svn cleanup for your job before the Subversion plugin takes over with the revert and the update. Caveat: you need to install some kind of SVN command-line client on your build server.
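A pre-SCM script for that can be as small as the following, using Jenkins' standard WORKSPACE environment variable and assuming the svn client is on the node's PATH:

# Release stale SVN working-copy locks before the Subversion plugin
# runs its revert and update.
svn cleanup "$WORKSPACE"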
I'm working on my first Rails app and am struggling to find an efficient and clean solution for doing automated checkouts and deployments.
So far I've looked at both CruiseControl.rb (being familiar with CruiseControl.NET) and Capistrano. Unfortunately, unless I'm missing something, each of them only does about half of what I want (and each a different half).
From what I've seen so far:
CruiseControl
Strengths
Automated checkout and build upon each commit
Also runs unit/functional tests and reports back
Weaknesses
No built-in deployment mechanisms (best I can find so far is writing your own bash scripts)
Capistrano
Strengths
Built for deployments
Weaknesses
Has to be kicked off via a command (i.e. doesn't do automated checkouts upon commit)
I've found ways to string the two together: have CruiseControl ping the repository for changes, do a checkout upon commit, run the tests, etc., and then call Capistrano when finished to do the deployment (even though Capistrano is also going to do a repository checkout). A sketch of that glue follows.
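This is what such a glue script might look like as the build command CruiseControl runs on each commit (the task names are assumptions; substitute your own test task and Capistrano recipe):

#!/bin/sh
set -e          # abort the deploy if the tests fail
rake test       # run the unit/functional tests
cap deploy      # hand off to Capistrano, which checks out on the servers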
Basically, when all is said and done, I'd like to have three projects set up:
Dev: Checkout/Deployment is entirely no touch. When someone commits a file, something checks it out, runs the tests, deploys the changes, and reports back
Stage: Checkout/Deployment requires a button click
Prod: Button click does either a tagged checkout or moves the files from stage
I have this working with a combination of CruiseControl.NET and MSBuild in the .NET world, and it was fairly straightforward. I would guess this is also a common pattern in the Ruby deployment world, but I could easily be mistaken.
I would give Hudson a try (free and open source). I started off using CruiseControl but got sick of having to relearn the XML configuration every time I needed to change a setting or add a project. Then I started using Hudson and never looked back. Hudson is more or less completely configurable over the web. It was initially a continuous integration tool for Java but has plugins for other development stacks such as .NET and Ruby on Rails. There's a Rake plugin, and if that doesn't work, you can configure it to execute any arbitrary command line after running your Rake builds/tests.
I should also add it's extremely easy to get Hudson going:
java -jar hudson.war
Or you can drop the war in any servlet container.
I would use two systems to build and deploy anyway, for at least two reasons: you should be able to run them separately, and you should have two config files, one for deployment and one for the build. But you can easily glue the two systems together.
Just create a simple Capistrano task that tests and reports back to you. You can use the "run" command to do anything you want.
If you don't want a command-line tool, there was Webistrano a couple of years ago.
You could use something like http://github.com/benschwarz/gitnotify/tree/master to trigger the build and deploy if you use Git as your repository.
At least for automated development deployments, check out the hook scripts available in Git:
http://git-scm.com/docs/githooks
I think you'll want to focus on the post-receive hook script, since this runs after a push to a remote server.
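A minimal post-receive sketch for the no-touch dev case (the work-tree path and branch name are assumptions; the hook lives in the remote repository's hooks/ directory):

#!/bin/sh
# Runs on the remote after every push; redeploy when master is updated.
while read oldrev newrev ref; do
  if [ "$ref" = "refs/heads/master" ]; then
    GIT_WORK_TREE=/var/www/myapp git checkout -f master
  fi
done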
Also worth checking out is Mislav's git-deploy on GitHub. It makes managing deployments pretty clean.
http://github.com/mislav/git-deploy