How can I stop a merge in Bitbucket if a build fails?

In Bitbucket, after a developer's code is PR'd and ready to be merged, I want a build to run when the developer requests the merge, and for the merge to be blocked if the build fails. I know I can trigger a build when a pull request is created and then use the merge checks feature, but the developer's code can change during the PR, which could introduce build-breaking bugs. Builds also take quite a while, so I don't want one to run every time a developer updates their code.
Is there any solution to this? Perhaps via the Build Pipelines feature?
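For context, the PR-triggered build mentioned above would look roughly like this in Bitbucket Pipelines (a sketch; ./build.sh stands in for the real build command, and blocking the merge would additionally require a merge check that demands a passing build on the PR's latest commit):

# bitbucket-pipelines.yml - run a build for every pull request.
pipelines:
  pull-requests:
    '**':                 # any source branch
      - step:
          name: PR build
          script:
            - ./build.sh  # placeholder for your real build command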

Related

Jenkins build notification affecting all branches in bitbucket

I have previously used Jenkins and Bitbucket on premises and was able to have Jenkins notify Bitbucket of the build status of each individual branch (success, failed, in progress). However, since I moved to Bitbucket Cloud, it has started applying the status of every build to every branch. For example, if I have just a master and a develop branch (to keep it simple) and the master branch failed because of some deployment configuration, I am unable to merge a fix into it from develop, even if develop is passing, because Bitbucket claims one of my two builds is failing on the develop branch.
This is tough to explain clearly in words, so I've attached some screenshots: two branches with one build failing but both marked as failed; the develop branch passing; and proof that it won't let me merge.
These notifications come from Jenkins and were set up using the standard cloudbees-bitbucket-branch-source:2.9.7 plugin to scan my Bitbucket Cloud repositories.
Okay, so this was a really obvious mistake, but I thought I would leave the reason it happened here in case anyone makes the same one. cloudbees-bitbucket-branch-source:2.9.7 notifies Bitbucket using the commit ID. When creating the branch structure for the repository, I branched off main to create develop; both branches got built, but both pointed at the same commit ID, so each was notified of both builds. The problem fixes itself on the first cycle of code that runs through it.

Jenkins CI workflow with separate build and automated test both in source control

I am trying to improve our continuous integration process using Jenkins and our source control system (currently svn, but git soon).
Maybe I am overcomplicating this, or maybe I have not yet seen the right hints.
The process I envisioned has three steps and associated roles:
one or more developers would do their job and ultimately submit the code changes for the actual software ("main software") as well as unit tests into source control (git, or something else). Jenkins shall build the software, run unit tests and perhaps some other steps (e.g. static code analysis). If none of this fails, the work of the developers is done. As part of the build, the build number is baked into the main software itself as part of the version number.
one or more test engineers will subsequently pick up the build and perform tests. Some of them may be manual; most of them are intended to be automated/scripted tests. These shall ultimately be submitted into source control as well and be executed through the build server. However, this shall not trigger a new build of the main software (since nothing in it changed). If none of this fails, the test engineers are done. Note that our automated tests currently take several hours to complete.
As a last step, a project manager authorizes release of the software, which executes whatever delivery/deployment steps are needed. Also, the source of the main software, the unit tests, the automated test scripts, the Jenkins build script - and ideally all build artifacts ("binaries") - are archived (tagged) in the source control system.
Ideally, developers are able to also manually trigger execution of the automated tests to "preview" the outcome of their build.
I have been unable to figure out how to do this with Jenkins and Git - or any other source control system.
Jenkins pipelines seem to assume that all steps are carried out in sequence automatically. They also seem to assume that any commit to source control starts the process from the beginning (which I believe should not be the case if the commit was "merely" automated test scripts). Triggering an unnecessary build of the main software really hurts our process, as it basically invalidates all manual testing and documentation by baking a new build number into the software.
If my approach is so uncommon, please direct me how to do this correctly. Otherwise I would appreciate pointers how to get this done (conceptually).
I will try to reply with some points. This is indeed a conceptual approach; there are a lot of details and different approaches too, and this is only one.
You need git :)
You need to set up a Git branching strategy that allows multiple developers to work simultaneously, pushing code and validating it against the static code analysis. I would suggest you start with Git Flow; it is widely used and can be adapted to whatever reality you have - you do not need to use it in its pure state, so give some thought to how to adapt it. Fundamentally, it will allow each feature to be tested. Then each developer can merge it into the develop branch - from this point on, you have your features validated and you can start to deploy and test.
Have a look at multibranch pipelines. They will allow you to test the several feature branches you might have, and to have different flows for the other branches - develop, release and master - depending on your deployment needs. So when there is a merge to the develop branch, you can trigger testing or just run the static code analysis.
To overcome the problem you mention in your second point, there are ways to read the change sets in the pipeline; if the changes only touch the test scripts, you can skip building the software and prevent the pipeline from running all the stages, depending on what was pushed to Git - see the sketch below.
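A minimal declarative-pipeline sketch of that idea - the test-scripts/ directory, the shell commands, and the helper name are all assumptions for illustration:

// Jenkinsfile sketch: skip the main build when every changed file
// lives under test-scripts/ (adjust the path to your repository layout).
@NonCPS
def onlyTestChanges() {
    def files = []
    for (changeSet in currentBuild.changeSets) {
        for (entry in changeSet.items) {
            files.addAll(entry.affectedPaths)
        }
    }
    // An empty changeset (e.g. the first build) should still build the software.
    return files && files.every { it.startsWith('test-scripts/') }
}

pipeline {
    agent any
    stages {
        stage('Build main software') {
            when { expression { !onlyTestChanges() } }
            steps {
                sh './build.sh'            // placeholder build command
            }
        }
        stage('Unit tests') {
            when { expression { !onlyTestChanges() } }
            steps {
                sh './run-unit-tests.sh'   // placeholder test command
            }
        }
    }
}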
If you still have manual testing taking place, pipelines can be paused, meaning you can make the pipeline stop and ask for approval to proceed. Before approving, the testers do whatever they have to do, and when they are ready, they simply approve the build so it proceeds to the next steps.
Regarding deployment authorization, it is done the same way as in the previous point, with approvals, but in this case you can specify which users/roles are allowed to approve that step - see the sketch below.
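A sketch of such an approval stage (the message and the submitter group are placeholders):

// Declarative-pipeline sketch: the build blocks here until someone
// from the listed submitters approves (or aborts) it.
stage('Manual test sign-off') {
    steps {
        input message: 'Manual testing finished - proceed?',
              submitter: 'qa-team'
    }
}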
Whatever you need to keep from your builds, Jenkins has an archive-artifacts utility. Let me just note that ideally you would look into a proper artifact repository such as Nexus.
To manually trigger a set of tests: you can have a manually triggered job in Jenkins, separate from your CI/CD pipeline, that only executes the automated tests. You can even trigger that same job from a pipeline stage, as in the sketch below.
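A sketch of triggering such a job from a stage (the job name and the parameter are placeholders for whatever your test job expects):

// Trigger the separate test job and wait for its result; the current
// build fails if the test job fails.
stage('Automated tests') {
    steps {
        build job: 'automated-tests',
              wait: true,
              parameters: [string(name: 'APP_BUILD_NUMBER',
                                  value: env.BUILD_NUMBER)]
    }
}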
Lastly let me say that the branching strategy is the starting point.
Think about your big picture - what SDLC flows you need to have - and set up those flows in your multibranch pipeline. Different Git branches will facilitate whatever flows you need within the same Jenkinsfile - your pipeline definition. It really depends on how many environments you have to deploy to and what kind of steps you need.

Is there a standard way to delete successful vnext builds (PR) just after their completion?

The most aggressive build retention policy one can set for pull request builds is described in "Clean up pull request builds": a policy that keeps a minimum of 0 builds.
Still, it means that successful PR builds (with artifacts no one will ever need) are deleted only at the next automatic retention cleanup - usually the next day, but in practice this leaves nearly two days' worth of no-longer-needed builds around.
In our particular case it seems desirable to clean up successful PR builds ASAP: their frequency and the sheer size of their artifacts periodically strain our not yet fully organized infrastructure dedicated to PR handling (it will be significantly improved, but not as soon as we'd like, and even then those successful PR builds would remain dead weight).
As far as I can see, the only way to do that is to delete the builds manually.
While it is not too difficult to implement, I'd still like to check whether there is a simpler standard way to delete successful PR builds automatically.
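For what it's worth, "manually" can still be scripted; a sketch against the Azure DevOps REST API, assuming a personal access token with build permissions (organization, project and buildId are placeholders, and the exact URL and api-version differ for on-premises TFS):

# Delete one completed PR build (and its artifacts) by ID.
curl -X DELETE -u ":$PAT" \
  "https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}?api-version=5.1"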
P.S.: There is one particularity in our heavily customized build process - we have multiple dependent artifacts: create A, use it to build B, create C to test B... So skipping the Publish Artifacts step on an overall-successful build with a custom condition, as suggested below, is not exactly feasible.
Let's look at the problem from a different perspective: The problem isn't that builds are retained, the problem is that your PR builds are publishing artifacts.
You can make the Publish Artifacts steps conditional so that they don't run during PRs. Something like and(succeeded(), ne(variables['Build.Reason'], 'PullRequest')) will make the task run only when the build is not a PR build.
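In a YAML pipeline that could look like this (a sketch; the inputs shown are the usual defaults):

# Publish artifacts only for non-PR builds that have succeeded so far.
- task: PublishBuildArtifacts@1
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'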

Building branches/commits on demand in Travis CI

I would like to achieve the following setup in Travis CI.
Do a build whenever commits are pushed to dev or release branches only.
Disable builds whenever commits are pushed to any other branch apart from dev or release but build pull requests.
If a developer really wants to know whether their commits are good, they should be able to explicitly kick off a build on Travis CI by choosing a branch/commit.
From reading the documentation on Travis CI and some blog posts, I see that I can achieve "1." and "2."
Does anyone know how to make "3." work?
Update-1:
The reason I want scenario "3." is that the developers in our team (or in general any other team) make several commits and push them even before they send out a pull request. Building every single commit of a private branch before it even goes into a pull request causes a lot of requests to queue up in Travis CI, which unnecessarily blocks developers who actually want to verify a particular commit before sending out a pull request.
Having the following is just fine for us:
Build on every commit push to dev and release branches
Build on every commit pushed to a pull request
You can easily achieve 1 and 2 by whitelisting the branches you want to see push builds for:
branches:
  only:
    - dev
    - release
For reference, see https://docs.travis-ci.com/user/customizing-the-build/#Whitelisting-or-blacklisting-branches.
You can achieve 3 only if your developers open a PR against one of the whitelisted branches.
I'd personally recommend opening a PR as early as possible (after the first commit), as it makes work in progress visible to everyone who is interested.

TFS 2010 Build Work Items in a Certain State

We have been using TFS 2010 to manage work items and sprints for a while now and have recently added a dedicated QA person. What I need is either a build definition that I can run on a schedule (Tuesdays at 9pm, for instance) that will only build and/or deploy the work items that are in a state of "ReadyToDeploy", or a way to get the list of files to push via the TFS API.
My end goal is to have a way to automate the release process so that only the items that have passed QA are sent to our staging environment weekly. Then the customer or QA can confirm that the items work in staging, which is a mirror of production, and another process or build definition will deploy those items, which will by then be in a different state.
I have modified the work items and workflows to accommodate the different states, but I am having trouble getting either a build of just the fixes or a list of all the files to push based on the state of the work item.
I am open to any ideas or solutions; the alternative is that I manage the list of files and push them manually every week, and I am trying to get away from that.
Thanks,
Edit: The way we have it set up now, each developer has their own branch and own website; our software is server-based and has to run on a particular server. Our trunk is linked to the main dev website. This is where QA initially does their testing to move an item to the ReadyToDeploy status. When a dev is ready for QA, they check their changes in on their branch and merge into the trunk. The builds are currently created off the trunk. On our deployment nights I open the trunk website in VS and do a publish, then compare those files against a list the developers have given me and FTP the compiled files to our production server.
I could be wrong, but I am not aware of a way to tell a build to avoid certain changesets based on the state of an attached work item.
I think the only way you could achieve this would be to create a Release branch and perform a daily merge of the changesets whose work items are in the "ReadyToDeploy" state. When you merge, you are able to cherry-pick changesets, but they must be contiguous. This means that you would have to perform multiple merges to get the Release branch into the appropriate condition.
We have operated something similar for many years and it works well for us. Many people will tell you that it's bad practice, and it probably is, but that is for you to decide.
As for automating this for a build, I don't think it would be a trivial job. The hardest part will be the merge. You might think that there will be no conflicts as it is a one-way merge, but having done this for a number of years, I can tell you that conflicts do arise.
Step-by-Step Explanation
1. Create a branch off the trunk called release.
2. When you want to do a deploy, merge from the trunk into release, but cherry-pick only the ReadyToDeploy changesets. This may take several merges, as the changesets must be contiguous (a command-line sketch follows this list).
3. Fire off a build/deploy of the release branch.
4. Repeat steps 2-3 as your release schedule allows.
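For reference, step 2 might look roughly like this from a Visual Studio command prompt (a sketch; the changeset numbers and server paths are placeholders):

rem Cherry-pick the contiguous changeset range C101-C105 from trunk into release.
tf merge /recursive /version:C101~C105 $/MyProject/trunk $/MyProject/release
rem Resolve any conflicts, then check the merge in.
tf checkin /comment:"Merge ReadyToDeploy changesets into release"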
You could do something automatic, but it would require an extensive bit of coding. It would also require a branching strategy so that Dev, Staging, and Production each come from their own branch.
1. Set up a process that uses the TFS API to scan for items in the ReadyToDeploy state (see the WIQL sketch after this list).
2. Use the API to traverse those items and collect the check-ins attached to them.
3. Use the API to get the latest version of the source and target branches.
4. Use the API to merge by changeset (identified in step 2). This is non-trivial - you have to handle lots of cases - but the merging should all be one-way with no conflicts, so it is doable.
5. Use the API to check in.
6. Kick off the compile build.
7. If the compile build succeeds, kick off the publish build.
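For step 1, the scan could be a WIQL query along these lines (a sketch; the project name and the state value depend on your customized process template):

SELECT [System.Id], [System.Title]
FROM WorkItems
WHERE [System.TeamProject] = 'MyProject'
  AND [System.State] = 'ReadyToDeploy'
ORDER BY [System.Id]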
To the best of my knowledge there is a major issue with this approach. Say you have two changesets, #1 and #2, both containing modifications to the same file. Changeset #2 then contains its own changes plus the changes from #1.
If you decide to pull in changeset #2 and skip #1, guess what happens: you pull in the changes from #1 as well. This is obviously a problem.
