Integrating Jenkins with Bitbucket Server

I have a Bitbucket Server instance that is currently running quite well.
I also have a Bamboo Server which crashed on me today, so badly that I would need to reinstall the whole Bamboo setup again (plugins, server, etc.).
This was not the first time. The Atlassian team couldn't even help me, so I moved to Jenkins instead.
Everything works so far.
I checked several posts on the Internet but didn't find anything really useful or up to date on my question.
Here is what I am planning to do:
1. Build every branch on creation
2. If the build succeeds, notify Bitbucket Server that the build was successful
3. Update Jira and start code review (already in place)
I'm currently missing only steps 1 and 2, as step 3 is quite easy to do with the corresponding plugins.
But I need steps 1 and 2 as well, since a pull request should not be mergeable if the branch itself was not successfully built.
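For reference, here is roughly what I have in mind for steps 1 and 2 (only a sketch: I am assuming the Bitbucket Branch Source plugin for building every branch on creation, and the Stash Notifier plugin for its notifyBitbucket step; the build command is a placeholder):

    // Jenkinsfile for a multibranch pipeline job. With the Bitbucket Branch
    // Source plugin, Jenkins discovers and builds each branch on creation
    // (step 1). notifyBitbucket() from the Stash Notifier plugin reports the
    // build status back to Bitbucket Server (step 2), so a pull request can
    // be blocked from merging until the branch build is green.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'   // placeholder for the real build command
                }
            }
        }
        post {
            always {
                // Reports the final build result to Bitbucket Server.
                notifyBitbucket()
            }
        }
    }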
If anybody has already solved this, let me know and I will look into it right away.
Thanks for your help :-)

Related

How to organize deployment process of a product from devs to testers?

We're a small team of 4 developers and 2 testers, and I'm the team lead. Each developer works on their tasks in a separate branch. Our stack is ASP.NET MVC, ASP.NET Core, Entity Framework 6, MSSQL, IIS, and Windows Server. We also use Bitbucket and Jira to store code and manage issues.
For example, there is a task "add an about window". A developer creates a branch named "add-about-window" and puts all the code there. Once the task is done, I do a code review, and if all is good, I merge the branch into an accumulating branch, let's name it "main". As a next step, I then manually deploy the updated "main" branch to a test server with IIS and MSSQL installed. Once done, I notify the testers to test the freshly uploaded app to make sure "add about window" is done correctly and works well. If the testers find a bug, I have to revert the task branch merge from "main" and tell the developer to fix the bug in the task's branch. Once the developer has fixed it, I merge the branch into "main" again and ask the testers to check again. In the end, the task branch gets deleted.
This is really inconvenient, time-consuming, and frustrating. I have heard about git flow (maybe what we have now is a variant of it).
Ideally, I would like this process to be as this:
Each developer still works in a separate branch.
Once a task is done and all the task code is in the task branch, I do a code review.
Once the code review is done and all found issues are fixed, I just click "deploy".
There is a Docker image which contains IIS, MSSQL, and Windows, along with some base version of the application we work on, fully tested and stable. Let's say it reflects the state at some date, like the start of the year.
A new container is started from that Docker image.
This container gets fully initialized, and then the code from the branch gets installed on the running container.
The container then gets its own domain name, like "proj-100.branches.ourcompany.com" ("proj-100" is the task's ID in Jira), which testers can visit and test.
This would definitely decrease the time I spend on deployment and also make the process more convenient and comfortable.
Can someone recommend some resources where I can learn about similar deployment models? Or maybe someone can share their experience with this? Any info will be very much appreciated.
Regardless of your stack, and before talking about solutions: what you describe is the basic use case of any CI/CD process. All the exhausting manual steps you described can be done with any CI tool.
Now, let's consider what you already have and go over the steps towards your desired solution. You're using Bitbucket, which already gives you at least steps 1 and 2: merging only approved PRs into master/main.
Step 3 is where the CI automation process starts: you define a webhook on certain actions in the Bitbucket repo, which triggers a CI job/pipeline (this can be a Jenkins server, GitLab CI, or many other CI solutions). This way you won't even need a "deploy" button, since the merge action can trigger the job, which can automatically run unit tests and integration tests (if you define them), build artifacts/Docker images, and finally deploy.
Step 4 needs some basic understanding of Docker container design: a Docker image is not a VM. It has its own use cases and relevant scenarios, and more importantly, architecture guidelines that are worth following.
To keep it short, I'll only mention the principle of separation: each service should live in its own container. This allows scaling up and easier debugging, among much else. Which means what you need is not one Docker image that contains your entire system, but an orchestration of containers, each containing an independent software unit with a clear responsibility. And here Kubernetes comes into play.
Back to the CI job: after the PR merge, the job starts, running the predefined unit tests, building the container image, and uploading it to your registry.
Moving on to CD: depending on your process, once the updated and tested Docker images are in your registry (which could be Artifactory, a GitLab registry, a Docker registry, ...), the CD job can take any image it needs and deploy it to your Kubernetes cluster. And that's it! The process is done.
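Purely as an illustration, a Jenkins pipeline version of that flow could look roughly like the sketch below (the image name, registry URL, and credentials ID are hypothetical placeholders, and the test/deploy commands are stand-ins for your real ones):

    // Sketch of a CI/CD pipeline: test, build the image, push it to the
    // registry, then point the Kubernetes deployment at the new image.
    pipeline {
        agent any
        environment {
            // Hypothetical registry and image name; tagged with the commit SHA.
            IMAGE = "registry.example.com/ourcompany/app:${env.GIT_COMMIT}"
        }
        stages {
            stage('Test') {
                steps {
                    bat 'dotnet test'   // stand-in for your real test suite
                }
            }
            stage('Build image') {
                steps {
                    bat "docker build -t ${IMAGE} ."
                }
            }
            stage('Push image') {
                steps {
                    // 'registry-creds' is a hypothetical Jenkins credentials ID.
                    withCredentials([usernamePassword(credentialsId: 'registry-creds',
                            usernameVariable: 'REG_USER', passwordVariable: 'REG_PASS')]) {
                        bat 'docker login registry.example.com -u %REG_USER% -p %REG_PASS%'
                        bat "docker push ${IMAGE}"
                    }
                }
            }
            stage('Deploy') {
                steps {
                    // Roll the Kubernetes deployment over to the new image.
                    bat "kubectl set image deployment/app app=${IMAGE}"
                }
            }
        }
    }

Since your stack is Windows-based, the steps use bat; on a Linux agent they would be sh.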
A word of advice: if you don't have a professional DevOps team, or a good understanding of Docker, the CI/CD process, and Kubernetes, and if your dev team is small (and unfortunately it seems so), you may want to consider hiring a DevOps company to build the DevOps/CI/CD infrastructure for you, preferably as a completely managed DevOps solution, and then do a handover. Everything I wrote is just a guideline and the basic points, to give you the big picture. Good luck!
All the other answers here are great; still, I would like to add my piece of advice.
Recently I was working on a product with a team of three. It was a Node.js project. If you are on AWS, you can use AWS CodePipeline. It detects pushes to a specific GitHub branch, and the changes get deployed to the server. The pipeline service has a build stage too. You can also configure Slack notifications.
But you should have at least two environments, production and dev, so you can check on dev whether a deployment works properly.
AWS also has services like AWS CodeCommit and AWS CodeDeploy.
This is just a basic solution; you don't actually need fancy software to set up CI/CD.
This kind of setup is usually supported by a CI/CD tool coupled with Kubernetes.
Either an on-premise one, like Jenkins plus Kubernetes, or its Jenkins Kubernetes plugin, which runs dynamic agents in a Kubernetes cluster.
You can see an example in "How to Setup Jenkins Build Agents on Kubernetes Pods" by Bibin Wilson.
Or a cloud one, like a Bitbucket pipeline deploying a containerized application to Kubernetes.
In both instances, the idea remains the same: create an ephemeral execution environment (a Docker container with the right components in it) for each pushed branch, in order to execute tests.
That way, said tests can take place before any merge between a feature branch and an integration branch like main.
If the tests pass, Jenkins itself could trigger an automatic merge (assuming the feature branch was first rebased on top of the target branch, main in your case).
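As a sketch of that idea with the Jenkins Kubernetes plugin (the pod spec and the container image are assumptions to adapt), each build gets its own throwaway pod:

    // Declarative pipeline using the Jenkins Kubernetes plugin: every run
    // gets a fresh pod, so each pushed branch is tested in a clean,
    // ephemeral environment that disappears after the build.
    pipeline {
        agent {
            kubernetes {
                yaml '''
                    apiVersion: v1
                    kind: Pod
                    spec:
                      containers:
                      - name: build
                        image: mcr.microsoft.com/dotnet/sdk:6.0  # hypothetical build image
                        command: ["sleep"]
                        args: ["infinity"]
                '''
            }
        }
        stages {
            stage('Test') {
                steps {
                    container('build') {
                        sh 'dotnet test'   // stand-in for the branch's tests
                    }
                }
            }
        }
    }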
We have a similar process in our team.
We use GitLab CI.
Since some of our infrastructure lives outside Docker (nginx with DNS for the test stands), we just create dev1, dev2, ... stands (5 stands for a team of 10 developers and more than 6 microservices). For each devX stand and each microservice we have a "deploy to devX" button in our CI/CD. We then reserve devX in Slack for feature Y for the duration of testing after a deploy. When the tests are done and the bugs are fixed, we merge to the main branch, and the next feature branch can be deployed and tested on that devX stand.
As a next step, I then manually deploy the updated "main" branch to a test server with IIS and MSSQL installed.
Once done, I notify the testers to test the freshly uploaded app to make sure "add about window" is done correctly and works well.
If the testers find a bug, I have to revert the task branch merge from "main" and tell the developer to fix the bug in the task's branch.
If there are multiple environments, then the devs could deploy to them themselves. Even a single "dev" environment they can deploy to would help greatly. The devs should be able to deploy on their own and notify the testers without going through you.
That the deploy is "manual" is suspicious. How manual? Ideally it should just be a few button clicks. Sometimes you can even have it so that pushing to a branch does a deploy (through webhooks).
You should be able to deploy from branches besides main. What that means or looks like can vary a lot, but the point is that if you're forcing everything through main and having to revert when it doesn't work, you're creating a lot of unneeded work. Ideally there should be some way to test locally. If there really can't be, then you need to at least allow a way to deploy from any branch (or something like force-pushing to a branch called "dev").
From another angle, unless the application gets horribly broken, you don't necessarily need to roll back changes unless a release is coming soon. You can just have the bug fixed in another pull request.
All in all, the main problems here sound like there's only a single environment for testing, the process to deploy to it is far too manual, and the devs have no way to deploy to it themselves. This sort of thing is a massive bottleneck. Having a burdensome process to even begin testing things takes a big toll on everyone's morale, which can be worse than the loss in velocity. You don't necessarily need every dev to be able to spin up as many environments as they want at the push of a button, but devs do need some autonomy to be able to test.
Having the application run in Docker containers can greatly help with running it locally as well as making the deployment process simpler. I've tried to stay away from specific product suggestions because this sounds like more of a process problem.

VSTS, create build definition gets AllowScriptsAuthAccess error

Long-time listener, first-time caller!
I've spent two days searching for an answer to this so hopefully someone here may be able to help.
I've set up a personal/free VSTS instance and created a project.
One of the first tasks I want to do is set up the build pipeline, so I create a new pipeline, define the agent pool as VS2017, connect to my GitHub repo, etc., all of which is fine.
Next I try to add an Agent Job, again choosing VS2017 as the agent. With no other options chosen, if I try to save the build definition I get the following error message (and cannot save it):
The AllowScriptsAuthAccess build option is not supported in API versions greater than 4.0.
"Allow scripts to access the OAuth token" is unchecked on the Agent job configuration under phases, and on the Build/Options tab the slider is set to disabled.
I've googled and searched all over to try and find someone with the same problem, but it's almost like I'm the first to discover this, which is highly unlikely! It has almost driven me to using Bing to search for a solution, but let's not get carried away.
Any ideas or suggestions would be greatly appreciated!
So it turns out that turning off the "New YAML pipeline creation experience" and "New Navigation" under preview features fixes the problem, insofar as I can now create and save a build pipeline without the error.
Also, if you have "Build YAML Pipelines" enabled under preview features for the organisation, you get the "View YAML" link that I was missing.
Thanks all for your help. I'd be interested to know the root cause of this still. I'll update the Microsoft support ticket with the same and post back here if they have any insights.
There's a similar issue here: https://developercommunity.visualstudio.com/content/problem/123012/getting-multiconfiguration-build-option-not-suppor.html
It seems the build template was broken. So you can try other build templates, or start over with an empty template and add the needed tasks manually, to check whether that works.
Besides, you can try the following:
- Clean the caches on your client machine, and also clean the browser caches, then check it again. See "How to clear the TFS cache on client machines".
- Create a new team project and create a new build pipeline within the new team project to check if that works.
I am assuming this is a bug in the VSTS system and it will likely be fixed soon. But for the time being, I found a workaround:
I was also getting the AllowScriptsAuthAccess error and struggled with it for hours. I don't think any of the configuration settings you mentioned have anything to do with it (free account, GitHub, OAuth token unchecked).
To solve it, I converted the Agent Job to YAML (which is as easy as clicking "View YAML" in the upper right). Save the code to a file named .vsts-ci.yml, and put that file in the root folder of your solution. Commit and push the new file, then queue the build. (Note that the conversion to YAML is one-way, so you may want to clone your build first.)
That should get rid of the AllowScriptsAuthAccess error. After that I had to add a few variables, but then it's just a matter of following the error messages.
I hope this helps. Sorry I can't answer this more authoritatively. Please post a comment if I am missing any steps.
I had this issue, and it turned out that I didn't have Build Admin permissions in VSTS for the project. Not a very helpful error message for that case.

Travis CI has failures on unchanged code for pull request

I submitted my first pull request to the Apache Flink project on GitHub, but the Travis CI check says that two of the tests it ran have failed. The thing is, the failures are occurring in a module that I haven't modified at all. So what am I missing? I am very new to Travis CI as well, so I could just be reading something the wrong way, but I doubt it.
Here is the pull request on Travis CI: https://travis-ci.org/apache/flink/builds/368327990
The module I made changes to was flink-connector-cassandra, and the failure is happening in the module flink.
Thanks for any help in advance!
Actually, such problems should be asked on the dev@ mailing list of the corresponding project and not on Stack Overflow. Sometimes builds are not stable, and thus tests might fail even though you have nothing to do with them. In most cases there already exists an issue in the corresponding bug tracker that mentions the test instability. If the community says everything is fine, you can still open your PR.
However, in your case you have "Too many files with unapproved license", which clearly indicates a mistake on your side. You need to add an Apache License header to your newly created files.
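For reference, this is the standard ASF header that Apache projects like Flink put at the top of Java sources (copy it verbatim into each new file, adapting the comment syntax for other file types):

    /*
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
     * implied.  See the License for the specific language governing
     * permissions and limitations under the License.
     */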

How to start another build on CodeClimate after the initial build fails?

Here's the first failed build. I had forgotten the configuration file, so I added it and committed again. Now it won't fire again.
Do I have to get a CI (I'm using Travis CI) to successfully test it first?
You can press the refresh button in the top right corner on your repo.
Support got back to me and told me it was a problem on their end.
Sorry that your repository got stuck in that weird "limbo" state. Currently, we don't automatically install our webhook for open-source repositories, and without that we don't see any subsequent commits if the first analysis errors. Our dev team plans to improve this experience, but in the meantime, I'd recommend installing our webhook. This hook is what notifies us of certain events happening in your repository, including commits made to your default branch.
To get that installed, you'll need to run through steps 5-7 in this help doc: https://docs.codeclimate.com/docs/github#pull-requests.

Proper preparation before upgrading Jenkins

I'm going to upgrade our Jenkins CI to the latest version. I'm following this wiki page (going for the upgrade button on the "Manage Jenkins" page): How to upgrade Jenkins.
My question is this: we have a lot of jobs that run constantly (some timed jobs, some triggered jobs). When upgrading, should I (or do I even need to) disable all jobs beforehand? If there are jobs currently running, should I (or do I even need to) terminate them?
It depends a lot on how you deployed your CI. If you installed with the defaults (no custom settings), I assume you can follow the automatic procedure in the link you already provided.
When upgrading, should I (or do I even need to) disable all jobs beforehand?
When upgrading, you should put your Jenkins instance into quiet mode (Configure > Manage > Quiet down). This will prevent further builds from being executed, and it will let all running builds finish. I hope this answers both of your questions.
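If you prefer to script it, quiet mode can also be toggled from the Script Console (Manage Jenkins > Script Console); a minimal sketch using the core API:

    // Enter quiet mode: no new builds are started, running builds finish.
    import jenkins.model.Jenkins

    Jenkins.instance.doQuietDown()

    // After the upgrade, leave quiet mode again with:
    // Jenkins.instance.doCancelQuietDown()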
Speaking more about jobs: you should make a backup first, in case something goes wrong.
You should also think a lot about plugins and review them all, since some of them might not work as you expect after upgrading to a fresh new Jenkins core. There is a plugin called Plugin Usage which might help you understand your current status.
