We're a small team of four developers and two testers, and I'm the team lead. Each developer works on their tasks in a separate branch. Our stack is ASP.NET MVC, ASP.NET Core, Entity Framework 6, MSSQL, IIS, and Windows Server. We also use Bitbucket to store code and Jira to manage issues.
For example, say there is a task "add an about window". A developer creates a branch named "add-about-window" and puts all the code for the task there. Once the task is done, I do a code review, and if everything looks good, I merge the branch into an accumulating branch, let's call it "main". As a next step, I manually deploy the updated "main" branch to a test server with IIS and MSSQL installed. Once that's done, I notify the testers to test the freshly deployed app and make sure "add an about window" is implemented correctly and works well. If the testers find a bug, I have to revert the task branch's merge from "main" and tell the developer to fix the bug in the task branch. Once the developer has fixed it, I merge the branch into "main" again and ask the testers to check again. In the end the task branch gets deleted.
This is really inconvenient, time-consuming, and frustrating. I have heard about git flow (maybe what we have now is a variant of it).
Ideally, I would like the process to look like this:
Each developer still works in a separate branch.
Once a task is done and all of its code is in the task branch, I do a code review.
Once the code review is done and all the issues found are fixed, I just click "deploy".
There is a Docker image which contains IIS, MSSQL, and Windows. It also contains a base version of the application we work on, fully tested and stable, say in the state it was in at some date, like the start of the year.
A new container is started from that image.
The container gets fully initialized, and then the code from the task branch gets deployed onto it.
The container then gets its own domain name, like "proj-100.branches.ourcompany.com" ("proj-100" is the task's ID in Jira), which the testers can open and test.
This would definitely decrease the time I spend on deployment and would also make the process more convenient and comfortable.
Can someone recommend resources where I can learn about similar deployment models? Or maybe someone can share their experience with this. Any info would be much appreciated.
Regardless of your stack, and before talking about solutions: what you describe is the basic use case of any CI/CD process. All the exhausting manual steps you described can be done with any CI tool.
Now, let's consider what you already have and talk through the steps of your desired solution. You're using Bitbucket, which already gives you at least steps 1 and 2: merging only approved PRs into master/main.
Step 3 is where the CI automation starts: you define a webhook on certain actions in the Bitbucket repo, which triggers a CI job/pipeline (this can be a Jenkins server, GitLab CI, or many other CI solutions). This way you won't even need a "deploy" button, since the merge itself can trigger the job, which can automatically run unit tests and integration tests (if you define them), build artifacts/Docker images, and finally deploy.
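As a rough illustration of that idea in Bitbucket's own CI, a bitbucket-pipelines.yml along these lines builds and tests every pull request and, once a PR is merged into main, builds a Docker image and pushes it to a registry. The build image, registry variables, and commands below are placeholders rather than a ready-made setup (a full .NET Framework project would also need a Windows or self-hosted runner):

# bitbucket-pipelines.yml -- illustrative sketch only
image: mcr.microsoft.com/dotnet/sdk:8.0          # placeholder build image

pipelines:
  pull-requests:
    '**':                                        # build and test every pull request
      - step:
          name: Build and test
          script:
            - dotnet build --configuration Release
            - dotnet test
  branches:
    main:                                        # runs automatically after the approved PR is merged
      - step:
          name: Build image and push to registry
          services:
            - docker
          script:
            - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" "$DOCKER_REGISTRY"
            - docker build -t "$DOCKER_REGISTRY/ourapp:$BITBUCKET_BUILD_NUMBER" .
            - docker push "$DOCKER_REGISTRY/ourapp:$BITBUCKET_BUILD_NUMBER"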
Step 4 needs some basic understanding of how Docker containers are designed to be used: a Docker image is not a VM. It has its own use cases and relevant scenarios and, more importantly, architectural guidelines you are advised to follow.
To keep it short, I'll only mention the principle of separation: each service should live in its own container. That allows scaling out, easier debugging, and much more. Which means that what you need is not one Docker image that contains your entire system, but an orchestration of containers, each containing an independent software unit with a clear responsibility. This is where Kubernetes comes into play.
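To make the separation concrete, here is a minimal docker-compose sketch in which the web application and SQL Server run as two separate containers instead of one image holding everything. The service names, ports, and connection string are placeholders (and a Windows-container setup would use different base images):

# docker-compose.yml -- illustrative only
services:
  web:
    build: .                                             # Dockerfile for the application itself
    ports:
      - "8080:80"
    environment:
      - ConnectionStrings__Default=Server=db;Database=AppDb;User Id=sa;Password=${SA_PASSWORD}
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server:2022-latest    # the database lives in its own container
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=${SA_PASSWORD}
    volumes:
      - dbdata:/var/opt/mssql
volumes:
  dbdata: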
Back to the CI job: after the PR is merged, the job starts, running the pre-defined unit tests, building the container image, and uploading it to your registry.
Moving on to CD: depending on your process, once the updated and tested Docker images are in your registry (Artifactory, GitLab registry, Docker registry, ...), the CD job can take whichever image it needs and deploy it to your Kubernetes cluster. And that's it! The process is done.
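For instance, the CD step can amount to applying (or patching) a small Kubernetes manifest that points at the freshly pushed image tag; the names and registry below are hypothetical:

# deployment.yaml -- illustrative manifest the CD job could apply with kubectl
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ourapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ourapp
  template:
    metadata:
      labels:
        app: ourapp
    spec:
      containers:
        - name: ourapp
          image: registry.ourcompany.com/ourapp:1.0.42   # tag produced by the CI job
          ports:
            - containerPort: 80

The CD job then runs something like kubectl apply -f deployment.yaml, or kubectl set image to bump only the tag.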
A word of advice: if you don't have a professional DevOps team or a good understanding of Docker, the CI/CD process, and Kubernetes, and if your dev team is small (and unfortunately it seems so), you may want to consider hiring a DevOps company to build the DevOps/CI-CD infrastructure for you, preferably as a fully managed DevOps solution, and then do a handover. Everything I wrote is just a guideline and the basic points, to give you the big picture. Good luck!
All the other answers here are great; still, I would like to add my piece of advice.
Recently I was also working on a product with a team of three. It was a Node.js project. If you are on AWS, you can use AWS CodePipeline. It detects pushes to a specific GitHub branch, and the changes get deployed to the server. The pipeline service has a build stage too, and you can also configure Slack notifications.
But you should have at least two environments, production and dev, so you can check on dev that a deployment works properly before it reaches production.
AWS also has services like AWS CodeCommit and AWS CodeDeploy.
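For reference, the build stage in CodePipeline is usually backed by CodeBuild, which reads a buildspec.yml from the repository. A minimal sketch for a Node.js project might look like this (the runtime version, commands, and output directory are placeholders):

# buildspec.yml -- illustrative CodeBuild spec for the pipeline's build stage
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci
      - npm test
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: build                # folder handed to the deploy stage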
This is just a basic solution. You don't actually need fancy software to set up CI/CD.
This kind of setup is usually supported by a CI/CD tool coupled with Kubernetes.
Either an on-premise one, like Jenkins plus Kubernetes with its Jenkins Kubernetes plugin, which runs dynamic agents in a Kubernetes cluster.
You can see an example in "How to Setup Jenkins Build Agents on Kubernetes Pods" by Bibin Wilson.
Or a cloud one, like a Bitbucket pipeline deploying a containerized application to Kubernetes.
In both instances, the idea remains the same: create an ephemeral execution environment (a Docker container with the right components in it) for each pushed branch, in order to execute tests.
That way, said tests can take place before any merge between a feature branch and an integration branch like main.
If the tests pass, Jenkins itself could trigger an automatic merge (assuming the feature branch was rebased first on top of the target branch, main in your case).
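To give an idea of the on-premise variant: with the Kubernetes plugin, each build can declare the throwaway agent pod it needs as plain Kubernetes YAML, roughly like the sketch below (the container name, image, and resource numbers are placeholders; the plugin injects its own JNLP agent container alongside it):

# agent-pod.yaml -- illustrative pod template for a disposable Jenkins build agent
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: build
      image: mcr.microsoft.com/dotnet/sdk:8.0   # whatever toolchain the branch needs
      command: ["sleep"]
      args: ["infinity"]                         # keep the container alive; Jenkins runs the build steps inside it
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"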
We have a similar process in our team.
We use GitLab CI.
Since some of our infrastructure is outside of Docker (nginx with DNS for the test stands), we just create dev1, dev2, ... stands (5 stands for a team of 10 developers and more than 6 microservices). For each devX stand and each microservice we have a "deploy to devX" button in our CI/CD, and we simply reserve devX in Slack for feature Y for the duration of testing after the deploy. When the tests are done and the bugs are fixed, we merge to the main branch, and another feature branch can be deployed and tested on the devX stand.
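Our setup boils down to something like the following .gitlab-ci.yml fragment, simplified and with placeholder script names; each deploy_devX job shows up as a manual button in the pipeline view:

# .gitlab-ci.yml -- simplified sketch of the per-stand deploy buttons
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - ./build.sh

.deploy_template: &deploy
  stage: deploy
  when: manual                    # rendered as a "deploy" button in the pipeline
  script:
    - ./deploy.sh "$STAND"

deploy_dev1:
  <<: *deploy
  variables:
    STAND: dev1
  environment:
    name: dev1
    url: https://dev1.ourcompany.com

deploy_dev2:
  <<: *deploy
  variables:
    STAND: dev2
  environment:
    name: dev2
    url: https://dev2.ourcompany.com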
As a next step, I manually deploy the updated "main" branch to a test server with IIS and MSSQL installed.
Once that's done, I notify the testers to test the freshly deployed app and make sure "add an about window" is implemented correctly and works well.
If the testers find a bug, I have to revert the task branch's merge from "main" and tell the developer to fix the bug in the task branch.
If there were multiple environments, the devs could deploy to them themselves. Even a single "dev" environment they can deploy to would help greatly. The devs should be able to deploy and notify the testers without going through you.
That the deploy is "manual" is suspicious. How manual? Ideally it should just be a few button clicks. Sometimes you can even have it so that pushing to a branch does a deploy (through webhooks).
You should be able to deploy from branches besides main. What that means or looks like can vary a lot, but the point is that if you're forcing everything into main and having to revert when it doesn't work, you're creating a lot of unneeded work. Ideally there should be some way to test locally. If there really can't be, then you need to at least allow a way to deploy from any branch (or something like force-pushing to a branch called 'dev').
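As a concrete sketch of that (assuming Bitbucket Pipelines and a placeholder deploy script): a push to a branch named dev deploys it automatically, and a custom pipeline lets a dev trigger a deploy of any branch from the Bitbucket UI.

# fragment of bitbucket-pipelines.yml -- illustrative only
pipelines:
  branches:
    dev:                              # pushing (or force-pushing) to dev deploys it
      - step:
          name: Deploy to the dev environment
          deployment: test
          script:
            - ./deploy-to-dev.sh
  custom:
    deploy-this-branch:               # runnable manually against any branch from the UI
      - step:
          name: Deploy this branch to dev
          script:
            - ./deploy-to-dev.sh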
From another angle: unless the application gets horribly broken, or a release is coming soon, you don't necessarily need to roll back changes. You can just have the bug fixed in another pull request.
All in all, the main problem here sounds like there's only a single environment for testing, the process to deploy to it is far too manual, and the devs have no way to deploy to it themselves. This sort of thing is a massive bottleneck. Having a burdensome process to even begin testing things takes a big toll on everyone's morale -- which can be worse than the loss in velocity. You don't necessarily need every dev to be able to spin up as many environments as they want at the push of a button, but devs do need some autonomy to be able to test.
Having the application run in Docker containers can greatly help with running it locally as well as making the deployment process simpler. I've tried to stay away from specific product suggestions because it sounds like this is more of a process problem than a tooling one.
Thanks for taking the time to read this.
I'm a graduate DevOps engineer at my organisation, and I have been tasked with trying to automate a Jenkins upgrade for all our instances (RHEL 7 servers) across the estate. I have written a role in Ansible which automates the update using .rpm files, and I have tested that the role does correctly update the application.
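To give an idea, the role boils down to tasks roughly like these (simplified; the paths, variable names, and service handling here are placeholders rather than the real role):

# roles/jenkins_upgrade/tasks/main.yml -- simplified sketch
- name: Copy the Jenkins rpm to the target host
  ansible.builtin.copy:
    src: "jenkins-{{ jenkins_version }}.rpm"
    dest: "/tmp/jenkins-{{ jenkins_version }}.rpm"

- name: Install the Jenkins rpm (yum upgrades the existing package)
  ansible.builtin.yum:
    name: "/tmp/jenkins-{{ jenkins_version }}.rpm"
    state: present
    disable_gpg_check: true

- name: Restart Jenkins after the upgrade
  ansible.builtin.service:
    name: jenkins
    state: restarted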
I otherwise have very little to no experience at all using Jenkins and don’t know a whole lot about the application itself.
In my research I have seen conflicting information about which is the best method to upgrade Jenkins, but an incremental upgrade path seems to be the best approach, or most widely advised. Some of the oldest instances of Jenkins on our estate are sitting at version 2.121.x so are quite out of date.
I have read the upgrade path guide, and I'm considering upgrading to the first release of each major version of Jenkins, but I really don't understand all the specific information given about the changes made to Jenkins, nor what things I should DEFINITELY be doing before installing the next major version. Should I also be updating the plugins manually from the GUI after every single upgrade step, or should I update the plugins only after updating to the latest version of Jenkins?
Any advice on this is really appreciated as I really don’t know what I’m doing. Thanks a lot.
For internal reasons, one of my jobs is able to run concurrently, but new builds abort themselves if another build is already running (disabling concurrency doesn't help, since I don't want new jobs to be scheduled for execution once the current build is done).
However, this behaviour is detrimental to the job status preview (the colored ball next to the job name when inside the job's parent folder). It often shows the status as "aborted", which is undesirable - I want to view the latest running build as the source of the job status.
I tried deleting aborted builds from within their own execution, but that's unfortunately neither trivial nor stable, and thus not suitable for this situation. I could probably get a workaround running that deletes them from a separate job, but that's not ideal either.
Anyway, I'm now hoping that I can just tell Jenkins to ignore "aborted" builds in the calculation of the job preview. Unfortunately, I wasn't able to find a setting or plugin that allows me to do this.
Is this possible at all? Or do I need to find another way?
Would something like this help?
https://plugins.jenkins.io/build-blocker-plugin/
I haven't used it myself but it supports blocking builds completely if a build is already running.
Our Jenkins instance is currently reporting BUILDS_ALL_TIME to be 999 for all builds of all jobs. Has anyone else experienced this, and does anyone know the path of least resistance to getting it to handle this environment variable as expected?
The back story:
Yesterday morning I updated all of the plugins on our Jenkins instance to the latest stable versions. There were half a dozen or more plugins to be updated, and I never pay close attention to them, but the Credentials Binding plugin stuck out because it flagged a critical security update in red on my monitor, which is what kicked off the whole process.
Yesterday afternoon, my coworker noted that the version number of one of his builds went from 1.0.0.7 to 1.0.0.999 and I was able to confirm the same thing with one of mine. Now all jobs that rely on the BUILDS_ALL_TIME environment variable report 999 for that variable in every build.
The Version Number Plugin is installed & up to date, and here's an excerpt from build.xml from an affected build:
<org.jvnet.hudson.tools.versionnumber.VersionNumberAction plugin="versionnumber@1.9">
  <info>
    <buildsToday>34</buildsToday>
    <buildsThisWeek>40</buildsThisWeek>
    <buildsThisMonth>40</buildsThisMonth>
    <buildsThisYear>40</buildsThisYear>
    <buildsAllTime>24</buildsAllTime> <!-- This is correct, is incremented properly between builds, and is updated appropriately when overridden in the job configuration GUI -->
  </info>
  <versionNumber>999.0.0</versionNumber> <!-- This is incorrect and NOT incremented properly between builds -->
</org.jvnet.hudson.tools.versionnumber.VersionNumberAction>
The timing of this behavior seems to be associated with the plugin upgrades (this association is by no means certain, but it's the best I've got at this point). Consequently I tried downgrading each plugin that offers the option in the management GUI, one-by-painful-one, to see if I could find the one culprit. This was fruitless. I'm not given the option to downgrade the Version Number plugin, but the last release of that thing was two years ago.
Well, after a year and a half of workarounds and self-loathing, the cause finally occurred to me: somehow, someone had set the BUILDS_ALL_TIME environment variable globally on our build server. Once I unset it and restarted Jenkins, our builds' version numbers returned to their appropriate values.
I have Jenkins version 1.6.11 installed on a Windows server. The number of configured jobs is huge, and the load is distributed among multiple masters and slaves. There are a couple of issues occurring very frequently:
The whole Jenkins UI becomes very slow; either the Jenkins service or the whole server needs to be restarted to bring it back to normal.
Certain jobs take way too long to load. To fix this, the affected job has to be abandoned and a new one created in its place.
It would be really helpful if you could suggest possible solutions to these two issues.
Use the pattern /node_modules/ to exclude the node_modules folder, but before you do this, you should also exclude the .git folder using /.git/.