Should we roll back a release if integration tests fail? - devops

We are using trunk-based development, and I am confused about whether we should roll back our deployment if integration tests fail. P.S. We are using microservices; if microservice A wants to interact with microservice B or any third-party service that is down for some reason, should we revert our deployment?

Related

Microservice rollback approach if e2e test automation fails in CICD

When a new feature of a microservice is merged into the development branch, is it always deployed to the Kubernetes test environment?
If yes, what happens if e2e tests fail? Is the microservice deployment rolled back?
Is automated rollback common in CI/CD in the industry right now?
Is there any other way to run e2e black-box tests without deploying to the Kubernetes test environment?
I could not find a good example of this.
When a new feature of a microservice is merged into the development branch, is it always deployed to the Kubernetes test environment?
Yes, that's right.
If yes, what happens if e2e tests fail? Is the microservice deployment rolled back?
Since it's a development environment, it should generally be fine to leave the broken version running; however, it ultimately depends on the requirements and architecture you plan to follow.
Is automated rollback common in CICD in the industry right now?
Yes, it's common, but it is often better to keep rollback as a manual action: you might have a scenario where you want to deploy a version even while e2e tests are failing.
The benefit of a manual action is flexibility: requirements might change, and you may have to deploy a feature whose integration tests are failing, or handle some other exception.
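As a sketch, such a manual gate can be modeled with an `input` step in a declarative Jenkins pipeline; the stage names and deploy/test scripts below are hypothetical:

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy to test') {
            steps {
                sh './deploy.sh test'   // hypothetical deploy script
            }
        }
        stage('E2E tests') {
            steps {
                script {
                    // Don't fail the build outright; record instability so a
                    // human can decide whether to roll back.
                    def rc = sh(returnStatus: true, script: './run-e2e.sh')
                    if (rc != 0) {
                        currentBuild.result = 'UNSTABLE'
                    }
                }
            }
        }
        stage('Rollback?') {
            when { expression { currentBuild.result == 'UNSTABLE' } }
            steps {
                // Manual decision: the pipeline pauses here until someone
                // explicitly approves the rollback.
                input message: 'E2E tests failed. Roll back this deployment?'
                sh './rollback.sh test'  // hypothetical rollback script
            }
        }
    }
}
```

The `input` step keeps a human in the loop, which matches the advice above: you can still choose to leave the failing version deployed by aborting the prompt.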
Is there any other way to run e2e black-box tests without deploying to the Kubernetes test environment?
You can test subsystems or smaller portions in CI/CD. E2e tests can also be automated on the CI/CD side by making the Kubernetes service accessible to the CI server for testing.
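One way to make an in-cluster service reachable from the CI server is `kubectl port-forward`; a rough pipeline sketch follows, where the namespace, service name, and test script are all assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('E2E against test cluster') {
            steps {
                sh '''
                    # Forward the in-cluster service to the CI agent
                    # ("my-service" and the "test" namespace are hypothetical)
                    kubectl -n test port-forward svc/my-service 8080:80 &
                    PF_PID=$!
                    sleep 5   # crude wait for the tunnel to come up

                    # Run black-box e2e tests against the forwarded port
                    ./run-e2e.sh http://localhost:8080

                    kill $PF_PID
                '''
            }
        }
    }
}
```

This lets the e2e suite run from the CI server without deploying the tests themselves into the cluster.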

SCP neo deploy-mta can't deploy 2 versions in a row

I'm working with Jenkins that runs on a server.
I have a pipeline which is triggered by a user that pushes something on a GitHub repository.
It performs a script which makes sure the GitHub repository is deployed to the SAP Cloud Platform.
It uses the MTA Archive Builder for building the MTA application which creates a .mtar file.
The MTA application has a HTML5 module.
After building the .mtar file with the MTA Archive Builder, I deploy it using the NEO Java Web SDK (the library you need to perform neo deploy-mta).
"neo deploy-mta" is a command that performs the actual request for deploying your html5 application.
This works fine and the project is successfully deployed on the SAP Cloud Platform.
The problem is: if a user rapidly pushes twice to GitHub, my Jenkins pipeline is triggered twice and runs "neo deploy-mta" twice.
Normally the SAP Cloud Platform should deploy 2 versions, but when I looked, it had only processed the first deployment request and skipped the second.
My question is how can I make sure there are 2 versions deployed on the SAP Cloud Platform when 2 pushes happened?
The Jenkins instance is already waiting until there is no build running.
The problem was that the SAP Cloud Platform didn't deploy 2 versions when there were 2 requests for deployment.
The solution to this problem is to add the "--synchronous" parameter to the "neo deploy-mta" command. The script will then wait until no deployment (for this application) is running on the SAP Cloud Platform.
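Inside a Jenkins pipeline step, the fix looks roughly like this; the host, account, source path, and credential variable are placeholders:

```groovy
stage('Deploy to SCP') {
    steps {
        // --synchronous makes the neo client wait for the running
        // deployment to finish instead of returning immediately, so two
        // rapid builds no longer race each other.
        sh '''
            neo deploy-mta \
                --host hana.ondemand.com \
                --account myaccount \
                --user "$NEO_USER" \
                --source build/app.mtar \
                --synchronous
        '''
    }
}
```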
Most likely it happens because the SAP MTA deployer detects that you have another deploy in progress and thus stops the second deployment.
One way to go about it is to ensure from Jenkins that you don't run the second build until the first one has finished. You can do this with a lock/semaphore-like mechanism. There are several ways to do this via Jenkins plugins:
Lockable Resources
Exclusion Plugin
Build Blocker
Also look at How can I prevent two Jenkins projects/builds from running concurrently?.
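With the Lockable Resources plugin, for example, the deploy step can be wrapped in a `lock` so concurrent builds queue up instead of racing; the resource name and wrapper script here are arbitrary:

```groovy
pipeline {
    agent any
    options {
        // Queue new builds of this job instead of running them concurrently
        disableConcurrentBuilds()
    }
    stages {
        stage('Deploy') {
            steps {
                // Serialize deployments across ALL jobs that take this lock,
                // not just builds of this one job
                lock(resource: 'scp-neo-deploy') {
                    sh './deploy-mta.sh'  // hypothetical wrapper around neo deploy-mta
                }
            }
        }
    }
}
```

`disableConcurrentBuilds()` handles the single-job case; the `lock` step additionally protects the shared deployment target when several jobs can deploy to it.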

Automating deployments of large distributed client server application part of CI / CD

For one of our applications we are trying to automate the deployment process. We have end-to-end Continuous Integration implemented (Build/Test/Package/Report) for this application. Now we are trying to implement automated deployment. This application needs to be deployed to 2000 servers, with 50 clients under each server. Some of the components will be installed on the server and some on the clients.
Tools used for CI: Jenkins, GitHub, MSBuild, NUnit, SpecFlow, WiX.
I have read the difference between continuous delivery and continuous deployment and understood that continuous delivery means the code/change is proven to go to live at any point of time and continuous deployment means the proven code/change will be automatically deployed to production server.
Most of the articles on the net explain how to automate deployments as part of continuous delivery/deployment to a single server (DEV/staging/preproduction/production). None of the articles talk about deploying the application to a large number of servers and clients.
Now my questions are
1) Is deploying the application to 2000+ servers and clients part of continuous deployment, or should it be handled outside CI/CD?
2) If it can be handled within CI/CD then how do I model this in the Jenkins delivery pipeline and trigger the deployment for all the servers from the CI tool?
3) Is there any tool which I can integrate with the CI tool to automate the deployment?
Thanks
I'd keep these two aspects separate:
deployment on a large number of servers (it doesn't matter if the artifacts to deploy come from CI or not)
hooking up such deployment into CI to perform CD
For #1 - I'm sure there are professional IT tools out there that could be used. Check with your IT department. If deployment doesn't require superuser permissions, or if you have such privileges (and knowledge), you could also write your own custom deployment/management tools.
For #2 - CD doesn't really specify whether you should deploy to a single server, a certain percentage of your production servers, or all of them. Your choice should be based on what makes sense or is more appropriate for your particular context. How exactly is it done if you decide to go that way? It really depends on #1 - you just need to trigger the process from your CI. It should be a breeze with a custom deployment tool :)
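Triggering such a custom tool from the CI pipeline is then a single stage; the `deploy-all` tool and its flags below are entirely hypothetical, standing in for whatever fan-out mechanism #1 provides:

```groovy
stage('Continuous deployment') {
    steps {
        // Hand the built artifact to a custom deployment tool that fans out
        // to the fleet; a percentage-based rollout is one possible policy.
        sh './deploy-all --artifact "build/app-$BUILD_NUMBER.pkg" --percent 10'
    }
}
```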
The key requirement (IMHO) in Continuous Deployment is process orchestration. Jenkins isn't the ideal tool for this, but you can write your own Groovy script wrapper or invoke the jobs remotely from another orchestration tool. Another issue with Jenkins, at least for me, is that it is difficult to track progress.
I would model it as the following:
Divide the deployment process into logical levels, e.g. data centers -> applications -> pools, and create a wrapper for each level. This lets you see the progress at the highest level in the main wrapper and drill down when needed.
Every wrapper should finish as SUCCESS only if ALL downstream jobs were SUCCESSFUL; otherwise it should be UNSTABLE or FAILURE. That way there is no chance of missing a failure in the low-level jobs.
One job per product/application/package.
One job to control a single sequence run. For example, I would use MCollective to run the installation job sequentially or in parallel on the selected servers.
One wrapper job for every logical level.
I would use:
MCollective - as mentioned above
Foreman to query Puppet for the server list for each sequence run
I would prefer to install the package/application/artifact on the server with native OS tooling, e.g. yum on Linux servers. This lets you rely on its installation-verification mechanism.
I'm sure I missed something, but I hope this gives you an acceptable starting point.
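The wrapper-job idea above can be sketched as a scripted pipeline, where a top-level wrapper fans out to per-data-center jobs and only succeeds if all of them do; the data-center and job names are hypothetical:

```groovy
// Top-level wrapper: one parallel branch per data center
def dataCenters = ['dc1', 'dc2', 'dc3']
def branches = [:]
dataCenters.each { dc ->
    branches[dc] = {
        // propagate: true fails this wrapper if the downstream job fails,
        // so the wrapper is SUCCESS only when ALL downstream jobs are.
        build job: "deploy-${dc}", propagate: true, wait: true
    }
}
parallel branches
```

Each `deploy-dcN` job would in turn be a wrapper over its application and pool jobs, giving the drill-down view described above.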

How to deploy a successful build using Travis CI and Scalr

We're currently evaluating CI servers and Travis CI caught our eye since it is a hosted solution. I haven't been able to find any information about it being able to deploy to Scalr though. Has anyone had any luck setting this up? I found information about using Jenkins to deploy to Scalr but I'd rather not go with Jenkins.
Thanks.
Deploying an application upon a Travis CI build success is functionally similar to deploying one upon a Jenkins success. All you need to do is hook into Scalr through its API when your build succeeds.
Using Travis CI, you can't really run arbitrary post-build shell scripts (unlike Jenkins). This makes integration a bit more complicated than using Jenkins (with Jenkins you just use the Scalr Command Line Tools to call the Scalr API), but it remains feasible.
All you need to do is have Travis CI send a notification to a webhook endpoint on a webapp you control (host it on your cloud infrastructure, or on e.g. Heroku), and have that webapp call the Scalr API.
Disclaimer: I work at Scalr.

What is the best practice for using Jenkins?

Using a single server that contains only one Jenkins instance building for dev, test, etc.
Using a separate Jenkins instance on each dev and test server to build and run tests.
Edit:
This is a step-by-step explanation of our deployment and release model:
Our server-side developers develop and commit/push their code to GitHub.
The CI server where Jenkins is located polls SCM, fetches the changes, then builds (within the CI server) and runs unit tests.
After the build, artifacts are deployed to the repository server (Artifactory).
Then the CI server deploys the latest successful build to the Development Server.
Then client (mobile) developers can develop against the latest successful snapshot build of the server side.
This is our standard deployment process.
By the way,
We also deploy to the test server via the CI server, using a different job on Jenkins (the same CI server), but this is triggered manually.
Preproduction and production transitions are also done manually (preproduction and production are different servers, of course).
Questions;
Integration tests should be run on the test server. How can I achieve this by building the system on a remote CI server instead of building it on the same machine (the test server)?
As a further step, what would be the best option for constructing a Continuous Delivery system?
Thanks
A good approach is to have a single CI system that builds the system continuously as development makes changes. Each build also runs all the unit tests and results in some kind of package that can be deployed. That can be further connected with automation that deploys the package and runs other tests, or the package can be used by e.g. testers to test the system further.
Depending on your release model and branching strategy as well as type of system/product this basic setup can be adjusted to fit your needs.
If you want more details please explain what you build and how you release/deploy.
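A minimal declarative sketch of that basic setup, with the build/test/package tool steps as placeholders for whatever the project actually uses:

```groovy
pipeline {
    agent any
    triggers { pollSCM('H/5 * * * *') }  // build continuously as changes land
    stages {
        stage('Build') {
            steps { sh './build.sh' }            // e.g. a wrapper around msbuild
        }
        stage('Unit tests') {
            steps { sh './run-unit-tests.sh' }   // run on every build
        }
        stage('Package') {
            steps {
                sh './package.sh'
                // The archived package is the handoff point for downstream
                // deployment automation or manual testing.
                archiveArtifacts artifacts: 'dist/*'
            }
        }
    }
}
```

The test-server question then reduces to a later stage (or a downstream job) that deploys this archived package to the test server and runs the integration tests against it remotely, rather than building on the test server itself.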
