Microservice rollback approach if e2e test automation fails in CI/CD - Jenkins

When a new feature of a microservice is merged into the development branch, is it always deployed to the Kubernetes test environment?
If yes, what happens if e2e tests fail? Is the microservice deployment rolled back?
Is automated rollback common in CICD in the industry right now?
Is there any other way to run e2e black-box tests without deploying to the Kubernetes test environment?
I could not find a good example of this.

When a new feature of a microservice is merged into the development
branch, is it always deployed to the Kubernetes test environment?
Yes, that's right.
If yes, what happens if e2e tests fail? Is the microservice deployment
rolled back?
Since it's a development environment, it should normally be fine to leave the broken version running; beyond that, it depends on the requirements and the architecture you plan to follow.
Is automated rollback common in CICD in the industry right now?
Yes, automated rollback is common. However, it is often better to keep the rollback as a manual action: you may have a scenario where you want to keep a version deployed even though the e2e tests are failing.
The benefit of a manual action is flexibility; requirements may change, and you may have to deploy a feature even while integration testing is failing, or something similar.
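To make the "rollback behind a manual gate" idea concrete, here is a minimal sketch of a Jenkins declarative pipeline that deploys to the test environment, runs the e2e suite, and asks a human whether to roll back on failure. The deployment name (my-service), the kube-test namespace, the registry URL, and run-e2e-tests.sh are assumptions for illustration, not part of your setup.

```groovy
// Sketch: deploy to the test environment, run e2e tests, and roll back
// behind a manual confirmation. All names below are placeholders.
pipeline {
    agent any
    stages {
        stage('Deploy to test') {
            steps {
                sh 'kubectl -n kube-test set image deployment/my-service my-service=registry.example.com/my-service:${BUILD_NUMBER}'
                sh 'kubectl -n kube-test rollout status deployment/my-service --timeout=120s'
            }
        }
        stage('E2E tests') {
            steps {
                sh './run-e2e-tests.sh'   // assumed wrapper around your e2e suite
            }
        }
    }
    post {
        failure {
            script {
                // Manual gate: a human decides whether the broken version may stay.
                def doRollback = input(message: 'E2E tests failed. Roll back the deployment?',
                                       parameters: [booleanParam(name: 'ROLLBACK', defaultValue: true)])
                if (doRollback) {
                    sh 'kubectl -n kube-test rollout undo deployment/my-service'
                }
            }
        }
    }
}
```

Keeping the rollback behind an input step preserves the flexibility described above: the pipeline pauses instead of reverting automatically.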
Is there any other way to run e2e black-box tests without deploying to
the Kubernetes test environment?
You can test subsystems or smaller portions of the system directly in CI/CD. The e2e tests can also be automated on the CI side by making the Kubernetes services accessible to the CI server that runs the tests.
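One way to keep black-box e2e tests entirely on the CI side, without touching the shared Kubernetes test environment, is to start the service and its dependencies as containers on the CI agent and point the test suite at them. This is only a sketch under the assumption that a docker-compose.e2e.yml file and a run-e2e-tests.sh entry point exist in the repository; it is one option, not the only one.

```groovy
// Sketch: run black-box e2e tests on the CI agent itself.
// docker-compose.e2e.yml and run-e2e-tests.sh are assumed to exist.
pipeline {
    agent any
    stages {
        stage('Start service locally') {
            steps {
                sh 'docker compose -f docker-compose.e2e.yml up -d --build'
            }
        }
        stage('Black-box e2e tests') {
            steps {
                // Tests talk to the service over HTTP, exactly as an external client would.
                sh 'BASE_URL=http://localhost:8080 ./run-e2e-tests.sh'
            }
        }
    }
    post {
        always {
            sh 'docker compose -f docker-compose.e2e.yml down -v'
        }
    }
}
```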

Related

Should we rollback release if integration tests fail?

We are using trunk-based development, and I am confused about whether we should roll back our deployment if the integration tests fail. P.S. We are using microservices; if microservice A wants to interact with microservice B or any third-party service, which is down for some reason, should we roll back our deployment?

How to deploy a successful build using Travis CI and Scalr

We're currently evaluating CI servers and Travis CI caught our eye since it is a hosted solution. I haven't been able to find any information about it being able to deploy to Scalr though. Has anyone had any luck setting this up? I found information about using Jenkins to deploy to Scalr but I'd rather not go with Jenkins.
Thanks.
Deploying an application upon a Travis CI build success is functionally similar to deploying one upon a Jenkins success. All you need to do is hook into Scalr through its API when your build succeeds.
Using Travis CI, you can't really run arbitrary post-build shell scripts (unlike Jenkins). This makes integration a bit more complicated than using Jenkins (with Jenkins you just use the Scalr Command Line Tools to call the Scalr API), but it remains feasible.
All you need to do is have Travis CI send a notification to a webhook endpoint on a webapp you control (hosted on your cloud infrastructure, or e.g. on Heroku), and have that webapp call the Scalr API.
Disclaimer: I work at Scalr.

How do you manage multiple releases in multiple environments in continuous integration/delivery?

I am trying to wrap my head around this. Most CI/CD examples/projects have a single master that is always released, and have some variant of, e.g. git-flow, to have a develop branch. Once tagged, it goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, artifacts are ready, it is tagged as valid, but business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifacts plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> deploy to staging(1.6) -> demo(1.5) -> prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration along the way. In the build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of continuous delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
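For a rough idea of the same "build once, promote the same artifact" flow expressed in modern Pipeline syntax (the answer above uses chained freestyle jobs with the Build Pipeline and Copy Artifact plugins), here is a minimal sketch; build.sh, deploy.sh, app.jar, and the environment names are assumptions, and the manual input steps stand in for the human gates described in the question.

```groovy
// Sketch of "build once, promote the same artifact through environments".
// build.sh, deploy.sh, app.jar, and environment names are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'                       // produces target/app.jar
                archiveArtifacts artifacts: 'target/app.jar', fingerprint: true
            }
        }
        stage('Deploy to UAT') {
            steps { sh './deploy.sh uat target/app.jar' }
        }
        stage('Deploy to staging') {
            steps {
                input message: 'Promote this build to staging?'   // human gate
                sh './deploy.sh staging target/app.jar'
            }
        }
        stage('Deploy to production') {
            steps {
                input message: 'Promote this build to production?'
                sh './deploy.sh prod target/app.jar'
            }
        }
    }
}
```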
As for tagging individual binaries for specific environments: that is, by definition, not continuous integration. A binary is supposed to be created in a way that lets it easily be promoted from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging, and check-ins always seem to be a touchy subject when it comes to continuous integration, so I won't go into it much. But a lot of people share the idea that "if different members of the team are working on separate branches, then by definition they are not participating in the continuous integration process" (http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/).
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint. It gets the job done effectively, giving you the entire life of any individual artifact. A more complex solution would be Artifactory, which is essentially source control for artifacts.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in its corresponding upstream job.
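In a job-chain setup like the one described, the downstream deploy job can pick up the newest archived artifact from the upstream build with the Copy Artifact plugin. A minimal sketch, assuming an upstream job called my-app-build and a deploy.sh placeholder script:

```groovy
// Sketch: a downstream deploy job pulling the newest archived artifact
// from an assumed upstream job "my-app-build" (Copy Artifact plugin).
node {
    stage('Fetch artifact') {
        copyArtifacts projectName: 'my-app-build',
                      selector: lastSuccessful(),
                      filter: 'target/app.jar',
                      fingerprintArtifacts: true
    }
    stage('Deploy') {
        sh './deploy.sh staging target/app.jar'   // deploy.sh is a placeholder
    }
}
```

Fingerprinting the copied artifact is what ties each environment back to the exact upstream build it came from.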
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part, and you can let Docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for Jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for Windows machines and .NET.
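As a rough sketch of how Jenkins can hand the delivery part to Docker: build an image, push it to a registry, and have the target environment run that exact tag. The registry URL, image name, credentials ID, and deploy.sh below are assumptions for illustration.

```groovy
// Sketch: Jenkins builds and pushes an image, then a placeholder script
// tells the target environment to run that exact tag.
pipeline {
    agent any
    environment {
        IMAGE = "registry.example.com/my-app:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t "$IMAGE" .'
            }
        }
        stage('Push image') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                                  usernameVariable: 'REG_USER',
                                                  passwordVariable: 'REG_PASS')]) {
                    sh 'echo "$REG_PASS" | docker login registry.example.com -u "$REG_USER" --password-stdin'
                    sh 'docker push "$IMAGE"'
                }
            }
        }
        stage('Deploy') {
            steps {
                // Deployment mechanism is environment-specific; this is just a placeholder.
                sh './deploy.sh "$IMAGE"'
            }
        }
    }
}
```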

What is the best practice for using Jenkins?

Using a single server that contains only one Jenkins instance, building for dev, test, etc.?
Or using a separate Jenkins instance on each dev and test server to build and run tests?
Edit:
Here is a step-by-step explanation of our deployment and release model:
Our server-side developers develop and commit/push their code to GitHub.
The CI server where Jenkins is located polls SCM, fetches the changes, builds (on the CI server), and runs the unit tests.
After the build, artifacts are deployed to the repository server (the Artifactory server).
Then the CI server deploys the latest successful build to the development server.
The mobile client developers can then develop against the latest successful snapshot build of the server side.
This is our standard deployment process.
By the way,
We also do a test deployment to the test server via the CI server, with a different Jenkins job (on the same CI server), but this is handled/triggered manually.
Pre-production and production transitions are also done manually (pre-production and production are different servers, of course).
Questions:
Integration tests should run on the test server. How can I achieve that while the system is built on a remote CI server instead of being built on the same machine (the test server)?
As a further step, what would be the best option for constructing a continuous delivery system?
Thanks
A good approach is to have a single CI system that builds the system continuously as developers make changes. Each build also runs all the unit tests and results in some kind of package that can be deployed. That can then be connected to automation that deploys the package and runs other tests, or it can be used by, e.g., testers to test the system further.
Depending on your release model and branching strategy as well as type of system/product this basic setup can be adjusted to fit your needs.
If you want more details please explain what you build and how you release/deploy.
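To make that basic setup concrete, here is a minimal sketch of a single pipeline that builds on every change, runs the unit tests, publishes the package, and deploys it to the development server. The Maven goals, polling schedule, and deploy-to-dev.sh script are assumptions based on the Artifactory/snapshot flow described in the question, not a prescription.

```groovy
// Sketch of the single-CI-server flow: build + unit tests on every change,
// publish the artifact, then deploy the snapshot to the dev server.
// Maven goals and deploy-to-dev.sh are assumptions about the project.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')        // poll GitHub for changes, as described above
    }
    stages {
        stage('Build and unit test') {
            steps {
                sh 'mvn clean verify'
            }
        }
        stage('Publish to Artifactory') {
            steps {
                sh 'mvn deploy -DskipTests'   // pushes the snapshot to the repository server
            }
        }
        stage('Deploy to development server') {
            steps {
                sh './deploy-to-dev.sh'       // placeholder for your remote deployment step
            }
        }
    }
}
```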

Parallelizing tests with Jenkins

I am using Jenkins for integration testing.
Just to give the context: at the moment I have a separate build server which produces the build daily, and Jenkins is not used as the build server. The build server executes the unit tests in my case.
When the build process is complete, it invokes the Jenkins job. In that job, Jenkins deploys the build to a virtual machine. I have a script for doing this.
After that, my plan is to run several scripts to do the end-to-end testing.
Now I have several questions in this regard:
How to parallelize the execution of the end-to-end tests?
As I am adding script after script, I am getting worried about how manageable it will be.
I always use the web interface for adding and changing the scripts. How can I do this from the command line?
Any ideas for a good tutorial? Any pointers from all of you? Thanks!
Looks like Build Flow Plugin is what I need.
https://github.com/jenkinsci/build-flow-plugin
You might want to try and see if you can use the Build Pipeline plugin before Build Flow. It gives a much better visualization of what is going on, with less scripting.
I link the build and deploy jobs in one sequence and then have the unit and integration test jobs linked separately off the build job. You can then use the Fail The Build plugin to have downstream jobs fail upstream ones.
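For the parallelization itself, a sketch using the Pipeline `parallel` step (a different mechanism from the Build Flow DSL linked above): independent e2e suites run side by side after the deploy step, and because the whole thing lives in a Jenkinsfile kept in source control, it can be edited outside the web interface. The script names are placeholders for your own suites.

```groovy
// Sketch: run independent end-to-end test suites in parallel after deploying.
// deploy-to-vm.sh and the e2e scripts are placeholders.
pipeline {
    agent any
    stages {
        stage('Deploy to VM') {
            steps {
                sh './deploy-to-vm.sh'        // your existing deployment script
            }
        }
        stage('End-to-end tests') {
            parallel {
                stage('API tests') {
                    steps { sh './e2e/run-api-tests.sh' }
                }
                stage('UI tests') {
                    steps { sh './e2e/run-ui-tests.sh' }
                }
                stage('Integration tests') {
                    steps { sh './e2e/run-integration-tests.sh' }
                }
            }
        }
    }
}
```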
