What is the best practice for using Jenkins? - jenkins

Using a single server that contains only one Jenkins instance that builds for dev, test, etc.
Using a separate Jenkins instance on each dev and test server to build and run tests.
Edit:
Here is a step-by-step explanation of our deployment and release model.
Our server-side developers develop and commit/push their code to GitHub.
The CI server where Jenkins is located polls SCM, fetches the changes, then builds (within the CI server) and runs unit tests.
After the build, the artifacts are deployed to the repository server (an Artifactory server).
Then the CI server deploys the latest successful build to the development server.
Then the mobile client developers can develop against the latest successful snapshot build of the server side.
This is our standard deployment process.
By the way,
we also do a test deployment to the test server via the CI server with a different Jenkins job (on the same CI server), but this one is triggered manually.
Preproduction and production transitions are also done manually (preproduction and production are different servers, of course).
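For illustration, here is a minimal declarative-pipeline sketch of that standard flow; the server name and the deploy script are hypothetical placeholders, and the Artifactory publish assumes distributionManagement is configured in the pom.xml:

```groovy
pipeline {
    agent any
    triggers {
        // poll GitHub for new commits every five minutes
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build & Unit Test') {
            steps {
                sh 'mvn clean verify'       // compile and run unit tests
            }
        }
        stage('Publish to Artifactory') {
            steps {
                sh 'mvn deploy -DskipTests' // push the snapshot artifact to the repo server
            }
        }
        stage('Deploy to Dev') {
            steps {
                sh './deploy.sh dev-server.example.com' // hypothetical deploy script and host
            }
        }
    }
}
```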
Questions:
Integration tests should run on the test server. How can I accomplish that while building the system on a remote CI server instead of building it on the same machine (the test server)?
As a further step, what would be the best option for constructing a Continuous Delivery system?
Thanks

A good approach is to have a single CI system that builds the system continuously as development makes changes. Each build also runs all the unit tests and results in some kind of package that can be deployed. That package can then be fed into automation that deploys it and runs other tests, or it can be used by e.g. testers to test the system further.
Depending on your release model and branching strategy as well as type of system/product this basic setup can be adjusted to fit your needs.
If you want more details please explain what you build and how you release/deploy.
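As a rough sketch of that basic setup, the build job can archive the package and hand it off to downstream automation; the downstream job name and artifact path below are assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('Build & Unit Test') {
            steps {
                sh 'mvn clean package'  // unit tests run as part of the build
            }
        }
        stage('Archive Package') {
            steps {
                // keep the deployable artifact so testers and downstream jobs can pick it up
                archiveArtifacts artifacts: 'target/*.war', fingerprint: true
            }
        }
        stage('Hand Off') {
            steps {
                // hypothetical downstream job that deploys the package and runs further tests
                build job: 'deploy-and-test', wait: false
            }
        }
    }
}
```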

Related

CI/CD pipeline and build server

Have seen the following graph representing Jenkins pipeline:
git push --> Git repo --> Jenkins CI server --> Maven build server --> Test server --> Deliver build artifacts --> Deploy
Assuming that is correct, I am struggling to understand how the above (different?) servers work under the hood, and I need some clarification so that Jenkins procedures are not just a black box.
Are the Jenkins CI server, the Maven build server, and the Test server in reality one physical server?
If the answer to the previous question is yes, are these 3 servers different logical servers?
In my understanding, and in my case (a Java Spring project), the Maven build server executes mvn install, and since pom.xml contains npm install plus npm run test commands, it is the Maven build server that executes the UI tests and not the Test server. Am I right?
Does the Test server execute only the back-end Java acceptance tests?
It could be one physical or virtual server.
No.
To use Maven you just need to install the Maven plugin in Jenkins. Configure which version of Maven you want to use under "Manage Jenkins" -> "Global Tool Configuration" -> add the needed Maven version.
It depends on which kind of tests you want to run: you can repeat the steps from step 3 for the tool you need, or you can install the needed tools on the Jenkins server manually and add them to the PATH.
In the end, just use the needed tools in your pipeline or other kind of job in Jenkins.
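For example, a declarative pipeline can reference the Maven installation configured there by name; 'maven-3.9' below is an assumed name that must match the Global Tool Configuration entry:

```groovy
pipeline {
    agent any
    tools {
        // must match the name given under Manage Jenkins -> Global Tool Configuration
        maven 'maven-3.9'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'    // mvn is now on the PATH for this build
            }
        }
    }
}
```

If the tool installation is configured with an installer, Jenkins can even set it up automatically on whatever agent runs the build.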
Without further context, we must make some points clear:
Jenkins may work on multiple nodes (which could be real machines, VMs, or pods).
Jenkins is an orchestrator... This means that it uses different tools in an order given by the pipeline. This order is pretty much: git > build > test > deliver > deploy.
Test servers are used to install the software and run a variety of tests, often by several teams.
Build servers are used for running commands to build software.
With those clear, these are the questions:
1.- It depends on your infrastructure. Jenkins works with executors that may run on the master or on nodes. If your Jenkins is just one server, then yes... it is one physical server. If you use nodes, then it is more likely that you will find one node handling the building and another one the tests. The definitive answer lies within your Jenkins and your pipeline.
2.- No... Even with a Jenkins master-agent infrastructure, the building and testing occur within Jenkins.
3.- It depends on your concept of a test server. If you define it as a server that you use as a target for testing purposes, then the answer is no... If you define a test server as a machine that executes tests, then the answer is yes.
4.- It depends on what type of tests you have automated. You can run unit tests, regression tests, smoke tests, etc. For example, you may have some unit tests for your back-end and some Karma tests for your UI. Again, in your case you must check your pipeline and the code to see what kind of tests you are running.
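To make point 1 concrete, a pipeline can pin each stage to a different node by label; 'build-node', 'test-node', and the integration-test Maven profile below are all assumptions:

```groovy
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'build-node' }    // assumed node label
            steps {
                sh 'mvn clean package'
                stash name: 'app', includes: 'target/*.jar' // pass the binary between nodes
            }
        }
        stage('Test') {
            agent { label 'test-node' }     // assumed node label
            steps {
                unstash 'app'
                sh 'mvn verify -Pintegration-tests' // assumed Maven profile
            }
        }
    }
}
```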

User permissions for TFS Build server

I am creating a build using the new TFS 2015 Build definitions. I have msbuild tasks as well as npm/gulp tasks. I am looking at using variables to allow me to build and deploy to each environment, with DEV being the only one that runs on check-in. However, I don't want anyone to be able to start a deploy for production. How would I go about limiting the users that can start a deploy to production? I'd prefer to only have one build definition, for maintenance.
Use the Release hub capabilities for deployments and create an approval workflow for your environment pipeline.

CICD with jenkins and sonarqube

I have been working with Jenkins for the last week, and it is going well, but at the end of my R&D I am left with a lot of confusion. These are the questions confusing me:
How do I do Continuous Integration of a website with Jenkins?
How do I view the test analysis reports produced by the SonarQube server on my local system?
What happens when all my developers commit and push to the repository at the same time?
How do I deploy my web application to a target server using Jenkins?
How do I use SonarQube to test my website?
If you are building a .NET website, install the MSBuild plugin in Jenkins. Create a Jenkins job and add a step to check out your website (Git/SVN; install the required repository plugin). Schedule it to run nightly or hourly depending on your needs.
Type in the URL of your SonarQube server and then you can view the reports.
This is not an issue if your developers commit simultaneously. Usually a CI tool such as Jenkins will take care of that.
This is called continuous deployment or continuous delivery. Something like Octopus Deploy can help you if you are deploying .NET applications. You can use the tool octo.exe and pass the API key as a parameter to deploy to a specific environment.
SonarQube is used as a quality gate for code analysis. It is not a test framework. Try investigating the Selenium framework for automated testing instead.
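A minimal scheduled pipeline with a SonarQube analysis stage might look like the sketch below; 'MySonar' is an assumed server name configured in Jenkins, and the Maven build line would be swapped for an MSBuild step on a .NET site:

```groovy
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')   // run nightly, as suggested above
    }
    stages {
        stage('Checkout & Build') {
            steps {
                checkout scm
                sh 'mvn clean verify'   // replace with an MSBuild step for a .NET site
            }
        }
        stage('SonarQube Analysis') {
            steps {
                // 'MySonar' must match the server configured under Manage Jenkins
                withSonarQubeEnv('MySonar') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
    }
}
```

The resulting reports are then browsable at the SonarQube server URL mentioned above.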
I hope that I have answered your questions.

CI and CD implementation issues

I am looking to implement CI/CD in my current project. Here is what I think will work.
The environment consists of:
- Jenkins
- git
- docker
- gradle
- Linux servers
- Sonar
- Ansible.
Each tool will be used as follows.
Git: developers will push their code to this VCS.
Jenkins: on detecting a check-in, Jenkins will trigger a build and deploy to one of the servers.
Sonar: will be used for code coverage and will check the code before Jenkins builds it.
Ansible: will be used to quickly prepare newly added nodes so that code can be deployed to them.
Docker: in case we need fresh test environments every time, we can use a Docker + Ansible combo.
The flow of work will be:
The developer runs unit tests on their machine and commits the code to the server.
Jenkins will pull the code from Git, run Sonar on it, and generate reports.
Jenkins will create a build and deploy it to the dev server.
A Jenkins job will run and perform the integration testing on the dev server.
Any other automated tests can be run.
Finally, builds are pushed to the next server using Jenkins.
I will use shell commands inside Jenkins to push compiled code from one server to another.
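For that shell-based push, I imagine a stage like the following (the host, paths, and restart command are placeholders, and passwordless SSH is assumed to be set up):

```groovy
pipeline {
    agent any
    stages {
        // ... build stages ...
        stage('Push to Next Server') {
            steps {
                // hypothetical host and paths; assumes passwordless SSH is configured
                sh '''
                    scp target/app.war deploy@next-server.example.com:/opt/tomcat/webapps/
                    ssh deploy@next-server.example.com "sudo systemctl restart tomcat"
                '''
            }
        }
    }
}
```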
In this scenario, can someone answer the following?
Where does Sonar fit in, and how should it be used?
I see there are CD tools, but can't I push compiled code to the servers using shell scripts written inside the Jenkins jobs to deploy automatically? What extra benefits does a CD tool provide?
Is it wise to create a fresh test environment each time, or can we keep using the old one again and again?
Will this constitute complete CI/CD?
Can someone share their implementation?
You say you plan to use Git. I'll outline a scenario using Git on GitHub:
Developers push code changes here as pull requests
The SonarQube GitHub Plugin kicks off an initial analysis of only the code changed in the PR looking for the introduction of new issues (note that coverage and duplications are not included in this check)
Once the PR is merged, Jenkins (in one job or several, depending on your needs)
builds
fires integration tests & any other automated tests
runs SonarQube Scan. Note that this comes last to include integration test results.
pushes build to next server
Note that the ability to break the build when the project doesn't pass the SonarQube Quality Gate you've set up may be desirable in your situation. Unfortunately, it's not available in the current server version, 5.2. It is available in 5.1, and it should return soon.
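A sketch of that post-merge job, with the SonarQube scan deliberately last so the integration test results are included; the server name, Maven profile, and push script are assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn clean package' }
        }
        stage('Integration & Other Tests') {
            steps { sh 'mvn verify -Pintegration-tests' }   // assumed Maven profile
        }
        stage('SonarQube Scan') {
            steps {
                withSonarQubeEnv('MySonar') {   // assumed server name configured in Jenkins
                    sh 'mvn sonar:sonar'        // runs last to pick up integration test results
                }
            }
        }
        stage('Push to Next Server') {
            steps { sh './push-build.sh dev' }  // hypothetical script
        }
    }
}
```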

How do you manage multiple releases in multiple environments in continuous integration/delivery?

I am trying to wrap my head around this. Most CI/CD examples/projects have a single master that is always released, and have some variant of, e.g. git-flow, to have a develop branch. Once tagged, it goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, artifacts are ready, it is tagged as valid, but business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> deploy to staging(1.6) -> demo(1.5) -> prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration pieces along the way. In the build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some stuff is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of Continuous Delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
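In pipeline terms, the pick-up-and-re-archive step in each downstream job could look something like this (the upstream job name, artifact path, and deploy script are assumptions; the same idea applies to the plugin-linked freestyle jobs described above):

```groovy
// downstream "deploy to staging" job: reuse the binary instead of rebuilding it
node {
    // fetch the artifact archived by the assumed upstream job
    copyArtifacts projectName: 'test-and-deploy-uat',
                  selector: lastSuccessful(),
                  filter: 'target/*.war'
    sh './deploy.sh staging'    // hypothetical deploy script
    // re-archive so the next downstream job (demo) can pick it up
    archiveArtifacts artifacts: 'target/*.war', fingerprint: true
}
```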
As for tagging individual binaries for specific environments, that is, by definition, not continuous integration. A binary is supposed to be created in a way that it can easily be propagated from one environment to the next. So, unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging, and check-ins always seem to be a touchy subject when it comes to Continuous Integration, so I won't go into it much. But a lot of people share the idea that "if different members of the team are working on separate branches, then by definition, they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint ... which gets the job done effectively, giving you the entire life of any individual artifact. A more complex solution would be Artifactory, which is essentially artifact source control.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selection of which artifact to deploy. The pipeline should be setup to always deploy the latest artifact that was archived in its corresponding upstream job.
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part, and you could let Docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for Windows machines and .NET.
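For instance, with the Docker Pipeline plugin, Jenkins can build and push an image that your target environments then pull; the image name, registry URL, and credentials ID below are assumptions:

```groovy
// requires the Docker Pipeline plugin and a Docker daemon on the agent
node {
    checkout scm
    // build an image tagged with the Jenkins build number (image name assumed)
    def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
    // push to a registry; the URL and credentials ID are placeholders
    docker.withRegistry('https://registry.example.com', 'registry-creds') {
        image.push()
        image.push('latest')    // also move the 'latest' tag
    }
}
```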
