I have a project in Jenkins (a web application with many files). There are 5 servers I want to deploy successful builds to: 3 production and 2 test servers. However, automatic upload is required only for the first test server (and, of course, only if the build is not broken). To the remaining servers I want to deploy manually (ideally uploading to the second test server separately from the 3 production servers).
So I would like something like a list of buttons ("Upload to server #1", and so on) on the build page and on the project page, near all the plots.
However, I couldn't find anything that would help me with this. I find it hard to believe that manual publish/deployment from the admin panel is some kind of exotic, extraordinary operation. Am I perhaps trying to solve my problem the wrong way?
Install the Build Pipeline plugin and add "Build other projects (manual step)" as a post-build action of the job that uploads to the first test server. Once the pipeline is created, you can run the manual steps from the pipeline view.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin
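If you are on a newer Jenkins, the Pipeline input step gives you the same kind of manual gate without extra plugins. A minimal sketch, assuming hypothetical host names and a deploy.sh script (neither is from your setup):

```groovy
// Jenkinsfile sketch: automatic upload to test server #1, manual gates
// for the second test server and the production trio.
// All host names and deploy.sh are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }
        }
        stage('Upload to test server #1') {
            // Runs automatically, but only if the build stage succeeded.
            steps { sh './deploy.sh test1.example.com' }
        }
        stage('Upload to test server #2') {
            steps {
                input message: 'Upload to test server #2?'  // manual button
                sh './deploy.sh test2.example.com'
            }
        }
        stage('Upload to production') {
            steps {
                input message: 'Upload to the 3 production servers?'
                sh './deploy.sh prod1.example.com prod2.example.com prod3.example.com'
            }
        }
    }
}
```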
I know this forum is not meant to provide strategies.
Using Jenkins I have set up CI and CD for my Dev, QA, and Staging environments. I am stuck on a rollback strategy for all my environments.
1. What happens if my build fails in Dev?
2. What happens if my build fails in QA but passed in Dev?
3. What happens if my build fails in Staging but passed in Dev and QA?
How should I roll back and get things done, considering the DB is not in place? I have created a sample workflow but am not sure it's the right process.
Generally you can achieve this in two ways:
Set up some sort of release management tool that tracks every execution of your pipeline and snapshots the variables, artifacts, etc. used on that exact execution; then you can just run an earlier release of it (check tools like Octopus Deploy).
If you are using a branching strategy with tags, you can parameterize your jobs, passing in the tag you want to build, and build an earlier tag if something fails (a sketch follows below). Check the rebuild option for older job executions.
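A hedged sketch of that second option as a parameterized Pipeline job, where rolling back just means re-running with an older tag (the repository URL and the build/deploy scripts are placeholders):

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'RELEASE_TAG', defaultValue: 'v1.0.0',
               description: 'Git tag to build and deploy')
    }
    stages {
        stage('Checkout tag') {
            steps {
                // Check out exactly the tag that was passed in.
                checkout([$class: 'GitSCM',
                          branches: [[name: "refs/tags/${params.RELEASE_TAG}"]],
                          userRemoteConfigs: [[url: 'https://example.com/repo.git']]])
            }
        }
        stage('Build and deploy') {
            steps { sh './build.sh && ./deploy.sh' } // placeholder scripts
        }
    }
}
```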
I have recently started to mess about with Jenkins and am unsure how to deploy my web app to a basic server. I've gotten into the Pipeline (https://jenkins.io/doc/book/pipeline/) and it seems like a fantastic way to work.
Where I'm a bit stuck is in two spots:
1. Once my repo is in my workspace within Jenkins, how do I prep it so I am only deploying the files necessary for the application? For example, I don't need my src/ directory or my Vagrantfile when I'm deploying things.
2. How do I deploy my app to the server? I see examples all over the place, but I am getting a bit lost since there seem to be so many ways to do this. I'm assuming scp or something like that...?
3. To build off of #2, is there a way to deploy web apps as transactions (in one shot) rather than file-by-file?
Please let me know if I can provide any information for potential answers!
I can't speak to your specific use case, but a common way to do this is the build-and-deploy model, where you have 2 Jenkins jobs. The "build" job checks out from source, runs build commands such as Maven or make, and lastly "archives" the build artifacts. The latter is an option under the 'post-build actions' tab at the bottom.
In the "deploy" job, you grab the artifacts of your choice. You can fetch a single file, all of them, and everything in between. This requires the 'Copy Artifact' plugin, which allows you to copy files generated by other jobs. Now you can run your usual deploy script in the 'Execute Command' box. Most command-line paradigms, such as setting environment variables, are supported out of the box.
The instructions above assume that you want to run your application off of a host that you've provisioned as a Jenkins slave.
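As a rough sketch of that model in Pipeline syntax (the job name 'web-app-build', the dist/ glob, and the target host are placeholders, not anything from your setup), the "build" job might be:

```groovy
// "Build" job: compile, then archive only the deployable output,
// leaving src/, the Vagrantfile, etc. behind.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'   // or mvn package, npm run build, etc.
                archiveArtifacts artifacts: 'dist/**', fingerprint: true
            }
        }
    }
}
```

and the "deploy" job (using the Copy Artifact plugin's copyArtifacts step):

```groovy
// "Deploy" job: fetch the archived artifacts and push them to the server.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                copyArtifacts projectName: 'web-app-build', // placeholder job name
                              selector: lastSuccessful()
                sh '''
                  ssh deploy@app.example.com "mkdir -p /var/www/releases/$BUILD_NUMBER"
                  scp -r dist/* deploy@app.example.com:/var/www/releases/$BUILD_NUMBER/
                  ssh deploy@app.example.com "ln -sfn /var/www/releases/$BUILD_NUMBER /var/www/current"
                '''
            }
        }
    }
}
```

The release-directory-plus-symlink dance at the end is one common answer to your question #3: each deploy lands in a fresh directory, and repointing the /var/www/current symlink switches the whole app over in effectively one operation instead of file-by-file.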
Use archived artifacts as mentioned by Paul Back, or a third-party artifact repository server such as Artifactory.
File-by-file deployment is always tricky and error-prone. Why not spin up a fresh server with the new release (verified by a human once)?
Jenkins and Ansible are the answer here. This is how I deploy to production: since I have no need for anything like Docker (too many issues with this particular app), I have to run the app natively. A quick example:
You monitor a specific branch in GitLab, GitHub, or whatever else, and call a webhook on push/merge etc. on that branch. At that point, the Jenkins job that monitors the branch handles anything you need to do by running a playbook.
In my case, Jenkins and Ansible run on the same server; Jenkins runs the Ansible playbook that does whatever I need to do.
For example, with Ansible I copy certain files that need to be there, apply configs, change filenames, set up nginx, run Composer, and so on.
You get the point.
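A minimal sketch of that setup, assuming Jenkins and Ansible share a host and assuming hypothetical deploy.yml playbook and hosts.ini inventory names:

```groovy
// Triggered by the GitLab/GitHub webhook on the watched branch.
pipeline {
    agent any
    stages {
        stage('Deploy via Ansible') {
            steps {
                checkout scm
                // deploy.yml and hosts.ini are placeholder names; the playbook
                // copies files, applies configs, sets up nginx, runs composer, etc.
                sh 'ansible-playbook -i hosts.ini deploy.yml'
            }
        }
    }
}
```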
I am looking to implement CI/CD in my current project; here is what I think will work.
The environment consists of:
- Jenkins
- Git
- Docker
- Gradle
- Linux servers
- Sonar
- Ansible
Each tool will be used as follows.
Git: developers will push their code to this VCS.
Jenkins: on detecting a check-in, Jenkins will trigger a build and deploy it to one of the servers.
Sonar: will be used for code coverage and will check the code before Jenkins builds it.
Ansible: will be used to quickly prepare newly added nodes so that code can be deployed to them.
Docker: in case we need fresh test environments every time, we can use a Docker + Ansible combo.
The flow of work will be:
The developer runs unit tests on their machine and commits the code to the server.
Jenkins pulls the code from Git, runs Sonar on it, and generates reports.
Jenkins creates a build and deploys it to the dev server.
A Jenkins job runs the integration tests against the dev server.
Any other automated tests can be run.
Finally, the build is pushed to the next server by Jenkins.
I will use shell commands inside Jenkins to push compiled code from one server to another (a sketch of this flow follows below).
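A hedged sketch of that plan as one declarative Jenkinsfile, assuming Gradle wrapper tasks, a SonarQube server configured in Jenkins under the name 'sonar', and placeholder host names:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Sonar analysis') {
            steps {
                // 'sonar' must match the SonarQube installation name in Jenkins.
                withSonarQubeEnv('sonar') { sh './gradlew sonarqube' }
            }
        }
        stage('Build') {
            steps { sh './gradlew build' }
        }
        stage('Deploy to dev') {
            steps { sh 'scp build/libs/app.jar jenkins@dev.example.com:/opt/app/' }
        }
        stage('Integration tests') {
            steps { sh './gradlew integrationTest' } // assumes such a task exists
        }
        stage('Push to next server') {
            steps { sh 'scp build/libs/app.jar jenkins@qa.example.com:/opt/app/' }
        }
    }
}
```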
In this scenario, can someone answer the following for me?
Where does Sonar fit in, and how should it be used?
I see there are CD tools, but can't I just push compiled code to the servers using shell scripts written inside the Jenkins jobs to deploy things automatically? What extra benefits does a CD tool provide?
Is it wise to create a fresh test environment each time, or can we keep using the old one again and again?
Will this make for complete CI/CD?
Can someone share their implementation?
You say you plan to use Git. I'll outline a scenario using Git on GitHub:
Developers push code changes here as pull requests.
The SonarQube GitHub Plugin kicks off an initial analysis of only the code changed in the PR, looking for the introduction of new issues (note that coverage and duplications are not included in this check).
Once the PR is merged, Jenkins (in one job or several, depending on your needs)
builds
fires integration tests & any other automated tests
runs the SonarQube scan. Note that this comes last so the analysis includes integration test results.
pushes the build to the next server
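In Pipeline syntax, that ordering might look like the sketch below; compared with the asker's plan, the scan moves after the integration tests (the 'sonar' installation name and host are assumptions):

```groovy
pipeline {
    agent any
    stages {
        stage('Build')             { steps { sh './gradlew build' } }
        stage('Integration tests') { steps { sh './gradlew integrationTest' } }
        stage('SonarQube scan') {
            steps {
                // Last, so the analysis can pick up the integration test results.
                withSonarQubeEnv('sonar') { sh './gradlew sonarqube' }
            }
        }
        stage('Push to next server') {
            steps { sh 'scp build/libs/app.jar jenkins@next.example.com:/opt/app/' }
        }
    }
}
```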
Note that the ability to break the build when the project doesn't pass the SonarQube quality gate you've set up may be desirable in your situation. Unfortunately, it's not available in the current server version, 5.2. It is available in 5.1, and it should return soon.
I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released, and use some variant of, e.g., git-flow to maintain a develop branch. Once work is tagged, it goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, artifacts are ready, it is tagged as valid, but business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> Deploy to staging (1.6) -> Demo (1.5) -> Prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration pieces along the way. In the build job, the artifacts are created and then archived. In every job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of continuous delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
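In modern Pipeline syntax, one of those mid-pipeline jobs might look like this sketch (the upstream job name and deploy script are placeholders; it relies on the Copy Artifact plugin's copyArtifacts step):

```groovy
// "Deploy to staging" job: pull the binary from the upstream job,
// deploy it, and re-archive it for the next job downstream.
pipeline {
    agent any
    stages {
        stage('Fetch upstream artifact') {
            steps {
                copyArtifacts projectName: 'test-and-deploy-uat', // placeholder name
                              selector: lastSuccessful()
            }
        }
        stage('Deploy to staging') {
            steps { sh './deploy.sh staging app.war' } // placeholder script
        }
        stage('Re-archive for downstream') {
            steps { archiveArtifacts artifacts: 'app.war', fingerprint: true }
        }
    }
}
```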
As for tagging individual binaries for specific environments: that is, by definition, not continuous integration. A binary is supposed to be created in a way that lets it be easily propagated from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging, and check-ins always seem to be a touchy subject when it comes to continuous integration, so I won't go into it much. But a lot of people share the idea that "if different members of the team are working on separate branches, then by definition they are not participating in the continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint ... which gets the job done effectively, giving you the entire life of any individual artifact. A slightly more complex solution would be Artifactory, which is essentially artifact source control.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
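If that plugin fits your setup, here is a hedged sketch of what its Pipeline-compatible deploy step might look like; the adapter choice, credentials id, manager URL, and paths are all assumptions rather than anything from your environment:

```groovy
// Sketch: push a WAR to a remote Tomcat via the Deploy to container plugin.
pipeline {
    agent any
    stages {
        stage('Deploy WAR to Tomcat') {
            steps {
                deploy adapters: [tomcat9(credentialsId: 'tomcat-manager', // placeholder id
                                          url: 'http://app.example.com:8080')],
                       contextPath: 'myapp',
                       war: 'target/*.war'
            }
        }
    }
}
```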
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in its corresponding upstream job.
Maybe Docker can help you out with this issue. It can deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you can request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part and you could let docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for windows machines and .NET.
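A minimal sketch of that split, with Jenkins handling integration and Docker handling delivery (the registry address, image name, and target host are all placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps { sh 'docker build -t registry.example.com/myapp:$BUILD_NUMBER .' }
        }
        stage('Push image') {
            steps { sh 'docker push registry.example.com/myapp:$BUILD_NUMBER' }
        }
        stage('Deliver to environment') {
            steps {
                // The target host (running its own Docker daemon) pulls and runs the image.
                sh 'ssh deploy@demo.example.com "docker pull registry.example.com/myapp:$BUILD_NUMBER && docker run -d registry.example.com/myapp:$BUILD_NUMBER"'
            }
        }
    }
}
```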
Let's say I have this situation. I have three jobs. Job number one has two manually triggered downstream jobs (deploy to test and deploy to prod, for example). Something like this:
test-job-1 -> test-job-2 (deploy to test), test-job-3 (deploy to prod)
I want the deployment jobs (test-job-2, test-job-3) to require a password before they are triggered. How can I solve this with Jenkins?
The only option currently supported by the Build Pipeline plugin is a manually triggered downstream job. But that job starts right after you click on it; I would like to require the user to manually enter some parameters first (a password, for example).
Is there some workaround? I was thinking of using the Promoted Builds plugin, so the deployment jobs would run in a "dry run" mode (just checking that we have SSH access to the server and some other basic stuff), and then in order to deploy you would have to promote the build.
This approach isn't very nice though. Build pipeline and promoted builds plugins don't interact with each other very well.
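For what it's worth, the newer Pipeline job type can model this kind of gate directly: the input step pauses the job until a human approves, can be restricted to particular users via its submitter option, and can also prompt for parameters (including a password field) that you would then validate yourself. A minimal sketch of the approval gate, with placeholder user ids and deploy script:

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy to prod') {
            steps {
                // Pauses here until someone in the allowed list approves.
                input message: 'Deploy to production?', submitter: 'alice,ops-admin'
                sh './deploy.sh prod' // placeholder deploy script
            }
        }
    }
}
```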
This is not exactly what you want, but I guess it would somehow solve your problem.
View Job Filters
Using this plugin in tandem with a security feature such as standard matrix-based security can help you create a view that shows different jobs depending on who is logged in.
I use different Jenkins servers to "complete the pipeline", using a Build Publisher job to publish the last part of the pipeline job to the other Jenkins; I then pick it up from there. The operations team has access to the "prod" Jenkins system, and developers have access to the "dev" system.