I work for a small startup. We have 3 environments (Production, Development, and Staging), and GitHub is used as our VCS.
All environments run on EC2 with Docker.
Can someone suggest a simple CI/CD solution that can trigger builds automatically after certain branches are merged, and that also offers a manual trigger option?
For example, if anything is merged into dev-merge, build and deploy to Development, and the same for Staging: push the image to ECR and roll out the Docker update.
We tried Jenkins but we felt it was over-complicated for our small-scale infra.
We also evaluated GitHub Actions (self-hosted runners), but it needs YAML files to live in the repos.
We are looking for something that gives us the option to modify the pipeline or overall flow without repo-hosted CI/CD config (the way Jenkins lets you either use a Jenkinsfile or configure the job manually via the GUI).
Any opinions about TeamCity?
In the software company where I work, we use Jenkins to deploy to different servers. The way we do it is that every single branch in the Git repository deploys to a specific server, based on the branch name and the specifications in the Jenkinsfile. But we are in the process of unifying these branches into just one: master. How can we configure Jenkins to take the same code and deploy it to the servers we are interested in, without changing the code? I think we should separate code from deployment, but the pipeline still has to exist in some form.
Two solutions come to mind:
1. I believe you may be using SCM polling to get the builds started. With git diff you can check what was changed and, based on that, start the specific deployment.
2. If you are running the builds manually, you can parameterize the build and specify which target you want to deploy to (see the sketch below).
From experience, you might want to set up the pipeline so that a commit to the repository only triggers testing and building, not deployment (or deploys only to a test environment), and the proper production deploy is done manually (and can be parameterized).
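For the parameterized option, here is a minimal Declarative Pipeline sketch. The target names and the build.sh/deploy.sh helper scripts are assumptions for illustration, not anything your setup necessarily has:

    pipeline {
        agent any
        parameters {
            // the operator picks the target when starting the build manually
            choice(name: 'TARGET', choices: ['test', 'staging', 'prod'],
                   description: 'Server to deploy to')
        }
        stages {
            stage('Build and test') {
                steps {
                    sh './build.sh && ./run-tests.sh'   // assumed build/test scripts
                }
            }
            stage('Deploy') {
                steps {
                    sh "./deploy.sh ${params.TARGET}"   // assumed deploy helper
                }
            }
        }
    }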
I feel it's a little crazy that I couldn't find anything along these lines, especially as it's an incredibly simple requirement: is there a way to deploy from Jenkins using SSH/SCP, yet write only one instance of a transfer-set/exec script?
As it stands, deploying to servers is kind of INSANE in that I need to create a new "Deploy to SSH" task, choose a different server from the drop-down, and then copy/paste all the transfer-sets and execs from the previous entry. Then do it again. And again. And again.
There must be a better way?
This may not be an immediate short-term solution to your question, but it can be used in the long run.
Your requirement sounds to me like you need a configuration management tool. You could use Chef, Puppet, or Ansible, and the deployment itself can be automated with Jenkins CI.
One example of how to deploy an application to JBoss using Ansible:

    - name: Deploy a hello world application
      jboss: src=/tmp/hello-1.0-SNAPSHOT.war deployment=hello.war state=present
Of course, this will require installing Ansible and a little bit of initial configuration. Ansible is the simplest of all these deployment mechanisms.
Check this for more details - http://docs.ansible.com/ansible/intro.html
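To tie this back to Jenkins and the original question (one script, many servers), here is a minimal Declarative Pipeline sketch that runs a playbook against an inventory; the playbook path, inventory file, and group name are assumptions for illustration:

    pipeline {
        agent any
        stages {
            stage('Deploy via Ansible') {
                steps {
                    // one playbook; the inventory decides which servers it hits
                    sh 'ansible-playbook -i inventory/hosts deploy.yml --limit webservers'
                }
            }
        }
    }

The point is that the transfer/exec logic lives once, in the playbook, and adding a server means adding a line to the inventory rather than cloning a Jenkins task.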
I have a project in Jenkins (a web application with many files). There are 5 servers where I want to deploy the build if it is successful: 3 production and 2 test servers. However, automatic upload is required only for the first test server (and of course only if the build is not broken). To the rest of the servers I want to deploy manually (ideally uploading separately to the second test server and separately to the 3 production servers).
So I would like to have something like a list of buttons ("Upload to server #1", etc.) on the build page and on the project page, near all the plots.
However, I couldn't find anything similar that would help me here. I can't believe it: is manual publish/deployment from the admin panel really some kind of exotic, extraordinary operation? Am I perhaps trying to solve my problem the wrong way?
Install the Build Pipeline plugin and add "Build other projects (manual step)" as a post-build action of the job that uploads to the first test server. Once the pipeline is created, you can run the manual steps from the Pipeline view.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin
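If you ever move to Pipeline-as-code instead, the input step gives a similar manual gate. A minimal sketch, assuming a hypothetical deploy.sh helper and server names:

    pipeline {
        agent any
        stages {
            stage('Deploy to test server 1') {
                steps {
                    sh './deploy.sh test1'   // automatic on every successful build
                }
            }
            stage('Deploy to test server 2') {
                steps {
                    // the build pauses here until someone clicks Proceed
                    input message: 'Deploy to test server 2?'
                    sh './deploy.sh test2'
                }
            }
        }
    }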
I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released, plus some variant of, e.g., git-flow with a develop branch. Once tagged, code goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
- v1.5 is the current production release
- v1.6 has passed all tests and its artifacts are ready; it is tagged as valid, but the business decides to deploy it only to staging, awaiting an opportune moment
- v1.5 is deployed to a demo environment
- v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> Deploy to Staging (1.6) -> Demo (1.5) -> Prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration pieces along the way. In a build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some stuff is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of Continuous Delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia). A sketch of the build-once idea follows below.
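To make the build-once idea concrete, here is a minimal Declarative Pipeline sketch; the stage names, app.war, and the build/deploy scripts are assumptions for illustration:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'                        // assumed to produce app.war
                    archiveArtifacts artifacts: 'app.war'  // archived once for the whole pipeline
                    stash name: 'binary', includes: 'app.war'
                }
            }
            stage('Deploy to UAT') {
                steps {
                    unstash 'binary'                 // same binary, no rebuild
                    sh './deploy.sh uat app.war'
                }
            }
            stage('Deploy to staging') {
                steps {
                    unstash 'binary'
                    sh './deploy.sh staging app.war'
                }
            }
        }
    }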
As for tagging individual binaries for specific environments: that is, by definition, not continuous integration. A binary is supposed to be created in a way that it can easily be propagated from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging, and check-ins always seem to be a touchy subject when it comes to Continuous Integration, so I won't go into it much. But a lot of people share the idea that "if different members of the team are working on separate branches, then by definition they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint ... which gets the job done effectively, giving you the entire life of any individual artifact. A slightly more complex solution would be Artifactory, which is essentially source control for artifacts.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in the corresponding upstream job (see the sketch below).
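With the Copy Artifact plugin mentioned above, a downstream deploy job can always pull the last successful upstream build. A minimal sketch; the job name app-build, the app.war filter, and deploy.sh are assumptions:

    pipeline {
        agent any
        stages {
            stage('Fetch and deploy') {
                steps {
                    // grab app.war from the most recent successful run of the build job
                    copyArtifacts projectName: 'app-build',
                                  selector: lastSuccessful(),
                                  filter: 'app.war'
                    sh './deploy.sh staging app.war'
                }
            }
        }
    }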
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and about the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline on the integration side, and you could let Docker handle the delivery side.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for Windows machines and .NET.
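As a rough sketch of how Jenkins and Docker can split the work, one pipeline stage can build the image and another can push it to a registry; myapp and registry.example.com are placeholders (for ECR you would substitute your repository URI):

    pipeline {
        agent any
        stages {
            stage('Build image') {
                steps {
                    // BUILD_NUMBER is a standard Jenkins environment variable
                    sh 'docker build -t myapp:${BUILD_NUMBER} .'
                }
            }
            stage('Push image') {
                steps {
                    sh 'docker tag myapp:${BUILD_NUMBER} registry.example.com/myapp:${BUILD_NUMBER}'
                    sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
                }
            }
        }
    }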