We are developing a CI/CD pipeline leveraging Docker and Kubernetes on AWS. This topic is touched on in Kubernetes CI/CD pipeline.
We want to create (and destroy) a new environment for each SCM branch, from the moment a Git pull request is opened until it is merged.
We will have a Kubernetes cluster available for that.
During prototyping, the dev team came across Kubernetes namespaces. They look quite suitable: for each branch, we create a namespace ns-<issue-id>.
But that idea was dismissed by the DevOps prototyper without much explanation, just the statement that "we are not doing that because it's complicated due to RBAC". It's quite hard to get more detailed reasons.
However, for CI/CD purposes we need no RBAC: everything can run with unlimited privileges and no quotas; we just need a separate network for each environment.
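To make it concrete, the per-branch flow we prototyped looks roughly like this (the namespace name, the default-deny policy, and the assumption of a NetworkPolicy-enforcing CNI are ours, not anything prescribed by Kubernetes):

```bash
# Create an isolated environment for the branch (issue id is a placeholder)
ISSUE_ID=1234
kubectl create namespace "ns-${ISSUE_ID}"

# Block traffic from other namespaces so each environment gets its own
# "separate network" (requires a CNI plugin that enforces NetworkPolicy)
kubectl apply -n "ns-${ISSUE_ID}" -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
EOF

# ...deploy the branch build into the namespace here...

# On merge (or when the pull request is closed), tear the environment down
kubectl delete namespace "ns-${ISSUE_ID}"
```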
Is using namespaces for such a purpose a good idea? I am still not sure after reading the Kubernetes docs on namespaces.
If not, is there a better way? Ideally, we would like to avoid using Helm, as it adds a level of complexity we probably don't need.
We're working on an open source project called Jenkins X, which is a proposed subproject of the Jenkins Foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you submit a pull request, we automatically create a Preview Environment, which is exactly what you describe: a temporary environment used to deploy the pull request for validation, testing, and approval before the pull request is merged.
We now use Preview Environments all the time for many reasons and are big fans of them! Each Preview Environment is in a separate namespace so you get all the usual RBAC features from Kubernetes with them.
If you're interested, here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on pull requests, with Spring Boot and Node.js apps (but we support many languages and frameworks).
I am trying to set up Kubernetes for my company. I have looked a good amount into Jenkins X and, while I really like the roadmap, I have come to the realization that it is likely not mature enough for my company to use at this time. (The UI being in preview, a flaky command line, random IP address requirements, and poor Windows support are a few of the issues that have led me to that conclusion.)
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
But I am not sure about GitOps support. When I try to google it ("gitops jenkins"), I get a bunch of information that is mostly about Jenkins X.
Is there an easy(ish) way for normal Jenkins to use GitOps? If so, how?
Update:
By GitOps, I mean something similar to what Jenkins X supports: changes to the cluster are stored in a Git repository, and merging causes a deployment.
Yes, this is what Jenkins (or other CI/CD tools) can do. You can declare a deployment pipeline in a Jenkinsfile that is triggered on merge (commit to master) and have other steps for other branches (if you want).
I recommend deploying with kubectl using Kustomize and storing the config files in your Git repository. You parameterize different environments, e.g. staging and production, with overlays. For example, you might deploy with only 2 replicas in staging but with 6 replicas and more memory in production.
Using Jenkins for this, I would create a Docker agent image that includes kubectl, so your pipeline steps can use the kubectl command-line tool, as sketched below.
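As a rough sketch (the directory layout and overlay names below are just an assumed convention, not anything Kustomize requires), the repository and the commands a Jenkins step would run might look like this:

```bash
# Assumed repository layout:
#   k8s/base/                  deployment.yaml, service.yaml, kustomization.yaml
#   k8s/overlays/staging/      kustomization.yaml patching replicas to 2
#   k8s/overlays/production/   kustomization.yaml patching replicas to 6 and raising memory limits

# Preview the rendered manifests for an environment
kubectl kustomize k8s/overlays/staging

# On merge to master, the kubectl-equipped Jenkins agent applies the overlay
kubectl apply -k k8s/overlays/staging      # staging
kubectl apply -k k8s/overlays/production   # production, typically behind an approval step
```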
Jenkins on Kubernetes
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
I have not had the best experience with this. It may work, or it may not work so well. I currently host Jenkins outside the Kubernetes cluster. I think that Jenkins X together with Tekton may be a promising upcoming solution for this, but I have not tried that setup.
I am a beginner Jenkins user. I am trying to put a development process onto a DevOps pipeline that includes Jenkins, GitHub, SonarQube, and IBM UCD.
It is not a very complicated deployment process, and it uses Windows machines.
There are three environments: QA, DEV, and PROD.
I know that I need to install one IBM UCD agent for each of those three, but do I need three Jenkins agents as well, or could just one Jenkins master handle the deployment for all three environments? Which way is better?
Usually, for complex deployment processes, companies use a "Master+Agent" scheme, but in your case there is no need to create an advanced Jenkins setup with master and agents if you can build everything on one host and have no additional projects or restrictions.
From the official documentation:
It is pretty common when starting with Jenkins to have a single server which runs the master and all builds, however Jenkins architecture is fundamentally "Master+Agent". The master is designed to do co-ordination and provide the GUI and API endpoints, and the Agents are designed to perform the work. The reason being that workloads are often best "farmed out" to distributed servers. This may be for scale, or to provide different tools, or build on different target platforms. Another common reason for remote agents is to enact deployments into secured environments (without the master having direct access).
For additional information you can read the following articles: this and this.
What are the pros and cons of using AWS CodePipeline vs Jenkins?
I can't see a whole lot of info on the interwebs (apart from https://stackshare.io/stackups/jenkins-vs-aws-codepipeline). As far as I can tell, the differences are as follows:
AWS CodePipeline Pros
Web-based
integrated with AWS
simple to set up (as it is web-based)
AWS CodePipeline Cons
can't be used to set up code repos locally
Jenkins Pros
standalone software
can be used for many systems (other than AWS)
many options for setup (e.g. plugins)
can be used to set up code repos locally
Any other major differences that people can use to make an informed choice?
CodePipeline is a continuous "deployment" tool, while Jenkins is more of a continuous "integration" tool.
Continuous integration is a DevOps software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run.
With continuous deployment, code changes are automatically built, tested, and released to production. Continuous deployment expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage.
References:
https://aws.amazon.com/devops/continuous-integration/
https://aws.amazon.com/devops/continuous-delivery/
Another downside of using AWS CodePipeline is the lack of integration with source control providers other than GitHub. The only other option is to create a version-enabled Amazon S3 bucket and push the code there. This adds an extra layer between source control and CodePipeline.
Also, there is no proper documentation explaining how to push code to an Amazon S3 bucket for codebases built on commonly used platforms such as .NET. The example on the AWS website deals with some arbitrary files, which is not particularly helpful.
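For what it's worth, here is a minimal sketch of what that push can look like, assuming a versioning-enabled bucket and an S3 source action that watches a fixed object key (the bucket and key names are placeholders):

```bash
# Package the working copy (excluding Git metadata) and upload it to the
# versioned bucket that the CodePipeline S3 source action polls
zip -r source.zip . -x '*.git*'
aws s3 cp source.zip s3://my-pipeline-source-bucket/source.zip
```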
Another (perhaps trivial) entry missing from your cons section for AWS CodePipeline is price: Jenkins is free.
A GitLab SCM integration is now provided by AWS: https://aws.amazon.com/blogs/devops/integrating-git-with-aws-codepipeline/
CodePipeline and Jenkins can accomplish the same thing. Also, you don't necessarily have to use the web UI for CodePipeline; it can be set up through an AWS SAM CLI template, very similar to CloudFormation templates.
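For example, assuming the pipeline is declared in a template file (the file name and stack name here are placeholders), it can be created or updated from the CLI in one command:

```bash
# Deploy the pipeline stack from a CloudFormation/SAM template
aws cloudformation deploy \
  --template-file pipeline.yaml \
  --stack-name my-pipeline \
  --capabilities CAPABILITY_IAM
```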
CodePipeline also supports a number of source code providers: AWS CodeCommit, AWS S3, GitHub, and Bitbucket.
I personally like CodePipeline a lot better than Jenkins if you're working in AWS. The interface is 10x cleaner, IMO. And with the SAM CLI templates, your pipelines can be managed as code, similar to how you'd use a Jenkinsfile.
You can do a lot more with Jenkins because you can customize it with a myriad of plugins. Thus, you can stay on the bleeding edge, if needed.
By contrast, with CodePipeline you are limited to what AWS offers you. Of course, CodePipeline gives you the opportunity to select Jenkins as the tool for the build step. However, that means you cannot use Jenkins for other purposes at other stages of the pipeline.
If you are a fan of HashiCorp Vault, you can easily integrate it with Jenkins to provide dynamic secrets to your builds. You cannot do that with CodePipeline; you will have to rely on the cloud-native mechanisms, in this case AWS KMS.
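As a rough illustration only (the Vault address, the AppRole auth method, and the secrets path are assumptions about a typical setup, not something Jenkins or any plugin mandates), a build step could fetch short-lived credentials like this:

```bash
# Point the Vault CLI at the server and authenticate with AppRole credentials
# injected into the build environment by Jenkins (hypothetical role/secret ids)
export VAULT_ADDR=https://vault.example.com:8200
export VAULT_TOKEN="$(vault write -field=token auth/approle/login \
  role_id="$ROLE_ID" secret_id="$SECRET_ID")"

# Request dynamic, short-lived database credentials for this build
vault read database/creds/ci-build
```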
Here is a tutorial that shows you how to integrate Jenkins with CodePipeline; you will need several plugins to get Jenkins to talk to the different CodePipeline components.
https://aws.amazon.com/blogs/devops/setting-up-a-ci-cd-pipeline-by-integrating-jenkins-with-aws-codebuild-and-aws-codedeploy/
I am new to DevOps, and need to develop a strategy for a growing business that will handle many different services/nodes (like 100).
I've been learning about Docker, and it seems like Docker Cloud is a good service, but I just don't really know the standard use cases of the various services, and how to compare them.
I need some guidance as to how to manage the development environment, deployment, production environment, and server administration. Are Docker Cloud, Chef Cloud, and AWS ECS tools that can help with all of these, or only some aspects? How do these services differ?
If you are just starting out with DevOps, I would start with the most basic pipeline and its foundational elements.
The reason I would start with a basic pipeline is that, if you have no experience, you have to get it from somewhere, and you need to understand the basics of Docker Engine and its building blocks. In addition, you need to design the pipeline.
Here is one basic single-container pipeline with which you can start getting some experience:
Maven - use the standard, well-understood versioning scheme in your Dockerfile(s), so your Docker tags will be e.g. 0.0.1-SNAPSHOT or 0.0.1 for a release
Maven - get familiar with and use the Spotify Docker Maven plugin
Jenkins - this will do your pulls / pushes to Nexus 3
Nexus 3 - this will proxy both Docker Hub and Maven Central and act as your private registry
Deploy server (test/dev) - Jenkins will scp docker-compose files onto this environment and bring your environments up and down
Cleanup - clean up all your environments with Spotify's docker-gc (ideally daily; have Jenkins do this)
Once you have the above going, then move on to cloud services, orchestration, etc., but first get the basics right.
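To make the basics concrete, here is a rough sketch of the commands such a Jenkins job might run end to end (the registry host, image name, version, and target host are all placeholders):

```bash
# Build the artifact and the Docker image (assumes the Spotify Maven plugin is bound to the package phase)
mvn clean package

# Push the image to the private Nexus 3 Docker registry
docker tag myapp:0.0.1-SNAPSHOT nexus.example.com:8082/myapp:0.0.1-SNAPSHOT
docker push nexus.example.com:8082/myapp:0.0.1-SNAPSHOT

# Roll the test/dev environment: copy the compose file over and restart the stack
scp docker-compose.yml deploy@test-host:/opt/myapp/docker-compose.yml
ssh deploy@test-host 'cd /opt/myapp && docker-compose pull && docker-compose up -d'
```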
I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released and use some variant of, e.g., git-flow with a develop branch. Once tagged, it goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, artifacts are ready, it is tagged as valid, but business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and deploy to UAT (2.0) -> Deploy to staging (1.6) -> Demo (1.5) -> Prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration pieces along the way. In a build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the "Test and Deploy to UAT" job to get its binary. The entire concept of continuous delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
As for tagging individual binaries for specific environments, that is, by definition, not continuous integration. A binary is supposed to be created in a way that it can easily be propagated from one environment to the next. So unfortunately, building individually for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging, and check-ins always seem to be a touchy subject when it comes to continuous integration, so I won't go into it much. But a lot of people share the idea that "if different members of the team are working on separate branches, then by definition they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint, which gets the job done effectively, giving you the entire life of any individual artifact. A somewhat more complex solution would be Artifactory, which is essentially source control for artifacts.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in its corresponding upstream job.
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration side, and you could let Docker handle the delivery part.
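A minimal sketch of that delivery step, assuming the target environment exposes a Docker daemon the CI host is allowed to reach and TLS is already set up (host names, ports, and image tags are placeholders):

```bash
# Deploy a specific image version to the demo environment's Docker daemon
export DOCKER_HOST=tcp://demo-host.example.com:2376
docker pull registry.example.com/myapp:1.5
docker run -d --name myapp -p 80:8080 registry.example.com/myapp:1.5

# Ask the environment what is currently deployed
docker inspect --format '{{.Config.Image}}' myapp
```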
Docker: https://www.docker.com
Docker build step plugin for Jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for Windows machines and .NET.