I am new to DevOps, and need to develop a strategy for a growing business that will handle many different services/nodes (like 100).
I've been learning about Docker, and it seems like Docker Cloud is a good service, but I just don't really know the standard use cases of the various services, and how to compare them.
I need some guidance as to how to manage the development environment, deployment, production environment, and server administration. Are Docker Cloud, Chef Cloud, and AWS ECS tools that can help with all of these, or only some aspects? How do these services differ?
If you are only starting out with DevOps, I would begin with the most basic pipeline and its foundational elements.
The reason is that experience has to come from somewhere: you need to understand the basics of Docker Engine and its foundational pieces first, and you also need to design the pipeline itself.
Here is one basic uni-container pipeline with which you can start getting some experience:
Maven - use the standard, well-understood versioning scheme in your Dockerfile(s) so your Docker tags will be e.g. 0.0.1-SNAPSHOT or 0.0.1 for a release
Maven - get familiar with and use the Spotify Docker Maven plugin (docker-maven-plugin, or its successor dockerfile-maven-plugin)
Jenkins - this will do your pulls / pushes to Nexus 3
Nexus 3 - this will proxy both Docker Hub and Maven Central and be your private registry
Deploy Server (test/dev) - Jenkins will scp docker-compose files onto this environment and bring your environments up and tear them down
Cleanup - clean up all your environments with Spotify's docker-gc (ideally daily; have Jenkins do this)
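To make this concrete, here is a minimal scripted Jenkinsfile sketch of the pipeline above. The registry address nexus.example.com:8083, the credential IDs and the deploy host are placeholders, and it assumes the Docker Pipeline and SSH Agent plugins are installed:

// Hypothetical uni-container pipeline: Maven build, image push to Nexus, compose deploy
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Maven build') {
        // produces the versioned artifact, e.g. 0.0.1-SNAPSHOT or 0.0.1 for a release
        sh 'mvn -B clean package'
    }
    stage('Docker build & push to Nexus') {
        def image = docker.build('nexus.example.com:8083/myapp:0.0.1-SNAPSHOT')
        docker.withRegistry('https://nexus.example.com:8083', 'nexus-credentials') {
            image.push()
        }
    }
    stage('Deploy to test/dev') {
        sshagent(['deploy-server-key']) {
            sh 'scp docker-compose.yml deploy@deploy.example.com:/opt/myapp/'
            sh 'ssh deploy@deploy.example.com "cd /opt/myapp && docker-compose pull && docker-compose up -d"'
        }
    }
}

Alternatively, the Spotify Maven plugin can build and push the image as part of mvn deploy, in which case the Docker stage collapses into the Maven one.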
Once you have the above going, then move on to cloud services, orchestration, etc. - but first get the basics right.
We are developing a web application that has multiple micro-services in the same repository.
For our existing monolithic application, we deploy to Kubernetes using Helm and Jenkins. When it comes to micro-services, we are struggling to define our CI/CD pipeline strategy. Below are the unclear issues:
For the monolithic application I have one Dockerfile, one Jenkinsfile and one Helm chart. In the build stage I build the image using the following command:
docker.build("registry/myrepo/image:${env.BUILD_NUMBER}")
For the monolithic application, I have one chart with multiple values files, such as values.dev.yaml and values.prod.yaml, configured for multiple environments.
So our questions are:
1. How should we build and push multiple images from multiple Dockerfiles in the Jenkinsfile for the micro-services? At present, every micro-service has its own Dockerfile in its own root.
2. Is it possible for Jenkins to distinguish which micro-services we would like to deploy? Sometimes we make changes only to a specific service and would like to deploy just those changes. Should we set up independent pipelines, or is there a way to handle this in the same pipeline?
3. How should we organize our Helm chart to deploy the micro-services to Kubernetes? Should we create a chart per service, or create multiple templates that refer to a single values.yaml?
Looks like you are almost there.
Have a separate pipeline for each micro-service; it would build, verify and push Docker images to a Docker registry. Have another pipeline for verifying and deploying the whole stack using Helm.
I assume you would be using Git events to identify the changes? When there is a change in a micro-service, it would be committed to the single repository; this triggers a Git event with which you can trigger the pipeline of the respective micro-service.
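As a sketch of what that can look like in a single Jenkinsfile, you can also let the pipeline itself work out which service directories changed and build only those. The service names, registry address and credential ID below are placeholders:

// Hypothetical Jenkinsfile fragment: build and push only the services whose directory changed
node {
    checkout scm

    def services = ['orders', 'payments', 'users']             // one directory per micro-service
    def base = env.GIT_PREVIOUS_SUCCESSFUL_COMMIT ?: 'HEAD~1'  // last good build, or previous commit

    for (svc in services) {
        // Limit the diff to this service's directory; empty output means no changes.
        def changed = sh(script: "git diff --name-only ${base} HEAD -- ${svc}/", returnStdout: true).trim()
        if (changed) {
            stage("Build & push ${svc}") {
                // Each micro-service keeps its own Dockerfile in its own root.
                def image = docker.build("registry/myrepo/${svc}:${env.BUILD_NUMBER}", "./${svc}")
                docker.withRegistry('https://registry', 'registry-credentials') {
                    image.push()
                }
            }
        }
    }
}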
As the Helm chart represents your whole application stack, I would suggest keeping it as a single chart. If the complexity of the micro-services increases, split it into subcharts.
Multiple charts can be a future maturity level, once the team behind each micro-service can deploy upgrades independently without affecting the availability of the whole stack.
Have a separate job in Jenkins for each micro-service.
Have a separate job to deploy the whole application using the Helm chart.
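A minimal sketch of that whole-stack job, assuming a single umbrella chart in ./chart and reusing the values.dev.yaml convention you already have; the release name, chart path and kubeconfig credential ID are placeholders:

// Hypothetical Jenkinsfile for the whole-stack deployment job (Helm 3).
node {
    checkout scm

    stage('Deploy stack to dev') {
        // kubeconfig-dev is a hypothetical Jenkins "secret file" credential.
        withCredentials([file(credentialsId: 'kubeconfig-dev', variable: 'KUBECONFIG')]) {
            sh '''
                helm upgrade --install myapp ./chart \
                    -f ./chart/values.dev.yaml \
                    --namespace myapp-dev --create-namespace
            '''
        }
    }
}

If you later split the chart into subcharts, this job does not need to change; only the umbrella chart's dependencies grow.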
I have a .NET core web API and Angular 7 app that I need to deploy to multiple client servers, potentially running a plethora of different OS setups.
Dockerising the whole app seems like the best way to handle this, so I can ensure that it all works wherever it goes.
My question is about my understanding of Kubernetes and the distribution of the application. We use Azure DevOps for build pipelines, so if I'm correct it would work as follows:
1) Azure DevOps builds and deploys the image as a Docker container.
2) Kubernetes could realise there is a new version of the Docker image and push it out to all of the different client servers?
3) Client-specific app settings could be handled by Kubernetes secrets.
Is that a reasonable setup? Have I missed anything? And are there any recommendations on setup or guides I can follow to get started?
Thanks in advance, James
Azure DevOps will perform the CI part of your pipeline. Once that is complete, Azure DevOps will push the images to ACR. The CD part should be done either directly from Azure DevOps (you may have to install a private agent on your on-prem servers and configure firewalls, etc.) or with Kubernetes-native CD tools such as Spinnaker or Jenkins X. Secrets should be kept in Kubernetes Secrets.
We are developing a CI/CD pipeline leveraging Docker/Kubernetes in AWS. This topic is touched on in Kubernetes CI/CD pipeline.
We want to create (and destroy) a new environment for each SCM branch, from the time a Git pull request is opened until it is merged.
We will have a Kubernetes cluster available for that.
During prototyping by the dev team, we came across Kubernetes namespaces. They look quite suitable: for each branch, we create a namespace ns-<issue-id>.
But that idea was dismissed by the DevOps prototyper without much explanation, just the statement that "we are not doing that because it's complicated due to RBAC". It has been quite hard to get detailed reasons.
However, for CI/CD purposes we need no RBAC - everything can run with unlimited privileges and no quotas; we just need a separate network for each environment.
Is using namespaces for such purposes a good idea? I am still not sure after reading the Kubernetes docs on namespaces.
If not, is there a better way? Ideally, we would like to avoid using Helm, as it adds a level of complexity we probably don't need.
We're working on an open source project called Jenkins X, which is a proposed sub-project of the Jenkins Foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you submit a Pull Request, we automatically create a Preview Environment, which is exactly what you describe - a temporary environment used to deploy the pull request for validation and testing before it is approved.
We now use Preview Environments all the time for many reasons and are big fans of them! Each Preview Environment is in a separate namespace so you get all the usual RBAC features from Kubernetes with them.
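Independent of the tooling, the mechanics are small. For illustration, a plain scripted Jenkinsfile could do roughly the following per pull request; the k8s/ manifest directory, deployment name and namespace prefix are placeholders, and CHANGE_ID is the pull-request number Jenkins exposes on PR builds:

// Hypothetical fragment: one throwaway namespace per pull request
node {
    checkout scm
    def ns = "preview-pr-${env.CHANGE_ID}"

    stage('Create preview environment') {
        sh "kubectl create namespace ${ns} || true"      // tolerate re-runs
        sh "kubectl -n ${ns} apply -f k8s/"              // plain manifests, no Helm required
    }

    stage('Test against preview environment') {
        sh "kubectl -n ${ns} rollout status deployment/myapp --timeout=120s"
        // run smoke / integration tests against the preview environment here
    }

    stage('Tear down') {
        // usually triggered when the pull request is merged or closed;
        // deleting the namespace removes everything created inside it
        sh "kubectl delete namespace ${ns}"
    }
}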
If you're interested, here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and Node.js apps (but we support many languages and frameworks).
I'm using Jenkins as a Continuous Integration tool together with DevOps tools like JIRA, Confluence, Crowd, SonarQube, Hygieia, etc.
But our environment has changed: we now deploy micro-services to a PaaS.
So I have the following issues to resolve:
Deployment Monitoring
to view which application is deployed to what instance with which version.
Canary Deployment
deploy to one instance first, then to all instances (after manual approval, or automatically).
Deploy to Cloud Foundry
more specifically IBM Bluemix
So I examined Spinnaker but I found that the cloud driver for CF is no longer maintained.
https://github.com/spinnaker/clouddriver/pull/1749
Do you know of another open-source CD tool?
Take a look at Concourse: https://concourse-ci.org/
It's open source, and you can use it to deploy either your application or Cloud Foundry itself. It's a central tool for DevOps. Basically, you have pipelines that can trigger tasks (manually or automatically). There are ready-made resources (a GitHub connector, etc.), but you can also create your own tasks. It runs Docker containers as workers to execute tasks/jobs.
Best,
I find it relatively easy to integrate a CD server with any PaaS provider. You will have to either use a plugin or create your own integration.
My top two recommendations would be GitLab or Bamboo, in that order.
Given your preference for JIRA, you might prefer Bamboo, as it has very good integration with the rest of the Atlassian tools, but it is not open source.
I am brand new to OpenStack and Chef.
I am trying to set up a Continuous Delivery process where I imagine something like the following:
From Jenkins, create a pipeline with the following jobs:
Job 1: compiles, runs unit tests and static analysis, and deploys the RPM build artifacts to Artifactory.
Job 2: downloads the RPM files from Artifactory and collects them into a Yum repository.
Job 3: cleans and recreates the lab infrastructure in OpenStack (routers, private networks, nodes with a clean image). After that, it cleans and re-registers those nodes in the Chef server, specifying the run-list of cookbooks each node will have.
Job 4: runs functional and integration tests using the infrastructure created in Job 3, and publishes the results.
My doubt is how to implement Job 3. The way I see to do it is to use the OpenStack command-line clients (nova, neutron) in the Jenkins configuration, and likewise the knife and chef-client commands for Chef, but for all of that I would need access to the OpenStack controller server and all the Chef nodes.
Is there a tidier way to implement this than plain command lines - something like Jenkins plugins, Chef recipes, or some other approach?
What I don't like about putting this in the Jenkins configuration is that it is not under version control. I would like something like Chef recipes that perform all the OpenStack and Chef infrastructure setup and keep those recipes under version control, but I am not sure how to implement all this with recipes and how they would then be applied from Jenkins.
Is the idea I have correct, or are there other ways to implement this approach?
Thank you for the help.
For provisioning and orchestrating application infrastructure, I would recommend using Heat. A single YAML file describes your desired application environment.
The OpenStack documentation describes how Nova servers can be configured with Chef at boot time using cloud-init.
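To keep everything under version control, the Heat template, its cloud-init data and the Jenkinsfile for Job 3 can all live in the same repository. A rough sketch, where the stack name lab-stack, the template path heat/lab.yaml and the openstack-rc credential are placeholders:

// Hypothetical Jenkinsfile for Job 3: rebuild the OpenStack lab from a versioned Heat template.
node {
    checkout scm

    // openstack-rc is a Jenkins "secret file" credential holding an OpenStack RC file.
    withCredentials([file(credentialsId: 'openstack-rc', variable: 'OS_RC')]) {
        stage('Tear down old lab') {
            sh '. "$OS_RC" && openstack stack delete --yes --wait lab-stack || true'
        }
        stage('Recreate lab from Heat template') {
            // heat/lab.yaml declares the routers, private networks and nodes; each node's
            // user_data is a cloud-init script that bootstraps chef-client and registers
            // the node with its run-list against the Chef server, so no knife calls are
            // needed from Jenkins.
            sh '. "$OS_RC" && openstack stack create --wait -t heat/lab.yaml lab-stack'
        }
    }
}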
Hope this helps
Also consider CloudMunch, which integrates with OpenStack to deliver continuous delivery and deployments.
Disclaimer: I work at CloudMunch.