I'm looking to set up the following scenario:
Build my application with an automation tool, Jenkins for example
When the build/tests succeed, I want to create a new docker container with the new version of the application in it.
I would then have all the other applications that make use of the newly deployed app/service use the newly created container instead of the old one. So when other containers are, for example, using "calculator.local:3000", I would like this to point to my new container instead of the old one
When everything succeeds, I want to delete/archive the old container
What would be the correct way to create such a setup? I came across a lot of complex systems involving DNS servers, but I'm really looking for something easier to set up.
Similar to what you want to achieve is a process called blue-green deployment. It relies on there always being two versions of the application running (blue and green), with one set as active, meaning all production traffic is routed to it.
Let's say the blue container is currently active. A deployment is done by updating the green container and then changing the proxy to route traffic to the green app. Done properly, you can have zero downtime. The hardest part is setting up this proxy, which has to be dynamically updated with the application container's IP. This can be done using Consul, Consul's Registrator, and consul-template.
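For the simpler setup you're after, the switch-over itself can also be done by hand or from a Jenkins job. Below is a minimal sketch assuming an nginx reverse proxy on the host and hypothetical names (calculator-blue/calculator-green, myrepo/calculator, a /health endpoint); it is the same idea as the consul-template setup, just without the service discovery:

docker run -d --name calculator-green -p 3001:3000 myrepo/calculator:NEW_TAG

# wait until the new (green) container answers its health endpoint
until curl -fs http://localhost:3001/health; do sleep 2; done

# point the proxy upstream at the green container and reload nginx
sed -i 's/localhost:3000/localhost:3001/' /etc/nginx/conf.d/calculator.conf
nginx -s reload

# retire the old (blue) container once traffic has drained
docker stop calculator-blue && docker rm calculator-blue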
Here are a few guides on how to set up blue-green deployment:
Blue-Green Deployment To Docker Swarm with Jenkins Workflow Plugin
Building Blue-Green Deployment with Docker
Docker Flow
Blue-Green deployment DIY
I have a docker stack deployed with 20+ services which comprise my application. I would like to know whether there is a way to update this stack with the latest changes to the software from within one of the containers running as a part of the stack?
The approach I have tried:
In one of the containers for a service, I mounted the docker socket and the /usr/bin/docker binary, and downloaded the latest compose file from the server.
Instantiated a script which downloads the latest images.
Initiated a docker stack deploy with the new compose file.
Everything works fine this way, but if the service running this process itself has an update, and the docker stack deploy tries to create this service before any other service in the stack, then the stack update fails.
Any suggestion or alternative approaches for this?
There is no out-of-the-box solution for Docker swarm mode (something like Watchtower for a single Docker host). I think you already found the best way of doing this automatically. I would suggest you put the update container (the one that is updating the services) on an ignore list. Then, on one of your manager nodes, create a cron job that updates that one container. I know this is not a perfect solution, but it should work.
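A rough sketch of that cron job, assuming the updater runs as a swarm service named updater with image myrepo/stack-updater (both hypothetical names); run it on a manager node, e.g. from a daily crontab entry such as 0 3 * * * /usr/local/bin/update-updater.sh:

# update-updater.sh: refresh only the service that was excluded from the normal update run
docker service update --image myrepo/stack-updater:latest updater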
The standard way to do this is to build a new Docker image that contains your new application code. Tag it (as in the docker build -t argument) with some unique version, like a source control tag or date stamp. Start a new container with the new application code, then stop and delete the old container.
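As a hedged sketch of that flow (image and container names are placeholders; here the tag is the current git commit):

TAG=$(git rev-parse --short HEAD)
docker build -t myrepo/myapp:$TAG .
docker run -d --name myapp-$TAG -p 3000:3000 myrepo/myapp:$TAG   # start the new version
docker stop myapp-old && docker rm myapp-old                     # retire the old container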
As a general rule you do not upgrade the software inside a running container. Delete the old container and start a new container with the software and version you want. Also, this is generally managed by an operator, a continuous deployment system, or an orchestration system, not by the container itself. (Mounting the Docker socket into a container is a significant security exposure.)
(Imagine setting up a second copy of your cluster that works exactly the same way as your production cluster, except that it has the software you want to deploy tomorrow. You don't want your production cluster picking that up on its own until you've tested it. This scheme should give you a reproducible deployment setup so that it's easy to start that pre-production cluster, but also give you control over which specific versions are running where.)
I'm using Docker, and I have implemented a system to deploy environments (on a single server) based on Git branches, using Traefik (*.dev.domain.com) and Docker Compose templates.
I like Kubernetes, but I've never switched to it since I'm limited to a single server for my infrastructure. I've only used it in local installations (Docker for Windows).
So, my question is: does it make sense to run a Kubernetes "cluster" (master and nodes) on a single server to orchestrate and route containers (in place of Traefik/Rancher/Docker Compose)?
This use is for development and staging only for the moment, so high availability is not a prerequisite.
Thanks.
If it is not a production environment, it doesn't matter how many nodes you are using. So yes, it should be just fine in this case. But make sure all the k8s features you will need in production are available in test/dev, to keep things similar and portable.
AFAIU,
I do not see a requirement for Kubernetes unless you are at least doing the following on a single host, using native docker run, docker-compose, or Docker Engine swarm mode:
Make sure there are enough (>= 2) replicas of your app on the single server and that you are balancing the load across those app containers (see the sketch after this list).
If you want to go a bit more advanced, you should be able to scale up and down dynamically (Docker swarm mode supports this out of the box; otherwise use the jwilder nginx proxy).
Your deployment should not cause downtime. Make sure at least one container is healthy at any instant while deploying.
Containers should auto-heal (restart automatically) in case your HTTP or TCP health check fails.
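A rough sketch of those points using Docker Engine swarm mode on a single host; the image name, published port, and /health endpoint are assumptions about your app, and curl must exist inside the image for the health check to work:

#  --replicas 2    : at least two copies, swarm load-balances across them
#  --health-*      : unhealthy tasks are restarted automatically (auto-heal)
#  --update-*      : roll updates one task at a time to avoid downtime
docker service create --name app \
  --replicas 2 \
  --publish 80:3000 \
  --health-cmd "curl -f http://localhost:3000/health || exit 1" \
  --health-interval 10s --health-retries 3 \
  --update-parallelism 1 --update-delay 10s \
  myrepo/app:latest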
Doing all of the above will certainly put you in a better place, but a single host is still a single point of failure which you will have to deal with at regular intervals.
Preferred: if possible, try to start with Docker Engine swarm mode, a single-master Kubernetes cluster, or minikube. These take care of all the above scenarios out of the box and will also allow you to scale up further at any time by adding more nodes, without changing much in your YAML files for Docker swarm or Kubernetes.
Ref -
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://docs.docker.com/engine/swarm/
I would use single-host k8s only if I already managed clusters running the same project that I would like to deploy to the said host. That enables you to reuse the manifests and all the automation you've created for your clusters.
If I had single-host environments only, I would probably stick to docker-compose.
If you're looking to try it out, your easiest options are probably minikube (easy to run a single-node cluster locally, but missing some features) or one of the free trial accounts for a managed Kubernetes service from one of the big cloud providers (fully featured and multi-node, but limited use before you have to pay).
I'm new to devops and kubernetes and was setting up the local development environment.
To have hurdle-free deployments, I wanted to keep the development environment as similar as possible to the deployment environment. For that I'm using minikube as a single-node cluster, and it solves a lot of my problems, but right now, as far as I know, a developer needs to do the following to see a change (a typical iteration is sketched after this list):
write the code locally,
create a container image and then push it to the container registry,
apply the kubernetes configuration with the updated container image.
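For reference, one iteration of that loop looks roughly like this (registry, image, deployment, and container names are placeholders):

docker build -t myregistry/myapp:dev-42 .          # 1. build an image from the local code
docker push myregistry/myapp:dev-42                # 2. push it to the container registry
kubectl set image deployment/my-deployment \
  my-container=myregistry/myapp:dev-42             # 3. roll the updated image out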
But the major issue with this approach is the long development cycle. Can you suggest a better approach by which I can see the changes in (near) real time?
The official Kubernetes blog lists a couple of CI/CD dev tools for building Kubernetes based applications: https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/
However, as others have mentioned, dev cycles can become a lot slower with CI/CD approaches for development. Therefore, a colleague and I started the DevSpace CLI. It lets you create a DevSpace inside Kubernetes which gives you direct terminal access and real-time file synchronization. That means you can use it with any IDE and even use hot-reloading tools such as nodemon for Node.js.
DevSpace CLI on GitHub: https://github.com/covexo/devspace
I am afraid that the first two steps are practically mandatory if you want to have a proper CI/CD environment in Kubernetes. Because of the ephemeral nature of containers, it is strongly discouraged to perform hotfixes in containers, as they could disappear at any moment.
There are tools like Helm or kubecfg that can help you with the third step:
apply the kubernetes configuration with updated container image
They allow versioning and deployment upgrades. You would still need to learn how to use them, but they have many advantages.
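For example, with Helm the third step can shrink to a single command; this is only a sketch, assuming a chart directory ./chart whose values.yaml exposes an image.tag value:

helm upgrade --install myapp ./chart --set image.tag=dev-42
helm rollback myapp        # return to the previous revision if the new one misbehaves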
Another option that comes to mind (this one without Kubernetes) would be to use development containers with Docker. In this kind of container your code lives in a volume, so it is easier to test changes. In the worst case, you would only have to restart the container.
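A minimal sketch of such a development container; the image, port, and start command are assumptions about a Node.js project, and the local source tree is mounted so edits on the host are visible inside immediately:

docker run --rm -it \
  -v "$(pwd)":/app -w /app \
  -p 3000:3000 \
  node:18 npx nodemon index.js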
Examples of development containers (by Bitnami) (https://bitnami.com/containers):
https://github.com/bitnami/bitnami-docker-express
https://github.com/bitnami/bitnami-docker-laravel
https://github.com/bitnami/bitnami-docker-rails
https://github.com/bitnami/bitnami-docker-symfony
https://github.com/bitnami/bitnami-docker-codeigniter
https://github.com/bitnami/bitnami-docker-java-play
https://github.com/bitnami/bitnami-docker-swift
https://github.com/bitnami/bitnami-docker-tomcat
https://github.com/bitnami/bitnami-docker-python
https://github.com/bitnami/bitnami-docker-node
I think using Docker / Kubernetes during development of a component is the wrong approach, exactly because of these slow development cycles. I would just develop as I'm used to (e.g. running the component in the IDE, or a local app server), and only build images and start testing in a production-like environment once I have something ready to deploy. I only use local Docker containers, or our Kubernetes development environment, for running the components that the currently developed component depends on: that might be a database, other microservices, or whatever.
On the Jenkins X project we're big fans of using DevPods for fast development - which basically means you compile/test/run your code inside a pod in the exact same Kubernetes cluster your CI/CD runs in, using the exact same tools (maven, git, kubectl, helm etc.).
This lets you use your desktop IDE of your choice while all your developers get to work using the exact same operating system, containers and images for development tools.
I do like minikube, but developers often hit issues trying to get it running (usually related to docker or virtualisation issues). Plus many developers' laptops are not big enough to run lots of services inside minikube, and it's always going to behave differently from your real cluster - plus the developers' tools and operating system are often very different from what's running in your CI/CD and cluster.
Here's a demo of how to automate your CI/CD on Kubernetes, with live development using DevPods, to show how it all works.
I haven't been involved with Kubernetes and Docker for long, but to my knowledge the first step is to learn whether and how it is possible to dockerize your application.
Kubernetes is not a tool for creating Docker images; it simply pulls pre-built images that were built with Docker.
There are quite a few useful courses on Udemy, including this one:
https://www.udemy.com/docker-and-kubernetes-the-complete-guide/
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I've got a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of docker, docker-machine, docker compose and volumes, and I have a docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
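A hedged example of that build-and-push step (the repository name is a placeholder, and the tag here is the current git commit):

docker build -t mydockerhubuser/myrailsapp:$(git rev-parse --short HEAD) .
docker login                                   # prompts for your Docker Hub credentials
docker push mydockerhubuser/myrailsapp:$(git rev-parse --short HEAD)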
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As #tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
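As a rough sketch of such a compose file (service names, credentials, and the application image are all assumptions about your app), written out from a shell step so everything stays scriptable:

cat > docker-compose.ci.yml <<'EOF'
services:
  db:                                          # "db" becomes the Postgres hostname inside the app container
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: password              # throwaway, CI-only credential
  redis:
    image: redis:7
  app:
    image: mydockerhubuser/myrailsapp:latest   # hypothetical image from the push step above
    environment:
      RAILS_ENV: test
      DATABASE_URL: postgres://postgres:password@db:5432/app_test
      REDIS_URL: redis://redis:6379/0
    depends_on: [db, redis]
EOF
docker compose -f docker-compose.ci.yml up -d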
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
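Continuing the sketch above (standard Rails commands are assumed), the prepare-and-test step could look like:

docker compose -f docker-compose.ci.yml run --rm app \
  bundle exec rails db:create db:migrate db:seed     # fresh database on every run
docker compose -f docker-compose.ci.yml run --rm app \
  bundle exec rspec                                  # then run the spec suite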
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
I'm assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Setup a local Docker registry
Set up the Docker environment. The way to bring up multiple containers is to use docker compose; this will allow you to bring up the DB, Redis, Rails etc. using the public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test; run the tests against the application that is now running (the last three steps are sketched below).
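A hedged sketch of those last three steps as shell (the local registry address, service name, port, and health endpoint are all placeholders; BUILD_NUMBER is Jenkins' build number):

# build and push the Rails image to the local registry
docker build -t localhost:5000/railsapp:${BUILD_NUMBER} .
docker push localhost:5000/railsapp:${BUILD_NUMBER}
# deploy: point the (hypothetical) swarm service at the freshly pushed image
docker service update --image localhost:5000/railsapp:${BUILD_NUMBER} railsapp_web
# test: a simple smoke check against the running application
curl -f http://localhost:3000/health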
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
How should applications be scripted/automatically deployed when in LXD containers?
For example, is the best way to deploy applications in LXD containers to use a bash script (which deploys an application)? How do you execute this bash script inside the container by running a command on the host?
Are there any tools/methods of doing this in a similar way to Docker recipes?
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
You can build your own images, for example with the services and/or code already deployed, and build specific containers from these images.
I was doing this from before LXD had Ansible support (Ansible 2.2). I prefer to use SSH instead of the lxd connection when I connect to the containers to deploy services/code; they come up with a profile where I had set up my SSH public key (to have direct SSH connections by key, no passwords).
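To answer the "run a bash script inside the container from the host" part directly, without Ansible (container and script names are placeholders):

lxc file push ./deploy.sh web/root/deploy.sh        # copy the script into the container
lxc exec web -- bash /root/deploy.sh                # execute it from the host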
Take a look at my open-source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build LXD image templates, including Apache, Tomcat, and HAProxy.
Scripts to demonstrate custom application image builds, such as Apache hosting key/value content and HAProxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network.
Code to configure HAProxy as a layer-7 proxy to route HTTP requests between boxes and containers based on URI prefix routing, based on where it previously deployed and mapped ports.
At a higher level, it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, and hook them all up to act as a cohesive whole via a layer-7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades using the layer7 ability to gracefully bleed off old connections while accepting new connections at the new layer.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From that we build a basic Apache image. From that we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we can avoid duplicating code anywhere. I have strived to keep image or template creation completely separate from deployment and port mapping. The exception is that I could not complete creation of the layer-7 routing image until we knew everything about how the other images would be mapped.
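A rough sketch of that layering with plain lxc commands (the Ubuntu version and image aliases are placeholders):

lxc launch ubuntu:22.04 build-base
lxc exec build-base -- apt-get update
lxc exec build-base -- apt-get -y upgrade
lxc stop build-base
lxc publish build-base --alias ubuntu-updated       # layer 1: updated Ubuntu

lxc launch ubuntu-updated build-apache
lxc exec build-apache -- apt-get -y install apache2
lxc stop build-apache
lxc publish build-apache --alias apache-base        # layer 2: basic Apache image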
I've been using Hashicorp Packer with the ansible provisioner using ansible_connection = lxd
Some notes on constructing a template:
When iterating through local files on your host system, you may need to use ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection still runs the action inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful for knowing which environment Ansible is actually operating in.
I'm surprised no one here has mentioned Canonical's own tool for managing LXD:
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
snap install juju --classic
juju bootstrap localhost
From there you can learn about Juju models, and deploy machines or prebaked images like Ubuntu:
juju deploy ubuntu