Automate docker swarm config rotation

I'd like to deploy a service to docker swarm that uses a config.
I understand that to update configs they need to be rotated. This means some manual docker commands are run to:
Add a new version of the config file (different name from the original) to the swarm manager.
Update the existing services that may be consuming the old config, and give them the new config file instead of the old.
This is quite a bit of manual work to juggle and so I'm looking to automate it.
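For reference, the manual rotation typically boils down to something like the following (a sketch only; the config names, target path and service name are illustrative):

# add the new version of the config under a new name
docker config create app_config_v2 ./config.json

# point the consuming service at the new config and drop the old one
docker service update \
  --config-rm app_config_v1 \
  --config-add source=app_config_v2,target=/etc/app/config.json \
  my_service

# once nothing references the old config any more it can be removed
docker config rm app_config_v1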
My idea is to have a service, independently deployed to the swarm, that monitors the other services (and their labels) and uses custom labels to work out which services are interested in which configs. This service would also monitor a physical directory containing the config files; when it detects a change to a config file in that directory, it would compute a new name for the config in the swarm and, using the service labels, automatically rotate the config for all the services that are "subscribed".
This means teams would no longer need to rotate configs by running commands on the swarm manually, or check in commits to docker compose files with new config names - only an admin would need to update the physical config file in the monitored directory, and this service would take care of the rest in terms of updating the subscribed services.
My question is: has this problem already been solved - is there something I can already use to achieve this? Please don't answer with "kubernetes" - I'm focusing on swarm only. I am also aware of a docker swarm monitor service that can notify me of new services, and I plan to use that.

Related

Kubernetes - from Minikube to production

I have created a simple PHP api application that works with a mysql database to store data. I have been experimenting with Kubernetes on my Windows 10 machine through Minikube.
I have just about got my head round the ideas involved, yet I’m not sure about how to implement this properly. So far I have used Kompose to create a set of yaml files from an existing docker-compose file. This has been half successful.
To get my application code into a pod hosting PHP, I have been using hostPath to share from my local machine. I mount to the minikube machine and share from there. I was having trouble sharing by other means. The application code is hosted in a github repo.
My questions are:
Is mounting my application code into a pod (assuming this is similar to what happens in docker) the correct way to do this? I'm not clear exactly what information is held in an image retrieved from Docker Hub, although I have read up on containers isolating the build environment from your machine.
How does this approach translate into a production environment hosted in the cloud? I see there are various storage types. I had, for example, wanted to try deploying on AWS just to see how this would work in practice.
I’m really looking for guidance to go from the tutorials found on the web working on my machine, to something that could be done for a customer hosted on the cloud. This might scale up to a more microservices style architecture over time.
The approach you are describing is mostly for development setups, where you want to mount your code into the container as a volume so you don't have to rebuild every time your code changes. This is typically done with a docker-compose file.
For production setups, you want the docker image to work on its own and to only mount volumes for data you want to persist; databases are the classic example. For this, EKS is deeply integrated into the AWS infrastructure and will create EBS volumes on demand. You don't need to provision any volume, or even care in most cases (unless you need multiple read-write volumes for scaling).
For a PHP application you really should not persist any data in the pod, because it will create other issues when you need to scale the application. Also, a good approach for managing files that need to persist is S3 (AWS simple storage service).
So generally speaking, you need a deployment per application, a service to access the pods of each application, and then an ingress object to route traffic from the internet to them.
Your application docker image is really the core. You just build it with your code inside. Make sure to pass configuration using environment variables or a configuration file so you can connect to the database.
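For example, at the plain Docker level that might look like this (a sketch, assuming a Dockerfile that copies the code in; the image name and variables are placeholders):

# bake the application code into the image, configure it at run time
docker build -t registry.example.com/php-api:1.0.0 .
docker run -d -p 8080:80 \
  -e DB_HOST=mysql.internal \
  -e DB_NAME=app \
  -e DB_USER=app \
  registry.example.com/php-api:1.0.0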
Now for kubernetes, for each component (e.g. the PHP application, MySQL) you will most likely create a deployment k8s manifest that points to the docker image and adds some configuration environment variables.
For production, you will need persistent volumes. On AWS you can simply use EBS-backed volumes.
To get traffic from Internet to your PHP application, you will need to add one or more k8s components:
A K8s Service manifest that exposes your PHP deployment/pods on a stable address. If you only have one or very few services, you can use the LoadBalancer type, which on a cloud like AWS will create an ALB/ELB (you might need to add an annotation to your service).
An ingress, backed by an ingress controller which is just a reverse proxy (Contour, nginx, Traefik). In a cloud environment it will map to an ALB/ELB. The advantage of this is that you can have a single ALB for all your services, i.e. save money. You can also configure path routing and TLS termination in one place.
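As a rough illustration of those pieces (names, image and hostname are placeholders, and kubectl create ingress needs a reasonably recent kubectl):

# a Deployment pointing at the image, configured via environment variables
kubectl create deployment php-api --image=registry.example.com/php-api:1.0.0
kubectl set env deployment/php-api DB_HOST=mysql DB_NAME=app

# a Service giving the pods a stable in-cluster address
kubectl expose deployment php-api --port=80 --target-port=8080

# an Ingress routing external traffic to that Service
kubectl create ingress php-api --rule="api.example.com/*=php-api:80"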

Self-updating docker stack

I have a docker stack deployed with 20+ services which comprise my application. I would like to know whether there is a way to update this stack with the latest changes to the software from within one of the containers running as part of the stack.
The approach I have tried:
In one of the containers for a service, mounted the docker socket and the /usr/bin/docker binary, and downloaded the latest compose file from the server.
Ran a script which downloads the latest images.
Initiated a docker stack deploy with the new compose file.
Everything works fine this way, but if the service running this update process itself has an update, and that docker stack deploy tries to recreate this service before any other service in the stack, then the stack update fails.
Any suggestion or alternative approaches for this?
There is no out-of-the-box solution for docker swarm mode (something like Watchtower for standalone docker). I think you already found the best solution for doing this automatically. I would suggest you put the update container (the one that is updating the services) on an ignore list. Then, on one of your manager nodes, create a cron job that updates that one container. I know this is not a perfect solution, but it should work.
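A rough sketch of that layout, assuming the updater runs as a service called mystack_updater that is left out of the compose file it deploys (all names and the URL are illustrative):

# update-stack.sh, run inside the updater container with the docker socket mounted
curl -fsSL https://example.com/stack/docker-compose.yml -o /tmp/docker-compose.yml
docker stack deploy -c /tmp/docker-compose.yml --with-registry-auth mystack

# cron entry on a manager node to refresh the updater itself, outside the stack deploy
# 0 3 * * * docker service update --image registry.example.com/updater:latest mystack_updater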
The standard way to do this is to build a new Docker image that contains your new application code. Tag it (as in the docker build -t argument) with some unique version, like a source control tag or date stamp. Start a new container with the new application code, then stop and delete the old container.
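For example, in swarm terms (registry, image and service names are illustrative):

TAG=$(git rev-parse --short HEAD)        # or a date stamp: TAG=$(date +%Y%m%d%H%M)
docker build -t registry.example.com/myapp:$TAG .
docker push registry.example.com/myapp:$TAG
# roll the running service onto the new image; swarm replaces the old containers
docker service update --image registry.example.com/myapp:$TAG mystack_myapp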
As a general rule you do not upgrade the software inside a running container. Delete the old container and start a new container with the software and version you want. Also, this is generally managed by an operator, a continuous deployment system, or an orchestration system, not by the container itself. (Mounting the Docker socket into a container is a significant security exposure.)
(Imagine setting up a second copy of your cluster that works exactly the same way as your production cluster, except that it has the software you want to deploy tomorrow. You don't want your production cluster picking that up on its own until you've tested it. This scheme should give you a reproducible deployment setup so that it's easy to start that pre-production cluster, but also give you control over which specific versions are running where.)

Does it make sense to run Kubernetes on a single server?

Using Docker, I have implemented a system to deploy environments (on a single server) based on Git branches, using Traefik (*.dev.domain.com) and Docker Compose templates.
I like Kubernetes, but I've never switched to it since I'm limited to a single server for my infrastructure. I've only used it in local installations (Docker for Windows).
So, my question is: does it make sense to run a Kubernetes "cluster" (master and nodes) on a single server to orchestrate and route containers (in place of Traefik/Rancher/Docker Compose)?
This use is for development and staging only for the moment, so high availability is not a prerequisite.
Thanks.
If it is not a production environment, it doesn't matter how many nodes you are using. So yes, it should be just fine in this case. But make sure all the k8s features you will need in production are available in test/dev, to keep things similar and portable.
AFAIU,
I do not see a requirement for Kubernetes unless you need at least the points below, which you can also cover on a single host using native docker run, docker-compose, or docker engine swarm mode:
Make sure there are enough (>=2) replicas of your app on the single server and that you are load balancing across those containers.
If you want to go a bit more advanced, you should be able to scale up and down dynamically (docker swarm mode supports this out of the box; otherwise use the jwilder nginx proxy).
Your deployment should not cause downtime. Make sure at least one container is healthy at any instant while deploying.
Containers should auto-heal (restart automatically) in case your HTTP or TCP health check fails.
Doing all of the above will certainly put you in a better place, but a single host is still a single point of failure, which you will have to deal with at regular intervals.
Preferred: if possible, start with docker engine swarm mode, a single-master Kubernetes, or minikube. These take care of all of the above scenarios out of the box (see the swarm-mode sketch below) and also let you scale up later by adding more nodes without changing much in your YML files for docker swarm or Kubernetes.
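For illustration, the points above map onto plain swarm-mode flags roughly like this (a sketch rather than a definitive setup; image, port and health check command are placeholders):

# >=2 replicas behind swarm's built-in load balancing, a health check so
# failing containers are restarted, and a rolling update policy so at least
# one replica keeps serving during a deploy
docker service create --name web \
  --replicas 2 \
  --publish 80:8080 \
  --health-cmd "curl -f http://localhost:8080/health || exit 1" \
  --health-interval 10s \
  --update-parallelism 1 \
  --update-delay 10s \
  registry.example.com/myapp:1.0.0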
Ref -
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://docs.docker.com/engine/swarm/
I would use single-host k8s only if I already managed clusters running the same project that I want to deploy to said host. That lets you reuse manifests and all the automation you've created for your clusters.
If I had single-host environments only, I would probably stick to docker-compose.
If you're looking to try it out, your easiest options are probably minikube (easy to run a single-node cluster locally, but missing some features) or one of the free trial accounts for a managed Kubernetes service from the big cloud providers (fully featured and multi-node, but limited use before you have to pay).

How docker containers can communicate in PCF

I have multiple docker containers. Locally they talk to each other using a network defined in compose.yml. I have pushed my containers to PCF, only I don't know how to make them talk to each other. Can anyone help me out?
My understanding is that you would utilize Cloud Foundry's container to container networking to do this.
https://docs.cloudfoundry.org/concepts/understand-cf-networking.html
By default, no connections are allowed between containers but you can use the cf cli to allow connections on specific ports between your applications. Your applications just need to be configured to start and listen on the ports that you allow.
While it's not docker specific, there's a good example here.
https://github.com/cloudfoundry/cf-networking-examples/blob/master/docs/c2c-no-service-discovery.md
Using docker should be minimally different. You'll obviously need to create your own docker images and make sure that they listen on the correct ports; otherwise it's just an adjustment to how you push the application (i.e. pass the docker image name to cf push). The cf add-network-policy commands should be the same.
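For example, assuming two apps called frontend and backend pushed from Docker images (names, images and port are illustrative; the exact add-network-policy syntax differs slightly between cf CLI v6 and v7):

cf push backend --docker-image myorg/backend:1.0.0
cf push frontend --docker-image myorg/frontend:1.0.0

# allow frontend to reach backend directly over the container network (cf CLI v6 syntax)
cf add-network-policy frontend --destination-app backend --protocol tcp --port 8080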
Hope that helps!
** UPDATE **
If you are looking for docker-compose-like behavior, i.e. run one command and deploy multiple apps, you can achieve this with cf push and a manifest.yml file.
The manifest.yml file allows you to define multiple applications. Thus you can use it to deploy a series of applications that work together, like you often see with docker-compose.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html
You have quite a bit of flexibility with manifest.yml, you can deploy buildpack based apps or docker image based apps. You can configure routes, bound services, health checks, memory/disk quotas, and if you're on a new enough version, even processes and sidecars. That said, it can't do 100% of what you can with the cf cli, for example you cannot control the above mentioned network policy using a manifest.yml. If you need to control something not exposed through manifest.yml, the other option would be to script the deployment.
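A minimal sketch of that idea for two Docker-image-based apps (names and images are placeholders):

# write a manifest describing both applications, then push them together
cat > manifest.yml <<'EOF'
applications:
- name: backend
  docker:
    image: myorg/backend:1.0.0
  memory: 256M
- name: frontend
  docker:
    image: myorg/frontend:1.0.0
  memory: 256M
EOF

cf push    # deploys every app listed in manifest.yml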

How to automate application deployment when using LXD containers?

How should applications be scripted/automatically deployed when in LXD containers?
For example, is the best way to deploy applications in LXD containers to use a bash script (which deploys an application)? How do you execute this bash script inside the container by running a command on the host?
Are there any tools/methods of doing this in a similar way to Docker recipes?
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
You can also build your own images, for example with the services and/or code already deployed, and create specific containers from those images.
I was doing this from before LXD had Ansible support (Ansible 2.2). I prefer to use ssh instead of the lxd connection when I connect to the containers to deploy services/code. The containers come with a profile where I had set up my ssh public key (to have direct ssh connection by keys, no passwords).
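If all you need is the simpler case from the question, running a deploy script inside a container from the host, plain lxc commands are enough (container and file names are illustrative):

# copy the script into the container and execute it from the host
lxc file push ./deploy.sh web01/root/deploy.sh
lxc exec web01 -- bash /root/deploy.sh

# or drive the same thing with Ansible over its lxd connection plugin
# (inventory hosts just set ansible_connection: lxd)
ansible-playbook -i inventory.yml deploy.yml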
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build lxd image templates including Apache, tomcat, haproxy.
Scripts to demonstrate custom application image builds such as Apache hosting and key/value content and haproxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network
Code to configure haproxy as a layer 7 proxy to route http requests between boxes and containers based on uri prefix routing, using where it previously deployed and mapped ports.
At the higher level it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, and hook them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades using the layer7 ability to gracefully bleed off old connections while accepting new connections at the new layer.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From that we build a basic Apache image. From that we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use the common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we can avoid having duplicate code in any location. I have strived to keep image or template creation completely separate from deployment and port mapping. The exception is that I could not complete creation of the layer 7 routing image until we knew everything about how other images would be mapped.
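That layering idea can be sketched with plain lxc commands (this is only an illustration of the approach, not the project's actual scripts; image aliases and package choices are placeholders):

# start from a stock Ubuntu image, update it, and publish it as a new base image
lxc launch ubuntu:22.04 build-base
lxc exec build-base -- bash -c "apt-get update && apt-get -y upgrade"
lxc stop build-base
lxc publish build-base --alias ubuntu-updated

# build the next layer (e.g. Apache) on top of the published base
lxc launch ubuntu-updated build-apache
lxc exec build-apache -- apt-get install -y apache2
lxc stop build-apache
lxc publish build-apache --alias apache-base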
I've been using Hashicorp Packer with the ansible provisioner using ansible_connection = lxd
Some notes here for constructing a template
When iterating through local files on your host system you may need to use ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection still runs the action inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful to know which local environment ansible is actually operating in.
I'm surprised no one here has mentioned Canonical's own tool for managing LXD.
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
snap install juju --classic
juju bootstrap localhost
From there you can learn about juju models and deploy machines or prebaked images like Ubuntu:
juju deploy ubuntu
