Docker stack to Kubernetes

I am quite familiar with Docker, but I have zero experience with Kubernetes.
I have a multi-container Docker stack that I can deploy to a Docker Swarm cluster. I was wondering if Kubernetes has something similar? I don't need replicas, auto-scaling and so on... I just need a group of containers working together, with their dependencies and networks defined in a single text file.
I have searched and found a tool called kompose that translates a Docker stack file to Kubernetes syntax... However, it looks like the output is a list of *.yaml files instead of a single file.
So I came to the conclusion that Kubernetes does not have this exact functionality. Am I missing something?

You can copy the content of the generated files into one file and separate them with ---.
For instance, if you've got 3 Kubernetes files: service.yml, deployment.yml and configmap.yml, your file should look something like:
# content of service.yml
....
---
# content of deployment.yml
....
---
# content of configmap.yml
....
You can then use the same kubectl commands (create, apply, delete, and so on) against this combined spec file.
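For example, assuming you saved the combined manifest as stack.yml (the filename is only for illustration):

kubectl apply -f stack.yml     # create or update every object defined in the file
kubectl get -f stack.yml       # list the objects defined in the file
kubectl delete -f stack.yml    # tear the whole stack down again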

A single Docker stack YAML definition is equivalent to a collection of Kubernetes Deployments and Services. Each service in a Docker stack definition is also reachable by the others via a default overlay network that Docker creates automatically at deploy time. To simulate this in Kubernetes you would need to define multiple Deployments/Services within the same file so that they could be created and deleted as a single "stack".
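As a rough sketch of what one such service might look like in a combined file (the name, image and port are placeholders, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80

Other Pods in the cluster can then reach this service by its name (web here), much like service names resolve on the stack's overlay network in Swarm.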

Related

Kubernetes: Overriding environment variables

I would like to be able to test my Docker application locally before sending it to the cluster, and I want to use minikube for this. Instead of having multiple Kubernetes config files that define env variables for the cloud environment and for my local machine, I would like to override some of the env variables when running locally. I can see that you can do something like that with docker-compose:
docker-compose -f docker-compose.yml -f docker-compose.e2e.yml up
The second file would only have the overriding values. Yes, there are two files, but I find it clean.
Is there a way to do something similar with Kubernetes/minikube? Or even something better?
I think you are asking how to pass different environment values into your Pods depending on which environment they are deployed to. One pattern for achieving this is to deploy with Helm. You then use templated versions of your Kubernetes descriptors for deployment, plus a values.yaml file that contains the values to be injected into the descriptors. You can switch and overlay values.yaml files at install time to control which values are injected for a given installation.
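A minimal sketch of that pattern, assuming Helm 3 and made-up file names and keys:

# templates/deployment.yaml (excerpt): the env value is templated
env:
- name: API_URL
  value: {{ .Values.apiUrl | quote }}

# values.yaml: defaults, e.g. for the cloud cluster
apiUrl: https://api.example.com

# values-local.yaml: only the values overridden for minikube
apiUrl: http://localhost:8080

helm install myapp ./chart -f values-local.yaml   # local overrides win over values.yaml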
If you are asking how to switch a kubectl command between local and cloud without having to keep swapping your kubeconfig file, you can add both contexts to your kubeconfig and use kubectl config use-context to switch between them, as @Ijaz Khan suggests.
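Switching contexts looks roughly like this (the context names depend on what is in your kubeconfig):

kubectl config get-contexts            # list the contexts you have configured
kubectl config use-context minikube    # point kubectl at the local cluster
kubectl config use-context my-cloud    # point it back at the cloud cluster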

How do I finalize my Docker setup, and what is it actually called?

I am pretty new to Docker. After reading specifically what I needed, I figured out how to create a pretty nice Docker setup: one in which I can start up multiple systems using a single docker-compose.yml file.
I am currently using this for testing specific PHP code on different PHP and MySQL versions. The file structure looks something like this:
./mysql55/Dockerfile
./mysql56/Dockerfile
./mysql57/Dockerfile
./php53/Dockerfile
./php54/Dockerfile
./php56/Dockerfile
./php70/Dockerfile
./php71/Dockerfile
./php72/Dockerfile
./web (shared folder with test files available on all php machines)
./master_web (web interface to send test request to all possible versions using one call)
./docker-compose.yml
In the docker-compose file I set up the different containers, most referring to the local Dockerfiles and some referring to online image names. When I run docker-compose up, all containers start as expected in the configured network configuration and I'm able to use it as desired.
I would first of all like to know what this setup is called. Is this a "docker swarm", or is such a setup called something else?
Secondly, I'd like to make one "compiled/combined" artifact (image, container, swarm, engine, machine, or however it is called) out of this, which I can save without having to depend on external sources again. Of course the docker-compose.yml file will work as long as all the referenced external sources are still available, but I'd like to publish my fully configured setup as-is. How do I do that?
You can publish built images with a Docker registry. You can set up your own or use a third-party service.
After that, you need to prefix your image names with your registry's IP/DNS in docker-compose.yml. This way, you can deploy it anywhere docker-compose is installed (docker-compose itself can be run as a Docker container too); you just need to copy your docker-compose.yml file there.
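As an illustrative sketch (the registry address, project and tags are placeholders):

# docker-compose.yml (excerpt)
services:
  php72:
    build: ./php72
    image: registry.example.com:5000/phptests/php72:latest

# build locally and push every service that has an image: name
docker-compose build
docker-compose push

On the target machine you then only need the docker-compose.yml file and a docker-compose pull followed by docker-compose up.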
docker-machine is a tool for deploying to multiple machines, as is Docker Swarm.

Doubts about Docker

I'm new to Docker and have some doubts.
In a dev environment (not a server), is it better to use just one container with Apache, PHP and MySQL, for example, built from a single Dockerfile, or is it better to use one container for each service and use docker-compose to tie them together?
I have set this up here with docker-compose, but I don't know if it is the best way; it seems like unnecessary complexity to me, but I'm a newbie.
I have the following situation: I work with Magento, and it is a common need to have a clean installation for isolating and testing modules. So I want to create my Magento 2 Docker environment with just a clean Magento install and an easy way to put my module files inside for testing, where on shutdown the environment goes back to a clean Magento 2 installation without my files. What is the best way to get this environment?
Thanks in advance.
I'd certainly recommend using a Docker stack (defined in a docker-compose file) rather than trying to spin up a whole application stack inside a single container. You should generally have one service per container.
I believe what you are looking for in the second part of your question is a deployment orchestration tool. Docker does not replace deployment orchestration, but you can run shell scripts that do application setup in the Dockerfiles that build the containers you use in your stack.
As for access to files inside your containers, I'd look into docker volumes.
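A rough sketch of such a compose file, with placeholder images, paths and credentials (the Magento module path in particular is only illustrative):

version: "3"
services:
  web:
    image: php:7.2-apache
    ports:
      - "8080:80"
    volumes:
      # only your module is bind-mounted; the rest of Magento lives in the container
      - ./my-module:/var/www/html/app/code/MyVendor/MyModule
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example

Because only the bind-mounted directory comes from your host, running docker-compose down and then docker-compose up again (without named volumes for the Magento code) brings you back to a clean installation.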

How does Spread know to update images in Kubernetes?

I want to set up GitLab CD to Kubernetes, and I read this article.
However, I am wondering: how would my K8s cluster be updated with my latest Docker images?
For example, in my .gitlab-ci.yaml file I will have a build, test, and release stage that ultimately updates my cloud Docker images. By setting up the deploy stage as instructed in the article:
deploy:
  stage: deploy
  image: redspreadapps/gitlabci
  script:
    - null-script
would Spread then "magically" know to update my K8s cluster (perhaps by re-pulling all images and performing rolling updates) as long as I set up my directory structure of K8s resources as specified by Spread?
I don't have a direct answer, but from looking at the Spread project it seems pretty dead: the last commit was in August last year, with a bunch of open issues and no support for any of the newer Kubernetes constructs (e.g. Deployments).
The typical way to update images in Kubernetes nowadays is to run a command like kubectl set image deployment/<deployment-name> <container>=<image>. This in turn performs a rolling update on the Deployment, shutting down one Pod at a time and replacing it with the new image. See this doc.
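For example, with placeholder deployment, container and image names:

kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2
kubectl rollout status deployment/my-app   # watch the rolling update finish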
Since Spread predates that, I assume it must perform a rolling update on a replication controller with a command like kubectl rolling-update NAME -f FILE, picking up the new image from the configuration file in its project folder (assuming it changed). See this doc.

Docker 1.12 swarm: Does Swarm have a configuration store like Kubernetes ConfigMap?

As per the Kubernetes docs: http://kubernetes.io/docs/user-guide/configmap/
Kubernetes has a ConfigMap API resource which holds key-value pairs
of configuration data that can be consumed in pods.
This looks like a very useful feature, as many containers require configuration via some combination of config files and environment variables.
Is there a similar feature in Docker 1.12 swarm mode?
Sadly, Docker (even in 1.12 with swarm mode) does not support the variety of use cases that you could solve with ConfigMaps (and there are no Secrets either).
The only things supported are external env files in both Docker (https://docs.docker.com/engine/reference/commandline/run/#/set-environment-variables-e-env-env-file) and Compose (https://docs.docker.com/compose/compose-file/#/env-file).
These are good for keeping configuration out of the image, but they rely on environment variables, so you cannot just externalize your whole config file (e.g. for nginx or Prometheus). You also cannot update the env file separately from the deployment/service, which is possible with K8s.
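For comparison, in Kubernetes a whole config file can live in a ConfigMap and be mounted into a Pod as a file, roughly like this (name and content are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
      }
    }

The ConfigMap can then be referenced as a configMap volume in the Pod spec and updated independently of the Deployment that uses it.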
Workaround: you could build your configuration files in a way that fills them in from those env-file variables at container start.
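One way such a workaround might look, using envsubst from GNU gettext in a custom entrypoint (the paths and file names are made up):

#!/bin/sh
# entrypoint.sh: render the real config from a template using the env-file variables
envsubst < /templates/app.conf.tpl > /etc/app/app.conf
exec "$@"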
I'd guess that sooner or later Docker will add this functionality. Currently Swarm is still in its early days, so for advanced use cases you'd need to either wait (mid to long term all platforms will have similar features), build your own hack/workaround, or go with K8s, which has that stuff integrated.
Sidenote: for secrets storage I would recommend HashiCorp's Vault. However, for configuration it might not be the right tool.
