I have been digging through the Kubernetes documentation for hours. I understand the core design and the notion of services, controllers, pods, etc.
What I don't understand, however, is the process by which I can declaratively configure the cluster. That is, a way for me to write a config file (or a set thereof) to define the makeup and scaling options of the cloud deployment. I want to be able to declare which containers I want in which pods, how they will communicate, how they will scale, etc. without running a ton of CLI commands.
Is there docker-compose functionality for Kubernetes?
I want my application to be defined in git, under version control, without relying on manual CLI interactions.
Is this possible to do in a concise way? Is there a reference that is clearer than the official documentation?
If you're still looking, maybe this tool can help: https://github.com/kelseyhightower/compose2kube
You can create a compose file:
# sample compose file with 3 services
web:
  image: nginx
  ports:
    - "80"
    - "443"
database:
  image: postgres
  ports:
    - "5432"
cache:
  image: memcached
  ports:
    - "11211"
Then use the tool to convert it to Kubernetes objects:
compose2kube -compose-file docker-compose.yml -output-dir output
Which will create these files:
output/cache-rc.yaml
output/database-rc.yaml
output/web-rc.yaml
Then you can use kubectl to apply them to Kubernetes.
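For example, assuming kubectl is already pointed at your cluster, the whole output directory can be created at once:
kubectl create -f output/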
If you have existing Docker Compose files, you may take a look at the Kompose project.
kompose is a tool to help users who are familiar with docker-compose move to Kubernetes. kompose takes a Docker Compose file and translates it into Kubernetes resources.
kompose is a convenience tool to go from local Docker development to managing your application with Kubernetes. Transformation of the Docker Compose format to Kubernetes resources manifest may not be exact, but it helps tremendously when first deploying an application on Kubernetes.
To deploy the provided docker-compose.yaml file (or your own) directly, run:
kompose up
To convert docker-compose.yaml into Kubernetes Deployments and Services with one simple command:
$ kompose convert -f docker-compose.yaml
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
For more info, check: http://kompose.io/
Docker has officially announced docker-compose functionality for Kubernetes clusters. So from now on you can compose your Kubernetes resources in a file and apply them using that single file.
First we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the Stack resource to the Kubernetes API. Check the full documentation to install the Docker Compose controller:
https://github.com/docker/compose-on-kubernetes
Let's write a simple compose yaml file:
version: "3.7"
services:
web:
image: dockerdemos/lab-web
ports:
- "33000:80"
words:
image: dockerdemos/lab-words
deploy:
replicas: 3
endpoint_mode: dnsrr
db:
image: dockerdemos/lab-db
We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:
$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running...
db: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running
We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:
$ kubectl get deployments
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/db      1         1         1            1           57s
deployment.apps/web     1         1         1            1           57s
deployment.apps/words   3         3         3            3           57s
It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:
$ kubectl get stack
NAME    STATUS    PUBLISHED PORTS   PODS   AGE
words   Running   33000             5/5    4m
Kubernetes certainly has its own YAML format (as shown in "Deploying Applications").
But as "Docker Clustering Tools Compared: Kubernetes vs Docker Swarm" points out, it was not written (just) for Docker, and it has its own system.
You could use docker-compose to start Kubernetes itself though, as shown in "vyshane/kid": that wraps some of the kubectl CLI commands in scripts (which can be versioned).
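In that spirit, a minimal sketch of such a wrapper script (the script name, manifest directory, and deployment name are all hypothetical):
#!/bin/sh
# deploy.sh: apply the versioned manifests, then wait for the rollout
set -e
kubectl apply -f manifests/
kubectl rollout status deployment/web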
Related
I've built an application with Docker using a docker-compose.yml file, and I'm now trying to convert it into a deployment file for K8s.
I tried to use the kompose convert command, but it seems to behave oddly.
Here is my docker-compose.yml:
version: "3"
services:
worker:
build:
dockerfile: ./worker/Dockerfile
container_name: container_worker
environment:
- PYTHONUNBUFFERED=1
volumes:
- ./api:/app/
- ./worker:/app2/
api:
build:
dockerfile: ./api/Dockerfile
container_name: container_api
volumes:
- ./api:/app/
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "8050:8050"
depends_on:
- worker
Here is the output of the kompose convert command:
[root@user-cgb4-01-01 vm-tracer]# kompose convert
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/var/run/docker.sock" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/worker" isn't supported - ignoring path on the host
INFO Kubernetes file "api-service.yaml" created
INFO Kubernetes file "api-deployment.yaml" created
INFO Kubernetes file "api-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "api-claim1-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "worker-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-claim1-persistentvolumeclaim.yaml" created
And it created 7 YAML files for me. But I expected to have only one deployment file. Also, I don't understand these warnings that I get. Is there a problem with my volumes?
Maybe it would be easier to convert the docker-compose.yml to a deployment.yml manually?
Thank you,
I'd recommend using Kompose as a starting point or inspiration more than an end-to-end solution. It does have some real limitations and it's hard to correct those without understanding Kubernetes's deployment model.
I would clean up your docker-compose.yml file before you start. You have volumes: that inject your source code into the containers, presumably hiding the application code in the image. This setup mostly doesn't work in Kubernetes (the cluster cannot reach back to your local system) and you need to delete these volumes: mounts. Doing that would get rid of both the Kompose warnings about unsupported host-path mounts and the PersistentVolumeClaim objects.
You also do not normally need to specify container_name: or several other networking-related options. Kubernetes does not support multiple networks and so if you have any networks: settings they will be ignored, but most practical Compose files don't need them either. The obsolete links: and expose: options, if you have them, can also usually be safely deleted with no consequences.
version: "3.8"
services:
worker:
build:
dockerfile: ./worker/Dockerfile
environment:
- PYTHONUNBUFFERED=1
api:
build:
dockerfile: ./api/Dockerfile
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "8050:8050"
depends_on: # won't have an effect in Kubernetes,
- worker # but still good Docker Compose practice
The bind-mount of the Docker socket is a larger problem. This socket usually doesn't exist in Kubernetes, and if it does exist, it's frequently inaccessible (there are major security concerns around having it available, and it would allow you to launch unmanaged containers as well as root the node). If you need to dynamically launch containers, you'd need to use the Kubernetes API to do that instead (look at creating one-off Jobs). For many practical purposes, having a long-running worker container attached to a queueing system like RabbitMQ is a better approach. Kompose can't fix this architectural problem, though; you will have to modify your code.
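For reference, a minimal sketch of such a one-off Job (the name and image here are hypothetical, not taken from your setup):
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task           # hypothetical name
spec:
  template:
    spec:
      containers:
        - name: task
          image: my-task-image # hypothetical image
      restartPolicy: Never     # run to completion, do not restart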
When all of this is done, I'd expect Kompose to create four files, with one Kubernetes YAML manifest in each: two Deployments, and two matching Services. Each of your Docker Compose services: would get translated into a separate Kubernetes Deployment, and you need a paired Kubernetes Service to be able to connect to it (even from within the cluster). There are a number of related objects that are often useful (ServiceAccounts, PodDisruptionBudgets, HorizontalPodAutoscalers) and a typical Kubernetes practice is to put each in its own file.
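To illustrate that pairing, a minimal sketch of the Service that would sit in front of the api Deployment (the name and labels are assumptions, not Kompose's exact output):
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api           # must match the Deployment's pod labels
  ports:
    - port: 8050
      targetPort: 8050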
I guess this is fine:
All your Docker exposed ports are now Kubernetes Services.
Your volumes need a PV and PVC; those are generated.
There is a Deployment YAML for each of your api and worker services.
This is how it usually should be.
However, if deploying all of these files is confusing, try:
kubectl apply -f mymanifests/ - this will deploy the whole directory at once.
Or, if you just want a single file, you can concatenate them, separated by --- (the YAML document separator), which lets you keep multiple manifests in one file. Something like:
apiVersion: ...   # deployment manifest
---
apiVersion: ...   # service manifest
...and so on
There are a lot of applications which I launch on my workstation using docker-compose up.
Reasons:
They don't have an installer, or I don't want to use it
They require a dedicated storage engine to be present
They require a build process step
They are created by me and I want them to be easily launched on any workstation
etc.
So I usually end up with the following file structure:
myAppDir
- docker-compose.yml
- Dockerfile (not always)
- someConfigFile
And my docker-compose.yml is something like this:
(It can contain 2 or 3 services, but I provide the simplest form that I use)
version: '3.7'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    volumes:
      - ./mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306
Then when I need to launch the application I just perform:
docker-compose up # (or with --build)
Recently I tried to add:
deploy:
  resources:
    limits:
      cpus: '0.50'
      memory: 200M
and got a message:
Some services (mysql) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
So I tried:
docker stack deploy mystack --compose-file docker-compose.yml
and got message:
Ignoring unsupported options: restart
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
This seems more complex than docker-compose up.
I saw that I can use --compatibility flag e.g.
docker-compose --compatibility up
But the word compatibility means to me that I should soon switch to a new way of launching my apps locally.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
If you want to specify memory limits and similar constraints for local containers, you need to use a version 2 Compose file. This is called out in the documentation for the deploy: resources: section. docker/compose#4513 has some reasonably clear statements that Compose file version 2 is more targeted at local setups and version 3 more at Swarm installations, and that Docker intends to keep supporting both file versions.
Docker has put many options and functions specific to their Swarm cluster-installation mode into the core product. Anything that mentions a "stack", for example, is specific to a Swarm setup. One consequence of Swarm and plain-Docker things being combined together is that the deploy: Docker Compose options only have an effect in Swarm mode. The documentation for the deploy: key notes:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
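For instance, a minimal sketch of the v2 equivalent of the deploy: limits above, reusing the mysql service from the question (Compose file format 2.2+ supports cpus: and mem_limit: directly on a service):
version: '2.4'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    cpus: 0.5          # equivalent of deploy.resources.limits.cpus
    mem_limit: 200m    # equivalent of deploy.resources.limits.memory
    ports:
      - 3306:3306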
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
Docker Compose v3 is meant to be used with Docker Swarm deployments, so you need to run Docker in Swarm mode; otherwise, just keep using v2 and its simpler interface for localhost development.
For example, restart is ignored because that responsibility now belongs to Docker Swarm, not to Docker itself.
Using the --compatibility flag essentially converts your v3 Compose file into a v2 Compose file at runtime.
So in short: use v3 if you want to run Docker in Swarm mode and take advantage of all its new features; it is roughly Docker's answer to Kubernetes.
I am working on building an automated CI/CD pipeline for a LAMP application using Docker.
I want the image to be spun up into 5 containers so that 5 different developers can work on their code. Can this be attained? I tried using replicas, but it didn't work out.
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 4
The error which I get:
#!/bin/bash -eo pipefail
docker-compose up
ERROR: The Compose file './docker-compose.yml' is invalid because:
Additional properties are not allowed ('jobs' was unexpected)
You might be seeing this error because you're using the wrong Compose file version.
Either specify a supported version (e.g. "2.2" or "3.3") and place your service
definitions under the services key, or omit the version key and place your
service definitions at the root of the file to use version 1. For more on the
Compose file format versions, see docs.docker.com/compose/compose-file
Exited with code 1
Also, can developers push, pull, and commit to git from the different containers? Will work done in one container get lost if the image is rebuilt or rerun?
What things should I actually take care of while building this pipeline?
First of all, build your image separately using a Dockerfile with docker build -t <image name>:<version/tag> ., then use the following compose file with docker stack deploy to deploy your stack.
version: '3'
services:
  web:
    image: <image name>:<version/tag>
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 4
The deploy: attribute should be inside a service because it describes the number of replicas a service must have; it is not a global attribute like services. That seems to be the only problem in your compose file, and docker-compose up complains about it when run from the pipeline.
Update
You cannot run multiple replicas with a single docker-compose command. To run multiple replicas from a compose.yml, create a swarm by executing docker swarm init on your machine.
Afterward, simply replace docker-compose up with docker stack deploy -c docker-compose.yml <stack name>; docker-compose itself simply ignores the deploy attribute.
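Concretely, with a hypothetical stack name of mystack:
docker swarm init                                   # one-time: make this node a swarm manager
docker stack deploy -c docker-compose.yml mystack   # deploy (or update) the stack
docker service ls                                   # verify the replica counts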
For details on differences between docker-compose up and docker stack deploy <stack name> refer to this article: https://nickjanetakis.com/blog/docker-tip-23-docker-compose-vs-docker-stack
I started a Flask API service on a Docker swarm cluster with 1 master and 3 worker nodes. I deployed the task using the following docker-compose file:
version: '3'
services:
  xgboost-model-api:
    image: xgboost-model-api
    ports:
      - "5000:5000"
    deploy:
      mode: global
    networks:
      - xgboost-net
networks:
  xgboost-net:
I deployed the task using the following docker swarm command:
docker stack deploy --compose-file docker-compose.yml xgboost-swarm
However, the task was started only on my master node and not on any worker node.
$ docker service ls
ID             NAME                              MODE         REPLICAS   IMAGE
pgd8cktr4foz   viz                               replicated   1/1        dockersamples/visualizer
twrpr4av4c7f   xgboost-swarm_xgboost-model-api   global       1/4        xgboost-model-api
xxrfn1w7eqw6   dockercloud-server-proxy          global       1/1        dockercloud/server-proxy
Dockerfile being used is here. Any thoughts on why this behavior occurs would be appreciated.
As stated in this thread (duplicate?):
If you are using a private registry, it's important to share the login and credentials with the worker nodes by using
docker stack deploy --with-registry-auth
---- UPDATE
From your compose file it doesn't look like you are using a private registry. Generally speaking, if containers can't start successfully on the workers, they will end up on the manager.
Some possible reasons for this are:
Can't access private registry (fix with --with-registry-auth)
Application requires some change on the host to run (like elasticSearch requires vm.max_map_count=262144)
A healthcheck fails on the other nodes because of a poorly written healthcheck
Network settings issues that prevent pulling the image
Try removing your stack and running it again. Then do docker service ps --no-trunc {serviceName}; this might show you the tasks that should have run the service on another node and why they failed.
Check out this SO thread for more troubleshooting tips.
I have been searching Google for a solution to the problem below for longer than I care to admit.
I have a docker-compose.yml file which allows me to fire up an ecosystem of 2 containers on my local machine, which is awesome. But I need to be able to deploy to Google Container Engine (GCP). To do so, I am using Kubernetes, deploying to a single node only.
In order to keep the deployment process simple, I am using kompose, which allows me to deploy my containers on Google Container Engine using my original docker-compose.yml. Which is also very cool. The issue is that, by default, Kompose will deploy each Docker service (I have 2) in separate pods, one container per pod. But I really want all containers/services to be in the same pod.
I know there are ways to deploy multiple containers in a single pod, but I am unsure if I can use Kompose to accomplish this task.
Here is my docker-compose.yml:
version: "2"
services:
server:
image: ${IMAGE_NAME}
ports:
- "3000"
command: node server.js
labels:
kompose.service.type: loadbalancer
ui:
image: ${IMAGE_NAME}
ports:
- "3001"
command: npm run ui
labels:
kompose.service.type: loadbalancer
depends_on:
- server
Thanks in advance.
The thing is, docker-compose doesn't launch them like this either. The services are completely separate. It means, for example, that you can have two containers listening on port 80, because they are independent. If you try to pack them into the same pod you will get a port conflict and end up with a mess. The scenario you want to achieve should be handled at the Dockerfile level to make any sense (although fat [supervisor-based] containers can be considered an antipattern in many cases), which in turn makes your compose file obsolete...
IMO you should embrace how things are, because it does not make sense to map a docker-compose defined stack to a single pod.
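For reference, this is what a hand-written two-container Pod would look like (the Pod name and images are placeholders); note that both containers share one network namespace, which is why their listening ports must not collide:
apiVersion: v1
kind: Pod
metadata:
  name: combined-pod           # hypothetical name
spec:
  containers:
    - name: server
      image: my-app            # placeholder image
      command: ["node", "server.js"]
      ports:
        - containerPort: 3000
    - name: ui
      image: my-app            # placeholder image
      command: ["npm", "run", "ui"]
      ports:
        - containerPort: 3001  # must differ from the server's port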