docker-compose to K8S deployment - docker

I've built an application through Docker with a docker-compose.yml file, and I'm now trying to convert it into a deployment file for K8s.
I tried to use the kompose convert command, but it seems to behave oddly.
Here is my docker-compose.yml:
version: "3"
services:
  worker:
    build:
      dockerfile: ./worker/Dockerfile
    container_name: container_worker
    environment:
      - PYTHONUNBUFFERED=1
    volumes:
      - ./api:/app/
      - ./worker:/app2/
  api:
    build:
      dockerfile: ./api/Dockerfile
    container_name: container_api
    volumes:
      - ./api:/app/
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8050:8050"
    depends_on:
      - worker
Here is the output of the kompose convert command:
[root@user-cgb4-01-01 vm-tracer]# kompose convert
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/var/run/docker.sock" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/worker" isn't supported - ignoring path on the host
INFO Kubernetes file "api-service.yaml" created
INFO Kubernetes file "api-deployment.yaml" created
INFO Kubernetes file "api-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "api-claim1-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "worker-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-claim1-persistentvolumeclaim.yaml" created
And it created 7 YAML files for me. But I was expecting to have only one deployment file. Also, I don't understand these warnings I'm getting. Is there a problem with my volumes?
Maybe it will be easier to convert the docker-compose to deployment.yml manually?
Thank you,

I'd recommend using Kompose as a starting point or inspiration more than an end-to-end solution. It does have some real limitations and it's hard to correct those without understanding Kubernetes's deployment model.
I would clean up your docker-compose.yml file before you start. You have volumes: that inject your source code into the containers, presumably hiding the application code in the image. This setup mostly doesn't work in Kubernetes (the cluster cannot reach back to your local system) and you need to delete these volumes: mounts. Doing that would get rid of both the Kompose warnings about unsupported host-path mounts and the PersistentVolumeClaim objects.
You also do not normally need to specify container_name: or several other networking-related options. Kubernetes does not support multiple networks and so if you have any networks: settings they will be ignored, but most practical Compose files don't need them either. The obsolete links: and expose: options, if you have them, can also usually be safely deleted with no consequences.
version: "3.8"
services:
  worker:
    build:
      dockerfile: ./worker/Dockerfile
    environment:
      - PYTHONUNBUFFERED=1
  api:
    build:
      dockerfile: ./api/Dockerfile
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8050:8050"
    depends_on: # won't have an effect in Kubernetes,
      - worker  # but still good Docker Compose practice
The bind-mount of the Docker socket is a larger problem. This socket usually doesn't exist in Kubernetes, and if it does exist, it's frequently inaccessible (there are major security concerns around having it available, and it would allow you to launch unmanaged containers as well as root the node). If you need to dynamically launch containers, you'd need to use the Kubernetes API to do that instead (look at creating one-off Jobs). For many practical purposes, having a long-running worker container attached to a queueing system like RabbitMQ is a better approach. Kompose can't fix this architectural problem, though; you will have to modify your code.
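For illustration only, here is a minimal sketch of the kind of one-off Job the api service could create through the Kubernetes API instead of talking to the Docker socket; the name, image, and command are placeholders, not taken from your setup:
apiVersion: batch/v1
kind: Job
metadata:
  name: worker-task-example
spec:
  template:
    spec:
      containers:
        - name: task
          image: registry.example.com/worker:latest   # placeholder image
          command: ["python", "run_task.py"]          # placeholder command
      restartPolicy: Never
  backoffLimit: 2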
When all of this is done, I'd expect Kompose to create four files, with one Kubernetes YAML manifest in each: two Deployments, and two matching Services. Each of your Docker Compose services: would get translated into a separate Kubernetes Deployment, and you need a paired Kubernetes Service to be able to connect to it (even from within the cluster). There are a number of related objects that are often useful (ServiceAccounts, PodDisruptionBudgets, HorizontalPodAutoscalers) and a typical Kubernetes practice is to put each in its own file.
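As a rough, hand-written sketch of what the api pair might look like (image name and labels are illustrative, not what Kompose would emit verbatim):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:latest   # placeholder image
          ports:
            - containerPort: 8050
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8050
      targetPort: 8050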

I guess this is fine:
All of your Docker exposed ports are now Kubernetes Services.
Your volumes need a PV and a PVC, and those are generated for you.
There is a Deployment YAML for each of your api and worker services.
This is how it usually should be.
However, if deploying these files is confusing, try:
kubectl apply -f mymanifests/ - this will deploy all the manifests in the directory at once.
Or, if you just want a single file, you can concatenate all of them, separated by a --- line one after the other; --- separates multiple manifests while still keeping them in a single file. Something like:
apiVersion: ...   # deployment manifest
...
---
apiVersion: ...   # service manifest
...and so on.

Related

How to convert a docker-compose file to Kubernetes YAML file?

How can I convert the following docker-compose code to a Kubernetes YAML file?
version: '3.8'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
You will need several components: first, a Service that will receive the HTTP requests; then a Deployment to create the actual Pod; and, if you need a volume, a PersistentVolume (with a matching PersistentVolumeClaim) as well. In my repository you can find a docker-compose YAML converted to K8s. Of course, you will probably need to change some of the data.
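As a minimal sketch (assuming the init-mongo.js script is first loaded into a ConfigMap named mongo-init, e.g. with kubectl create configmap mongo-init --from-file=init-mongo.js; all names are illustrative), it could look roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: init-script
              mountPath: /docker-entrypoint-initdb.d
              readOnly: true
      volumes:
        - name: init-script
          configMap:
            name: mongo-init
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017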
The kompose project provides a tool that does exactly this: it converts a docker-compose file into a set of Kubernetes YAML files. It might be worth a look for you.
For simple docker-compose files like this it would work fine. But for more complicated ones, it might over-complicate things, so YMMV and all.

Docker change location of named volumes

I have a problem that I just can't understand. I am using Docker to run certain containers, but I have problems with at least one volume, and I'd like to ask if anybody can give me a hint about what I am doing wrong. I am using nifi-ingestion as the example, but it affects other container volumes as well.
First, let's talk about the versions I use:
Docker version 19.03.8, build afacb8b7f0
docker-compose version 1.27.4, build 40524192
Ubuntu 20.04.1 LTS
Now, let's look at the volume in my working docker-compose file:
In my container, it is configured as follows:
volumes:
  - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom of my docker-compose file it is defined as a normal named volume:
volumes:
  nifi-ingestion-conf:
This is a snippet from the docker-compose file that I'd like to get working.
In my container, the volume is configured in this case as follows (with STORAGE_VOLUME_PATH defined as /mnt/storage/docker_data):
volumes:
  - ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom I guess there is something I need to do, but I don't know what. In this case it is the same as in the working docker-compose file:
volumes:
  nifi-ingestion-conf:
So, what's my problem?
I have two docker-compose files. One uses normal named volumes, and one uses volumes under my extra mount path. When I run the containers, the volumes seem to behave differently: files are written in the first variant, but not in the second. The mount paths are created in the second variant, so there is nothing wrong with the environment variables in my .env file.
Hint: /mnt/storage/docker_data is an NFS mount, but my machine has full privileges on that share.
Here is my fstab-entry to mount that volume (maybe I have to set other options):
10.1.0.2:/docker/data /mnt/storage/docker_data nfs auto,rw
Bigger snippets
Here is a bigger snippet of the docker-compose file (I had to cut it down and remove confidential data; my problem is not that it does not work, only that the volume acts differently. Everything for this one volume is in the code.):
version: "3"
services:
  nifi-ingestion:
    image: my image on my personal repo
    container_name: nifi-ingestion
    ports:
      - 0000
    labels:
      - app-specific
    volumes:
      - ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
      #working: - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
    environment:
      - app-specific
    networks:
      - cnetwork

volumes:
  nifi-ingestion-conf:

networks:
  cnetwork:
    external: false
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
And here is the .env file (only the value we are using):
STORAGE_VOLUME_PATH=/mnt/storage/docker_data
If I understand your question correctly, you wonder why the following docker-compose snippet works for you:
version: "3"
services:
  nifi-ingestion:
    volumes:
      - nifi-ingestion-conf:/opt/nifi/nifi-current/conf

volumes:
  nifi-ingestion-conf:
and the following docker-compose snippet does not:
version: "3"
services:
  nifi-ingestion:
    volumes:
      - ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
What makes them different is how you use volumes. You need to differentiate between mounting host paths and mounting named volumes:
You can mount a host path as part of a definition for a single service, and there is no need to define it in the top level volumes key.
But, if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key.
Named volumes are managed by Docker.
If you start a container with a volume that does not yet exist, Docker creates the volume for you.
Also, I would advise you to read this answer.
Update:
You might also want to read about Docker NFS volumes.
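For example, a rough sketch of how the named volume could be declared with the local driver's NFS options, so that Docker manages the NFS mount itself (the server address and export path are taken from your fstab entry; the sub-directory name is illustrative):
volumes:
  nifi-ingestion-conf:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.1.0.2,rw
      device: ":/docker/data/nifi-ingestion-conf"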

docker service with compose file single node and local image

So I need rolling updates with Docker on my single-node server. Until now I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading around the web, Docker Swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
  postgres:
    env_file:
      - ".env"
    image: "postgres:11.0-alpine"
    volumes:
      - "/var/run/postgresql:/var/run/postgresql"
  app:
    build: "."
    working_dir: /app
    depends_on:
      - "postgres"
    env_file:
      - ".env"
    command: iex -S mix phx.server
    volumes:
      - ".:/app"

volumes:
  postgres: {}
  static:
    driver_opts:
      device: "tmpfs"
      type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there for it to work. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Images are heavy to carry around, I don't want my image to be public, and after all, the image is already here, so why should I move it over the internet?)
How can I set up my docker service using local image and config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file so it would not be DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers; it is sad that we have to dive into DevOps tools to achieve such a common feature as rolling updates. This whole Swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. This image is only visible to your single node, which is fine in your use case, but I wouldn't recommend it in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
  postgres:
    env_file:
      - ".env"
    image: "postgres:11.0-alpine"
    volumes:
      - "/var/run/postgresql:/var/run/postgresql"
  app:
    image: "my-app:latest"
    depends_on:
      - "postgres"
    env_file:
      - ".env"
    volumes:
      - ".:/app"

volumes:
  postgres: {}
  static:
    driver_opts:
      device: "tmpfs"
      type: "tmpfs"
And now you can run your stack from this node; it will use your local app image and benefit from image-based features (updates, rollbacks, etc.).
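An illustrative flow, assuming the stack is called myapp-staging and the image tags are placeholders:
docker build -t my-app:latest .
docker stack deploy -c docker-compose.yml myapp-staging
# later, roll out a new version of the app service
docker build -t my-app:v2 .
docker service update --image my-app:v2 myapp-staging_app
# and roll back if something goes wrong
docker service rollback myapp-staging_app
On a single node without a registry, Docker will warn that the image cannot be resolved to a digest; that is expected, and the update still proceeds with the local image.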
I do have a side note on your stack file, though: you are using the same env file for both services. Please mind that Swarm will look for the ".env" file relative to (next to) the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also, as a side note, this solution is only feasible on a single-node cluster. If you scale your cluster, you will have to use a registry. Registries don't have to be public: you can deploy a private registry on your cluster so that only your nodes can access it, or you can make it public; the accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of a Docker image, you can use the Dockerfile directly there. Please check the example below.
version: "3.7"
services:
  webapp:
    build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The method above should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in Docker Swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you try a rolling update, Docker Swarm checks whether there is a change in the image used for the service; if so, it schedules a service update based on the update criteria that have been set up and carries it out.
Let's say there is no change to the image; then what happens? Docker simply will not apply the rolling update. Technically you can pass the --force flag to force an update, but that will just redeploy the service.
Hence, create a local registry, store the images in it, and use that image name in the docker-compose file used for the swarm. You can secure the registry with SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a Docker registry server.
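A minimal sketch of a self-hosted registry (add TLS and authentication for anything beyond local testing; names and ports are illustrative):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# tag and push the application image to it
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
The compose file would then reference image: "localhost:5000/my-app:latest"; on a multi-node cluster you would use the registry host's address instead of localhost.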
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, just as is done in the postgres service. As you have a build instruction, the image name is mandatory, because docker-compose otherwise doesn't know what to name the built image. Reference.
A registry server is needed if you are going to deploy the application across multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough, but the private-registry approach is the recommended one.
My recommendation is not to club all the services into a single docker-compose file. The reason is that when you deploy/destroy using a docker-compose file, all the services are taken down together, which is a kind of tight coupling. Of course, I understand that the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV variables to define the values.
Also, just a note: a stack is just a way to group services in Swarm.
So your compose file should be:
version: "3.6"
services:
  postgres:
    env_file:
      - ".env"
    image: "postgres:11.0-alpine"
    volumes:
      - "/var/run/postgresql:/var/run/postgresql"
  app:
    build: "."
    image: "image-name:tag" # the image built will be tagged as image-name:tag
    working_dir: /app # note here I've removed the .env file
    depends_on:
      - "postgres"
    command: iex -S mix phx.server
    volumes:
      - ".:/app"

volumes:
  postgres: {}
  static:
    driver_opts:
      device: "tmpfs"
      type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
ARG a
ARG b
ARG c
ENV a=$a
ENV b=$b
ENV c=$c # here a, b, and c have to be supplied at build time, e.g. with --build-arg a=... --build-arg b=... --build-arg c=...
ENTRYPOINT ./file-to-run
And you may need to run
docker-compose build
docker-compose push (optional; needed to push the image to a registry, if one is used)
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned above by @M.Hassan, I've explained the ideal, recommended way.

Use the Kubernetes tool Kompose to start multiple containers in a single pod

I have been searching Google for a solution to the problem below for longer than I care to admit.
I have a docker-compose.yml file, which allows me to fire up an ecosystem of 2 containers on my local machine. Which is awesome. But I need to be able to deploy to Google Container Engine (GCP). To do so, I am using Kubernetes; deploying to a single node only.
In order to keep the deployment process simple, I am using kompose, which allows me to deploy my containers on Google Container Engine using my original docker-compose.yml. Which is also very cool. The issue is that, by default, Kompose will deploy each docker service (I have 2) in separate pods; one container per pod. But I really want all containers/services to be in the same pod.
I know there are ways to deploy multiple containers in a single pod, but I am unsure if I can use Kompose to accomplish this task.
Here is my docker-compose.yml:
version: "2"
services:
  server:
    image: ${IMAGE_NAME}
    ports:
      - "3000"
    command: node server.js
    labels:
      kompose.service.type: loadbalancer
  ui:
    image: ${IMAGE_NAME}
    ports:
      - "3001"
    command: npm run ui
    labels:
      kompose.service.type: loadbalancer
    depends_on:
      - server
Thanks in advance.
The thing is, docker-compose does not launch them like that either. They are completely separate. That means, for example, that you can have two containers listening on port 80, because they are independent. If you try to pack them into the same pod you will get a port conflict and end up with a mess. To make any sense, the scenario you want would have to be achieved at the Dockerfile level (although fat, supervisor-based containers can be considered an antipattern in many cases), which in turn makes your compose file obsolete...
IMO you should embrace how things are, because it does not make sense to map a docker-compose-defined stack to a single pod.
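For completeness: if someone really does want both services in one pod, the manifest has to be written by hand, since Kompose by default generates one Deployment per Compose service. A rough sketch with placeholder image names; note that the containers share the pod's network namespace, so this only works here because 3000 and 3001 don't clash:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: server
      image: myrepo/app:latest   # placeholder image
      command: ["node", "server.js"]
      ports:
        - containerPort: 3000
    - name: ui
      image: myrepo/app:latest   # placeholder image
      command: ["npm", "run", "ui"]
      ports:
        - containerPort: 3001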

docker-compose scale with nginx and without environment variable

I use docker-compose to describe the deployment of one of my applications. The application is composed of:
a MongoDB database,
a Node.js application,
an nginx front end serving the static files of the Node.js app.
If I scale the Node.js application, I would like nginx to adapt automatically to the three application instances.
Recently I used the following code snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
This is based on the fact that some environment variables are created on link, and change when new servers are present.
But now, with version 2 of the docker-compose file format and Docker's new link system, those environment variables don't exist anymore.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
Documentation states here that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe that you could just put your service names in that nginx conf file, like:
upstream myservice {
  server yourservice1;
  server yourservice2;
}
as they would be exported as host entries in /etc/hosts for each container.
But if you really want to have that host:port information as environment variables, you could write a script to parse the docker-compose.yml and define an .env file, or do it manually.
UPDATE:
You can get that port information from outside the container; this will return the ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
But if you want to do it from inside a container, then what you want is a service discovery system like ZooKeeper.
There's a long feature request thread in Docker's repo about that.
One workaround solution caught my attention: you could try building your own nginx image based on that.
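One common variant of that workaround is to rely on Docker's embedded DNS server instead of environment variables: with a resolver directive and a variable in proxy_pass, nginx re-resolves the service name at runtime and picks up the containers added by docker-compose scale. A sketch (the service name nodejs and port 3000 come from the compose file above; everything else is illustrative):
resolver 127.0.0.11 valid=10s;
server {
    listen 80;
    location / {
        set $upstream http://nodejs:3000;
        proxy_pass $upstream;
    }
}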
