How do I convert the following docker-compose file to Kubernetes YAML manifests?
version: '3.8'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
You will need several components: first, a Service, which will receive the HTTP request; then a Deployment to create the actual pod; and, if you need a volume, also a PersistentVolume. In my repository you can find a docker-compose YAML converted to k8s. Of course, you will probably need to change some of the data.
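For a compose file like this, a minimal sketch of those objects might look like the following. The ConfigMap name mongo-init and the app: mongo labels are my own choices, not anything Kubernetes mandates; the init script can be loaded first with kubectl create configmap mongo-init --from-file=init-mongo.js:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: init-script           # replaces the :ro bind mount
              mountPath: /docker-entrypoint-initdb.d
              readOnly: true
      volumes:
        - name: init-script
          configMap:
            name: mongo-init              # assumed ConfigMap name, see above
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017

For real data you would additionally attach a PersistentVolumeClaim for /data/db; the sketch above only covers what the compose file shows.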
The kompose project provides a tool that does specifically this: it converts a docker-compose file into a set of Kubernetes YAML files. It might be worth a look for you.
For simple docker-compose files like this one it works fine. But for more complicated ones it might over-complicate things, so YMMV and all.
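Assuming kompose is installed, the conversion itself is a one-liner; the -o flag writing into a separate directory is optional:

kompose convert -f docker-compose.yml -o k8s/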
Related
I'm trying to setup a docker container with clamav and am struggling to allow for larger files to be scanned. I've set up my docker-compose.yml like this:
version: "3.3"
services:
clamav:
image: clamav/clamav:latest
environment:
CLAMD_CONF_MaxFileSize: 250M
CLAMD_CONF_MaxScanSize: 250M
restart: always
ports:
- "3310:3310"
but that doesn't seem to do it (I keep getting a Broken Pipe Error). I presume I'm just using the wrong variables, but I can't seem to find the right ones.
Can anyone point me in the right direction?
As far as I know, this is not possible in the official clamav/clamav:stable image, though it would be a great improvement to the image.
We also wanted to use the official image. Our solution has been to mount the /var/lib/clamav and /etc/clamav directories to a persistent volume, change /etc/clamav/clamd.conf after the container has started, and restart the container after the configuration change.
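Roughly, the setup looks like this (the volume names clamav-db and clamav-conf are our own choices):

version: "3.3"
services:
  clamav:
    image: clamav/clamav:stable
    restart: always
    ports:
      - "3310:3310"
    volumes:
      - clamav-db:/var/lib/clamav     # signature database survives restarts
      - clamav-conf:/etc/clamav       # editable config survives restarts
volumes:
  clamav-db:
  clamav-conf:

After the first start, edit clamd.conf inside the clamav-conf volume (for example set MaxFileSize 250M and MaxScanSize 250M) and run docker-compose restart clamav.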
I've built an application through Docker with a docker-compose.yml file and I'm now trying to convert it into deployment files for K8s.
I tried to use the kompose convert command but it seems to behave strangely.
Here is my docker-compose.yml:
version: "3"
services:
worker:
build:
dockerfile: ./worker/Dockerfile
container_name: container_worker
environment:
- PYTHONUNBUFFERED=1
volumes:
- ./api:/app/
- ./worker:/app2/
api:
build:
dockerfile: ./api/Dockerfile
container_name: container_api
volumes:
- ./api:/app/
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "8050:8050"
depends_on:
- worker
Here is the output of the kompose convert command:
[root@user-cgb4-01-01 vm-tracer]# kompose convert
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/var/run/docker.sock" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/worker" isn't supported - ignoring path on the host
INFO Kubernetes file "api-service.yaml" created
INFO Kubernetes file "api-deployment.yaml" created
INFO Kubernetes file "api-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "api-claim1-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "worker-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-claim1-persistentvolumeclaim.yaml" created
And it created 7 YAML files for me. But I expected to have only one deployment file. Also, I don't understand the warnings I get. Is there a problem with my volumes?
Maybe it would be easier to convert the docker-compose to a deployment.yml manually?
Thank you,
I'd recommend using Kompose as a starting point or inspiration more than an end-to-end solution. It does have some real limitations and it's hard to correct those without understanding Kubernetes's deployment model.
I would clean up your docker-compose.yml file before you start. You have volumes: that inject your source code into the containers, presumably hiding the application code in the image. This setup mostly doesn't work in Kubernetes (the cluster cannot reach back to your local system) and you need to delete these volumes: mounts. Doing that would get rid of both the Kompose warnings about unsupported host-path mounts and the PersistentVolumeClaim objects.
You also do not normally need to specify container_name: or several other networking-related options. Kubernetes does not support multiple networks and so if you have any networks: settings they will be ignored, but most practical Compose files don't need them either. The obsolete links: and expose: options, if you have them, can also usually be safely deleted with no consequences.
version: "3.8"
services:
worker:
build:
dockerfile: ./worker/Dockerfile
environment:
- PYTHONUNBUFFERED=1
api:
build:
dockerfile: ./api/Dockerfile
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "8050:8050"
depends_on: # won't have an effect in Kubernetes,
- worker # but still good Docker Compose practice
The bind-mount of the Docker socket is a larger problem. This socket usually doesn't exist in Kubernetes, and if it does exist, it's frequently inaccessible (there are major security concerns around having it available, and it would allow you to launch unmanaged containers as well as root the node). If you need to dynamically launch containers, you'd need to use the Kubernetes API to do that instead (look at creating one-off Jobs). For many practical purposes, having a long-running worker container attached to a queueing system like RabbitMQ is a better approach. Kompose can't fix this architectural problem, though; you will have to modify your code.
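For reference, a one-off Job manifest is fairly small. This sketch assumes a hypothetical task image and arguments; note that generateName works with kubectl create -f, not kubectl apply:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: worker-task-                      # hypothetical name prefix
spec:
  backoffLimit: 2                                 # retry the pod up to twice
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: registry.example.com/my-task:latest  # hypothetical image
          args: ["--job-id", "123"]                   # hypothetical arguments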
When all of this is done, I'd expect Kompose to create four files, with one Kubernetes YAML manifest in each: two Deployments, and two matching Services. Each of your Docker Compose services: would get translated into a separate Kubernetes Deployment, and you need a paired Kubernetes Service to be able to connect to it (even from within the cluster). There are a number of related objects that are often useful (ServiceAccounts, PodDisruptionBudgets, HorizontalPodAutoscalers) and a typical Kubernetes practice is to put each in its own file.
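As an illustration, the Service paired with the api Deployment could be as small as this; the app: api label is an assumption and must match the Deployment's pod labels:

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api          # must match the Deployment's pod template labels
  ports:
    - port: 8050
      targetPort: 8050

Other pods in the same namespace could then reach it at http://api:8050.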
I guess this is fine:
All your Docker-exposed ports are now Kubernetes Services.
Your volumes need a PV and PVC; those are generated.
There is a Deployment YAML for each of your API and WORKER services.
This is how it usually should be.
However, if you are unsure how to deploy these files, try:
kubectl apply -f mymanifests/ - this will deploy all the manifests in the directory at once.
Or if you just want a single file, you can concatenate all these files with --- one after the other; --- separates multiple manifests but still keeps them in a single file. Something like:
apiVersion: ...   # deployment manifest
---
apiVersion: ...   # service manifest
... and so on.
I have a problem that I just can't understand. I am using Docker to run certain containers, but I have problems with at least one volume, and I'd like to ask if anybody can give me a hint about what I am doing wrong. I am using Nifi-Ingestion as the example, but it affects even more container volumes.
First, let's talk about the versions I use:
Docker version 19.03.8, build afacb8b7f0
docker-compose version 1.27.4, build 40524192
Ubuntu 20.04.1 LTS
Now, let's show the volume in my working docker-compose-file:
In my container, it is configured as follows:
volumes:
  - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom of my docker-compose file it is defined as a normal named volume:
volumes:
  nifi-ingestion-conf:
This is a snippet from the docker-compose that I'd like to get working
In my container, it is configured in this case as follows (with STORAGE_VOLUME_PATH defined as /mnt/storage/docker_data):
volumes:
  - ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom I guess there is something I need to do, but I don't know what. In this case it is the same as in the working docker-compose:
volumes:
  nifi-ingestion-conf:
So, now what's my problem?
I have two docker-compose files. One uses normal named volumes, and one uses volumes under my extra mount path. When I run the containers, the volumes seem to behave differently: files are written in the first setup, but not in the second. The mount paths are created in the second version, so there is nothing wrong with the environment variables in my .env file.
Hint: /mnt/storage/docker_data is an NFS mount, but my machine has full privileges on that share.
Here is my fstab-entry to mount that volume (maybe I have to set other options):
10.1.0.2:/docker/data /mnt/storage/docker_data nfs auto,rw
Bigger snippets
Here is a bigger snippet of the docker-compose file (I had to cut it and remove confidential data; my problem is not that it does not work, only that the volume acts differently. Everything for this one volume is in the code.):
version: "3"
services:
nifi-ingestion:
image: my image on my personal repo
container_name: nifi-ingestion
ports:
- 0000
labels:
- app-specivic
volumes:
- ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
#working: - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
environment:
- app-specivic
networks:
- cnetwork
volumes:
nifi-ingestion-conf:
networks:
cnetwork:
external: false
ipam:
driver: default
config:
- subnet: 192.168.1.0/24
And here is the .env (only the value we are using):
STORAGE_VOLUME_PATH=/mnt/storage/docker_data
If I understand your question correctly, you wonder why the following docker-compose snippet works for you:
version: "3"
services:
nifi-ingestion:
volumes:
- nifi-ingestion-conf:/opt/nifi/nifi-current/conf
volumes:
nifi-ingestion-conf:
and the following docker-compose snippet does not work for you:
version: "3"
services:
nifi-ingestion:
volumes:
- ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
What makes them different is how you use volumes. You need to differentiate between mounting host paths and mounting named volumes.
You can mount a host path as part of a definition for a single service, and there is no need to define it in the top level volumes key.
But, if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key.
Named volumes are managed by Docker.
If you start a container with a volume that does not yet exist, Docker creates the volume for you.
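One quick way to see this difference is to inspect the named volume Docker created for the working setup; its Mountpoint will be under Docker's own storage area (typically /var/lib/docker/volumes/), not under your NFS path. The project-name prefix here is hypothetical:

docker volume inspect myproject_nifi-ingestion-conf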
Also, I would advise you to read this answer.
Update:
You might also want to read about Docker NFS volumes.
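As a sketch, a named volume backed directly by your NFS share could look like this; the device sub-path is an assumption based on your fstab entry, so adjust it to your export layout:

volumes:
  nifi-ingestion-conf:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.1.0.2,rw
      device: ":/docker/data/nifi-ingestion-conf"   # assumed sub-path of your export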
From the docker docs:
Docker Compose's extends keyword enables sharing of common configurations among different files, or even different projects entirely. Extending services is useful if you have several services that reuse a common set of configuration options. Using extends you can define a common set of service options in one place and refer to it from anywhere.
For some reason this feature was removed in version 3.
I also found this thread, but it has been inactive for 2 years.
I'm trying to find a replacement for this feature in the newer versions.
Would like to hear if somebody found a replacement for extends.
Thanks.
There are two ways to achieve what you need; you can use one of them or both at the same time, as they work slightly differently:
Multiple compose files
You can specify multiple compose files when running a docker compose command, you could for instance set up your project with:
docker-compose -f config1.yml -f config2.yml up
You could also use an environment variable to specify your files:
COMPOSE_FILE=config1.yml:config2.yml docker-compose up
What happens is that docker compose creates a single config merging what you defined in each of them.
Here is the documentation showing how to merge multiple compose files.
You can also generate your final config file by running the config command.
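For example, something like this writes the merged result to a single file (the output filename is just a suggestion):

docker-compose -f config1.yml -f config2.yml config > docker-compose.merged.yml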
YAML Anchors
Since docker compose files are basically YAML files, you can take advantage of YAML Anchors to define a block of properties and reuse them in multiple parts of your config.
For example (note that custom top-level keys must use the x- prefix, which requires compose file format 3.4 or later):
version: '3.4'
x-common: &common
  image: "myrepo/myimage"
  restart: "unless-stopped"
  volumes:
    - "volume:/mnt/myvolume"
services:
  service1:
    <<: *common
    ports:
      - "5000:5000"
  service2:
    <<: *common
    environment:
      - MYENV=value
Let's imagine I have 3 compose files (focusing only on the mysql service):
docker-compose.yml
docker-compose.staging.yml
docker-compose.prod.yml
In my docker-compose.yml I have my basic mysql stuff with dev as the build target:
version: "3.4"
services:
mysql:
build:
target: dev
...
And start it with
docker-compose up -d
In my staging environment I would like to expose port 3306 but also want another build target, so I would create docker-compose.staging.yml with the following content:
version: "3.4"
services:
mysql:
build
target: prod
ports:
- 3306:3306
And combine it with
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
So the build target is overwritten and port 3306 is now exposed to the outside.
Now I want the same in docker-compose.prod.yml, just without port 3306 being exposed to the outside ... How can I override the ports directive so that no ports are exposed?
I tried putting an empty array in the prod.yml, without success (the port is still exposed):
version: "3.4"
services:
mysql:
ports: []
In the end I would like to stack the up command like this:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml -f docker-compose.prod.yml up -d
I also know the docs say:
For the multi-value options ports, expose, external_links, dns, dns_search, and tmpfs, Compose concatenates both sets of values
But how can I reach my goal without duplicating configuration?
Sure, I could omit docker-compose.staging.yml, but the staging.yml defines build steps which should also be used for the prod stage, so that there are no differences between the built containers.
So duplicating things isn't really an option.
Thanks
I would actually strongly suggest just not using the "target" option in your compose files. I find it extremely beneficial to build a single image for local/staging/production: build once, test it, and deploy it in each environment. In this case, you change behavior using environment variables or mounted secrets/config files.
Further, using Compose to build the images is... fragile. I would recommend building the images in a CI system, pushing them to a registry, and then using the image version tags in your compose file; it is a much more reproducible system.
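As a sketch of that approach, the compose file then just references a tagged image; the registry URL and the IMAGE_TAG variable here are hypothetical:

version: "3.4"
services:
  mysql:
    image: registry.example.com/myapp/mysql:${IMAGE_TAG:-latest}  # hypothetical registry and tag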
You might consider using the extends key in your compose files, like this:
mysql:
  extends:
    file: docker-compose.yml
    service: mysql
  ports:
    - 3306:3306
  # other definitions
Although you'd have to change your compose version from 3.4 to < 3 (like 2.3), because v3 doesn't support this feature (ref), and there is an open feature request that has been hanging for a long time now.
An important note here is that you shouldn't expose any ports in your base docker-compose.yml file, only in the environment-specific composes.
Official docs ref for extends
Edit:
The target clause is not supported in v2.0, so I've adjusted the answer to match both the extends and target requirements. That's compose v2.3.
Edit from comments:
As there is a deploy keyword requirement, there is also a compose v3 requirement, and as of now there is no possibility to extend composes. I've read in some official doc (can't find it now for ref) that they encourage us to use flat composes, specific to each environment, so that it's always clear. Also, Docker states that this is hard to implement in v3 (ref in the above issue) and it's not going to be implemented any time soon. You have to use separate compose files per environment.