People using Docker have probably used Dockerfiles as master templates for their containers.
Does Kubernetes allow re-use of existing Dockerfiles, or will people need to port them to Kubernetes .yaml-style templates?
I'm not aware of tools for doing so, or of people who have tried this.
Dockerfiles and the Kubernetes resource manifests (the yaml files) are somewhat orthogonal. While you could pull some information from the Dockerfile to prepopulate the Kubernetes manifest, it'd only be able to fill in a very small subset of the options available.
You can think of Dockerfiles as describing what is packaged into your container image, while the Kubernetes manifests specify how your container image is deployed: which ports are exposed, which environment variables are set, which volumes are mounted, which services are made available to it; how it should be scheduled, health-checked, and restarted; what its resource requirements are; and so on.
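To make the split concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the names, image, port, and environment variable are all hypothetical); everything in it is deployment-time configuration that a Dockerfile has no way to express:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 2                 # how many copies to run
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # the image built from your Dockerfile
          ports:
            - containerPort: 8080                  # which port is exposed
          env:
            - name: DATABASE_URL                   # environment added at deploy time
              value: postgres://db:5432/app
          resources:
            requests:
              cpu: 100m                            # resource requirements
              memory: 128Mi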
I think what you are referring to are your docker-compose files. Those are responsible for orchestrating your 'service'. If you have docker-compose files, there is a tool that can help convert them to Kubernetes manifests:
https://github.com/kubernetes/kompose
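For what it's worth, a rough sketch of how kompose is typically used (the file and service names are assumptions):

# convert an existing Compose file into Kubernetes manifests
kompose convert -f docker-compose.yml
# this emits one manifest per Compose service, e.g. web-deployment.yaml and web-service.yaml
kubectl apply -f web-deployment.yaml -f web-service.yaml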
Related
I'm specifically trying to get files from services (docker containers) in a GitLab CI job to the runner container. I could provide more details on exactly what I'm trying to do, but I'd like to keep this question fairly generic and platform/language agnostic.
Essentially I have the following .gitlab-ci.yml:
image: customselfhostedimage-redacted
services:
  - postgres:latest
  - selenium/standalone-chrome
...
There are files being downloaded in one of the service containers (selenium) which I need to access from the main container run by the GitLab runner. Unfortunately I cannot seem to find any method to create a volume mount or share of some sort between the service containers and the host (※ NOTE: This was not true, see accepted answer.). Adding commands to specifically copy files from within the service containers is also not an option for me. I'm aware of multiple issues requesting such functionality, such as this one:
https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3207
The existence of these open issues indicates to me there is not currently a solution.
I have tried to specify volumes in config.toml, as has been suggested in comments on various GitLab CI issues related to this subject, but this does not seem to create volume mounts on the service containers.
Is there any way to create volume mounts inside service containers accessible to the runner/runner container, or if not is there any simple solution to make files accessible from (and possibly between) service containers?
※ NOTE: This is NOT a docker-compose question, and it is NOT a docker-in-docker question.
If you self-host your runners, you can add volumes to the runner configuration, which applies to services and job containers alike.
Per the documentation:
GitLab Runner 11.11 and later mount the host directory for the defined services as well.
For example:
[runners.docker]
# ...
volumes = ["/path/to/bind/from/host:/path/to/bind/in/container:rw"]
This path will be available both in the job and in the containers defined under services:. But keep in mind that the data (and any changes) are available and persisted across all jobs that use this runner.
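As a hypothetical illustration (the bind path matches the config.toml example above; the job name and downloaded file name are assumptions), the job's script can then read whatever the service wrote into the bound directory:

# .gitlab-ci.yml -- assumes the runner binds the host path as shown above
e2e:
  image: customselfhostedimage-redacted
  services:
    - selenium/standalone-chrome
  script:
    - ls /path/to/bind/in/container                 # files written there by the service are visible here
    - cp /path/to/bind/in/container/report.html .   # hypothetical file downloaded by the service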
While looking for a kubernetes equivalent of the docker-compose watchtower container, I stumbled upon renovate. It seems to be a universal tool to update docker tags, dependencies and more.
They also have an example of how to run the service itself inside Kubernetes, and I found this blog post about how to set Renovate up to check Kubernetes manifests for updates (or so I understand).
Now the puzzle piece that I'm missing is some super basic working example that updates a single pod's image tag, and then figuring out how to deploy that in a kubernetes cluster. I feel like there needs to be an example out there somewhere but I can't find it for the life of me.
To explain watchtower:
It monitors all containers running in a Docker Compose setup and pulls new versions of their images once they become available, updating the containers in the process.
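(For reference, watchtower itself is usually just another Compose service with the Docker socket mounted; a rough sketch, with the service name assumed:)

services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets watchtower inspect and restart the other containers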
I found keel, which looks similar to watchtower:
Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
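If I remember correctly, keel is driven by labels on the workloads themselves, roughly like this (the policy value and all names are assumptions; treat this as a sketch rather than keel's documented API):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    keel.sh/policy: minor        # assumed: let keel roll out new minor/patch tags automatically
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.0   # the tag keel would watch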
Alternatively, there is Diun:
Docker Image Update Notifier is a CLI application written in Go and delivered as a single executable (and a Docker image) to receive notifications when a Docker image is updated on a Docker registry.
The Kubernetes provider allows you to analyze the pods of your Kubernetes cluster to extract images found and check for updates on the registry.
I think there is some confusion regarding what Renovate does.
Renovate updates files inside Git repositories, not resources on the Kubernetes API server.
The Kubernetes manager, which you are probably referring to, updates Kubernetes manifests, Helm charts, and so on inside a Git repository.
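For completeness, a minimal sketch of what enabling that manager in renovate.json can look like (the path pattern is an assumption; as far as I know the kubernetes manager matches no files until you configure a pattern):

{
  "kubernetes": {
    "fileMatch": ["^k8s/.+\\.yaml$"]
  }
}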
I have been reading various articles about migrating my Docker application to a different machine. All the articles talk about “docker commit” or “export/import”. These only cover a single container, which is first converted to an image and then started with “docker run” on the new machine.
But my application is usually made up of several containers, because I am following the best practice of segregating different services.
The question is, how do I migrate or move all the containers that have been configured to join together and run as one? I don’t know whether “swarm” is the correct term for this.
The alternative I see is to simply copy the docker-compose file and Dockerfile to the new machine and do a fresh setup of the architecture, then copy all the application files. It runs fine.
My proposal, of course, is not the only solution, but it's quite nice:
1. Create the Docker images on one machine (where you need your Dockerfile).
2. Upload the images to a Docker registry (you can use your own Docker Hub account, a Nexus, or whatever).
   2.1. It's also recommended to tag your images with a version, and to protect against overwriting an image of the same version with different code.
3. Use docker-compose to deploy (docker-compose up is like several docker run commands, but easier to maintain); it's recommended to define a Docker network for all the containers that have to interact with each other. See the sketch after this list.
4. You can deploy on several machines just by using the same docker-compose.yml and access to your registry.
   4.1. Deployment can be done on a single host, Swarm, Kubernetes... (for Kubernetes you'd have to translate your docker-compose.yml into Kubernetes manifests).
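A rough sketch of those steps (registry host, image name, and version tag are assumptions):

# 1. build and tag the image with a version
docker build -t registry.example.com/myapp:1.0.0 .

# 2. push it to your registry
docker push registry.example.com/myapp:1.0.0

# 3./4. on any machine that can reach the registry
docker-compose pull
docker-compose up -d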
I agree with the docker-compose suggestion, and with storing your images in a registry or on your local machine. Each service gets its own section in your docker-compose file, written in YAML.
You are going to want version 3 YAML, I believe. From there you write something like the example below, where each service uses your Dockerfile image from your registry or from a local folder.
version: '3'
services:
  drupal:
    image:
    # ...ports, volumes, etc.
  postgres:
    image:
    # ...ports, volumes, etc.
Disclosure: I took a Docker Course from Bret Fisher on Udemy.
I want to create some Docker images that generate text files. However, since the images are pushed to Container Registry in GCP, I am not sure where the files will be generated when I use kubectl run myImage. If I specify a path in the program, like '/usr/bin/myfiles', would they be downloaded to the VM instance where I am typing "kubectl run myImage"? I think this is probably not the case. What is the solution?
Ideally, I would like all the files to be in one place.
Thank you
Container Registry and Kubernetes are mostly irrelevant to the issue of where a container will persist files it creates.
Some process running within a container that generates files will persist the files to the container instance's file system. Exceptions to this are stdout and stderr which are both available without further ado.
When you run container images, you can mount volumes into the container instance, and this provides possible solutions to your needs. When running Docker Engine, for example, it's common to mount part of the host's file system into the container to share files between the container and the host: docker run ... --volume=[host]:[container] yourimage ....
On Kubernetes, there are many types of volumes. A seemingly obvious solution is to use gcePersistentDisk, but this has the limitation that these disks may only be mounted for writing by one pod at a time. A more powerful solution may be to use an NFS-based volume such as nfs or glusterfs. These should provide a means for you to consolidate files outside of the container instances.
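For example, a minimal sketch of a pod using an nfs volume (the server, export path, and all names are assumptions); every pod mounting the same export would see the same files:

apiVersion: v1
kind: Pod
metadata:
  name: file-writer                              # hypothetical name
spec:
  containers:
    - name: app
      image: gcr.io/my-project/my-image:latest   # hypothetical image
      volumeMounts:
        - name: shared-files
          mountPath: /usr/bin/myfiles            # the path your program writes to
  volumes:
    - name: shared-files
      nfs:
        server: nfs.example.internal             # hypothetical NFS server
        path: /exports/myfiles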
A good solution, though I'm unsure whether it is available to you, would be to write your files as Google Cloud Storage objects.
A tenet of containers is that they should operate without making assumptions about their environment. Your containers should not assume they are running on Kubernetes and should not assume the presence of non-default volumes. By this I mean that your containers should just write files to the container's file system; when you run the container, you apply the configuration that, e.g., provides an NFS volume mount or a GCS bucket mount and thereby actually persists the files beyond the container.
HTH!
If my container exits, then all my images are lost, and their respective data as well.
Could you please explain how to save the data (here, in the case of GitLab, we have multiple branches)? How can I save those branches so that even if the container exits, the next time we restart the container I get all my old branches back?
This question is a bit light on specific details of your workflow, but the general answer to the need for persistent data in the ephemeral container world is volumes. Without a broader understanding of your workflow and infrastructure, it could be as simple as making sure that your GitLab data is in a named local volume, e.g. something you create with docker volume create, or an image that everyone uses that has a VOLUME location identified in the Dockerfile and is bind mounted to a host location at container run time.
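For example, a minimal sketch of the named-volume approach with the stock gitlab/gitlab-ce image (the volume names are assumptions; the mount paths are the ones that image uses for its config, logs, and data):

# named volumes outlive any single container
docker volume create gitlab-config
docker volume create gitlab-logs
docker volume create gitlab-data

# run GitLab with its state on those volumes; recreating the container keeps your repositories and branches
docker run -d --name gitlab \
  -v gitlab-config:/etc/gitlab \
  -v gitlab-logs:/var/log/gitlab \
  -v gitlab-data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest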
Of course once you are in the world of distributed systems and orchestrating multi-node container environments, local volumes will no longer be a viable answer and you will need to investigate shared volume capabilities from a storage vendor or self-managed with NFS or some other global filesystem capabilities. A lot of good detail is provided in the Docker volume administrative guide if you are new to the volume concept.