Launch different containers from a Dockerfile - docker

Is there any possibility to launch containers of different images simultaneously from a single "Dockerfile" ?

There is a misconception here. A Dockerfile is not responsible for launching a container. It's responsible for building an image (which you can then turn into a container with docker run ...). More info can be found in the official Docker documentation.
If you need to run many Docker containers simultaneously, I'd suggest you have a look at Docker Compose, which you can use to run containers based on images pulled from a registry or custom-built from Dockerfiles.
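For example, here is a minimal docker-compose.yml sketch (the service and image names are just placeholders) that starts two containers from two different images:

services:
  web:
    build: .               # image built from the Dockerfile in this directory
    ports:
      - "8080:80"
  db:
    image: postgres:15     # image pulled from a registry
    environment:
      POSTGRES_PASSWORD: example

Running docker-compose up then builds/pulls both images and starts both containers together.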

Also somewhat new to Docker, but my understanding is that the Dockerfile is used to create Docker images, and then you start containers from images.
If you want to run multiple containers you can use an orchestrator like Docker Swarm or Kubernetes.
Those have their own configuration files that tell them which images to spin up.

Related

How can I use the containers offered by webdevops/*?

I'm learning about Docker containers, and I found this repo with a lot of images and references. Can anyone help me understand how I can use those images?
I know about the docker run --rm command.
With Docker you first need a Docker image. A Docker image is a representation of an application that Docker can understand and run.
The most common ways to get one are to use docker pull or to build your own with docker build.
You can check the images you have with docker images.
Once you have your image you can run it with docker run MyImage. This creates a container, which is a running instance of the application.
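For example (the image and container names here are just placeholders):

docker pull nginx                               # get an existing image from a registry
docker build -t myimage .                       # or build your own from a Dockerfile
docker images                                   # list the images you have locally
docker run --rm -d --name mycontainer myimage   # start a container from an image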

Can Kubernetes ever create a Docker image?

I'm new to Kubernetes and I'm learning about it.
Are there any circumstances where Kubernetes is used to create a Docker image instead of pulling it from a repository ?
Kubernetes does not natively create images, but you can run a piece of software such as Kaniko in the Kubernetes cluster to achieve it. Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update the image metadata.
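As a rough sketch of how that looks in practice (the Git repo, registry URL, and the regcred secret name are placeholders), you can run the executor as a one-off pod:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git
        - --destination=registry.example.com/app:latest
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker    # registry credentials for the push
  volumes:
    - name: docker-config
      secret:
        secretName: regcred
        items:
          - key: .dockerconfigjson
            path: config.json

Submit it with kubectl apply -f and the pod clones the repo, builds the image and pushes it to the registry.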
Several options exist to create docker images inside Kubernetes.
If you are already familiar with docker and want a mature project, you could use Docker CE running inside Kubernetes. Check here: https://hub.docker.com/_/docker and look for the dind tag (docker-in-docker). Keep in mind there are pros and cons to this approach, so take care to understand them.
Kaniko seems to have potential but there's no version 1 release yet.
I've been using docker dind (docker-in-docker) to build docker images that run in a production Kubernetes cluster, with good results.
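A rough local sketch of that docker-in-docker pattern (the network and container names are made up; in Kubernetes the docker:dind container typically runs as a sidecar instead):

docker network create build-net
docker run --privileged -d --name dind --network build-net --network-alias docker -e DOCKER_TLS_CERTDIR="" docker:dind
docker run --rm --network build-net -e DOCKER_HOST=tcp://docker:2375 docker:cli docker version

Note the --privileged flag on the daemon container; that is one of the cons to understand before going this route.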

Live upgrade with docker

I'm making a docker image for a daemon that can be upgraded live without restarting. And I'm keeping it minimal by using a multi-stage build and starting everything with docker-compose.
Since this daemon has most of its features in loadable modules, upgrades are usually just a matter of reloading them. That is a very nice feature to have, because restarting the daemon would mean disconnecting all the users. But I don't know how to keep this feature with a docker image.
A shared volume obviously comes to mind, but this doesn't seem to play well with a multi-stage build or with docker-compose.
Unfortunately, I don't think this is possible with docker. As docker images are immutable, you need to create a new image with the new version of unrealircd. From that image you can start a new docker container. Using a shared volume would be possible in theory, but that is really not the intended use-case of volumes. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. If you use them to store the modules of unrealircd, you lose the ability to just take your docker image and start another container with the same application in it.
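A hedged sketch of what that rebuild-and-recreate cycle looks like with docker-compose (assuming a service named unrealircd in your docker-compose.yml):

docker-compose build unrealircd          # rebuild the image with the new module versions
docker-compose up -d --no-deps unrealircd   # recreate only that container from the new image

The trade-off the question points out remains: recreating the container restarts the daemon and disconnects the users.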

Why do docker CLI commands default to controlling containers?

I'm new to Docker, and one of the things I'm interested in is what the majority use cases are. For example, these commands seem to do the same thing:
docker container rm
and
docker rm
i.e. the CLI provides a shorthand means of controlling containers rather than images (the command docker image ls is also valid).
Why does docker choose to provide a short-hand means of working with containers rather than with images?
From my experience I work more with containers than with images. You create the image once but may create a container from this image multiple times.
I think this is similar to classes and objects. An image is just a blueprint for a container, the same way a class is a blueprint for an object. You create multiple objects from a class but you write the class just once, so in the end you will also execute more commands for containers than for images.
I think this is the reason why the commands are focused on the containers.
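For reference, a few of the shorthand commands and their longer management-command equivalents:

docker ps      = docker container ls
docker rm      = docker container rm
docker images  = docker image ls
docker rmi     = docker image rm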

Multiple Docker containers, same image, different config

I'm totally new to Docker so I appreciate your patience.
I'm looking for a way to deploy multiple containers with the same image; however, I need to pass a different config (file) to each.
Right now, my understanding is that once you build an image, that's what gets deployed, but the problem for me is that I don't see the point in building multiple images of the same application when it's only the config that is different between the containers.
If this is the norm, then I'll have to deal with it however if there's another way then please put me out of my misery! :)
Thanks!
I think looking at examples that are easy to understand could give you the best picture.
What you want to do is perfectly valid: an image should contain everything you need to run the application, but not its configuration.
To provide the configuration, you have a few options:
a) volume mounts
use volumes and mount the file during container start: docker run -v $(pwd)/my.ini:/etc/mysql/my.ini percona (and similar with docker-compose).
Be aware that you can repeat this as often as you like, so you can mount several configs into your container (that is, into the runtime version of the image).
You will create those configs on the host before running the container and need to ship those files along with the container, which is the downside of this approach (portability).
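For example, a rough sketch of two containers from the same (placeholder) myapp image, each with its own config file mounted from the host:

docker run -d --name app1 -v $(pwd)/config-a.ini:/etc/myapp/my.ini myapp
docker run -d --name app2 -v $(pwd)/config-b.ini:/etc/myapp/my.ini myapp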
b) entry-point based configuration (generation)
Most of the advanced Docker images do provide a complex, so-called entry point that consumes ENV variables you pass when starting the container to create the configuration(s) for you, like https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh
so when you run this image, you can do docker run -e MYSQL_DATABASE=myapp percona and this will start Percona and create the database myapp for you.
This is all done by:
adding the entry-point script here: https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L65
not forgetting to copy the script during the image build: https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L63
Then during container startup, your ENV variable will cause this to trigger: https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh#L91
Of course, you can do whatever you like with this. E.g. this configures a general Portus image: https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/registry-slim/11/docker-compose.yml
which has this entrypoint https://github.com/EugenMayer/docker-image-portus/blob/master/build/startup.sh
So you see, the entry-point strategy is very common and very powerful, and I would propose going this route whenever you can.
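A stripped-down sketch of such an entry point (the paths and variable names here are invented for illustration, not taken from the Percona image):

#!/bin/sh
# docker-entrypoint.sh: render the config from ENV variables, then start the app
set -e
mkdir -p /etc/myapp
cat > /etc/myapp/config.ini <<EOF
[database]
name=${APP_DB_NAME:-myapp}
host=${APP_DB_HOST:-localhost}
EOF
# hand control over to the command given as CMD / on the command line
exec "$@"

In the Dockerfile you would COPY this script in and set it as ENTRYPOINT, keeping the application start command as CMD; docker run -e APP_DB_NAME=other myapp then regenerates the config on every start.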
c) Derived images
Maybe for "completeness", the image-derive strategy, so you have you base image called "myapp" and for the installation X you create a new image
from myapp
COPY my.ini /etc/mysql/my.ini
COPY application.yml /var/app/config/application.yml
And call this image myapp:x. The obvious issue with this is that you end up having a lot of images; on the other hand, compared to a) it's much more portable.
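Assuming you save that snippet as Dockerfile.x, you would then build and run the variant like this (the tag and container name are just examples):

docker build -t myapp:x -f Dockerfile.x .
docker run -d --name installation-x myapp:x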
Hope that helps
Just run the same image as many times as needed. New containers will be created, and they can then be started and stopped, each one saving its own configuration. For your convenience it would be better to give each of your containers a name with "--name".
For instance:
docker run --name MyContainer1 <same image id>
docker run --name MyContainer2 <same image id>
docker run --name MyContainer3 <same image id>
That's it.
$ docker ps
CONTAINER ID   IMAGE          CREATED        STATUS              NAMES
a7e789711e62   67759a80360c   12 hours ago   Up 2 minutes        MyContainer1
87ae9c5c3f84   67759a80360c   12 hours ago   Up About a minute   MyContainer2
c1524520d864   67759a80360c   12 hours ago   Up About a minute   MyContainer3
After that your containers exist until you remove them, and you can start and stop them like VMs.
docker start MyContainer1
Each container runs with the same RO (read-only) image but with a RW (read-write) container-specific filesystem layer. The result is that each container can have its own files that are distinct from every other container's.
You can pass in configuration on the CLI, as an environment variable, or as a unique volume mount. It's a very standard use case for Docker.
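For example, a quick sketch with a placeholder image name, combining an environment variable and a mounted config file per container:

docker run -d --name app-a -e APP_MODE=a -v $(pwd)/config-a.yml:/app/config.yml myimage
docker run -d --name app-b -e APP_MODE=b -v $(pwd)/config-b.yml:/app/config.yml myimage

Both containers run the exact same image; only the injected environment and mounted config differ.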
