Define Docker container volume bindings in a configuration file? - docker

Is there a way to define all the volume bindings either in the Dockerfile or another configuration file that I am not aware of?

Since volume bindings are used when you create a container, you can't define them in the Dockerfile (which is used to build your Docker image, not the container).
If you want a way to define the volume bindings without having to type them every time, you have the following options:
Create a script that runs the docker command and includes all of the volume options.
If you want to run more than one container, you can also use Docker Compose and define the volume bindings in the docker-compose.yaml file: https://docs.docker.com/compose/compose-file/#/volumes-volumedriver
Out of the two, I prefer Docker Compose, since it includes lots of other useful functionality, e.g. defining port bindings, links between containers, etc. You can do all of that in a script as well, but as soon as you use more than one container at a time for the same application (e.g. a web server container talking to a database container), Docker Compose makes a lot of sense: the configuration lives in one place, and you can start/stop all of your containers with a single command.
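For illustration, a minimal sketch of both options; the image name, host path and container path are made up for the example.
A wrapper script:
#!/bin/sh
# run.sh: keeps all of the volume options in one place
docker run -d --name myapp \
  -v /srv/myapp/data:/var/lib/myapp/data \
  myorg/myapp:latest
The equivalent service definition in a docker-compose.yml:
services:
  myapp:
    image: myorg/myapp:latest
    volumes:
      - /srv/myapp/data:/var/lib/myapp/data
With the Compose file checked in, docker-compose up -d replaces the long docker run invocation.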

Related

Isn't docker-compose a replacement for docker run?

I'm having difficulties understanding Docker. No matter how many tutorials I watch or guides I read, to me docker-compose looks like a way to define multiple Dockerfiles, i.e. multiple containers. I can define environment variables, ports, commands and base images in both.
I read in other questions/discussions that a Dockerfile defines how to build an image, and docker-compose defines how to run an image, but I don't understand that. I can bring up Docker containers without having a Dockerfile.
It's mainly for local development though. Does the Dockerfile have an important role when deploying to AWS, for example (where it probably comes out of the box, e.g. for EC2)?
So is the reason I can work locally with docker-compose only that the base image is my computer (taking care of the task the Dockerfile is supposed to do)?
Think about how you'd run some program, without Docker involved. Usually it's two steps:
Install it using a package manager like apt-get or brew, or build it from source
Run it, without needing any of its source code locally
In plain Docker without Compose, similarly, you have the same two steps:
docker pull a prebuilt image with the software, or docker build it from source
docker run it, without needing any of its source code locally
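As a purely illustrative example of those two steps, using the public nginx image:
docker pull nginx:1.25
docker run -d -p 8080:80 nginx:1.25
Neither command needs the nginx source code on your machine; everything the server needs is already inside the image.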
I'd aim to have a Dockerfile that builds a self-contained, immutable image, with all of your source code and library dependencies baked in. The ideal is that you can docker run your image without -v options to inject source code and without providing the command on the docker run command line.
The reality is that there are a lot of moving parts: you probably need to docker network create a network so containers can communicate with each other, pass docker run -e environment variables to specify host names and database credentials, launch multiple containers together, and so on. That's where Compose comes in: instead of running a series of very long docker commands, you put all of the details you need in a docker-compose.yml file, check it in, and run docker-compose up to bring all of those parts up together.
So, do:
Use Compose to start multiple containers together
Use Compose to write down complex runtime options like port mappings or environment variables with host names and credentials
Use Compose to build your image and start a container from it with a single command
Use your Dockerfile to build your application code, and a default CMD to run it, into the image.
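A rough sketch of that split of responsibilities; the Node.js application and the file names are assumptions made purely for illustration. The Dockerfile bakes the code and its default command into the image:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]
and the docker-compose.yml writes down the runtime wiring (build, ports, environment, companion containers):
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db               # the database is reachable by its service name
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
Then docker-compose up --build builds the image and starts both containers in one command.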

Can I adopt a loose container in a new docker-compose file?

I have several loose containers, including for example a rabbitmq container, that I am using for development. I have started migrating all of these loose containers to a docker-compose file for easier management and spinning up/down when testing. Unfortunately, there is a lot of configuration in these containers that I would rather not have to spend time setting up again.
As such, I was wondering if it was possible to adopt a docker container into a new docker-compose file.
I tried starting the compose file using the same name as the loose container, but I get: cannot create container for service rabbitmq: Conflict. The container name "/rabbitmq" is already in use by container "<ID>". You have to remove (or rename) that container to be able to reuse that name
If you expect docker-compose to notice an existing container that is already running with that name and simply reuse it, it doesn't work that way: Compose always wants to create and manage its own containers.
Instead, you should define any configuration you applied when running the stand-alone container in your docker-compose file, as sketched below.
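In practice that means reading the settings off the old container (docker inspect rabbitmq shows them) and writing them into the Compose file by hand. A minimal sketch for a RabbitMQ service; the ports, credentials and volume name are placeholders to adjust to your setup:
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: guest    # whatever the loose container was configured with
      RABBITMQ_DEFAULT_PASS: guest
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
volumes:
  rabbitmq-data:
If the loose container already keeps its data in a named volume, you can reference that same volume and mark it external: true in the top-level volumes section so the data carries over; the old container itself still has to be removed or renamed before Compose can reuse the container name.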

Using docker volumes in packer build

Is it possible to use existing docker or external volumes in/during packer build?
I saw in https://www.packer.io/docs/builders/docker/:
"VOLUME /test1 /test2"
What exactly does it mean? "VOLUME String EX: "VOLUME FROM TO"" doesn't explain much. Is /test1 from the host?
I also saw in https://www.packer.io/docs/builders/docker/#volumes:
volumes (map[string]string) - A mapping of additional volumes to mount into this container. The key of the object is the host path, the value is the container path.
How can I make use of that? Where/how can I declare it, supposing that I want to map the /etc/dnsmasq.d/ host path into the container, at build time as well as run time?
It has the same meaning as the corresponding Dockerfile directive (indeed, all of the directives in that section of the Packer documentation are Dockerfile commands). You probably don't need or want it.
This is different from the docker run -v option to mount content into a container. You cannot specify mount options like this at container build time (whether using docker build or Packer). You don't need to specify a VOLUME to be able to mount content on some container directory.
The Dockerfile VOLUME directive isn't needed for most common uses and mostly only has confusing side effects. You do not need it to mount configuration into your application; you do not need it to overwrite application source code with a development tree; the most obvious thing it does do is prevent future RUN instructions from having an effect. I'd avoid it unless you understand in detail what it does and why you want it.
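If the goal is just to get /etc/dnsmasq.d from the host into the running container, a minimal sketch is to bind-mount it at run time (the image name here is made up):
docker run -d -v /etc/dnsmasq.d:/etc/dnsmasq.d:ro my-dnsmasq-image
No VOLUME directive is required in the image for this mount to work.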

Where do docker images' new Files get saved to in GCP?

I want to create some Docker images that generate text files. However, since the images are pushed to Container Registry in GCP, I am not sure where the files will be generated when I use kubectl run myImage. If I specify a path in the program, like '/usr/bin/myfiles', would they be downloaded to the VM instance where I am typing "kubectl run myImage"? I think this is probably not the case. What is the solution?
Ideally, I would like all the files to be in one place.
Thank you
Container Registry and Kubernetes are mostly irrelevant to the issue of where a container will persist files it creates.
Some process running within a container that generates files will persist the files to the container instance's file system. Exceptions to this are stdout and stderr which are both available without further ado.
When you run container images, you can mount volumes into the container instance, and this provides possible solutions to your needs. When running Docker Engine, it's common to mount part of the host's file system into the container to share files between the container and the host: docker run ... --volume=[host]:[container] yourimage ....
On Kubernetes, there are many types of volumes. A seemingly obvious solution is to use gcePersistentDisk, but it has the limitation that these disks may only be mounted read-write by one pod at a time. A more flexible solution may be an NFS-based one such as nfs or gluster. These should provide a means for you to consolidate files outside of the container instances.
A good solution, if it is available to you, would be to write your files to Google Cloud Storage as objects.
A tenet of containers is that they should operate without making assumptions about their environment. Your containers should not assume they are running on Kubernetes and should not assume non-default volumes exist. By this I mean that your containers should simply write files to the container's file system. When you run the container, you apply the configuration that, for example, provides an NFS volume mount or a GCS bucket mount and actually persists the files beyond the container.
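To make that concrete, here is a sketch of a Kubernetes pod that mounts an NFS share at the path the program writes to; the image name, NFS server and paths are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: file-writer
spec:
  containers:
    - name: app
      image: gcr.io/my-project/my-image      # the image pushed to Container Registry
      volumeMounts:
        - name: shared-output
          mountPath: /usr/bin/myfiles        # the path the program writes its text files to
  volumes:
    - name: shared-output
      nfs:
        server: nfs-server.example.internal  # some NFS endpoint reachable from the cluster
        path: /exports/myfiles
The program keeps writing to /usr/bin/myfiles as if it were a local directory; the pod spec is what decides that this directory is backed by the NFS export, so all the files end up consolidated outside the containers.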
HTH!

How to use Dockerfile to link main container to a db container?

Docker has a quite convenient way to link the main container to a db container with a command like:
docker run --link db:db user/main
This is very convenient already. However, I believe it's still clumsy compared to a command like:
docker run user/ultra
where ultra is a container that is already linking the main container to the db container.
Is it possible to achieve this by writing a good Dockerfile?
I suppose I can start the Dockerfile with
FROM user/main
but how do I get the second container involved and then link them in the Dockerfile?
Thanks.
FROM user/main
That would create an image (at build time) based on user/main, which is not at all the same as linking two containers together at runtime.
Plus, --link is now obsolete: see "Legacy container links"
Warning: The --link flag is a deprecated legacy feature of Docker. It may eventually be removed.
Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link
Use instead the links directive in a docker-compose.yml file.
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
You can make sure the containers are launched in the proper order with the depends_on directive.
Then a simple docker-compose up will launch the containers and link them on the same network.
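For illustration, a minimal docker-compose.yml along those lines; the database image is an assumption, and user/main stands in for your application image:
services:
  db:
    image: postgres:15          # whatever database image you are actually using
  main:
    image: user/main
    links:
      - db                      # "db" is reachable as a hostname from main
    depends_on:
      - db                      # start the database before the application
With this in place, docker-compose up starts both containers on the same network and the main container can reach the database at the hostname db.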