How to use a persistent volume with a podman pod and its containers

I want to run two containers in a podman pod. Both of these containers will use the same persistent volume. Do I use --volume when I run each container, or when I create the pod?

It depends: do you want the volume mounted at the same place in both containers? If so, you can mount it on the pod and both containers will have it. Otherwise, you can mount it on each container separately, without mounting it on the entire pod.
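For illustration, a minimal sketch of both approaches, assuming a Podman version where podman pod create accepts --volume; the volume, pod, and image names are placeholders:

# Option 1: mount the volume on the pod; every container that joins the
# pod gets the volume at the same path
podman volume create shared-data
podman pod create --name mypod -v shared-data:/data
podman run -d --pod mypod image1
podman run -d --pod mypod image2

# Option 2: mount the volume on each container individually,
# possibly at different paths
podman pod create --name mypod2
podman run -d --pod mypod2 -v shared-data:/data1 image1
podman run -d --pod mypod2 -v shared-data:/data2 image2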

Related

Adding volume after docker-compose up

I am using a multi-container Docker application on an EC2 Linux instance.
I have started it with: docker-compose -p myapplication up -d
I also have my EFS mounted on my EC2 host machine under: /mnt/efs/fs1/
Everything is working fine at that point.
Now I need to access this EFS from one of my docker containers.
So I guess I have to add a volume to one of my containers, linking /mnt/efs/fs1/ (on the host) to /mydestinationpath (in the container).
I can see my running containers IDs and images with: docker container ls
How can I attach the volume to my container?
Edit the docker-compose.yml file to add the volumes: entries you need, and re-run the same docker-compose up -d command. Compose will notice that some specific services' configurations have changed, and delete and recreate those specific containers.
Most of the configuration for a Docker container (image name and tag, environment variables, published ports, volume mounts, ...) can only be specified when the container is first created. At the same time, the general expectation is that there's nothing important in a container filesystem. So it's extremely routine to delete and recreate a container to change options like this, and Compose can automate it for you.
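As a sketch, the compose change could look like this; myservice is a placeholder for whichever service needs the EFS path:

# docker-compose.yml (fragment)
services:
  myservice:
    volumes:
      - /mnt/efs/fs1:/mydestinationpath

After saving the file, re-running docker-compose -p myapplication up -d will recreate only that service's container with the new mount.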

Docker using host libraries

I have hadoop libraries on my host and I want to use them in the container rather than having them in the container itself. Is there a way I can use my host's libraries in the docker container?
You can mount a host directory on the container so it will be available inside, for example:
docker run -it -v /opt/myhadooplibs:/myhadooplibs busybox
Now the contents of /opt/myhadooplibs will be available at /myhadooplibs in the container. You can read more about volumes in the Docker documentation.
Note that by doing this you are making your container non-portable, since it depends on the host.
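If the container only needs to read the libraries, a small variation on the command above mounts the directory read-only by appending :ro, so the container cannot modify the host's copies:

docker run -it -v /opt/myhadooplibs:/myhadooplibs:ro busybox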

Is there a way to start a sibling docker container mounting volumes from the host?

the scenario: I have a host that has a running docker daemon and a working docker client and socket. I have one docker container that was started from the host and has the docker socket mounted within it. It also has the host's docker client mounted into it. So I'm able to issue docker commands at will from within this docker container using the aforementioned mechanism.
the need: I want to start another docker container from within this docker container; in other words, I want to start a sibling docker container from another sibling docker container.
the problem: A problem arises when I want to mount files that live on the host filesystem into the sibling container that I want to spin up from the other docker sibling container. It is a problem because, when issuing docker run, the docker daemon behind the mounted socket is really watching the host filesystem. So I need access to the host filesystem from within the docker container that is trying to start another sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
-v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always resolved on the host; it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client, so "-v" always refers to the daemon's filesystem.
There are multiple ways to achieve this:
- You can always make sure that each container mounts the correct host directory.
- You can use --volumes-from (mount volumes from the specified container(s)), e.g.:
  docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
- You can use named volumes, as shown in the sketch below.
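As a sketch of that last option: a named volume is managed by the daemon itself, so sibling containers can share data without either of them knowing any host paths (volume and image names here are placeholders):

# create a named volume on the daemon's host
docker volume create shared_vol
# both siblings mount it by name; no host path required
docker run -d --name sibling_a -v shared_vol:/data some/image
docker run -d --name sibling_b -v shared_vol:/data some/image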

Can docker containers share a directory amongst them

Is it possible to share a directory between docker instances, so that different docker instances/containers running on the same server can directly share access to some data?
You can mount the same host directory into both containers:
docker run -v /host/shared:/mnt/shared ...
or use --volumes-from to mount a volume from another container:
docker run --volumes-from=some_container ...
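Spelled out as a complete sketch (image name illustrative), both containers see the same host directory, so writes from one are immediately visible to the other:

docker run -d --name writer -v /host/shared:/mnt/shared some/image
docker run -d --name reader -v /host/shared:/mnt/shared some/image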
Yes, this is what "Docker volumes" are. See Managing Data in Containers:
Mount a Host Directory as a Data Volume
[...] you can also mount a directory from your own host into a container.
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This will mount the local directory, /src/webapp, into the container as the /opt/webapp directory.
[...]
Creating and mounting a Data Volume Container
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it's best to create a named Data Volume Container, and then to mount the data from it.
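A sketch of that pattern (container and image names illustrative): create a container whose only job is to own the volume, then mount its volumes wherever the data is needed:

# create a data volume container; /bin/true exits immediately, since the
# container only needs to exist, not run
docker create -v /dbdata --name dbstore some/image /bin/true
# mount its volumes into the containers that actually use the data
docker run -d --volumes-from dbstore --name db1 some/image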

Why does docker have docker volumes and volume containers

Why does docker have docker volumes and volume containers? What is the primary difference between them? I have read through the docker docs but couldn't really understand it well.
Docker volumes
You can use Docker volumes to create a new volume in your container and to map it to a folder on your host. For example, you could mount the folder /var/log of your Linux host into your container like this:
docker run -d -v /var/log:/opt/my/app/log:rw some/image
This would create a folder called /opt/my/app/log inside your container, backed by /var/log on your Linux host. You can use this to persist data or to share data between your containers.
Docker volume containers
Now, if you mount a host directory to your containers, you somehow break the nice isolation Docker provides. You will "pollute" your host with data from the containers. To prevent this, you could create a dedicated container to store your data. Docker calls this container a "Data Volume Container".
This container will have a volume which you want to share between containers, e.g.:
docker run -d -v /some/data/to/share --name MyDataContainer some/image
This container will run some application (e.g. a database) and have a folder called /some/data/to/share. You can now share this folder with another container:
docker run -d --volumes-from MyDataContainer some/image
This container will also see the same volume as in the previous command. You can share the volume between as many containers as you like, just as you could share a mounted folder of your host. But it will not pollute your host with data: everything is still encapsulated in isolated containers.
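To see the sharing in action, a quick sketch reusing the names above: one throwaway container writes a file into the shared volume and another reads it back:

docker run --rm --volumes-from MyDataContainer some/image touch /some/data/to/share/hello
docker run --rm --volumes-from MyDataContainer some/image ls /some/data/to/share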
My resources
https://docs.docker.com/userguide/dockervolumes/
