Docker and "volatile volumes" à la /tmp

I run a server with two Docker containers; one does building and packaging and thus creates a lot of short-lived files in /tmp.
I'd like this container's /tmp not to be backed by persistent storage (union fs or volume) but to use the host's /tmp, which in turn is a tmpfs mount and ideal for such operations. Writing to a normal drive has overhead and causes HDD access (wear-out); I would prefer to stay in RAM as much as possible.
Some options are:
Bind-mount /tmp:/tmp into the container. This doesn't seem very secure, and is problematic if another process accesses the directory.
Bind a volume to /tmp. This means it sits on the hard drive unless I manage to move it onto /tmp.
There is then still the issue of deleting this volume each time the container stops, since I'd prefer a clean slate.
Mount /tmp as tmpfs in the container. This seems the most sane option, except it would mean editing all containers instead of plainly using existing ones.
I am new to Docker, maybe I am missing something obvious.
I'm looking for a way to specify volumes which can, or must, be dropped after the container stops, or which are even kept entirely in RAM unless that is infeasible.
And additionally, some easy way to mount /tmp like that in a container.

Docker allows you to do this using the --tmpfs option.
For example:
docker run -it --tmpfs /tmp ubuntu
Or, using the "advanced" --mount syntax, which allows for additional options to be set:
docker run -it --mount type=tmpfs,destination=/tmp ubuntu
For more information, and additional options that can be used, see the "Use tmpfs mounts" section in the documentation.
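For instance, the tmpfs-size and tmpfs-mode options of --mount can make the mount behave like a conventional, size-capped /tmp; a sketch (the 256m cap is an illustrative value, and ubuntu is just a stand-in image):

```shell
# Cap the tmpfs at 256 MiB and give it the usual /tmp permissions (1777:
# world-writable with the sticky bit); --rm discards the container, and
# with it the tmpfs, as soon as it exits.
docker run --rm \
  --mount type=tmpfs,destination=/tmp,tmpfs-size=256m,tmpfs-mode=1777 \
  ubuntu df -h /tmp
```

Because a tmpfs mount exists only for the container's lifetime, this also gives the "clean slate after every stop" behavior the question asks for.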

You can mount a tmpfs partition on your container's /tmp location if you run that container with the docker run --privileged option:
docker run -it --privileged ubuntu bash -l
root@2ed296ef6a80:/# mount -t tmpfs -o size=256M tmpfs /tmp
root@2ed296ef6a80:/# mount
...
tmpfs on /tmp type tmpfs (rw,relatime,size=262144k)
Or you can create a tmpfs mount on the docker host and mount it as a volume in your container:
# TMPDIR=$(mktemp -d)
# mount -t tmpfs -o size=256M tmpfs $TMPDIR
# docker run -it -v $TMPDIR:/tmp ubuntu bash -l
root@0f0555ec96cb:/# mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,relatime,size=262144k)

Related

Writeable tmpfs volume on existing directory in read-only docker container giving: Read-only file system

Trying to set up a MySQL container in Docker with the --read-only flag set for security, but running into this error:
[ERROR] Could not create unix socket lock file /var/run/mysqld/mysqld.sock.lock.
Trying to alleviate the problem by mounting /var/run/mysqld as a tmpfs; however, it seems that as long as the container is read-only, the tmpfs mount is also read-only.
docker run --read-only --mount type=tmpfs,destination=/var/run/mysqld,tmpfs-mode=1777 mysql:5 touch /var/run/mysqld/test
touch: cannot touch '/var/run/mysqld/test': Read-only file system
Using a volume mount instead of tmpfs works, but that is not what I want to do:
docker run --read-only --mount destination=/var/run/mysqld mysql:5 touch /var/run/mysqld
Testing it on a non-existent directory also works:
docker run --read-only --mount type=tmpfs,destination=/test,tmpfs-mode=1777 mysql:5 touch /test/test
And it works if you remove the --read-only flag from the container:
docker run --mount type=tmpfs,destination=/var/run/mysqld,tmpfs-mode=1777 mysql:5 touch /var/run/mysqld/test
Is this by design or a bug or am I missing some setting?
Docker version 19.03.12, build 48a66213fe

gcloud: add docker run arguments while deploying a docker container to GCE

I need to pass --shm-size to docker run when I deploy a container image from Container Registry.
According to the documentation I need to use the Arguments field under Advanced container options, but it doesn't work for me.
I've added a --shm-size 1G line there, yet
docker exec -it 68d... df -h still returns the default shm size:
shm 64M 0 64M 0% /dev/shm
Could somebody suggest how can I solve my issue?
I also tried to increase it manually inside the docker container, but ran into a
mount: /dev/shm: permission denied. error.
UPDATE
Solution:
I've created a bash script as an entry point which sets the /dev/shm size manually:
#!/bin/bash
# Enlarge /dev/shm from inside the container (needs root in the container)
echo "none /dev/shm tmpfs defaults,size=500m 0 0" >> /etc/fstab
mount -o remount /dev/shm
# Then start the actual workload
dotnet Worker.dll
Dockerfile:
....
USER root
COPY ["Worker/start.sh", "app/"]
CMD ["/bin/bash", "app/start.sh"]
The Arguments field under the advanced container options is just like passing arguments to the ENTRYPOINT.
You can query the compute metadata key gce-container-declaration from the container VM with:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration"
For your use case, create a non-container VM, install Docker yourself, and run the container with your own --shm-size argument.

Can I increase shared memory after launching a docker session

If I have a docker session already running, is there any way to increase the shared memory?
I know that we can do docker run -it --shm-size=13g dockertag /bin/bash
as shown in https://stackoverflow.com/a/35884597/3776827
or change the docker-compose.yml file.
But if I have a docker session already running, is it still possible to do that? (I have one process running and don't want to stop it)
Where do I put this run command?
docker build -f ./Dockerfile -t <tag> .
docker tag <tag> <repo>
docker push <repo>
If I do docker run -it --shm-size=13g <tag> /bin/bash, I get inside the container. Doing docker push after exiting the container had no effect.
I am trying to solve these errors on pytorch:
https://github.com/pytorch/pytorch/issues/8976
https://github.com/pytorch/pytorch/issues/973
Pardon my understanding of docker. I am a newbie to it.
But if I have a docker session already running, is it still possible
to do that
The answer is no and yes.
The answer is yes because this is possible if you mounted /dev/shm as a volume when you created the container. You can then increase the size of the mounted /dev/shm on the host, and those changes will be reflected in the container without a restart.
To demonstrate, in the example below /dev/shm from my machine is mounted as /dev/shm in the container.
First, let's check the size of /dev/shm on my machine
anon@anon-VirtualBox:~$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 2.9G 48M 2.9G 2% /dev/shm
Now let's create a docker container, mounting /dev/shm as a volume into the container, and check the size of the container's /dev/shm.
create the container:
anon@anon-VirtualBox: docker run -d -v /dev/shm:/dev/shm bash sleep 100000
bdc043c79cf8b1ba64ee8cfc026f8d62f0b609f63cbca3cae9f5d321fe47b0e0
check size of /dev/shm in container:
anon@anon-VirtualBox: docker exec -it bdc043c df -h /dev/shm
Filesystem Size Used Available Use% Mounted on
tmpfs 2.9G 47.9M 2.8G 2% /dev/shm
You can see the size in the container matches the size on my machine, which verifies we've properly mounted /dev/shm into the container.
Now I'll increase the size of /dev/shm on my machine
anon@anon-VirtualBox:~$ sudo mount -o remount,size=4G /dev/shm
anon@anon-VirtualBox:~$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 4.0G 56M 4.0G 2% /dev/shm
Now we can verify the container has been adjusted (without being restarted)
anon@anon-VirtualBox:~$ sudo docker exec -it bdc043c79cf8 df -h /dev/shm
Filesystem Size Used Available Use% Mounted on
tmpfs 4.0G 55.9M 3.9G 1% /dev/shm
The answer is no for you because you've already created the container. The container's configuration can be modified in /var/lib/docker/containers/<container_id>/hostconfig.json by adjusting ShmSize, but this requires a restart of the container to take effect. At that point there's no difference between this and creating a new container with a new size via docker run.
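That hostconfig.json edit can be sketched with sed on a stand-in file (the real path is /var/lib/docker/containers/<container_id>/hostconfig.json; stop the Docker daemon before editing it, and note the sample JSON below is a simplified stand-in, not the full file):

```shell
# Stand-in for hostconfig.json; the real file holds many more keys
CONF=$(mktemp)
printf '%s\n' '{"Binds":null,"ShmSize":67108864,"ReadonlyRootfs":false}' > "$CONF"

# Raise ShmSize to 13 GiB, expressed in bytes (13 * 1024^3 = 13958643712)
sed -i 's/"ShmSize":[0-9]*/"ShmSize":13958643712/' "$CONF"

grep -o '"ShmSize":[0-9]*' "$CONF"   # prints "ShmSize":13958643712
```

Restart the container afterwards for the new value to take effect.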
Where do I put this run command?
docker build -f ./Dockerfile -t <tag> .
docker tag <tag> <repo>
docker push <repo>
docker build: this builds the docker image
docker tag: gives the image an additional tag (note: this is redundant here, since a tag was already specified in the prior command)
docker push: this pushes the image to a remote registry (an image repository, i.e. https://hub.docker.com/)
These steps are all independent of each other and are used when needed for their purpose. It is optional to tag an image, just as it's optional to push one to a registry. The only requirement for running a docker container is that an image exists to run it from, which is why you specify the image name in the docker run command. So, to answer your question, the docker run command goes after you have built the image.
It's worth noting that when you run, for example, docker run bash, Docker looks for that image locally and, if it doesn't exist, will (by default) attempt to pull it from Docker Hub, i.e. https://hub.docker.com/_/bash. This is why, on a fresh install of Docker, docker run bash works without building a bash image first. Your output would look similar to:
docker run bash
Unable to find image 'bash:latest' locally
latest: Pulling from library/bash
050382585609: Pull complete
7bf5420b55e6: Pull complete
1beb2aaf8cf9: Pull complete
Digest: sha256:845d500de4879819b189934ec07de15e8ff8de94b3fd16d2744cbe7eb5ca3323
Status: Downloaded newer image for bash:latest
e4c594907b986af923afe089bdbbac057712b3e27589d12618b3d568df21d533

Docker: where are newly created files under / stored?

I was running a docker container process with:
host$ docker run -it <image> /etc/bootstrap.sh -bash
Then inside of the container I created a file /a.txt:
container$ echo "abc" > /a.txt
container$ cat a.txt
abc
I noticed the filesystem type for / is none:
container$ df -h
Filesystem Size Used Avail Use% Mounted on
none 355G 19G 318G 6% /
tmpfs 1.9G 0 1.9G 0% /dev
...
The inspect command shows that Volumes is null:
host$ docker inspect <image>
...
"Volumes": null,
...
After I exited the container process and restarted, the file disappeared. I wanted to understand:
1) what the root filesystem of the container process actually is;
2) how can I persist the newly created file?
Q: After I exited the container process and restarted, the file disappeared.
A: Data in a docker container is not persisted. That is, you lose everything in the container's writable layer once the container is removed; running the image again starts a fresh container.
Q: What is the root filesystem of the container process actually?
A: It is a union filesystem (which is why df reports the type as none): the image's read-only layers plus a thin per-container writable layer, stored by Docker's storage driver under /var/lib/docker on the host.
Q: How can I persist the newly created file?
A: If you are intending to keep the changes even after you restart the container then you will need to use docker's data volume.
See:
https://docs.docker.com/engine/tutorials/dockervolumes/
Essentially when you start the container, you can pass in the -v option to tell the container that you would like to map the directory from the host's file system to the container's directory.
That is, with the example command below,
$ docker run -d -P --name web -v $(pwd):/root <image>
you map your current working directory to the container's /root directory. Everything written to the container's /root area is then reflected in your host's file system and persists.
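As a quick check of that persistence (assuming Docker is available locally and using ubuntu as a stand-in image):

```shell
# Write a file through the bind mount; --rm removes the container on exit
docker run --rm -v "$(pwd)":/root ubuntu sh -c 'echo persisted > /root/a.txt'

# The container is gone, but the file remains on the host,
# because it was written to the host's filesystem all along
cat a.txt
```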

Docker -v parameter limited in size

I'm trying to mount a directory from my host to my container with
docker ... -v /host/path/to/dir:/container/dir ...
On the host, /host/path/to/dir has ~500GB of available space; in the container, the mount point has 20GB.
Why is this, and how can I fix it? I want to expose the full 500GB.
I can't replicate this behavior locally. I have a /home filesystem on my host:
/dev/dm-28 118G 104G 9.0G 93% /home
If I start a docker container like this:
$ docker run -it -v /home:/home fedora bash
I see that space available inside the container:
[root@08671029e0ae /]# df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/luks-8245e7a1-dd00-48aa-9f24-9cbe887d114a 118G 104G 9.0G 93% /home
If you're not seeing this behavior, can you update your question with the specific output of df both inside and outside the container, and the specific docker run command you're using?
