I need to pass the --shm-size option to docker run when I deploy a container image from Container Registry.
According to the documentation I need to use the Arguments field under Advanced container options, but it doesn't work for me.
I've added --shm-size 1G like this:
Running docker exec -it 68d... df -h still returns the default shm size:
shm 64M 0 64M 0% /dev/shm
Could somebody suggest how can I solve my issue?
I also tried to increase it manually inside the docker container but hit a
"mount: /dev/shm: permission denied" error.
UPDATE
Solution:
I've created a bash script, used as the entry point, which sets the /dev/shm size manually:
#!/bin/bash
echo "none /dev/shm tmpfs defaults,size=500m 0 0" >> /etc/fstab
mount -o remount /dev/shm
dotnet Worker.dll
Dockerfile:
....
USER root
COPY ["Worker/start.sh", "app/"]
CMD ["/bin/bash", "app/start.sh"]
The Arguments field under the advanced container options just passes arguments to the ENTRYPOINT.
You can query the compute metadata key "gce-container-declaration" from inside the container VM with:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration"
For your use case, create a non-container VM, install docker yourself, and run the container with the --shm-size argument.
Related
If I have a docker session already running, is there any way to increase the shared memory?
I know that we can do docker run -it --shm-size=13g dockertag /bin/bash
as shown in https://stackoverflow.com/a/35884597/3776827
or change the docker-compose.yml file.
But if I have a docker session already running, is it still possible to do that? (I have one process running and don't want to stop it)
Where do I put this run command?
docker build -f ./Dockerfile -t <tag> .
docker tag <tag> <repo>
docker push <repo>
If I do docker run -it --shm-size=13g <tag> /bin/bash
I get a shell inside the container, but running docker push after exiting it had no effect.
I am trying to solve these errors on pytorch:
https://github.com/pytorch/pytorch/issues/8976
https://github.com/pytorch/pytorch/issues/973
Pardon my understanding of docker. I am a newbie to it.
But if I have a docker session already running, is it still possible
to do that
The answer is no and yes.
The answer is yes because this is possible if you created a volume when you created the container. You can then increase the size of the mounted /dev/shm, and those changes will be reflected in the container without a restart.
To demonstrate, in the example below /dev/shm from my machine is mounted as /dev/shm in the container.
First, let's check the size of /dev/shm on my machine
anon@anon-VirtualBox:~$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 2.9G 48M 2.9G 2% /dev/shm
Now let's create a docker container, mounting /dev/shm from the host as a volume in the container, and check the size of the container's /dev/shm.
create the container:
anon@anon-VirtualBox: docker run -d -v /dev/shm:/dev/shm bash sleep 100000
bdc043c79cf8b1ba64ee8cfc026f8d62f0b609f63cbca3cae9f5d321fe47b0e0
check size of /dev/shm in container:
anon@anon-VirtualBox: docker exec -it bdc043c df -h /dev/shm
Filesystem Size Used Available Use% Mounted on
tmpfs 2.9G 47.9M 2.8G 2% /dev/shm
You can see the size in the container matches the size on my machine, which verifies we've properly mounted /dev/shm into the container.
Now I'll increase the size of /dev/shm on my machine
anon@anon-VirtualBox:~$ sudo mount -o remount,size=4G /dev/shm
anon@anon-VirtualBox:~$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 4.0G 56M 4.0G 2% /dev/shm
Now we can verify the container has been adjusted (without being restarted)
anon@anon-VirtualBox:~$ sudo docker exec -it bdc043c79cf8 df -h /dev/shm
Filesystem Size Used Available Use% Mounted on
tmpfs 4.0G 55.9M 3.9G 1% /dev/shm
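This check can also be scripted. A small helper of my own (not from the answer above) that pulls the Size column for /dev/shm out of df-style output with awk; it reads from stdin, so the parsing can be tried without docker at all:

```shell
# shm_size: print the "Size" column from `df -h /dev/shm` style
# output (a header line followed by a single data line).
shm_size() {
  awk 'NR == 2 { print $2 }'
}

# Feed it a captured sample; against a live container you would pipe
# `docker exec <container_id> df -h /dev/shm` into it instead.
printf '%s\n' \
  'Filesystem Size Used Available Use% Mounted on' \
  'tmpfs 4.0G 55.9M 3.9G 1% /dev/shm' | shm_size
# prints 4.0G
```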
The answer is no for you because you've already created the container. The configuration for the container can be modified in /var/lib/docker/containers/<container_id>/hostconfig.json by adjusting ShmSize, but this requires a restart of the container to take effect. At that point there's no difference between doing that and creating a new container with a new size using docker run.
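If you do go the hostconfig.json route, note that ShmSize there is a byte count rather than a human-readable size. Assuming GNU coreutils is available, numfmt does the conversion:

```shell
# Convert a human-readable size to the byte count that the ShmSize
# field in hostconfig.json expects; --from=iec treats 1G as 1024^3.
numfmt --from=iec 1G
# prints 1073741824
```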
Where do I put this run command?
docker build -f ./Dockerfile -t <tag> .
docker tag <tag> <repo>
docker push <repo>
docker build: this builds the docker image
docker tag: gives the image an additional tag (note - this is redundant here, since you already specify a tag in the prior command)
docker push: this pushes the image to a remote registry (an image repository, i.e. https://hub.docker.com/)
These steps are all independent of each other and are used when they are needed for their purpose. Tagging an image is optional, just as pushing an image to a registry is optional. The only requirement for running a docker container is that an image exists to run the container from, which is why you specify the image name in the docker run command. So, to answer your question, the docker run command goes after you've built the image.
It's worth noting that when you run, for example, docker run bash, docker looks locally for that image and, if it doesn't exist, will by default attempt to pull it from Docker Hub, i.e. https://hub.docker.com/_/bash. This is why on a fresh install of docker you can run docker run bash and it works without building the bash image first. Your output would look similar to:
docker run bash
Unable to find image 'bash:latest' locally
latest: Pulling from library/bash
050382585609: Pull complete
7bf5420b55e6: Pull complete
1beb2aaf8cf9: Pull complete
Digest: sha256:845d500de4879819b189934ec07de15e8ff8de94b3fd16d2744cbe7eb5ca3323
Status: Downloaded newer image for bash:latest
e4c594907b986af923afe089bdbbac057712b3e27589d12618b3d568df21d533
I would like to be able to have /dev/shm preset to a different value than 64 MB, so that any container I spin up automatically takes on that new value.
I know I can run
docker run --shm-size=2G some-container
but I'd like to be able to do this without having to add the --shm-size flag. Is this possible?
You can set the default value in /etc/docker/daemon.json:
shubuntu1@shubuntu1:/etc/docker$ cat daemon.json
{
"default-shm-size": "1G"
}
If you do not have this file, create it and add the configuration to it.
After modifying it, restart the docker service:
sudo systemctl restart docker
Then confirm it with the following command:
shubuntu1@shubuntu1:/etc/docker$ docker run --rm -it ubuntu df -h | grep shm
shm 1.0G 0 1.0G 0% /dev/shm
You can see the shared memory is now set to 1G, just the value you set in daemon.json. For details, refer to the official guide.
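One caveat: a syntax error in daemon.json prevents the docker daemon from starting at all, so it's worth validating the file before restarting. A quick sketch, assuming python3 is installed (the /tmp path is just for illustration; point it at /etc/docker/daemon.json on a real host):

```shell
# json.tool exits non-zero and prints an error on malformed JSON,
# which would otherwise stop the daemon from coming back up.
printf '{\n  "default-shm-size": "1G"\n}\n' > /tmp/daemon.json
python3 -m json.tool /tmp/daemon.json
```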
I have a basic server on google cloud that just runs a docker container via cron once every 30 minutes. I noticed that the docker command stopped working and I got an error saying
docker: Error response from daemon: no space left on device.
I then noticed that I got this error even when trying to autocomplete in bash by typing cd path/ and hitting tab. I figured something was probably wrong with the storage, so I ran df -h and it showed this:
Filesystem Size Used Avail Use% Mounted on
udev 860M 0 860M 0% /dev
tmpfs 175M 19M 157M 11% /run
/dev/sda1 9.7G 9.7G 0 100% /
tmpfs 871M 0 871M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 871M 0 871M 0% /sys/fs/cgroup
tmpfs 175M 0 175M 0% /run/user/1001
As you can see /dev/sda1 is 100% full for some reason. Why is this happening and how can I fix it?
I noticed there were several thousand exited docker containers so I removed this with this command:
docker rm -v $(docker ps -a -q -f status=exited)
Now the storage usage is 61%, which is still too high.
When running containers that you do not need to keep in a stopped state, it's a good practice to use docker run --rm ... to automatically cleanup the stopped container.
If the container generates volumes, visible in docker volume ls, you'll likely want to clean these once the data is no longer needed. With the --rm flag on run, anonymous volumes (which appear as a long unique id) will be automatically deleted.
Docker also provides the command:
docker system prune
which you can automate to cleanup images, containers, networks, and even volumes. Note that you should take time to understand what this command does before running it, and especially before automating it. When scripting this command, you can use the -f flag to bypass the prompt.
Before running a prune, you can check:
docker system df
to see how much disk is being used by each component, and each component has its own prune command, e.g. docker container prune and docker volume prune, should you want to clean just one area.
For more details on the prune command, see: https://docs.docker.com/engine/reference/commandline/system_prune/
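Since the original setup already runs the container from cron, the cleanup can be scheduled the same way. A sketch of a crontab entry (the schedule and the docker path are assumptions, adjust to taste):

```
# Prune stopped containers, dangling images, and unused networks
# daily at 03:00; -f skips the interactive confirmation prompt.
0 3 * * * /usr/bin/docker system prune -f
```

Note this does not touch volumes; add docker volume prune -f separately once you're sure nothing you need lives in unused volumes.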
I was running a docker container process with:
host$ docker run -it <image> /etc/bootstrap.sh -bash
Then inside of the container I created a file /a.txt:
container$ echo "abc" > /a.txt
container$ cat a.txt
abc
I noticed the filesystem type for / is none:
container$ df -h
Filesystem Size Used Avail Use% Mounted on
none 355G 19G 318G 6% /
tmpfs 1.9G 0 1.9G 0% /dev
...
The inspect command shows that Volumes is null.
host$ docker inspect <image>
...
"Volumes": null,
...
After I exited the container process and restarted, the file disappeared. I wanted to understand:
1) what the root filesystem of the container process actually is;
2) how can I persist the newly created file?
Q: After I exited the container process and restarted, the file disappeared.
A: Data written inside a docker container is not persisted. That is, you lose everything when that container is removed and a new one is started from the image.
Q: What the root filesystem of the container process actually is?
A: I don't really understand this question, but I assume you are asking where the root user's home directory is? If so, root's home is at /root.
Q: How can I persist the newly created file?
A: If you intend to keep the changes even after you restart the container, then you will need to use docker's data volumes.
See:
https://docs.docker.com/engine/tutorials/dockervolumes/
Essentially, when you start the container, you can pass the -v option to tell docker that you would like to map a directory from the host's file system to a directory in the container.
That is, with the example command below,
$ docker run -d -P --name web -v $(pwd):/root <image>
you map your current working directory to the container's /root directory. Everything written to the container's /root area is then reflected in your host's file system and persisted.
I run a server with 2 Docker images; one does building and packaging and thus creates a lot of short-lived stuff in /tmp.
I'd like this container's /tmp not to be backed by persistent storage (union fs or volume) but to use the host's /tmp, which in turn is a tmpfs volume and ideal for such operations. Writing to a normal drive has overhead and causes access to HDDs (wear-out); I would prefer to stay in RAM as much as possible.
Some options are:
Bind /tmp:/tmp into the docker process. Doesn't seem very secure, and problematic if another process accesses this directory.
Bind a volume to /tmp. This means it's on the hard drive unless I manage to move it to /tmp.
There is then still the issue of deleting this volume each time the container stops, since I'd prefer a clean slate.
Mount /tmp as tmpfs in the container. Seems the most sane option, except that would mean editing all containers instead of plainly using existing ones.
I am new to Docker, maybe I am missing something obvious.
I'm searching for a way to specify volumes which can or must be dropped after the container stops, or that are even kept completely in RAM unless this is infeasible.
And additionally some easy way to mount /tmp as such a volume.
Docker allows you to do this using the --tmpfs option.
For example;
docker run -it --tmpfs /tmp ubuntu
Or, using the "advanced" --mount syntax, which allows for additional options to be set:
docker run -it --mount type=tmpfs,destination=/tmp ubuntu
For more information, and additional options that can be used, see the "Use tmpfs mounts" section in the documentation.
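If the containers are started from a compose file rather than docker run, the same thing can be expressed there with the tmpfs key (the service and image names below are made up for illustration):

```
services:
  builder:
    image: my-build-image
    tmpfs:
      - /tmp
```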
You can mount a tmpfs partition on your container's /tmp location if you run that container with the docker run --privileged option:
docker run -it --privileged ubuntu bash -l
root@2ed296ef6a80:/# mount -t tmpfs -o size=256M tmpfs /tmp
root@2ed296ef6a80:/# mount
...
tmpfs on /tmp type tmpfs (rw,relatime,size=262144k)
Or you can create a tmpfs mount on the docker host and mount it as a volume in your container:
# TMPDIR=$(mktemp -d)
# mount -t tmpfs -o size=256M tmpfs $TMPDIR
# docker run -it -v $TMPDIR:/tmp ubuntu bash -l
root@0f0555ec96cb:/# mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,relatime,size=262144k)