I'm trying to mount a directory from my host to my container with
docker ... -v /host/path/to/dir:/container/dir ...
On the host, /host/path/to/dir has ~500GB of available space, but inside the container the mount point shows only 20GB.
Why is this, and how can I fix it? I want to expose the full 500GB.
I can't replicate this behavior locally. I have a /home filesystem on my host:
/dev/dm-28 118G 104G 9.0G 93% /home
If I start a docker container like this:
$ docker run -it -v /home:/home fedora bash
I see that space available inside the container:
[root@08671029e0ae /]# df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/luks-8245e7a1-dd00-48aa-9f24-9cbe887d114a 118G 104G 9.0G 93% /home
If you're not seeing this behavior, can you update your question with the specific output of df both inside and outside the container, and the specific docker run command you're using?
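To make that comparison easy, here is a small helper (just a sketch; the name df_space is made up for illustration) that prints only the size and available columns for the filesystem backing a path, so you can run the same command on the host and inside the container and diff the two results:

```shell
#!/bin/sh
# Print "<size> <avail>" for the filesystem backing a given path.
# -P forces POSIX output so long device names don't wrap onto two lines;
# NR==2 skips df's header line.
df_space() {
    df -hP "$1" | awk 'NR==2 {print $2, $4}'
}

df_space /
```

Run it against the same directory on the host and in the container; if the volume is mounted correctly, the two outputs should match.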
Related
If I have a docker session already running, is there any way to increase the shared memory?
I know that we can do docker run -it --shm-size=13g dockertag /bin/bash
as shown in https://stackoverflow.com/a/35884597/3776827
or change the docker-compose.yml file.
But if I have a docker session already running, is it still possible to do that? (I have one process running and don't want to stop it)
Where do I put this run command?
docker build -f ./Dockerfile -t <tag> .
docker tag <tag> <repo>
docker push <repo>
If I do docker run -it --shm-size=13g <tag> /bin/bash, I get a shell inside the container. Running docker push afterwards (after exiting the container) had no effect.
I am trying to solve these errors on pytorch:
https://github.com/pytorch/pytorch/issues/8976
https://github.com/pytorch/pytorch/issues/973
Pardon my understanding of docker. I am a newbie to it.
But if I have a docker session already running, is it still possible
to do that
The answer is no and yes.
The answer is yes because this is possible if you created a volume when you created the container. You can then increase the size of the mounted /dev/shm, and those changes will be reflected in the container without a restart.
To demonstrate, in the example below /dev/shm from my machine is mounted as /dev/shm in the container.
First, let's check the size of /dev/shm on my machine
anon@anon-VirtualBox:~$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 2.9G 48M 2.9G 2% /dev/shm
Now let's create a docker container mounting in /dev/shm as a volume in the container and check the size of the container's /dev/shm
create the container:
anon@anon-VirtualBox: docker run -d -v /dev/shm:/dev/shm bash sleep 100000
bdc043c79cf8b1ba64ee8cfc026f8d62f0b609f63cbca3cae9f5d321fe47b0e0
check size of /dev/shm in container:
anon@anon-VirtualBox: docker exec -it bdc043c df -h /dev/shm
Filesystem Size Used Available Use% Mounted on
tmpfs 2.9G 47.9M 2.8G 2% /dev/shm
You can see that the size in the container matches the size on my machine, which verifies we've properly mounted /dev/shm into the container.
Now I'll increase the size of /dev/shm on my machine
anon@anon-VirtualBox:~$ sudo mount -o remount,size=4G /dev/shm
anon@anon-VirtualBox:~$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 4.0G 56M 4.0G 2% /dev/shm
Now we can verify the container has been adjusted (without being restarted)
anon@anon-VirtualBox:~$ sudo docker exec -it bdc043c79cf8 df -h /dev/shm
Filesystem Size Used Available Use% Mounted on
tmpfs 4.0G 55.9M 3.9G 1% /dev/shm
The answer is no for you because you've already created the container. The container's configuration can be modified in /var/lib/docker/containers/<container_id>/hostconfig.json by adjusting ShmSize, but this requires a restart of the container to take effect. At that point there's no difference from creating a new container and specifying the new size with docker run.
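To illustrate that last point, the edit is just a change to one JSON field. Here is a sketch against a fabricated hostconfig.json under /tmp (the real file lives under /var/lib/docker/containers/<container_id>/, and you typically need to stop the docker daemon before editing it, or your change may be overwritten):

```shell
# Fabricated stand-in for /var/lib/docker/containers/<container_id>/hostconfig.json
cat > /tmp/hostconfig.json <<'EOF'
{"ShmSize": 67108864, "Privileged": false}
EOF

# Set ShmSize to 13 GiB (13 * 1024^3 bytes) by round-tripping the JSON.
python3 - <<'EOF'
import json

path = "/tmp/hostconfig.json"
with open(path) as f:
    cfg = json.load(f)
cfg["ShmSize"] = 13 * 1024**3
with open(path, "w") as f:
    json.dump(cfg, f)
EOF

grep -o '"ShmSize": 13958643712' /tmp/hostconfig.json
```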
Where do I put this run command?
docker build -f ./Dockerfile -t <tag> .
docker tag <tag> <repo>
docker push <repo>
docker build: this builds the docker image
docker tag: give the image an optional tag (note - this is redundant since you specify a tag in the prior command)
docker push: this pushes the image to a remote registry (an image repository, i.e. https://hub.docker.com/)
These steps are all independent of each other and are used as needed. Tagging an image is optional, just as pushing an image to a registry is optional. The only requirement to run a docker container is that an image exists to run the container from, which is why you specify the image name in the docker run command. So, to answer your question, the docker run command goes after you have built the image.
It's worth noting that when you run, for example, docker run bash, docker looks locally for that image and, if it doesn't exist, will (by default) attempt to pull it from Docker Hub, i.e. https://hub.docker.com/_/bash. This is why, from a fresh install of docker, you can run docker run bash and it works without building the bash image first. Your output would look similar to
docker run bash
Unable to find image 'bash:latest' locally
latest: Pulling from library/bash
050382585609: Pull complete
7bf5420b55e6: Pull complete
1beb2aaf8cf9: Pull complete
Digest: sha256:845d500de4879819b189934ec07de15e8ff8de94b3fd16d2744cbe7eb5ca3323
Status: Downloaded newer image for bash:latest
e4c594907b986af923afe089bdbbac057712b3e27589d12618b3d568df21d533
I have a script used in production that does basically this:
mkfs.ext4 ... /dev/sdb1
mount /dev/sdb1 /folder
and so on
I have a Docker environment where I simulate my production environment. Now, what I need is to be able to use the same script in both environments. To do that, I need Docker to provide a /dev/sdb1 device and attach a volume to it in some way, so that when I run the commands above my volume is mounted at /folder.
I know this can be done easily with:
docker run -t <tag> -v <my volume>:/folder -it /bin/bash
But in this way, things are a little different inside the Docker container, and I'd need to modify my script (in my case I have several scripts to change).
Is there a way to do something like:
docker run -t <tag> -v <my volume>:/dev/sdb1 -it /bin/bash
so that when in Docker I do:
mount /dev/sdb1 /folder
I mount my external volume to /folder in the container?
Have you tried running docker with the privileges to do the mount?
If you launch docker run --privileged or docker run --cap-add=SYS_ADMIN, /dev/sdb1 may be accessible from inside the container, so it should be possible to do mount /dev/sdb1.
For further information about docker container privileges, please see: Docker Documentation privileged mode and capabilities
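If /dev/sdb1 doesn't exist on the machine running your test environment, one possible workaround (a sketch only, untested against your setup; the file name, size, and loop-device numbering are all assumptions) is to build a file-backed ext4 image on the host and attach it inside a privileged container as a loop device:

```shell
# Create a small file-backed ext4 image (no root or docker needed for this
# part); 64M and the file name are arbitrary choices for illustration.
truncate -s 64M /tmp/sdb1.img
# Guard on mkfs.ext4 being installed (it ships in the e2fsprogs package);
# -F is needed because the target is a regular file, not a block device.
if command -v mkfs.ext4 >/dev/null 2>&1; then
    mkfs.ext4 -q -F /tmp/sdb1.img
fi

# Inside a privileged container you could then attach and mount it, e.g.:
#   docker run -it --privileged -v /tmp/sdb1.img:/sdb1.img <tag> /bin/bash
#   losetup /dev/loop0 /sdb1.img
#   mount /dev/loop0 /folder
```

Your scripts would then only need the device name changed, rather than the mkfs/mount logic itself.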
I was running a docker container process with:
host$ docker run -it <image> /etc/bootstrap.sh -bash
Then inside of the container I created a file /a.txt:
container$ echo "abc" > /a.txt
container$ cat /a.txt
abc
I noticed the filesystem type for / is none:
container$ df -h
Filesystem Size Used Avail Use% Mounted on
none 355G 19G 318G 6% /
tmpfs 1.9G 0 1.9G 0% /dev
...
The inspect command shows the volumes is null.
host$ docker inspect <image>
...
"Volumes": null,
...
After I exited the container process and restarted, the file disappeared. I wanted to understand:
1) what the root filesystem of the container process actually is;
2) how can I persist the newly created file?
Q: After I exited the container process and restarted, the file disappeared.
A: Data written to a container's writable layer is not persisted. That is, you lose those changes when you exit and start a new container from the image.
Q: What the root filesystem of the container process actually is?
A: I don't really understand this question, but I assume you are asking where the root user's home directory is? If so, root's home is at /root.
Q: How can I persist the newly created file?
A: If you are intending to keep the changes even after you restart the container then you will need to use docker's data volume.
See:
https://docs.docker.com/engine/tutorials/dockervolumes/
Essentially when you start the container, you can pass in the -v option to tell the container that you would like to map the directory from the host's file system to the container's directory.
That is by doing the example command below,
$ docker run -d -P --name web -v $(pwd):/root <image>
you map your current working directory to the container's /root directory. So everything written to the container's /root area is reflected in your host's file system and is persisted.
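If you don't want to tie the data to a specific host directory, a named volume works too. A sketch, wrapped in a function for reuse (the names mydata, web, and the image argument are placeholders):

```shell
# "mydata" and "web" are placeholder names; pass the image as the argument.
run_with_named_volume() {
    docker volume create mydata
    # Everything the container writes under /root now lives in the
    # "mydata" volume and survives removal of the container itself.
    docker run -d -P --name web -v mydata:/root "$1"
}

# Usage (requires docker):
#   run_with_named_volume <image>
```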
I run a server with 2 Docker images; one does building and packaging and thus creates a lot of short-lived files in /tmp.
I'd like this container's /tmp to not be backed by a persistent volume (union fs or volume) but to use the host's /tmp, which in turn is a tmpfs volume and ideal for such operations. Writing to a normal drive has overhead and causes HDD access (wear-out); I would prefer to stay in RAM as much as possible.
Some options are:
Bind /tmp/:/tmp to the docker process. Doesn't seem very secure, and problematic if another process accesses this directory.
Bind a volume to /tmp. This means it's on the hard drive unless I manage to move it to /tmp.
There is then still the issue of deleting this volume each time the container stops, since I'd prefer a clean slate.
Mount /tmp as tmpfs in the container. Seems the most sane option, except that would mean editing all containers instead of plainly using existing ones.
I am new to Docker, maybe I am missing something obvious.
I search for a way to specify volumes which can or have to be dropped after the container stops. Or even are kept completely in RAM unless this is infeasible.
And additionally some easy way to mount /tmp as such a container.
Docker allows you to do this using the --tmpfs option.
For example;
docker run -it --tmpfs /tmp ubuntu
Or, using the "advanced" --mount syntax, which allows for additional options to be set:
docker run -it --mount type=tmpfs,destination=/tmp ubuntu
For more information, and additional options that can be used, see the "Use tmpfs mounts" section in the documentation.
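The --tmpfs flag also accepts mount options after a colon, so you can cap the size. A sketch wrapping this in a function (the 256m size and 1777 mode are arbitrary examples chosen to mirror a typical host /tmp):

```shell
# Run an image with an in-memory /tmp capped at 256 MiB, then print
# df for /tmp inside the container to confirm the tmpfs mount.
run_with_ram_tmp() {
    docker run --rm --tmpfs /tmp:rw,size=256m,mode=1777 "$1" df -h /tmp
}

# Usage (requires docker):
#   run_with_ram_tmp ubuntu
```

Because the tmpfs is created per-container and discarded on exit, this also gives you the "clean slate on stop" behavior asked for in the question.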
You can mount a tmpfs partition on your container's /tmp location if you run that container with the docker run --privileged option:
docker run -it --privileged ubuntu bash -l
root@2ed296ef6a80:/# mount -t tmpfs -o size=256M tmpfs /tmp
root@2ed296ef6a80:/# mount
...
tmpfs on /tmp type tmpfs (rw,relatime,size=262144k)
Or you can create a tmpfs mount on the docker host and mount it as a volume in your container:
# TMPDIR=$(mktemp -d)
# mount -t tmpfs -o size=256M tmpfs $TMPDIR
# docker run -it -v $TMPDIR:/tmp ubuntu bash -l
root@0f0555ec96cb:/# mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,relatime,size=262144k)
I'm new to docker, and I'm trying to mount the root directory of a docker container as an NFS mount point.
For example, I have an NFS mount point test:/home/user/3243, and I'm trying:
docker run -it -v "test:/home/user/3243":/ centos7 /bin/bash
Unsurprisingly, it failed. So I tried this:
mount -t nfs test:/home/user/3243 /mnt/nfs/3243
docker run -it -v /mnt/nfs/3243:/ centos7 /bin/bash
but it failed again. How can I do this? Can it be made to work?
A couple of issues here:
You cannot mount to the root directory of a container. So docker run -v /foo:/ will never work.
With the syntax of your first attempt, -v test:/foo:bar, Docker would see this as wanting to create a "named" volume called "test".
You should be able to first do the NFS mount, then do docker run -v /mnt/nfs/3243:/foo to have the nfs path mounted to /foo.
But again, you can't mount to /.
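As an aside, if the goal is simply NFS-backed data in the container (rather than NFS at /), docker can create a named volume using the local driver's nfs options, which avoids the separate host-side mount step. A sketch (the server name and export path come from the question; the volume name and mount path inside the container are placeholders):

```shell
# Create a named volume backed directly by an NFS export.
# $1 = NFS server, $2 = export path, $3 = volume name.
make_nfs_volume() {
    docker volume create \
        --driver local \
        --opt type=nfs \
        --opt o=addr="$1",rw \
        --opt device=:"$2" \
        "$3"
}

# Usage (requires docker and a reachable NFS server):
#   make_nfs_volume test /home/user/3243 nfs3243
#   docker run -it -v nfs3243:/mnt/3243 centos7 /bin/bash
```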
That is currently discussed (since mid 2014) in issue 4213.
One recent workaround by Jeroen van Bemmel (jbemmel) was:
It appears that NFS functionality depends on the underlying storage driver (aufs, devicemapper, etc.), as well as on the sharing of file handles between processes (see the blog post "docker: devicemapper fix for “device or resource busy” (EBUSY)"), i.e. 'unshare' may have an impact on NFS mounts.
I've moved away from using the 'MOUNTPOINT=/vm/nfs' as I am not sure if that event is even emitted.
Instead I created an upstart file like this:
cat > /etc/init/ecdn.conf << EOF
description "eCDN container"
author "Jeroen van Bemmel"
# mounted MOUNTPOINT=/vm/nfs doesn't seem to work, at least not the first time
start on started docker and virtual-filesystems
stop on starting rc RUNLEVEL=[016]
respawn
script
exec /usr/bin/docker start -a ecdn
end script
pre-stop script
/usr/bin/docker stop ecdn
# don't /usr/bin/docker rm ecdn
end script
EOF
and then create the container like this:
script -c "docker create -it --name='ecdn' --volume /vm:/usr/share/nginx/html/vm:ro image/name"