Docker container not mounting data volume

I've got a locally maintained Docker image that, for some reason, is not mounting the local data volume in the container.
docker run -d -v /mnt/melissadata:/usr/local/tomcat/appconf -p 7070:7070 -p 80:8080 --restart on-failure:3 --name addrgeo imagename
On my local data volume, I have a number of files the service needs, but it's unable to find them.
I know the volume is mounted.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdi 202:128  0  10G  0 disk /mnt/melissadata
And it appears that the Docker container can see the volume...
$ docker inspect addrgeo
...
"Mounts": [
{
"Source": "/mnt/melissadata",
"Destination": "/usr/local/tomcat/appconf",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
findmnt returns:
$ sudo findmnt -o TARGET,PROPAGATION /mnt/melissadata
TARGET           PROPAGATION
/mnt/melissadata private
Any thoughts?

The reason this was happening is that the Docker daemon uses devicemapper to back Docker's layer storage. If the volume was mounted after the Docker daemon started, Docker doesn't know it exists. Restarting the Docker daemon fixes it.
sudo service docker restart
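On a systemd-based distribution, the equivalent is:
sudo systemctl restart docker
Be aware that, unless live restore is enabled, restarting the daemon stops running containers, so you may need to start your container again afterwards.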

Related

Can I use docker tmpfs in WSL2 for running docker containers in RAM?

Is docker tmpfs working on WSL2?
If I run this in WSL2:
docker run -it --rm -e POSTGRES_PASSWORD=secret --tmpfs /var/lib/postgresql/data postgres:13-alpine sh
Will the whole container run in RAM?
[EDIT]
As @Nik found, tmpfs in WSL2 is currently mapped to the filesystem. At the command-line level it behaves as if it were mapped to RAM, but it is actually backed by the filesystem. So be aware of this caveat until it is implemented the way one would assume.
Regarding your first question ("Is docker tmpfs working on WSL2?"), it seems the answer is yes. Try running a container like this:
$ docker run -it --name tmptest --mount type=tmpfs,destination=/mytmp busybox
If you then inspect the container, you can see that /mytmp is mounted correctly as a tmpfs:
"Mounts": [
{
"Type": "tmpfs",
"Source": "",
"Destination": "/mytmp",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
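To read just the mounts without dumping the whole inspect output, a Go template filter works as well (tmptest is the container name used above):
$ docker inspect -f '{{ json .Mounts }}' tmptest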
Some notes about your second question, "Will the whole container run in RAM?":
Only the content of the folder /var/lib/postgresql/data is stored in RAM, not the "whole container", whatever that means.
Also, it seems you're not running the db but a shell instead: the trailing sh in your command overrides the image's default command. So, unless you start the db from that shell, I guess you would have no particular advantage in having /var/lib/postgresql/data in RAM.
Technically speaking, any program has to be loaded into RAM to work, or at least the portion that is currently executing.
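If you want to check this in practice, start the database normally with the tmpfs mount and look at the filesystem type from inside the container. A minimal sketch; the container name pgtmp is just an example:
$ docker run -d --name pgtmp -e POSTGRES_PASSWORD=secret --tmpfs /var/lib/postgresql/data postgres:13-alpine
$ docker exec pgtmp mount | grep postgresql
# expect a line reporting /var/lib/postgresql/data with filesystem type tmpfs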

Docker mounting remote volume with sshfs

I know that this question has been asked on the web, but I am not able to find a suitable solution for me.
I have one server (VM1) with sshfs installed that should provide remote file system storage. I have another server (VM2) where containers run; I would like these containers to use volumes hosted on VM1.
I followed this official Docker guide.
So in VM1 I ran:
docker plugin install vieux/sshfs DEBUG=1 sshkey.source=/home/debian/.ssh/
In VM2 I ran:
docker volume create --name remotevolume -d vieux/sshfs -o sshcmd=debian@vm1:/home/debian/sshfs300 -o IdentityFile=/home/csicari/data/Mega/lavoro/keys/vm-csicari.key -o allow_other -o nonempty
This is the inspect output:
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "vieux/sshfs:latest",
        "Labels": {},
        "Mountpoint": "/mnt/volumes/895d7f7679f69131500c786d7fe5fdc1",
        "Name": "remotevolume",
        "Options": {
            "IdentityFile": "/home/csicari/data/Mega/lavoro/keys/vm-csicari.key",
            "sshcmd": "debian@vm1:/home/debian/sshfs300"
        },
        "Scope": "local"
    }
]
In VM1 I also ran:
docker run -it -v remotevolume:/home -d ubuntu
But I got this error:
docker: Error response from daemon: VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer
). See 'docker run --help'.
This question was asked a while back, but maybe it will help other newbies. On the remote VM, check the PasswordAuthentication property in the /etc/ssh/sshd_config file.
If it is 'yes', you can pass the password as a parameter; otherwise, change it to 'yes' and restart the ssh or sshd service:
service ssh restart
service ssh status
PasswordAuthentication can also depend on your PAM configuration, so check that as well.
If it is an AWS instance, reset the password using the command passwd ubuntu (here ubuntu is the default user in an EC2 Ubuntu instance).
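If password authentication is enabled, the vieux/sshfs driver can also take the password directly as a volume option instead of an identity file. A sketch; the password value is a placeholder:
docker volume create --name remotevolume -d vieux/sshfs \
  -o sshcmd=debian@vm1:/home/debian/sshfs300 \
  -o password=yourpassword \
  -o allow_other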

Why is Config.User an empty string when I inspect a Docker container?

I pulled and ran the Docker container jcsilva/docker-kaldi-gstreamer-server:
docker pull jcsilva/docker-kaldi-gstreamer-server
docker run -it -p 8080:80 -v /media/kaldi_models:/opt/models jcsilva/docker-kaldi-gstreamer-server:latest /bin/bash
I can see it running:
username@server:~$ docker ps
CONTAINER ID   IMAGE                                          COMMAND       CREATED        STATUS        PORTS                  NAMES
24f598fd5019   jcsilva/docker-kaldi-gstreamer-server:latest   "/bin/bash"   12 hours ago   Up 12 hours   0.0.0.0:8080->80/tcp
When I inspect it:
username@server:~$ docker inspect 24f598fd501911f32e10884ed3f86547e05a031f0d31324badc40c5fb5ed732a > inspection.json
I see in the inspection.json:
"Config": {
"Hostname": "24f598fd5019",
"Domainname": "",
"User": "", <-- Why is User an empty string?
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"ExposedPorts": {
"80/tcp": {}
},
What could explain that Config.User is an empty string?
I use Docker version 18.03.0-ce, build 0520e24 on Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-119-generic x86_64).
Because the Dockerfile doesn't have a USER directive, most likely. Why are you expecting it to have one?
Of all of the containers running on my PC right now, only one has a user, and that's grafana, because that's what's in the Dockerfile. It's not the user (me: roger) that launched it.
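For reference, Config.User only gets a value when the image itself sets one. A minimal, hypothetical Dockerfile that would make docker inspect report Config.User as "appuser":
FROM busybox
# create an unprivileged user and make it the container's default user
RUN adduser -D appuser
USER appuser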
As far as I know, docker inspect shows only the configuration that the container started with.
Because commands like entrypoint (or any init script) might change the user, those changes will not be reflected in the docker inspect output.
To work around this, you can overwrite the default entrypoint set by the image with --entrypoint="" and specify a command like whoami or id after it.
In your case:
docker container run --rm --entrypoint "" jcsilva/docker-kaldi-gstreamer-server whoami
You can see that the output for the image you specified is the root user.
Read more about --entrypoint "" here.
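If you only want that one field, a Go template keeps it short (the container ID is the one from your docker ps output):
docker inspect -f '{{ .Config.User }}' 24f598fd5019
An empty line of output confirms that no user was configured for the container.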

Docker volume not used with Redis (mount does show up with inspect)

My final conclusion is that I wasn't able to use the /c/users/... location because the drive wasn't shared in the Docker for Windows settings.
After this I was able to see the /c/users/.. directory in all my container instances, and I was then able to use the -v flag with this directory on every instance, basically writing files to my host machine.
What I still don't get is that I don't think I'm actually using volumes at the moment... But it works...
I'm trying to have my Docker-hosted Redis instance persist its data, but the mounted volume doesn't seem to be used. I was using Docker with VirtualBox/boot2docker, where the composition worked; I have since moved to Docker for Windows, where the compose file still works, but I'm not sure about the volumes property.
My docker-compose.yml file:
vq-redis:
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - /c/users/r/.docker/data/redis/data:/data
It doesn't matter if I add or remove the volumes definition, because it will always show something like this with docker inspect:
"Mounts": [
{
"Name": "40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f",
"Source": "/var/lib/docker/volumes/40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f/_data",
"Destination": "/data",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
Is the volumes property still working with Docker for Windows or am I missing a point?
Edit:
If I run...
docker run --name vq-redis -d -v //c/users/r/.docker/data/vq-redis:/data redis redis-server --appendonly yes
... I can see the container appearing in Kitematic, and with docker inspect I can see a mount going to my local folder. However, the local folder isn't shown in Kitematic...
If I add data to the Redis server hosted in the container, then stop the container and start it again, the data is gone.
I tried setting the local folder manually in Kitematic. This seems to restart the container, but I'm unsure whether the initial parameters are passed again. You say:
"If the volumes aren't networked on run"
I guess they actually were networked on run, as seen in the console.
Still, I can add data to the Redis instance hosted in the container, but as soon as I restart the container, it's gone...
It should work. I assume you didn't get any errors (e.g., permission issues, etc.) and that you are removing old builds before rebuilding. Does the "/var/lib/docker/volumes/4079..." directory get created?
You could try using double leading slashes on Windows, which was a work-around for some versions:
volumes:
  - //c/users/r/.docker/data/redis/data:/data
Redis wouldn't have anything to do with the volume not being created, but have you tried other services, or even a basic docker create -v ... or docker run -v ...?
UPDATE:
There may be some gaps in your understanding of how Docker works that are getting in the way here.
If you do docker run --name some-redis -d redis redis-server --appendonly yes it will create a volume similar to the one you have in your docker inspect output. Clearly you don't have a /var/lib/docker/volumes/... directory on your Windows machine -- that's in the VM docker host (e.g., boot2docker). How you get to the Docker host volumes differs depending on a number of factors.
If the volumes aren't networked on run, restarting won't help. Do docker stop some-redis && docker rm some-redis and re-run.
E.g., running this command:
docker run --name some-redis -d -v $(pwd)/data:/data redis redis-server --appendonly yes
should work as you expect.
ls ./data => appendonly.aof.
It will obviously be empty at first. Destroying the container and creating a new one with the same directory will show the data is still there:
docker exec some-redis redis-cli set bar baz
docker stop some-redis
docker rm some-redis
docker run --name some-redis2 -d -v $(pwd)/data:/data redis redis-server --appendonly yes
docker exec some-redis2 redis-cli get bar
=> "baz"
(the previous value for "bar" set in the destroyed container).
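You can also confirm on the host that the write landed in the bind-mounted directory; the AOF is a plain-text command log, so the key shows up in it:
$ ls ./data
appendonly.aof
$ grep bar ./data/appendonly.aof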
If this doesn't work for you, there could be some issues specific to your environment; perhaps try a Vagrant-based solution, the beta Docker, or a native Linux host.

Docker volume mount doesn't exist

I'm running Docker 1.11 on OS X and I'm trying to figure out where my local volumes are being written. I created a Docker volume by running docker volume create --name mysql. I then ran docker volume inspect mysql and it output the following:
[
    {
        "Name": "mysql",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mysql/_data",
        "Labels": {}
    }
]
The issue is /mnt/sda1/var/lib/docker/volumes/mysql/_data doesn't actually exist on my machine. I thought maybe the issue was that it didn't actually get created until it was used by a container so I started a container by running docker run --name mysql -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysql -P -d mysql:5.7 and then created a database in MySQL, but the mount point still doesn't exist. I even ran docker inspect mysql to ensure it's using the correct volume and got the following:
...
"Mounts": [
{
"Name": "mysql",
"Source": "/mnt/sda1/var/lib/docker/volumes/mysql/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": "rprivate"
}
],
...
At this point I'm completely lost as to where the data is being written. What am I missing?
Because Docker is based on Linux, it cannot run directly on Windows/OS X. Instead, it runs inside a VirtualBox virtual machine (a Docker Machine) that runs a Linux operating system. That's why when you install Docker Toolbox you see that VirtualBox is installed.
To see files and folders inside this virtual machine, use
docker-machine ssh default
default is the name of the default Docker Machine.
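Once inside the VM, you should find the Mountpoint path reported by docker volume inspect, for example:
$ docker-machine ssh default
$ sudo ls /mnt/sda1/var/lib/docker/volumes/mysql/_data
The data written by the mysql container lives there, inside the VM, rather than on the OS X filesystem itself.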
Docker 19.03.8:
I changed the mountpoint to a folder that I created on my Mac. After that it worked.
$ mkdir volume_database
$ docker container run -it \
--volume=/volume_database:/volume_database debian
