Docker volume not used with Redis (mount does show up with inspect)

My final conclusion is that I wasn't able to use the /c/users/... location because that drive wasn't shared in Docker for Windows' settings.
After sharing it I was able to see the /c/users/... directory in all my container instances, and I could then use the -v flag with this directory on every instance, effectively writing files to my host machine.
What I still don't get is that I don't think I'm actually using volumes at the moment... but it works...
I'm trying to have my Docker-hosted Redis instance persist its data, but the mounted volume doesn't seem to be used. I was using Docker with VirtualBox/boot2docker, where the composition worked; I have since moved to Docker for Windows, where the compose file still works, but I'm not sure about the volumes property.
My docker-compose.yml file:
vq-redis:
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - /c/users/r/.docker/data/redis/data:/data
It doesn't matter if I add or remove the volumes definition; docker inspect will always show something like this:
"Mounts": [
{
"Name": "40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f",
"Source": "/var/lib/docker/volumes/40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f/_data",
"Destination": "/data",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
Is the volumes property still working with Docker for Windows or am I missing a point?
Edit:
If I run...
docker run --name vq-redis -d -v //c/users/r/.docker/data/vq-redis:/data redis redis-server --appendonly yes
... I can see the container appearing in Kitematic and with docker inspect I can see a mount going to my local folder. However the local folder isn't shown in Kitematic...
If I add data to the Redis server hosted in the container, then stop the container and start it again the data is gone.
I tried setting the local folder manually in Kitematic. This seems to restart the container, but I'm unsure whether the initial parameters are passed again. You say:
"If the volumes aren't networked on run"
I guess they were actually networked on run as seen in the console.
Still, I can add data to the Redis instance hosted in the container. But as soon as I restart the container it's gone...

It should work. I assume you didn't get any errors (e.g., permission issues, etc.) and that you are removing old builds before rebuilding. Does the "/var/lib/docker/volumes/4079..." directory get created?
You could try using double leading slashes on Windows, which was a work-around for some versions:
volumes:
  - //c/users/r/.docker/data/redis/data:/data
Redis wouldn't have anything to do with the volume not being created, but have you tried other services, or even a basic docker create -v ... or docker run -v ...?
UPDATE:
There may be some gaps in your understanding of how Docker works that may be getting in the way here.
If you do docker run --name some-redis -d redis redis-server --appendonly yes it will create a volume similar to the one you have in your docker inspect output. Clearly you don't have a /var/lib/docker/volumes/... directory on your Windows machine -- that's in the VM docker host (e.g., boot2docker). How you get to the Docker host volumes differs depending on a number of factors.
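For example, with a boot2docker or docker-machine setup you can ssh into that VM to look around; a rough sketch (the machine name "default" is an assumption, yours may differ):
docker-machine ssh default     # or: boot2docker ssh
# inside the VM, the anonymous volumes listed by docker inspect live here
sudo ls /var/lib/docker/volumes/
With Docker for Windows the Docker host is a hidden VM, so that path is not directly browsable from Windows.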
If the volumes aren't networked on run, restarting won't help. Do docker stop some-redis && docker rm some-redis and re-run.
E.g., running this command
docker run --name some-redis -d -v $(pwd)/data:/data redis redis-server --appendonly yes
should work as you expect.
ls ./data => appendonly.aof.
It will obviously be empty at first. Destroying the container and creating a new one with the same directory will show the data is still there:
docker exec some-redis redis-cli set bar baz
docker stop some-redis
docker rm some-redis
docker run --name some-redis2 -d -v $(pwd)/data:/data redis redis-server --appendonly yes
docker exec some-redis2 redis-cli get bar
=> "baz"
(the previous value for "bar" set in the destroyed container).
If this doesn't work for you there could be some issues specific to your environment -- perhaps try a Vagrant-based solution or beta Docker or a native Linux host.

Related

Rebuild Docker container service after local changes in service code

My docker-compose.yml looks as follows:
services:
  application:
    entrypoint: [bin/start]
    image: myimage:release-v5
    env_file: .env
    ...
Recreate the container for that service as follows:
docker-compose up -d --force-recreate --build application
docker-compose restart
The problem is that my local changes don't seem to be there at all; nothing changed after the container service was redeployed!
What am I missing?
To fix the update problem, I made the modifications inside the container itself using vi, not outside of it.
docker exec -it <CONTAINER_ID> /bin/bash
and then exited the container. Lastly, I just restarted the container and my changes were there:
docker restart <CONTAINER_ID>
But I'm still not happy with this way of doing it.
(<CONTAINER_ID> here is the CONTAINER ID as returned by docker ps or docker ps -a.)
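A less hacky option for local development, sketched here with hypothetical paths (assuming the service's code lives in ./app on the host and under /app inside the image), is to bind-mount the source in a development-only override file so edits show up without touching the container:
# docker-compose.dev.yml -- hypothetical development override
services:
  application:
    volumes:
      - ./app:/app
Then bring the service up with both files: docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d application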

How to debug persistent data volume mount for Docker Odoo container?

I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and saw other people also struggling with getting the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project
This seems like a significant gap in Docker's standard use case, and users need better information on how to debug it:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked successfully this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory. The obvious question then is why it didn't work with an empty host-directory volume. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what it expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D)
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying that the Odoo Docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.
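The same three tests can be rolled into a quick shell recipe for probing any image's volume requirements (IMAGE, TARGET and HOSTDIR are placeholders; the Odoo-specific flags such as --link db:db are omitted for brevity):
# placeholders: adjust for your own image
IMAGE=odoo
TARGET=/var/lib/odoo
HOSTDIR=/tmp/odoo
# Test 1: does the image work with an empty named volume?
docker volume create probe-data
docker run -d --name probe -v probe-data:$TARGET $IMAGE
# Test 2: what ownership does the image expect on its data directory?
docker exec probe ls -dn $TARGET
# Test 3: give the host directory that UID:GID (here 101:101) and retry with a bind mount
sudo chown 101:101 $HOSTDIR
docker rm -f probe
docker run -d --name probe -v $HOSTDIR:$TARGET $IMAGE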

docker-compose up not adding code to remote container

I've run through the initial Overview of Docker Compose exactly as written and it works just fine locally with boot2docker. However, if I try to do a docker-compose up on a remote host, it does not add the code to the remote container.
To reproduce:
Run through the initial Overview of Docker Compose exactly as written.
Install Docker Machine and start a Dockerized VM on any cloud provider.
docker-machine create --driver my-favourite-cloud composetest
eval "$(docker-machine env composetest)"
Now that you're working with a remote host, run docker-compose up on the original code.
composetest $ docker-compose up
Redis runs fine but the Flask app does not.
composetest $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
794c90928b97 composetest_web "/bin/sh -c 'python About a minute ago Exited (2) About a minute ago composetest_web_1
2c70bd687dfc redis "/entrypoint.sh redi About a minute ago Up About a minute 6379/tcp composetest_redis_1
What went wrong?
composetest $ docker logs 794c90928b97
python: can't open file 'app.py': [Errno 2] No such file or directory
Can we confirm it?
composetest $ docker-compose run -d web sleep 60
Starting composetest_redis_1...
composetest_web_run_3
composetest $ docker exec -it a4 /bin/bash
root@a4a73c6dd159:/code# ls -a
. ..
Nothing there. Can we fix it?
Comment out volumes in docker-compose.yml
web:
  build: .
  ports:
    - "5000:5000"
  # volumes:
  #   - .:/code
  links:
    - redis
redis:
  image: redis
Then just docker-compose up and it works!
Let's try again on boot2docker.
composetest $ eval "$(boot2docker shellinit)"
composetest $ docker-compose up
Recreating composetest_redis_1...
Recreating composetest_web_1...
Attaching to composetest_redis_1, composetest_web_1
...
The Flask app does work but it has a serious problem. If you change app.py, the Flask dev server doesn't reload and those changes aren't automatically seen. Even if you stop the container and docker-compose up again, the changes still aren't seen. I realize we lose this essential feature because the volume is no longer mounted. But not mounting the volume is the only way I've been able to get docker-compose to work with a remote host. We should be able to get both local and remote hosts to work using the same docker-compose.yml and Dockerfile.
How do I develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml?
Versions:
Docker 1.7.0
Docker Compose 1.3.0
Docker Machine 0.3.0
Under the hood, Compose does pretty much the same thing you can do with the regular command line interface. So your command is roughly equivalent to:
$ docker run --name web -p 5000:5000 -v $(pwd):/code --link redis:redis web
The issue is that the volume is relative to the docker host, not the client. So it will mount the working directory on the remote VM, not the client. In your case, this directory is empty.
If you want to develop interactively with a remote VM, you will have to check out the source and edit the files on the VM.
UPDATE: It seems that you actually want to develop and test locally, then deploy a production version to a remote VM. (Apologies if I still misunderstand). To do this, I suggest you have a separate Compose file for development where you mount the local volume, then rebuild and deploy the image for production. By rebuilding the image, it will pick up the latest version of the code. Mounting a volume in production breaks because you've hidden the code in the image with an empty directory.
It's also worth pointing out that Docker doesn't currently advise using Compose in production.
What I'm really looking for is found in the Using Compose in production doc. By extending services in Compose, you're able to develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml.
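A minimal sketch of that pattern as it looked for Compose 1.x (file names are my own): put the shared service definition in a common file, then extend it in a development file that adds the bind mount and a production file that doesn't:
# common.yml -- shared service definition
web:
  build: .
  ports:
    - "5000:5000"
# docker-compose.yml -- development: extends common.yml and mounts the source
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
  volumes:
    - .:/code
redis:
  image: redis
# production.yml -- same as development, minus the volume
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
redis:
  image: redis
Develop locally with docker-compose up, and deploy by pointing at the production file: docker-compose -f production.yml up -d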
I ran into this "can't open file 'app.py'" problem following the Getting Started tutorials; for me it was because I'm running Docker on Windows. I needed to make sure that I'd shared the drive containing my project directory in Docker settings.
Source: see the "Shared Drive" section of Docker Settings in the docs

How to set an environment variable in a running docker container

If I have a docker container that I started a while back, what is the best way to set an environment variable in that running container? I set an environment variable initially when I ran the run command.
$ docker run --name my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
but now that it has been running for a while I want to add another VIRTUAL_HOST to the environment variable. I do not want to delete the container and then just re-run it with the environment variable that I want, because then I would have to migrate the old volumes to the new container; it has theme files and uploads in it that I don't want to lose.
I would just like to change the value of VIRTUAL_HOST environment variable.
There are generally two options, because Docker doesn't support this feature right now:
Create your own script, which will act as a runner for your command. For example:
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
Or run your command the following way:
docker exec -i CONTAINER_ID /bin/bash -c "export VAR1=VAL1 && export VAR2=VAL2 && your_cmd"
Docker doesn't offer this feature.
There is an issue: "How to set an enviroment variable on an existing container? #8838"
Also from "Allow docker start to take environment variables #7561":
Right now Docker can't change the configuration of the container once it's created, and generally this is OK because it's trivial to create a new container.
For a somewhat narrow use case, docker issue 8838 mentions this sort-of-hack:
You just stop docker daemon and change container config in /var/lib/docker/containers/[container-id]/config.json (sic)
This solution updates the environment variables without the need to delete and re-run the container, having to migrate volumes and remembering parameters to run.
However, this requires a restart of the docker daemon. And, until issue 2658 is addressed, this includes a restart of all containers.
To set up many env vars in one step, and to prevent exposing them in the shell history as the '-e' option does (when passing credentials/API tokens!), you can use the --env-file key_value_file.txt option:
docker run --env-file key_value_file.txt $INSTANCE_ID
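A hypothetical key_value_file.txt is just one KEY=value pair per line (lines starting with # are treated as comments):
# key_value_file.txt -- illustrative values only
DB_PASSWORD=changeme
API_TOKEN=not-a-real-token
VIRTUAL_HOST=domain.example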
Here's how you can modify a running container to update its environment variables. This assumes you're running on Linux. I tested it with Docker 19.03.8
Live Restore
First, ensure that your Docker daemon is set to leave containers running when it's shut down. Edit your /etc/docker/daemon.json, and add "live-restore": true as a top-level key.
sudo vim /etc/docker/daemon.json
My file looks like this:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "live-restore": true
}
Taken from here.
Get the Container ID
Save the ID of the container you want to edit for easier access to the files.
export CONTAINER_ID=`docker inspect --format="{{.Id}}" <YOUR CONTAINER NAME>`
Edit Container Configuration
Edit the configuration file, go to the "Env" section, and add your key.
sudo vim /var/lib/docker/containers/$CONTAINER_ID/config.v2.json
My file looks like this:
...,"Env":["TEST=1",...
Stop and Start Docker
I found that restarting Docker didn't work; I had to stop and then start Docker with two separate commands.
sudo systemctl stop docker
sudo systemctl start docker
Because of live-restore, your containers should stay up.
Verify That It Worked
docker exec <YOUR CONTAINER NAME> bash -c 'echo $TEST'
Single quotes are important here.
You can also verify that the uptime of your container hasn't changed:
docker ps
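You can also dump the environment the daemon now has on record for the container straight from docker inspect:
docker inspect --format '{{json .Config.Env}}' <YOUR CONTAINER NAME>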
You wrote that you do not want to migrate the old volumes. So I assume either the Dockerfile that you used to build the spencercooley/wordpress image has VOLUMEs defined, or you specified them on the command line with the -v switch.
You could simply start a new container which imports the volumes from the old one with the --volumes-from switch like:
$ docker run --name my-new-wordpress --volumes-from my-wordpress -e VIRTUAL_HOST=domain.com --link my-mysql:mysql -d spencercooley/wordpress
So you will have a fresh container, but you do not lose the old data. You do not even need to touch or migrate it.
A well-done container is always stateless. That means its process is supposed to add or modify only files on defined volumes. That can be verified with a simple docker diff <containerId> after the container has run for a while.
In that case it is not dangerous to re-create the container with the same parameters (in your case slightly modified ones), assuming you create it from exactly the same image from which the old one was created and you re-use the same volumes with the above-mentioned switch.
After the new container has started successfully and you verified that everything runs correctly you can delete the old wordpress container. The old volumes are then referred from the new container and will not be deleted.
If you are running the container as a service using docker swarm, you can do:
docker service update --env-add <your environment variable> <service_name>
You can also remove a variable using --env-rm.
To make sure it's added as you wanted, just run:
docker exec -it <container id> env
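For example, to add another virtual host to a hypothetical my-wordpress service:
docker service update --env-add VIRTUAL_HOST=other.example my-wordpress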
1. Enter your running container:
sudo docker exec -it <container_name> /bin/bash
2. Run a command that copies all the environment variables available to your user (except no_proxy) into /etc/environment, so that other sessions in the container that need to run commands pick them up:
printenv | grep -v "no_proxy" >> /etc/environment
3. Stop and Start the container
sudo docker stop <container_name>
sudo docker start <container_name>
Firstly, you can set env vars inside the container the same way as you do on a normal Linux box.
Secondly, you can do it by modifying the config file of your docker container (/var/lib/docker/containers/xxxx/config.v2.json). Note that you need to restart the docker service for it to take effect. This way you can also change other things, like port mappings.
Here is how to update a docker container config permanently:
stop container: docker stop <container name>
edit container config: docker run -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' <container name>)
restart docker
I solved this problem with docker commit: after making some modifications in the base container, we only need to commit them to a new tagged image and start that one.
docs.docker.com/engine/reference/commandline/commit
docker commit [container-id] [tag]
docker commit b0e71de98cb9 stack-overflow:0.0.1
Then you can pass environment vars or an env file:
docker run --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN --env-file env.local -p 8093:8093 stack-overflow:0.0.1
The quick working hack would be:
Get into the running container:
docker exec -it <container_name> bash
Set the env variable:
Install vim if it is not already installed in the container:
apt-get install vim
Open ~/.profile with vi and, at the end of the file, add export MAPPING_FILENAME=p_07302021
source ~/.profile
Check whether it has been set: echo $MAPPING_FILENAME (run this from inside the container).
Now you can run, from inside the container, whatever you were running outside of it.
Note: in case you're worried that you might lose your work if the session you're logged into gets disconnected, you can start screen even before step 1. That way, if your session inside the running container drops by chance, you can log back in and resume.
Keep in mind that docker runs an image constructed from a Dockerfile, and the only way to change that image is to build another one, stop everything, and run everything again.
So the easy way to "set an environment variable in a running docker container" is to read the Dockerfile [1] (with docker inspect) and understand how the container starts [1].
In the example [1] we can see that the container starts with /usr/local/bin/docker-php-entrypoint; we could edit it with vi and add a line with export myvar=myvalue, since /usr/local/bin/docker-php-entrypoint is a POSIX script.
If you can change the Dockerfile, you can instead add a call to a script [2], for example /usr/local/bin/mystart.sh, and set your environment variable in that file.
Of course, after changing the scripts you need to restart the container [3].
[1]
$ docker inspect 011aa33ba92b
[{
    . . .
    "ContainerConfig": {
        "Cmd": [
            "php-fpm"
        ],
        "WorkingDir": "/app",
        "Entrypoint": [
            "docker-php-entrypoint"
        ],
        . . .
}]
[2]
/usr/local/bin/mystart.sh
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
[3]
docker restart dev-php (container name)
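If you go the Dockerfile route from [2], the change might look roughly like this (just a sketch; the ENTRYPOINT wiring is an assumption, and mystart.sh would need to end by exec'ing the image's original entrypoint or command):
# hypothetical Dockerfile addition for [2]
COPY mystart.sh /usr/local/bin/mystart.sh
RUN chmod +x /usr/local/bin/mystart.sh
ENTRYPOINT ["/usr/local/bin/mystart.sh"]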
The hack of editing Docker's internal configs and then restarting the Docker daemon was unsuitable in my case.
There is a way to recreate a container with new environment settings and use it for some time.
1. Create a new image from the running container:
docker commit my-service
a1b2c3d4e5f6032165497
Docker creates a new image and answers with its ID. Note that the image doesn't include mounts and networks.
2. Stop and rename the original container:
docker stop my-service
docker rename my-service my-service-original
3. Create and start new container with modified environment:
docker run \
-it --rm \
--name my-service \
--network=required-network \
--mount type=bind,source=/host/path,target=/inside/path,readonly \
--env MY_NEW_ENV_VAR=blablabla \
--env OLD_ENV=zzz \
a1b2c3d4e5f6032165497
Here, I did the following:
created a new temporary container from the image built in step 1, which will show its output on the terminal, exit on Ctrl+C, and be removed after that
configured its mounts and networks
added my custom environment configuration
4. After you have finished working with the temporary container, press Ctrl+C to stop and remove it, and then bring the old container back:
docker rename my-service-original my-service
docker start my-service
How to set environment variable in a running docker container as a development environment
Basically you can do it like on a normal Linux box, adding export MY_VAR="value" to the ~/.bashrc file.
Instructions
Using VS Code, attach to your running container
Then with VS Code open the ~/.bashrc file
Export your variable by adding this line at the end of the file:
export MY_VAR="value"
Finally, execute .bashrc using the source command:
source ~/.bashrc
You could set an environment variable in a running Docker container with
docker exec -it -e "your environment Key"="your new value" <container> /bin/bash
Verify it using the command below:
printenv
This will update your key with the new value provided.
Note: this will revert to the old value if Docker gets restarted.
Use export VAR=Value
Then type printenv in the terminal to validate that it is set correctly.

Will the linked docker container get the link when it goes down and comes back up?

I've been wondering: if the linked container shuts down and starts again, does the container that is linked with it restore the --link connection?
Fire up 2 containers.
docker run -d --name fluentd millisami/fluentd
docker run -d --name railsapp --link fluentd:fluentd millisami/rails
Now, if the fluentd container is stopped and restarted, will the railsapp container restore the link and the linked ENV vars automatically?
UPDATE:
As of Docker 1.3 (or probably even earlier versions, not sure), the /etc/hosts file will be updated with the new IP of a linked container if it restarts. This means that if you access it via the name in the /etc/hosts entry, as opposed to the environment variable, your link will still work even after a restart.
Example:
When you start two containers, app_image and mysql, and link them like this:
$ docker run --name mysql mysql:5.6.20
$ docker run -d --link mysql:mysql app_image
you'll get an entry in your /etc/hosts within app_image:
# /etc/hosts example
172.17.0.12    mysql
and that ip will be refreshed in case mysql crashes and is restarted.
So don't use environment variables when referring to your linked container:
$ ping $MYSQL_PORT_3306_TCP_ADDR # --> don't use! won't work after the linked container restarts
$ ping mysql # --> instead, access it by the name as in /etc/hosts
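You can confirm the entry from inside the linked container (assuming you started app_image with --name app):
$ docker exec app cat /etc/hosts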
Old answer:
Nope, it won't. In a crashing-containers scenario, links are as good as dead. I think it is pretty much an open problem, i.e., there are many candidate solutions, but none has yet been crowned the standard approach.
You might want to take a look at http://jasonwilder.com/blog/2014/07/15/docker-service-discovery/
