For local development of applications where isolation of folder/file permissions is not mission critical, I'd like to mount local directories into transient containers but retain the ability to delete/modify those directories from the host account without sudo.
I have read that I can create specific volumes and then mount those precreated volumes. The thing is I would like to use self-removing transient containers arbitrarily (e.g. docker run --rm -v "$(pwd):/app" golang go build src/main.go).
The objective is to make it easy to work with various technologies (like databases or languages) without needing to install them on the host.
Running
docker run \
--rm \
-v "${HOME}/foo:/app" \
alpine touch /app/bar.txt
Running this creates the folder/file with root ownership. Trying to use the --user flag still mounts the directory as root, and the command below gives me the following error:
touch: /app/bar.txt: Permission denied
docker run \
--rm \
--user "$(id -u):$(id -g)" \
-v "${HOME}/foo:/app" \
alpine touch /app/bar.txt
This failure obviously extends to trying to spin up a database instance:
docker run \
--rm \
--net=host \
--user $(id -u):$(id -g) \
-v "${HOME}/mongo/foo:/data/db" \
mongo:latest
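Pre-creating the mount points on the host first, so they are already owned by my user, does work for simple cases (a rough sketch; the mongo data directory would need the same treatment), but I'd like to avoid having to remember that step for every throwaway container:
mkdir -p "${HOME}/foo"
docker run \
--rm \
--user "$(id -u):$(id -g)" \
-v "${HOME}/foo:/app" \
alpine touch /app/bar.txt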
I'm currently following this tutorial to run a model on Docker that was built using the Google Cloud AutoML Vision:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial
I'm having trouble running the container, specifically running this command:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}
I have my environment variables set up correctly (did an echo $<env_var>). I do not have a /tmp/mounted_model/0001 directory on my local system. My model path is configured to be the model's location in Cloud Storage.
${YOUR_MODEL_PATH} must be a directory on the host on which you're running the container.
Your question suggests that you're using the Cloud Storage bucket path but you cannot do this.
Reviewing the tutorial, I think the instructions are confusing.
You are told to:
gsutil cp \
${YOUR_MODEL_PATH} \
${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
So, your command should probably be:
sudo docker run \
--rm \
--interactive --tty \
--name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}
NB I added --interactive --tty to make debugging easier; it's optional
NB ${YOUR_LOCAL_MODEL_PATH} not ${YOUR_MODEL_PATH}
NB The command should not end with -t ${CPU_DOCKER_GCR_PATH}; omit the -t
I've not run through this tutorial.
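Putting it together, the full local sequence would look something like this (a sketch only; ${HOME}/automl_model is just a placeholder for whatever host directory you choose for ${YOUR_LOCAL_MODEL_PATH}):
# hypothetical host directory to hold the exported model
export YOUR_LOCAL_MODEL_PATH=${HOME}/automl_model
mkdir -p ${YOUR_LOCAL_MODEL_PATH}
# copy the exported model out of Cloud Storage onto the host
gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
# bind-mount the now-existing host directory into the serving container
sudo docker run \
--rm \
--name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}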
I am using Docker to run TensorFlow and retrain the Inception module. I use the following code:
docker run -it \
--publish 6006:6006 \
--volume ${HOME}/tf_files:/tf_files \
--workdir /tf_files \
tensorflow/tensorflow:1.1.0 bash
Then I run:
python retrain.py \
--bottleneck_dir=bottlenecks \
--how_many_training_steps=500 \
--model_dir=inception \
--summaries_dir=training_summaries/basic \
--output_graph=retrained_graph.pb \
--output_labels=retrained_labels.txt \
--image_dir=flower_photos
When I run this, the flower_photos directory has to be inside the Docker container. However, I want this directory to be in my home directory instead (/user/documents/flower_photos). What should I do?
You could use a volume to associate a host folder with a container folder:
docker run -it \
...
-v /user/documents/flower_photos:/path/to/inception/flower_photos
That way, the inception module would find an existing folder with your host content.
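A minimal sketch, assuming you mount the host folder at /tf_files/flower_photos so the retrain command from the question keeps working unchanged:
docker run -it \
--publish 6006:6006 \
--volume ${HOME}/tf_files:/tf_files \
--volume /user/documents/flower_photos:/tf_files/flower_photos \
--workdir /tf_files \
tensorflow/tensorflow:1.1.0 bash
# inside the container, retrain.py then sees the host's photos
python retrain.py --image_dir=flower_photos ...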
I know docker, but less about bitcoind.
Now I want to use this docker image to start my own test environment:
The description tells me:
docker volume create --name=bitcoind-data
docker run -v bitcoind-data:/bitcoin --name=bitcoind-node -d \
-p 8333:8333 \
-p 127.0.0.1:8332:8332 \
kylemanna/bitcoind
Now I want to know how to add my bitcoind.conf.
This isn't provided anywhere. Can I add it at container startup or via docker exec?
The repository contains a documentation file dedicated to your issue: https://github.com/kylemanna/docker-bitcoind/blob/master/docs/config.md
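In short, you can copy the file into the running node and restart it (a sketch only; the config path inside the container is an assumption on my part, so check that document for the exact location this image expects, and note that bitcoind conventionally names the file bitcoin.conf):
# assumed data directory inside the container; verify against config.md
docker cp bitcoin.conf bitcoind-node:/bitcoin/.bitcoin/bitcoin.conf
docker restart bitcoind-node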
I have a question regarding the whole data volume process in Docker. Basically here are two Dockerfiles and their respective run commands:
Dockerfile 1 -
# Transmission over Debian
#
# Version 2.92
FROM debian:testing
RUN apt-get update \
&& apt-get -y install nano \
&& apt-get -y install transmission-daemon transmission-common transmission-cli \
&& mkdir -p /transmission/config /transmission/watch /transmission/download
ENTRYPOINT ["transmission-daemon", "--foreground"]
CMD ["--config-dir", "/transmission/config", "--watch-dir", "/transmission/watch", "--download-dir", "/transmission/download", "--allowed", "*", "--no-blocklist", "--no-auth", "--no-dht", "--no-lpd", "--encryption-preferred"]
Command 1 -
docker run --name transmission -d -p 9091:9091 -v C:\path\to\config:/transmission/config -v C:\path\to\watch:/transmission/watch -v C:\path\to\download:/transmission/download transmission
Dockerfile 2 -
# Nginx over Debian
#
# Version 1.10.3
FROM debian:testing
RUN apt-get update \
&& apt-get -y install nano \
&& apt-get -y install nginx
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Command 2 -
docker run --name nginx -d -p 80:80 -v C:\path\to\config:/etc/nginx -v C:\path\to\html:/var/www/html nginx
So, the weird thing is that the first Dockerfile and command work as intended: the Docker daemon mounts a directory from the container to the host, so I am able to edit the configuration files as I please and they are persisted to the container on a restart.
However, the second Dockerfile and command don't seem to work. I know the Docker volume documentation says that volume mounts are only intended to go one way, from host to container, but how come the Transmission container works as intended while the Nginx container doesn't?
P.S. I'm running Microsoft Windows 10 Pro Build 14393 as my host, and my Docker version is 17.03.0-ce-win1 (10300), Channel: beta.
Edit - Just to clarify: I'm trying to get the files from inside the Nginx container to the host. The first container (Transmission) works in that regard, by using a data volume. However, the second container (Nginx) doesn't copy the files in the mounted directory from inside the container to the host. Everything else is working, though; it does successfully start.
The host volume will not copy data like a named volume will. However, you can create a named volume that performs a bind mount, which will then have the data initialization properties of any other named volume. The only prerequisite of a bind mount over a host volume is that the directory must exist in advance; Docker will not create it for you like it does with a host volume. Here are three different examples of how to create a bind mount volume:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
So in your example with a docker run command, you can use the mount syntax:
docker run --name nginx -d -p 80:80 \
--mount type=volume,dst=/etc/nginx,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/c/path/to/config \
--mount type=volume,dst=/var/www/html,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/c/path/to/html \
nginx
The only part that may need adjusting is the Windows path names inside the Linux VM that Docker runs in Hyper-V.
Host volumes don't copy data from the container > host. Host volumes mount over the top of what's in the container/image, so they effectively replace what's in the container with what's on the host.
A standard or "named" volume will copy the existing data from the container image into a new volume. These volumes are created by launching a container with the VOLUME instruction in its Dockerfile, or by the docker command
docker run -v myvolume:/var/whatever myimage
By default this data is stored in a "local" volume, "local" being on the Docker host. In your case that is on the VM running Docker rather than your Windows host, so it might not be easily accessible to you.
You could be mistaking Transmission auto-generating files in a blank directory for a copy.
If you really need to keep the VM host > container mappings, then you might have to copy the data manually:
docker create --name nginxcopy nginx
docker cp nginxcopy:/etc/nginx C:\path\to\config
docker cp nginxcopy:/var/www/html C:\path\to\html
docker rm nginxcopy
And then you can map the populated host directories into the container and they will have the default data the image came with.
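For example, after the copy above, the original command from the question should start with the image's default files already present on the host (same command, just run against the now-populated directories):
docker run --name nginx -d -p 80:80 -v C:\path\to\config:/etc/nginx -v C:\path\to\html:/var/www/html nginx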