My container is not running when started with a mounted volume - docker

I've created a volume with docker volume create my-vol on my machine. But when I run my container as follows:
docker run -d \
--name=ppshein-test \
--mount source=my-vol,destination=/var/www/ -p 3000:3000 \
ppshein:latest
and found that my container is not working, so I checked its logs:
> sample-docker#1.0.0 start /var/www
> node index.js
and that is all I found, as shown above. So I tried to run the same image without attaching the volume, as follows:
docker run -d --restart=always -p 3001:3000 ppshein:latest
and found that it works smoothly. But when I checked its container logs, I found the following:
> sample-docker#1.0.0 start /var/www
> node index.js
Example app listening on port 3000!
Oddly, the last container logs Example app listening on port 3000!, but that same message never appears in the previous container's logs.
Please let me know why. Thanks much.

I think this may be what you are looking for
(from the Docker docs):
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host, Docker does not automatically create it for you, but generates an error.
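As a quick check (just a sketch, reusing the container and volume names from the question), you can confirm the volume exists and look at the stopped container's exit code and logs:
# list volumes and confirm my-vol is present
docker volume ls
# exit code of the stopped container (non-zero means the app crashed)
docker inspect --format '{{ .State.ExitCode }}' ppshein-test
# full logs of the stopped container
docker logs ppshein-test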

Related

403 with nginx for a file located in a bind-mounted volume with docker

I am trying to use my nginx server in Docker, but I cannot use the files/folders that belong to my volume. The goal of my test is to keep a volume shared between the files on my computer and the container.
I have searched for 3 days and tried a lot of solutions with no effect (useradd, chmod, chown, www-data, etc.).
I don't understand how it is possible to use nginx, a volume, and Docker together.
The only solution that works for me so far is to copy my volume's folder into another folder, so that I can chown it and use nginx. There is no official solution on the web, and I am surprised, because to me using Docker with a volume bound to its container would be basic for daily work.
If someone has managed to implement it, I would be very happy if you could share your code. I need to understand what I am missing.
FYI I am working with a VM.
Thanks!
I think you are not passing the right path in the volume option. There are a few ways to do it: you can pass the full path, or you can use $(pwd) if you are on a Linux machine. Let's say you are in /home/my-user/code/nginx/ and your HTML files are in the html folder.
You can use:
$ docker run --name my-nginx -v /home/my-user/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v ~/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
I've created an index.html file inside the html folder; after the docker run, I was able to open it:
$ echo "hello world" >> html/index.html
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
$ curl localhost:8080
hello world
You can also create a Dockerfile, but you would need to use the COPY command. I'll give a simple working example, but you should improve it, for example by pinning a version tag.
Dockerfile:
FROM nginx
COPY ./html /usr/share/nginx/html
...
$ docker build -t my-nginx:0.0.1 .
$ docker run -d -p 8080:80 my-nginx:0.0.1
$ curl localhost:8080
hello world
You can also use docker-compose. By the way, those examples are just to give you some idea of how it works.
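For completeness, here is a rough docker-compose sketch of the same read-only bind mount (the file contents and service name are illustrative, assuming the same html folder as above):
$ cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro   # read-only bind mount of the local html folder
EOF
$ docker-compose up -d
$ curl localhost:8080
hello world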

Can't save file on remote Jupyter server running in docker container

I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere in SO, but no luck.
Is this something about connecting over SSH? The Jupyter server thinks it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more probably related to a permission issue with your volume mount.
Please try reviewing your Docker container logs, looking for permission-related errors. You can do that with the following:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
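While you are in that shell, it can also help to compare the ownership of the mounted directory with your effective user (standard shell commands, shown only as a sketch):
# who the container process is running as
id
# owner, group and permissions of the mounted work directory
ls -ld /home/jovyan/work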
Finally, the Jupyter docker stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided in the Additional tips and troubleshooting commands for permission-related errors section and, as suggested there, try launching the container with your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in the mentioned documentation, check whether the volume is properly mounted using the following command:
docker inspect <container_id>
In the obtained result note the value of the RW field which indicates whether the volume is writable (true) or not (false).
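If you only want the mount details, a --format filter along these lines can narrow the output (a sketch; the container id is a placeholder):
# prints only the Mounts section (including the RW flag) as JSON
docker inspect --format '{{ json .Mounts }}' <container_id>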

File on host machine not available in container when using bind volume

I am facing an issue where, after running the container and using a bind mount to mount a host directory into the container, I am not able to see new files created on the host machine inside the container. Below is my project structure.
The Python code creates a file inside the container which should be available on the host machine too; however, this does not happen when I start the container with the command below. Updates to the Python code and HTML, on the other hand, are available inside the container.
sudo docker container run -p 5000:5000 --name flaskapp --volume feedback1:/app/feedback/ --volume /home/deepak/PycharmProjects/NewDockerProject/sampleapp:/app flask_image
However, after starting the container using the command below, everything seems to work fine. I can see all the files from container to host and vice versa (newly created, edited). I got this command from the Docker in a Month of Lunches book.
sudo docker container run --mount type=bind,source=/home/deepak/PycharmProjects/NewDockerProject/sampleapp,target=/app -p 5000:5000 --name flaskapp flask_image
Below is the content of my Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python","main.py"]
Could someone please help me figure out the difference between the two commands? I am using Ubuntu. Thank you.
In my case I got volumes working using the following docker run args (but I am running without --mount type=bind):
docker run -it ... -v mysql_data:/var/lib/mysql -v storage:/usr/shared/app_storage
where:
mysql_data is the volume name
/var/lib/mysql is the path inside the container
You can list volumes with:
docker volume ls
and inspect them to see where they point on your system (usually /var/lib/docker/volumes/{volume_name}/_data):
docker volume inspect mysql_data
To create a volume, use the following command:
docker volume create {volume_name}
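For example, reusing the volume name from the question, you can see where the feedback1 named volume actually lives on the host (a sketch; the printed path is the usual default mentioned above):
# show the host directory backing the named volume
docker volume inspect --format '{{ .Mountpoint }}' feedback1
# typically prints /var/lib/docker/volumes/feedback1/_data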

Dockerfile: How to mount a host directory into a container path?

If I have a log folder on my localhost at:
/var
    /logs
        apache.logs
        elasticsearch.logs
        etc...
And I want to mount the /var/logs directory of my host into a path inside a Docker container, like /usr/var/logs/, how do I do that within a Dockerfile? So that each time a log file is updated, it is accessible within the container too.
Thank you
You cannot mount a volume in a Dockerfile.
Because:
A Dockerfile builds an image, and an image is independent of any particular host machine.
An image should run anywhere on the same platform; for example, a Linux image can run on Fedora, CentOS, Ubuntu, Red Hat, etc.
So you can only mount the volume into the container, because a container runs on a specific host machine.
Hope you understand it. Sorry for my bad English.
You can achieve it in two ways - https://docs.docker.com/storage/bind-mounts/
--mount
$ docker run -d -it --name devtest --mount type=bind,source=/var/logs,target=/usr/var/logs image:tag
-v
$ docker run -d -it --name devtest -v /var/logs:/usr/var/logs image:tag
Try the -v option of the docker run command.
docker run -itd -v /var/logs:/usr/var/logs image-name
This will mount the /var/logs directory of the host onto the /usr/var/logs directory of the container.
Hope this helps.
Update:
To mount a directory with a source and destination in a Dockerfile, make use of this hack (not 100% sure though):
RUN --mount=target=/usr/var/logs,type=bind,source=/var/logs

How to mount volume inside child docker created by parent docker sharing docker.sock

I am trying to create a wrapper container to build and run a set of containers using a docker-compose file I cannot modify. The docker-compose file mounts several volumes, but when starting docker-compose from inside the wrapper container, the volumes are still mounted from the host, since the docker.sock is volume-mounted to be the host's docker.sock.
I would like to not have to use full docker-in-docker due to all the problems associated with it outlined in jpetazzo's article.
I would also like to avoid --volumes-from since I cannot edit the docker-compose file mentioned previously.
Is there a way to get this snippet to correctly use the parent docker's file instead of going to the host filesystem and mounting it from there?
FROM docker:latest
RUN mkdir -p /tmp/parent/ && echo "This is from the parent docker" > /tmp/parent/parent.txt
CMD docker run -v /tmp/parent/parent.txt:/root/parent.txt --rm ubuntu:18.04 bash -c "cat /root/parent.txt"
when run with a command akin to this:
docker build -t parent . && docker run --rm -v /var/run/docker.sock:/var/run/docker.sock parent
Make your paths the same on the host and inside of the docker image, e.g.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/user:/home/user -w /home/user/project parent_image ...
By mounting the volume as /home/user in the same location inside the image, a command like docker-compose up with relative bind mounts will use the container path names when talking to the docker socket, which will match the paths on the host.
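Applied to the snippet from the question, that could look roughly like this (a sketch: it assumes /home/user/project exists on the host and that parent is the image built from the Dockerfile above; the wrapper writes the file into the shared path so the sibling container can bind-mount it via the identical host path):
# share the socket and a host directory at the same location inside the wrapper
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /home/user/project:/home/user/project \
    -w /home/user/project \
    parent \
    sh -c 'echo "This is from the parent docker" > parent.txt && \
           docker run --rm -v /home/user/project/parent.txt:/root/parent.txt ubuntu:18.04 cat /root/parent.txt'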
