Nginx cannot access a Docker volume mount - docker

I have a Docker Compose setup that runs an Nginx container serving pictures (jpg files). The port, protocol, etc. are not relevant. My Nginx works correctly and serves those files:
web:
  image: myweb
  restart: always
  volumes:
    - ./logs/nginx:/var/log/nginx
  ports:
    ...
The image myweb inherits from nginx and adds all the configuration needed by this server; basically, the web root is:
index index.html index.php;
root /var/www/myweb/public;
I'm also using php:7.0-fpm and fastcgi locations but it is not the issue here.
What I want is to mount a host volume with the pictures (instead of copying them into the Docker image) so that I can update it externally (Dropbox sync or whatever).
web:
  image: myweb
  restart: always
  volumes:
    - ./webcode:/var/www
    - ./logs/nginx:/var/log/nginx
    - ./images-data/catalog-images:/var/www/myweb/public/catalog/images
  ports:
    ...
The host path webcode (relative to the docker-compose.yml location) is the static website, mounted on /var/www.
The host path images-data/catalog-images (relative to the docker-compose.yml location) is the pictures directory, mounted on /var/www/myweb/public/catalog/images.
I'm not sure if this is good practice (mounting a host directory on a path where another host directory is already mounted). Anyway, I also tried keeping webcode as part of the myweb image and having only one mounted volume (the pictures one).
Mounting webcode with the pictures included in it works. Without any volume mount, just baking everything (website and pictures) into the image, also works. But I need an isolated volume for the pictures.
I also tried symlinking /var/www/myweb/public/catalog/images inside the container to the pictures volume, but Nginx does not serve it, perhaps because it is a different volume than /var/www.
Should it belong to the root of the nginx server configuration?
The thing is that those images are not loaded in the browser. Going into the container, I can confirm that the ownership of the volume is correct (www-data:www-data).
From the point of view of the container, ownership and content are the same whether I mount the volume or copy the files into the container filesystem. But Nginx can only see the files that are part of the Docker container filesystem.
Is this the normal behaviour? Am I missing some Nginx configuration to work normally with host volumes inside the container?

I think I finally solved it.
When I said
I'm also using php:7.0-fpm and fastcgi locations but it is not the
issue here.
I was totally wrong: it was the key. The problem was the PHP execution: the pictures were loaded by an index.php, and the request was sent to the fpm service, where the pictures were not available!
This is my docker-compose.yml definition for the php service:
fpm:
  image: php7fpm:latest
  restart: always
  volumes:
    - ./webcode:/var/www
The thing is that the pictures volume was not visible there. I just added that volume to the fpm volume list, and the problem is solved:
fpm:
  image: php7fpm:latest
  restart: always
  volumes:
    - ./webcode:/var/www
    - ./images-data/catalog-images:/var/www/myweb/public/catalog/images
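As a quick sanity check (just a sketch; the service names come from the compose files above), you can compare what each container actually sees at the mount point:
docker-compose exec web ls -la /var/www/myweb/public/catalog/images
docker-compose exec fpm ls -la /var/www/myweb/public/catalog/images
Both should now list the pictures. If the path is only mounted into web, the second command comes back empty, which was exactly the symptom here.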
Sorry for the inconvenience.

Related

Map the docker compose volume from container to host is not working

I have a very simple Next.js application with two folders which I would like to map to the host (the developer system) while deploying this application inside Docker (I use Docker Desktop).
Data folder (it has some JSON files and also some nested folders and files)
Public folder (it has nested folders too, but it contains images)
I have tested it locally and also inside the Docker container (without any volumes), and it all works.
As a next step, I want to use volumes in my docker-compose file so that I can bind these directories inside the container to the source (and, going forward, to AKS file storage options).
I have tried multiple approaches (and also checked some of the answers on Stack Overflow), but none of them achieve the same result.
Here is my docker-compose file for your reference.
version: '3.4'
services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      - /app/node_modules
        # anonymous volume only for node_modules
      - portfolio_data:/app/data
        # named volume inside which the nextjs app writes content to the file
volumes:
  portfolio_data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /APP-03/clientapp/data
      # I have tried here to give a full path like /mnt/c/work/.../APP-03/clientapp/data but that also not working.
Using Docker Desktop I can see that the volume is indeed created for the container and it has all the files. However, nothing is reflected in my source if anything is updated inside that volume (e.g. when I add some content to that file through the Next.js application, it does not get reflected outside the running container).
In case someone wants to know my folder hierarchy and where I am running the docker-compose file, here is a reference image.
I had a similar problem installing Gitea on my NAS until someone (thankfully) told me a way to compromise (i.e. your data will be persistent, but not in the location of your choosing).
version: '3.4'
volumes:
  portfolio_data: {}
services:
  portfolio:
    image: ${DOCKER_REGISTRY-}brij1111-portfolio
    build:
      context: ./APP-03/clientapp
      dockerfile: dockerfile
    volumes:
      - /app/node_modules
        # anonymous volume only for node_modules
      - portfolio_data:/app/data
In my particular case, I had to access my NAS using a terminal, go to the location where the container's data is located, and search from there.
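For reference, a quick way to find where Docker actually keeps a named volume on disk (a sketch; the real volume name is usually prefixed with the compose project name, e.g. clientapp_portfolio_data) is:
docker volume ls
# find the exact name of the volume, then print its location on the host
docker volume inspect -f '{{ .Mountpoint }}' portfolio_data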
Hope it helps you

files not visible in docker-compose

I am quite new to Docker and I am trying to build a LAMP stack with docker-compose. I found a nice tutorial for it. I think I understood the difference between volumes and bind mounts; however, I am running into a problem at some point. I want to make one of my folders available to the LAMP stack (my sources, residing in a folder 'src'). However, the sources are not visible within the /var/www/html folder.
My docker-compose file looks like this:
version: "3.7"
services:
mariadb:
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "no"
MYSQL_DATABASE: testdb
MYSQL_PASSWORD: testpassword
MYSQL_ROOT_PASSWORD: rootpwd
MYSQL_USER: testuser
TZ: Europe/Rome
image: "mariadb:10.5.2"
volumes:
- "mariadb-volume:/var/lib/mysql"
php-httpd:
image: "php:7.3-apache"
ports:
- "80:80"
volumes:
- ./src/:/var/www/html/
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- "mariadb:db"
ports:
- "8081:80"
volumes:
mariadb-volume: ~
phpMyAdmin works just fine, and docker-compose runs without any warnings. My compose command is
docker-compose up -d --force-recreate
Interestingly, when I change "./src/" to "./DocumentRoot", the folder DocumentRoot is created on my host machine. However, placing files in DocumentRoot on the host or in /var/www/html in the container does not make the files show up on the container or host, respectively. Nevertheless, I can say for sure that I am at least in the right directory.
Is there some trick or parameter I need to pass along to let docker see the files on my host?
Hah... thanks again. Your question triggered another thought. It's quite natural to me, so I didn't mention it: when I execute docker-compose from the Desktop, everything works fine. However, if I execute it from my usual working directory, it does not. My usual working directory is a volume mounted with VeraCrypt on Windows. Obviously there are issues sharing the directory in the latter case.
Just in case anybody is experiencing that error too in the future.
I want to make one of my folders available to the LAMP stack (my
sources, residing in a folder 'src'). However, the sources are not
visible within the /var/www/html folder.
I think there is some confusion about how mounts work with Docker.
When you specify a mount for a Docker container such as:
php-httpd:
  image: "php:7.3-apache"
  ports:
    - "80:80"
  volumes:
    - ./src/:/var/www/html/
only the php-httpd container gets the mount, not the other containers of your LAMP stack.
If you need to set that mount on other containers, do it explicitly on them.
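For example (a sketch; the php-fpm service here is hypothetical, just to illustrate repeating the same bind mount on a second container):
php-fpm:
  image: "php:7.3-fpm"
  volumes:
    - ./src/:/var/www/html/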
Interestingly, when I change "./src/" for "./DocumentRoot", the folder
DocumentRoot is created on my host machine. However, placing files in
DocumentRoot on the host or in /var/www/html in docker does not show
the files on the docker or host, respectively.
That is the way mounts work. When the folder exists on the host (here src), Docker mounts its content from the host into the container. When the folder doesn't exist on the host, Docker creates it.
I have finally found a solution. I am splitting the setup and doing the php-httpd part with a separate Dockerfile. There, I can copy my sources into the Docker container.
It is not the solution I originally wanted, so I would still be grateful for input on why the bind mount does not work, but this solution works for me.

Understanding volumes in docker compose

The following is an example given in https://docker-curriculum.com/
version: "3"
services:
es:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
container_name: es
environment:
- discovery.type=single-node
ports:
- 9200:9200
volumes:
- esdata1:/usr/share/elasticsearch/data
web:
image: prakhar1989/foodtrucks-web
command: python app.py
depends_on:
- es
ports:
- 5000:5000
volumes:
- ./flask-app:/opt/flask-app
volumes:
esdata1:
driver: local
and, about /opt/flask-app, it says "The volumes parameter specifies a mount point in our web container where the code will reside".
I think it means that /opt/flask-app is a mount point and it points to the host machine's ./flask-app.
However, it doesn't say anything about esdata1, and I can't apply the same explanation as for /opt/flask-app, since there's no esdata1 directory/file on the host machine.
What is happening with esdata1?
My guess is that it means creating a volume (the closest thing I can think of is a disk partition), naming it esdata1, and mounting it on /usr/share/elasticsearch/data. Am I correct in this guess?
Volumes and bind mounts are slightly different things. Bind mounts let you specify a folder on the host machine to serve as the storage. Volumes (at least for the local driver) also have folders on the host machine, but their location is managed by Docker and is sometimes a bit more difficult to find.
When you specify a volume in docker-compose.yml, if the path starts with / or . it becomes a bind mount, as in the web service. Otherwise, if it is a bare name, it is a volume, as for the es service.
You can inspect all volumes on your host machine by running docker volume ls.
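For example (a sketch; Compose usually prefixes the volume name with the project name, so it may appear as something like foodtrucks_esdata1):
docker volume ls
# print the folder on the host where Docker stores the volume's data
docker volume inspect -f '{{ .Mountpoint }}' esdata1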
What is happening for the esdata1 ? My guess is that it means creating
a volume (The closest thing I can think of is a disk partition) and
name it esdata1 and mount it on /usr/share/elasticsearch/data, am I
correct on this guess?
That's all correct.
I do not pretend to set the rules, but in general, volumes are more suitable for sharing common data between several containers, as in docker-compose, while bind mounts are better suited for sharing data from the host to a container, like initial configs for services.

Docker-compose volume key: what protocol is used behind

I am not sure if this is the right question to ask. In the Docker Compose tutorial, https://docs.docker.com/compose/gettingstarted/#step-5-edit-the-compose-file-to-add-a-bind-mount, there is a volumes key in the docker-compose.yml:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: "redis:alpine"
And according to the tutorial, the volume key mounts the local file to the remote, and therefore we can change the code on the fly without restarting Docker. My question is what internet protocol is used behind the scenes to transfer the updated code file.
Furthermore, I guess there are more frameworks with this feature. What are the common protocols behind them, and why?
The tutorial doesn't say "the volume key mounts the local file to the remote". It says:
...in your project directory to add a bind mount for the web service:
[...]
The new volumes key mounts the project directory (current directory) on the host to /code inside the container, allowing you to modify the code on the fly, without having to rebuild the image.
If you click on the bind mount link, it will take you to
documentation that should answer all of your questions.
Briefly, a bind mount is a way of making one directory on your system
appear in another location. For example, if I were to run:
mkdir /tmp/newetc
mount -o bind /etc /tmp/newetc
Then running ls /tmp/newetc would show the same contents as /etc.
Docker uses this feature to expose host directories inside your
containers.
A bind mount only works on the same host; it cannot be used to expose
files on your local system to a remote system. It is a kernel feature and there are no internet protocols involved.
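To make that concrete, the .:/code entry from the compose file is just such a bind mount of the current directory; a rough standalone equivalent (a sketch, with a made-up image name) would be:
# bind-mount the current directory into the container at /code
docker run -v "$(pwd)":/code my-web-image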

Docker compose how to mount path from one to another container?

I've an nginx container and one asset container which has all my assets built by grunt or some other tools.
Now, in the docker compose file, I want to mount the asset container's folder path into the nginx container so nginx can serve those files.
How can we do that? I don't remember exactly, but I think there is an option where we can share a path of one container with another.
Suppose I scale up nginx to 2 containers; will that mount work for all instances of nginx?
If I scale up the asset container, then what will happen?
I also want to mount that on my host so development can be done easily.
What you want to do is use a volume, and then mount that volume into whatever containers you want it to appear in.
Completely within Docker
You can do this completely inside of Docker.
Here is an example (stripped-down - your real file would have much more than this in it, of course).
version: '3'
services:
  nginx:
    volumes:
      - asset-volume:/var/lib/assets
  asset:
    volumes:
      - asset-volume:/var/lib/assets
volumes:
  asset-volume:
At the bottom is a single volume defined, named "asset-volume".
Then in each of your services, you tell Docker to mount that volume at a certain path. I show example paths inside the container; just adjust these to whatever path you wish them to be in the container.
The volume is an independent entity not owned by any particular container. It is just mounted into each of them, and is shared. If one container modifies the contents, then they all see the changes.
Note that if you prefer only one can make changes, you can always mount the volume as read-only in some services, by adding :ro to the end of the volume string.
services:
  servicename:
    volumes:
      - asset-volume:/var/lib/assets:ro
Using a host directory
Alternately you can use a directory on the host and mount that into the containers. This has the advantage of you being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).
It's the same, except you don't define a volume in Docker, instead mounting the external directory.
version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
  asset:
    volumes:
      - ./assets:/var/lib/assets
In this example, the local directory "assets" is mounted into both containers using the relative path ./assets.
Using both depending on environment
You can also set it up differently for dev and production environments. Put everything in docker-compose.yml except the volume mounts. Then make two more files.
docker-compose.dev.yml
docker-compose.prod.yml
In these files put only the minimum config to define the volume mount. We'll mix this with the docker-compose.yml to get a final config.
Then use this. It will use the config from docker-compose.yml, and use anything in the second file as an override or supplemental config.
docker-compose -f docker-compose.yml \
-f docker-compose.dev.yml \
up -d
And for production, just use the prod file instead of the dev file.
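That is, something like:
docker-compose -f docker-compose.yml \
-f docker-compose.prod.yml \
up -d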
The idea here is to keep most of the config in docker-compose.yml, and only the minimum set of differences in the alternative files.
Example:
docker-compose.prod.yml
version: '3'
services:
  nginx:
    volumes:
      - asset-volume:/var/lib/assets
docker-compose.dev.yml
version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
