For my application I organise three containers: django, postgres and nginx.
In docker-compose.yml I write:
nginx:
  image: nginx:stable
But in this case I get a flat structure of config files. I mean that neither sites-available nor sites-enabled is created.
I suppose that a parameter in docker-compose could help me organise that.
Could you tell me whether having sites-available and sites-enabled is a good practice?
If yes, how to create them?
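What I imagine is something along these lines, bind-mounting local config folders into the container (the ./nginx paths are only my assumption, nothing I have working yet):
nginx:
  image: nginx:stable
  volumes:
    - ./nginx/sites-available:/etc/nginx/sites-available   # assumed local folder
    - ./nginx/sites-enabled:/etc/nginx/sites-enabled       # assumed local folder
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf             # assumed custom main config
I assume I would also need that custom nginx.conf including sites-enabled/*, since the stock image's config only includes conf.d/*.conf.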
I am quite new to Docker and I am trying to build a LAMP stack with docker-compose. I have found a nice tutorial over there. I think I understood the difference between volumes and bind mounts; however, I guess I am running into a problem at some point. I want to make one of my folders available to the LAMP stack (my sources, residing in a folder 'src'). However, the sources are not visible within the /var/www/html folder.
My docker-compose file looks like this:
version: "3.7"
services:
  mariadb:
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "no"
      MYSQL_DATABASE: testdb
      MYSQL_PASSWORD: testpassword
      MYSQL_ROOT_PASSWORD: rootpwd
      MYSQL_USER: testuser
      TZ: Europe/Rome
    image: "mariadb:10.5.2"
    volumes:
      - "mariadb-volume:/var/lib/mysql"
  php-httpd:
    image: "php:7.3-apache"
    ports:
      - "80:80"
    volumes:
      - ./src/:/var/www/html/
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - "mariadb:db"
    ports:
      - "8081:80"
volumes:
  mariadb-volume: ~
Phpmyadmin works just fine, and docker-compose runs without any warnings. My compose command is
docker-compose up -d --force-recreate
Interestingly, when I change "./src/" to "./DocumentRoot", the folder DocumentRoot is created on my host machine. However, placing files in DocumentRoot on the host or in /var/www/html in the container does not make the files show up on the other side. Nevertheless, I can say for sure that I am in the right directory at least.
Is there some trick or parameter I need to pass along to let docker see the files on my host?
Hah... thanks again. Your question has triggered another thought. It's quite natural to me, so I didn't mention it: when I run docker-compose from my Desktop, everything works fine. However, if I run it from my usual working directory, it does not. My usual working directory is a volume mounted with VeraCrypt on Windows. Obviously there are issues sharing the directory in the latter case.
Just in case anybody is experiencing that error too in the future.
I want to make one of my folders available to the LAMP stack (my
sources, residing in a folder 'src'). However, the sources are not
visible within the /var/www/html folder.
I think there is some confusion about how mounts work with Docker.
When you specify a mount for a Docker container such as:
php-httpd:
  image: "php:7.3-apache"
  ports:
    - "80:80"
  volumes:
    - ./src/:/var/www/html/
Only the php-httpd container gets the mount, not the other containers of your LAMP stack.
If you need that mount in other containers, declare it explicitly on each of them, as in the sketch below.
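A minimal sketch of what I mean (the second service name and image are placeholders, not something from your stack):
php-httpd:
  image: "php:7.3-apache"
  volumes:
    - ./src/:/var/www/html/
other-service:                  # hypothetical second container that also needs the sources
  image: some-image             # placeholder image name
  volumes:
    - ./src/:/var/www/html/     # the same bind mount, declared again explicitly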
Interestingly, when I change "./src/" for "./DocumentRoot", the folder
DocumentRoot is created on my host machine. However, placing files in
DocumentRoot on the host or in /var/www/html in docker does not show
the files on the docker or host, respectively.
That is how bind mounts work. When the folder exists on the host (here src), Docker mounts its content from the host into the container. When the folder doesn't exist on the host, Docker creates it.
I have finally found a solution. I am splitting the docker-compose file and doing the php-httpd part in a separate Dockerfile. There, I can copy my sources into the Docker container.
It is not the solution I originally wanted, so I would still be grateful for input on why the bind mount does not work, but it works for me.
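Roughly, that separate Dockerfile just copies the sources in at build time; a sketch (the paths are the ones from my compose file above, the rest is approximate):
# sketch: bake the sources into the image instead of bind-mounting them
FROM php:7.3-apache
COPY ./src/ /var/www/html/
In docker-compose, the php-httpd service then uses build: pointing at that Dockerfile instead of the plain image: line.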
I have a Docker Compose setup to run an Nginx container serving pictures (jpg files). The port, protocol, etc. are not relevant. My Nginx works correctly and serves those files:
web:
  image: myweb
  restart: always
  volumes:
    - ./logs/nginx:/var/log/nginx
  ports:
    ...
The image myweb inherits from nginx and adds all the configuration needed by this server; basically, the web root is:
index index.html index.php;
root /var/www/myweb/public;
I'm also using php:7.0-fpm and fastcgi locations but it is not the issue here.
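For context, myweb is built from something like this (a simplified sketch; the real Dockerfile has more in it and the config file name is just a placeholder):
# simplified sketch of the myweb image: nginx plus the server configuration and web root
FROM nginx
COPY myweb.conf /etc/nginx/conf.d/default.conf   # hypothetical config file name
COPY webcode/ /var/www/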
What I would want is to mount a host volume with the pictures (instead of copying them into the Docker image) in order to have the possibility of updating them externally (Dropbox sync or whatever).
web:
  image: myweb
  restart: always
  volumes:
    - ./webcode:/var/www
    - ./logs/nginx:/var/log/nginx
    - ./images-data/catalog-images:/var/www/myweb/public/catalog/images
  ports:
    ...
The host path webcode (relative to the docker-compose.yml location) is the static web content, mounted on /var/www.
The host path images-data/catalog-images (relative to docker-compose.yml location) is the pictures directory, mounted on /var/www/myweb/public/catalog/images.
I'm not sure if this is good practice (mounting a host directory on the same path where another host directory was already mounted). Anyway, I also tried keeping webcode as part of the myweb image and having only one mounted volume (the pictures one).
Mounting webcode with the pictures included in it works. And without any volume mount, just including all the stuff (website and pictures) in the image, also works. But I would need an isolated volume for the pictures.
I also tried symlinking, within the container, /var/www/myweb/public/catalog/images towards the pictures volume. But Nginx does not render it, perhaps because it is a different volume than /var/www.
Should it belong to the root of the Nginx server configuration?
The thing is that those images are not loaded in the browser. Going into the container, I can confirm that the ownership of the volume is correct (www-data:www-data).
From the point of view of the container, ownership and content are the same whether I mount the volume or copy the files into the container filesystem. But Nginx can only see the files that live in the Docker container's own file system.
Is this the normal behaviour? Am I missing any Nginx configuration to make it work normally with host volumes inside the container?
I think I finally solved it.
When I said
I'm also using php:7.0-fpm and fastcgi locations but it is not the
issue here.
I was totally wrong: that was the key: the PHP execution. The problem was that the pictures were loaded through an index.php, so the request was handled by the fpm service, where the pictures were not available!
This is my docker-compose.yml definition for the php service:
fpm:
  image: php7fpm:latest
  restart: always
  volumes:
    - ./webcode:/var/www
The thing is that the pictures volume was not visible there. I just added that volume to the fpm volume list, and the problem is solved:
fpm:
  image: php7fpm:latest
  restart: always
  volumes:
    - ./webcode:/var/www
    - ./images-data/catalog-images:/var/www/myweb/public/catalog/images
Sorry for the inconvenience.
Is it possible to mount a host directory into a container, but have it only overwrite the files that exist on the host?
Example github repo: https://github.com/UniBen/stackoverflow-59031249
E.g.
Host:
src/
  public/
    index.php (200kb)
Container:
src/
  public/
    index.php (100kb)
  vendor/
    ...
Desired output: (the container file system merged with the mounted host files that exist on the host. Note the size of the index.php file.)
src/
  public/
    index.php (200kb)
  vendor/
    ...
Actual output:
src/
  public/
    index.php (200kb)
Example docker-compose.yml
version: '3.2'
services:
  php:
    image: php
    volumes:
      - ./src:/src
Edit: So it looks like the overlayfs used by Docker is only used for building Docker images and cannot be used for volumes whatsoever. I still think it's possible to specify a custom driver, but I'm not sure how. As a temporary fix I've done some fancy stuff with mapping files out of the container, diffing them and putting them back in, but it's not ideal.
Is it possible to to mount a host directory in to a container but ...
No. The only Docker mounting option is a straight-up "replace this directory in the container with the equivalent directory from the host". There is no way to modify this, selectively hide subdirectories, or implement your "only if it already exists" rule.
In the example you're showing, you probably don't want a volume at all. Files like index.php and a vendor directory are application source code, and in typical use you'd write a Dockerfile, COPY index.php . to move the file into the image, and then RUN composer ... to create the vendor tree. This would be isolated from your host environment, so the vendor directory in the image would be separate from whatever existed on your host system.
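A minimal sketch of that approach, assuming a composer.json inside src/ (the composer binary is copied in from the official composer image, since the php images don't ship it):
# sketch: bake the source and its vendor/ tree into the image instead of bind-mounting
FROM php
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
WORKDIR /src
# copy the application source (the src/ folder from the question) into the image
COPY src/ ./
# build the vendor/ tree inside the image, independent of whatever exists on the host
RUN composer install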
Actually, there are no options to control such behaviour, e.g. how data between source and destination is handled. But if David's answer does not really fit your case, you could do something like this:
version: '3.2'
services:
  example:
    build:
      context: .
    volumes:
      - data:/src
      - ./src:/src/host
volumes:
  data:
As the Docker documentation says:
If you start a container which creates a new volume, and the container
has files or directories in the directory to be mounted the
directory’s contents are copied into the volume.
So after that, let's investigate a little bit:
/src # ls
file.a.txt file.c.txt host
/src # cat host/file.a.txt
host
/src # cat file.a.txt
container
/src # cat file.c.txt
container
Data from the container is saved into the data named volume. The data from the host machine lives in the host folder. Now you can copy from the host folder into /src with cp or rsync, with whatever rules you want; a sketch follows below.
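For example, a hypothetical sync step (rsync's --existing flag only updates files that already exist at the destination, which matches the merge rule in the question; it assumes rsync is available in the image):
# inside the container: overwrite only files that already exist in /src,
# leaving container-only files such as vendor/ untouched
rsync -a --existing /src/host/ /src/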
This is quite a fishy and artificial example; maybe it's a good idea to rethink the current implementation.
I'm learning Docker, and I have a question about environments.
Project structure:
nginx (Dockerfile)
nuxt.js + express.js (Dockerfile)
laravel (Dockerfile)
mysql (Dockerfile)
docker compose
I need to keep all settings variables in one place, in the root folder next to docker-compose. But Laravel needs a .env file inside its own root dir.
So my question is: is there some way to store all settings in one .env file in the root directory so that, when docker-compose runs, it passes those settings to all services?
In your docker-compose.yml, you can either declare a volume mount for a directory containing your .env file and then read from that directory in your Laravel application, for example.
Or you can choose to use the env_file configuration option to give the running process access to those variables.
services:
  foo:
    image: foo
    volumes:
      - ./env:/app/env
  bar:
    image: bar
    env_file: ./env/.env
Read more here
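For illustration, a shared ./env/.env might look like this (the variable names are just an example; with env_file they become container environment variables, which Laravel's env() helper can typically read as well):
# ./env/.env - example values only
DB_HOST=mysql
DB_DATABASE=app
DB_USERNAME=app
DB_PASSWORD=secret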
I've an nginx container and one asset container which has all my assets built by grunt or some other tools.
Now, in the docker-compose file, I want to mount the asset container's folder path into the nginx container so nginx can serve those files.
How can we do that? I don't remember exactly, but I think there is an option where we can share a path of one container with another.
Suppose I scale nginx up to 2 containers; will that mount work for all instances of nginx?
And if I scale up the asset container, what will happen?
I also want to mount that path on my host so development can be done easily.
What you want to do is use a volume, and then mount that volume into whatever containers you want it to appear in.
Completely within Docker
You can do this completely inside of Docker.
Here is an example (stripped-down - your real file would have much more than this in it, of course).
version: '3'
services:
  nginx:
    volumes:
      - asset-volume:/var/lib/assets
  asset:
    volumes:
      - asset-volume:/var/lib/assets
volumes:
  asset-volume:
At the bottom is a single volume defined, named "asset-volume".
Then in each of your services, you tell Docker to mount that volume at a certain path. I show example paths inside the container; just adjust these to whatever path you want them to be in the container.
The volume is an independent entity not owned by any particular container. It is just mounted into each of them, and is shared. If one container modifies the contents, then they all see the changes.
Note that if you prefer only one can make changes, you can always mount the volume as read-only in some services, by adding :ro to the end of the volume string.
services:
  servicename:
    volumes:
      - asset-volume:/var/lib/assets:ro
Using a host directory
Alternately you can use a directory on the host and mount that into the containers. This has the advantage of you being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).
It's the same, except you don't define a volume in Docker, instead mounting the external directory.
version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
  asset:
    volumes:
      - ./assets:/var/lib/assets
In this example, the local directory "assets" is mounted into both containers using the relative path ./assets.
Using both depending on environment
You can also set it up differently for dev and production environments. Put everything in docker-compose.yml except the volume mounts. Then make two more files.
docker-compose.dev.yml
docker-compose.prod.yml
In these files put only the minimum config to define the volume mount. We'll mix this with the docker-compose.yml to get a final config.
Then use this. It will use the config from docker-compose.yml, and use anything in the second file as an override or supplemental config.
docker-compose -f docker-compose.yml \
               -f docker-compose.dev.yml \
               up -d
And for production, just use the prod file instead of the dev file.
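For example, the production variant is the same pattern with the other file:
docker-compose -f docker-compose.yml \
               -f docker-compose.prod.yml \
               up -d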
The idea here is to keep most of the config in docker-compose.yml, and only the minimum set of differences in the alternative files.
Example:
docker-compose.prod.yml
version: '3'
services:
  nginx:
    volumes:
      - asset-volume:/var/lib/assets
docker-compose.dev.yml
version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets