Can't set a bind mount in Docker Compose

I tried setting a bind mount for my FastAPI server service in Docker Compose:
server:
  container_name: server
  image: mydockerimg
  command: ["python3.8", "-m", "uvicorn", "main:app", "--host=0.0.0.0", "--ssl-keyfile=./key.pem", "--ssl-certfile=./cert.pem"]
  ports:
    - 8000:8000
  working_dir: /app
  volumes:
    - ./:/app
    - mydata:/app/my_data/
I use mkcert for my local development certificates, and they are referenced in the command of the server service above. But when I run docker compose up I get this error:
server | File "/usr/local/lib/python3.8/site-packages/uvicorn/config.py", line 139, in create_ssl_context
server | ctx.load_cert_chain(certfile, keyfile, get_password)
server | FileNotFoundError: [Errno 2] No such file or directory
This doesn't happen when I don't set the bind mount; the service starts and works fine in that case. So how do the certificates affect the bind mount, and the server?

If the certs are located at /app/my_data within the container, change the command options to point to that location: "--ssl-keyfile=/app/my_data/key.pem", "--ssl-certfile=/app/my_data/cert.pem". With working_dir: /app, the relative paths ./key.pem and ./cert.pem resolve to /app, and the bind mount ./:/app replaces the image's /app with the contents of your compose directory, so the files are only found if they actually exist on the host side of the mount.
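A minimal sketch of the adjusted service, assuming the certificates really do sit in the mydata volume at /app/my_data inside the container:
server:
  container_name: server
  image: mydockerimg
  # Absolute paths, so the bind mount over /app cannot hide the certificates
  command: ["python3.8", "-m", "uvicorn", "main:app", "--host=0.0.0.0", "--ssl-keyfile=/app/my_data/key.pem", "--ssl-certfile=/app/my_data/cert.pem"]
  ports:
    - 8000:8000
  working_dir: /app
  volumes:
    - ./:/app
    - mydata:/app/my_data/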


How do I share a docker-compose volume with a Linux host?

I have a docker-compose.yml:
version: '3.3'
services:
  ssh:
    environment:
      - TZ=Etc/UTC
      - DEBIAN_FRONTEND=noninteractive
    build:
      context: './'
      dockerfile: Dockerfile
    ports:
      - '172.17.0.2:22:22'
      - '443:443'
      - '8025:8025'
    volumes:
      - srv:/srv:rw
    restart: always
volumes:
  srv:
After I run docker-compose up --build I can ssh into the container and there are files in /srv. docker volume ls shows two volumes, srv and dockersetupsrv. They are both in /var/lib/docker/volumes. They both contain _data directories and show creation timestamps that match the docker image creation times, but are otherwise empty. Neither one contains any of the files that are in the docker container's /srv directory. How can I share the docker /srv directory with the host?
You should be more specific about the mapping directory and use a bind mount,
for example:
/srv:/usr/srv:rw
After that, when you add content inside /srv on your host machine, it is automatically mapped into /usr/srv in the container.
Make sure that the host directory exists.
You can have a look at this link: https://docs.docker.com/storage/volumes/
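For contrast, a minimal sketch of both forms in compose syntax (reusing the ssh service from the question; the /srv host path must already exist, and the /usr/srv container path is taken from the answer):
services:
  ssh:
    volumes:
      # Named volume: managed by Docker under /var/lib/docker/volumes, detached from host paths
      - srv:/srv:rw
      # Bind mount: host path on the left, container path on the right
      - /srv:/usr/srv:rw
volumes:
  srv: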

Docker Compose on Azure Linux App Service for Containers with NGINX Reverse-Proxy: dial unix /tmp/docker.sock: connect: no such file or directory

I am running an ASP.NET Core 3.0 multi-container application on an Azure Linux App Service for Containers using Docker Compose. Containers are built and pushed to an Azure Container Registry via CI pipelines and CD pipelines deploy to the app service using a "docker-compose.[environment].yml".
I am trying to use nginx and jwilder's docker-gen as a reverse proxy (separate containers to avoid having the docker socket bound to a publicly exposed container service), and use virtual host names to access the various services over the net.
I seem to be going round in circles between the following 3 errors:
1. The web app displays the 'Welcome to nginx' page, with the logs repeating:
2020-02-26T15:18:44.021444322Z 2020/02/26 15:18:44 Watching docker events
2020-02-26T15:18:44.022275908Z 2020/02/26 15:18:44 Error retrieving docker server info: Get http://unix.sock/info: dial unix /tmp/docker.sock: connect: no such file or directory
2020-02-26T15:18:44.022669201Z 2020/02/26 15:18:44 Error listing containers: Get http://unix.sock/containers/json?all=1: dial unix /tmp/docker.sock: connect: no such file or directory
2020-02-26T15:18:44.405594944Z 2020/02/26 15:18:44 Docker daemon connection interrupted
2. 502 Bad Gateway
502 - Web server received an invalid response while acting as a gateway or proxy server.
There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
3. The app service's 'Application Error :(' page.
Here is my docker-compose.development.yml:
version: "3.7"
services:
nginx:
image: nginx
environment:
DEFAULT_HOST: ***.azurewebsites.net
ports:
- "80:80"
volumes:
- "${WEBAPP_STORAGE_HOME}/tmp/nginx:/etc/nginx/conf.d"
dockergen:
image: jwilder/docker-gen
command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
volumes_from:
- nginx
volumes:
- "/var/run/docker.sock:/tmp/docker.sock:ro"
- "${WEBAPP_STORAGE_HOME}./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl"
webapp:
image: ***.azurecr.io/digitalcore:dev
restart: always
environment:
VIRTUAL_HOST: ***.azurewebsites.net
depends_on:
- nginx
"webapp" service dockerfile (exposes ports 80, 443):
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /source
COPY . .
RUN dotnet restore
RUN dotnet publish --output /app/ --configuration Release --no-restore
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0 AS runtime
WORKDIR /app
COPY --from=build /app .
EXPOSE 80 443
ENTRYPOINT ["dotnet", "DigitalCore.WebApp.dll"]
Error #1 seems to be the closest I have got to seeing it working, with the issues centred around configuring the volumes correctly for an Azure Linux App Service (the ${WEBAPP_STORAGE_HOME} made an appearance after much digging).
Networks and container names only seemed to make things worse in my efforts so far, so they were removed to keep things to the bare essentials. The "webapp" service is where my focus is at the moment.
Can anybody spot where I'm going wrong?! I will be eternally grateful for any words of wisdom...
UPDATE:
Some progress, it would seem: after removing the "ro" flag from the socket volume, docker.sock is now being found, but docker-gen is unable to connect to the endpoint.
2020-02-26T22:21:44.186316399Z 2020/02/26 22:21:44 Watching docker events
2020-02-26T22:21:44.187487428Z 2020/02/26 22:21:44 Error retrieving docker server info: cannot connect to Docker endpoint
2020-02-26T22:21:44.188270247Z 2020/02/26 22:21:44 Error listing containers: cannot connect to Docker endpoint
2020-02-26T22:21:44.500471940Z 2020/02/26 22:21:44 Docker daemon connection interrupted
UPDATE 2
I have now built the containers and pushed to the Azure Container Registry so I am not pulling from different locations. This is my current docker-compose:
version: "3.7"
services:
nginx:
image: ***.azurecr.io/nginx:dev
ports:
- "80:80"
volumes:
- "${WEBAPP_STORAGE_HOME}/etc/nginx/conf.d"
dockergen:
image: ***.azurecr.io/dockergen:dev
privileged: true
command: -notify-sighup nginx -watch -only-exposed /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
volumes_from:
- nginx
volumes:
- "${WEBAPP_STORAGE_HOME}/var/run/docker.sock:/tmp/docker.sock"
- "${WEBAPP_STORAGE_HOME}./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl"
webapp:
image: ***.azurecr.io/digitalcore:dev
restart: always
environment:
VIRTUAL_HOST: ***.azurewebsites.net
depends_on:
- nginx
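For reference, on a plain Docker host the canonical jwilder/docker-gen setup mounts the daemon socket straight from the host, as in the sketch below. Prefixing the socket path with ${WEBAPP_STORAGE_HOME} points into the app service's persistent storage rather than at a Docker socket, and (an assumption worth verifying against the Azure docs) Azure App Service multi-container apps do not expose the host's /var/run/docker.sock at all, which would be consistent with the errors above.
dockergen:
  image: jwilder/docker-gen
  command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
  volumes:
    # The socket must come from the host path /var/run/docker.sock;
    # it cannot live inside ${WEBAPP_STORAGE_HOME} persistent storage.
    - /var/run/docker.sock:/tmp/docker.sock:ro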

Docker-compose volume starting empty in container

I have the following docker-compose.yml configuration:
version: '3'
services:
  proxy:
    image: nginx:latest
    container_name: webproxy
    ports:
      - "80:80"
    volumes:
      - /etc/nginx/sites-available:/etc/nginx/sites-available
On my host machine I have an nginx.conf file at /etc/nginx/sites-available/nginx.conf.
Steps:
Start the container with docker-compose up
Go into the command line of the container with sudo docker exec -it 687 /bin/bash
cd into /etc/nginx/sites-available
Unfortunately the folder in step 3 is empty. My nginx.conf file is not being copied.
Is my docker-compose file not configured properly, or are volumes not supposed to also copy and start with the host data?
There doesn't look to be anything wrong in the docker-compose.yaml: I used the same file you posted to create a container and it worked for me. Check the content of /etc/nginx/sites-available on your host machine.
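A quick way to check both sides of the bind mount (a sketch; the service name proxy is from the compose file above):
# On the host: confirm the directory actually contains nginx.conf
ls -la /etc/nginx/sites-available

# Inside the container: the same listing should appear through the bind mount
docker compose up -d
docker compose exec proxy ls -la /etc/nginx/sites-available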

Configure docker volumes to share data across host and containers

I am stuck trying to configure Docker volumes to share files between my host and my containers. Let me explain.
I have a Rails docker app with Puma as the web server, and I want Puma to be able to see and use the SSL .key and .crt files. For this project I am also using docker-compose in "production mode", but I do not know how to make this work.
My setup is this:
The Ubuntu 18.04 production server host has the SSL files inside /home/ubuntu/my_app_keys; the containers also run on this host.
/home/ubuntu/docker-compose.yml
version: '3'
services:
  postgres:
    image: postgres:10.5
    environment:
      POSTGRES_DB: my_app_production
    env_file:
      - ~/production.env
  redis:
    image: redis:4.0.11
  web:
    image: my_app:latest
    command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' -e production
    ports:
      - '3000:3000'
    volumes:
      - /home/ubuntu/my_app_keys
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
  sidekiq:
    image: my_app_sidekiq:latest
    command: bundle exec sidekiq -C config/sidekiq.yml
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
So, as you can see, command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' is looking for the SSL files in /home/ubuntu/my_app_keys. When I execute docker-compose up, Puma cannot find the SSL files and exits with:
/usr/local/bundle/gems/puma-3.9.1/lib/puma/minissl.rb:180:in `key=': No such key file '/home/ubuntu/my_app_keys/server.key' (ArgumentError)
I think it is because key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt point at paths in the container context, but I have the cert and key in the host context.
So I included a volume in docker-compose in order to mount the files:
volumes:
  - /home/ubuntu/my_app_keys
but without luck: same error.
In the container context my app lives in the /var/www/my_app directory, so I tried to specify an absolute path (I imagined the SSL files could not be shared because they were not in the same directory my app lived in), and added, as the compose-file docs say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
and changed the command in the compose file:
command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=server.key&cert=server.crt' -e
When I execute docker-compose up, my web service exits with the error:
web | Could not locate Gemfile or .bundle/ directory
The only way the web service runs is with (but then no SSL files exist):
volumes:
  - /home/ubuntu/my_app_keys
So, I do not know what to do now. Any help?
When your Docker Compose YAML file says:
volumes:
  - /home/ubuntu/my_app_keys
It means, "make /home/ubuntu/my_app_keys in container space persist across restarts of the container; it will start off empty unless the Dockerfile did something special; it's not connected to any specific host content".
When you say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
It means, "totally replace the contents of /var/www/my_app in container space with the contents of /home/ubuntu/my_app_keys on the host". (The path names in host and container space don't need to be the same.)
As a bonus, when you say:
rails server -b 'ssl://127.0.0.1:3000?...'
It means, "only listen for inbound connections on port 3000 initiated from within this Docker container; don't accept any connections from outside the container at all, whether from the same physical host, other containers, or elsewhere."

Docker - Can't share data between containers within a volume (docker-compose 3)

I have some containers for a web app (nginx, gunicorn, postgres, and a node container that builds static files from source and does React server-side rendering). In the Dockerfile for the node container I have two steps, build and run (Dockerfile.node), and it ends up with two directories inside the container: bundle_client, the static content for nginx, and bundle_server, which is used in the node container itself to start an Express server.
Then I need to share the built static folder (bundle_client) with the nginx container. To do so, following the docker-compose reference, my docker-compose.yml has the following services (see the full docker-compose.yml):
node:
  volumes:
    - vnode:/usr/src
nginx:
  volumes:
    - vnode:/var/www/tg/static
  depends_on:
    - node
and volumes:
volumes:
  vnode:
Running docker-compose build completes with no errors. Running docker-compose up runs everything OK: I can open localhost:80 and nginx, gunicorn and the node Express SSR are all working great, and I can see a web page, but all static files return a 404 Not Found error.
If I check volumes with docker volume ls I can see two newly created volumes named tg_vnode (the one we consider here) and tg_vdata (see the full docker-compose.yml).
If I go into a container with docker run -ti -v /tmp:/tmp tg_node /bin/bash I can't see my www/tg/static folder, which should map my static files from the node volume. I also tried to create an empty /var/www/tg/static folder in the nginx container's Dockerfile.nginx, but it stays empty.
If I map the bundle_client folder from the host machine in docker-compose.yml in the nginx volumes section as - ./client/bundle_client:/var/www/tg/static, it works OK and I can see all the static files served by nginx in the browser.
What am I doing wrong, and how do I make my node container share the built static content with the nginx container?
PS: I read all the docs, all the GitHub issues and Stack Overflow Q&As, and as I understood it this has to work, but there is no info on what to do when it does not.
UPD: Result of docker volume inspect vnode:
[
    {
        "CreatedAt": "2018-05-18T12:24:38Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "tg",
            "com.docker.compose.version": "1.21.1",
            "com.docker.compose.volume": "vnode"
        },
        "Mountpoint": "/var/lib/docker/volumes/tg_vnode/_data",
        "Name": "tg_vnode",
        "Options": null,
        "Scope": "local"
    }
]
Files:
Dockerfile.node,
docker-compose.yml
Nginx dockerfile: Dockerfile.nginx
UPD: I have created a simplified repo to reproduce the question: repo
(There are some warnings on npm install; never mind, it installs and builds OK.) Eventually, when we open localhost:80, we see an empty page and 404 messages for the static files (vendor.js and app.js) in Chrome dev tools, but there should be a "React app: static loaded" message generated by a React script.
You need two changes. In your node service, add the volume like this:
volumes:
  - vnode:/usr/src/bundle_client
Since you want to share /usr/src/bundle_client, you should NOT use /usr/src, because that would share the full folder and its structure too.
Then in your nginx service add the volume like this:
volumes:
  - type: volume
    source: vnode
    target: /var/www/test/static
    volume:
      nocopy: true
The nocopy: true makes our intention clear that, on the initial mount, the contents of the container's mapped folder should not be copied into the volume. By default, the first container mapped to the volume populates it with the contents of its mapped folder; in your case you want that to be the node container.
Also, before testing, make sure you run the command below to remove the cached volumes:
docker-compose down -v
During my test, the container ended up with the files.
Explanation of what happens step by step
Dockerfile.node
...
COPY ./client /usr/src
...
docker-compose.yml
services:
  ...
  node:
    ...
    volumes:
      - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
      - vnode:/usr/src
    ...
volumes:
  vnode:
With this Dockerfile.node and this docker-compose section, docker-compose up creates a named volume holding the data saved in /usr/src.
Dockerfile.nginx
FROM nginx:latest
COPY ./server/nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /var/www/tg/static
EXPOSE 80
EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]
As a result, nginx containers created with docker-compose start with an empty /var/www/tg/static/.
docker-compose.yml
...
  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile.nginx
    container_name: tg_nginx
    restart: always
    volumes:
      - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
      - vnode:/var/www/tg/static
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - node
      - gunicorn
    networks:
      - nw_web_tg
volumes:
  vdata:
  vnode:
docker-compose up creates the vnode named volume (if it does not exist yet) and mounts it at /var/www/tg/static, which is empty by now.
So, at this point:
- the nginx container has /var/www/tg/static empty, because it was created empty (see the mkdir in Dockerfile.nginx)
- the node container has a /usr/src dir with the client files (copied there in Dockerfile.node)
- vnode has the content of /usr/src from node and of /var/www/tg/static from nginx
Definitively, to pass data from /usr/src in your node container to /var/www/tg/static in the nginx container, you need to do something that is not very pretty, because Docker hasn't developed another way yet: combine a named volume on the source folder with a bind mount to that volume's _data directory on the destination:
nginx:
  build:
    context: .
    dockerfile: ./Dockerfile.nginx
  container_name: tg_nginx
  restart: always
  volumes:
    - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
    - /var/lib/docker/volumes/vnode/_data:/var/www/tg/static
  ports:
    - "80:80"
    - "443:443"
  depends_on:
    - node
    - gunicorn
  networks:
    - nw_web_tg
Just change - vnode:/var/www/tg/static to - /var/lib/docker/volumes/vnode/_data:/var/www/tg/static in the docker-compose file.
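A quick way to verify the handoff after these changes (a sketch; note that compose may prefix the volume with the project name, e.g. tg_vnode, so check docker volume ls for the exact _data path before bind-mounting it):
# Start clean so no stale volume contents linger
docker-compose down -v
docker-compose up --build -d

# The static files built by the node container should now be visible to nginx
docker-compose exec nginx ls -la /var/www/tg/static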
