I have a Docker container running NGINX. An SSL cert is currently copied into the container at build time. I would like to avoid this approach and instead pass the cert in at runtime, so it is not stored in the image. In the docker-compose file I have mounted the public and private portions of the SSL cert as volumes, and I have removed the COPY commands from the Dockerfile that baked them into the image. However, when running docker-compose up I get an error that the certificate cannot be loaded because it does not exist. Any advice on how I can accomplish this would be helpful. Thanks!
Docker Compose
version: "3"
services:
nginx:
container_name: nginx
build: nginx
volumes:
- ./:/var/www
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/files/localhost.crt:/etc/nginx/ssl/nginx.crt
- ./nginx/files/localhost.key:/etc/nginx/ssl/nginx.key
ports:
- 80:80
- 443:443
networks:
- MyNetwork
networks:
MyNetwork:
Dockerfile
FROM nginx:latest
COPY nginx.conf /etc/nginx/conf.d/
Error
[emerg] 1#1: cannot load certificate "/etc/nginx/nginx/files/localhost.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/nginx/files/localhost.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
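The doubled path in the error (/etc/nginx/nginx/files/localhost.crt) suggests the config still uses the relative host-side path nginx/files/localhost.crt, which nginx resolves against its prefix /etc/nginx. The ssl_certificate directives need to point at the container-side mount targets from the compose file instead; a minimal sketch, assuming a typical server block in nginx.conf:

server {
    listen 443 ssl;

    # point at the container-side mount targets from docker-compose.yml,
    # not the host-side ./nginx/files/ paths
    ssl_certificate     /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
}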
Related
I am trying to generate/access documentation for dbt via this guide: https://docs.getdbt.com/reference/commands/cmd-docs. The issue is that I am getting a 'This Site Can't Be Reached' error. So I am referencing the post DBT docker: Docs Served but Not Accessible via Browser, which notes to add --publish to my docker-compose. Currently I have a makefile with the line below:
docker-compose -f docker-compose.yml
I thought to change it to the below, but it does not seem to work,
docker-compose -f docker-compose.yml -p
as I get the error:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-p": executable file not found in $PATH: unknown
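The error indicates that -p landed after the service name, so Docker tried to execute "-p" as the command inside the container. With docker-compose run, the publish flag takes a value and must come before the service name; alternatively --service-ports applies the ports: section from the compose file. A sketch, assuming the makefile ultimately calls docker-compose run against the localdev service defined below:

docker-compose -f docker-compose.yml run -p 8080:8080 localdev

# or publish every mapping from the ports: section as-is
docker-compose -f docker-compose.yml run --service-ports localdev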
I also tried to edit the docker-compose.yml file to include the below:
version: '3.9'
services:
  localdev:
    build: .
    image: localdev:latest
    ports:
      - "80:80"
      - "8080"
      - "8080:8080"
    stdin_open: true
    tty: true
    environment:
      - ENV=my-env
      - ADD_PATH=/bin/docker
    volumes:
      - $LOCAL_REPO_DIR:/usr/app/code
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /usr/local/bin:/bin/docker:ro
    command: /usr/app/entrypoint.sh
While this allowed me to spin up the container, I was still not able to access the webpage locally.
dbt docs serve by default serves on port 8080. If you want the localdev container's port 8080 to be accessible at 8080 on the host machine, you need:
ports:
  - "8080:8080"
You should only list port 8080 once in this list; if you list it alone, without a host port:
ports:
  - 8080
Docker will publish it on a random port on the host.
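Once the mapping is "8080:8080", the docs server inside the container also has to be up on that port. A sketch of the container-side commands, assuming dbt runs inside the localdev container (dbt docs serve accepts a --port flag):

dbt docs generate
dbt docs serve --port 8080

# then, from the host
curl http://localhost:8080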
I have a docker-compose setup that runs nginx with a static React site.
When I run docker-compose up, I try to share a folder containing my SSL certificates with the container, but nginx is not able to find them.
version: "3.3"
services:
app:
container_name: frontend
image: colymore/frontend:latest
ports:
- 80:80
- 443:443
restart: always
volumes:
- /etc/letsencrypt/live/colymore.me/:/certs/:ro
labels:
- com.centurylinklabs.watchtower.enable=true
networks:
net:
In the nginx config I have:
ssl_certificate /certs/fullchain.pem;
ssl_certificate_key /certs/privkey.pem;
And the file exists:
colymore#colymore.me$ ls /etc/letsencrypt/live/colymore.me/fullchain.pem
/etc/letsencrypt/live/colymore.me/fullchain.pem
But when I run nginx it gives an error:
52a5118c5c40_frontend | nginx: [emerg] cannot load certificate "/certs/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/certs/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
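A common cause with Let's Encrypt: the files under /etc/letsencrypt/live/<domain>/ are symlinks into ../../archive/<domain>/, so mounting only the live directory leaves dangling links inside the container. A sketch of one workaround, assuming that is the case here: mount the whole tree so the symlink targets exist, and point nginx at the full paths:

volumes:
  - /etc/letsencrypt:/etc/letsencrypt:ro

ssl_certificate     /etc/letsencrypt/live/colymore.me/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/colymore.me/privkey.pem;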
I'm trying to follow this guide to set up a reverse proxy for a docker container (serving a static file), using another container with an instance of nginx as a reverse proxy.
I expect to see my page served on /, but I am blocked at startup with the error message:
container_nginx_1 | 2020/05/10 16:54:12 [emerg] 1#1: host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
container_nginx_1 | nginx: [emerg] host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
nginx_docker_test_container_nginx_1 exited with code 1
I have tried many variations on the following virtual.conf file, and this is the current, based on the example given and various other pages:
upstream cont {
    server container1:8001;
}

server {
    listen 80;
    location / {
        proxy_pass http://cont/;
    }
}
If you are willing to look at a 3rd party site, I've made a minimal repo here; otherwise the most relevant files are below.
My docker-compose file looks like this:
version: '3'
services:
  container1:
    hostname: container1
    restart: always
    image: danjellz/http-server
    ports:
      - "8001:8001"
    volumes:
      - ./proj1:/public
    command: "http-server . -p 8001"
    depends_on:
      - container_nginx
    networks:
      - app-network
  container_nginx:
    build:
      context: .
      dockerfile: docker/Dockerfile_nginx
    ports:
      - 8080:8080
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
and the Dockerfile
# docker/Dockerfile_nginx
FROM nginx:latest
# add nginx config files to sites-available & sites-enabled
RUN mkdir /etc/nginx/conf.d/sites-available
RUN mkdir /etc/nginx/conf.d/sites-enabled
ADD projnginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-available/virtual.conf
RUN cp /etc/nginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-enabled/virtual.conf
# Replace the standard nginx conf
RUN sed -i 's|include /etc/nginx/conf.d/\*.conf;|include /etc/nginx/conf.d/sites-enabled/*.conf;|' /etc/nginx/nginx.conf
WORKDIR /
I'm running this using docker-compose up.
Similar: react - docker host not found in upstream
The problem is that nginx will not start if a hostname in an upstream block cannot be resolved. Here you have made container1 depend on container_nginx, so nginx starts first, and at that moment the container1 hostname does not resolve because container1 is not yet running. The dependency should be the reverse: the nginx container should depend on the app container.
Additionally, your nginx port binding maps 8080:8080, while your nginx config listens on 80.
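A minimal sketch of the corrected service definitions, assuming the rest of the compose file stays as posted:

services:
  container1:
    image: danjellz/http-server
    volumes:
      - ./proj1:/public
    command: "http-server . -p 8001"
    networks:
      - app-network
  container_nginx:
    build:
      context: .
      dockerfile: docker/Dockerfile_nginx
    depends_on:
      - container1   # reversed: nginx starts after the upstream host exists
    ports:
      - 8080:80      # host 8080 -> container 80, matching listen 80;
    networks:
      - app-network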
Hello, I'm new to the world of Docker, so I tried an installation with an NGINX reverse proxy (the jwilder image) and a Docker app.
I have installed both without SSL to make it easy. Since the Docker app seems to be installed at the root path, I want to separate the NGINX web server and the Docker app.
upstream example.com {
    server 172.29.12.2:4040;
}

server {
    server_name example.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://example.com;
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /app {
        proxy_pass http://example.com:4040;
    }
}
So with http://example.com I want to be directed to the index.html,
and with http://example.com/app to the Docker app.
Furthermore, when I build the installation I use expose: "4040" in docker-compose, so when I reload the NGINX configuration file with nginx -s reload, it warns me that port 4040 is not open.
With the configuration file I posted above, any path leads me to the Docker app.
I can't find a simple solution to my question.
As far as I understood, your logic is right: Docker is designed to run a single service per container. To reach your goal you still have a couple of things to look after. If EXPOSE 4040 was declared in your Dockerfile, that alone is not enough to make the service reachable; in the docker-compose file you also have to declare the ports. E.g. for nginx, you let the host system listen on all interfaces by adding
...
ports:
  - 80:80
...
That is the first thing. You also have to think about how you want your proxy to reach the "app": from the container network on the same node? If yes, you can add to the compose file:
...
depends_on:
  - app
...
where app is the declared name of your service in the docker-compose file. Like this, nginx is able to reach your app by the name app, so the proxy will point to it:
location /app {
    proxy_pass http://app:4040;
}
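Worth noting for the location / block in the question: when proxy_pass and root share a location, proxy_pass wins, which is why every path currently leads to the app. A sketch that keeps / static and proxies only /app, assuming the static site lives in /usr/share/nginx/html:

server {
    listen 80;
    server_name example.com;

    # serve the static site at /
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    # proxy only /app to the dockerized app
    location /app {
        proxy_pass http://app:4040;
    }
}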
In case you want to reach the "app" via the host network, maybe because one day it will run on another host, you can add an entry to the hosts file of the container running nginx with:
...
extra_hosts:
  - "app:10.10.10.10"
  - "appb:10.10.10.11"
...

and so on.
Reference: https://docs.docker.com/compose/compose-file/
Edit 01/01/2019: happy new year!
An example using a "huge" docker-compose file:
version: '3'
services:
  app:
    build: "./app"   # in case your Dockerfile is in an app dir
    image: "some image name"
    restart: always
    command: "command to start your app"
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app
In the above example nginx can reach your app just with the name app, so the proxy will point to http://app:4040.
systemd unit (start directly with docker, no compose)
[Unit]
Description=app dockerized service
Requires=docker.service
After=docker.service
[Service]
ExecStartPre=/usr/bin/sleep 1
ExecStartPre=/usr/bin/docker pull python:3.6-alpine
ExecStart=/usr/bin/docker run --restart=always --name=app -p 4040:4040 python:3.6-alpine # or your own built image
ExecStop=/usr/bin/docker stop app
ExecStopPost=/usr/bin/docker rm -f app
ExecReload=/usr/bin/docker restart app
[Install]
WantedBy=multi-user.target
With the above example you can reach the app at port 4040 on the host system (which listens for connections on port 4040 on all interfaces). To bind a specific interface, use -p 10.10.10.10:4040:4040; like this it will listen on port 4040 at address 10.10.10.10 (host machine).
docker-compose with extra_hosts:
version: '3'
services:
  app:
    build: "./app"   # in case your Dockerfile is in an app dir
    image: "some image name"
    restart: always
    command: "command to start your app"
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    extra_hosts:
      - "app:10.10.10.10"
Like the above example, the nginx service can reach the name app at 10.10.10.10.
Last but not least, extending services across compose files:
docker-compose.yml:
version: '2.1'
services:
  app:
    extends:
      file: /path/to/app-service.yml
      service: app
  nginx:
    extends:
      file: /path/to/nginx-service.yml
      service: nginx
app-service.yml:
version: "2.1"
service:
app:
build: "./app" # in case you docker file is in a app dir
image: "some image name"
restart: always
command: "command to start your app"
nginx-service.yml:

version: "2.1"
services:
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    extra_hosts:
      - "app:10.10.10.10"
I really hope the above are enough examples.
I created a Docker image development-certificates which contains a volume directory with several self-signed certificates for our development environment.
I now want to use these certificates in another container (such as the nginx container). How can you do this in docker-compose v3? In docker-compose v2, there is the volumes_from directive, but that is not possible anymore in v3.
You need to create named volumes instead. When a container first mounts an empty named volume at a path that already contains files in its image, Docker copies those files into the volume, which is how the certificates become visible to the nginx container as well:
version: '3'
services:
  certs:
    image: development-certificates
    volumes:
      - certificates:<path-to-certs>
  nginx:
    image: nginx
    volumes:
      - certificates:<path-to-certs>
volumes:
  certificates:
If the development-certificates container has been created separately, just remove the certs service above, get the name of the previously created volume, and add it to the volumes section:
docker volume ls   # find the name of the certs volume
version: '3'
services:
  nginx:
    image: nginx
    volumes:
      - certificates:<path-to-certs>
volumes:
  certificates:
    external:
      name: actual-name-of-volume
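A quick way to confirm the volume's actual name and contents before wiring it in, using only standard docker CLI commands:

docker volume ls                              # list volumes, find the certs one
docker volume inspect actual-name-of-volume   # shows the host mountpoint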