version: '3.7'
services:
  shinyproxy:
    build: /home/administrator/shinyproxy
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname==testnode
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    restart: always
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /home/administrator/shinyproxy/application.yml
        target: /opt/shinyproxy/application.yml
    ports:
      - 4000:4000
  mariadb:
    image: mariadb
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /home/administrator/mariadbdata
        target: /var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: xyz
    deploy:
      placement:
        constraints:
          - node.hostname==testnode
  keycloak:
    image: jboss/keycloak
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /home/administrator/compose/nginx/fullchain.pem
        target: /etc/x509/https/tls.crt
      - type: bind
        source: /home/administrator/compose/nginx/privkey.pem
        target: /etc/x509/https/tls.key
      - ./theme/:/opt/jboss/keycloak/themes/custom/
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=xyzasd
      - KEYCLOAK_PASSWORD=xyz
    ports:
      - 8443:8443
    deploy:
      placement:
        constraints:
          - node.hostname==testnode
    restart: "always"
  nginx_service:
    image: nginx_custom
    ports:
      - '80:80'
      - '443:443'
    build: ./nginx/
    networks:
      - sp-example-net
networks:
  sp-example-net:
    driver: overlay
    external: true
    attachable: true
This is my compose file. The keycloak service authenticates ShinyProxy users. I bring everything up with docker-compose up --build -d and it works. Sometimes I have to change small parts of my shinyproxy service and update everything with the same command; the changes get detected and the output looks like this:
compose_keycloak_1 is up-to-date
Recreating compose_shinyproxy_1 ... done
I am running the services in combination with nginx and get the following error:
nginx_service_1 | 2020/06/05 10:02:11 [error] 7#7: *54 connect() failed (113: No route to host) while connecting to upstream, client: 185.130.32.1, server: myserver.com, request: "GET / HTTP/1.1", upstream: "http://10.0.3.181:4000/", host: "myserver.com"
Running docker-compose down and then docker-compose up --build again works, but I do not want to take down all of my services just to update one.
Can anyone tell me why this might happen and how to solve it?
Edit: I am more and more sure this is an nginx issue, not a docker issue. Might that be the case?
My nginx.conf looks like this:
server {
    listen 443;
    server_name server.com;
    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_certificate /etc/certs/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/certs/privkey.pem; # managed by Certbot

    location / {
        proxy_pass http://shinyproxy:4000; ### service names taken over from docker-compose
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 600s;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
    }
    ....
Edit 2: The issue might be related to this:
https://github.com/docker/compose/issues/2003
Instead of doing docker-compose down and then docker-compose up --build for the whole project, you can start just that particular service by running docker-compose up -d serviceName. Have a look at the example below.
-d stands for detached (daemon) mode.
version: '3'
services:
  test:
    container_name: test
    image: 'busybox'
    command: 'sleep 5d'
  test1:
    container_name: test1
    image: 'busybox'
    command: 'sleep 4d'
$ docker-compose up -d
Creating network "proj_default" with the default driver
Creating test ... done
Creating test1 ... done
$ docker-compose ps
Name     Command    State   Ports
---------------------------------
test     sleep 5d   Up
test1    sleep 4d   Up
$ docker-compose up -d test1
Recreating test1 ... done
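Applied to the question's project, a hedged equivalent (service name taken from the compose file above) would be:

docker-compose up -d --build shinyproxy

This rebuilds and recreates only the shinyproxy service and leaves keycloak, mariadb, and nginx_service running.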
The solution was to restart nginx. It seems that after every recreation a container gets a new IP; nginx only resolves the upstream hostname at startup, so it keeps using the old IP and can no longer reach the container.
Reload nginx in its container whenever an upstream service has been recreated:
docker exec <nginx_container_id> nginx -s reload
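An alternative that avoids the manual reload, sketched under the assumption that nginx and shinyproxy share the same Docker network: make nginx resolve the name at request time through Docker's embedded DNS by using a variable in proxy_pass (the pgadmin config further down in this thread uses the same trick):

location / {
    resolver 127.0.0.11 valid=30s;          # Docker's embedded DNS
    set $upstream_shinyproxy shinyproxy;    # a variable forces runtime resolution
    proxy_pass http://$upstream_shinyproxy:4000;
}

With a variable in proxy_pass, nginx looks the name up again once the cached entry expires, so a recreated container with a new IP is picked up without a reload.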
Related
I am trying to set up pgAdmin with docker compose and nginx, but something weird is happening.
Every time I enter the site, pgAdmin redirects to /browser and also replaces the host with the container name, leaving me browsing https://pgadmin_container/browser. Yet when I go directly to https://my_url.com/browser it works. Is this a bug, or am I missing something?
Here is the nginx config:
server {
    listen 80;
    server_name some_name;
    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;

    location / {
        resolver 127.0.0.11 valid=30s;
        set $upstream_pgadmin pgadmin_container;
        proxy_pass http://$upstream_pgadmin:80;
        proxy_redirect off;
        proxy_buffering off;
    }
}
And here are the docker-compose contents:
pgadmin:
  container_name: pgadmin_container
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: someEmail
    PGADMIN_DEFAULT_PASSWORD: somePassword
    PGADMIN_CONFIG_SERVER_MODE: 'False'
  volumes:
    - ./pgadmin:/root/.pgadmin2
  ports:
    - "5050:80"
  networks:
    - shared
  restart: unless-stopped
Sorry for my bad English.
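A hedged guess, based only on the config above: with no Host header forwarded, pgAdmin builds its redirect URLs from the upstream hostname it sees, which here is the container name. Forwarding the original host usually keeps the public URL intact (same location block, one added line):

location / {
    resolver 127.0.0.11 valid=30s;
    set $upstream_pgadmin pgadmin_container;
    proxy_pass http://$upstream_pgadmin:80;
    proxy_set_header Host $host;    # preserve the public hostname in redirects
    proxy_redirect off;
    proxy_buffering off;
}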
I am trying to find a way to publish nginx, express, and Let's Encrypt SSL all together using docker-compose. There are many documents about this, so I referenced them and tried to make my own configuration. I succeeded in configuring nginx + SSL following this: https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
Now I want to add a sample Node.js express app to the nginx + SSL docker-compose setup, but for reasons I don't understand I get 502 Bad Gateway from nginx rather than express's initial page.
I am testing this app on a spare domain of mine, on an AWS EC2 Ubuntu 16 instance. I think there is no problem with the domain DNS or the security rule settings; ports 80, 443, and 3000 are all open already, and when I tested without the express app the nginx default page showed fine.
nginx conf in /etc/nginx/conf.d
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
version: '3'
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Dockerfile of express
FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I think SSL works fine, but there is some problem between the express app and nginx. How can I fix this?
proxy_pass http://localhost:3000
is proxying the request to port 3000 of the container that is running nginx. What you want instead is to connect to port 3000 of the container running express. For that, we need to do two things.
First, we make the express container visible to the nginx container under a predefined hostname. We can use links in docker-compose.
nginx:
  links:
    - "app:expressapp"
Alternatively, since links are now considered a legacy feature, a better way is to use a user-defined network. Define a network of your own with
docker network create my-network
and then connect your containers to that network in compose file by adding the following at the top level:
networks:
  default:
    external:
      name: my-network
All the services connected to a user-defined network can access each other via name, without explicitly setting up links.
Then in the nginx.conf, we proxy to the express container using that hostname:
location / {
    proxy_pass http://app:3000;
}
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
Define networks in your docker-compose.yml and configure your services with the appropriate network:
version: '3'
services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:
Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is only available to services connected to the backend network. The nginx service has a foot in both the backend and frontend networks, accepting incoming traffic on the frontend and proxying connections to the app on the backend (ref. multi-host networking).
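To make the expose/publish distinction concrete, a minimal contrast (a hedged sketch; the service names are illustrative):

services:
  app:
    expose:
      - "3000"   # container-to-container only, on shared networks
  edge:
    ports:
      - "80:80"  # published: reachable from the host and the outside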
With user-defined networks you can resolve the service name:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server app:3000 max_fails=3;
    }
    server {
        listen 80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name example.com;
        server_tokens off;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
}
Removing the container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 app=3. nginx will load-balance the traffic round-robin across the 3 app containers.
I think a source of confusion here may be the way the "localhost" designation behaves among services running under docker-compose. The way docker-compose orchestrates your containers, each container understands itself to be "localhost", so "localhost" does not refer to the host machine (and, if I'm not mistaken, there is no way for a container to access a service exposed on a host port, apart from maybe some security exploits). To demonstrate:
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '2999:3000' # expose app's port on the host's 2999
Rebuild
docker-compose build
docker-compose up
Tell the container running the express app to curl its own service on port 3000:
$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"
<!DOCTYPE html>
<html>
  <head>
    <title>Express</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1>Express</h1>
    <p>Welcome to Express</p>
  </body>
</html>
Tell app to try the same service, which we exposed on port 2999 of the host machine:
$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused
We will of course see this same behavior between running containers as well; in your setup, nginx was trying to proxy to its own service running on localhost:3000 (but there wasn't one, as you know).
Tasks
build a NodeJS app
add SSL functionality out of the box (that works automatically)
Solution
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
/{path_to_the_project}/docker-compose.yml
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
/{path_to_the_project}/.env
APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=myemail@gmail.com
Do not forget to point DOMAIN at your server before you run the containers there.
How does it work?
Just run docker-compose up --build -d.
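A hedged usage sketch, assuming the two files above sit in the project root (container names taken from the compose file):

docker-compose up --build -d
docker logs -f nginx-proxy    # watch the generated vhost configuration
docker logs -f letsencrypt    # watch certificate issuance for ${DOMAIN}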
I'm trying to run a Nuxt.js application with nginx as a proxy server in Docker containers, so I have two containers: nginx and nuxt.
Here is how I build the nuxt application:
FROM node:11.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV host 0.0.0.0
The build result seems to be fine.
Next is the nginx config:
server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I've also tried this nginx config:
upstream nuxt {
    server nuxt:3000;
}
server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And finally my docker-compose file
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
I can ping the nuxt container from the nginx container, and the relevant ports are open.
So the expected result is that I can access my nuxt application. However, I'm getting 502 Bad Gateway.
Do you have any ideas why nginx doesn't serve my nuxt application?
Thank you for any suggestions!
Node.js is listening on localhost:3000 instead of 0.0.0.0:3000 inside the container. Correct that and it will work.
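One hedged reading of the Dockerfile above: Docker ENV names are case-sensitive and Nuxt binds to the uppercase HOST variable, so the lowercase ENV host 0.0.0.0 may simply be ignored, leaving Nuxt on localhost:3000. A minimal sketch of the fix, assuming Nuxt 2 (setting server.host to '0.0.0.0' in nuxt.config.js would be equivalent):

# Nuxt reads process.env.HOST, so the variable name must be uppercase
ENV HOST 0.0.0.0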
It's always good to put your containers on a network if they need to talk to each other; the other way is to use the host network (which only works on Linux). Try the docker-compose.yml below: with it, the services should be able to reach each other by container name.
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
networks:
- my_net
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
networks:
- my_net
networks:
my_net:
driver: "bridge"
I'm trying to get docker-compose to run an NGINX reverse proxy, and I'm running into an issue. I know that what I am attempting is possible, as it is outlined here:
https://dev.to/domysee/setting-up-a-reverse-proxy-with-nginx-and-docker-compose-29jg
and here:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose#step-2-%E2%80%94-defining-the-web-server-configuration
My application is very simple - it has a front end and a back end (nextjs and nodejs), which I've put in docker-compose along with an nginx instance.
Here is the docker-compose file:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    container_name: nodejs
    restart: unless-stopped
  nextjs:
    build:
      context: ../.
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    container_name: nextjs
    restart: unless-stopped
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
    depends_on:
      - nodejs
      - nextjs
    networks:
      - app-network
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind
networks:
  app-network:
    driver: bridge
And here is the nginx file:
server {
    listen 80;
    listen [::]:80;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name patientplatypus.com www.patientplatypus.com localhost;

    location /back {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://nodejs:8000;
    }

    location / {
        proxy_pass http://nextjs:3000;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
Both of these are very similar to the DigitalOcean example, and I can't think of how they would differ enough to cause errors. I run it with a simple docker-compose up -d --build.
When I go to localhost:80 I get "page could not be found", and here is the result of my docker logs:
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:03:32$ docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED         STATUS                          PORTS                    NAMES
9c2e4e25e6d9   nginx:mainline-alpine   "nginx -g 'daemon of…"   2 minutes ago   Restarting (1) 14 seconds ago                            webserver
213e73495381   back_nodejs             "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes                    0.0.0.0:8000->8000/tcp   nodejs
03b6ae8f0ad4   back_nextjs             "npm start"              2 minutes ago   Up 2 minutes                    0.0.0.0:3000->3000/tcp   nextjs
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:05:41$ docker logs 9c2e4e25e6d9
2019/04/10 15:03:32 [emerg] 1#1: host not found in upstream "nodejs" in /etc/nginx/conf.d/nginx.conf:20
I'm pretty lost as to what could be going wrong. If anyone has any ideas please let me know. Thank you.
EDIT: SEE SOLUTION BELOW
The nginx webserver is on the network app-network, which is a different network from the other two services, which have no network defined. When no network is defined, docker-compose creates a default network for them to share.
Either copy the network setting to both of the other services (see the sketch below) or remove the network setting from the webserver service.
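A minimal sketch of the first option, assuming the compose file from the question (only the changed parts shown):

services:
  nodejs:
    # ...settings as before...
    networks:
      - app-network
  nextjs:
    # ...settings as before...
    networks:
      - app-network
  webserver:
    # ...settings as before...
    networks:
      - app-network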
I'm trying to deploy a stack with docker.
Here is how my stack works:
nginx-proxy (routes user requests to the right container)
website (simple nginx serving a website)
api (Django application, launched with gunicorn)
nginx-api (serves static and uploaded files, and proxies to the api container for API endpoints)
This is my docker-compose.yml:
version: '3.2'
services:
  website:
    container_name: nyl2pronos-website
    image: nyl2pronos-website
    restart: always
    build:
      context: nyl2pronos_webapp
      dockerfile: Dockerfile
    volumes:
      - ./logs/nginx-website:/var/log/nginx
    expose:
      - "80"
    deploy:
      replicas: 10
      update_config:
        parallelism: 5
        delay: 10s
  api:
    container_name: nyl2pronos-api
    build:
      context: nyl2pronos_api
      dockerfile: Dockerfile
    image: nyl2pronos-api
    restart: always
    ports:
      - 8001:80
    expose:
      - "80"
    depends_on:
      - db
      - memcached
    environment:
      - DJANGO_PRODUCTION=1
    volumes:
      - ./data/api/uploads:/code/uploads
      - ./data/api/static:/code/static
  nginx-api:
    image: nginx:latest
    container_name: nyl2pronos-nginx-api
    restart: always
    expose:
      - "80"
    volumes:
      - ./data/api/uploads:/uploads
      - ./data/api/static:/static
      - ./nyl2pronos_api/config:/etc/nginx/conf.d
      - ./logs/nginx-api:/var/log/nginx
    depends_on:
      - api
  nginx-proxy:
    image: nginx:latest
    container_name: nyl2pronos-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/letsencrypt
      - ./logs/nginx-proxy:/var/log/nginx
    deploy:
      placement:
        constraints: [node.role == manager]
    depends_on:
      - nginx-api
      - website
When I use docker-compose up, everything works fine. But when I try to deploy with docker stack deploy --compose-file=docker-compose.yml prod, my nginx config files can't find the different upstreams.
This is the error produced by my nginx-api service:
2019/03/23 17:32:41 [emerg] 1#1: host not found in upstream "api" in /etc/nginx/conf.d/nginx.conf:2
See my nginx.conf below:
upstream docker-api {
    server api;
}

server {
    listen 80;
    server_name xxxxxxxxxxxxxx;

    location /static {
        autoindex on;
        alias /static/;
    }

    location /uploads {
        autoindex on;
        alias /uploads/;
    }

    location / {
        proxy_pass http://docker-api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
If you see something wrong in my configuration or something I can do better, let me know!
This is happening because the nginx-api service comes up before the api service.
But I added the depends_on option?
You are right, and this option does work in the docker-compose up case, but unfortunately not on docker stack deploy, or, as the docs put it:
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
OK, so what can I do now?
Nothing, actually: it's not a bug. Docker swarm nodes (your stack services) are supposed to recover automatically on error (that's why you define the restart: always option), so it should work for you anyway.
If you are using the compose file only for deploying the stack and never with docker-compose up, you may remove the depends_on option completely; it means nothing to docker stack.
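If nginx still exits at startup because the upstream name is not resolvable yet, a hedged workaround, mirroring the resolver trick in the pgadmin config earlier in this thread, is to resolve the name at request time through Docker's embedded DNS so nginx can start regardless of service order:

server {
    listen 80;
    location / {
        resolver 127.0.0.11 valid=30s;   # Docker/swarm embedded DNS
        set $upstream_api api;           # a variable defers resolution to request time
        proxy_pass http://$upstream_api;
    }
}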