My docker-compose.yaml is
version: '3'
services:
  nginx:
    restart: always
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "8000:8000"
    network_mode: "host" # Connection between containers
  web:
    build: .
    image: app-image
    ports:
      - "80:80"
    volumes:
      - .:/app-name
    command: uwsgi /app-path/web/app.ini
NGINX conf file is
upstream web {
    server 0.0.0.0:80;
}

server {
    listen 8000;
    server_name web;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias "/app-static/";
    }

    location / {
        proxy_pass http://web;
    }
}
So basically I have Django and uWSGI in one container ('web') and NGINX in another ('nginx'). I linked the two through the NGINX proxy and both worked fine. (For some reason I needed network_mode: "host"; without it the connection didn't work.)
Since they are different containers, I cannot use a .sock file (unless I share the socket through a volume, which is a hack I'd rather avoid!).
Even though this works, I have been asked to avoid connecting them via the NGINX proxy, so is there any other way to connect these two?
Searching didn't turn up any alternatives.
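For reference, one commonly suggested alternative to a plain HTTP proxy_pass here is to have NGINX speak the uwsgi protocol to uWSGI over the Compose network, which also removes the need for network_mode: "host". This is only a minimal sketch under assumptions not in the original setup (uWSGI listening on TCP port 3031 via a socket = line in app.ini):

# in app.ini: expose a TCP socket instead of a Unix .sock file
# socket = 0.0.0.0:3031

# in the NGINX conf: talk the uwsgi protocol to the 'web' service by name
server {
    listen 8000;

    location / {
        include uwsgi_params;   # standard uwsgi parameter set shipped with nginx
        uwsgi_pass web:3031;    # 'web' resolves over the default Compose network
    }
}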
fhsmgr proxy works at / but not at any other location (404)
The fhsdir proxy gives a 404 at /dir, though when I browse to it directly at localhost:5000 I get the expected output, so the host is up and running. Also, nginx does not complain about an invalid host and exit, as I've seen it do before.
I have tried a trailing '/', so '/dir/', to no avail.
I have tried putting fhsmgr at '/mgr': I get the expected 404 at the index '/', but then a 404 again at '/mgr'.
I have tried it without proxy_redirect off; as well.
I have removed the upstream statements and just put in the container names directly.
Seemingly the only thing it will let me proxy is at '/', though I know I've proxied to other servers at different location paths in setups like this before.
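For what it's worth, subpath proxying usually hinges on pairing the trailing slashes on both the location and the proxy_pass so that nginx strips the /dir/ prefix before forwarding. A minimal sketch, assuming fhsdir serves its pages at its own root (this exact pairing does not appear in the config below):

location /dir/ {
    # with a URI part on proxy_pass, nginx replaces the matched
    # /dir/ prefix with / before forwarding the request
    proxy_pass http://fhsdir:5000/;
}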
-- docker compose
version: "3.7"
services:
fhsmgr:
build: fhsmgr
restart: always
fhsdir:
build: fhsdir
restart: always
ports:
- 5000:5000
nginx:
build: nginx
restart: always
ports:
- 80:80
environment:
- NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx
- FHSMGR_HOST=fhsmgr
- FHSMGR_PORT=5000
- FHSDIR_HOST=fhsdir
- FHSDIR_PORT=5000
-- nginx conf
events {}
http {
    # upstream fhsmgr {
    #     server ${FHSMGR_HOST}:${FHSMGR_PORT};
    # }
    # upstream fhsdir {
    #     server ${FHSDIR_HOST}:${FHSDIR_PORT};
    # }

    # a simple reverse-proxy
    server {
        listen 80 default_server;

        location / {
            proxy_pass http://fhsmgr:5000;
            proxy_redirect off;
        }

        location /dir {
            proxy_pass http://fhsdir:5000;
            proxy_redirect off;
        }
    }
}
I am modifying this project https://github.com/AwsGeek/lightsail-containers-nginx to get it to work for my use case. I haven't gotten to Lightsail yet; I'm just using docker compose locally.
You're missing container_name in your docker-compose; try this:
version: "3.7"
services:
fhsmgr:
build: fhsmgr
restart: always
container_name: fhsmgr # allow other containers to access by container_name
fhsdir:
build: fhsdir
restart: always
container_name: fhsdir
ports:
- 5000:5000
nginx:
build: nginx
restart: always
ports:
- 80:80
environment:
- NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx
- FHSMGR_HOST=fhsmgr
- FHSMGR_PORT=5000
- FHSDIR_HOST=fhsdir
- FHSDIR_PORT=5000
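Since the compose file sets NGINX_ENVSUBST_OUTPUT_DIR, the conf is presumably rendered by the official nginx image's envsubst templating, which reads /etc/nginx/templates/*.template at container start. A sketch of what such a template could look like (the nginx.conf.template file name and the /dir/ trailing-slash pairing are my assumptions, not taken from the repo):

# /etc/nginx/templates/nginx.conf.template
# rendered to ${NGINX_ENVSUBST_OUTPUT_DIR}/nginx.conf when the container starts
events {}
http {
    server {
        listen 80 default_server;

        location / {
            proxy_pass http://${FHSMGR_HOST}:${FHSMGR_PORT};
        }

        location /dir/ {
            proxy_pass http://${FHSDIR_HOST}:${FHSDIR_PORT}/;
        }
    }
}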
Here is the GitHub repo: https://github.com/irahulsah/mutlicontainerapp
Please visit it for more info, and please help me fix this error.
I'm running a multi-container Docker setup locally with docker-compose. The containers are a React front-end ('client'), a Node.js app ('api'), and an Nginx proxy that sits in front of the two. I have been using the following docker-compose setup for a while.
version: '3'
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /usr/app/node_modules
      - ./client:/usr/app
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /usr/app/node_modules
      - ./server:/usr/app
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '8080:80'
and my Nginx setup is as follows
upstream client {
    server client:3000;
}

upstream api {
    server api:5000;
}

server {
    listen 80;
    server_name _;

    location / {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        proxy_pass http://client;
    }

    location /api {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
Recently when I tried to start up the containers, I got the following error:
nginx_1 | 2019/08/08 18:11:12 [emerg] 1#1: host not found in upstream "client:3000" in /etc/nginx/conf.d/default.conf:2
nginx_1 | nginx: [emerg] host not found in upstream "client:3000" in /etc/nginx/conf.d/default.conf:2
Any idea why nginx is not able to find the upstream?
I have tried adding links to the nginx service block as follows:
nginx:
  restart: always
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  links:
    - client:client
    - api:api
  ports:
    - '8080:80'
I also tried depends_on, but I am still getting the host not found in upstream "client:3000" error. Any idea how to fix this would be deeply appreciated!
I had this exact same issue today. I solved it by attaching every container referenced by the nginx upstream directives to the same Docker virtual network.
Also, make sure to explicitly define the container names. If I am not wrong, docker-compose prefixes container names with the project name (by default, the name of the directory holding the YAML file).
service_one:
  networks:
    - app-network # define the network
  container_name: backend # the "backend" name must be used in the upstream directive in the nginx configuration
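Applied to the compose file above, a minimal sketch could look like the following (app-network is an illustrative name; the key points are that nginx joins the same network as the services it proxies and that the top-level networks: block defines it):

services:
  client:
    container_name: client
    networks:
      - app-network
  api:
    container_name: api
    networks:
      - app-network
  nginx:
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

(build, volumes, and ports stay as in the original file.)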
I have tried this:
NGINX reverse proxy not working to other docker container
and this:
Docker nginx-proxy : proxy between containers
and followed nginx config from here:
nginx proxy_pass to a linked docker container
I am simply trying to tell nginx to proxy to a linked api service on port 4000. I do not want to expose 4000 to the host machine because there will be multiple services running on this port.
This is my docker-compose.yml:
version: '3'
services:
  api:
    build: ./api
    image: myapi:latest
    container_name: api
  api_nginx:
    image: nginx:latest
    container_name: api_nginx
    depends_on:
      - api
    links:
      - api
    ports:
      - "80:80"
    environment:
      - NGINX_SERVER_NAME=localhost
    volumes:
      - ./nginx:/etc/nginx/conf.d
...
...
and my nginx server is super minimal:
upstream backend {
    server api;
}

server {
    listen 80;
    listen [::]:80;
    server_name ${NGINX_SERVER_NAME};

    location / {
        resolver 127.0.0.1;
        proxy_pass http://backend/$1;
    }
}
This is the error it's throwing:
...[error] 20#20: *1 no resolver defined to resolve api, client: 172.23.0.1, server: ${nginx_server_name}....
and the page shows a 502 Bad Gateway
What is going on? I've followed other people's nginx configs and it's not working; I have no idea why.
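For comparison, here is a minimal config that would typically work in this kind of setup, assuming the api container listens on port 4000. Compose's embedded DNS resolves service names when nginx starts, so no resolver directive is needed, and the $1 variable is empty here because no regex captured anything:

upstream backend {
    server api:4000; # upstream entries need the container's internal port
}

server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://backend; # pass the request URI through unchanged
    }
}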
I have set up a web application in docker which is currently running internal to the host at 172.19.0.3:8888. I want this web application accessible over the internet on port 443 (https), with requests to port 80 (HTTP) redirected to 443.
I plan to use an Nginx reverse proxy in a docker container to achieve this, but I do not know how to properly configure it to point at the docker container 172.19.0.3:8888. Accessing http://172.19.0.3:8888 from the host works.
Here is the guide I tried to follow, but it just didn't show how to point at a docker container specifically.
https://medium.com/#pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
Note
If I set the port 443 proxy_pass to http://example.org, it works, so the cert configuration is working correctly.
Web application
Running on 172.19.0.3:8888 internal to the host
docker-compose for Nginx and Certbot
My certs are coming back clean.
version: '3'
services:
  nginx:
    image: nginx:1.15-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Volumes/nginx:/etc/nginx/conf.d
      - ./Volumes/certbot/conf:/etc/letsencrypt
      - ./Volumes/certbot/www:/var/www/certbot
  certbot:
    image: certbot/certbot
    volumes:
      - ./Volumes/certbot/conf:/etc/letsencrypt
      - ./Volumes/certbot/www:/var/www/certbot
Nginx app.conf
server {
    listen 80;
    server_name forums.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name forums.example.com;

    ssl_certificate /etc/letsencrypt/live/forums.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/forums.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://172.19.0.3:8888/;
    }
}
Web Application
flarum:
  image: mondedie/docker-flarum:0.1.0-beta.8.1-stable
  container_name: flarum
  env_file:
    - ./flarum.env
  volumes:
    - ./Volumes/assets:/flarum/app/public/assets
    - ./Volumes/extensions:/flarum/app/extensions
    - ./Volumes/nginx:/etc/nginx/conf.d
  depends_on:
    - mariadb

mariadb:
  image: mariadb:10.2
  container_name: mariadb
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=flarum
    - MYSQL_USER=flarum
    - MYSQL_PASSWORD=password
  volumes:
    - ./Volumes/mysql/db:/var/lib/mysql
Docker Compose creates a separate network for each docker-compose.yaml file.
So you can add your web application as a service (e.g. webapp) in the current compose file and point nginx.conf directly at that service. Rather than using the IP, you can use the service name as a DNS name, which Docker resolves within the same network.
location / {
    proxy_pass http://webapp:8888/;
}
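Concretely, that would mean moving the flarum service into the same compose file as nginx (or attaching both to a shared network). A rough sketch under that assumption, trimmed to the relevant parts:

version: '3'
services:
  nginx:
    image: nginx:1.15-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Volumes/nginx:/etc/nginx/conf.d
      - ./Volumes/certbot/conf:/etc/letsencrypt
      - ./Volumes/certbot/www:/var/www/certbot
  flarum:
    image: mondedie/docker-flarum:0.1.0-beta.8.1-stable
    # no ports: entry needed; nginx reaches it over the compose network

# and in app.conf, replace the hard-coded IP:
#   proxy_pass http://flarum:8888/;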
I'm having trouble creating a reverse proxy and having it point at apps that are in other containers.
What I have now is a docker-compose for Nginx, and then I want to have separate Docker containers for several different apps and have Nginx direct traffic to those apps.
My Nginx docker-compose is:
version: "3"
services:
nginx:
image: nginx:alpine
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
My default.conf is:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    listen 80;
    server_name www.mydomain.com;

    location /confluence {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.50:8090/confluence;
    }
}
I can access confluence directly at: http://192.168.1.50:8090/confluence
My compose for confluence is:
version: "3"
services:
db:
image: postgres:9.6
container_name: pg_confluence
env_file:
- env.list
ports:
- "5434:5432"
volumes:
- ./pg_conf.sql:/docker-entrypoint-initdb.d/pg_conf.sql
- dbdata:/var/lib/postgresql/data
confluence:
image: my_custom_image/confluence:6.11.0
container_name: confluence
volumes:
- confluencedata:/var/atlassian/application-data/confluence
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
environment:
- JVM_MAXIMUM_MEMORY=2g
ports:
- "8090:8090"
depends_on:
- db
volumes:
confluencedata:
dbdata:
I am able to see the Nginx "Welcome" screen when I hit mydomain.com, but if I hit mydomain.com/confluence it returns a 404 Not Found.
So it looks like Nginx is running, just not sending the traffic to the other container properly.
========================
=== Update With Solution ===
========================
I ended up switching to Traefik instead of Nginx; it will also help when I take the next step and start learning k8s.
These network settings are what you need even if you stick with Nginx, though. I just didn't test them against Nginx, so hopefully they are helpful no matter which one you end up using.
For the confluence docker-compose.yml I added:
networks:
  proxy:
    external: true
  internal:
    external: false

services:
  confluence:
    ...
    networks:
      - internal
      - proxy
  db:
    ...
    networks:
      - internal
And for the traefik docker-compose.yml I added:
networks:
  proxy:
    external: true

services:
  reverse-proxy:
    networks:
      - proxy
I had to create the network manually with:
docker network create proxy
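If you stay with Nginx instead, the equivalent (untested, per the caveat above) would be to attach the nginx service to the same external proxy network and proxy by container name rather than by host IP:

services:
  nginx:
    image: nginx:alpine
    networks:
      - proxy
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf

networks:
  proxy:
    external: true

# default.conf would then use the container name:
#   proxy_pass http://confluence:8090/confluence;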
This is not really the correct way to use Docker.
If you are in a production environment, use a real orchestration tool (nowadays Kubernetes is the way to go).
If you are on your own computer, you can reference a container by name (or an alias) only if you use the same network AND that network is not the default one.
One way is to have only one docker-compose file.
Another way is to use the same network across your docker-compose files:
Create the network: docker network create --driver bridge my_network
Then use it in each docker-compose file you have:
networks:
  default:
    external:
      name: my_network
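With that block in every compose file, a service in one project can reach a service in another by container name over my_network. An illustrative sketch (the api name and port 4000 are placeholders, not taken from the answers above):

# docker-compose.yml of the app project, sharing the same external network
services:
  api:
    build: ./api
    container_name: api

networks:
  default:
    external:
      name: my_network

# the nginx project, on the same network, can then use:
#   proxy_pass http://api:4000;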