I'm running a multi-container setup locally with docker-compose. The containers are a React front-end ('client'), a Node.js app ('api'), and an Nginx proxy that sits in front of the two. I have been using the following docker-compose setup for a while:
version: '3'
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /usr/app/node_modules
      - ./client:/usr/app
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /usr/app/node_modules
      - ./server:/usr/app
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '8080:80'
and my Nginx config is as follows:
upstream client {
    server client:3000;
}

upstream api {
    server api:5000;
}

server {
    listen 80;
    server_name _;

    location / {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        proxy_pass http://client;
    }

    location /api {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
Recently, when I tried to start up the containers, I got the following error:
nginx_1 | 2019/08/08 18:11:12 [emerg] 1#1: host not found in upstream "client:3000" in /etc/nginx/conf.d/default.conf:2
nginx_1 | nginx: [emerg] host not found in upstream "client:3000" in /etc/nginx/conf.d/default.conf:2
Any idea why Nginx is not able to find the upstream?
I have tried adding links to the nginx service block as follows:
nginx:
  restart: always
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  links:
    - client:client
    - api:api
  ports:
    - '8080:80'
I also tried 'depends_on' instead of links. After adding links, nginx no longer complains and exits with code 0. But when I visit localhost:8080, it gives a 301 redirect to https://localhost.
Any help or direction is greatly appreciated!
You should check the names of your services. Docker Compose will start your api service in a container named [YOUR_PROJECT_NAME]_api_1. Start only api and client and check the output of docker ps; you should get the list of container names.
In newer docker-compose syntax versions you can use external_links to map [YOUR_PROJECT_NAME]_api_1 to api.
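If you go that route, here is a rough sketch of the mapping described above, assuming the compose project name is myproject (a placeholder; check the real container names in the output of docker ps):

nginx:
  restart: always
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  external_links:
    - myproject_client_1:client   # full container name : alias that nginx resolves
    - myproject_api_1:api
  ports:
    - '8080:80'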
Related
I want to serve static HTML as a service with Docker and Nginx as a reverse proxy (there are also a Python backend and a MySQL container, which I excluded here).
I have got the following docker-compose file:
version: "3.7"
services:
  frontend:
    build: ./frontend
    container_name: frontend
    restart: always
    ports:
      - "5000:80"
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
Dockerfile for Frontend:
FROM nginx:alpine
COPY . /usr/share/nginx/html
In my nginx.conf I do this:
server {
    listen 80;

    location /frontend {
        proxy_pass http://frontend:5000/;
        #proxy_pass http://frontend:5000; -> also tried this
    }
}
Everything builds fine, but the proxy_pass does not work as expected.
Where I can reach my app:
http://localhost:5000/
Desired:
http://localhost/frontend
What did I do wrong?
The NGINX location should be the root (I imagine there is no /frontend web path):
location / {
    # inside the compose network, nginx reaches the frontend container on its internal port (80),
    # not on the published host port (5000)
    proxy_pass http://frontend:80/;
}
I have a ReactJS front-end application and a simple Python Flask API. I am using a docker-compose.yml to spin up both containers, like this:
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    ports:
      - 80:80
    links:
      - "backend:backend"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
    ports:
      - 8083:8083
I have used links so the frontend service can talk to the backend service using axios, as below:
axios.get("http://backend:8083/monitors").then(res => {
  this.setState({
    status: res.data
  });
});
I used docker-compose up --build -d to build and start the two containers, and they start without any issue and run fine.
But the frontend cannot talk to the backend.
I am using an AWS EC2 instance. When the page loads, I checked the browser console for errors and I get this:
VM167:1 GET http://backend:8083/monitors net::ERR_NAME_NOT_RESOLVED
Can someone please help me?
The backend service is up and running.
Your links only work inside the Docker network; the axios request runs in the browser, which cannot resolve the hostname backend (hence the ERR_NAME_NOT_RESOLVED). You can use nginx as a reverse proxy for both.
The compose file:
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
  proxy:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/example.conf
    ports:
      - 80:80
minimal nginx config (nginx.conf):
server {
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://frontend:80;
    }
}

server {
    server_name api.example.com;
    server_tokens off;

    location / {
        proxy_pass http://backend:8083;
    }
}
The request hits the nginx container and is routed to the right container according to the domain, so the browser would call http://api.example.com/monitors instead of http://backend:8083/monitors.
To use example.com and api.example.com locally, you need to edit your hosts file:
Linux: /etc/hosts
Windows: c:\windows\system32\drivers\etc\hosts
Mac: /private/etc/hosts
127.0.0.1 example.com api.example.com
Here is the GitHub repo: https://github.com/irahulsah/mutlicontainerapp
Please visit it for more info. I have the same setup and the same 'host not found in upstream "client:3000"' error as in the question above; please help me fix it.
I also tried 'depends_on', but I am still getting the host not found error for client:3000. Any idea on how to fix this?
Any help or direction is greatly appreciated!
I had this exact same issue today. I solved it by attaching all the containers referenced by the nginx upstream blocks to the same Docker virtual network.
Also, make sure to explicitly define the container names. If I am not wrong, docker-compose prefixes your service names with the project name (by default, the directory name).
service_one:
  networks:
    - app-network            # define the network
  container_name: backend    # the "backend" name must be used in the upstream directive in the nginx configuration
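For illustration, a rough sketch of how that could look for the nginx and api services from the question above; the app-network name and the top-level networks block are my assumptions, not taken from the original compose file:

services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    networks:
      - app-network            # nginx must share a network with its upstreams
    ports:
      - '8080:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    container_name: api         # matches "server api:5000;" in the upstream block
    networks:
      - app-network
networks:
  app-network:
    driver: bridge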
My docker-compose.yaml is
version: '3'
services:
  nginx:
    restart: always
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "8000:8000"
    network_mode: "host" # Connection between containers
  web:
    build: .
    image: app-image
    ports:
      - "80:80"
    volumes:
      - .:/app-name
    command: uwsgi /app-path/web/app.ini
NGINX conf file is
upstream web {
    server 0.0.0.0:80;
}

server {
    listen 8000;
    server_name web;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias "/app-static/";
    }

    location / {
        proxy_pass http://web;
    }
}
So basically I have Django and uWSGI in one container ('web') and NGINX in another ('nginx'). I linked the two with NGINX as a proxy and both work fine. (I somehow needed network_mode: "host"; without it, this didn't work.)
Since they are different containers, I cannot use a .sock file (unless I use some volume hacks to share the .sock file, which is not good!).
Even though this works, I have been asked to avoid going through the NGINX proxy, so is there any other way to connect these two?
Searching didn't get me any alternatives. I tried…
I configured my django-uwsgi-nginx stack using docker-compose with the following files.
From the browser, "http://127.0.0.1:8000/" works fine and gives me the Django default page.
From the browser, "http://127.0.0.1:80" throws a 502 Bad Gateway.
dravoka-docker.conf
upstream web {
    server 0.0.0.0:8000;
}

server {
    listen 80;
    server_name web;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias "/dravoka-static/";
    }

    location / {
        include uwsgi_params;
        proxy_pass http://web;
    }
}
nginx/Dockerfile
FROM nginx:latest
RUN echo "---------------------- I AM NGINX --------------------------"
RUN rm /etc/nginx/conf.d/default.conf
ADD sites-enabled/ /etc/nginx/conf.d
RUN nginx -t
web is just from "django-admin startproject web"
docker-compose.yaml
version: '3'
services:
  nginx:
    restart: always
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "80:80"
  web:
    build: .
    image: dravoka-image
    ports:
      - "8000:8000"
    volumes:
      - .:/dravoka
    command: uwsgi /dravoka/web/dravoka.ini
Dockerfile
# Ubuntu base image
FROM ubuntu:latest
# Some installs........
EXPOSE 80
When you say "from the docker instance", are you running curl from within the container, or are you running the curl command from your local machine?
If you are running it from your local machine, update your docker-compose web service to the following:
...
  web:
    build: .
    image: dravoka-image
    expose:
      - "8000"   # expose takes the container port only, not a host:container mapping
    volumes:
      - .:/dravoka
    command: uwsgi /dravoka/web/dravoka.ini
and try again.
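For context, a small hedged sketch (not from the post above; service names and images are placeholders) of the difference between ports and expose that this answer relies on:

services:
  published:
    image: nginx         # placeholder image
    ports:
      - "8000:80"        # publishes container port 80 on host port 8000; curl from your local machine works
  internal:
    image: nginx         # placeholder image
    expose:
      - "80"             # not published on the host; only other containers on the compose network can reach internal:80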