I have a React app which I want to serve through nginx. The app makes API requests to a Node.js Express server. I decided to dockerize nginx together with the static website files (from the React build) in one docker-compose file, and created another docker-compose file for the Express server. The containers currently run on my laptop and use localhost. I haven't been able to get them to work for a long time, and unfortunately there's not much information about this setup online. (Both the website and the Express server work when not inside Docker.)
First of all, I created a new Docker network:
docker network create --driver bridge appstore-net
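Since both compose files will declare this network as external, it is worth confirming that it exists (and, once the stacks are up, that both containers have joined it). A quick check:

# list networks and inspect which containers are attached
docker network ls | grep appstore-net
docker network inspect appstore-net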
This is my docker-compose file for the website and nginx:
version: '3.5'
services:
  appstore-front:
    container_name: appstore-front-production
    build:
      context: .
      dockerfile: Dockerfile-prod
    ports:
      - '80:80'
    networks:
      - appstore-net
    external_links:
      - appstore-bl-server-production
networks:
  appstore-net:
    external: true
This is my docker-compose file for the Express server:
version: '3'
services:
  appstore-bl-server:
    container_name: appstore-bl-server-production
    build:
      dockerfile: Dockerfile-prod
      context: .
    volumes:
      - ".:/usr/src/app"
    ports:
      - "3000:3000"
    networks:
      - appstore-net
networks:
  appstore-net:
    external: true
This is my nginx configuration:
server {
    listen 80;
    listen [::]:80;

    # Docker DNS
    resolver 127.0.0.11;

    server_name localhost;

    access_log /var/log/nginx/appstore.access.log;
    error_log /var/log/nginx/appstore.error.log;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # hack to keep nginx from resolving the container's host at startup
        set $docker_host "appstore-bl-server";
        proxy_pass http://$docker_host:3000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
As you can see, I'm using the Docker DNS resolver as well as a proxy_pass to a URL built from the name of the Express service.
How can I make it work?
EDIT: I found two issues that could have been the reason:
1) external_links needs to refer to the container name, not the service name.
2) The docker_host variable in nginx needs to refer to the service name, not the container name.
With these corrections the setup works (I've corrected the values above as well).
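To make those two corrections concrete, here is a minimal sketch of where each name goes (both names come from the compose files above):

# docker-compose: the service key vs. the container_name value
services:
  appstore-bl-server:                                # service name: use this in nginx
    container_name: appstore-bl-server-production    # container name: use this in external_links

# nginx: resolve by the service name
set $docker_host "appstore-bl-server";
proxy_pass http://$docker_host:3000;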
I've searched a lot of online material, but I wasn't able to find a solution for my problem. I'll try to make it as clear as possible. I think I'm missing something, and maybe someone with more experience on the server configuration side has the answer.
I have a MERN stack app and I'm trying to deploy it on a DigitalOcean droplet, using Docker. All good so far, everything runs as it should, except for the fact that I'm not able to access my app by its domain. It works perfectly if I use the IP of the droplet.
What I've checked so far:
I checked my ufw status and I have both HTTP and HTTPS enabled.
The domain is from GoDaddy and it's live, linked with the proper nameservers from DigitalOcean.
In the Domains section on DigitalOcean everything is set as it should be; I have the proper CNAME records pointing to the IP of my droplet.
A direct ping to my domain works fine (it returns the correct IP).
I also checked DNS lookup tools and everything seems to be linked just fine (a curl check for this is sketched below).
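Since DNS itself checks out, one way to isolate nginx's server_name matching is to send the Host header by hand. A minimal check, with placeholders for the real domain and droplet IP:

# should behave exactly like browsing to http://my-domain.com
curl -v -H "Host: my-domain.com" http://DROPLET_IP/

# compare with a plain request to the IP (which already works)
curl -v http://DROPLET_IP/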
When it comes to the Docker containers, I have three of them: client, backend, and nginx.
This is what my docker-compose file looks like:
version: '3'
services:
  nginx:
    container_name: jtg-nginx
    depends_on:
      - backend
      - client
    restart: always
    image: host-of-my-image-nginx:latest
    networks:
      - local-net
    ports:
      - '80:80'
  backend:
    container_name: jtg-backend
    image: host-of-my-image-backend:latest
    ports:
      - "5000:5000"
    volumes:
      - logs:/app/logs
      - uploads:/app/uploads
    networks:
      - local-net
    env_file:
      - .env
  client:
    container_name: jtg-client
    stdin_open: true
    depends_on:
      - backend
    image: host-of-my-image-client:latest
    networks:
      - local-net
    env_file:
      - .env
networks:
  local-net:
    driver: bridge
volumes:
  logs:
    driver: local
  uploads:
    driver: local
I have two instances of Nginx: one inside the client container and one in its own container.
This is the default.conf from the client:
server {
    listen 3000;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
Now comes the most important part. This is the default.conf used inside the main Nginx container:
upstream client {
    server client:3000;
}

upstream backend {
    server backend:5000;
}

server {
    listen 80;
    server_name my-domain.com www.my-domain.com;

    location / {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /backend {
        rewrite /backend/(.*) /$1 break;
        proxy_pass http://backend;
    }
}
I really don't understand what's wrong with this configuration; I think it's something very small that I'm missing.
Thank you!
If you want to set up a domain name in front, you'll need a webserver instance that allows you to proxy_pass your hostname to your container.
So this is what you may want to do:
server {
    listen 80;
    server_name my-domain.com www.my-domain.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /backend {
        rewrite /backend/(.*) /$1 break;
        proxy_pass http://backend;
    }
}
The mystery was solved: after adding an SSL certificate, everything works as it should.
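For reference, one way the certificate could have been obtained for a dockerized setup is certbot's standalone mode. A sketch, under the assumption that port 80 is temporarily free and with the domain as a placeholder:

# stop the proxy briefly, then let certbot answer the HTTP challenge itself
docker run --rm -p 80:80 \
  -v letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly --standalone \
  -d my-domain.com -d www.my-domain.com

The resulting fullchain.pem and privkey.pem under /etc/letsencrypt/live/my-domain.com/ would then be mounted into the nginx container and referenced via ssl_certificate and ssl_certificate_key in a listen 443 ssl server block.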
I am using Docker for Windows and want to set up nginx as a reverse proxy. All is working fine, but if I define a proxy to my localhost I always get a 502 or 504 error. I thought setting an extra_host would solve my problem, but it didn't. Is there any other IP that I can try to set as the host, or is something else wrong?
docker-compose.yml:
version: '3'
volumes:
  etc:
    driver: local
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    volumes:
      - ./etc:/etc/nginx
    ports:
      - 8088:80
    extra_hosts:
      - localhost:127.0.0.1
nginx.conf:
user nginx;
worker_processes 1;

events {
}

http {
    server {
        listen 80;
        server_name localhost;

        location /auth {
            proxy_pass http://localhost:8082/auth;
        }

        location /graphql {
            proxy_pass http://localhost:8080/graphql;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_http_version 1.1;
        }

        location ^~ / {
            proxy_pass http://localhost:8082/auth;
        }

        location /sso/login {
            proxy_pass http://localhost:8082/auth;
        }
    }
}
PS: all of the paths referred to are Docker containers, e.g. /auth is a Keycloak authentication server.
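For context on why this fails: inside the nginx container, localhost is the container itself, not the Windows host, so nothing is listening there on 8082 or 8080. If the Keycloak and GraphQL containers shared a compose network with nginx, the usual alternative would be to proxy to their service names instead. A sketch, with hypothetical service names keycloak and graphql:

location /auth {
    proxy_pass http://keycloak:8082/auth;
}

location /graphql {
    proxy_pass http://graphql:8080/graphql;
}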
I solved this problem myself. If you open the Docker settings (right-click on the Docker icon), you'll find the following network settings.
By default the DNS server is set to automatic -> change this to the fixed value 8.8.8.8.
Then you can access your containers with 10.0.75.2 instead of localhost.
Last but not least, add this address as an extra_host to your docker-compose file and fire it up.
version: '3'
volumes:
  etc:
    driver: local
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    volumes:
      - ./etc:/etc/nginx
    ports:
      - 8088:80
    extra_hosts:
      - localhost:10.0.75.2
Looking at the documentation, with Docker for Mac we can use host.docker.internal to resolve the internal IP used by the host:
location /api {
    proxy_pass http://host.docker.internal:8080;
}
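On Linux, where host.docker.internal is not defined by default, Docker 20.10+ can map the same name via the special host-gateway value. A compose sketch:

services:
  nginx:
    image: nginx:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"  # resolves to the host's gateway IP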
I have created a docker-compose file to spin up both an nginx and a Tomcat image. I use volumised files such as /etc/nginx/nginx.conf and /etc/nginx/conf.d/app.conf.
Same for Tomcat, but with XML config files and webapps.
Both spin up and run fine… on their own. I can browse to Nginx and get the welcome page, and the same for Tomcat, on their respective ports, 81/8080.
However, I cannot proxy the request to the backend Tomcat. I'll admit I'm an Apache user and have been for years, but I need to experiment.
My nginx.conf hasn't changed, it's still the default. I have an app.conf for the Tomcat application (below). I do try to CMD mv the default.conf in the Tomcat Dockerfile, but it still remains alongside my app.conf, so that may be causing the issue?
My app.conf is here (apologies, I couldn't get the code to output properly):
server {
    listen *:81;

    set $allowOriginSite *;
    proxy_pass_request_headers on;
    proxy_pass_header Set-Cookie;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log error;

    # Upload size unlimited
    client_max_body_size 0;

    location /evf {
        proxy_pass http://tomcat:8080;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header Set-Cookie;
    }
}
tomcat:8080 refers to the name of the service in my docker-compose file.
Any help would be appreciated!
Thank you,
Craig
docker-compose.yml for reference:
version: '3'
services:
  nginx:
    build: ./nginx
    image: nginx:evf
    command: nginx -g "daemon off;"
    networks:
      - evf
    container_name: evf-nginx
    volumes:
      - ./volumes/config/nginx-evf.conf:/etc/nginx/conf.d/nginx-evf.conf
      - ./volumes/config/default.conf.disabled:/etc/nginx/conf.d/default.conf.disabled
    ports:
      - "81:80"
  tomcat:
    image: tomcat
    working_dir: /usr/local/tomcat
    volumes:
      - ./volumes/config/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
      - ./volumes/webapps/EVF.war:/usr/local/tomcat/webapps/EVF.war
    networks:
      - evf
    container_name: evf-tomcat
    ports:
      - "8080:8080" # expose 8080 externally to test connectivity
networks:
  evf:
In your nginx conf you have listen *:81, but you are exposing port 80 with "81:80".
So either expose port 81 with "81:81" or change your nginx config to listen *:80.
If the second option does not work, try to replace the original nginx config by changing the volume mapping in your docker-compose.yml:
volumes:
  - ./nginx/nginx-evf.conf:/etc/nginx/conf.d/default.conf
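Either way, you can confirm exactly which configuration the running nginx has loaded (and whether the stock default.conf is still active) with nginx's -T flag:

# dump the full merged configuration from the running container
docker exec evf-nginx nginx -T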
I am attempting to use Docker to help deploy an application. The idea is to have two containers. One is the front-end containing Nginx and an Angular app.
FROM nginx
COPY ./dist/ /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/nginx.conf
It is supposed to contact a Spring Boot based API generated using the gradle-docker plugin and the Dockerfile recommended by Spring:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
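For completeness, the two images would be built roughly like this before composing them up; the JAR path is an assumption based on a standard Gradle layout:

# front-end, from the Angular project root after ng build
docker build -t com.midamcorp/emp_front:latest .

# API; JAR_FILE is the build ARG declared in the Dockerfile above
docker build --build-arg JAR_FILE=build/libs/app.jar \
  -t com.midamcorp/employee_search:latest .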
They seem to run fine individually (I can access them on my development machine); however, I am having trouble connecting the two.
My docker-compose.yml file:
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - app
  api:
    image: com.midamcorp/employee_search:latest
    ports:
      - 8080:8080
    networks:
      - app
networks:
  app:
Based upon my understanding of the Docker documentation on networks, I was under the impression that the containers would be placed in the same network and could thus interact, with the service name (for example, api) acting as the "host". On that assumption, I am attempting to access the API from the Angular application through the following:
private ENDPOINT_BASE: string = "http://api:8080/employee";
This returns an error: Http failure response for (unknown url): 0 Unknown Error.
To be honest, the samples I have looked at used this concept (substituting the service name for the host to connect two containers) for database applications, not HTTP. Is what I am attempting to accomplish not possible?
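A detail worth spelling out: a service name such as api only resolves inside the Docker network, that is, for container-to-container traffic. The Angular code, however, executes in the user's browser on the host machine, which knows nothing about Docker's embedded DNS; hence the unknown-url error. One common arrangement (a sketch, not the author's final code; the EDIT below takes a similar route via a published proxy port) is to call a relative path and let the nginx that serves the app proxy it onward:

// hypothetical relative endpoint; assumes a matching "location /api" proxy block in nginx
private ENDPOINT_BASE: string = "/api/employee";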
EDIT:
My nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
EDIT:
Updated nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream docker-java {
        server api:8080;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8081;
        server_name localhost;

        location / {
            proxy_pass http://docker-java;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
and docker-compose.yml
version: '3'
services:
webapp:
image: com.midamcorp/emp_front:latest
ports:
- 80:80
- 443:443
networks:
- app
depends_on:
- api
api:
image: com.midamcorp/employee_search:latest
networks:
- app
networks:
app:
And the client / Angular app uses the following to contact the API: private ENDPOINT_BASE: string = "http://localhost:8081/employee";
Output from docker ps
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
947eb757eb4b b28217437313 "nginx -g 'daemon of…" 10 minutes ago Up 10 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp employee_service_webapp_1
e16904db67f3 com.midamcorp/employee_search:latest "java -Djava.securit…" 10 minutes ago Up 10 minutes employee_service_api_1
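Note, incidentally, that this docker ps output shows only ports 80 and 443 published for the webapp container, so a browser request to localhost:8081 cannot reach the listen 8081 server block; the port would also need to appear in the compose file, along these lines:

webapp:
  ports:
    - 80:80
    - 443:443
    - 8081:8081  # publish the API proxy's listen port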
The problem you are experiencing is not anything weird. It's just that you did not explicitly name your containers, so Docker generated the names by itself. nginx will resolve employee_service_api_1, but will not recognize just api. Open your webapp container and take a look at your hosts file (cat /etc/hosts): it will show you employee_service_api_1 and its IP address.
How to fix it: add container_name to your docker-compose.yml:
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    container_name: employee_webapp
    ports:
      - 80:80
      - 443:443
    networks:
      - app
    depends_on:
      - api
  api:
    image: com.midamcorp/employee_search:latest
    container_name: employee_api
    networks:
      - app
networks:
  app:
I always refrain from using "simple" names (i.e. just api), because on my system multiple containers with similar names might show up, so I add a prefix. In this case I named the api container employee_api, and nginx will resolve that name once you restart your containers.
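Whichever naming scheme you settle on, you can verify what actually resolves from inside the proxy container. A quick check (getent is available in the Debian-based nginx image):

# run from the host; prints an IP if the name resolves inside the container
docker exec employee_webapp getent hosts employee_api
docker exec employee_webapp getent hosts api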
I'm looking for a way to configure Nginx to access hosted services through a subdomain of my server. Those services and Nginx are instantiated with docker-compose.
In short, when typing jenkins.192.168.1.2, I should reach Jenkins hosted on 192.168.1.2 and proxied through Nginx.
A quick look at what I currently have: it doesn't work without a top-level domain name, so it works fine on play-with-docker.com, but not locally with, for example, 192.168.1.2.
server {
    server_name jenkins.REVERSE_PROXY_DOMAIN_NAME;

    location / {
        proxy_pass http://jenkins:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
To get an idea of what I want: https://github.com/Ivaprag/devtools-compose
My overall goal is to access remote docker containers without modifying clients' DNS service.
Unfortunately nginx doesn't support subdomains on IP addresses like that.
You would either have to modify the client's hosts file (which you said you didn't want to do)...
Or you can set up nginx to route by path instead, like so:
location /jenkins {
    proxy_pass http://jenkins:8080;
    ...
}

location /other-container {
    proxy_pass http://other-container:8080;
}
which would allow you to access Jenkins at 192.168.1.2/jenkins.
Or you can try to serve your different containers through different ports, e.g.:
server {
    listen 8081;
    location / {
        proxy_pass http://jenkins:8080;
        ...
    }
}

server {
    listen 8082;
    location / {
        proxy_pass http://other-container:8080;
        ...
    }
}
And then access Jenkins from 192.168.1.2:8081/
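If you go with the port-per-service variant, remember that each extra listen port must also be published on the nginx container. A compose fragment sketch:

nginx:
  ports:
    - "80:80"
    - "8081:8081"  # jenkins
    - "8082:8082"  # other-container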
If you are already using docker-compose, I recommend using the jwilder nginx-proxy container.
https://github.com/jwilder/nginx-proxy
This allows you to add an unlimited number of web service containers behind the defined nginx proxy, for example:
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - "/etc/nginx/vhost.d"
    - "/usr/share/nginx/html"
    - "/var/run/docker.sock:/tmp/docker.sock:ro"
    - "nginx_certs:/etc/nginx/certs:rw"

nginx:
  build:
    context: ./docker/nginx/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host1.com

nginx_2:
  build:
    context: ./docker/nginx_2/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host2.com

apache_1:
  build:
    context: ./docker/apache_1/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host3.com
The nginx-proxy container mounts the host's Docker socket file in order to get information about the other running containers; if any of them has the environment variable VIRTUAL_HOST set, it will add it to its configuration.
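Once the proxy is up, the Host-based routing can be tested without touching DNS by supplying the header manually:

# should be answered by the container whose VIRTUAL_HOST is www.host1.com
curl -H "Host: www.host1.com" http://localhost/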
I was trying to configure subdomains in nginx (host) for two virtual hosts in one LXC container.
The way it worked for me:
For Apache (in the container), I created two virtual hosts: one on port 80 and the other on port 90.
To enable port 90 in apache2 (container), it was necessary to add the line "Listen 90" below "Listen 80" in /etc/apache2/ports.conf.
For NGINX (host machine), I configured two domains, both on port 80, creating independent .conf files in /etc/nginx/sites-available, and created a symbolic link for each file in /etc/nginx/sites-enabled.
In the first NGINX file, myfirstdomain.conf, I redirect to http://my.contai.ner.ip:80.
In the second NGINX file, myseconddomain.conf, I redirect to http://my.contai.ner.ip:90.
That was it for me!
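For reference, the two host-side files described above would look roughly like this, assuming proxy_pass rather than an HTTP redirect and keeping my.contai.ner.ip as the placeholder from the description:

# /etc/nginx/sites-available/myfirstdomain.conf
server {
    listen 80;
    server_name myfirstdomain.com;
    location / {
        proxy_pass http://my.contai.ner.ip:80;
    }
}

# /etc/nginx/sites-available/myseconddomain.conf
server {
    listen 80;
    server_name myseconddomain.com;
    location / {
        proxy_pass http://my.contai.ner.ip:90;
    }
}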