I want to containerize my web applications. Currently, I am using Apache to serve a couple of PHP apps.
Each app should be served from its own container.
Nginx should be reachable on ports 80/443 and, depending on the sub-route, proxy to one of the containers.
For example:
www.url.de/hello1 --> hello1:80
www.url.de/hello2 --> hello2:80
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx:latest
    container_name: reverse_proxy
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "443:443"
    networks:
      - app-network
    depends_on:
      - hello1
      - hello2
  hello1:
    build: ./test1
    image: hello1
    container_name: hello1
    expose:
      - "80"
    networks:
      - app-network
  hello2:
    build: ./test2
    image: hello2
    container_name: hello2
    expose:
      - "80"
    networks:
      - app-network
networks:
  app-network:
nginx.conf:
events {
}
http {
    error_log /etc/nginx/error_log.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        server_name wudio.de;

        location / {
            proxy_pass http://hello1:80;
        }
        location /hello1/ {
            proxy_pass http://hello1:80;
            rewrite ^/hello1(.*)$ $1 break;
        }
        location /hello2/ {
            proxy_pass http://hello2:80;
            rewrite ^/hello2(.*)$ $1 break;
        }
    }
}
If I run docker-compose up -d, only the container with the image webapp-test1 is online, and I can reach it with curl localhost:8081.
Nginx is not running. If I remove the line that mounts nginx.conf into the Nginx container, it works.
What am I doing wrong?
Edit 1:
http:// was missing. But proxying still does not work on the sub-routes; only location / works. How do I get /hello1 running?
Note the proxy_pass statement. You have to include the protocol in that statement. Also note how you can refer to the name of the service in your docker-compose.yml file (in this case hello1).
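For the sub-route part of the question, a minimal sketch (assuming the service names hello1 and hello2 from the compose file above): when the proxy_pass URI ends with a trailing slash, nginx replaces the matched location prefix with that URI, so no rewrite is needed:

```nginx
location /hello1/ {
    # trailing slash: /hello1/foo is forwarded to the container as /foo
    proxy_pass http://hello1:80/;
}
location /hello2/ {
    proxy_pass http://hello2:80/;
}
```

Without the trailing slash, nginx forwards the original URI (including the /hello1 prefix) to the upstream.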
events {
}
http {
    error_log /etc/nginx/error_log.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80;

        location / {
            try_files $uri @proxy;
        }
        location @proxy {
            proxy_pass http://hello1:80/;
        }
    }
}
Edit: Try this instead
events {
}
http {
    error_log /etc/nginx/error_log.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80;

        location / {
            try_files $uri @proxy;
        }
        location @proxy {
            if ($request_uri ~* "^\/hello1(\/.*)$") {
                set $url "http://hello1:80$1";
            }
            if ($request_uri ~* "^\/hello2(\/.*)$") {
                set $url "http://hello2:80$1";
            }
            proxy_pass "$url";
        }
    }
}
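One caveat worth flagging (an assumption about this setup, not confirmed in the thread): when proxy_pass is given a variable, nginx resolves the hostname at request time instead of at startup, which requires a resolver directive; inside a Compose network, Docker's embedded DNS server is at 127.0.0.11. A hedged sketch:

```nginx
server {
    listen 80;
    # Docker's embedded DNS; needed because proxy_pass below uses a variable
    resolver 127.0.0.11 valid=10s;

    location / {
        set $upstream "http://hello1:80$request_uri";
        proxy_pass $upstream;
    }
}
```

Without the resolver, variable-based proxy_pass typically fails with "no resolver defined to resolve ..." errors in the nginx log.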
Related
I have 2 servers: one with a dockerized nginx, and one with 3 dockerized web APIs accepting traffic on different ports (say 441, 442, 443), each with its own Swagger UI.
With limited knowledge of nginx, I am trying to reverse proxy to all the Swagger UI endpoints through the nginx container. This is how my nginx conf looks, but it doesn't work as expected; it would be great if someone could advise where I am going wrong.
I am able to hit the service via the exact-match location /FileService, which returns index.html. But index.html contains script calls whose static content nginx fails to serve.
index.html
<script src="./swagger-ui-bundle.js" charset="UTF-8"> </script>
<script src="./swagger-ui-standalone-preset.js" charset="UTF-8"> </script>
nginx.conf
server {
    listen 443 ssl http2;
    server_name www.webby.com;
    access_log /var/log/nginx/access.log;
    ssl_certificate /etc/ssl/yyyy.crt;
    ssl_certificate_key /etc/ssl/xxxx.key;
    ssl_protocols TLSv1.2;

    if ($http_referer = 'https://$host/FileService') {
        rewrite ^/(\w+) /swagger/fileservice/$1;
    }
    if ($http_referer = 'https://$host/PreProcess') {
        rewrite ^/(\w+) /swagger/preprocess/$1;
    }

    location = /FileService {
        proxy_pass 'http://appy.com:441/swagger/index.html';
    }
    location = /PreProcess {
        proxy_pass 'http://appy.com:442/swagger/index.html';
    }

    # curl http://appy.com:441/swagger/swagger-ui-bundle.js is giving the js on this container
    location ~* /swagger/fileservice(.*) {
        proxy_pass 'http://appy.com:441/swagger/$1';
    }
    location ~* /swagger/preprocess(.*) {
        proxy_pass 'http://appy.com:442/swagger/$1';
    }
}
The access log on nginx looks like:
Anyway, I struggled my way to an implementation. I'm not sure it's the right approach (I read on the internet that an if block inside a location context is evil), but it works for my case. Feel free to correct my answer.
server {
    listen 443 ssl http2;
    server_name www.webby.com;
    access_log /var/log/nginx/access.log;
    ssl_certificate /etc/ssl/yyyy.crt;
    ssl_certificate_key /etc/ssl/xxxx.key;
    ssl_protocols TLSv1.2;

    location = /FileService {
        proxy_pass 'http://appy.com:441/swagger/index.html';
    }
    location = /PreProcess {
        proxy_pass 'http://appy.com:442/swagger/index.html';
    }

    location ~ ^/swagger/(.*)$ {
        if ($http_referer = 'https://$host/FileService') {
            proxy_pass 'http://appy.com:441/swagger/$1';
        }
        if ($http_referer = 'https://$host/PreProcess') {
            proxy_pass 'http://appy.com:442/swagger/$1';
        }
    }
    location ~ ^/swagger(.*)$ {
        if ($http_referer = 'https://$host/FileService') {
            proxy_pass 'http://appy.com:441/swagger/swagger$1';
        }
        if ($http_referer = 'https://$host/PreProcess') {
            proxy_pass 'http://appy.com:442/swagger/swagger$1';
        }
    }
}
Okay, this is quite big, so just skip to the last section for a brief summary.
I have a demo application (netcore 6.0) built on a micro-service architecture; suppose we have 3 services:
identity (Auth service - IdentityServer4)
frontend (mvc - aspnet)
nginx (reverse proxy server)
All three run in a Docker environment. Here is the docker-compose file:
services:
  demo-identity:
    image: ${DOCKER_REGISTRY-}demoidentity:latest
    build:
      context: .
      dockerfile: Identity/Demo.Identity/Dockerfile
    ports:
      - 5000:80 # only expose port 80
    volumes:
      - ./Identity/Demo.Identity/Certificate:/app/Certificate:ro
    networks:
      - internal
  demo-frontend:
    image: ${DOCKER_REGISTRY-}demofrontend:latest
    build:
      context: .
      dockerfile: Frontend/Demo.Frontend/Dockerfile
    ports:
      - 5004:80 # only expose port 80
    networks:
      - internal
  proxy:
    build:
      context: ./nginx-reverse-proxy
      dockerfile: Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx-reverse-proxy/cert/:/etc/cert/
    links:
      - demo-identity
    depends_on:
      - demo-identity
      - demo-frontend
    networks:
      - internal
They are all designed to run internally, except nginx, which is the proxy server. Here is the nginx.config file:
worker_processes 4;
events { worker_connections 1024; }
http {
    upstream app_servers_identity {
        server demo-identity:80;
    }
    upstream app_servers_frontend {
        server demo-frontend:80;
    }
    server {
        listen 80;
        listen [::]:80;
        server_name demo-identity;
        return 301 https://identity.demo.local$request_uri;
    }
    server {
        listen 80;
        listen [::]:80;
        server_name identity.demo.local;
        return 301 https://$server_name$request_uri;
    }
    server {
        listen 443 ssl;
        server_name identity.demo.local;
        ssl_certificate /etc/cert/demo.crt;
        ssl_certificate_key /etc/cert/demo.key;
        location / {
            proxy_pass http://app_servers_identity;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
    server {
        listen 80;
        listen [::]:80;
        server_name frontend.demo.local;
        return 301 https://$server_name$request_uri;
    }
    server {
        listen 443 ssl;
        server_name frontend.demo.local;
        ssl_certificate /etc/cert/demo.crt;
        ssl_certificate_key /etc/cert/demo.key;
        location / {
            proxy_pass http://app_servers_frontend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
I also updated the hosts file to configure two virtual hosts, identity.demo.local and frontend.demo.local (the term "localhost" sometimes confuses me when using Docker).
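The hosts entries might look like this (a sketch; both names point at the machine publishing the nginx container's ports):

```
127.0.0.1 identity.demo.local
127.0.0.1 frontend.demo.local
```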
Then I set up the identity server like this:
...
builder.Services.Configure<IdentityOptions>(options => {
    // Default Password settings.
});
services.AddIdentityServer()
    .AddInMemoryIdentityResources(Config.Ids)
    .AddInMemoryApiResources(Config.Apis)
    .AddInMemoryClients(Config.Clients)
    .AddInMemoryApiScopes(Config.ApiScopes)
    .AddAspNetIdentity<ApplicationUser>()
    .AddSigningCredential(new X509Certificate2("./Certificate/demo_dev.pfx", "******"));
...
and here is the client static config
...
new Client
{
    ClientName = "MVC Client",
    ClientId = "mvc-client",
    AllowedGrantTypes = GrantTypes.Hybrid,
    RedirectUris = new List<string>{ "http://gateway.demo.local/signin-oidc" },
    RequirePkce = false,
    AllowedScopes = { IdentityServerConstants.StandardScopes.OpenId, IdentityServerConstants.StandardScopes.Profile },
    ClientSecrets = { new Secret("MVCSecret".Sha512()) }
}
...
In the Frontend service, I also configured OIDC as below:
...
services.AddAuthentication(opt =>
{
    opt.DefaultScheme = "Cookies";
    opt.DefaultChallengeScheme = "oidc";
}).AddCookie("Cookies", opt => {
    opt.CookieManager = new ChunkingCookieManager();
    opt.Cookie.HttpOnly = true;
    opt.Cookie.SameSite = SameSiteMode.None;
    opt.Cookie.SecurePolicy = CookieSecurePolicy.Always;
})
.AddOpenIdConnect("oidc", opt => {
    opt.SignInScheme = "Cookies";
    opt.Authority = "http://demo-identity";
    opt.ClientId = "mvc-client";
    opt.ResponseType = "code id_token";
    opt.SaveTokens = true;
    opt.ClientSecret = "MVCSecret";
    opt.ClaimsIssuer = "https://identity.demo.local";
    opt.RequireHttpsMetadata = false;
});
...
TL;DR: A micro-service application hosted on Docker, comprising IdentityServer, MVC, and Nginx. All services run internally and can only be accessed via the nginx proxy. The host names are also configured as virtual host names, which makes more sense.
Okay, here is the problem: when I access a protected API of the MVC app, it redirects me to the identity server (identity.demo.local) to log in. After a successful login, it should redirect me back to the MVC app, but it does not. After some research, I figured out the reason: after login, the identity server redirects me to the origin site with cookies containing the authentication info, but the redirect URI is not secured; it is http://frontend.demo.local instead of https. I'm not sure where this property is configured (I tried updating nginx.conf, but nothing changed). It still works correctly when I run from Visual Studio, without Docker.
Any help is appreciated.
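One thing worth checking (an assumption about this setup, not confirmed in the question): nginx terminates TLS and talks plain HTTP to the apps, so unless the proxy forwards the original scheme and the app honors it, ASP.NET will build redirect URIs with http. A hedged nginx sketch, reusing the upstream name from the config above:

```nginx
location / {
    proxy_pass http://app_servers_frontend;
    proxy_set_header Host $host;
    # tell the app the client-facing scheme was https
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

On the ASP.NET side, the forwarded-headers middleware (UseForwardedHeaders with ForwardedHeaders.XForwardedProto enabled) would also need to be configured for the header to take effect.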
I have 2 API containers (Docker) running on ports 10000 and 10003. I want to reverse proxy both of them so the APIs can be called through a single port, port 80. I am trying to use NGINX to do that, and this is my nginx configuration file:
worker_processes 1;
events { worker_connections 1024; }
http {
    server {
        listen 80;
        server_name container1;
        location / {
            proxy_pass http://10.10.10.50:10003;
        }
    }
    server {
        listen 80;
        server_name container2;
        location / {
            proxy_pass http://10.10.10.50:10000;
        }
    }
}
I found that it only works for container 1; a request for container 2 generates a 404 Not Found because the request goes to container 1 instead of container 2.
Finally, I found a solution using NGINX. All I needed to do was create a new NGINX container and reconfigure the URLs of my 2 API containers. The configuration file I wrote looks like this:
worker_processes auto;
events { worker_connections 1024; }
http {
    upstream container1 {
        server 10.10.10.50:10003;
    }
    upstream container2 {
        server 10.10.10.50:10000;
    }
    server {
        listen 80;
        location /container1/ {
            proxy_pass http://container1/;
        }
        location /container2/ {
            proxy_pass http://container2/;
        }
    }
}
Now I can make requests to both API containers through port 80, and each request is re-routed to the designated port (reverse proxy).
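The trailing slashes do the heavy lifting here: with proxy_pass ending in a URI (even just /), nginx replaces the matched location prefix with that URI before forwarding. The two variants behave differently (upstream name taken from the config above; only one of the two blocks would appear in a real server):

```nginx
# variant 1: /container1/users -> http://container1/users (prefix stripped)
location /container1/ {
    proxy_pass http://container1/;
}

# variant 2: /container1/users -> http://container1/container1/users (prefix kept)
location /container1/ {
    proxy_pass http://container1;
}
```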
I have a home server I use primarily for self-hosted applications (notes, git server, Jenkins server, etc.) that I'd like to organize so I don't have to remember all the ports for each container. I was hoping to setup a structure as follows:
http://home.server/
http://home.server/jenkins
http://home.server/cowyo
http://home.server/pihole
etc…
I'd like the root endpoint to route to a container running google's cadvisor for monitoring the status of all my containers, /jenkins to route to Jenkins, /pihole to pi-hole, etc.
Here's how I setup my docker-compose and nginx.conf files based on my understanding of the configuration schema:
docker-compose.yml:
version: '3'
services:
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
  cadvisor:
    image: 'google/cadvisor'
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    ports:
      - "8090:8090"
  pi-hole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "8070:8070/tcp"
      - "4437:4437/tcp"
    environment:
      TZ: 'America/Chicago'
      # WEBPASSWORD: 'set a secure password here or it will be random'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole/:/etc/pihole/'
      - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
    dns:
      - 127.0.0.1
      - 1.1.1.1
    # Recommended but not required (DHCP needs NET_ADMIN)
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
  cowyo:
    image: 'schollz/cowyo'
    ports:
      - "8050:8050"
    volumes:
      - './cowyo/data/:/data/'
    restart: unless-stopped
  jenkins:
    image: 'bitnami/jenkins:2'
    ports:
      - '8080:8080'
      - '8443:8443'
      - '50000:50000'
    volumes:
      - 'jenkins_data:/bitnami'
    restart: unless-stopped
volumes:
  jenkins_data:
    driver: local
nginx.conf:
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;
    upstream docker-cadvisor {
        server cadvisor:8090;
    }
    upstream docker-pihole {
        server pi-hole:8070;
    }
    upstream docker-cowyo {
        server cowyo:8050;
    }
    upstream docker-jenkins {
        server jenkins:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://docker-cadvisor;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
    server {
        listen 80;
        location /pihole {
            proxy_pass http://docker-pihole;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
    server {
        listen 80;
        location /cowyo {
            proxy_pass http://docker-cowyo;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
    server {
        listen 80;
        location /jenkins {
            proxy_pass http://docker-jenkins;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Any ideas on what I'm doing wrong? Any help would be appreciated.
Edit: The problem with the way it runs now is that when I go to one of the URLs above, I get a "this website can't be reached" error.
Edit 2: The problem above was caused by a bad nginx.conf file, which prevented the Docker container from starting properly. After resolving that, I get a "502 Bad Gateway".
Edit 3 [resolution]: My solution was in this Stack Overflow answer. TL;DR: I had to add the line "network_mode: host" to the nginx section of my docker-compose.yml file.
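As an aside (an alternative approach, not what the author did): network_mode: host works here because the backends publish host ports, but the more common Compose pattern is to put nginx on the same user-defined network as the other services and address them by service name and container port. A sketch, reusing service names from the compose file above (the network name proxy-net is hypothetical):

```yaml
services:
  webserver:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - proxy-net
  jenkins:
    image: 'bitnami/jenkins:2'
    networks:
      - proxy-net
networks:
  proxy-net:
```

With this layout, the upstream would read server jenkins:8080; and the backends would not need to publish any host ports at all.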
What is the error you are getting? Try the config below, and if you don't get the expected result, check the respective logs for the error.
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    upstream docker-cadvisor {
        server cadvisor:8090;
    }
    upstream docker-pihole {
        server pi-hole:8070;
    }
    upstream docker-cowyo {
        server cowyo:8050;
    }
    upstream docker-jenkins {
        server jenkins:8080;
    }
    server {
        listen 80;
        server_name home.server;
        fastcgi_param home.server $host;
        location / {
            proxy_set_header X-Forwarded-Host $host:$server_port;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
            proxy_pass http://docker-cadvisor;
        }
        access_log /tmp/cadvisor-access.log main;
        error_log /tmp/cadvisor-error.log error;
    }
    server {
        listen 80;
        server_name home.server;
        fastcgi_param home.server $host;
        location /jenkins {
            proxy_set_header X-Forwarded-Host $host:$server_port;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
            proxy_pass http://docker-jenkins;
        }
        access_log /tmp/jenkins-access.log main;
        error_log /tmp/jenkins-error.log error;
    }
    server {
        listen 80;
        server_name home.server;
        fastcgi_param home.server $host;
        location /cowyo {
            proxy_set_header X-Forwarded-Host $host:$server_port;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
            proxy_pass http://docker-cowyo;
        }
        access_log /tmp/cowyo-access.log main;
        error_log /tmp/cowyo-error.log error;
    }
    server {
        listen 80;
        server_name home.server;
        fastcgi_param home.server $host;
        location /pihole {
            proxy_set_header X-Forwarded-Host $host:$server_port;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
            proxy_pass http://docker-pihole;
        }
        access_log /tmp/pihole-access.log main;
        error_log /tmp/pihole-error.log error;
    }
}
I think you should change the upstreams to:
upstream docker-cadvisor {
    server 127.0.0.1:8090;
}
upstream docker-pihole {
    server 127.0.0.1:8070;
}
upstream docker-cowyo {
    server 127.0.0.1:8050;
}
upstream docker-jenkins {
    server 127.0.0.1:8080;
}
When one location uses root and another uses proxy_pass, nginx does not serve the URL /laravel; the response for that URL is "404 Not Found". If I remove the location / and /moda blocks, the URL /laravel works. I am doing this because I want to map Docker containers.
nginx.conf file:
server {
    listen 80;
    server_name local.monllar.com;

    location /laravel {
        root /var/www/local.monllar.com/public_html;
        index index.html index.htm;
    }
    location / {
        proxy_pass http://localhost:32768;
    }
    location /moda {
        proxy_pass http://localhost:2222/moda;
    }
}
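A likely cause of the 404 (an assumption; the question doesn't show the filesystem layout): with root, nginx appends the full request URI to the path, so /laravel/index.html is looked up at /var/www/local.monllar.com/public_html/laravel/index.html. If the files live directly under public_html, alias would map the prefix away instead:

```nginx
location /laravel/ {
    # alias replaces the matched prefix instead of appending the URI:
    # /laravel/index.html -> /var/www/local.monllar.com/public_html/index.html
    alias /var/www/local.monllar.com/public_html/;
    index index.html index.htm;
}
```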
I found the solution. This maps the IPs of the Docker containers to my local server names.
nginx.conf file:
server {
    listen 80;
    server_name local.monllar.com;
    location / {
        root /var/www/local.monllar.com/public_html;
        index index.html index.htm;
    }
}
server {
    listen 80;
    server_name local.moda.com;
    location / {
        proxy_pass http://localhost:2222/moda/;
    }
}
server {
    listen 80;
    server_name local.laravel.com;
    location / {
        proxy_pass http://localhost:32768;
    }
}
/private/etc/hosts file on Mac:
127.0.0.1 local.monllar.com
127.0.0.1 local.moda.com
127.0.0.1 local.laravel.com