I can't seem to figure out how to run a Flask backend with webpack-dev-server. The Flask routes have a login decorator. Webpack serves my assets, but I can't access any backend routes. On the client side I prefix the routes with port :8080/someFlaskRoute, but that gets redirected to /login, which isn't served on port 8080.
route decorator:
import time
from functools import wraps
from flask import session, redirect, url_for, request

def login_required(f):
    @wraps(f)  # preserve the wrapped view's name and docstring
    def decorated_function(*args, **kwargs):
        expiration = session.get('expires', 0)
        now = int(time.time())
        if expiration == 0 or expiration < now or 'user_id' not in session:
            return redirect(url_for('saml_login', _external=True, _scheme='https', next=request.url))
        return f(*args, **kwargs)
    return decorated_function
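The `wraps(f)` line matters: without it, the decorator replaces each view's metadata with the inner wrapper's, which confuses Flask's endpoint naming when the same decorator wraps several views. A minimal stdlib-only sketch (no Flask, hypothetical function names) shows what `functools.wraps` preserves:

```python
from functools import wraps

def passthrough(f):
    @wraps(f)  # copies __name__, __doc__, etc. from f onto the wrapper
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@passthrough
def dashboard():
    """Render the dashboard."""
    return "ok"

# With @wraps, the wrapper keeps the original identity;
# without it, dashboard.__name__ would be "wrapper".
print(dashboard.__name__)  # dashboard
print(dashboard())         # ok
```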
docker-compose.yml:
version: "3"
services:
  server-dev:
    build:
      context: ../..
      dockerfile: Dockerfile-server-base
    network_mode: host
    ports:
      - 10005:10005
    tty: true
    stdin_open: true
    command: uwsgi --http-socket 0.0.0.0:10005 --http-websockets --module myapp:app --master --processes 4 --enable-threads --honour-stdin --py-autoreload=3 --buffer-size=65535
  client-dev:
    image: node:12.13.1-slim
    network_mode: host
    ports:
      - 10001:10001
      - 3000:3000
    command: yarn dev
Webpack Dev Server:
devServer: {
  host: '0.0.0.0',
  public: '0.0.0.0:0',
  port: 10001,
  sockPort: 80,
  hotOnly: true,
  publicPath: '/',
  headers: { 'Access-Control-Allow-Origin': '*' }
}
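As an aside, webpack-dev-server can also proxy backend routes so the browser never leaves the dev-server origin. This is only a sketch under assumptions not stated in the question: that the Flask routes share a path prefix such as /api, and that the backend is the uwsgi process on port 10005 from the compose file.

```javascript
devServer: {
  // ...existing options...
  proxy: {
    '/api': {                            // hypothetical prefix; adjust to your routes
      target: 'http://localhost:10005',  // the Flask/uwsgi backend
      changeOrigin: true
    }
  }
}
```

With this in place, the client requests plain relative URLs (no :8080 prefix), and the dev server forwards anything under /api to Flask.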
I see the request arrive in the logs:
server-dev_1 | [pid: 119|app: 0|req: 1/1] 127.0.0.1 () {52 vars in 1051 bytes} [Wed Jun 10 10:00:03 2020] GET /dashboard/all_data?is_alignment=false&group_id=&hkrgy8asx787tzwnvf2wll => generated 561 bytes in 5 msecs (HTTP/1.1 302) 3 headers in 272 bytes (1 switches on core 0)
But the response headers give me this:
Connection: keep-alive
Content-Length: 561
Content-Type: text/html; charset=utf-8
Date: Wed, 10 Jun 2020 10:00:03 GMT
Location: /login/?next=http%3A%2F%2Fmysite.com%2Fdashboard%2Fall_data
Generated-By: dev-machine
Server: nginx
nginx config:
upstream flask_upstream {
    server localhost:10005;
}
server {
    listen 8080;
    server_name ~^(?!api).*.mysite.com;
    location / {
        proxy_pass http://flask_upstream;
        proxy_set_header X-Forwarded-Protocol ssl;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        include /etc/nginx/conf.d/proxy.conf;
        set $upstream_name flask_upstream;
    }
}
Related
I am trying to follow a request from nginx through to port 9100 (Node Exporter) on the Linux host.
this is my docker-compose.yml
version: '3.3'
services:
  nginx:
    image: nginx:1.21.4-perl
    ports:
      - 80:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    extra_hosts:
      - 'host.docker.internal:10.187.1.52'
This is my nginx.conf
worker_processes auto;
http {
    server {
        listen 80;
        server_name localhost;
        resolver 127.0.0.11 ipv6=off;
        location ~ ^/node(/?.*) {
            proxy_pass http://host.docker.internal:9100$1;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_read_timeout 300s;
        }
    }
}
This is my docker version
docker version
Client: Docker Engine - Community
 Version:           20.10.10
 API version:       1.41
 Go version:        go1.16.9
 Git commit:        b485636
 Built:             Mon Oct 25 07:44:50 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
I am running a reverse proxy for Node Exporter on port 9100, which runs on the Linux host machine.
It works well when I put the IP address ("10.187.1.52") in nginx.conf directly.
However, it fails when I try to use the hostname "host.docker.internal".
I also tried defining it in the "extra_hosts" section of docker-compose.yml, but that fails too, with the same error: '[error] 24#24: *1 no resolver defined to resolve host.docker.internal, client: 10.186.110.106, server: localhost, request: "GET /node/metrics HTTP/1.1"'
Could you please give me any suggestions to fix this?
Note: I'm creating an example for monitoring with load testing on GitHub. This is a snippet from my project, so you can see the full source code at that link.
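One detail worth knowing about the "no resolver defined" error quoted above: nginx only consults a resolver directive for names it must resolve at request time, which happens when proxy_pass contains a variable (the $1 capture here makes it one); a fully static proxy_pass hostname is resolved once at startup instead. A hedged sketch of the variable-plus-resolver pattern (the server block and the valid=30s tuning are assumptions, not taken from the question):

```nginx
server {
    listen 80;
    # Docker's embedded DNS; required because proxy_pass below uses variables
    resolver 127.0.0.11 valid=30s ipv6=off;

    location ~ ^/node(/?.*) {
        set $node_upstream http://host.docker.internal:9100;
        proxy_pass $node_upstream$1;
    }
}
```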
Docker Compose by default uses each service's name as its hostname for inter-container networking. In your docker-compose.yml you have a service called appcadvisor, so your hostname should be appcadvisor instead of cadvisor.
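To illustrate the rule, here is a hypothetical fragment (not the asker's actual file, which isn't shown): the service key, not the image name or container_name, is what Docker's internal DNS resolves.

```yaml
services:
  appcadvisor:              # this key is the DNS name other services use
    image: gcr.io/cadvisor/cadvisor
  nginx:
    image: nginx
    # inside nginx, proxy_pass http://appcadvisor:8080 resolves via Docker DNS
```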
I am learning docker-compose, and now I am trying to set up an app and nginx in one docker-compose script on my WSL Ubuntu.
I am testing my endpoint with
curl -v http://127.0.0.1/weatherforecast
But I am receiving 502 Bad Gateway from nginx.
If I change port exposing to port publishing in docker-compose, as below, requests bypass nginx and reach my app, and I receive the expected response.
ports:
  - 5000:8080
My setup:
app's dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:8080
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["WebApplication2.csproj", "."]
RUN dotnet restore "./WebApplication2.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApplication2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebApplication2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication2.dll"]
nginx.conf
events {
    worker_connections 1024;
}
http {
    access_log /var/log/nginx/access.log;
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080/;
        }
    }
}
docker-compose.yml
version: "3.9"
services:
  web:
    depends_on:
      - nginx
    build: ./WebApplication2
    expose:
      - "8080"
  nginx:
    image: "nginx"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./logs:/var/log/nginx/
    ports:
      - 80:80
docker-compose ps
       Name                      Command              State               Ports
-----------------------------------------------------------------------------------------------
composetest_nginx_1   /docker-entrypoint.sh ngin ...  Up      0.0.0.0:80->80/tcp,:::80->80/tcp
composetest_web_1     dotnet WebApplication2.dll      Up      8080/tcp
/var/log/nginx/error.log
[error] 31#31: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.26.0.1, server: , request: "GET /weatherforecast HTTP/1.1", upstream: "http://127.0.0.1:8080/weatherforecast", host: "127.0.0.1"
cURL output:
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /weatherforecast HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Fri, 13 Aug 2021 17:50:56 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
* Connection #0 to host 127.0.0.1 left intact
You should direct your request to your web container instead of 127.0.0.1. Each container runs as a separate part of the network (each has a different IP address), and 127.0.0.1 points to the local container. So, in your case, it points to nginx itself. Instead of the container's real IP address, you can use its DNS name (which equals the service name in docker-compose). Use something like:
events {
    worker_connections 1024;
}
http {
    access_log /var/log/nginx/access.log;
    server {
        listen 80;
        location / {
            proxy_pass http://web:8080/;
        }
    }
}
Also, you specified that your web container depends on nginx, but it should be vice versa. Like:
version: "3.9"
services:
  web:
    build: .
  nginx:
    image: "nginx"
    depends_on:
      - web
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
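Note that depends_on by itself only controls start order, not readiness: nginx may still come up before the app accepts connections. If that race bites, the Compose spec supports gating on a healthcheck; the probe below is a hypothetical sketch (the curl command and endpoint are assumptions about this app, and curl must exist in the image):

```yaml
services:
  web:
    build: .
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/weatherforecast || exit 1"]
      interval: 5s
      retries: 5
  nginx:
    image: "nginx"
    depends_on:
      web:
        condition: service_healthy
```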
I am setting up my Rundeck application within a Docker container and using nginx as a reverse proxy. I presume my problem originates in how the proxied response is received back by the server.
When I access the desired URL (https://vmName.Domain.corp/rundeck) I am able to see the login page, even though it doesn't have any UI styling. Once I enter the default admin:admin credentials I am directed to a 404 page. I pasted below one of the error logs from docker-compose logs. You'll notice it's looking under /etc/nginx for Rundeck's assets.
I can't determine whether the problem is in my docker-compose file or in nginx's config file.
Example of error log:
production_nginx | 2021-02-04T08:17:50.770544192Z 2021/02/04 08:17:50 [error] 29#29: *8 open() "/etc/nginx/html/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js" failed (2: No such file or directory), client: 10.243.5.116, server: vmName.Domain.corp, request: "GET /assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js HTTP/1.1", host: "vmName.Domain.corp", referrer: "https://vmName.Domain.corp/rundeck/user/login"
If curious, I can access the asset if I go to: https://vmName.Domain.corp/rundeck/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js
Here's more information on my set-up
/nginx/sites-enabled/docker-compose.yml (main machine)
rundeck:
  image: ${RUNDECK_IMAGE:-jordan/rundeck:latest}
  container_name: production_rundeck
  ports:
    - 4440:4440
  environment:
    RUNDECK_GRAILS_SERVER_URL: "https://vmName.Domain.corp/rundeck"
    RUNDECK_GRAILS_URL: "https://vmName.Domain.corp/rundeck"
    RUNDECK_SERVER_FORWARDED: "true"
    RDECK_JVM_SETTINGS: "-Xmx1024m -Xms256m -XX:MaxMetaspaceSize=256m -server -Dfile.encoding=UTF-8 -Drundeck.jetty.connector.forwarded=true -Dserver.contextPath=/rundeck -Dserver.https.port:4440"
    #RUNDECK_SERVER_CONTEXTPATH: "https://vmName.Domain.corp/rundeck"
    RUNDECK_MAIL_FROM: "rundeck@vmName.Domain.corp"
    EXTERNAL_SERVER_URL: "https://vmName.Domain.corp/rundeck"
    SERVER_URL: "https://vmName.Domain.corp/rundeck"
  volumes:
    - /etc/rundeck:/etc/rundeck
    - /var/rundeck
    - /var/lib/mysql
    - /var/log/rundeck
    - /opt/rundeck-plugins
nginx:
  image: nginx:latest
  container_name: production_nginx
  links:
    - rundeck
  volumes:
    - /etc/nginx/sites-enabled:/etc/nginx/conf.d
  depends_on:
    - rundeck
  ports:
    - 80:80
    - 443:443
  restart: always
networks:
  default:
    external:
      name: vmName
nginx/sites-enabled/default.conf (main machine)
# Route all HTTP traffic through HTTPS
# ====================================
server {
    listen 80;
    server_name vmName;
    return 301 https://vmName$request_uri;
}
server {
    listen 443 ssl;
    server_name vmName;
    ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
    ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;
    return 301 https://vmName.Domain.corp$request_uri;
}
# ====================================
# Main webserver route configuration
# ====================================
server {
    listen 443 ssl;
    server_name vmName.Domain.corp;
    ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
    ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;
    #===========================================================================#
    ## MAIN PAGE
    location /example-app {
        rewrite ^/example-app(.*) /$1 break;
        proxy_pass http://example-app:5000/;
        proxy_set_header Host $host/example-app;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    # #Rundeck
    location /rundeck/ {
        # rewrite ^/rundeck(.*) /$1 break;
        proxy_pass http://rundeck:4440/;
        proxy_set_header Host $host/rundeck;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
[inside container] /etc/rundeck/rundeck-config.properties:
# change hostname here
grails.serverURL=https://vmName.Domain.corp/rundeck
grails.mail.default.from = rundeck@vmName.Domain.corp
server.useForwardHeaders = true
[inside container] /etc/rundeck/framework.properties:
framework.server.name = vmName.Domain.corp
framework.server.hostname = vmName.Domain.corp
framework.server.port = 443
framework.server.url = https://vmName.Domain.corp/rundeck
It seems related to a Rundeck image/network problem. I made a working example with the official image; take a look:
nginx.conf (located at config folder, check the docker-compose file volumes section):
server {
    listen 80 default_server;
    server_name rundeck-cl;
    location / {
        proxy_pass http://rundeck:4440;
    }
}
docker-compose:
version: "3.7"
services:
  rundeck:
    build:
      context: .
      args:
        IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.9}
    container_name: rundeck-nginx
    ports:
      - 4440:4440
    environment:
      RUNDECK_GRAILS_URL: http://localhost
      RUNDECK_SERVER_FORWARDED: "true"
  nginx:
    image: nginx:alpine
    volumes:
      - ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - 80:80
Dockerfile:
ARG IMAGE
FROM ${IMAGE}
Build with docker-compose build and run with docker-compose up.
rundeck-config.properties content:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/home/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
# Bind address and server URL
server.address=0.0.0.0
server.servlet.context-path=/
grails.serverURL=http://localhost
server.servlet.session.timeout=3600
dataSource.dbCreate = update
dataSource.url = jdbc:h2:file:/home/rundeck/server/data/grailsdb;MVCC=true
dataSource.username =
dataSource.password =
#Pre Auth mode settings
rundeck.security.authorization.preauthenticated.enabled=false
rundeck.security.authorization.preauthenticated.attributeName=REMOTE_USER_GROUPS
rundeck.security.authorization.preauthenticated.delimiter=,
# Header from which to obtain user name
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
# Header from which to obtain list of roles
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
# Redirect to upstream logout url
rundeck.security.authorization.preauthenticated.redirectLogout=false
rundeck.security.authorization.preauthenticated.redirectUrl=/oauth2/sign_in
rundeck.api.tokens.duration.max=30d
rundeck.log4j.config.file=/home/rundeck/server/config/log4j.properties
rundeck.gui.startpage=projectHome
rundeck.clusterMode.enabled=true
rundeck.security.httpHeaders.enabled=true
rundeck.security.httpHeaders.provider.xcto.enabled=true
rundeck.security.httpHeaders.provider.xxssp.enabled=true
rundeck.security.httpHeaders.provider.xfo.enabled=true
rundeck.security.httpHeaders.provider.csp.enabled=true
rundeck.security.httpHeaders.provider.csp.config.include-xcsp-header=false
rundeck.security.httpHeaders.provider.csp.config.include-xwkcsp-header=false
rundeck.storage.provider.1.type=db
rundeck.storage.provider.1.path=keys
rundeck.projectsStorageType=db
framework.properties file content:
# framework.properties -
#
# ----------------------------------------------------------------
# Server connection information
# ----------------------------------------------------------------
framework.server.name = 85845cd30fe9
framework.server.hostname = 85845cd30fe9
framework.server.port = 4440
framework.server.url = http://localhost
# ----------------------------------------------------------------
# Installation locations
# ----------------------------------------------------------------
rdeck.base=/home/rundeck
framework.projects.dir=/home/rundeck/projects
framework.etc.dir=/home/rundeck/etc
framework.var.dir=/home/rundeck/var
framework.tmp.dir=/home/rundeck/var/tmp
framework.logs.dir=/home/rundeck/var/logs
framework.libext.dir=/home/rundeck/libext
# ----------------------------------------------------------------
# SSH defaults for node executor and file copier
# ----------------------------------------------------------------
framework.ssh.keypath = /home/rundeck/.ssh/id_rsa
framework.ssh.user = rundeck
# ssh connection timeout after a specified number of milliseconds.
# "0" value means wait forever.
framework.ssh.timeout = 0
# ----------------------------------------------------------------
# System-wide global variables.
# ----------------------------------------------------------------
# Expands to ${globals.var1}
#framework.globals.var1 = value1
# Expands to ${globals.var2}
#framework.globals.var2 = value2
rundeck.server.uuid = a14bc3e6-75e8-4fe4-a90d-a16dcc976bf6
I am a bit stuck configuring multiple services where nginx is the proxy server.
Running:
docker -v
Docker version 19.03.8, build afacb8b7f0
docker-compose -v
docker-compose version 1.23.2, build 1110ad01
I want to start with this test, with everything in the same docker-compose.yml file:
link to jwilder/nginx
proxy: nginx-server (jwilder/nginx-proxy:0.7.0, which is nginx 1.17.6)
container1 : httpd:2.4
container2 : httpd:2.4
Updating my /etc/hosts before I start:
127.0.0.1 container1.com
127.0.0.1 container2.com
Here is my docker-compose.yml file (note: version 3.7):
version: '3.7'
services:
  proxy:
    image: jwilder/nginx-proxy:0.7.0
    container_name: proxy-test
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx-proxy.conf:/etc/nginx/conf.d/nginx-proxy.conf:ro
  container1:
    image: httpd:2.4
    container_name: container-1
    environment:
      - VIRTUAL_HOST:container1.com
    ports:
      - 8080:80
  container2:
    image: httpd:2.4
    container_name: container-2
    environment:
      - VIRTUAL_HOST:container2.com
    ports:
      - 8081:80
here is my nginx-proxy.conf:
server {
    listen 80;
    server_name container1.com;
    location / {
        proxy_pass http://localhost:8080;
    }
}
server {
    listen 80;
    server_name container2.com;
    location / {
        proxy_pass http://localhost:8081;
    }
}
After this I run:
docker exec container-1 sed -i 's/It works!/Container 1/' /usr/local/apache2/htdocs/index.html
docker exec container-2 sed -i 's/It works!/Container 2/' /usr/local/apache2/htdocs/index.html
Test 1 : with curl to the port 8080 and port 8081
curl localhost:8080
response -> Container 1
curl localhost:8081
response -> Container 2
Test 2 : with curl to container1.com AND container2.com
curl container1.com
status 502
curl container2.com
status 502
Are the settings in my conf wrong?
Troubleshooting 1:
docker exec -it proxy-test bash
I can see that the nginx-proxy.conf is in the directory (/etc/nginx/conf.d)
/etc/nginx/conf.d/default.conf is there as well
Troubleshooting 2: The proxy-log (Connection refused - while connecting to upstream)
proxy-test | nginx.1 | 2020/04/03 10:52:08 [error] 61#61: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 172.29.0.1, server: container1.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "container1.com"
proxy-test | nginx.1 | 2020/04/03 10:52:08 [error] 61#61: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 172.29.0.1, server: container1.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "container1.com"
I found two solutions to this.
(1)
The first is to update nginx-proxy.conf with the names of the containers instead of pointing to http://localhost:8080; and http://localhost:8081;:
new config-file
server {
    listen 80;
    server_name container1.com;
    location / {
        proxy_pass http://container-1;
    }
}
server {
    listen 80;
    server_name container2.com;
    location / {
        proxy_pass http://container-2;
    }
}
(2)
The second is to leave out the nginx-proxy.conf file entirely; the proxy will then map things correctly on its own.
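One caveat worth checking for the second approach (this is an observation about the compose file above, not something from the answer): in list form, Compose environment entries use =, not :. As written, - VIRTUAL_HOST:container1.com does not set a variable named VIRTUAL_HOST, so jwilder/nginx-proxy would generate no vhost for it. The expected form is:

```yaml
environment:
  - VIRTUAL_HOST=container1.com
```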
I have a simple example webAPI in .NET core, running in a docker container. I'm running Nginx also in a docker container as a reverse proxy for https redirection. The webAPI is accessible on http, but when accessing the https url, the API is not responding.
I have tried many different configurations in the nginx.conf file. I've tried using localhost, 0.0.0.0, and 127.0.0.1. I've tried using several different ports such as 5000, 8000, and 80. I've tried using upstream and also specifying the url on the proxy_pass line directly.
docker-compose.yml:
version: '3.4'
networks:
  blogapi-dev:
    driver: bridge
services:
  blogapi:
    image: blogapi:latest
    depends_on:
      - "postgres_image"
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:80"
    expose:
      - "8000"
    environment:
      DB_CONNECTION_STRING: "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
      ASPNETCORE_ENVIRONMENT: development
      #REMOTE_DEBUGGING: ${REMOTE_DEBUGGING}
    networks:
      - blogapi-dev
    tty: true
    stdin_open: true
  postgres_image:
    image: postgres:latest
    ports:
      - "5000:80"
    restart: always
    volumes:
      - db_volume:/var/lib/postgresql/data
      - ./BlogApi/dbscripts/seed.sql:/docker-entrypoint-initdb.d/seed.sql
    environment:
      POSTGRES_USER: "bloguser"
      POSTGRES_PASSWORD: "bloguser"
      POSTGRES_DB: blogdb
    networks:
      - blogapi-dev
  nginx-proxy:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    networks:
      - blogapi-dev
    depends_on:
      - "blogapi"
    volumes:
      - ./nginx-proxy/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx-proxy/error.log:/etc/nginx/error_log.log
      - ./nginx-proxy/cache/:/etc/nginx/cache
      - /etc/letsencrypt/:/etc/letsencrypt/
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./:/etc/nginx/
volumes:
  db_volume:
nginx.conf:
events {}
http {
    upstream backend {
        server 127.0.0.1:8000;
    }
    server {
        server_name local.website.dev;
        rewrite ^(.*) https://local.website.dev$1 permanent;
    }
    server {
        listen 443 ssl;
        ssl_certificate localhost.crt;
        ssl_certificate_key localhost.key;
        ssl_ciphers HIGH:!aNULL:!MD5;
        server_name local.website.dev;
        location / {
            proxy_pass http://backend;
        }
    }
}
Startup.cs:
namespace BlogApi
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            var connectionString = Environment.GetEnvironmentVariable("DB_CONNECTION_STRING");
            services.AddDbContext<BlogContext>(options =>
                options.UseNpgsql(connectionString));
            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            app.UseMvc();
        }
    }
}
When I go to http://127.0.0.1:8000/api/blog, the browser returns the JSON response from my API. This tells me the app is up and running on port 8000, although it should not be accessible via plain HTTP:
[{"id":1,"title":"Title 1","body":"Body 1","timeStamp":"1999-01-08T04:05:06"},{"id":2,"title":"Title 2","body":"Body 2","timeStamp":"2000-01-08T04:05:06"},{"id":3,"title":"Title 3","body":"Body 3","timeStamp":"2001-01-08T04:05:06"},{"id":4,"title":"Title 4","body":"Body 4","timeStamp":"2002-01-08T04:05:06"}]
When I go to 127.0.0.1, the browser properly redirects to https://local.website.dev/, but I get no response from the API, just Chrome's "local.website.dev refused to connect. ERR_CONNECTION_REFUSED". I get the same response when I go to https://local.website.dev/api/blog.
Also, this is the output from docker-compose up:
Starting blog_postgres_image_1 ... done
Starting blog_blogapi_1 ... done
Starting nginx-proxy ... done
Attaching to blog_postgres_image_1, blog_blogapi_1, nginx-proxy
blogapi_1 | Hosting environment: development
blogapi_1 | Content root path: /app
blogapi_1 | Now listening on: http://[::]:80
blogapi_1 | Application started. Press Ctrl+C to shut down.
postgres_image_1 | 2019-06-27 11:20:49.441 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_image_1 | 2019-06-27 11:20:49.441 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_image_1 | 2019-06-27 11:20:49.577 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_image_1 | 2019-06-27 11:20:49.826 UTC [25] LOG: database system was shut down at 2019-06-27 10:26:12 UTC
postgres_image_1 | 2019-06-27 11:20:49.893 UTC [1] LOG: database system is ready to accept connections
I got it working. There were a couple of issues. First, I was missing some boilerplate at the top of the nginx.conf file. Second, I needed to set proxy_pass to the name of the Docker container housing the service I wanted to route to, in my case http://blogapi/, instead of localhost.
nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    proxy_set_header Host $host;
    proxy_pass_request_headers on;
    gzip on;
    gzip_proxied any;
    map $sent_http_content_type $expires {
        default off;
        ~image/ 1M;
    }
    server {
        listen 80;
        listen [::]:80;
        server_name localhost;
        return 301 https://172.24.0.1$request_uri;
    }
    server {
        listen 443 ssl;
        server_name localhost;
        ssl_certificate localhost.crt;
        ssl_certificate_key localhost.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        location / {
            proxy_pass http://blogapi/;
        }
    }
}
With the above configuration, I can access the web API at https://172.24.0.1/api/blog/. If http://localhost/api/blog is entered, the browser redirects to https://172.24.0.1/api/blog/. The IP address is the address of the blogapi-dev bridge network gateway, as shown below.
docker inspect 20b
"Networks": {
    "blog_blogapi-dev": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": [
            "20bd90d1a80a",
            "blogapi"
        ],
        "NetworkID": "1edd39000ac3545f9a738a5df33b4ea90e082a2be86752e7aa6cd9744b72d6f0",
        "EndpointID": "9201d8a1131a4179c7e0198701db2835e3a15f4cbfdac2a4a4af18804573fea9",
        "Gateway": "172.24.0.1",
        "IPAddress": "172.24.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "MacAddress": "02:42:ac:18:00:03",
        "DriverOpts": null
    }
}
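The gateway shown above can also be pulled out programmatically, e.g. with docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' <container>, or by parsing the JSON; a small sketch using a trimmed, hypothetical payload mirroring the "Networks" block:

```python
import json

# Trimmed example payload (hypothetical, shaped like the inspect output above)
inspect_output = json.loads("""
{
  "Networks": {
    "blog_blogapi-dev": {
      "Gateway": "172.24.0.1",
      "IPAddress": "172.24.0.3"
    }
  }
}
""")

# Print each network's gateway and container IP
for name, net in inspect_output["Networks"].items():
    print(f"{name}: gateway={net['Gateway']} ip={net['IPAddress']}")
```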