Nginx: upstream server temporarily disabled while proxying connection - docker

I am running Nginx on ECS Fargate with the config below to implement a TLS passthrough proxy. In some AWS regions I am getting intermittent errors: "upstream server temporarily disabled while proxying connection". The backend domain is an API Gateway domain.
stream {
    map_hash_max_size 256;
    map_hash_bucket_size 256;

    map $ssl_preread_protocol $tlsmap {
        "TLSv1.2" $upstream;
        "TLSv1.3" $upstream;
        default   blackhole;
    }

    map $ssl_preread_server_name $upstream {
        <api_domain> api_domain;
        default      blackhole;
    }

    upstream api_domain {
        server api_domain:443;
    }

    upstream blackhole {
        server 127.0.0.1:123;
    }

    server {
        listen 443;
        proxy_pass $tlsmap;
        ssl_preread on;
    }
}
Below is the nginx log for the request:
{
    "time_local": "<removed>",
    "remote_addr": "<removed>",
    "remote_port": "24907",
    "ssl_preread_server_name": "<removed>",
    "ssl_preread_protocol": "TLSv1.2",
    "status": "200",
    "bytes_sent": "0",
    "bytes_received": "0",
    "session_time": "60.012",
    "upstream_addr": "<removed>",
    "upstream_bytes_sent": "0, 517",
    "upstream_bytes_received": "0, 0",
    "upstream_connect_time": "-, 0.000",
    "connection": "85860",
    "ssl_protocol": "",
    "ssl_cipher": ""
}
Any pointers on what configuration can be fine-tuned to fix this?
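One direction worth checking (a sketch, not a confirmed fix): with a named upstream block, nginx resolves api_domain only at startup, and after a failed connect it marks the peer "temporarily disabled" for fail_timeout (10s by default); since API Gateway endpoints rotate IPs, stale addresses are a common cause of exactly this message. The variant below maps straight to host:port so each connection re-resolves DNS, and fails fast instead of holding the session for 60 seconds (the resolver address is an assumption; use your VPC's DNS):

stream {
    # Assumption: 169.254.169.253 is the Amazon-provided DNS endpoint; adjust for your VPC.
    resolver 169.254.169.253 valid=10s;

    map $ssl_preread_protocol $tlsmap {
        "TLSv1.2" $upstream;
        "TLSv1.3" $upstream;
        default   127.0.0.1:123;
    }

    # host:port instead of an upstream{} group, so the resolver is consulted per connection
    map $ssl_preread_server_name $upstream {
        <api_domain> <api_domain>:443;
        default      127.0.0.1:123;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_connect_timeout 5s;   # fail fast rather than waiting the default 60s
        proxy_pass $tlsmap;
    }
}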

Related

Configure redirect URIs of Identity Server in a docker environment

Okay, this is quite big, so feel free to skip to the last section for a brief summary.
I have a demo application (.NET Core 6.0) built on a microservice architecture; suppose we have 3 services:
identity (Auth service - IdentityServer4)
frontend (mvc - aspnet)
nginx (reverse proxy server)
and all three are running in a docker environment. Here is the docker-compose file:
services:
  demo-identity:
    image: ${DOCKER_REGISTRY-}demoidentity:latest
    build:
      context: .
      dockerfile: Identity/Demo.Identity/Dockerfile
    ports:
      - 5000:80   # only expose port 80
    volumes:
      - ./Identity/Demo.Identity/Certificate:/app/Certificate:ro
    networks:
      - internal
  demo-frontend:
    image: ${DOCKER_REGISTRY-}demofrontend:latest
    build:
      context: .
      dockerfile: Frontend/Demo.Frontend/Dockerfile
    ports:
      - 5004:80   # only expose port 80
    networks:
      - internal
  proxy:
    build:
      context: ./nginx-reverse-proxy
      dockerfile: Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx-reverse-proxy/cert/:/etc/cert/
    links:
      - demo-identity
    depends_on:
      - demo-identity
      - demo-frontend
    networks:
      - internal
They are all designed to run internally, except nginx, which will be the proxy server. Here is the nginx.conf file:
worker_processes 4;

events { worker_connections 1024; }

http {
    upstream app_servers_identity {
        server demo-identity:80;
    }
    upstream app_servers_frontend {
        server demo-frontend:80;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name demo-identity;
        return 301 https://identity.demo.local$request_uri;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name identity.demo.local;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name identity.demo.local;
        ssl_certificate /etc/cert/demo.crt;
        ssl_certificate_key /etc/cert/demo.key;
        location / {
            proxy_pass http://app_servers_identity;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 80;
        listen [::]:80;
        server_name frontend.demo.local;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name frontend.demo.local;
        ssl_certificate /etc/cert/demo.crt;
        ssl_certificate_key /etc/cert/demo.key;
        location / {
            proxy_pass http://app_servers_frontend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
I also updated the hosts file to configure two virtual hosts, identity.demo.local and frontend.demo.local (the term "localhost" sometimes confuses me when using docker).
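For reference, that hosts entry would look something like this (assuming the nginx proxy is published on the local machine, as in the compose file above):

127.0.0.1 identity.demo.local frontend.demo.local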
Then I set up the identity server like this:
...
builder.Services.Configure<IdentityOptions>(options => {
    // Default Password settings.
});

services.AddIdentityServer()
    .AddInMemoryIdentityResources(Config.Ids)
    .AddInMemoryApiResources(Config.Apis)
    .AddInMemoryClients(Config.Clients)
    .AddInMemoryApiScopes(Config.ApiScopes)
    .AddAspNetIdentity<ApplicationUser>()
    .AddSigningCredential(new X509Certificate2("./Certificate/demo_dev.pfx", "******"));
...
and here is the static client config:
...
new Client
{
    ClientName = "MVC Client",
    ClientId = "mvc-client",
    AllowedGrantTypes = GrantTypes.Hybrid,
    RedirectUris = new List<string>{ "http://gateway.demo.local/signin-oidc" },
    RequirePkce = false,
    AllowedScopes = { IdentityServerConstants.StandardScopes.OpenId, IdentityServerConstants.StandardScopes.Profile },
    ClientSecrets = { new Secret("MVCSecret".Sha512()) }
}
...
In the Frontend service, I also configure OIDC as below:
...
services.AddAuthentication(opt =>
{
    opt.DefaultScheme = "Cookies";
    opt.DefaultChallengeScheme = "oidc";
}).AddCookie("Cookies", opt => {
    opt.CookieManager = new ChunkingCookieManager();
    opt.Cookie.HttpOnly = true;
    opt.Cookie.SameSite = SameSiteMode.None;
    opt.Cookie.SecurePolicy = CookieSecurePolicy.Always;
})
.AddOpenIdConnect("oidc", opt => {
    opt.SignInScheme = "Cookies";
    opt.Authority = "http://demo-identity";
    opt.ClientId = "mvc-client";
    opt.ResponseType = "code id_token";
    opt.SaveTokens = true;
    opt.ClientSecret = "MVCSecret";
    opt.ClaimsIssuer = "https://identity.demo.local";
    opt.RequireHttpsMetadata = false;
});
...
TL;DR: A microservice application hosted on docker, consisting of IdentityServer, MVC, and Nginx. They all run internally and can only be accessed via the nginx proxy. The host names are also configured as virtual host names, which makes more sense.
Okay, here is the problem: when I access a protected API of the MVC app, it redirects me to the identity server (identity.demo.local) to log in, but after I log in successfully it should redirect me back to the MVC app, and it does not. After some research, I figured out the reason: after login, the identity server redirects me to the originating site with cookies containing the authentication info, but the redirect URI is not secured; it's http://frontend.demo.local instead of https. I'm not sure where this property is configured (I tried updating the nginx.conf but nothing changed). It still works correctly when I run it from Visual Studio, without docker.
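One thing that commonly causes this behind an SSL-terminating proxy (offered as a sketch, not a confirmed diagnosis): the apps only ever see plain HTTP from nginx, so unless the original scheme is forwarded and honoured by ASP.NET Core's forwarded-headers middleware (UseForwardedHeaders), redirect URIs get generated with http. On the nginx side that would mean one extra header in each proxied location, for example:

location / {
    proxy_pass http://app_servers_frontend;
    proxy_set_header Host $host;
    # forward the original scheme so the app can build https redirect URIs
    proxy_set_header X-Forwarded-Proto $scheme;
}

The frontend (and identity) service would then also need the forwarded-headers middleware enabled to pick the header up; whether that is the actual culprit here is an assumption.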
Any help is appreciated.

Reverse proxy of multiple containers

I have 2 API containers (docker) running on ports 10000 and 10003. I want to reverse proxy both of them so the APIs can be called through a single port, port 80. I am trying to use NGINX to do that, and this is my nginx configuration file:
worker_processes 1;

events { worker_connections 1024; }

http {
    server {
        listen 80;
        server_name container1;
        location / {
            proxy_pass http://10.10.10.50:10003;
        }
    }
    server {
        listen 80;
        server_name container2;
        location / {
            proxy_pass http://10.10.10.50:10000;
        }
    }
}
I found that it only works for container 1: a request meant for container 2 produces a 404 because it is routed to container 1 instead of container 2 (presumably the request's Host header matches neither server_name, so nginx falls back to the first server block, which acts as the default).
Finally, I found a solution using NGINX. All I needed to do was create a new NGINX container and reconfigure the URLs of my 2 API containers. The configuration file I wrote looks like this:
worker_processes auto;

events { worker_connections 1024; }

http {
    upstream container1 {
        server 10.10.10.50:10003;
    }
    upstream container2 {
        server 10.10.10.50:10000;
    }
    server {
        listen 80;
        location /container1/ {
            proxy_pass http://container1/;
        }
        location /container2/ {
            proxy_pass http://container2/;
        }
    }
}
Now I can reach both API containers through port 80, and each request is re-routed to the designated backend port (reverse proxy), as sketched below.
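The trailing slashes do the prefix stripping here: because both the location and the proxy_pass URL end with /, nginx removes the /container1/ segment before forwarding. A commented sketch of the mapping (the /api/items path is just an illustrative example):

location /container1/ {
    # GET http://<proxy-host>/container1/api/items
    #   -> forwarded as GET http://10.10.10.50:10003/api/items
    proxy_pass http://container1/;
}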

Socket.IO-Client-Swift does not connect to nodejs socket.io server

I need your help with WebSockets on iOS.
I am trying to connect to a Node.js websocket server from an iOS client using Socket.IO-Client-Swift, but it seems the Node.js websocket server does not recognize the connection from the iOS client.
/etc/nginx/conf.d/rsp.arakaki.app.conf
(Configuration file for nginx)
Websocket server is running on port 3000, so requests to /socket.io/ will be proxied to upstream websocket (server localhost:3000;).
upstream php-fpm {
    server localhost:9000;
}

upstream websocket {
    server localhost:3000;
}

server {
    server_name rsp.arakaki.app;
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    server_name rsp.arakaki.app;
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/rsp.arakaki.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rsp.arakaki.app/privkey.pem;

    root /var/www/rsp.arakaki.app/webroot;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location /socket.io/ {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        fastcgi_intercept_errors on;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
/var/www/rsp.arakaki.app/websocket/src/server.js
(Source file for websocket server)
I expect the websocket server to log some messages when a websocket client successfully connects. But I can only see those messages when I use the web client; I see nothing when I use the iOS client.
const fs = require("fs");
const express = require("express");
const app = express();
const http = require("http");
const server = http.createServer(app);
const io = require("socket.io")(server);

io.on("connection", (socket) => {
    console.log("a user connected");
    socket.on("disconnecting", (reason) => {
        console.log("user disconnecting", { reason });
    });
    socket.on("disconnect", (reason) => {
        console.log("user disconnected", { reason });
    });
});

server.listen(3000, () => {
    console.log("server running...");
});
MainViewController.swift
(iOS websocket client)
The websocket server logs nothing with this code.
I think the problem is on the iOS client side, since I was able to connect from the web client.
I have no idea what is wrong.
import UIKit
import SocketIO

class MainViewController: UIViewController {
    var manager: SocketManager!
    var socket: SocketIOClient!

    override func viewDidLoad() {
        super.viewDidLoad()
        manager = SocketManager(socketURL: URL(string: "https://rsp.arakaki.app/")!, config: [.log(true), .forceWebsockets(true)])
        socket = manager?.defaultSocket
        socket.on(clientEvent: .connect) { (data, ack) in
            print("socket connected")
            self.socket.emit("join", ["hoge"])
        }
        socket.on(clientEvent: .error, callback: { (data, ack) in
            print("socket error")
        })
        socket.connect(timeoutAfter: 3, withHandler: {
            print("socket timeout")
        })
    }
}
websocket.php
(Web websocket client)
This script works well; the websocket server logs the connection.
<script src="https://rsp.arakaki.app/socket.io/socket.io.js"></script>
<script>
var socket = io();
</script>
Environment
CentOS 7
Nginx (1.18.0)
Node.js (14.15.3)
socket.io (3.0.4)
express (4.17.1)
Swift 5
Socket.IO-Client-Swift (15.2.0)
Xcode (12.3)
Thank you for your interest. Any comments are welcome. Please help me.
Socket.IO-Client-Swift was able to connect to the server after I downgraded the socket.io library for the nodejs server to version 2.0.4. Thank you.
Your "Socket.IO-Client-Swift" version is 15.2, it is not compatible with socket.io server's 3.0 version.
"The client now supports socket.io 3 servers." is written in "Upgrading from v15 to v16" guide: https://nuclearace.github.io/Socket.IO-Client-Swift/15to16.html
You must use v16 minimum for server 3.0 version, but already not released. Probably it is coming soon.
So using the socket.io server's 2 version it is good solution for as now.
You can look version tags for ios client at the below link:
https://github.com/socketio/socket.io-client-swift/tags
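If you go the downgrade route, pinning the server-side library to the 2.x line could look like this (assuming npm; the exact 2.x version is your choice):

npm install socket.io@2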

Hashi-UI and Nomad authentication

I need advice on how to set up authentication for Hashi-UI, which I use to manage Nomad and Consul. I have a Debian 8 server where I installed Terraform and created a Terraform file that downloads and runs Nomad and Consul. That works, but when I access Hashi-UI there is no login, so everyone can access it. I run Hashi-UI as a Nomad job, behind Nginx. How can I set up a user login for it, like one would for Apache?
My Nomad file:
job "hashi-ui" {
region = "global"
datacenters = ["dc1"]
type = "service"
update {
stagger = "30s"
max_parallel = 2
}
group "server" {
count = 1
task "hashi-ui" {
driver = "docker"
config {
image = "jippi/hashi-ui"
network_mode = "host"
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
env {
NOMAD_ENABLE = 1
NOMAD_ADDR = "http://0.0.0.0:4646"
CONSUL_ENABLE = 1
CONSUL_ADDR = "http://0.0.0.0:8500"
}
resources {
cpu = 500
memory = 512
network {
mbits = 5
port "http" {
static = 3000
}
}
}
}
task "nginx" {
driver = "docker"
config {
image = "ygersie/nginx-ldap-lua:1.11.3"
network_mode = "host"
volumes = [
"local/config/nginx.conf:/etc/nginx/nginx.conf"
]
}
template {
data = <<EOF
worker_processes 2;
events {
worker_connections 1024;
}
env NS_IP;
env NS_PORT;
http {
access_log /dev/stdout;
error_log /dev/stderr;
auth_ldap_cache_enabled on;
auth_ldap_cache_expiration_time 300000;
auth_ldap_cache_size 10000;
ldap_server ldap_server1 {
url ldaps://ldap.example.com/ou=People,dc=example,dc=com?uid?sub?(objectClass=inetOrgPerson);
group_attribute_is_dn on;
group_attribute member;
satisfy any;
require group "cn=secure-group,ou=Group,dc=example,dc=com";
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 15080;
location / {
auth_ldap "Login";
auth_ldap_servers ldap_server1;
set $target '';
set $service "hashi-ui.service.consul";
set_by_lua_block $ns_ip { return os.getenv("NS_IP") or "127.0.0.1" }
set_by_lua_block $ns_port { return os.getenv("NS_PORT") or 53 }
access_by_lua_file /etc/nginx/srv_router.lua;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 31d;
proxy_pass http://$target;
}
}
}
EOF
destination = "local/config/nginx.conf"
change_mode = "noop"
}
service {
port = "http"
tags = [
"urlprefix-hashi-ui.example.com/"
]
check {
type = "tcp"
interval = "5s"
timeout = "2s"
}
}
resources {
cpu = 100
memory = 64
network {
mbits = 1
port "http" {
static = "15080"
}
}
}
}
}
}
Thank you for any advice.
Since you are using Nginx, you can easily enable authentication in Nginx (see the sketch after these links). Here are some useful links:
Basic Auth using Nginx: http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html
LDAP Auth using Nginx: http://www.allgoodbits.org/articles/view/29
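For illustration, a minimal basic-auth setup in the nginx server block fronting Hashi-UI might look like this if LDAP is more than you need (the htpasswd path is an assumption; create the file with the htpasswd tool):

server {
    listen 15080;

    location / {
        auth_basic "Login";                          # realm shown in the browser prompt
        auth_basic_user_file /etc/nginx/.htpasswd;   # assumed path to the htpasswd file
        proxy_pass http://127.0.0.1:3000;            # hashi-ui task from the job above (static port 3000)
    }
}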
Interestingly, this problem is discussed in the HashiUI GitHub repo as well. Take a look at this approach too:
https://github.com/jippi/hashi-ui/blob/master/docs/authentication_example.md
Thanks,
Arul

Faye with nodejs over HTTPS

I'm trying to set up my production server to use faye messages with nodejs over HTTPS, but no luck.
What I have so far is:
A faye + nodejs server setup file:
var https = require('https');
var faye = require('faye');
var fs = require('fs');

var options = {
    key: fs.readFileSync('/etc/httpd/ssl/example.com.key'),
    cert: fs.readFileSync('/etc/httpd/ssl/example.com.crt'),
    ca: fs.readFileSync('/etc/httpd/ssl/ca_bundle.crt')
};

var server = https.createServer(options);
var bayeux = new faye.NodeAdapter({mount: '/faye', timeout: 60});

bayeux.attach(server);
server.listen(8000);
A rails helper to send messages:
def broadcast(channel, &block)
  message = {:channel => channel, :data => capture(&block)}
  uri = URI.parse(Rails.configuration.faye_url)
  Net::HTTPS.post(uri, message.to_json)
end
A javascript function to open a listener:
function openListener(channel, callback){
    var faye_client = new Faye.Client("<%= Rails.configuration.faye_url %>");
    faye_client.subscribe(channel, callback);
    return faye_client;
}
My faye url config in production.rb:
config.faye_url = "https://example.com:8000/faye"
And finally, a call in my page javascript:
fayeClient = openListener("my_channel", function(data) {
    //do something...
});
Everything was working when testing over HTTP on the development machine, but in production it doesn't.
If I point the browser to https://example.com:8000/faye.js I get the correct JavaScript file.
What could be happening?
The problem was with the Apache server.
I switched to nginx and now it's working.
However, I needed to make some configuration changes:
Faye + node.js setup file:
var http = require('http'),
    faye = require('faye');

var server = http.createServer(),
    bayeux = new faye.NodeAdapter({mount: '/faye', timeout: 60});

bayeux.attach(server);
server.listen(8000);
Rails helper:
def broadcast(channel, &block)
  message = {:channel => channel, :data => capture(&block)}
  uri = URI.parse(Rails.configuration.faye_url)
  Net::HTTP.post_form(uri, :message => message.to_json)
end
Faye url:
https://example.com/faye
And finally, the nginx config:
server {
    # Listen on 80 and 443
    listen 80;
    listen 443 ssl;
    server_name example.com;

    passenger_enabled on;
    root /home/rails/myapp/public;

    ssl_certificate /home/rails/ssl/myapp.crt;
    ssl_certificate_key /home/rails/ssl/myapp.key;

    # Redirect all non-SSL traffic to SSL.
    if ($ssl_protocol = "") {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location /faye {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
In short: nginx converts HTTPS requests to the /faye path into plain HTTP on port 8000. Use plain HTTP on the server side and HTTPS in the client-side URL.
