I am running a multi-service app orchestrated by docker-compose, and for testing purposes I want to run it on localhost (macOS).
With this NGINX configuration:
upstream fe {
    server fe:3000;
}

upstream be {
    server be:4000;
}

server {
    server_name localhost;
    listen 80;

    location / {
        proxy_pass http://fe;
    }

    location /api/ {
        proxy_pass http://be;
    }
}
I can reach the FE in the browser at http://localhost/ and the BE at http://localhost/api/, as expected.
The issue is that the FE refuses to communicate with the BE, failing with this error:
Error: Network error: request to http://localhost/api/graphql failed, reason: connect ECONNREFUSED 127.0.0.1:80
(It's a Next.js FE with a Node/Express/Apollo GraphQL BE.)
Note: I need the BE exposed through the upstream, because I need to download files directly via URLs sent in emails.
Am I missing some NGINX headers, DNS configuration, etc.?
Thanks in advance!
The initial call to Apollo comes from Next.js (the FE container) server side, which means the BE has to be addressed over the Docker network (it cannot be localhost, because for that call localhost is the FE container itself). In my case that call goes to process.env.BE, which is set to http://be:4000.
However, for other calls (e.g. sending a login request from the browser) the Docker network is unknown (they are made from localhost, which has no access to the Docker network), which means you have to address localhost/api/graphql.
I was able to achieve that with just a small change in my FE httpLink, the Apollo connection function:
uri: isBrowser ? `/api/graphql` : `${process.env.BE}/api/graphql`
The NGINX config stays the same as above.
NOTE: This only needs to be handled in the local environment; on the remote server it works fine without this 'hack', and the address is always domain.com/api/graphql.
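For completeness, a rough sketch of the whole link construction (the @apollo/client import and the isBrowser detection are illustrative assumptions, not part of the original snippet):

// Sketch of the FE Apollo HTTP link; assumes @apollo/client and that
// process.env.BE is set to http://be:4000 as described above.
import { HttpLink } from "@apollo/client";

// `window` exists only in the browser, not during SSR in the FE container.
const isBrowser = typeof window !== "undefined";

const httpLink = new HttpLink({
  // Browser: relative URL, so the request goes through NGINX on localhost.
  // SSR: use the Docker network hostname so the BE container is reachable.
  uri: isBrowser ? "/api/graphql" : `${process.env.BE}/api/graphql`,
});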
Recently I've been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic-Cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx compose file to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
Also, I connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (found via docker inspect default_dash_nginx), and the nginx server is also connected to the network.
Nginx works and I can open the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website as reported by the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually an NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The nginx container can actually ping the website container and also fetch the HTML files from port 80 on it.
So it seems like nginx itself isn't working as it should.
The first answer got no results (I tried saving it as each of the files mentioned here).
Have I missed something, or am I just not smart enough?
Here is an nginx config; try it and work through what it does:
server {
    listen 80;
    server_name api1.getr.me;

    location / {
        proxy_pass http://localhost:8081;
    }
}

server {
    listen 80;
    server_name api2.getr.me;

    location / {
        proxy_pass http://localhost:8082;
    }
}

server {
    listen 80;
    server_name some.getr.me;

    location / {
        proxy_pass http://localhost:XXXX;
    }
}
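One thing worth checking: if nginx itself runs as a container on default_dash_nginx, then localhost inside that container is the nginx container itself, not the host, so the proxy targets likely need to be container names on the shared network (or the host's LAN IP). A sketch, where api1 is a hypothetical container name:

server {
    listen 80;
    server_name api1.getr.me;

    location / {
        # "api1" is a hypothetical container/service name on the shared
        # default_dash_nginx network; Docker's embedded DNS resolves it.
        proxy_pass http://api1:8081;
    }
}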
I have a Flask app, built following the instructions below, that lets me authenticate users against Azure AD.
https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-python-webapp
The app works great when tested on localhost:5000. Now I want to deploy it to a production server using Docker and an nginx reverse proxy. I have created a Docker container whose port is mapped to port 6000 on localhost. Then I added a proxy_pass to the nginx config to pass traffic to the docker container.
nginx.conf
location /app/authenticated-app/ {
    proxy_pass http://localhost:6000/;
    proxy_redirect default;
}
With this config, I can get to the login page via https://server/app/authenticated-app; however, when I click on login, the request that goes to Azure has a query parameter redirect_uri set to http://localhost:6000/getToken. Therefore, once I complete the login, the app gets redirected to that URL. Does anyone know how to fix this and get it redirected to the proper URL? I have already added https://server/app/authenticated-app/getToken under the redirect URIs on the Azure portal.
I had a similar issue, with nginx and my flask app both running in docker containers in the same stack and using a self-signed SSL certificate.
My nginx redirects requests as follows:
proxy_pass http://$CONTAINER_NAME:$PORT;
and the MSAL app uses that URL when building its redirect_uri:
def _build_auth_code_flow(authority=None, scopes=None):
    return _build_msal_app(authority=authority).initiate_auth_code_flow(
        scopes or [],
        redirect_uri=url_for("auth.authorized", _external=True))
I cheated a little bit by hardcoding the return URL I wanted (identical to the one I configured in my Azure app registration) in my config.py file and using that for the redirect_uri:
def _build_auth_code_flow(authority=None, scopes=None):
    return _build_msal_app(authority=authority).initiate_auth_code_flow(
        scopes or [],
        redirect_uri=current_app.config['HARDCODED_REDIRECT_URL_MICROSOFT'])
In my case, that URL would be https://localhost/auth/redirect/. I also needed to configure nginx to redirect all requests from http to https:
events {}

http {
    server {
        listen 80;
        server_name localhost;
        return 301 https://localhost$request_uri;
    }
    ...
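Alternatively, instead of hardcoding the redirect URL, the external scheme and host can be forwarded to Flask through proxy headers, with something like Werkzeug's ProxyFix on the Flask side so url_for(..., _external=True) honors them. A minimal nginx sketch reusing the question's paths (the exact header set is an assumption, not taken from the configs above):

location /app/authenticated-app/ {
    proxy_pass http://localhost:6000/;

    # Forward the external host/scheme so Flask can build
    # https://server/app/authenticated-app/... instead of
    # http://localhost:6000/...
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Prefix /app/authenticated-app;
}

On the Flask side this requires wrapping the WSGI app in werkzeug.middleware.proxy_fix.ProxyFix with x_proto, x_host and x_prefix enabled.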
I had the same issue; what I did was:
Use CherryPy to enable SSL on a custom port.
cherrypy.config.update({'server.socket_host': '0.0.0.0',
                        'server.socket_port': 8443,
                        'engine.autoreload.on': False,
                        'server.ssl_module': 'builtin',
                        'server.ssl_certificate': 'crt',
                        'server.ssl_private_key': 'key'
                        })
Then install Nginx and proxy to https://127.0.0.1:8443
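The nginx side of that could look roughly like this (server name and certificate paths are assumed placeholders):

server {
    listen 443 ssl;
    server_name example.com;                  # assumed domain
    ssl_certificate     /etc/nginx/ssl/crt;   # assumed paths
    ssl_certificate_key /etc/nginx/ssl/key;

    location / {
        # CherryPy terminates its own TLS on 8443, so proxy over https.
        # Upstream certificate verification is off by default in nginx,
        # which matters here because the CherryPy cert is self-signed.
        proxy_pass https://127.0.0.1:8443;
        proxy_set_header Host $host;
    }
}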
Not sure if that will help, but this is what I did to get my Flask app working with MSAL.
So I've been facing a weird problem, and I'm not sure where the fault is. I'm running a container using docker-compose, and the following nginx configuration works great:
server {
    location / {
        proxy_pass http://container_name1:1337;
    }
}
Here container_name1 is the name of the service I gave in the docker-compose.yml file. It resolves to the IP perfectly and it works. However, the moment I change the above file to this:
upstream backend {
    least_conn;
    server container_name1:1337;
    server container_name2:1337;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
It stops working completely, and in the error logs I get the following:
2020/03/17 13:16:03 [error] 8#8: *11 no live upstreams while connecting to upstream, client: xxxxxx, server: codedamn.com, request: "GET / HTTP/1.1", upstream: "http://backend/", host: "xxxxx"
Why is that? Is nginx not able to resolve DNS when inside upstream blocks? Could anyone help with this problem?
NOTE: This happens only in production (Ubuntu 16.04); locally (macOS Catalina) the same configuration works fine. I'm totally confused after discovering this.
Update 1: The following works:
upstream backend {
    least_conn;
    server container_name1:1337;
}
But not with more than one server. Why?!
Alright, figured it out. This happens because docker-compose creates containers in a random order, and nginx quickly marks the containers as down (I was deploying this to production while there was some traffic). The app containers weren't ready, but nginx was, so it marked them as down and stopped forwarding any traffic.
For now, instead of syncing up the docker-compose container creation order (which turned out to be a bit hacky), I disabled nginx's automatic marking of a service as down after failed attempts by writing:
server app_inst1:1337 max_fails=0;
which lets nginx keep forwarding traffic to that service (and my Docker is configured to restart containers if they crash), which is fine.
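For reference, the full upstream from the question with that change applied to both servers might look like this sketch:

upstream backend {
    least_conn;
    # max_fails=0 disables nginx's passive health marking, so a container
    # that is still starting up is never taken out of rotation.
    server container_name1:1337 max_fails=0;
    server container_name2:1337 max_fails=0;
}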
We use Docker Swarm with service discovery for a backend REST application. The services in the swarm are configured with endpoint_mode: vip and run in global mode. Nginx proxy-passes to the service discovery aliases. When we update the backend services, nginx sometimes throws a 502 because service discovery may still point to the service being updated.
In such cases we want to retry the same endpoint again. How can we achieve this?
According to this, we added an upstream with the host's private IP and used proxy_next_upstream error timeout http_502; but the problem still persists.
nginx.conf
upstream servers {
    server 192.168.1.2:443;        # private IP of the host machine
    server 192.168.1.2:443 backup;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    proxy_next_upstream http_502;

    location /endpoint1 {
        proxy_pass http://docker.service1:8080/endpoint1;
    }

    location /endpoint2 {
        proxy_pass http://docker.service2:8080/endpoint2;
    }

    location /endpoint3 {
        proxy_pass http://docker.service3:8080/endpoint3;
    }
}
Here, if http://docker.service1:8080/endpoint1 throws a 502, we want to hit http://docker.service1:8080/endpoint1 again.
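Note that proxy_next_upstream only retries across the servers of the upstream group a request actually passes through; in the config above the location blocks proxy_pass straight to the service aliases and bypass the servers upstream entirely. A sketch of wiring one endpoint through a group so a 502 is retried against the same address (the upstream name is hypothetical):

upstream service1_backend {
    server docker.service1:8080 max_fails=0;
    # The same address listed again as backup: on a 502, nginx retries here.
    server docker.service1:8080 backup;
}

server {
    listen 443 ssl http2 default_server;

    location /endpoint1 {
        proxy_pass http://service1_backend/endpoint1;
        proxy_next_upstream error timeout http_502;
        proxy_next_upstream_tries 2;
    }
}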
Additional queries:
Is there any way in Docker Swarm to stop service discovery from pointing at a service that is being updated until that service is fully up?
Is the upstream necessary here, since we use Docker service discovery directly?
I suggest you add a health check directly at the container level (here).
By doing so, Docker periodically pings an endpoint you specify; if a container is found unhealthy, Docker will 1) stop routing traffic to it and 2) kill the container and restart a new one. Your upstream will therefore resolve to one of the healthy containers, and there is no need to retry.
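A minimal docker-compose sketch of such a health check (image name, endpoint and timings are assumptions):

services:
  service1:
    image: myorg/service1   # hypothetical image
    healthcheck:
      # Swarm routes traffic to the task only once this command succeeds.
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 15s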
As for your additional questions: first, Docker won't route to a task until it is healthy. Second, nginx is still useful for distributing traffic according to the endpoint URL. But personally I think nginx + swarm's vip mode is not a great choice, because the swarm load balancer is poorly documented, it doesn't support sticky sessions, and you can't have proxy-level health checks; I would use Traefik instead, which has its own load balancer.
I want to move a hosted MongoDB database service to a self-hosted solution behind a firewall. The hosted Ruby on Rails app connects via MongoMapper. I want to port only the database first, and then maybe the Rails app.
I've set up server_name data.example.com on port 80 in nginx 1.9.11, redirecting to another upstream port, localhost:8090.
This is done because this nginx instance also serves a website on example.com:80 and www.example.com:80, but I only want data.example.com:80 to connect to the upstream TCP port. I do it this way because server_name is available in the http section only, i.e. understandably not in the stream server setup.
So here is the "data-web-domain":80 to localhost:8090 config:
server {
    listen 80;
    server_name data.example.com;

    location / {
        proxy_pass http://localhost:8090;
    }
}
Then I pass localhost:8090 on to MongoDB via a stream block to allow a TCP connection:
stream {
    server {
        listen 8090;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }

    upstream stream_mongo_backend {
        server localhost:27017;
    }
}
When I browse to data.example.com I get the response:
"You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number"
OK, so the above config allows a web request at the web domain to get an error message back from MongoDB.
When connecting to the server directly (via localhost or the server's Unix domain name) on port 8090, the stream redirect to 27017 works, i.e. this mongo command line connects:
mongo server-unix-name:8090/dummydb -udummyuser -pdummysecret
But via the web domain it doesn't:
mongo data.example.com:80/dummydb -udummyuser -pdummysecret
What's broken between data.example.com:80 and localhost:8090?