How do I map a location to an upstream server in Nginx? - docker

I've got several Docker containers acting as web servers on a bridge network. I want to use Nginx as a proxy that exposes a service (web) outside the bridge network and embeds content from other services (e.g. wiki) using server-side includes (SSI).
Long story short, I'm trying to use the configuration below, but my locations aren't working properly. The / location works fine, but when I add another location (e.g. /wiki) or change / to something more specific (e.g. /web) I get a message from Nginx saying that it "Can't get /wiki" or "Can't get /web" respectively:
events {
    worker_connections 1024;
}

http {
    upstream wiki {
        server wiki:3000;
    }

    upstream web {
        server web:3000;
    }

    server {
        ssi on;

        location = /wiki {
            proxy_pass http://wiki;
        }

        location = / {
            proxy_pass http://web;
        }
    }
}
I've attached to the Nginx container and verified that I can reach the other containers using curl; they appear to be working properly.
I've also read the Nginx pitfalls page and know that using hostnames (wiki, web) isn't ideal, but I don't know the IP addresses ahead of time, and I've tried to avoid DNS issues by telling docker-compose that the nginx container depends on web and wiki.
Any ideas?

You need to change proxy_pass http://wiki; to proxy_pass http://wiki/;.
Nginx handles proxy_pass differently depending on whether the target is specified with a URI (anything after the host, even just a trailing slash). Without one, the original request URI is passed to the upstream unchanged, so a request for /wiki is forwarded to wiki:3000 as /wiki. With a trailing slash, the part of the request URI that matched the location is replaced by that URI, so /wiki is forwarded as /. You can find the details under the proxy_pass directive documentation on nginx.org.
That also explains your error message: "Can't get /wiki" comes from the server wiki:3000, which has no /wiki route; it is not an error within Nginx itself.
I hope this helps.
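Applied to the config from the question, the server block could look like this (a sketch; I've also dropped the = exact-match modifier, since that matches only the literal URI /wiki and would not match paths beneath it):

```nginx
server {
    ssi on;

    # Trailing slash on proxy_pass: the matched prefix /wiki/ is replaced,
    # so a request for /wiki/page is forwarded to wiki:3000 as /page.
    location /wiki/ {
        proxy_pass http://wiki/;
    }

    # A plain prefix location / matches every remaining URI.
    location / {
        proxy_pass http://web/;
    }
}
```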

Related

Redirecting to docker registry with nginx

All I would like to do is control the top endpoint (MY_ENDPOINT) where users will log in and pull images. The registry and containers are being hosted elsewhere (DOCKER_SAAS), so all I need is a seemingly simple redirect. Concretely, where you would normally do:
docker login -u ... -p ... DOCKER_SAAS
docker pull DOCKER_SAAS/.../...
I would like to allow:
docker login -u ... -p ... MY_ENDPOINT
docker pull MY_ENDPOINT/.../...
And even more optimally I would prefer:
docker login MY_ENDPOINT
docker pull MY_ENDPOINT/.../...
where the difference in the last item is that the endpoint contains a hashed version of the username and password, which is set into an Authorization header (using Basic), so the user doesn't even need to worry about a username and password, just their URL. I've tried a proxy_pass as we already do for basic packaging (using HTTPS), but that fails with a 404 (in part because we do not handle /v2 - do I need to redirect that through also?). This led me to https://docs.docker.com/registry/recipes/nginx/, but that seems pertinent only if you are hosting the registry yourself. Is what I am trying to do even possible?
It sounds like there is also an Nginx or similar reverse proxy in front of DOCKER_SAAS. Does the infrastructure look like this?
[MY_ENDPOINT: nginx] <--> ([DOCKER_SAAS ENDPOINT: ?] <--> [DOCKER_REGISTRY])
My guess is that since the server [DOCKER_SAAS ENDPOINT: ?] is apparently configured with a fixed domain name, it expects exactly that domain name in the request's Host header (e.g. Host: DOCKER_SAAS.TLD). So the problem is probably that when proxying from [MY_ENDPOINT: nginx] to [DOCKER_SAAS ENDPOINT: ?], the wrong Host header is sent along: by default the host header MY_ENDPOINT.TLD is sent, but it should be DOCKER_SAAS.TLD instead. E.g.:
upstream docker-registry {
    server DOCKER_SAAS.TLD:8443;
}

server {
    ...
    server_name MY_ENDPOINT.TLD;

    location / {
        proxy_pass https://docker-registry/;
        proxy_set_header Host DOCKER_SAAS.TLD; # set the header explicitly
        ...
    }
}
or
server {
    ...
    server_name MY_ENDPOINT.TLD;

    location / {
        proxy_pass https://DOCKER_SAAS.TLD:8443/;
        proxy_set_header Host DOCKER_SAAS.TLD; # set the header explicitly
        ...
    }
}
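If the upstream terminates TLS for that name, nginx may also need to send the right SNI name during the TLS handshake, not just the right Host header. A sketch using the proxy_ssl_server_name and proxy_ssl_name directives (available since nginx 1.7.0):

```nginx
location / {
    proxy_pass https://DOCKER_SAAS.TLD:8443/;
    proxy_set_header Host DOCKER_SAAS.TLD;  # HTTP Host header
    proxy_ssl_server_name on;               # send SNI to the upstream
    proxy_ssl_name DOCKER_SAAS.TLD;         # name to use for SNI
}
```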
Regarding this:
And even more optimally I would prefer: docker login MY_ENDPOINT
This could be handled on the proxy server ([MY_ENDPOINT: nginx]), yes. (The Authorization: "Basic ..." header can be filled dynamically with the respective token extracted from MY_ENDPOINT, and so on.) However, the docker CLI would still ask for a username and password anyway. Yes, the user can enter dummy values (to make the CLI happy), or this would also work:
docker login -u lalala -p 123 MY_ENDPOINT
But this would be inconsistent and would rather confuse the users, imho. So better to let it be...
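To make the "token in the endpoint" idea concrete, here is one way it could be wired up, assuming (purely as an illustration, the source doesn't specify the scheme) that the token is encoded as a subdomain, e.g. <token>.MY_ENDPOINT.TLD:

```nginx
# Extract the first hostname label as the Basic auth token.
# Assumed scheme: docker pull <token>.MY_ENDPOINT.TLD/org/image
map $host $registry_token {
    "~^(?<token>[^.]+)\.MY_ENDPOINT\.TLD$" $token;
    default "";
}

server {
    listen 443 ssl;
    server_name *.MY_ENDPOINT.TLD;
    ...

    location / {
        proxy_set_header Authorization "Basic $registry_token";
        proxy_set_header Host DOCKER_SAAS.TLD;
        proxy_pass https://DOCKER_SAAS.TLD:8443/;
    }
}
```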
This simple config works both with GitHub and Amazon ECR:
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_set_header Authorization "Basic ${NGINX_AUTH_CREDENTIALS}";
        proxy_pass https://registry.example.com;
    }
}
${NGINX_AUTH_CREDENTIALS} is a placeholder for the actual hash that Docker uses to authenticate. You can get it from $HOME/.docker/config.json after running docker login once:
{
    "auths": {
        "registry.example.com": {
            "auth": "THIS STRING"
        }
    }
}
Since the proxy injects/replaces the authentication header, there is no need to use docker login; just pull using the address of the proxy instead of the registry address.
Why 404?
I had several 40X errors while trying to test the proxy to GitHub with curl:
bad credentials - 404, not 401 or 403 as it normally would be.
GET /v2/_catalog - 404 (not supported on GitHub yet; it's in their backlog). Use GET /v2/repo_name/image_name/tags/list instead.
curl without -XGET - 405; it gives a response anyway, but to get a 200 you need to use GET explicitly (-XGET).
Despite all that, docker pull worked flawlessly from the beginning, so I recommend using it for testing.
How to handle /v2/
location / matches everything, including /v2/, so there is no particular need to handle it separately in the proxy.
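If you would rather scope the proxy to the registry API explicitly anyway, the same config restricted to /v2/ would look like this (a sketch reusing the placeholder names from above):

```nginx
server {
    listen 80;
    server_name localhost;

    # Every Docker Registry HTTP API v2 request starts with /v2/,
    # so this prefix covers everything docker pull needs.
    location /v2/ {
        proxy_set_header Authorization "Basic ${NGINX_AUTH_CREDENTIALS}";
        proxy_pass https://registry.example.com;
    }
}
```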

Nginx location app urls

I'm trying to get routing within the nginx config working. I have an app at http://app1:8081 and another app at http://app2:8080. (FYI, I'm using Docker containers, so each app is in its own container.) What I have working is nginx pointing app1 at http://example.com. Where I'm having trouble is getting http://example.com/gc to work.
server {
    listen 80;
    server_name http://example.com;

    location /gc/ {
        proxy_pass http://app2:8080/;
    }

    location / {
        proxy_pass http://app1:8081/;
    }
}
I've tried the proxy_pass with and without trailing / and the location with and without trailing /. I've had an odd result where going to example.com/gc/ would rewrite to example.com/home which didn't work.
I was hoping for something similar to IIS with application folders under a site: a site that points to example.com, with an application named gc pointing at the application folder.
The end result should be example.com/gc/home renders app2:8080/home.
Any help with my nginx config would be greatly appreciated.

serving web app and python using nginx on remote server

Setup:
1> A web GUI using AngularJS hosted in a Tomcat server and a Python app using Flask are running on an AWS server.
2> I am working on a secure server, hence I am unable to access AWS directly.
3> I have set up NGINX to access the GUI app from my local secured network. The GUI app is running on awsserver:9506/appName.
4> The Flask app is running on the AWS server, hosted on 127.0.0.1:5000. This app has two URIs, cross and accross:
127.0.0.1:5000/cross
127.0.0.1:5000/accross
Now in my GUI, after the NGINX setup, I am able to access it using the domain name and without a port:
domain.name/appName
Now when I try to use it to send a request to the server, my URL changes to domain.name/cross. I made the changes in the NGINX config and am able to access it, but I am not able to get a response back. Please find my NGINX config file below:
server {
    listen 80;
    server_name domain.name;
    root /home/Tomcat/webapps/appName;

    location / {
        proxy_pass http://hostIP:9505/; # runs the tomcat home page
    }

    location /appName/ {
        proxy_pass http://hostIP:9505/appName; # runs the application home page
    }

    location /cross/ {
        proxy_pass http://127.0.0.1:5000/cross; # hits the python flask app; I am trying to send a POST
    }
}
Also, what I noticed is that my POST request is being converted to a GET at the server by NGINX.
You need to be consistent with your use of the trailing /. With the proxy_pass directive (as with alias), nginx performs text substitution to form the rewritten URI.
Is the URI of the API /cross or /cross/? A POST is converted to a GET when the server is forced to perform a redirect (for example, to append a trailing /).
Specifying the same URI on the location and the proxy_pass is unnecessary, as no changes are made to the URI in that case.
If the hostIP in your first two location blocks is the same, and assuming that the missing trailing / is accidental, they can be combined into a single location block.
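To illustrate with the /cross URI from the question, these two forms forward requests identically (a sketch; only one of them should appear in a real config):

```nginx
# No URI on proxy_pass: the original request URI is passed through
# unchanged, so /cross/foo is forwarded as /cross/foo.
location /cross {
    proxy_pass http://127.0.0.1:5000;
}

# Same effect: the matched prefix /cross is substituted with /cross.
location /cross {
    proxy_pass http://127.0.0.1:5000/cross;
}
```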
For example:
location / {
    proxy_pass http://hostIP:9505;
}

location /cross {
    proxy_pass http://127.0.0.1:5000;
}
See this document for more.

How can I host my API and web app on the same domain?

I have a Rails API and a web app (using Express), completely separate and independent from each other. What I want to know is: do I have to deploy them separately? If I do, how can I make it so that my API is at mysite.com/api and the web app at mysite.com/?
I've seen many projects that do it that way, even keeping the API and the app in separate repos.
Usually you don't expose such web applications directly to clients. Instead you use a proxy server that forwards all incoming requests to the Node or Rails server.
nginx is a popular choice for that. The beginners guide even contains a very similar example to what you're trying to do.
You could achieve what you want with a config similar to this:
server {
    location /api/ {
        proxy_pass http://localhost:8000;
    }

    location / {
        proxy_pass http://localhost:3000;
    }
}
This assumes your API runs locally on port 8000 and your Express app on port 3000. Also, this is not a full configuration file; it needs to be loaded into or added to the http block. Start with the default config of your distro.
When there are multiple location entries nginx chooses the most specific one. You could even add further entries, e.g. to serve static content.
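Extending the sketch above with a static-content entry might look like this (the /static/ prefix and the /var/www path are assumptions for illustration):

```nginx
server {
    location /api/ {
        proxy_pass http://localhost:8000;
    }

    # Serve static files directly from disk instead of proxying.
    location /static/ {
        root /var/www;  # /static/logo.png -> /var/www/static/logo.png
    }

    location / {
        proxy_pass http://localhost:3000;
    }
}
```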
While Sven's answer is completely correct for the question given, I'd prefer doing it at the DNS level so that I can move a service to a new server in case my API or web app experiences heavy load. This lets us run our APIs without affecting the web app, and vice versa.
DNS Structure
api.mysite.com => 9.9.9.9 // public IP address of my server
www.mysite.com => 9.9.9.9 // public IP address of my server
Since for now you want both your web app and API to run on the same server, you can use nginx to forward requests appropriately.
server {
    listen 80;
    server_name api.mysite.com;
    # ..
    # Removed for simplicity
    # ..

    location / {
        proxy_pass http://localhost:3000;
    }
}

server {
    listen 80;
    server_name www.mysite.com;
    # ..
    # Removed for simplicity
    # ..

    location / {
        proxy_pass http://localhost:8000;
    }
}
Any time in the future, if you are experiencing overwhelming traffic, you can just alter the DNS to point to a new server and you'll be good.

Same server with different Website names

I have given two different website names to the same IP (192.168.1.142).
Now I am using both of these to configure a reverse proxy with nginx.
Is it OK to run it like this?
Kindly point out any problems I might face in the future.
Yes, this is fine. Use separate server {} blocks and the server_name option to specify which configuration is which:
server {
    listen 80;
    server_name domain.com;
    # rest of domain.com options go here
}

server {
    # this will be the default site on this host
    listen 80 default_server;
    server_name other.com;
    # rest of other.com options go here
}
In practice, splitting the two server {} blocks into different files would make maintenance easier and is often the norm.
If you only want the sites to be available on that one IP, change the listen directive to:
listen 192.168.1.142:80;
Also, if you want to use SSL/HTTPS, you may run into complications, as historically you could only have one SSL certificate per IP address. There are solutions if this is the case.
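One common solution is SNI (Server Name Indication), which lets nginx serve a different certificate per server_name on the same IP and is supported by all modern clients. A sketch (the certificate paths are assumptions):

```nginx
# Two HTTPS sites on the same IP, distinguished via SNI.
server {
    listen 443 ssl;
    server_name domain.com;
    ssl_certificate     /etc/nginx/certs/domain.com.crt;
    ssl_certificate_key /etc/nginx/certs/domain.com.key;
}

server {
    listen 443 ssl;
    server_name other.com;
    ssl_certificate     /etc/nginx/certs/other.com.crt;
    ssl_certificate_key /etc/nginx/certs/other.com.key;
}
```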
