I've been trying to set up a Go web application with Docker and nginx as a reverse proxy.
My plan is to use a single domain for multiple applications e.g.: mydomain.com/myapp1.
However, whenever I try to access my app with a URL like localhost/myapp/something, the request is redirected to http://localhost/something.
I've gone through all kinds of nginx configs and none of them worked, so I suspect that the problem is on the Go side.
In the app itself, I'm using gorilla mux for routing, and also negroni for some middleware.
The relevant code looks something like this:
baseRouter := mux.NewRouter()
baseRouter.HandleFunc("/something", routes.SomeHandler).Methods("GET")
baseRouter.HandleFunc("/", routes.IndexHandler).Methods("GET")
commonMiddleware := negroni.New(
    negroni.HandlerFunc(middleware.Debug),
)
commonMiddleware.UseHandler(baseRouter)
log.Fatal(http.ListenAndServe(":5600", commonMiddleware))
According to this, every request should go through my debug middleware, which just prints some request info to stdout; however, when the redirect happens, nothing is printed.
But if the path doesn't match any handlers, everything works just fine: the standard Go 404 message appears as expected, and the request is printed by the debug middleware as well.
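For reference, a negroni middleware with the behaviour described (just printing request info to stdout) would look roughly like the sketch below; the actual middleware.Debug isn't shown in the question, so the body here is an assumption:

package middleware

import (
    "fmt"
    "net/http"
)

// Debug prints basic request info to stdout, then calls the next handler.
// The three-argument signature is what negroni.HandlerFunc expects.
func Debug(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
    fmt.Printf("%s %s from %s\n", r.Method, r.URL.Path, r.RemoteAddr)
    next(w, r)
}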
My GET handlers generally only do something like this:
templ, _ := template.ParseFiles("public/something.html")
templ.Execute(w, utils.SomeTemplate{
    Title: "something",
})
And finally, the relevant part in my nginx config:
server {
    listen 80;
    server_name localhost;

    location /myapp/ {
        # address "myapp" is set by docker-compose
        proxy_pass http://myapp:5600/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }
}
This kind of nginx config used to be enough for Node.js apps in the past, so I don't understand why it wouldn't work. If anyone could point out what the hell I'm doing wrong, I would appreciate it a lot.
Your nginx looks fine to me.
In your Go code, when you create your router, you can use /myapp as a PathPrefix, like below:
baseRouter := mux.NewRouter()
subRouter := baseRouter.PathPrefix("/myapp").Subrouter()
subRouter.HandleFunc("/something", routes.SomeHandler).Methods("GET")
Or simply add myapp to the path: baseRouter.HandleFunc("/myapp/something", routes.SomeHandler).Methods("GET")
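Put together, a minimal runnable sketch of the subrouter approach might look like this (the handler body, the empty negroni chain, and the port are placeholders; note that the /myapp prefix must actually reach the Go app for these routes to match):

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/gorilla/mux"
    "github.com/urfave/negroni"
)

func someHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "hello from /myapp/something")
}

func main() {
    baseRouter := mux.NewRouter()

    // Everything the proxy forwards arrives under /myapp, so register
    // the routes under that prefix.
    subRouter := baseRouter.PathPrefix("/myapp").Subrouter()
    subRouter.HandleFunc("/something", someHandler).Methods("GET")

    n := negroni.New()
    n.UseHandler(baseRouter)

    log.Fatal(http.ListenAndServe(":5600", n))
}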
Your nginx configuration is perfectly fine.
The path you mentioned (/myapp/something) shows a 404 because you have not registered that path in your routes.
I would suggest that if you wish to host multiple applications on the same domain, you prefer subdomains (myapp1.mydomain.com) over paths (mydomain.com/myapp1).
For each subdomain, you can create a separate nginx server block by changing only the server_name value and keeping the rest of the server block the same.
Then, in your middleware, you can distinguish requests by domain and serve the requested resource.
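For illustration, two such server blocks might look like this (hostnames and upstream addresses are placeholders, not taken from the question):

server {
    listen 80;
    server_name myapp1.mydomain.com;

    location / {
        proxy_pass http://myapp1:5600;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name myapp2.mydomain.com;

    location / {
        proxy_pass http://myapp2:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}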
EDIT: The problem is a bug in Plausible, not Nginx. See: https://github.com/plausible/analytics/discussions/1184
I've been struggling to get my Nginx Reverse Proxy to work for a Docker app. Similar questions have been asked but haven't provided the solution for this situation. I've spent hours now trying to get this working.
I'm trying to self host Plausible which runs in a Docker container at http://localhost:8000
Going to http://server-ip:8000/ works fine
I want to set up an Nginx reverse proxy to provide SSL and host it on my domain. The tricky part is that I want to serve it from a subfolder instead of the root.
So serving it from https://app.mydomain.com/plausible instead of https://app.mydomain.com/
Current Code:
location /plausible {
    rewrite ^/plausible/?(.*) /$1 break;
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
The Problem
app.mydomain.com/plausible/ redirects to app.mydomain.com/login instead of app.mydomain.com/plausible/login.
If I manually go to app.mydomain.com/plausible/login I can see the form field but all the styling and scripts are broken. They try to load from app.mydomain.com/stylesheet.css instead of app.mydomain.com/plausible/stylesheet.css
So I believe the Docker app expects to be on the root URL. So Nginx should rewrite the requests to include the subfolder in some way? I just can't figure out how to do it.
What I've tried:
Trailing slash, no trailing slash
Just proxy_pass
All kinds of rewrite variations
Anyone who can help me in the right direction? Thank you in advance
I am trying to set up a Bitbucket OAuth consumer for authentication for an application called SonarQube (linting tool). Following the guide, it looks like I have set up everything correctly - https://github.com/SonarSource/sonar-auth-bitbucket.
The callback URL is set to https://myserver/oauth2/callback. When I navigate to it directly, I get "You're not authorized to access this page. Please contact the administrator." - which probably is valid. I don't have any trailing slashes or incorrect scheme.
One thing to note is that I am using an nginx reverse proxy. I did read that this issue sometimes surfaces when the headers X-Forwarded-For and X-Forwarded-Proto are set incorrectly. Please note my troubleshooting skills around this are not the greatest, but when I use dev tools and navigate to https://myserver/oauth2/callback, I don't see those headers set. However, when I run nginx -T | grep proxy_set_header, it seems to be correct.
root@01008bf897b1:/app# nginx -T | grep proxy_set_header
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header Proxy "";
Also when I look at the URL when doing the Bitbucket authentication, I notice it does not include https in the redirect_uri:
https://bitbucket.org/site/oauth2/authorize?response_type=code&client_id=Fs5Fq2e5VqfduRs4xD&redirect_uri=myserver%2Foauth2%2Fcallback%2Fbitbucket&scope=account
If I had https, like below, it actually prompts for "Confirm access to your account":
https://bitbucket.org/site/oauth2/authorize?response_type=code&client_id=Fs5Fq2e5VqfduRs4xD&redirect_uri=https%3A%2F%2Fmyserver%2Foauth2%2Fcallback%2Fbitbucket&scope=account
Is my reverse proxy setup incorrectly - proxy headers? Possible Bitbucket issue? Any help would be appreciated!
Had the same issue and solved it by adding the actual Sonar URL to the config.
If you leave it empty, the default value is localhost.
It's under Configuration -> General.
[Screenshot: SonarQube configuration screen]
This was not proxy related but a configuration issue in SonarQube.
I had originally set sonar.core.serverBaseURL=https://mysonarqube.com as an environment variable in my Docker container, which I thought wasn't being applied, as when I checked in the UI it was blank. I then updated the env variable to sonar.core.serverBaseURL=notworking so I could troubleshoot it/delete it later, but it seemed to set that value even though the UI showed the correct value. Once it was updated, it worked (as well as all my other auth integrations such as Google and GitHub).
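For reference, the same setting can also be placed in sonar.properties rather than the UI (the URL below is only an example):

# conf/sonar.properties
sonar.core.serverBaseURL=https://mysonarqube.com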
I'm receiving the error Authentication required after I log in to the Wildfly 13 Management Console.
If I type a user or password wrong, it asks again, but if I type correctly it shows the page with the error message (so I assume the user and password are correct, but something else after that gives the error).
I'm using Docker to run an nginx container and a Wildfly container.
The nginx container listens externally on port 9991 and proxies the request to the Wildfly container, but it shows the error described before.
It only happens with the Wildfly Console; every other proxied request, even requests proxied to a websocket or to Wildfly on port 8080, completes successfully.
The Wildfly container listens externally on port 9990, and I can access the console successfully on this port. If in Docker I map the port "9992:9990" I can still access the console successfully through port 9992.
So it seems that this is not related to Docker, but to the Wildfly Console itself. Probably some kind of authentication is not happening successfully when using a reverse proxy in the middle.
I have a demo Docker project at https://github.com/lucasbasquerotto/pod/tree/0.0.6; you can download the tag 0.0.6, which has everything set up to work with Wildfly 13 and nginx and to simulate this error.
git clone -b 0.0.6 --single-branch --depth 1 https://github.com/lucasbasquerotto/pod.git
cd pod
docker-compose up -d
Then, if you access the container directly in http://localhost:9990 with user monitor and password Monitor#70365 everything works.
But if you access http://localhost:9991 with the same credentials, through the nginx reverse proxy, you receive the error.
My nginx.conf file:
upstream docker-wildfly {
    server wildfly:9990;
}

location / {
    proxy_pass http://docker-wildfly;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
I've also tried with:
proxy_set_header X-Forwarded-Proto $scheme;
And also with the Authorization header (just the 2nd line and also with both):
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
And also defining the host header with the port (instead of just $host):
proxy_set_header Host $server_addr:$server_port;
I've tried the above configurations isolated and combined together. All to no avail.
Any suggestions?
Has anyone successfully accessed the Wildfly Console through a reverse proxy?
Update (2018-09-22)
It seems Wildfly uses digest authentication (instead of basic).
I see the header in the console like the following:
Authorization: Digest username="monitor", realm="ManagementRealm", nonce="AAAAAQAAAStPzpEGR3LxjJcd+HqIX2eJ+W8JuzRHejXPcGH++43AGWSVYTA=", uri="/console/index.html", algorithm=MD5, response="8d5b2b26adce452555d13598e77c0f63", opaque="00000000000000000000000000000000", qop=auth, nc=00000005, cnonce="fe0e31dd57f83948"
I don't see much documentation about using nginx to proxy pass requests with digest headers (but I think it should be transparent).
One question I saw here in SO is https://serverfault.com/questions/750213/http-digest-authentication-on-proxied-server, but there is no answer so far.
I saw that there is the nginx non-official module https://www.nginx.com/resources/wiki/modules/auth_digest/, but in the github repository (https://github.com/atomx/nginx-http-auth-digest) it says:
The ngx_http_auth_digest module supplements Nginx's built-in Basic
Authentication module by providing support for RFC 2617 Digest
Authentication. The module is currently functional but has only been
tested and reviewed by its author. And given that this is security
code, one set of eyes is almost certainly insufficient to guarantee
that it's 100% correct. Until a few bug reports come in and some of
the ‘unknown unknowns’ in the code are flushed out, consider this
module an ‘alpha’ and treat it with the appropriate amount of
skepticism.
Also, it doesn't seem right to me to hardcode the user and password in a file to be used by nginx (the authentication should be transparent to the reverse proxy in this case).
In any case, I tried it, and it correctly asks me to authenticate even when the final destination does not use digest authentication: when trying to connect to the Wildfly site (not the console), nginx asks for credentials (before proxying the request) and then forwards the request successfully to the destination. In the case of the Wildfly Console, however, it keeps asking me to authenticate forever.
So I think this is not the solution. The problem seems to be in what nginx is passing to the Wildfly Console.
I had the same problem with the HAL management console v3.3 and 3.2
I could not get nginx HTTPS working due to authentication errors, even though the page prompted for HTTP basic auth user and pass.
This was tested in standalone mode on the same server
My setup was :
outside (https) -> nginx -> http://halServer:9990/
This resulted in working HTTPS, but with HAL authentication errors (seen in the browser's console); the webpage was blank.
On first access the webpage would ask for HTTP basic auth credentials normally, but then almost all HTTPS requests would return an authentication error.
I managed to make it work correctly by first enabling HTTPS on the HAL console with a self-signed certificate and then configuring nginx to proxy_pass to the HAL HTTPS listener.
Working setup is :
outside (https) -> nginx (https) -> https://halServer:9993/
Here is the nginx configuration
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name halconsole.mywebsite.com;

    # SSL
    ssl_certificate /keys/hal_fullchain.pem;
    ssl_certificate_key /keys/hal_privkey.pem;
    ssl_trusted_certificate /keys/hal_chain.pem;

    # security
    include nginxconfig.io/security.conf;

    # logging
    access_log /var/log/nginx/halconsole.mywebsite.com.access.log;
    error_log /var/log/nginx/halconsole.mywebsite.com.error.log warn;

    # reverse proxy
    location / {
        # or use static ip, or nginx upstream
        proxy_pass https://halServer:9993;
        include nginxconfig.io/proxy.conf;
    }

    # additional config
    include nginxconfig.io/general.conf;
    include nginxconfig.io/letsencrypt.conf;
}

# subdomains redirect
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name *.halconsole.mywebsite.com;

    # SSL
    ssl_certificate /keys/hal_fullchain.pem;
    ssl_certificate_key /keys/hal_privkey.pem;
    ssl_trusted_certificate /keys/hal_chain.pem;

    return 301 https://halconsole.mywebsite.com$request_uri;
}
proxy.conf
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
# Proxy headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Forwarded $proxy_add_forwarded;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-By $server_addr;
# Proxy timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
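Note that $connection_upgrade and $proxy_add_forwarded are not built-in nginx variables; the nginxconfig.io-generated files define them with map blocks in the http context. If you are not using those includes, you will need at least the standard WebSocket map, something like:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ""      close;
}

($proxy_add_forwarded is defined by a similar, longer map that builds an RFC 7239 Forwarded header; if you don't have it, drop that proxy_set_header line.)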
The easiest way to enable the HTTPS console is by using the console itself.
Generate a Java JKS keystore using either the command-line keytool or a GUI program.
I like GUIs, so I used KeyStore Explorer https://github.com/kaikramer/keystore-explorer
Copy the keystore file onto the halServer server where it has read access (no need to keep it secret). I placed mine inside the Wildfly data dir in a "keystore" directory.
# your file paths might differ, don't copy paste
cp /home/someUser/sftp_uploads/managementKS /opt/wildfly/standalone/data/keystore/managementKS
set permissions
# your file paths might differ, don't copy paste
chown --recursive -H wildfly:wildfly /opt/wildfly/standalone/data/keystore
(Use a VPN.) Log in to the cleartext console at http://halServer:9990/
add keystore : navigate :
configuration -> subsystems -> security (elytron) -> other settings (click view button)
stores -> keystore -> add
...
Name = managementKS
Type = JKS
Path = keystore/managementKS
Relative to = jboss.server.data.dir
Credential Reference Clear Text = keystore-password click Add
result in standalone.xml
<key-store name="managementKS">
    <credential-reference clear-text="keystore-password"/>
    <implementation type="JKS"/>
    <file path="keystore/managementKS" relative-to="jboss.server.data.dir"/>
</key-store>
add key manager : navigate :
ssl -> key manager -> add
...
Name = managementKM
Credential Reference Clear Text = keystore-password
Key Store = managementKS
result in standalone.xml
<key-manager name="managementKM" key-store="managementKS">
    <credential-reference clear-text="keystore-password"/>
</key-manager>
add ssl context : navigate :
ssl -> server ssl context -> add
...
Name = managementSSC
Key Manager = managementKM
...
Edit added : Protocols = TLSv1.2
save
result in standalone.xml
<server-ssl-contexts>
    <server-ssl-context name="managementSSC" protocols="TLSv1.2" key-manager="managementKM"/>
</server-ssl-contexts>
go back
runtime -> server (click view button)
http management interface (edit)
set secure socket binding = management-https
set ssl context = managementSSC
save
restart wildfly
systemctl restart wildfly
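For completeness, the same configuration can be scripted with jboss-cli instead of clicking through the console. This is an untested sketch that reuses the names from the steps above; adjust paths and passwords to your setup:

# run inside jboss-cli.sh --connect
/subsystem=elytron/key-store=managementKS:add(path=keystore/managementKS, relative-to=jboss.server.data.dir, type=JKS, credential-reference={clear-text=keystore-password})
/subsystem=elytron/key-manager=managementKM:add(key-store=managementKS, credential-reference={clear-text=keystore-password})
/subsystem=elytron/server-ssl-context=managementSSC:add(key-manager=managementKM, protocols=["TLSv1.2"])
batch
/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value=managementSSC)
/core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)
run-batch
reload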
I'm trying to set up an nginx location that will handle various paths and proxy them to my webapp.
Here is my conf:
server {
    listen 80;
    server_name www.example.org;

    # this works fine
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8081/myApp/;
    }

    # not working
    location ~ ^/(.+)$ {
        proxy_pass http://localhost:8081/myApp/$1;
    }
}
I would like to access myApp with various paths like /myApp/ABC, /myApp/DEF, /myApp/GEH or /myApp/ZZZ.
Of course these paths are not available in myApp. I want them to point to the root of myApp and keep the URL.
Is that possible to achieve with nginx?
Nginx locations match in order of definition. location / is basically a wildcard location, so it will match everything, and nothing will reach the second location. Reverse the order of the two definitions, and it should work. But actually, now that I look at it more closely, I think both locations are essentially doing the same thing:
/whatever/path/ ->>proxies-to->> http://localhost:8081/myApp/whatever/path/
A very late reply, but this might help someone.
try proxy_pass /myApp/ /location1 /location2;
Each location separated with space.
You will probably have to do a rewrite followed by a proxy_pass; I had the same issue. Check here: How to make a conditional proxy_pass within NGINX
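For this particular case, a sketch of that rewrite-plus-proxy_pass idea (untested, with the paths assumed from the question) could be:

location /myApp/ {
    # internally rewrite /myApp/ABC, /myApp/DEF, ... to the app root;
    # the URL shown in the browser stays unchanged
    rewrite ^/myApp/.*$ /myApp/ break;
    proxy_pass http://localhost:8081;
}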
I'm developing a web site with web.py and nginx which, up until now, I have been working on locally with the built-in development server. Now it's time to move the site over to a live server. I'd like to deploy the site so the root is something like examples.com/test, but all my URL handling is broken. I had thought I could create a url_prefix variable and pepper it around the web.py code, but that sure seems dirty. It seems like the best thing to do would be to have nginx strip the prefix from the URL so the web.py application never sees it, but I'm not sure it's even possible.
Does anybody know how to handle this situation?
Run the web.py app on a local port using a web server such as gunicorn, then configure nginx to host static files and reverse proxy the gunicorn server. Here are some configuration snippets, assuming that:
your project is in /var/www/example-webpy
your static files are in example-webpy/static
your nginx configuration is in /etc/nginx.
Expose the WSGI object in your application
It looks like web.py doesn't do this by default, so you'll want something like the following in your app.py (or whichever file bootstraps your app):
# For serving using any wsgi server
wsgi_app = web.application(urls, globals()).wsgifunc()
More information in this SO question.
Run your application server
Install gunicorn and start your application by running something like this (where example is the name of your Python module):
gunicorn example:wsgi_app -b localhost:3001
(You'll probably want to automate this using something like Supervisor so that the application server is restarted if your server bounces.)
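A Supervisor program entry for this could look roughly like the following (the file location, module name, and paths are assumptions, matching the layout above):

; /etc/supervisor/conf.d/example-webpy.conf
[program:example-webpy]
command=gunicorn example:wsgi_app -b localhost:3001
directory=/var/www/example-webpy
autostart=true
autorestart=true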
Configure nginx
Put the following in /etc/nginx/reverse-proxy.conf (see this SO answer)
# Serve / from local http server.
# Just add the following to individual vhost configs:
# proxy_pass http://localhost:3001/;
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
Then configure your domain in /etc/nginx/sites-enabled/example.com.conf:
server {
    server_name example.com;

    location /test/ {
        include /etc/nginx/reverse-proxy.conf;
        rewrite /test/(.*) /$1 break;
        proxy_pass http://localhost:3001/;
    }

    location / {
        root /var/www/example-webpy/static/;
    }
}
Note the rewrite, which should ensure that your web.py app never sees the /test/ URL prefix. See the nginx documentation on proxy_pass and HttpRewriteModule.
This will result in requests for example.com/js/main.js mapping to example-webpy/static/js/main.js, so it assumes that your web.py templates didn't add a /static/ prefix. It also results in everything in the static directory becoming visible to the web, so make sure that's what you intend!