I have installed Gerrit 2.12.3 on my Ubuntu Server 16.04 system.
Gerrit is listening on http://127.0.0.1:8102 behind an nginx server, which is listening on https://SERVER1:8102.
Some contents of the etc/gerrit.config file are as follows:
[gerrit]
    basePath = git
    canonicalWebUrl = https://SERVER1:8102/
[httpd]
    listenUrl = proxy-https://127.0.0.1:8102/
And some contents of my nginx settings are as follows:
server {
    listen 10.10.20.202:8102 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/server1.crt;
    ssl_certificate_key /etc/nginx/ssl/server1.key;

    location / {
        # Allow for large file uploads
        client_max_body_size 0;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;
    }
}
Nearly all of Gerrit's functionality works well now, but there is one problem I cannot solve:
The URL generated in notification emails is https://SERVER1:8102/11, which seems right, but when I click the link it redirects to https://SERVER1/#/c/11/ instead of https://SERVER1:8102/#/c/11/.
Can anyone tell me how to solve it?
Thanks.
It makes no sense for the ports of gerrit.canonicalWebUrl and httpd.listenUrl to match.
Specify as gerrit.canonicalWebUrl the URL that is accessible to your users through the Nginx proxy, e.g., https://gerrit.example.com.
This vhost in Nginx (listening on port 443) is in turn configured to connect to the backend as specified in httpd.listenUrl, e.g. port 8102, on which Gerrit would be listening in your case.
The canonicalWebUrl is just used so that Gerrit knows its own host name, e.g., for sending email notifications, IIRC.
You might also just follow the Gerrit documentation and stick to the ports described there.
EDIT: I just noticed that you want the proxy AND Gerrit both to listen on port 8102 - on a public interface and on 127.0.0.1 respectively. While this would work, provided you really make sure that Nginx is not binding to 0.0.0.0, I think it makes no sense at all. Don't you want your users to connect via HTTPS on port 443?
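As a minimal sketch of that setup (gerrit.example.com stands in for your real host name; the backend stays on 127.0.0.1:8102 as in your config):

[gerrit]
    basePath = git
    canonicalWebUrl = https://gerrit.example.com/
[httpd]
    listenUrl = proxy-https://127.0.0.1:8102/

server {
    listen 443 ssl;
    server_name gerrit.example.com;
    ssl_certificate /etc/nginx/ssl/server1.crt;
    ssl_certificate_key /etc/nginx/ssl/server1.key;

    location / {
        client_max_body_size 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;
    }
}

Since canonicalWebUrl then carries no explicit port, the links in notification emails resolve to the standard HTTPS port and should no longer point to a port the proxy isn't serving.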
In the past I tried setting up JFrog Artifactory OSS and was able to expose it outside my home network through my reverse proxy, and I was able to push to it via my computer's local CLI and through Drone CI, but it took an abnormal amount of time (roughly 5 min) to push to my own registry, when pushing to DockerHub or GitLab took a matter of seconds.
My container is really small (think MBs) and I never have any issues with pushing it to any other remote registry. I always thought it might have been the registry and the fact it was running on an old machine until now.
I recently discovered my git solution Gitea has a registry built in, so I did the same: I got everything set up and mapped, and once again it took an abnormal amount of time (roughly 5 min) to push to my own registry (this time backed by Gitea).
This leads me to think my issue is Nginx Proxy Manager related. I found some documentation online, but it was really general and vague. I have the current proxy config below and it still has the issue. Could anyone point me in the right direction? I also included a few other posts related to this issue.
server {
    set $forward_scheme http;
    set $server "192.168.X.XX";
    set $port 3000;

    listen 8080;
    #listen [::]:8080;
    listen 4443 ssl http2;
    #listen [::]:4443;

    server_name my.domain.com;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /etc/letsencrypt/live/npm-47/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-47/privkey.pem;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    access_log /data/logs/proxy-host-10_access.log proxy;
    error_log /data/logs/proxy-host-10_error.log warn;

    # Additional fields I added on top of the default Nginx Proxy Manager config
    proxy_buffering off;
    proxy_ignore_headers "X-Accel-Buffering";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
I also checked the live logs for Gitea, and I see the requests coming in in real time and being processed really fast, but there is always a significant delay before it receives the next request, which makes me think Nginx Proxy Manager is not correctly forwarding the requests or there is some setting that I missed. Any help would be greatly appreciated!
Some of the settings I tried were from the sources below:
Another registry
Another stack overflow suggestion
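For context, these are the directives I keep seeing suggested for registries behind nginx (a sketch of things to try, not a confirmed fix; proxy_request_buffering in particular controls whether nginx spools the whole layer upload to disk before forwarding it):

# sketch - candidate additions inside the proxy host configuration
client_max_body_size 0;          # don't cap layer upload size
proxy_request_buffering off;     # stream uploads instead of spooling them first
proxy_http_version 1.1;          # needed for chunked request bodies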
I have 2 Docker containers running on my EC2 instance:
Docker1: WordPress website running with a PHP server, mapped to port 8081 of the EC2 instance.
Docker2: Portal created in Angular running with NGINX, mapped to port 8082 of the EC2 instance.
I want to use the same EC2 instance for my domain and subdomain xyz.com and portal.xyz.com on the same port 80.
Ideally, if the request comes from xyz.com, it should redirect to Docker1 running on 8081 and if it is from portal.xyz.com, it should be redirected to Docker2 running on 8082.
Is it feasible, and if so, how? I do not want to spawn 2 EC2 instances for this, and both have to be mapped to HTTP on port 80.
Using multiple load balancers and target groups can solve your problem: https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-services-now-support-multiple-load-balancer-target-groups/
You can set up both load balancers to listen on HTTP and target your one EC2 instance on different ports. After that, setting up the routes in Route53 will be straightforward.
I had done something similar on a VPS server; technically it should work on an EC2 instance as well.
I created a new docker network 'proxy-network'. (Note: you can do without creating a network and just proxy to localhost:8081 and localhost:8082. This is just cleaner.)
Launch all the application servers in that network with proper names (e.g., wordpress, angular). Use --name in the run command or container_name in docker-compose.
Launch a new nginx server, mapping host ports 80 and 443 (if you need HTTPS to work). I used the nginx:latest image. Create a new default.conf and replace /etc/nginx/conf.d/default.conf in the container, as in the shell sketch below.
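A rough shell version of those steps (the angular image name is a placeholder for whatever you built; everything else reuses the names above):

# create the shared network
docker network create proxy-network

# launch the app containers on it, named so nginx can resolve them via Docker DNS
docker run -d --name wordpress --network proxy-network wordpress:latest
docker run -d --name angular --network proxy-network my-angular-image:latest   # placeholder image

# launch the proxy, publishing host ports and mounting the config
docker run -d --name proxy --network proxy-network \
    -p 80:80 -p 443:443 \
    -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro \
    nginx:latest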
The sample default.conf should look like this:
server {
    listen 80;
    server_name domain1.com www.domain1.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://wordpress;
    }
}

server {
    listen 80 default_server;
    server_name domain2.com www.domain2.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://angular;
    }
}
Once you update the alias records at your domain registrar, it works like a charm. Hope it helps. Good luck.
Running numerous Docker containers right now on a new build for a homelab server and trying to make sure everything is locked down and secure. I use the server for a variety of things, both things requiring access from the outside world (Nextcloud) and things that I will only access from my internal network (Plex). Of course the server is behind a router that limits open ports, but I'm looking for additional security: I would like to restrict those containers that I want to access only via the internal network to 192.168.0.0/24. That way, if somehow a port became open on my router, it would not be exposed (am I being too paranoid?).
Currently my docker-compose files are exposing ports via:
....
ports:
  - 8989:8989
....
This of course works fine but is accessible to the world should I open the port on my router. I know I can bind to localhost via
....
ports:
  - 127.0.0.1:8989:8989
....
But that doesn't help me when I'm trying to access the container from my internal network. I've read numerous articles regarding Docker networks and various flags, and have also read about possible iptables solutions.
Any guidance is much appreciated.
Thanks,
Simply do not declare any ports in docker-compose; they are automatically visible between containers.
I use an elasticsearch container in this way, and a separate kibana can connect to it by the service name declared in the yml (see the sketch below).
if somehow a port became open on my router, it would not be exposed
Using this procedure the ports are never visible outside the docker environment (i.e. outside == in your local network).
If your concern is that ports are published in your LAN when doing the procedure I told you, they are not.
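As a minimal sketch of that elasticsearch/kibana arrangement (image tags are placeholders): with no ports: section nothing is published on any host interface, yet kibana still reaches elasticsearch by service name.

services:
  elasticsearch:
    image: elasticsearch:7.17.0        # placeholder tag
    environment:
      - discovery.type=single-node
    # no "ports:" - reachable only by other containers on this compose network
  kibana:
    image: kibana:7.17.0               # placeholder tag
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200   # service name as hostname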
You are actually very close with
ports:
  - 127.0.0.1:8989:8989
as with this it is accessible locally on your server. Funnily enough, your bind-to-localhost trick is exactly what I was looking for for my own setup xD
From this point there are actually a couple of ways to set it up so that you can access it on your local network.
SSH Tunneling
The first one is the one I'm using in my own setup: SSH forwarding.
You can, if you haven't already, set up an .ssh/config file to forward localhost ports to your computer. Taking your example into account, the syntax is as follows:
Host some-hostname
    HostName 192.168.x.x
    User user-of-server
    LocalForward 8989 127.0.0.1:8989
some-hostname is a short name you can choose, user-of-server is the actual user you set up to log in with, and 192.168.x.x is the actual local IP address of your server; you can also include an IdentityFile /path/to/ssh/key. With this you can run ssh some-hostname to SSH into your server from any computer on your local network, and your server will be available at localhost:8989 on that specific computer.
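The same forward can also be tested as a one-off, without touching .ssh/config (same placeholders as above):

ssh -L 8989:127.0.0.1:8989 user-of-server@192.168.x.x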
Reverse Proxy
The second is a reverse proxy like nginx. This too can be run in a Docker container, and you could bind it to any port, say for example 6443, and mount its config file into the container with:
volumes:
  - 'config:/etc/nginx/conf.d'
ports:
  - 6443:443

volumes:
  config:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "./config"
Then in ./config/default.conf you could set up something like:
server {
    listen 443 ssl http2;
    server_name 192.168.x.x;

    ssl_certificate /etc/letsencrypt/signed_chain.crt;
    ssl_certificate_key /etc/letsencrypt/domain.key;
    include /etc/nginx/includes/ssl.conf;

    location / {
        ### force timeouts if one of backend is died ##
        ### such died, many backend, very timeouts ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;
        proxy_buffering off;

        # note: inside a container, 127.0.0.1 is the container itself - use the
        # host's LAN IP or host network mode if the app is bound on the host
        proxy_pass http://127.0.0.1:8989;
    }
}
Then it should be available on, and only on, 192.168.x.x:6443.
I'm receiving the error Authentication required after I log in to the Wildfly 13 Management Console.
If I type the user or password wrong, it asks again, but if I type them correctly it shows the page with the error message (so I assume the user and password are correct, but something else after that gives the error).
I'm using docker to run a nginx container and a wildfly container.
The nginx container listens externally on port 9991 and proxy passes the request to the wildfly container, but it shows the error described before.
It only happens with the Wildfly Console; every other proxied request, even requests proxied to a websocket or to Wildfly on port 8080, completes successfully.
The Wildfly container listens externally on port 9990, and I can access the console successfully on this port. If in Docker I map the port "9992:9990", I can still access the console successfully through port 9992.
So, it seems that this is not related to docker, but to the Wildfly Console itself. Probably some kind of authentication that is not happening successfully when using a reverse proxy in the middle.
I have a demo docker project at https://github.com/lucasbasquerotto/pod/tree/0.0.6; you can download the tag 0.0.6, which has everything set up to work with Wildfly 13 and nginx and to simulate this error.
git clone -b 0.0.6 --single-branch --depth 1 https://github.com/lucasbasquerotto/pod.git
cd pod
docker-compose up -d
Then, if you access the container directly in http://localhost:9990 with user monitor and password Monitor#70365 everything works.
But if you access http://localhost:9991 with the same credentials, through the nginx reverse proxy, you receive the error.
My nginx.conf file:
upstream docker-wildfly {
    server wildfly:9990;
}

location / {
    proxy_pass http://docker-wildfly;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
I've also tried with:
proxy_set_header X-Forwarded-Proto $scheme;
And also with the Authorization header (just the 2nd line and also with both):
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
And also defining the host header with the port (instead of just $host):
proxy_set_header Host $server_addr:$server_port;
I've tried the above configurations in isolation and combined. All to no avail.
Any suggestions?
Has anyone successfully accessed the Wildfly Console through a reverse proxy?
Update (2018-09-22)
It seems Wildfly uses digest authentication (instead of basic).
I see the header in the console like the following:
Authorization: Digest username="monitor", realm="ManagementRealm", nonce="AAAAAQAAAStPzpEGR3LxjJcd+HqIX2eJ+W8JuzRHejXPcGH++43AGWSVYTA=", uri="/console/index.html", algorithm=MD5, response="8d5b2b26adce452555d13598e77c0f63", opaque="00000000000000000000000000000000", qop=auth, nc=00000005, cnonce="fe0e31dd57f83948"
I don't see much documentation about using nginx to proxy pass requests with digest headers (but I think it should be transparent).
One question I saw here in SO is https://serverfault.com/questions/750213/http-digest-authentication-on-proxied-server, but there is no answer so far.
I saw that there is the unofficial nginx module https://www.nginx.com/resources/wiki/modules/auth_digest/, but in the GitHub repository (https://github.com/atomx/nginx-http-auth-digest) it says:
The ngx_http_auth_digest module supplements Nginx's built-in Basic Authentication module by providing support for RFC 2617 Digest Authentication. The module is currently functional but has only been tested and reviewed by its author. And given that this is security code, one set of eyes is almost certainly insufficient to guarantee that it's 100% correct. Until a few bug reports come in and some of the 'unknown unknowns' in the code are flushed out, consider this module an 'alpha' and treat it with the appropriate amount of skepticism.
Also, it doesn't seem right to me to hardcode the user and password in a file to be used by nginx (the authentication should be transparent to the reverse proxy in this case).
In any case, I tried it, and it correctly asks me to authenticate, even if the final destination does not have digest authentication. When trying to connect to the Wildfly site (not the console), it asks when connecting to nginx (before proxying the request), then forwards successfully to the destination; except in the case of the Wildfly console, it keeps asking me to authenticate forever.
So I think this is not the solution. The problem seems to be in what nginx is passing to the Wildfly console.
I had the same problem with the HAL management console v3.3 and v3.2.
I could not get nginx HTTPS working due to authentication errors, even though the page prompted for HTTP basic auth user and pass.
This was tested in standalone mode on the same server.
My setup was:
outside (https) -> nginx -> http://halServer:9990/
This resulted in working HTTPS, but with HAL authentication errors (seen in the browser's console); the webpage was blank.
At first access the webpage would ask for HTTP basic auth credentials normally, but then almost all HTTPS requests would return an authentication error.
I managed to make it work correctly by first enabling HTTPS on the HAL console with a self-signed certificate, and then configuring nginx to proxy pass to the HAL HTTPS listener.
The working setup is:
outside (https) -> nginx (https) -> https://halServer:9993/
Here is the nginx configuration:
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name halconsole.mywebsite.com;

    # SSL
    ssl_certificate /keys/hal_fullchain.pem;
    ssl_certificate_key /keys/hal_privkey.pem;
    ssl_trusted_certificate /keys/hal_chain.pem;

    # security
    include nginxconfig.io/security.conf;

    # logging
    access_log /var/log/nginx/halconsole.mywebsite.com.access.log;
    error_log /var/log/nginx/halconsole.mywebsite.com.error.log warn;

    # reverse proxy
    location / {
        # or use static ip, or nginx upstream
        proxy_pass https://halServer:9993;
        include nginxconfig.io/proxy.conf;
    }

    # additional config
    include nginxconfig.io/general.conf;
    include nginxconfig.io/letsencrypt.conf;
}

# subdomains redirect
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name *.halconsole.mywebsite.com;

    # SSL
    ssl_certificate /keys/hal_fullchain.pem;
    ssl_certificate_key /keys/hal_privkey.pem;
    ssl_trusted_certificate /keys/hal_chain.pem;

    return 301 https://halconsole.mywebsite.com$request_uri;
}
proxy.conf
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
# Proxy headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Forwarded $proxy_add_forwarded;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-By $server_addr;
# Proxy timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
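Keep in mind that $connection_upgrade and $proxy_add_forwarded are not nginx built-ins: nginxconfig.io defines them with map blocks in the generated nginx.conf. The websocket one looks roughly like this (the Forwarded-header map is longer; check the generator's output rather than copying this):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ""      close;
}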
The easiest way to enable the HTTPS console is by using the console itself:
generate a Java JKS keystore using either the command line keytool or a GUI program
I like GUIs, so I used KeyStore Explorer https://github.com/kaikramer/keystore-explorer
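If you'd rather skip the GUI, something along these lines with keytool should produce a suitable JKS keystore (the alias, dname and password are placeholders matching the steps below):

# sketch - alias, dname and password are placeholders
keytool -genkeypair -alias management -keyalg RSA -keysize 2048 \
    -validity 3650 -storetype JKS -keystore managementKS \
    -dname "CN=halServer" -storepass keystore-password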
copy the keystore file onto the halServer server where it has read access (no need to keep it secret). I placed mine inside the wildfly data dir in a "keystore" directory.
# your file paths might differ, don't copy paste
cp /home/someUser/sftp_uploads/managementKS /opt/wildfly/standalone/data/keystore/managementKS
set permissions
# your file paths might differ, don't copy paste
chown --recursive -H wildfly:wildfly /opt/wildfly/standalone/data/keystore
(use VPN) log in to the cleartext console http://halServer:9990/
add keystore: navigate:
configuration -> subsystems -> security (elytron) -> other settings (click view button)
stores -> keystore -> add
...
Name = managementKS
Type = JKS
Path = keystore/managementKS
Relative to = jboss.server.data.dir
Credential Reference Clear Text = keystore-password
click Add
result in standalone.xml
<key-store name="managementKS">
    <credential-reference clear-text="keystore-password"/>
    <implementation type="JKS"/>
    <file path="keystore/managementKS" relative-to="jboss.server.data.dir"/>
</key-store>
add key manager: navigate:
ssl -> key manager -> add
...
Name = managementKM
Credential Reference Clear Text = keystore-password
Key Store = managementKS
result in standalone.xml
<key-manager name="managementKM" key-store="managementKS">
    <credential-reference clear-text="keystore-password"/>
</key-manager>
add ssl context: navigate:
ssl -> server ssl context -> add
...
Name = managementSSC
Key Manager = managementKM
...
edit the added context: Protocols = TLSv1.2
save
result in standalone.xml
<server-ssl-contexts>
    <server-ssl-context name="managementSSC" protocols="TLSv1.2" key-manager="managementKM"/>
</server-ssl-contexts>
go back
runtime -> server (click view button)
http management interface (edit)
set secure socket binding = management-https
set ssl context = managementSSC
save
restart wildfly
systemctl restart wildfly
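If you prefer scripting to clicking, the same changes can also be made with jboss-cli; this is a rough, untested sketch of the equivalent commands, reusing the names, paths and password from the steps above:

/opt/wildfly/bin/jboss-cli.sh --connect
/subsystem=elytron/key-store=managementKS:add(path=keystore/managementKS, relative-to=jboss.server.data.dir, type=JKS, credential-reference={clear-text=keystore-password})
/subsystem=elytron/key-manager=managementKM:add(key-store=managementKS, credential-reference={clear-text=keystore-password})
/subsystem=elytron/server-ssl-context=managementSSC:add(key-manager=managementKM, protocols=["TLSv1.2"])
/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value=managementSSC)
/core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)
reload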
My company tries very hard to keep SSO for all third-party services. I'd like to make Kibana work with our Google Apps accounts. Is that possible? How?
As of Elasticsearch/Kibana 5.0, the Shield plugin (security plugin) is embedded in X-Pack (a paid service). So from Kibana 5.0 you can:
use X-Pack
use Search Guard
Both these plugins can be used with basic authentication, so you can apply an OAuth2 proxy like this one. One additional proxy would forward the request with the right Authorization header containing base64(username:password).
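To illustrate that forwarding, such a proxy could inject the header with a plain nginx directive like this (a sketch; the base64 value shown is just base64("username:password"), and kibana:5601 is an assumed upstream):

location / {
    proxy_set_header Authorization "Basic dXNlcm5hbWU6cGFzc3dvcmQ=";
    proxy_pass http://kibana:5601;
}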
The procedure is depicted in this article for X-Pack.
I've set up a docker-compose configuration in this repo for using either Search Guard or X-Pack with Kibana/Elasticsearch 6.1.1:
docker-compose for searchguard
docker-compose for x-pack
Kibana leaves it up to you to implement security. I believe that Elastic's Shield product has support for security-as-a-plugin, but I haven't navigated the subscription model or looked much into it.
The way that I handle this is by using an OAuth2 proxy application and nginx to reverse proxy to Kibana.
server {
    listen 80;
    server_name kibana.example.org;
    # redirect http->https while we're at it
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    # listen for traffic destined for kibana.example.org:443
    listen 443 default ssl;
    server_name kibana.example.org;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/cert.key.pem;
    add_header Strict-Transport-Security max-age=1209600;

    # for https://kibana.example.org/, send to our oauth2 proxy app
    location / {
        # the oauth2 proxy application i use listens on port :4180
        proxy_pass http://127.0.0.1:4180;
        # preserve our host and ip from the request in case we want to
        # dispatch the request to a named nginx directive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 15;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }
}
The request comes in and triggers an nginx directive that sends the request to the oauth application, which in turn handles the SSO resource and redirects to a Kibana instance listening on the server's localhost. It's secure because connections cannot be made directly to Kibana.
Use the oauth2-proxy application and Kibana with anonymous authentication configured, as in the config below:
xpack.security.authc.providers:
  anonymous.anonymous1:
    order: 0
    credentials:
      username: "username"
      password: "password"
The user whose credentials are specified in the config can be created either via the Kibana UI or via the Elasticsearch create or update users API, as sketched below.
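A sketch of the API variant, assuming a cluster on localhost and that the built-in kibana_admin role fits your needs (all credentials shown are placeholders):

curl -X POST "http://localhost:9200/_security/user/username" \
    -u elastic:changeme \
    -H "Content-Type: application/json" \
    -d '{"password": "password", "roles": ["kibana_admin"]}'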
Note! The Kibana instance should not be publicly available; otherwise anybody will be able to access the Kibana UI.