I run ONLYOFFICE with Docker (docker run -i -t -d -p 80:80 onlyoffice/documentserver) behind an nginx load balancer which provides SSL encryption.
My question is: how can I add authentication without touching the load balancer?
The problem is that everybody can use the server.
We would recommend enabling JWT on the Document Server.
It is supported by the Nextcloud (NC) connector.
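For example, a minimal sketch of enabling it when starting the container, assuming the image's JWT_ENABLED / JWT_SECRET environment variables and an example secret:

docker run -i -t -d -p 80:80 \
  -e JWT_ENABLED=true \
  -e JWT_SECRET=your-shared-secret \
  onlyoffice/documentserver

The same secret would then be entered in the connector's Document Server settings so that requests can be validated.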
HTTP basic auth works, tested with the Nextcloud integration:
root@e54c225ab8aa:/# cat /etc/nginx/conf.d/onlyoffice-documentserver.conf
include /etc/nginx/includes/onlyoffice-http.conf;
server {
  listen 0.0.0.0:80;
  listen [::]:80 default_server;
  server_tokens off;
  include /etc/nginx/includes/onlyoffice-documentserver-*.conf;
}
root@e54c225ab8aa:/#
Insert, e.g.:
auth_basic "Administrator's Area";
auth_basic_user_file /etc/nginx/.htpasswd;
and restart nginx: /etc/init.d/nginx restart
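One way to create the password file referenced above is the htpasswd tool from apache2-utils (the user name here is only an example):

htpasswd -c /etc/nginx/.htpasswd admin   # -c creates the file; you will be prompted for a password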
Related
I'm running a PHP+nginx API inside a Docker container. It is available on port 8080. I'm trying to add an nginx reverse proxy to serve the API at api.versite.online and the frontend project at versite.online.
I installed nginx on the server, added the /etc/nginx/sites-available/api.versite.online config (and a symlink to the sites-enabled directory), tested the config with nginx -t, and reloaded nginx with systemctl reload nginx, but it had no effect. api.versite.online:8080 and versite.online:8080 send requests to the Docker container; it looks like the top-level nginx is being ignored.
Nginx access log is empty.
/etc/nginx/sites-available/api.versite.online config
server {
  listen 80;
  server_name api.versite.online;
  access_log /var/log/nginx/api.versite.access.log;

  location / {
    proxy_pass http://localhost:8080;
  }
}
It seems that I had forgotten to add a firewall rule with sudo ufw allow 'Nginx HTTP'.
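For anyone hitting the same thing, a quick way to check and apply the rule (the 'Nginx HTTP' profile is installed by the nginx package on Ubuntu):

sudo ufw status              # check whether port 80 is currently allowed
sudo ufw allow 'Nginx HTTP'  # open port 80
sudo ufw status              # verify the new rule is listed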
I am trying to use NGINX as a proxy in front of a Next.js frontend and a FastAPI backend, each running in their own container.
I got everything working fine with HTTP, but having some issues getting things to work with HTTPS.
All containers start running without any issues, and things seem to be working, but when I try to communicate with the proxy, I get the following errors:
From host:
lafton@lafton-platform:~$ curl localhost -L
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:443
From inside the NGINX container using localhost:
root@6016e75698cf:/# curl localhost -L
curl: (60) SSL: no alternative certificate subject name matches target host name 'localhost'
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
From inside NGINX container using lafton.io:
root@6016e75698cf:/# curl https://lafton.io
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to lafton.io:443
I tried to install NGINX locally instead of inside Docker and it works as expected. I tried to enable the SSL configuration which is commented out in the default configuration, and it worked perfectly with SSL locally.
I then tried to use the default SSL configuration with my setup, but it does not work.
This is the NGINX config I am running inside /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
load_module /etc/nginx/modules/ngx_http_js_module.so;
events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name lafton.io;

    location / {
      return 301 https://$host$request_uri;
    }
  }

  server {
    listen 443 ssl;
    server_name lafton.io;

    ssl_certificate /etc/certs/fullchain1.pem;
    ssl_certificate_key /etc/certs/privkey1.pem;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK;
    ssl_prefer_server_ciphers on;

    location / {
      proxy_pass http://lafton-website:3000;
    }

    location /api/albums {
      proxy_pass http://lafton-albums:8000;
    }
  }
}
The port 80 block is just a redirect to HTTPS; the behavior is exactly the same without it.
The ciphers are from Mozilla's recommendations. I changed them from the default because some of my troubleshooting seemed to indicate that no ciphers matched.
I am really lost here and not sure where to look for further troubleshooting. Any help would be really appreciated!
Timo Stark's comment solved the issue.
It didn't work inside the container because the certificate's CN was lafton.io, so I had to use the -k flag in the curl command.
Once that worked, I spotted a typo in my docker-compose file: the container had exposed port 433, not 443.
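Since the compose file isn't shown, a hypothetical sketch of the corrected mapping would be to make both sides read 443:

ports:
  - "443:443"   # previously mistyped with 433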
I am using the dockerized Nextcloud as shown here: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm
I set this up with port 80 mapped to 12345 and port 443 mapped to 12346. When I go to https://mycloud.example.com:12346, I get the self-signed certificate prompt, but otherwise everything is fine and I see the Nextcloud web UI. But when I go to http://mycloud.example.com:12345, nginx (the proxy container) returns the error "503 Service Temporarily Unavailable". The error also shows up in the proxy's logs.
How can I diagnose the issue? Why is HTTPS working but not HTTP?
Can you provide the docker command you use to start Nextcloud, or your docker-compose file?
Diagnosis is as usual with Docker: get the ID of the currently running container
docker ps
Then check the logs
docker logs [id or name of your container]
docker-compose logs [name of your service]
Connect to the container
docker exec -ti [id or name of your container] [bash or ash if alpine based container]
There, read the nginx conf files involved. In your case I'd check the redirection from HTTP to HTTPS; most likely it's something like the block below, with no specific port given for HTTPS, hence port 443, hence not working (see the sketch after this config for a possible fix).
server {
  listen 80;
  server_name my.domain.com;
  return 301 https://$server_name$request_uri;   # <======== no port = 443
}

server {
  listen 443 ssl;
  server_name my.domain.com;
  # add Strict-Transport-Security to prevent man in the middle attacks
  add_header Strict-Transport-Security "max-age=31536000" always;
  [....]
}
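If that is the cause here, a possible fix (only a sketch, using the 12346 HTTPS mapping from the question) is to include the external port in the redirect:

server {
  listen 80;
  server_name my.domain.com;
  return 301 https://$server_name:12346$request_uri;
}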
I'm using docker-compose for a Rails app to have an app container and a db container. In order to test some app functionality I need SSL... so I'm going with Let's Encrypt rather than self-signed certificates.
The app uses nginx, and the server is Ubuntu 14.04 LTS, with the Phusion Passenger Docker image (lightweight Debian) as the base image.
Normally with LetsEncrypt, I run the usual ./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com
My server runs nginx (proxy passing the app to the container), so I've hopped into the container to run the certbot command without issue.
However, when I try to go to https://test-app.example.com it doesn't work. I can't figure out why.
Error on site (Chrome):
This site can’t be reached
The connection was reset.
Curl gives a bit better error:
curl: (35) Unknown SSL protocol error in connection to test-app.example.com
Server nginx app.conf
upstream test_app { server localhost:4200; }

server {
  listen 80;
  listen 443 default ssl;
  server_name test-app.example.com;

  # for SSL
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_dhparam /etc/ssl/dhparam.pem;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 'ECDHE-RSA-blahblahblah-SHA';

  location / {
    proxy_set_header Host $http_host;
    proxy_pass http://test_app;
  }
}
Container's nginx app.conf
server {
  server_name _;
  root /home/app/test/public;

  ssl_certificate /etc/letsencrypt/live/test-app.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/test-app.example.com/privkey.pem;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_dhparam /etc/ssl/dhparam.pem;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 'ECDHE-RSA-blahblah-SHA';

  passenger_enabled on;
  passenger_user app;
  passenger_ruby /usr/bin/ruby2.3;
  passenger_app_env staging;

  location /app_test/assets/ {
    passenger_enabled off;
    alias /home/app/test/public/assets/;
    gzip_static on;
    expires +7d;
    add_header Cache-Control public;
    break;
  }
}
In my Dockerfile, I have:
# expose port
EXPOSE 80
EXPOSE 443
In my docker-compose.yml file I have:
test_app_app:
  build: "."
  env_file: config/test_app-application.env
  links:
    - test_app_db:postgres
  environment:
    app_url: https://test-app.example.com
  ports:
    - 4200:80
And with docker ps it shows up as:
Up About an hour 443/tcp, 0.0.0.0:4200->80/tcp
I now suspect it's because the server's nginx (the "front-facing" server) doesn't have the certs, but I can't run the Let's Encrypt command without an app location.
I tried running the manual Let's Encrypt command on the server, but presumably because port 80 is already in use I get this: socket.error: [Errno 98] Address already in use. Did I miss something here?
What do I do?
Fun one.
I would tend to agree that it's likely due to not getting the certs.
First and foremost, read my disclaimer at the end. I would try to use DNS authentication; IMHO it's a better method for something like Docker. A few ideas come to mind. The easiest, and the one that answers your question directly, would be a Docker entrypoint script that gets the certs first and then starts nginx:
#!/bin/bash
set -e

# get the cert
./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com

# start nginx in the foreground so the container keeps running
exec nginx -g 'daemon off;'
This is "okay" solution, IMHO, but is not really "automated" (which is part of the lets encrypt goals). It doesn't really address renewing the certificate down the road. If that's not a concern of yours, then there you go.
You could get really involved and create an entrypoint script that detects when the cert is about to expire, reruns the command to renew it, and then reloads nginx.
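A rough, hedged sketch of that idea, reusing the placeholder paths and domain from the question (certbot-auto renew only replaces certificates that are close to expiry):

#!/bin/bash
set -e

# obtain the certificate on first start
./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com

# background loop: attempt renewal twice a day, reload nginx after each successful run
while true; do
  sleep 12h
  ./certbot-auto renew --quiet && nginx -s reload
done &

# keep nginx in the foreground so the container stays alive
exec nginx -g 'daemon off;'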
A much more complicated (but also more scalable) solution would be to create a Docker image whose sole purpose in life is to handle Let's Encrypt certificates and renewals, and then provide a way of distributing those certificates to other containers, e.g. NFS (or shared Docker volumes if you are really careful).
For anyone reading this in the future: this was written before compose hooks were an available feature, which would be by far the best way of handling something like this.
Please read this disclaimer:
Docker is not really the best solution for this, IMHO. Docker images should be static data. Because Let's Encrypt certificates expire after 3 months, your container should have a shelf life of three months or less (or, like I said above, account for renewing). "That's fine!" I hear you say. But that would also mean you are constantly getting a new certificate issued each time you start the container (with the entrypoint method). At the very least, that means the previous certificate gets revoked every time. I don't know what the ramifications are of doing this with Let's Encrypt. They may only give you so many revokes before they think something fishy is going on.
What I tend to do most often is actually use configuration management and use nginx as the "front" on the host system. Or rely on some other mechanism to handle SSL termination. But that doesn't answer your question of how to get Lets Encrypt to work with docker. :-)
I hope that helps or points you in a better direction. :-)
I knew I was missing one small thing. As stated in the question, since the nginx on the server is the 'front-facing' nginx and the container's nginx serves the app specifically, the server's nginx needed to know about the SSL certs.
The answer was super simple. Copy the certs over! (Kudos to my client's ops lead)
I cat'ed fullchain.pem and privkey.pem inside the Docker container and created the corresponding files in /etc/ssl on the server.
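Roughly like this (the container name is hypothetical; use whatever docker ps shows for the app container):

docker exec test_app_app cat /etc/letsencrypt/live/test-app.example.com/fullchain.pem > /etc/ssl/test-app-fullchain.pem
docker exec test_app_app cat /etc/letsencrypt/live/test-app.example.com/privkey.pem > /etc/ssl/test-app-privkey.pem
chmod 600 /etc/ssl/test-app-privkey.pem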
On the server's /etc/nginx/sites-enabled/app.conf I added:
ssl_certificate /etc/ssl/test-app-fullchain.pem;
ssl_certificate_key /etc/ssl/test-app-privkey.pem;
Checked configuration and restarted nginx. Boom! Worked like a charm. :)
I'm hosting my own docker-registry in a Docker container. It's fronted by nginx running in a separate container to add basic auth. Checking the _ping routes, I can see that nginx is routing appropriately. When calling docker login from boot2docker (on Mac OS X) I get this error:
FATA[0003] Error response from daemon: Invalid registry endpoint https://www.example.com:8080/v1/: Get https://www.example.com:8080/v1/_ping: x509: certificate signed by unknown authority. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry www.example.com:8080 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/www.example.com:8080/ca.crt
Which is odd, because it's a valid CA-signed SSL cert. I've tried adding --insecure-registry in EXTRA_ARGS as per these instructions: https://github.com/boot2docker/boot2docker#insecure-registry but the 'profile' file doesn't exist initially. If I create it and add
EXTRA_ARGS="--insecure-registry www.example.com:8080"
I see no improvement. I wanted to isolate the problem and so tried docker login from an Ubuntu VM (not boot2docker). Now I get a different error:
Error response from daemon:
The docker registry is run directly from the public hub, e.g.
docker run -d -p 5000:5000 registry
(Note that nginx routes from 8080 to 5000). Any help and/or resources to help debug this would be much appreciated.
UPDATE
I was looking for a guide to help comprehensively solve this problem. Specifically:
Create a private registry
Secure the registry with basic Auth
Use the registry from boot2docker
I have created the registry and tested it locally; it works. I have secured the registry with nginx, adding basic auth.
The trouble is now actually using the registry from two types of client:
1) Non-boot2docker client.
One of the answers below helped with this. I added the --insecure-registry flag to the options in /etc/default/docker and now I can talk to my remote docker registry.
However, this isn't compatible with auth, as docker login gets an error:
2015/01/15 21:33:57 HTTP code 401, Docker will not send auth headers over HTTP.
So, if I want to use auth I'll need to use HTTPS. I already have this server serving over HTTPS, but that doesn't work if I set --insecure-registry. There appears to be a certificate trust issue, which I'm confident I can solve on non-boot2docker, but...
2) For the boot2docker client, I can't get --insecure-registry to work or the certificates to be trusted.
UPDATE 2
Following this Stack Exchange question I managed to add the CA to my Ubuntu VM, and I can now use the registry from the non-boot2docker client. However, there is still a lot of odd behavior.
Even though my current user is a member of the docker group (so I shouldn't have to use sudo), I now have to use sudo or I get the following error when trying to log in to or pull from my private registry:
user@ubuntu:~$ docker login example.com:8080
WARNING: open /home/parallels/.dockercfg: permission denied
parallels@ubuntu:~$ docker pull example.com:8080/hw:1
WARNING: open /home/parallels/.dockercfg: permission denied
And when running containers pulled from my private registry for the first time, I have to specify them by image ID - not their name.
Edit the Docker defaults file:
sudo vim /etc/default/docker
Add the DOCKER_OPTS line:
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=www.example.com:8080"
Restart the Docker service:
sudo service docker restart
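To confirm the daemon actually picked up the flag, one quick sanity check (not strictly required) is to look at the daemon's command line:

ps aux | grep [d]ocker   # the output should now include --insecure-registry=www.example.com:8080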
Run the following command:
boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry <YOUR INSECURE HOST>\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"
Docker versions > 1.3.1 communicate over HTTPS by default when connecting to a Docker registry.
If you are using nginx to proxy_pass to port 5000, where the Docker registry is listening, you will need to terminate the Docker client's SSL connection at the webserver/LB (nginx in this case). To verify that nginx terminates the SSL connection properly, use cURL against https://www.example.com:8081/something, where 8081 is another port set up for testing the SSL cert.
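For instance (a hedged example; /v1/_ping is the path the Docker client probes, as in the error earlier in the question):

curl -v https://www.example.com:8081/v1/_ping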
If you don't care whether your Docker client connects to the registry over HTTP rather than HTTPS, add
OPTIONS="--insecure-registry www.example.com:8080"
in /etc/sysconfig/docker (or the equivalent in other distros) and restart the docker service.
Hope it helps.
As of Docker version 1.3.1, if your registry doesn't support HTTPS, you must add it as an insecure registry. For boot2docker, this is a bit more complicated than usual. See: https://github.com/boot2docker/boot2docker#insecure-registry
The relevant commands are:
$ boot2docker init
$ boot2docker up
$ boot2docker ssh
$ echo 'EXTRA_ARGS="--insecure-registry <YOUR INSECURE HOST>"' | sudo tee -a /var/lib/boot2docker/profile
$ sudo /etc/init.d/docker restart
If you want to add SSL certificates to the boot2docker instance, it's going to be something similar (boot2docker ssh followed by sudo).
For Ubuntu, modify the file /etc/default/docker:
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=10.27.19.230:5000"
For RHEL, modify the file /etc/sysconfig/docker:
other_args="--insecure-registry 10.27.19.230:5000"
Register an SSL certificate from https://letsencrypt.org/. If you need more instructions, refer to this link.
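A hedged sketch of one way to obtain the files used below (assuming certbot is installed and port 80 is free during issuance; the dhparam file is generated separately, since Let's Encrypt does not provide it):

certbot certonly --standalone -d docker.mydomain.com
openssl dhparam -out /etc/nginx/conf.d/dhparam.pem 2048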
Enable SSL for nginx. Pay attention to the SSL part in the code below: after registering the certificate you have fullchain.pem, privkey.pem, and dhparam.pem; use them in nginx to enable SSL.
server {
  listen 443;
  server_name docker.mydomain.com;

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/fullchain.pem;
  ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/nginx/conf.d/dhparam.pem;
  ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_stapling on;
  ssl_stapling_verify on;
  add_header Strict-Transport-Security max-age=15768000;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    proxy_pass http://docker-registry;
    proxy_set_header Host $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr;   # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}
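Note that the proxy_pass above points at an upstream named docker-registry that isn't shown in the snippet; a minimal sketch of it, assuming the registry container is published on port 5000 as earlier in the question, would be:

upstream docker-registry {
  server localhost:5000;
}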
This resolved my problem; I hope it helps you.
Try running the daemon with the args:
docker -d --insecure-registry="www.example.com:8080"
instead of setting EXTRA_ARGS.