I have configured Docker Registry with htpasswd authentication as a Docker service. I use the port mapping 443:5000 and it works perfectly: "docker login mydomain:443" works, and "docker pull mydomain:443/myimage" works.
The container service looks like this:
myhub:
  image: myhub
  ports:
    - "443:5000"
  environment:
    - REGISTRY_HTTP_ADDR=0.0.0.0:5000
    - REGISTRY_HTTP_TLS_KEY=/certs/efimeridopolis.privkey.pem
    - REGISTRY_HTTP_TLS_CERTIFICATE=/certs/efimeridopolis.fullchain.pem
    - REGISTRY_AUTH=htpasswd
    - REGISTRY_AUTH_HTPASSWD_REALM='Registry Realm'
    - REGISTRY_AUTH_HTPASSWD_PATH=/auth/registry.htpasswd
  volumes:
    - /mnt/volume1/hub:/var/lib/registry
  networks:
    - myoverlay
Now, I try to put it behind HAProxy. SSL is terminated at HAProxy, so the service looks like:
myhub:
  image: myhub
  environment:
    - REGISTRY_HTTP_ADDR=0.0.0.0:5000
    - REGISTRY_AUTH=htpasswd
    - REGISTRY_AUTH_HTPASSWD_REALM='Registry Realm'
    - REGISTRY_AUTH_HTPASSWD_PATH=/auth/registry.htpasswd
  volumes:
    - /mnt/volume1/hub:/var/lib/registry
  networks:
    - myoverlay
and the relevant part of HAProxy configuration is:
frontend fe_443
    mode http
    bind *:443 ssl crt /etc/ssl/private/mydomain.pem
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    acl host_registry hdr(host) -i hub.mydomain
    use_backend be_registry_443 if host_registry

backend be_registry_443
    mode http
    option forwardfor
    server hub1 myhub:5000 check
Something seems to be going wrong here. Accessing
https://hub.mydomain/v2/_catalog
through a browser works: I am asked for a username/password and then I get the list of repositories. But when I try to use the console to run:
$ docker pull hub.mydomain:443/v2/myhaproxy
it gives me:
Using default tag: latest
Error response from daemon: received unexpected HTTP status: 503 Service Unavailable
The same happens when I try:
$ docker login mydomain:443
I am asked for a username and password, but then I get the same 503 message.
Since the browser can list the repositories, I know the registry is online and accessible.
What is wrong?
The HTTPS frontend (port 443) should be in tcp mode and should match on req.ssl_sni instead of hdr(host).
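A minimal sketch of what that would look like, assuming TLS is passed straight through to the registry (so the registry keeps the certificate configuration from the first, working setup); the inspect-delay value is arbitrary:

frontend fe_443
    mode tcp
    bind *:443
    # wait for the TLS ClientHello so the SNI value is available for matching
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl host_registry req.ssl_sni -i hub.mydomain
    use_backend be_registry_443 if host_registry

backend be_registry_443
    mode tcp
    server hub1 myhub:5000 check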
I have a server running docker-compose. The Compose project has 2 services: nginx (as a reverse proxy) and back (an API that handles 2 requests). In addition, there is a database that is not located on the server but hosted separately (database as a service).
Requests processed by back:
get('/api') - the back service simply replies "API is running" to it
get('/db') - the back service sends a simple query to an external database ('SELECT random() as random, current_database() as db')
Request 1 works fine; on request 2 the back service crashes, nginx keeps running, and a 502 Bad Gateway error appears in the console.
The nginx service logs show the error: upstream prematurely closed connection while reading response header from upstream.
The back service logs show: connection terminated due to connection timeout.
These are both rather vague errors, and I don't know how else to approach them, given that the same code, run outside a container, without Nginx and against the same database, works as it should.
What I have tried:
increasing the number of cores and RAM (now 2 cores and 4 GB of RAM);
adding/removing/changing the proxy_read_timeout, proxy_send_timeout and proxy_connect_timeout parameters (see the sketch after this list);
testing the www.test.com/db request via Postman and curl (it fails with the same error);
running the code on my local machine without a container and without Compose, connecting to the same database with the same pool and the same IP (everything is OK, both requests work and return what they should);
changing the worker_processes parameter (tested with a value of 1 and auto);
adding/removing the proxy_set_header Host $http_host directive, and replacing $http_host with "www.test.com".
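For reference, a minimal sketch of where those timeout directives would sit in the nginx config below (the values are hypothetical, and in this case they did not fix the problem):

location / {
    proxy_pass http://back-stream;
    # hypothetical values tried while debugging
    proxy_connect_timeout 60s;
    proxy_send_timeout    60s;
    proxy_read_timeout    60s;
}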
Question:
What else can I try to fix the error and make the db request work?
My nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream back-stream {
        server back:8080;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name test.com www.test.com;

        location / {
            root /usr/share/nginx/html;
            resolver 121.0.0.11;
            proxy_pass http://back-stream;
        }
    }
}
My docker-compose.yml:
version: '3.9'
services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - network
  back:
    image: "mycustomimage"
    container_name: back
    restart: unless-stopped
    ports:
      - '81:8080'
    networks:
      - network
networks:
  network:
    driver: bridge
I can upload other files if needed. Given that the code misbehaves only when it runs in the container, the problem is presumably in the container setup.
I will be grateful for any help.
Code of the back: here
The reason for the error turned out to be this: I had forgotten to add my server's IP to the list of allowed addresses in the database cluster.
After spending hours trying to work out why I cannot access my web UI, I turn to you.
I set up FreeIPA on Docker using docker-compose. I opened some ports to gain remote access via host-ip:port from my own computer. FreeIPA is supposed to run on my server (let's say 192.168.1.2) and the web UI should be accessible from any other local computer on port 80/443 (192.168.1.4:80 or 192.168.1.4:443).
When I run my .yaml file, FreeIPA gets set up and finishes with a "the ipa-server-install command was successful" message.
I thought it could come from my tight iptables rules and tried setting all policies to ACCEPT to debug. That didn't help.
I'm a bit lost as to how I could debug this or find out how to fix it.
OS: Ubuntu 20.04.3
Docker version: 20.10.12, build e91ed57
freeipa image: freeipa/freeipa:centos-8-stream
Docker-compose version: 1.29.2, build 5becea4c
My .yaml file:
version: "3.8"
services:
freeipa:
image: freeipa/freeipa-server:centos-8-stream
hostname: sanctuary
domainname: serv.sanctuary.local
container_name: freeipa-dev
ports:
- 80:80
- 443:443
- 389:389
- 636:636
- 88:88
- 464:464
- 88:88/udp
- 464:464/udp
- 123:123/udp
dns:
- 10.64.0.1
- 1.1.1.1
- 1.0.0.1
restart: unless-stopped
tty: true
stdin_open: true
environment:
IPA_SERVER_HOSTNAME: serv.sanctuary.local
IPA_SERVER_IP: 192.168.1.100
TZ: "Europe/Paris"
command:
- -U
- --domain=sanctuary.local
- --realm=sanctuary.local
- --admin-password=pass
- --http-pin=pass
- --dirsrv-pin=pass
- --ds-password=pass
- --no-dnssec-validation
- --no-host-dns
- --setup-dns
- --auto-forwarders
- --allow-zone-overlap
- --unattended
cap_add:
- SYS_TIME
- NET_ADMIN
restart: unless-stopped
volumes:
- /etc/localtime:/etc/localtime:ro
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- ./data:/data
- ./logs:/var/logs
sysctls:
- net.ipv6.conf.all.disable_ipv6=0
- net.ipv6.conf.lo.disable_ipv6=0
security_opt:
- "seccomp:unconfined"
labels:
- dev
I tried tinkering with the deployment file (adding or removing configuration found on the internet, such as adding/removing IPA_SERVER_IP, or adding/removing an external bridge network).
Thank you very much for any help =)
Alright, for those who might have the same problem, I will explain everything I did to debug this.
I relied extensively on the answers found here: https://floblanc.wordpress.com/2017/09/11/troubleshooting-freeipa-pki-tomcatd-fails-to-start/
First, I checked the status of each service with ipactl status. Depending on the problem you might get different output, but mine was like this:
Directory Service: RUNNING
krb5kdc Service: RUNNING
kadmin Service: RUNNING
named Service: RUNNING
httpd Service: RUNNING
ipa-custodia Service: RUNNING
pki-tomcatd Service: STOPPED
ipa-otpd Service: RUNNING
ipa-dnskeysyncd Service: RUNNING
ipa: INFO: The ipactl command was successful
I therefore checked the Tomcat logs in /var/log/pki/pki-tomcat/ca/debug-xxxx and realised I had a "connection refused" error related to the certificates.
Here, I first checked that my certificate was present in /etc/pki/pki-tomcat/alias using sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca'.
## output :
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4 (0x4)
...
...
Then I made sure that the private key could be read using the password found in /var/lib/pki/pki-tomcat/conf/password.conf (the line with the internal=… tag):
grep internal /var/lib/pki/pki-tomcat/conf/password.conf | cut -d= -f2 > /tmp/pwdfile.txt
certutil -K -d /etc/pki/pki-tomcat/alias -f /tmp/pwdfile.txt -n 'subsystemCert cert-pki-ca'
I still saw nothing strange, so at this point I assumed that:
pki-tomcat is able to access the certificate and the private key;
the issue is therefore likely on the LDAP server side.
I tried to read the user entry in LDAP to compare it to the certificate, using ldapsearch -LLL -D 'cn=directory manager' -W -b uid=pkidbuser,ou=people,o=ipaca userCertificate description seeAlso, but got an error after entering the password. Because my certs were OK and the LDAP service was running, I assumed something was off with the certificate dates.
Indeed, during the install FreeIPA sets up the certificates using your current system date as a base. But it also installs chrony for server time synchronization. After a reboot, my chrony conf was wrong and set my host date two years ahead.
I couldn't figure out the problem with the chrony conf, so I stopped the service and set the date manually using timedatectl set-time "yyyy-mm-dd hh:mm:ss".
I restarted the FreeIPA services and my pki-tomcat service was working again.
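A rough sketch of those steps as shell commands (the date shown is just a hypothetical example):

systemctl stop chronyd                      # stop the misconfigured time sync service
timedatectl set-ntp false                   # allow setting the clock manually
timedatectl set-time "2022-01-10 09:00:00"  # hypothetical correct date/time
ipactl restart                              # restart the FreeIPA services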
After that, I set the FreeIPA IP as the DNS server in my router. I restarted the services and the computers on the local network so the DNS config was refreshed. After that, the web UI was accessible!
I currently have a Traefik instance that's being run using the following. It works fine forwarding HTTP connections to the appropriate backends.
services_lb:
  image: traefik:v2.2
  cmd: |
    --entrypoints.web.address=:80
    --entrypoints.websecure.address=:443
    --entrypoints.web.http.redirections.entryPoint.to=websecure
    --entrypoints.web.http.redirections.entryPoint.scheme=https
    --entrypoints.web.http.redirections.entrypoint.permanent=true
    --entrypoints.matrixfederation.address=:8448
    --entrypoints.prosodyc2s.address=:5222
    --entrypoints.prosodys2s.address=:5269
    --providers.docker
    --providers.docker.constraints=Label(`lb.net`,`services`)
    --providers.docker.network=am-services
    --certificatesresolvers.lec.acme.email=notify#battlepenguin.com
    --certificatesresolvers.lec.acme.storage=/letsencrypt/acme.json
    --certificatesresolvers.lec.acme.tlschallenge=true
    --entryPoints.web.forwardedHeaders.trustedIPs=172.50.0.1/24
  ports:
    - 80
    - 443
    # Matrix
    - 8448
    # XMPP
    - 5222
    - 5269
My web and Matrix federation connections work fine as they're all HTTP. But for Prosody (XMPP) I need to forward 5222 and 5269 directly without any HTTP routing. I configured the container like so:
xmpp:
  image: prosody/prosody:0.11
  network:
    - services
    - database
  labels:
    lb.net: services
    traefik.tcp.services.prosodyc2s.loadbalancer.server.port: "5222"
    traefik.tcp.services.prosodys2s.loadbalancer.server.port: "5269"
    traefik.http.routers.am-app-xmpp.entrypoints: "websecure"
    traefik.http.routers.am-app-xmpp.rule: "Host(`xmpp.example.com`)"
    traefik.http.routers.am-app-xmpp.tls.certresolver: "lec"
    traefik.http.services.am-app-xmpp.loadbalancer.server.port: "5280"
  volumes:
    - prosody-config:/etc/prosody:rw
    - services_certs:/certs:ro
    - prosody-logs:/var/log/prosody:rw
    - prosody-modules:/usr/lib/prosody-modules:rw
With the tcp services defined, I still can't get Traefik to forward the raw TCP connections to this container. I've tried removing the --entrypoints from the Traefik instance, and of course Traefik then stopped listening on those ports. I assumed the traefik.tcp.services definition would cause that entrypoint to switch to a TCP passthrough mode, but that isn't the case. I couldn't see anything in the Traefik documentation about putting the entrypoint itself into TCP mode instead of HTTP mode.
How do I pass the raw TCP connection from Traefik to this particular container using labels on the container and CLI options for Traefik?
I figured it out. You can't use any standard Traefik TLS offloading because of differences in how Traefik and Prosody handle TLS. I had to disable TLS entirely and use the special HostSNI(`*`) rule below to allow straight pass-throughs. I was also missing the routers that connect the Traefik entrypoints to the TCP services.
labels:
  lb.net: services
  # client to server
  traefik.tcp.routers.prosodyc2s.entrypoints: prosodyc2s
  traefik.tcp.routers.prosodyc2s.rule: HostSNI(`*`)
  traefik.tcp.routers.prosodyc2s.tls: "false"
  traefik.tcp.services.prosodyc2s.loadbalancer.server.port: "5222"
  traefik.tcp.routers.prosodyc2s.service: prosodyc2s
  # server to server
  traefik.tcp.routers.prosodys2s.entrypoints: prosodys2s
  traefik.tcp.routers.prosodys2s.rule: HostSNI(`*`)
  traefik.tcp.routers.prosodys2s.tls: "false"
  traefik.tcp.services.prosodys2s.loadbalancer.server.port: "5269"
  traefik.tcp.routers.prosodys2s.service: prosodys2s
  # web
  traefik.http.routers.am-app-xmpp.entrypoints: "websecure"
  traefik.http.routers.am-app-xmpp.rule: "Host(`xmpp.example.com`)"
  traefik.http.routers.am-app-xmpp.tls.certresolver: "lec"
  traefik.http.services.am-app-xmpp.loadbalancer.server.port: "5280"
I have a simple server written in Python that listens on port 8000 inside a private network (HTTP communication). There is now a requirement to switch to HTTPS, and every client that sends a request to the server should be authenticated with its own cert/key pair.
I have decided to use Traefik v2 for this job. Please see the block diagram.
Traefik runs as a Docker image on a host with IP 192.168.56.101. First I wanted to simply forward an HTTP request from a client through Traefik to the Python server running outside Docker on port 8000. I would add the TLS functionality once the forwarding was working properly.
However, I cannot figure out how to configure Traefik to reverse proxy from, e.g., 192.168.56.101/notify?wrn=1 to the Python server at 127.0.0.1:8000/notify?wrn=1.
When I try to send the above request to the server (curl "192.168.56.101/notify?wrn=1") I get "Bad Gateway" as the answer. What am I missing here? This is the first time I have worked with Docker and a reverse proxy/Traefik. I believe it has something to do with ports, but I cannot figure it out.
Here is my Traefik configuration:
docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
hostname: "traefik"
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./traefik.yml:/traefik.yml:ro"
traefik.yml
## STATIC CONFIGURATION
log:
  level: INFO

api:
  insecure: true
  dashboard: true

entryPoints:
  web:
    address: ":80"

providers:
  docker:
    watch: true
    endpoint: "unix:///var/run/docker.sock"
  file:
    filename: "traefik.yml"

## DYNAMIC CONFIGURATION
http:
  routers:
    to-local-ip:
      rule: "Host(`192.168.56.101`)"
      service: to-local-ip
      entryPoints:
        - web

  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8000"
First, 127.0.0.1 will resolve to the Traefik container itself, not to the Docker host. You need to provide a private IP of the node, and it needs to be reachable from the Traefik container.
There is a workaround to proxy to the host's localhost:
change 127.0.0.1 to the IP of the docker0 interface; it should be 172.17.0.1,
and then make your Python server listen on all interfaces (0.0.0.0).
If you use the simple Python HTTP server, nothing needs to change; by default it already listens on all interfaces.
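A minimal sketch of the corresponding change to the dynamic configuration in traefik.yml, assuming the default docker0 bridge address of 172.17.0.1:

## DYNAMIC CONFIGURATION
http:
  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://172.17.0.1:8000"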
I have developed several dockerized full-stack web applications whose requests I am trying to route with Traefik. I want to take advantage of the dynamic configuration via docker-compose labels. I would like to apply the stripPrefix middleware so I can use the same application routing as if each app were served at the root. However, once these rules are applied, every request results in a 504 Gateway Timeout response.
Here's my set up:
Traefik 2.0.1
Docker 19.03.2, Compose 1.24.1
NGINX:latest images
A global docker network on which the Traefik container runs
Multiple multi-container applications, each of which includes an NGINX web server
All applications have their own local networks and the NGINX containers are also on the global network.
Each application is configured to be listening at /
Here is the docker-compose.yml definition for the offending NGINX container:
nginx:
  image: nginx:latest
  container_name: "mps_nginx"
  volumes:
    - ./nginx/confs/nginx.conf:/etc/nginx/default.conf
    - ./static:/www/static
  restart: "always"
  labels:
    - traefik.http.routers.mps.rule=Host(`localhost`) && PathPrefix(`/mps`)
    - traefik.http.middlewares.strip-mps.stripprefix.prefixes=/mps
    - traefik.http.routers.mps.middlewares=strip-mps@docker
  networks:
    - default
    - mps
The aggravating part is that when I comment out the middleware labels, the request goes through just fine, but the application cannot find a matching URL pattern (because the /mps prefix is still in the path).
Prior to this I tested my approach using the whoami container that is defined in the Traefik Quickstart Tutorial:
# Test service to make sure our local docker-compose network functions
whoami:
  image: containous/whoami
  labels:
    - traefik.http.routers.whoami.rule=Host(`localhost`) && PathPrefix(`/whoami`)
    - traefik.http.middlewares.strip-who.stripprefix.prefixes=/whoami
    - traefik.http.routers.whoami.middlewares=strip-who@docker
A request to http://localhost/whoami returns (among other things)
GET / HTTP/1.1.
This is exactly how I expected the routing to work for all my other applications. The Traefik dashboard shows green all around for every middleware I register, and yet all I get is 504 errors.
If anyone has any clues I would sincerely appreciate it.
To summarize, here is my solution to a similar problem:
my-service:
  image: some/image
  ports: # no need to expose port, just to show that service listens on 8090
    - target: 8090
      published: 8091
      mode: host
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.myservice-router.rule=PathPrefix(`/myservice/`)"
    - "traefik.http.routers.myservice-router.service=my-service"
    - "traefik.http.services.myservice-service.loadbalancer.server.port=8090"
    - "traefik.http.middlewares.myservice-strip.stripprefix.prefixes=/myservice"
    - "traefik.http.middlewares.myservice-strip.stripprefix.forceslash=false"
    - "traefik.http.routers.myservice-router.middlewares=myservice-strip"
Now the details.
Routing rule:
... route all calls whose path starts with /myservice/
- "traefik.http.routers.myservice-router.rule=PathPrefix(`/myservice/`)"
... to the service my-service
- "traefik.http.routers.myservice-router.service=my-service"
... on port 8090
- "traefik.http.services.myservice-service.loadbalancer.server.port=8090"
... using the myservice-strip middleware
- "traefik.http.routers.myservice-router.middlewares=myservice-strip"
... this middleware strips /myservice from the path
- "traefik.http.middlewares.myservice-strip.stripprefix.prefixes=/myservice"
... and does not add a trailing slash (/) if the path becomes an empty string
- "traefik.http.middlewares.myservice-strip.stripprefix.forceslash=false"
There is an issue when the prefix does not end with '/'.
Test your config like this:
- "traefik.http.routers.whoami.rule=Host(`localhost`) && (PathPrefix(`/whoami/`) || PathPrefix(`/portainer`))"
- "traefik.http.middlewares.strip-who.stripprefix.prefixes=/whoami"
I had a similar problem that was solved by adding
- "traefik.http.middlewares.strip-who.stripprefix.forceslash=true"
It makes sure the strip prefix middleware doesn't also remove the leading forward slash.
You can read more in the forceslash documentation: https://docs.traefik.io/middlewares/stripprefix/