Expose port in docker-compose or configure second letsencrypt certificate - docker

I'm running a self-hosted GitLab Docker instance, but I'm facing some problems configuring the registry, as I get the error
Error response from daemon: Get https://example.com:4567/v2/: dial tcp <IP>:4567: connect: connection refused
when doing docker login example.com:4567.
So it seems that I have to expose the port 4567 somehow.
A (better) alternative would be to configure a second domain for the registry - like registry.example.com. As you can see below, I'm using Let's Encrypt certificates for my GitLab instance. But how do I get a second certificate for the registry?
This is what my docker-compose file looks like - I'm using jwilder/nginx-proxy as my reverse proxy.
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:11.9.0-ce.0
  container_name: gitlab
  networks:
    - reverse-proxy
  restart: unless-stopped
  ports:
    - '50022:22'
  volumes:
    - /opt/gitlab/config:/etc/gitlab
    - /opt/gitlab/logs:/var/log/gitlab
    - /opt/gitlab/data:/var/opt/gitlab
    - /opt/nginx/conf.d:/etc/nginx/conf.d
    - /opt/nginx/certs:/etc/nginx/certs:ro
  environment:
    VIRTUAL_HOST: example.com
    VIRTUAL_PROTO: https
    VIRTUAL_PORT: 443
    LETSENCRYPT_HOST: example.com
    LETSENCRYPT_EMAIL: certs#example.com
gitlab.rb
external_url 'https://example.com'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = '/etc/nginx/certs/example.com/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/nginx/certs/example.com/key.pem'
gitlab_rails['backup_keep_time'] = 604800
gitlab_rails['backup_path'] = '/backups'
gitlab_rails['registry_enabled'] = true
registry_external_url 'https://example.com:4567'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/example.com/key.pem"
For the second alternative it would look like:
registry_external_url 'https://registry.example.com'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/registry.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/registry.example.com/key.pem"
But how do I set this up in my docker-compose?
Update
I'm configuring nginx just via the jwilder image, without changing anything. So this part of my docker-compose.yml file looks like this:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"

TL;DR:
So it seems that I have to expose the port 4567 somehow.
Yes. However, jwilder/nginx-proxy does not support more than one port per virtual host, and port 443 is already exposed. There is a pull request for that feature, but it has not been merged yet. You'll need to expose this port another way (see below).
You are using jwilder/nginx-proxy as a reverse proxy to access a Gitlab instance in a container, but with your current configuration only port 443 is exposed:
environment:
  VIRTUAL_HOST: example.com
  VIRTUAL_PROTO: https
  VIRTUAL_PORT: 443
All other Gitlab services (including the registry on port 4567) are not proxied and therefore not reachable through example.com.
Unfortunately, it is not yet possible to expose multiple ports on a single hostname with jwilder/nginx-proxy. There is a pull request open for that use case, but it has not been merged yet (you are not the only one with this kind of issue).
A (better) alternative would be to configure a second domain for the registry
This won't work if you keep using jwilder/nginx-proxy: even if you changed registry_external_url, you'll still be stuck with the port issue, and you cannot allocate the same port to two different services.
What you can do:
vote and comment on the mentioned PR so it gets merged :)
try to build the Docker image from the mentioned pull request's fork and configure your compose file with something like VIRTUAL_HOST=example.com:443,example.com:4567
configure a reverse proxy manually for port 4567 - you could spin up a plain nginx container alongside your current configuration to do specifically this, or re-configure your entire proxying scheme without using the jwilder images
update your configuration to expose example.com:4567 instead of example.com:443, but you'll lose HTTPS access (though that's probably not what you are looking for)
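To give a rough idea of the third option, here is a minimal sketch of an extra plain-nginx service publishing port 4567 next to the existing proxy. The service name registry-proxy and the config path are hypothetical, not taken from your setup:

```yaml
# Hypothetical sketch: a plain nginx container that only forwards port 4567
# to the GitLab container, leaving jwilder/nginx-proxy untouched.
registry-proxy:
  image: nginx:stable
  container_name: registry-proxy
  networks:
    - reverse-proxy
  ports:
    - '4567:4567'  # publish the registry port on the host
  volumes:
    # a hand-written vhost that proxies :4567 to the gitlab container
    - /opt/registry-proxy/registry.conf:/etc/nginx/conf.d/registry.conf:ro
    - /opt/nginx/certs:/etc/nginx/certs:ro
```

with /opt/registry-proxy/registry.conf along these lines (reusing the existing certificate, since the registry stays on example.com):

```nginx
server {
    listen 4567 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/example.com/key.pem;
    location / {
        # "gitlab" is the compose service name on the reverse-proxy network
        proxy_pass https://gitlab:4567;
    }
}
```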
I am aware this does not provide a definitive solution, but I hope it helps.

Related

Traefik Docker proxy - Cannot change listening port of PHP-Apache

I have a simple PHP Laravel Docker image, built with PHP Apache, listening on port 80 (by default).
I have a Docker Traefik installation that works very well via HTTPS (port 443).
Now, if I use the following docker-compose.yml for the Laravel installation:
version: "3.8"
services:
  resumecv:
    image: sineverba/resumecv-backend:0.1.0-dev
    container_name: resumecv
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.resumecv-backend.entrypoints=websecure"
      - "traefik.http.routers.resumecv-backend.service=resumecv-backend"
      - "traefik.http.routers.resumecv-backend.rule=Host(`resumecvbackend.example.com`)"
      - "traefik.http.services.resumecv-backend.loadbalancer.server.port=80"
networks:
  proxy:
    external: true
It works (mapped to port 80).
If I change the published port:
version: "3.8"
services:
  resumecv:
    image: sineverba/resumecv-backend:0.1.0-dev
    container_name: resumecv
    networks:
      - proxy
    ports:
      - "9999:80"
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.resumecv-backend.entrypoints=websecure"
      - "traefik.http.routers.resumecv-backend.service=resumecv-backend"
      - "traefik.http.routers.resumecv-backend.rule=Host(`resumecvbackend.example.com`)"
      - "traefik.http.services.resumecv-backend.loadbalancer.server.port=9999"
networks:
  proxy:
    external: true
I get a Bad Gateway from Cloudflare (service not reachable).
I know that I could change the Apache port inside the container itself, but I would like to use the out <-> in mapping of the ports definition.
Curl test
From the host, I can curl http://127.0.0.1:9999 successfully.
I can also browse the website using the IP of the host (192.168.1.100:9999).
Label traefik port
I tried adding the traefik.port=9999 label, without luck.
Removing the load balancer label
If I remove the "traefik.http.services.resumecv-backend.loadbalancer.server.port=9999" label, I get a laconic 404 Not Found.
Port publishing...
ports:
  - "9999:80"
...doesn't change the port on which your container is listening. It simply establishes a mapping from the host into the container. Your service is still listening on port 80, and that's the port other containers -- including traefik -- will need to use to contact your service.
If you're using a frontend like traefik you don't need the ports entry (because you'll be accessing the service through traefik, rather than directly through a host port).
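In other words, you can keep publishing any host port you like for direct access, as long as the Traefik label keeps pointing at the container port. A minimal sketch of the relevant fragment (just the two pieces that interact; everything else stays as in your file):

```yaml
# Sketch: 9999 on the host is only for direct access from outside Docker;
# Traefik talks to the container port 80, where Apache actually listens.
ports:
  - "9999:80"
labels:
  - "traefik.http.services.resumecv-backend.loadbalancer.server.port=80"
```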

Traefik with Docker-Compose not working as expected

I am fairly new to using traefik, so I might be totally missing something simple, but I have the following docker-compose.yaml:
version: '3.8'
services:
  reverse-proxy:
    container_name: reverse_proxy
    restart: unless-stopped
    image: traefik:v2.0
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --api.insecure=true
      - --providers.file.directory=/conf/
      - --providers.file.watch=true
      - --providers.docker=true
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./scripts/certificates/conf/:/conf/
      - ./scripts/certificates/ssl/:/certs/
    networks:
      - bnkrl.io
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
  bankroll:
    container_name: bankroll
    build:
      context: .
    ports:
      - "3000"
    volumes:
      - .:/usr/src/app
    command: yarn start
    networks:
      - bnkrl.io
    labels:
      - "traefik.http.routers.bankroll.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
      - "traefik.http.services.bankroll.loadbalancer.server.port=3000"
      - "traefik.http.routers.bankroll-https.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.http.routers.bankroll-https.tls=true"
networks:
  bnkrl.io:
    external: true
But for some reason the following is happening:
Running curl when ssh'd into my bankroll container gives the following:
/usr/src/app# curl bankroll.bnkrl.io
curl: (7) Failed to connect to bankroll.bnkrl.io port 80: Connection refused
Despite having - "traefik.http.services.bankroll.loadbalancer.server.port=3000" label set up.
I am also unable to hit traefik from my application container:
curl traefik.bnkrl.io
curl: (6) Could not resolve host: traefik.bnkrl.io
Despite my expectation to be able to do so since they are both on the same network.
Any help with understanding what I might be doing wrong would be greatly appreciated! My application (bankroll) is a very basic hello-world react app, but I don't think any of the details around that are relevant to the issue I'm facing.
EDIT: I am also not seeing any error logs on traefik side of things.
You are using host names that are not declared anywhere and are therefore unreachable.
To reach a container from another container, you need to use the service name. For example, if you connect to bankroll from the reverse-proxy container, it will hit the other service.
If you want to access them from the host machine, you have to publish the ports (which you did - that's the ports section in your docker-compose file) and access them via localhost or your machine's local IP address instead of traefik.bnkrl.io.
If you want to access them via traefik.bnkrl.io, you have to declare that host name and point it to the place where the Docker containers are running.
So either add a DNS record in the bnkrl.io domain pointing to your local machine, or a HOSTS file entry on your computer pointing to 127.0.0.1.
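For local development, the HOSTS-file route might look like this (assuming the containers run on the same machine you are testing from):

```text
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1   traefik.bnkrl.io
127.0.0.1   bankroll.bnkrl.io
```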
Another note: for SSL you are going to need a valid certificate for the host name. In local development you can use the self-signed certificate provided by Traefik, but you may have to install it on the computer connecting to the service, or allow untrusted certificates in your browser or wherever you are making the requests from (some browsers no longer accept self-signed certificates). For SSL on the Internet you will need to look at things like Let's Encrypt.

docker-compose: varnish+apache2 return a 503 error `Backend fetch failed`

I am trying to run a very simple docker-compose.yml file based on varnish and php7.1+apache2 services:
version: "3"
services:
  cache:
    image: varnish
    container_name: varnish
    volumes:
      - ./default.vcl:/etc/varnish/default.vcl
    links:
      - web:webserver
    depends_on:
      - web
    ports:
      - 80:80
  web:
    image: benit/stretch-php-7.1
    container_name: web
    ports:
      - 8080:80
    volumes:
      - ./index.php:/var/www/html/index.php
The default.vcl contains:
vcl 4.0;
backend default {
    .host = "webserver";
    .port = "8080";
}
I encountered the following error when browsing at http://localhost/:
Error 503 Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: 9
Varnish cache server
The web service works fine when I test it at http://localhost:8080/.
What's wrong?
You need to configure varnish to communicate with "web" on port "80" rather than "webserver" on port "8080".
The name "web" comes from the service name in your compose file. There's no need to set a container name, and indeed that breaks the ability to scale or perform rolling updates if you transition to swarm mode. Links have been deprecated in favor of the shared networks that docker compose provides (links are very brittle, breaking if you update the web container). And depends_on does not assure that the other service is ready to receive requests. If you have a hard dependency that varnish must not start until the web server can receive requests, you'll want to update the entrypoint with a task that waits for the remote port to be reachable, and have a plan for how to handle the web server going down.
The port 80 comes from the container port. There is no need to publish port 8080 on the docker host if you only want to access it through varnish, and doing so would be a security risk to many. Containers communicate directly with the container port, not back out through the host and mapped back into a container.
The resulting compose file could look like:
version: "3"
services:
  cache:
    image: varnish
    container_name: varnish
    volumes:
      - ./default.vcl:/etc/varnish/default.vcl
    ports:
      - 80:80
  web:
    image: benit/stretch-php-7.1
    volumes:
      - ./index.php:/var/www/html/index.php
And importantly, your varnish config would look like:
vcl 4.0;
backend default {
    .host = "web";
    .port = "80";
}

Traefik basic configuration for running in a Docker Swarm

From what I can see it goes like this:
docker-traefik.yml:
version: '3'
services:
  traefik:
    image: traefik
    command:
      - --docker           # enable Docker provider
      - --docker.swarmmode # use Docker Swarm Mode as data provider
    ports:
      - "80:80"
    volumes:
      # for it to be able to listen to Docker events
      - /var/run/docker.sock:/var/run/docker.sock
docker-whoami.yml:
version: '3'
networks:
  traefik_default:
    external: true
services:
  whoami:
    image: containous/whoami
    networks:
      # add to the traefik network
      - traefik_default
    deploy:
      labels:
        # whoami is on port 80
        - "traefik.port=80"
        # whoami is on the traefik_default network
        - "traefik.docker.network=traefik_default"
        # when to forward requests to whoami
        - "traefik.frontend.rule=Host:example.com"
Let me quote the documentation here:
Required labels:
traefik.frontend.rule
traefik.port - Without this the debug logs will show this service is deliberately filtered out.
traefik.docker.network - Without this a 504 may occur.
...
traefik.docker.network Overrides the default docker network to use for connections to the container. [1]
traefik.port=80 Registers this port. Useful when the container exposes multiples ports.
But why can't it just take the exposed ports as a default value for traefik.port? And from what I can see it works without traefik.docker.network (that is, if traefik_default is the first service's network). When do I get 504s?
But why can't it just take the exposed ports for a default value of traefik.port?
If your container has 3 or 4 exposed ports, which one should Traefik use? Something has to tell Traefik which of these ports is the right one - and that is what you do with traefik.port. What is the problem with using the default port of your configured service?
You should expose 80, 443 and 8080: 80 and 443 for HTTP/HTTPS web pages and 8080 for the Traefik dashboard. If you don't want to use the dashboard, you don't need to expose 8080.
Also, I don't see any network configured for traefik in your compose file - should it have no network? Your service and Traefik need to be in the same network; otherwise Traefik can't reach your service and forward requests.
Also, where are the endpoints?
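To illustrate the ambiguity, consider a sketch of a service exposing several ports (the image name and ports here are hypothetical), where only the label tells Traefik which one to target:

```yaml
# Sketch: a container exposing several ports; without traefik.port,
# Traefik could not know which one to route traffic to.
services:
  app:
    image: example/app  # hypothetical image exposing 8080 (HTTP) and 9090 (metrics)
    networks:
      - traefik_default
    deploy:
      labels:
        - "traefik.frontend.rule=Host:app.example.com"
        - "traefik.docker.network=traefik_default"
        - "traefik.port=8080"  # route to the HTTP port, not the metrics port
```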

docker and jwilder/nginx-proxy http/https issue

I'm using Docker on OSX via boot2docker.
I have 2 hosts, site1.loc.test.com and site2.loc.test.com, pointed at the IP address of the Docker host.
Both should be available via ports 80 and 443.
So I'm using jwilder/nginx-proxy for reverse-proxy purposes.
But in fact, when I run all of them via docker-compose, every time I try to open a site via port 80 I get redirected to 443 (301 Moved Permanently).
Maybe I've missed something in the jwilder/nginx-proxy configuration?
docker-compose.yml
proxy:
  image: jwilder/nginx-proxy
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - certs:/etc/nginx/certs
  ports:
    - "80:80"
    - "443:443"
site1:
  image: httpd:2.4
  volumes:
    - site1:/usr/local/apache2/htdocs
  environment:
    VIRTUAL_HOST: site1.loc.test.com
  expose:
    - "80"
site2:
  image: httpd:2.4
  volumes:
    - site2:/usr/local/apache2/htdocs
  environment:
    VIRTUAL_HOST: site2.loc.test.com
  expose:
    - "80"
Just to keep this topic up to date: jwilder/nginx-proxy has meanwhile introduced a flag for this, HTTPS_METHOD=noredirect, to be set as an environment variable.
Further reading on GitHub
I think your configuration should be correct, but it seems that this is the intended behaviour of jwilder/nginx-proxy. See these lines in the file nginx.tmpl: https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl#L89-L94
It seems that if a certificate is found, you will always be redirected to https.
EDIT: I found the confirmation in the documentation
The behavior for the proxy when port 80 and 443 are exposed is as
follows:
If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available.
You can still use a custom configuration. You could also try to override the file nginx.tmpl in a new Dockerfile.
To serve traffic in both SSL and non-SSL modes without redirecting to SSL, you can include the environment variable HTTPS_METHOD=noredirect (the default is HTTPS_METHOD=redirect).
HTTPS_METHOD must be specified on each container for which you want to override the default behavior.
Here is an example Docker Compose file:
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./config/certs:/etc/nginx/certs
    environment:
      DEFAULT_HOST: my.example.com
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      HTTPS_METHOD: noredirect
      VIRTUAL_HOST: my.example.com
Note: As in this example, the environment variable HTTPS_METHOD must be set on the app container, not the nginx-proxy container.
Ref: How SSL Support Works section for the jwilder/nginx-proxy Docker image.
