I have a Jenkins container running on my GCP instance and want to redirect to HTTPS when someone enters the URL with plain http.
Here is my docker-compose file:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins-docker
    restart: always
    privileged: true
    user: root
    ports:
      - 80:80
      - 443:8443
      - 50000:50000
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - ../opt/cert/dcsjenkins.jks:/var/lib/jenkins/dcsjenkins.jks
    environment:
      JAVA_OPTS: -Duser.timezone=CET -Xmx2048m -Djava.awt.headless=true
      JENKINS_OPTS: --httpPort=-1 --httpsPort=8443
Solution 1
Try setting the --httpsRedirectHttp flag on server startup. It requires both the HTTP and HTTPS ports to be defined.
--httpsRedirectHttp = redirect HTTP requests to HTTPS (requires both --httpPort and --httpsPort)
All available flags can be found here.
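Applied to the compose file above, that could look like the sketch below. This is only a sketch: the HTTP port must be re-enabled for the redirect to work, the keystore flags assume the mounted .jks is meant to be used this way, and the keystore password is a placeholder:
environment:
  JAVA_OPTS: -Duser.timezone=CET -Xmx2048m -Djava.awt.headless=true
  JENKINS_OPTS: >-
    --httpPort=80
    --httpsPort=443
    --httpsRedirectHttp
    --httpsKeyStore=/var/lib/jenkins/dcsjenkins.jks
    --httpsKeyStorePassword=<keystore-password>
ports:
  # The redirect points at the configured --httpsPort, so keeping container
  # and host ports identical avoids a stray :8443 in the redirected URL.
  - 80:80
  - 443:443
  - 50000:50000
Since the container already runs as root, binding ports 80 and 443 inside the container should be possible.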
Solution 2
The other option is to deploy a reverse proxy in front of the Jenkins container. If you are already using a load balancer in GCP to expose your Jenkins instance, you can simply add a rule there to redirect HTTP traffic.
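If you instead run your own reverse proxy (e.g. plain nginx) in front of Jenkins, the redirect itself is a short server block. A minimal sketch, assuming a hypothetical jenkins.example.com hostname:
server {
    listen 80;
    server_name jenkins.example.com;
    # Send all plain-HTTP requests to the HTTPS site
    return 301 https://$host$request_uri;
}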
Related
I am running a Symfony Project via drud/ddev (nginx) for local development.
I did this many times before and had no issues whatsoever.
In my recent project I have to use the Mercure-Hub to push notifications from the server to the client.
I required the symfony/mercure-bundle via composer and copied the generated docker-compose content into a docker-compose.mercure.yaml (.ddev/docker-compose.mercure.yaml)
After starting the container the Mercure-Hub works seamlessly but is only reachable over http.
My problem: I only have beginner-level knowledge of nginx and docker-compose.
I am thankful for every bit of advice! :)
Steps to reproduce
Setup basic Symfony Project and run it via DDEV.
Require symfony/mercure-bundle.
Copy docker-compose.yaml and docker-compose.override.yaml content to a docker-compose.mercure.yaml in the .ddev folder (change the port).
Configure Mercure-Hub URL in .env.
Start the container, visit [DDEV-URL]:[MERCURE-PORT], and subscribe to a Mercure topic.
My problem
The Mercure-Hub is only reachable via HTTP.
An HTTPS call gets an 'ERR_SSL_PROTOCOL_ERROR'.
My wish
Access the Mercure-Hub URL / subscribe to Mercure topics via HTTPS.
What I've tried
Reading the Mercure-Hub Docs and trying to adapt the Docker SSL / HTTPS instructions to my local drud/ddev environment
Adding another server to the nginx configuration as in the Mercure-Cookbook "Using NGINX as an HTTP/2 Reverse Proxy in Front of the Hub"
Googling a bunch
Hours of trial and error
Files
ddev config.yaml
name: project-name
type: php
docroot: public
php_version: "8.1"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: true
additional_hostnames: []
additional_fqdns: []
database:
  type: mariadb
  version: "10.4"
nfs_mount_enabled: true
mutagen_enabled: false
use_dns_when_possible: true
composer_version: "2"
web_environment: []
nodejs_version: "16"
docker-compose.mercure.yaml
version: '3'
services:
  ###> symfony/mercure-bundle ###
  mercure:
    image: dunglas/mercure
    restart: unless-stopped
    environment:
      SERVER_NAME: ':3000'
      MERCURE_PUBLISHER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      # Set the URL of your Symfony project (without trailing slash!) as value of the cors_origins directive
      MERCURE_EXTRA_DIRECTIVES: |
        cors_origins http://127.0.0.1:8000
    # Comment the following line to disable the development mode
    command: /usr/bin/caddy run -config /etc/caddy/Caddyfile.dev
    volumes:
      - mercure_data:/data
      - mercure_config:/config
    ports:
      - "3000:3000"
  ###< symfony/mercure-bundle ###
volumes:
  ###> symfony/mercure-bundle ###
  mercure_data:
  mercure_config:
  ###< symfony/mercure-bundle ###
.env
###> symfony/mercure-bundle ###
# See https://symfony.com/doc/current/mercure.html#configuration
# The URL of the Mercure hub, used by the app to publish updates (can be a local URL)
MERCURE_URL=http://ddev-pnp-master-mercure-1:3000/.well-known/mercure
# The public URL of the Mercure hub, used by the browser to connect
MERCURE_PUBLIC_URL=http://ddev-pnp-master-mercure-1:3000/.well-known/mercure
# The secret used to sign the JWTs
MERCURE_JWT_SECRET="!ChangeThisMercureHubJWTSecretKey!"
###< symfony/mercure-bundle ###
Edit 1
I changed my docker-compose thanks to the advice from rfay.
(only showing the relevant part below)
[...]
services:
  mercure:
    image: dunglas/mercure
    restart: unless-stopped
    expose:
      - "3000"
    environment:
      - SERVER_NAME=":3000"
      - HTTP_EXPOSE=9998:3000
      - HTTPS_EXPOSE=9999:3000
[...]
replaced ports with expose
added HTTP_EXPOSE & HTTPS_EXPOSE
Problem with this
Now my problem is that the container doesn't expose any ports (as shown in a Docker Desktop screenshot, not reproduced here).
Solution
With the help of rfay I found the solution (which consisted of reading the ddev documentation properly lol).
What I did
replacing ports with expose
adding VIRTUAL_HOST, HTTP_EXPOSE and HTTPS_EXPOSE under environment
adding container_name & labels (see code below)
My final docker-compose.mercure.yaml
version: '3'
services:
  mercure:
    image: dunglas/mercure
    restart: unless-stopped
    container_name: "ddev-${DDEV_SITENAME}-mercure-hub"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: ${DDEV_APPROOT}
    expose:
      - "3000"
    environment:
      VIRTUAL_HOST: $DDEV_HOSTNAME
      SERVER_NAME: ":3000"
      HTTP_EXPOSE: "9998:3000"
      HTTPS_EXPOSE: "9999:3000"
      MERCURE_PUBLISHER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      MERCURE_EXTRA_DIRECTIVES: |
        cors_origins https://project-name.ddev.site
    # Comment the following line to disable the development mode
    command: /usr/bin/caddy run -config /etc/caddy/Caddyfile.dev
    volumes:
      - mercure_data:/data
      - mercure_config:/config
volumes:
  mercure_data:
  mercure_config:
With this docker-compose in place, my Mercure container is available via HTTPS on port 9999.
For further information see the ddev documentation: https://ddev.readthedocs.io/en/latest/users/extend/custom-compose-files/#docker-composeyaml-examples
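For completeness, the .env entries from the question would then presumably change so that the browser connects through the ddev router over HTTPS while the app keeps publishing container-to-container. The hostnames below are inferred from the compose file above and the project name, so adjust them to your setup:
# Internal URL, used by the app to publish updates (container-to-container, plain HTTP)
MERCURE_URL=http://ddev-project-name-mercure-hub:3000/.well-known/mercure
# Public URL, used by the browser to connect (HTTPS via the ddev router on the HTTPS_EXPOSE port)
MERCURE_PUBLIC_URL=https://project-name.ddev.site:9999/.well-known/mercure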
The solution in https://stackoverflow.com/a/74735903/21252828 does not work unless you add a second dash to the -config option in the command:
...
command: /usr/bin/caddy run --config /etc/caddy/Caddyfile.dev
...
Otherwise the container fails (and restarts endlessly).
Maybe you can edit your post, Christian Neugebauer?
I am trying to connect MinIO with KeyCloak and I follow the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO client, and a third one for the KeyCloak server.
As you can see in the following snippet, the configuration of the MinIO client container is done correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
I have an issue when I try to configure MinIO as described in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
Turns out, all I had to do was change localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will continue searching for something more robust than hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually insert the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
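Because all three services share the minionw bridge network, Docker's embedded DNS lets them reach each other by service name, so the config_url no longer needs a hardcoded container IP. The step-3 command from the question would then presumably become:
mc admin config set myminio identity_openid config_url="http://kcd:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"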
Connection refused occurs when a port is not accessible on the hostname or IP we specified.
Please try exposing the port using the --expose flag, along with the port number you wish to expose, when using the docker CLI. Once it is exposed, you can access it on localhost.
I have a service, some-service, that needs to make http requests to a Jenkins service - both running in separate Docker containers. My issue is that whenever I make a request, my connection is refused.
Both some-service and Jenkins are running on ports 3030 and 4040 with host names some-service and jenkins, respectively.
I can hit Jenkins successfully on my local machine outside of some-service with:
curl -v http://localhost:4040/
However, I cannot reach Jenkins from inside some-service using:
curl -v http://jenkins:4040/
I'm using this simple docker-compose.yaml file to create both some-service and Jenkins:
version: '3'
services:
  some-service:
    container_name: service
    image: service:latest
    hostname: some-service
    build:
      context: service/
      dockerfile: Dockerfile
    environment:
      GET_HOSTS_FROM: dns
    networks:
      - eg-net
    ports:
      - 3030:3030
    depends_on:
      - jenkins
    links:
      - jenkins
    labels:
      kompose.service.type: LoadBalancer
  jenkins:
    container_name: jenkins
    image: jenkinsci/blueocean
    restart: always
    hostname: jenkins
    networks:
      - eg-net
    ports:
      - 4040:8080
    volumes:
      - ./jenkins-data:/var/jenkins_home
networks:
  eg-net:
    driver: bridge
You can't access http://jenkins:4040/ from within your service because port 4040 is only published to the host machine. That's why curl -v http://localhost:4040/ works on your host machine.
If you want to access Jenkins from within another container, you have to use port 8080, because that is the port exposed within the network. So curl -v http://jenkins:8080/ from within your service will work.
Hope this clarifies it.
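To make the two viewpoints concrete, a quick sketch based on the compose file above:
# From the host: 4040 is the published side of the 4040:8080 mapping
curl -v http://localhost:4040/
# From another container on eg-net: use the container port directly
curl -v http://jenkins:8080/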
I'm running a self-hosted GitLab Docker instance, but I'm facing some problems configuring the registry, as I get the error
Error response from daemon: Get https://example.com:4567/v2/: dial tcp <IP>:4567: connect: connection refused
when doing docker login example.com:4567.
So it seems that I have to expose port 4567 somehow.
A (better) alternative would be to configure a second domain for the registry, like registry.example.com. As you can see below, I'm using Let's Encrypt certificates for my GitLab instance. But how do I get a second certificate for the registry?
This is what my docker-compose looks like; I'm using jwilder/nginx-proxy for my reverse proxy.
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:11.9.0-ce.0
  container_name: gitlab
  networks:
    - reverse-proxy
  restart: unless-stopped
  ports:
    - '50022:22'
  volumes:
    - /opt/gitlab/config:/etc/gitlab
    - /opt/gitlab/logs:/var/log/gitlab
    - /opt/gitlab/data:/var/opt/gitlab
    - /opt/nginx/conf.d:/etc/nginx/conf.d
    - /opt/nginx/certs:/etc/nginx/certs:ro
  environment:
    VIRTUAL_HOST: example.com
    VIRTUAL_PROTO: https
    VIRTUAL_PORT: 443
    LETSENCRYPT_HOST: example.com
    LETSENCRYPT_EMAIL: certs@example.com
gitlab.rb
external_url 'https://example.com'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = '/etc/nginx/certs/example.com/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/nginx/certs/example.com/key.pem'
gitlab_rails['backup_keep_time'] = 604800
gitlab_rails['backup_path'] = '/backups'
gitlab_rails['registry_enabled'] = true
registry_external_url 'https://example.com:4567'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/example.com/key.pem"
For the second alternative it would look like:
registry_external_url 'https://registry.example.com'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/registry.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/registry.example.com/key.pem"
But how do I set this up in my docker-compose?
Update
I'm configuring nginx just via the jwilder package, without changing anything. So this part of my docker-compose.yml file looks like this:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"
TL;DR:
So it seems that I have to expose the port 4567 somehow.
Yes; however, jwilder/nginx-proxy does not support more than one port per virtual host, and port 443 is already exposed. There is a pull request for that feature, but it has not been merged yet. You'll need to expose this port another way (see below).
You are using jwilder/nginx-proxy as a reverse proxy to access a GitLab instance in a container, but with your current configuration only port 443 is exposed:
environment:
  VIRTUAL_HOST: example.com
  VIRTUAL_PROTO: https
  VIRTUAL_PORT: 443
All other Gitlab services (including the registry on port 4567) are not proxied and therefore not reachable through example.com.
Unfortunately it is not yet possible to expose multiple ports on a single hostname with jwilder/nginx-proxy. There is a pull request open for that use case, but it has not been merged yet (you are not the only one with this kind of issue).
An (better) alternative would be to configure a second domain for the registry
This won't work if you keep using jwilder/nginx-proxy: even if you changed registry_external_url, you would still be stuck with the port issue, and you cannot allocate the same port to two different services.
What you can do:
vote and comment on the mentioned PR so it gets merged :)
try to build the Docker image from the mentioned pull request's fork and configure your compose file with something like VIRTUAL_HOST=example.com:443,example.com:4567
configure a reverse proxy manually for port 4567 - you could spin up a plain nginx container in addition to your current configuration which would do exactly this (see the sketch after this list), or re-configure your entire proxying scheme without using the jwilder images
update your configuration to expose example.com:4567 instead of example.com:443, but then you'll lose HTTPS access (which is probably not what you are looking for)
I am aware this does not provide a definitive solution, but I hope it helps.
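For the manual-proxy option, here is a minimal sketch of a standalone nginx server block for the registry port. It assumes the gitlab container is reachable as gitlab on the reverse-proxy network and that the certificates are mounted as in the compose file above:
server {
    listen 4567 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/example.com/key.pem;

    location / {
        # Forward registry traffic to GitLab's bundled registry listener
        proxy_pass https://gitlab:4567;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # Image layers can be large; disable the body size limit
        client_max_body_size 0;
    }
}
You would also need to publish port 4567 on whatever container runs this block.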
I'm trying to redirect all incoming traffic from HTTP to HTTPS for a web application that gets served out of a Docker container on a custom port.
If I build this docker-compose file and scale the application, everything works as expected: I can request the application over both HTTP and HTTPS. What I'm trying to accomplish is that only HTTPS gets served and HTTP gets redirected to HTTPS.
Since I use a docker-compose file, I don't have a traefik.toml, and I'm trying to accomplish this without one.
Docker Compose:
traefik:
  image: traefik:latest
  command:
    - "--api"
    - "--docker"
    - "--docker.domain=example.com"
    - "--logLevel=DEBUG"
    - "--docker.watch"
  labels:
    - "traefik.enable=true"
  ports:
    - "80:80"
    - "8080:8080"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /dev/null:/traefik.toml
application:
  image: application
  command: web
  tty: false
  stdin_open: true
  restart: always
  expose:
    - "8081"
  labels:
    - "traefik.backend=application"
    - "traefik.frontend.rule=HostRegexp:{subdomain:[a-z]+}.example.com"
    - "traefik.frontend.priority=1"
    - "traefik.enable=true"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
I tried different variations on the application container, such as:
- "traefik.frontend.entryPoints=http,https"
- "traefik.frontend.redirect.entryPoint=https"
- "traefik.frontend.headers.SSLRedirect=true"
But the most I could accomplish was a 'too many redirects' response with the SSLRedirect label; without it, I get the following from Traefik, and neither HTTP nor HTTPS requests get forwarded correctly.
level=error msg="Recovered from panic in http handler: runtime error: invalid memory address or nil pointer dereference"
Can anyone push me in the right direction?
Thanks in advance ;)
I run under the following settings:
user:~$ docker --version
Docker version 1.13.1, build 092cba3
user:~$ docker-compose --version
docker-compose version 1.8.0
Docker PS Response
IMAGE COMMAND ... PORTS NAMES
application "dotnet Web..." ... 8081/tcp components_application_1
traefik:latest "/traefik --api --..." ... 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp components_traefik_1
Infrastructure setup
aws-elb => vpc => ec2...ecn
traefik per instance,
n applications per instance
Note: this only works up to Traefik v1.7. After v2.x you need a different config setup, which I haven't figured out yet.
After deeper research, I found the solution myself.
The problem was a missing label on the application container; after I added
- "traefik.frontend.headers.SSLProxyHeaders=X-Forwarded-Proto: https"
- "traefik.frontend.headers.SSLRedirect=true"
on my application containers it worked like a charm with a clear 301 redirect.
Why is the header needed? By default, the AWS ELB takes an HTTPS request and forwards it via HTTP (port 80) to the connected instance, and during this process the ELB adds the X-Forwarded-Proto: https header to the request.
Since Traefik doesn't know that it is running behind an ELB, it would do the redirect over and over again, but the header stops this behavior.
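For anyone on Traefik v2 (which, as noted above, needs a different setup): the usual v2 equivalent is a redirectscheme middleware attached to the HTTP router via labels. A rough sketch only, not tested behind an ELB, and the router and middleware names are made up:
labels:
  # Hypothetical v2 router for the plain-HTTP entrypoint
  - "traefik.http.routers.app-http.rule=HostRegexp(`{subdomain:[a-z]+}.example.com`)"
  - "traefik.http.routers.app-http.entrypoints=web"
  # Middleware that issues a permanent redirect to HTTPS
  - "traefik.http.middlewares.to-https.redirectscheme.scheme=https"
  - "traefik.http.middlewares.to-https.redirectscheme.permanent=true"
  - "traefik.http.routers.app-http.middlewares=to-https"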