Rancher, public subdomains and nginx - docker

I was running a complete CI stack on some local servers that I am trying to migrate to Rancher.
First, I created the following configuration on one node with docker-compose, and it seems to run perfectly (i.e., I can access each element separately via its external public subdomain):
jwilder/nginx-proxy
jrcs/letsencrypt-nginx-proxy-companion:latest
registry:2.6.2
rancher/server:latest
Now, I want to access some elements from brand-new Rancher stacks via their respective external public subdomains, for instance https://gitlab.example.com and https://jenkins.example.com. Unfortunately, it doesn't work.
Actually, when I upload the following docker-compose.yml file while creating a stack, it seems unable to connect to the existing stack (the one that supports Rancher itself), and basically I cannot access the services, even though they are running fine:
version: '2'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    labels:
      io.rancher.container.pull_image: always
    ports:
      - "27100:80"
      - "27143:443"
      - "27122:22"
    restart: always
    volumes:
      - /var/gitlab_volume/config:/etc/gitlab
      - /var/gitlab_volume/logs:/var/log/gitlab
      - /var/gitlab_volume/data:/var/opt/gitlab
    environment:
      VIRTUAL_HOST: "gitlab.example.com"
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: "gitlab.example.com"
      LETSENCRYPT_EMAIL: "admin@example.com"
What is the appropriate approach?
For info, I have already checked Rancher external subdomains, but at this stage I want to use my own nginx server as the load balancer.

Here is the final docker-compose.yml definition:
version: '2'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    network_mode: bridge
    labels:
      io.rancher.container.pull_image: always
    ports:
      - "27100:80"
      - "27143:443"
      - "27122:22"
    restart: always
    volumes:
      - /var/gitlab_volume/config:/etc/gitlab
      - /var/gitlab_volume/logs:/var/log/gitlab
      - /var/gitlab_volume/data:/var/opt/gitlab
    environment:
      VIRTUAL_HOST: "gitlab.example.com"
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: "gitlab.example.com"
      LETSENCRYPT_EMAIL: "admin@example.com"
We just need to force the network_mode on each container definition.

Related

Different domain with different phpmyadmin service and the "same port" problem (nginx reverse proxy, docker)

I have a VPS with an nginx-proxy container, and I create WordPress websites, each with a phpMyAdmin service. If I want to create another site with this definition, I get a "same port" problem.
OK, I can change the port to 2998 and it works fine, but then I need to open a new port on my VPS. I don't want to add or change a port for each site.
Now:
example-a.com:2999 -> example-a phpMyAdmin login page
example-b.com:2998 -> example-b phpMyAdmin login page
Is there a way to be directed to the appropriate container by domain name?
example-a.com:2999 -> example-a phpMyAdmin login page
example-b.com:2999 -> example-b phpMyAdmin login page
My nginx-proxy definition:
networks:
  nginx-proxy:
    external: false
    name: nginx-reverse-proxy
  default:
    name: nginx-reverse-proxy-default
version: '2'
services:
  nginx-proxy:
    build:
      context: .nginx-proxy
      dockerfile: Dockerfile
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - .nginx-proxy/certs:/etc/nginx/certs:ro
      - .nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - .nginx-proxy/dhparam:/etc/nginx/dhparam
      - /usr/share/nginx/html
    networks:
      - nginx-proxy
  nginx-proxy-acme:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    restart: always
    volumes_from:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - .nginx-proxy/certs:/etc/nginx/certs:rw
      - .nginx-proxy-acme/acme:/etc/acme.sh
And this is my WordPress site definition:
version: "3.9"
volumes:
database_volume: {}
x-logging:
&default-logging
driver: json-file
options:
max-size: '1m'
max-file: '3'
services:
web:
build:
context: ./.docker
dockerfile: Dockerfile_web
container_name: test_web
ports:
- '3000:80'
volumes:
- ./wp:/var/www
depends_on:
- database
- php
restart: always
logging: *default-logging
database:
image: mariadb:latest
container_name: test_database
environment:
MYSQL_USER: wp
MYSQL_PASSWORD: wp
MYSQL_DATABASE: wp
MYSQL_ROOT_PASSWORD: wp
volumes:
- ./database_volume:/var/lib/mysql
expose:
- 3306
restart: always
logging: *default-logging
php:
build:
context: ./.docker
dockerfile: Dockerfile_php
container_name: test_php
working_dir: /var/www/
volumes:
- ./wordpress:/var/www
restart: always
logging: *default-logging
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: test_phpmyadmin
links:
- database:db
ports:
- '2999:80'
restart: always
logging: *default-logging
What you want is not possible, but you probably don't actually want it. It becomes clear once you think through what you want to configure, and what would happen if a user went to either URL:
you have configured example-a.com to point to your IP
you have configured example-b.com to point to your IP
you have configured your nginx-proxy container to listen on ports 80 and 443
you want to configure your WordPress containers to both listen on port 2999
you, or rather the acme-companion, have configured your nginx container to forward HTTP requests that ask for host example-a.com to go to the container for example A with port 2999, and requests that ask for example-b.com to go to container B with port 2999
Now, you can see right away that you have two things attempting to listen on the same network interface on port 2999. That doesn't work, and it can't, because something would have to pick up incoming requests before the request is parsed to find out which host it was meant for. Container A can't accept the request and, if it's meant for B, hand it over - A doesn't know about B.
So if you think about a user sending a request to example-a.com:2999, what really happens is that the request goes to <yourip>:2999, just as a request to example-b.com:2999 ends up going to <yourip>:2999.
How can that problem be solved? By having a third container C that accepts user requests, looks into each request, and based on whether it wanted container A or B, hands the request over to A or B.
Here is the great thing: you already have that! Container C is really your nginx container, which is listening on port 80/443. So if your users go to example-a.com without providing a port, it will go to 80 or 443 (depending on whether they used http or https). Then, nginx will analyze the request, and send it to the correct container. For this, it doesn't really matter what port A and B listen on, because to the outside world, it looks like they are listening on 80/443.
So the real answer is that while you can't combine custom ports with virtual hosts and use the same port for multiple containers (other than 80/443), you don't actually NEED custom ports in the first place! If you just configure your containers with the default ports, users can use both https://example-a.com and https://example-b.com and it will 'just work'™
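As a minimal sketch of that last point (the subdomain is illustrative, and it assumes the phpMyAdmin container is attached to the nginx-reverse-proxy network defined above), each phpMyAdmin service would carry a VIRTUAL_HOST and expose only its internal port, with no host port published:
version: '3.9'
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      VIRTUAL_HOST: pma.example-a.com   # hypothetical subdomain; DNS must point to the VPS
    expose:
      - 80                              # internal port only; no host port is published
    restart: always
    networks:
      - proxy
networks:
  proxy:
    external: true
    name: nginx-reverse-proxy           # the proxy network from the nginx-proxy definition above
The second site would repeat the same pattern with its own VIRTUAL_HOST; nginx-proxy then routes https://pma.example-a.com and https://pma.example-b.com arriving on 443 to the right container.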

How to allow a docker container to communicate with another container over localhost

I have a unique situation where I need to be able to access a container over a custom local domain (example.test), which I've added to my /etc/hosts file pointing to 127.0.0.1. The library I'm using for OIDC uses this domain for redirecting the browser, and if it is an internal Docker hostname, the browser obviously will not resolve it.
I've tried pointing it to example.test, but it says it cannot connect. I've also tried looking up the private IP of the Docker network, and that just times out.
Add network_mode: host to the service definition of the calling application in the docker-compose.yml file. This allows calls to localhost to be routed to the server's localhost and not the container's localhost.
E.g.
docker-compose.yml
version: '3.7'
services:
  mongodb:
    image: mongo:latest
    restart: always
    logging:
      driver: local
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DB_ADMIN_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${DB_ADMIN_PASSWORD}
    ports:
      - 27017:27017
    volumes:
      - mongodb_data:/data/db
  callingapp:
    image: <some-img>
    restart: always
    logging:
      driver: local
    env_file:
      - callingApp.env
    ports:
      - ${CALLING_APP_PORT}:${CALLING_APP_PORT}
    depends_on:
      - mongodb
    network_mode: host  # << Add this line
  app:
    image: <another-img>
    restart: always
    logging:
      driver: local
    depends_on:
      - mongodb
    env_file:
      - app.env
    ports:
      - ${APP_PORT}:${APP_PORT}
volumes:
  mongodb_data:

docker nginx reverse proxy 503 Service Temporarily Unavailable

I want to use nginx as a reverse proxy for remote access to my home automation.
My infrastructure YAML looks as follows:
# /infrastructure/docker-compose.yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: proxy
    networks:
      - raspberry_network
    ports:
      - 80:80
      - 443:443
    environment:
      - ENABLE_IPV6=true
      - DEFAULT_HOST=${RASPBERRY_IP}
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d
      - ./proxy/vhost.d:/etc/nginx/vhost.d
      - ./proxy/html:/usr/share/nginx/html
      - ./proxy/certs:/etc/nginx/certs
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
networks:
  raspberry_network:
My YAML containing the app configuration looks like this:
# /apps/docker-compose.yaml
version: '3'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/raspberrypi4-homeassistant:stable
    volumes:
      - ./homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    environment:
      - 'TZ=Europe/Berlin'
      - 'VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST}'
      - 'VIRTUAL_PORT=8123'
    deploy:
      resources:
        limits:
          memory: 250M
    restart: unless-stopped
    networks:
      - infrastructure_raspberry_network
    ports:
      - '8123:8123'
networks:
  infrastructure_raspberry_network:
    external: true
Via Portainer I validated that both containers are connected to the same network. However, when accessing the local IP of my Raspberry Pi, 192.168.0.10, I receive "503 Service Temporarily Unavailable".
Of course, when I try accessing my app via the virtual host domain xxx.xxx.de, it doesn't work either.
Any idea what the issue might be? Or any ideas on how to debug this further?
You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure that the containers are on the same network (or Docker bridge network).
Also make sure that any containers that specify VIRTUAL_HOST are running before the nginx-proxy container runs. With docker-compose, this can be achieved by adding them to the depends_on config of the nginx-proxy container, as sketched below.
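A minimal sketch of that advice, assuming both services are declared in a single compose file (the VIRTUAL_HOST domain is illustrative):
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - raspberry_network
    depends_on:
      - homeassistant                  # start the backend first so its VIRTUAL_HOST is picked up
    restart: always
  homeassistant:
    image: homeassistant/raspberrypi4-homeassistant:stable
    environment:
      - VIRTUAL_HOST=home.example.de   # hypothetical domain the proxy should route to this container
      - VIRTUAL_PORT=8123
    networks:
      - raspberry_network
    restart: unless-stopped
networks:
  raspberry_network: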

Why can't I connect to this docker compose service?

So I have this docker compose file
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:5080/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
ports:
- 5080:9091
restart: unless-stopped
I'm new to docker compose and trying it out for the first time. I need to be able to access the transmission service via http://localhost:8080 but nginx is returning a 502.
How should I change my compose file so that http://localhost:8080 will connect to the transmission service?
How can I make the transmission service not accessible via http://localhost:5080 and only accessible via http://localhost:8080 using docker compose?
I have tested the code below; it is working:
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:9091/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
expose:
- "9091"
restart: unless-stopped
You don't need to expose port 5080 to the host; the nginx container can access the container port directly. The proxy URL needs to point to port 9091. Now you can't access the transmission service directly but need to go through the proxy server.
You should be able to access the other container using the service name and container port:
- PROXY_URL=http://transmission:9091/
If you do not want to access the transmission service from localhost, do not declare the host port:
ports:
  - 9091

Docker compose set container name for stacks

I am deploying a small stack onto a UCP.
One of the issues I am facing is naming the container for service1.
I need to have a static name for the container, since it's utilized by mycustomimageforservice2.
The container_name option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
I have to use version: 3 compose files.
version: "3"
services:
service1:
image: dockerhub/service1
ports:
- "8080:8080"
container_name: service1container
networks:
- mynet
service2:
image: myrepo/mycustomimageforservice2
networks:
- mynet
restart: on-failure
networks:
mynet:
What are my options?
You can't force a container name in compose as it's designed to allow things like scaling a service (by updating the number of replicas), and that wouldn't work with fixed names.
One service can access the other using the service name (http://serviceName:internalServicePort) instead, and Docker will do the rest for you (such as resolving it to an actual container address and load balancing between replicas), as sketched below.
This works with the default network type, which is overlay.
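A minimal sketch of service-name addressing (the environment variable name is illustrative; your image would read whatever configuration it expects):
version: "3"
services:
  service1:
    image: dockerhub/service1
    networks:
      - mynet
  service2:
    image: myrepo/mycustomimageforservice2
    environment:
      SERVICE1_URL: http://service1:8080   # hypothetical variable; "service1" resolves over the shared network
    networks:
      - mynet
    restart: on-failure
networks:
  mynet: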
You can address your problem by linking services in the docker-compose.yml file.
Something like:
version: "3"
services:
service1:
image: dockerhub/service1
ports:
- "8080:8080"
networks:
- mynet
service2:
image: myrepo/mycustomimageforservice2
networks:
- mynet
restart: on-failure
links:
- service1
networks:
mynet:
Using the links argument in your docker-compose.yml allows one service to access another using the container name; in this case, service2 would establish a connection to service1 thanks to the links parameter. I'm not sure why you use a network, but with the links parameter it would not be necessary.
The container_name option is ignored when deploying a stack in swarm mode since container names need to be unique.
https://docs.docker.com/compose/compose-file/#container_name
If you do have to use version 3 but don't work with swarms, you can add --compatibility to your commands.
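For example, in this mode docker-compose translates version 3 deploy keys into their non-swarm equivalents:
docker-compose --compatibility up -d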
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
See this in the full docker-compose file below:
version: '3.9'
services:
  node-ecom:
    build: .
    image: "node-ecom-image:1.0.0"
    container_name: my-web-container
    ports:
      - "4000:3000"
    volumes:
      - ./:/app:ro
      - /app/node_modules
      - /config/.env
    env_file:
      - ./config/.env