I'm trying to create some kind of reverse proxy server that would serve an application running on a port on my local network (192.168.0.15:5083) through either another port (192.168.0.15:<ANOTHER PORT>) or through another path on the same IP address (192.168.0.15/pathname). I want this to be reachable from other computers on the same network.
I'm trying to achieve this using Traefik with Docker, through a docker-compose.yml file. Currently I have it set up like this:
lms:
  container_name: lms
  image: epoupon/lms
  user: ${PUID}:${PGID}
  ports:
    - 5083:5082
  volumes:
    - ${USERDIR}/docker/lms:/var/lms
    - /media/music:/music:ro
  environment:
    - TZ=${TZ}
    - PUID=${PUID}
    - PGID=${PGID}
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.lms.rule=Path(`/lms`)"
  restart: unless-stopped
reverse-proxy:
  image: traefik:v2.6
  command: --api.insecure=true --providers.docker
  ports:
    - "80:80"
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
What I'm trying to do here is create a path on the server's IP address (192.168.0.15/lms) that serves port 5083 (192.168.0.15:5083). The purpose is to then be able to apply CORS headers to the proxy server.
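For context, the CORS labels I intend to attach later would look roughly like this, based on the Traefik v2 headers middleware docs (an untested sketch; the middleware name lms-cors is just a placeholder I picked):
labels:
  - "traefik.http.middlewares.lms-cors.headers.accesscontrolalloworiginlist=*"
  - "traefik.http.middlewares.lms-cors.headers.accesscontrolallowmethods=GET,OPTIONS"
  - "traefik.http.routers.lms.middlewares=lms-cors"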
When visiting 192.168.0.15/lms from another machine on the same network as the server, I get this error message:
Fatal error: failed loading /js/jquery-1.10.2.min.js
I interpret this as the proxy reaching the app on port 5083, but the assets/resources used by its front end on that port not loading correctly.
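For reference, the next thing I was planning to try is a PathPrefix rule plus a stripprefix middleware, since Path(`/lms`) matches only /lms exactly and not sub-paths (an untested sketch; lms-strip is a placeholder name):
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.lms.rule=PathPrefix(`/lms`)"
  - "traefik.http.middlewares.lms-strip.stripprefix.prefixes=/lms"
  - "traefik.http.routers.lms.middlewares=lms-strip"
Though if the app emits absolute asset URLs like /js/jquery-1.10.2.min.js, I suspect it would also need its own base-path setting to work under a sub-path.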
Am I doing this right, or should I take a different approach?
I have a VM which runs multiple containers, all attached to one Docker network:
Traefik (as reverse proxy & load balancer)
cloudflared as tunnel
whoami (for testing purposes)
and some containers like photoprism, nextcloud, node-red,...
I generated an origin cert via Cloudflare which has been added to Traefik.
In Cloudflare, I have a subdomain which points via the tunnel to https://172.16.10.11 (the IP of the VM). This causes an insecure connection (no IP SAN applies -> I don't think this is possible on a private IP?). When I disable TLS verification in Cloudflare, it works, but I am trying to set this up properly. Next, I tried pointing my domain towards https://localhost, but the cloudflared service running in a container cannot reach any of the other services, as these are located in other containers.
I was thinking: what if I run the cloudflared service within the Traefik container? I believe I could then reach Traefik via localhost.
Do you have any advice on how to achieve a secure tunnel with cert verification? Or is this not realistic when self-hosting?
Current docker compose:
version: '3'
services:
  traefik:
    image: traefik:latest
    command:
      - --log.level=debug
      - --api.insecure=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --serverstransport.insecureskipverify
      - --providers.file.filename=/etc/traefik/dynamic_conf.yml
      - --providers.file.watch=true
    ports:
      - "8080:8080"
      - "443:443"
      - "80:80"
    networks:
      - proxy_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - traefik-data:/etc/traefik
    labels:
      - traefik.enable=true
      - traefik.docker.network=proxy_network
      - traefik.http.routers.traefik.rule=Host(`${DOMAINNAME_TRAEFIK}`)
      - traefik.http.routers.traefik.entrypoints=web
      - traefik.http.routers.traefik.service=traefik
      - traefik.http.services.traefik.loadbalancer.server.port=8080
  tunnel:
    container_name: cloudflared-tunnel
    image: cloudflare/cloudflared
    #restart: unless-stopped
    networks:
      - proxy_network
    command: tunnel --no-autoupdate run --token ${CLOUDFLARED_TOKEN}
  whoami:
    image: traefik/whoami
    container_name: whoami1
    command:
      # Tell whoami to listen on port 2000 instead of the default 80
      - --port=2000
      - --name=iamfoo
    networks:
      - proxy_network
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`${DOMAINNAME}`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls=true
      - traefik.http.routers.whoami.service=whoami
      - traefik.http.services.whoami.loadbalancer.server.port=2000
volumes:
  traefik-data:
    driver: local
networks:
  proxy_network:
    name: proxy_network
    external: true
I expect a secure tunnel solution, and I want to make sure that this architecture is set up in a good way.
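My current thinking: since the tunnel and Traefik share proxy_network, cloudflared should be able to reach Traefik by its service name instead of localhost. For a locally-managed tunnel that would be a config.yml roughly like this (an untested sketch; with my token-based tunnel the same ingress settings would live in the Cloudflare dashboard, and whoami.example.com stands in for ${DOMAINNAME}):
tunnel: <TUNNEL-ID>
credentials-file: /etc/cloudflared/<TUNNEL-ID>.json
ingress:
  # Reach Traefik via its compose service name on the shared network
  - hostname: whoami.example.com
    service: https://traefik:443
    originRequest:
      # Present a SNI/hostname the origin cert is actually valid for
      originServerName: whoami.example.com
  # Catch-all rule, required as the last ingress entry
  - service: http_status:404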
I want to create two Elasticsearch clusters in a single docker-compose file, so that I can test a few changes on the new cluster only.
My docker-compose file looks like this:
version: "2.2"
services:
elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- "9200:9200"
mem_limit: '2048M'
new-elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata2:/usr/share/elasticsearch/data
ports:
- "9400:9200"
mem_limit: '2048M'
search:
image: search:latest
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9400 -jar app.jar
ports:
- "8083:8083"
depends_on:
- elasticsearch-master
- new-elasticsearch-master
mem_limit: '500M'
volumes:
esdata1:
esdata2:
I have one Java service where I configure both hosts via different system properties:
-Delasticsearch.host=elasticsearch-master
-DnewElasticsearch.host=new-elasticsearch-master
But when I run the following code from the Java search service:
new RestTemplate().getForEntity("http://elasticsearch-master:9200/_cat/indices?v",String.class)
it gives me the correct response.
But when I try to connect to the other host on port 9400:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v",String.class)
I get a Connection Refused error.
When I try the same host with port 9200, it gives me a 200 response:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v",String.class)
Can someone please tell me how I can make two different connections with different ports, as below?
http://elasticsearch-master:9200
http://new-elasticsearch-master:9400
Thanks
You got the expected behavior. The ports field in docker-compose maps the ports to your localhost, which means that the "old" Elasticsearch is available via localhost:9200 and the "new" Elasticsearch via localhost:9400.
On the other hand, docker-compose services communicate over an internal network, where the service name is the hostname and the port is the original listening port.
Thus, you were able to access (internally) your old one via http://elasticsearch-master:9200 and the new one via http://new-elasticsearch-master:9200.
If you wish to reach the new Elasticsearch on 9400, you need to change its http.port setting. You can do that like this:
new-elasticsearch-master:
  image: elasticsearch:6.6.0
  volumes:
    - esdata2:/usr/share/elasticsearch/data
  environment:
    - http.port=9400
  ports:
    - "9400:9400"
  mem_limit: '2048M'
Note that you have to change the port mapping as well (because it now maps the new port, 9400, to 9400 on localhost).
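Alternatively, if you only need the two clusters to be distinguishable from within the compose network (not from the host), you can leave http.port alone and simply point your service at the default port, since the hostnames already differ; this matches your observation that port 9200 works:
search:
  image: search:latest
  # Both clusters listen internally on 9200; only the hostname differs
  entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9200 -jar app.jar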
I'm using https://github.com/wernight/docker-ngrok so that my dockerized app is exposed to the internet. I added it to my docker-compose file, but when I bring the containers up I get the error "Failed to complete tunnel connection". When I want to access my app I go to myapp.local, which works fine because I set up the Windows hosts file; when I access http://localhost I see this. I noticed I cannot access the app using localhost, which is why I used the hosts file.
Here is my docker-compose:
web:
  image: nginx:stable
  container_name: webcontainer
  ports:
    - "80:80"
  volumes:
    - ./:/var/www/myapp
    - ./myapp.conf:/etc/nginx/conf.d/myapp.conf
  expose:
    - 9000
  external_links:
    - php
    - db
ngrok:
  image: wernight/ngrok
  links:
    - web
  ports:
    - "4040:4040"
I have an Ubuntu 18.04/NGINX VPS hosting a bunch of Laravel server blocks, all using SSL (certbot).
I wanted to deploy Nextcloud via Docker Compose on the same VPS:
version: "3"
services:
proxy:
image: jwilder/nginx-proxy:alpine
labels:
# labels needed by lets encrypt to identify container to generate certs in
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
container_name: nextcloud-proxy
networks:
- nextcloud_network
ports:
- 80:80
- 443:443
volumes:
- ./proxy/conf.d:/etc/nginx/conf.d:rw
- ./proxy/vhost.d:/etc/nginx/vhost.d:rw
- ./proxy/html:/usr/share/nginx/html:rw
- ./proxy/certs:/etc/nginx/certs:ro
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: unless-stopped
letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion:v1.12.1
container_name: nextcloud-letsencrypt
depends_on:
- proxy
networks:
- nextcloud_network
volumes:
- ./proxy/certs:/etc/nginx/certs:rw
- ./proxy/vhost.d:/etc/nginx/vhost.d:rw
- ./proxy/html:/usr/share/nginx/html:rw
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: unless-stopped
db:
image: mariadb:10.5.1
container_name: nextcloud-mariadb
networks:
- nextcloud_network
volumes:
- ./db:/var/lib/mysql
- ./dbdumps:/var/dbdumps
- /etc/localtime:/etc/localtime:ro
environment:
- MYSQL_ROOT_PASSWORD=... # set me
- MYSQL_PASSWORD=... # set me
- MYSQL_DATABASE=... # set me
- MYSQL_USER=... # set me
restart: unless-stopped
redis:
container_name: nextcloud-redis
image: redis:5.0.8
restart: unless-stopped
networks:
- nextcloud_network
volumes:
- ./redis/data:/data
command: ["redis-server", "--appendonly yes"]
app:
image: nextcloud:18.0.2
container_name: nextcloud-app
networks:
- nextcloud_network
depends_on:
- letsencrypt
- proxy
- redis
- db
volumes:
- ./nextcloud:/var/www/html
- ./app/config:/var/www/html/config
- ./app/custom_apps:/var/www/html/custom_apps
- ./app/data:/var/www/html/data
- ./app/themes:/var/www/html/themes
- /etc/localtime:/etc/localtime:ro
environment:
- VIRTUAL_HOST=YOURDOMAINHERE # set me
- LETSENCRYPT_HOST=YOURDOMAINHERE # set me
- LETSENCRYPT_EMAIL=you#example.com # set me
restart: unless-stopped
networks:
nextcloud_network:
driver: bridge
When I run this I get:
ERROR: for 3f210d699b80_nextcloud-proxy Cannot start service proxy: driver failed programming
external connectivity on endpoint nextcloud-proxy
(2d76e425c94abb95da70a7d903bf8830d4e9192a512e17db1b39f76da85c7b97): Error starting userland proxy:
listen tcp 0.0.0.0:443: bind: address already in use
ERROR: for proxy Cannot start service proxy: driver failed programming external connectivity on
endpoint nextcloud-proxy (2d76e425c94abb95da70a7d903bf8830d4e9192a512e17db1b39f76da85c7b97): Error
starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
ERROR: Encountered errors while bringing up the project.
This is because port 443 is already in use.
If I stop NGINX on the VPS and run docker-compose up -d again, everything is OK and the Nextcloud service is accessible via its URL.
I tried changing the external ports to
  - 8080:80
  - 4444:443
and rebuilding. Then I don't see the above error, but everything is messed up: the URL points to the wrong domain...
Is it possible to tweak the proxy container settings somehow to resolve this?
Two services cannot listen on the same port, as you have found. Your Laravel applications are already listening on ports 80/443, so when you start your Nextcloud containers, the proxy won't be able to bind to those ports.
You'll have to have your jwilder/nginx-proxy:alpine act as a proxy for both the Nextcloud container and the Laravel servers. This can be done via your nginx configuration files, mounted into the container (you already seem to be using the ./proxy/ directory for this):
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
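For example, a custom vhost file dropped into the mounted ./proxy/conf.d directory could forward a Laravel domain through the same proxy. A rough, untested sketch: laravel.example.com is a placeholder, it assumes the Laravel app is moved to 127.0.0.1:8000 on the host, and 172.17.0.1 is the host as seen from the default Docker bridge, which may differ in your setup:
# ./proxy/conf.d/laravel-example.conf
server {
    listen 80;
    server_name laravel.example.com;
    location / {
        # Forward to the Laravel app now listening on the host
        proxy_pass http://172.17.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}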
Alternatively, if your VPS can have two IP addresses, then you can bind the Laravel applications to one interface and your Nextcloud proxy to the other, which also solves the problem. The first method is better practice, as it lets you scale the server without adding another IP address per application.
https://docs.docker.com/config/containers/container-networking/
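If you did go the two-address route, the binding can be expressed directly in the compose port mappings (a sketch; 203.0.113.2 is a placeholder for the second address):
proxy:
  ports:
    # Bind the Nextcloud proxy to the second address only,
    # leaving the first one free for the host NGINX
    - "203.0.113.2:80:80"
    - "203.0.113.2:443:443"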
Hey guys, I have a Docker container A, with a domain name attached to it, on a host B that has a domain name attached to it as well. How can I access container A via A's domain name, rather than via B's IP address or domain name, from a computer C on host B's local network?
Thus C -> A (via www.cname.url) rather than C -> B (www.bname.url:port) -> A.
E.g. the following is a docker-compose file with my services:
version: "3.2"
services:
php:
links:
- mysql
image: arm32v6/php:7.1.24-fpm-alpine3.8-lavalite
networks:
- backend
working_dir: /var/www/html
volumes:
- ./website/:/var/www/html/
privileged: true
node:
domainname: docker.local
hostname: node
networks:
frontend:
aliases:
- node.docker.local
links:
- "apache:dev.docker.local"
depends_on:
- apache
image: arm32v7/node:latest
entrypoint: yarn
command: twill-dev
volumes:
- ./website:/usr/src/app
working_dir: /usr/src/app
ports:
- "3000:3000"
- "3001:3001"
apache:
domainname: docker.local
hostname: dev
image: arm32v7/httpd:2.4
depends_on:
- php
- mysql
networks:
frontend:
aliases:
- apache
- dev.docker.local
backend:
aliases:
- apache
privileged: true
ports:
- "8880:80"
working_dir: /var/www/html
volumes:
- ./website/:/var/www/html/
- ./httpd.conf:/usr/local/apache2/conf/httpd.conf
- ./fpm.conf:/usr/local/apache2/conf/extra/httpd-vhosts.conf
mysql:
image: yobasystems/alpine-mariadb:arm32v7
volumes:
- ./datadir:/var/lib/mysql
networks:
- backend
environment:
- MYSQL_VERSION=5.7
- MYSQL_ROOT_PASSWORD=rootpassword
- MYSQL_USER=test
- MYSQL_PASSWORD=testpass
- MYSQL_DATABASE=test_db
networks:
frontend:
external:
name: localnet
backend:
I want to be able to access the apache service by its domain name, which is set to dev.docker.local and whose IP is on the 17.18.0.1/24 network.
The host has an IP on the 192.168.1.0/24 network, with the domain name dev.server.local.
I have a dev PC on the 192.168.1.0/24 network, and it can access the service containers via the host's IP and the port exposed for the particular service.
UPDATE
The host can be reached at server.local from outside the network
My network interface has the following entries:
dns-search server.local
dns-domain server.local
The Docker container has the following:
hostname nginx
domainname server.local
Do I need to also edit a hosts file or resolv.conf?
It seems the host is running the avahi service-discovery daemon. Would this affect anything?
So, can I:
set an internal domain on the host and have the Docker containers on subdomains? How would outside devices access these via the domain?
attach the Docker container to the host's network, so that it has an IP in 192.168.1.0/24 and can be pinged by devices on that network (see the sketch after this list)? Will the domain resolve to it?
Or is there dynamic DNS software I can use to hook this up for me, so that it's not a manual process? That is, something that detects the server and routes incoming requests to it via the domain name.
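Regarding the second point, the mechanism I have been looking at is a macvlan network, which gives a container its own address on the LAN (an untested sketch; eth0, the subnet, and 192.168.1.50 are assumptions for my setup):
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0            # host NIC on the 192.168.1.0/24 network
    ipam:
      config:
        - subnet: 192.168.1.0/24
services:
  apache:
    networks:
      lan:
        # Free address on the LAN picked for the container
        ipv4_address: 192.168.1.50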
You can do this by configuring an nginx container, with the other containers bound to subdomains.
So, for example, if the host is accessible via the domain example.com and you want the php container to be accessible on php.example.com, you could use a setup like the following:
services:
  php:
    image: arm32v6/php:7.1.24-fpm-alpine3.8-lavalite
    environment:
      - VIRTUAL_HOST=php.example.com
  nginx-proxy:
    image: jwilder/nginx-proxy
    depends_on:
      - php
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
Any request to the subdomain is first sent to the host; nginx is bound to port 80 there, registers that the subdomain php was requested, and sends the user on to that container. (This assumes php.example.com resolves to the host in the first place, e.g. via a wildcard DNS record or a hosts-file entry on the client.)
I hope this helps you, and if you have any questions please let me know.