I have set up multiple containers on my server, and I want to access them through friendlier URLs, so I've also set up nginx-proxy-manager. Every other container is accessible except for pgadmin4.
Other containers like Grafana or Prometheus are reachable through nginx-proxy-manager, but pgadmin4 can only be accessed by hitting IP:PORT directly. Visiting pgadmin.keivanipchihagh.ir gives me: Bad gateway, error code 502 (visit the site to see the error for yourself).
My pgadmin4 docker-compose.yml:
version: '3.5'

services:
  # pgadmin4
  pgadmin:
    container_name: ${PGADMIN_CONTAINER_NAME:-pgadmin}
    image: dpage/pgadmin4:6.13
    restart: unless-stopped
    user: "$UID:$GID"
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD}
      PGADMIN_CONFIG_SERVER_MODE: 'True'
    volumes:
      - ./pgadmin-data:/var/lib/pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - epd

networks:
  epd:
    external: true
My nginx-proxy-manager config is set up just like that of every other container I've already configured.
Any ideas?
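A 502 from nginx-proxy-manager usually means the proxy cannot reach the upstream container. A common cause (a hedged guess, since the proxy-host config isn't shown): the nginx-proxy-manager container is not attached to the same epd network, or the proxy host forwards to the published host port instead of the container port. A minimal sketch of the network attachment, assuming nginx-proxy-manager runs from its own compose file:

services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    networks:
      - epd   # join the same external network as pgadmin

networks:
  epd:
    external: true

With both containers on epd, the proxy host's forward hostname would be pgadmin (the container name) and the forward port 80 (the container port), not the published 5050.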
Related
I'm running a server that I want to set up to provide several web services. One service is WikiJS.
I want the service to be accessible only through nginx-proxy-manager via a subdomain, not by accessing the server's IP (and port) directly.
My attempt was:
version: "3"
services:
nginxproxymanager:
image: 'jc21/nginx-proxy-manager:latest'
restart: unless-stopped
ports:
# These ports are in format <host-port>:<container-port>
- '80:80' # Public HTTP Port
- '443:443' # Public HTTPS Port
- '8181:81' # Admin Web Port
# Add any other Stream port you want to expose
# - '21:21' # FTP
# Uncomment the next line if you uncomment anything in the section
# environment:
# Uncomment this if you want to change the location of
# the SQLite DB file within the container
# DB_SQLITE_FILE: "/data/database.sqlite"
# Uncomment this if IPv6 is not enabled on your host
# DISABLE_IPV6: 'true'
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt
networks:
- reverseproxy-nw
db:
image: postgres:11-alpine
environment:
POSTGRES_DB: wiki
POSTGRES_PASSWORD: ###DBPW
POSTGRES_USER: wikijs
logging:
driver: "none"
restart: unless-stopped
volumes:
- db-data:/var/lib/postgresql/data
networks:
- reverseproxy-nw
wiki:
image: requarks/wiki:2
depends_on:
- db
environment:
DB_TYPE: postgres
DB_HOST: db
DB_PORT: 5432
DB_USER: wikijs
DB_PASS: ###DBPW
DB_NAME: wiki
restart: unless-stopped
ports:
- "3001:3000"
networks:
- reverseproxy-nw
volumes:
db-data:
networks:
reverseproxy-nw:
external: true
In nginx-proxy-manager I then tried to use "wikijs" as the forwarding host.
The service is accessible via http://publicip:3001, but not via the subdomain assigned in nginx-proxy-manager. I only get a 502, which usually means that nginx-proxy-manager cannot reach the given service.
What do I have to change to make the service available under the domain, but not via http://publicip:3001?
Thanks in advance.
OK, I finally found out what my conceptual problem was:
I needed to create a network bridge for the two containers. It was as simple as specifying the driver of the network:
networks:
  reverseproxy-nw:
    driver: bridge
This way the wikijs container is only available through nginx, as I want it to be.
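One caveat worth adding (a sketch, not from the original answer): as long as the wiki service keeps its ports: "3001:3000" mapping, it still listens on the host at :3001 regardless of the network driver. Dropping the host port publish keeps it reachable only over the shared network:

  wiki:
    image: requarks/wiki:2
    depends_on:
      - db
    restart: unless-stopped
    # no "ports:" entry: nothing listens on the host's :3001;
    # nginx-proxy-manager forwards to the container's internal port 3000
    networks:
      - reverseproxy-nw

In nginx-proxy-manager, the forward host would then be the compose service name (here wiki) with forward port 3000.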
I have a unique situation where I need to access a container over a custom local domain (example.test), which I've added to my /etc/hosts file pointing to 127.0.0.1. The library I'm using for OIDC uses this domain for redirecting the browser, and if it were an internal Docker hostname, the browser obviously would not resolve it.
I've tried pointing the container at example.test, but it says it cannot connect. I've also tried looking up the private IP of the Docker network, and that just times out.
Add network_mode: host to the service definition of the calling application in the docker-compose.yml file. This routes calls to localhost to the server's localhost instead of the container's localhost.
E.g.
docker-compose.yml
version: '3.7'

services:
  mongodb:
    image: mongo:latest
    restart: always
    logging:
      driver: local
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DB_ADMIN_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${DB_ADMIN_PASSWORD}
    ports:
      - 27017:27017
    volumes:
      - mongodb_data:/data/db

  callingapp:
    image: <some-img>
    restart: always
    logging:
      driver: local
    env_file:
      - callingApp.env
    # note: a "ports:" mapping would be ignored under network_mode: host;
    # the app's port is bound directly on the host instead
    depends_on:
      - mongodb
    network_mode: host  # << Add this line

  app:
    image: <another-img>
    restart: always
    logging:
      driver: local
    depends_on:
      - mongodb
    env_file:
      - app.env
    ports:
      - ${APP_PORT}:${APP_PORT}

volumes:
  mongodb_data:
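If host networking is too blunt an instrument (it is Linux-only, and port mappings are ignored under it), an alternative sketch that keeps bridge networking is to map the custom domain to the Docker host from inside the container; Docker 20.10+ supports the special host-gateway value for this:

services:
  callingapp:
    image: <some-img>
    extra_hosts:
      # resolves example.test to the host's gateway IP inside the container
      - "example.test:host-gateway"

The browser still resolves example.test via the host's /etc/hosts, while the container resolves it to the host, so both ends reach the same server.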
I have a BackupPC service running locally, e.g. at 127.0.0.1:8081.
I can also reach it directly at http://172.23.0.4 (the container IP).
docker-compose.yml
version: '3.7'

services:
  backuppc-app:
    image: tiredofit/backuppc
    container_name: backuppc-app
    ports:
      - "8081:80"
      - "8082:10050"
    environment:
      - BACKUPPC_UUID=1000
      - BACKUPPC_GUID=1000
    restart: always
    depends_on:
      - backuppc-mysql
    networks:
      - nginx-proxy
I want to assign it a hostname, something like:

hostname: backup.local

I tried adding it, but it doesn't work as expected:
  backuppc-app:
    image: tiredofit/backuppc
    container_name: backuppc-app
    hostname: backup.local
Should I manually edit my local /etc/hosts?
172.23.0.4 backup.local
You can add a hostname as a network alias:
version: '3.7'

services:
  backuppc-app:
    networks:
      nginx-proxy:
        aliases:
          - backup.local
For containers in the nginx-proxy network, it will be available both as backuppc-app and as backup.local.
If you want that hostname to be visible to your host, you need to modify the hosts file. But don't put the container IP there, since it can change. Instead, add it as another name for localhost:
127.0.0.1 localhost myhostname backup.local
Then you can access it both at localhost:8081 and at backup.local:8081 (that works thanks to the port forwarding you've declared with the ports: key).
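For example (a sketch; the monitoring service, its image, and the BACKUPPC_URL variable are hypothetical), another container on the same network could then address BackupPC by the alias:

services:
  monitoring:
    image: <some-monitoring-img>   # hypothetical
    environment:
      BACKUPPC_URL: http://backup.local   # resolves via the network alias
    networks:
      - nginx-proxy

networks:
  nginx-proxy:
    external: true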
I am running multiple Docker containers. I want to invoke a Hasura GraphQL API running in one container from a Node.js application running in another container. I am unable to use the same URL (http://<ip-address>/v1/graphql) that I use to access the Hasura API from the browser.
I tried http://localhost/v1/graphql, but that does not work either.
The following is the docker-compose file for the Hasura GraphQL stack:
version: '3.6'

services:
  postgres:
    image: postgis/postgis:12-master
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: <postgrespassword>

  pgadmin:
    image: dpage/pgadmin4
    restart: always
    depends_on:
      - postgres
    ports:
      - 5050:80
    ## you can change pgAdmin default username/password with below environment variables
    environment:
      PGADMIN_DEFAULT_EMAIL: <email>
      PGADMIN_DEFAULT_PASSWORD: <pass>

  graphql-engine:
    image: hasura/graphql-engine:v1.3.0-beta.3
    depends_on:
      - "postgres"
    restart: always
    environment:
      # database url to connect
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      # enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set "false" to disable console
      ## uncomment next line to set an admin secret key
      HASURA_GRAPHQL_ADMIN_SECRET: <secret>
      HASURA_GRAPHQL_UNAUTHORIZED_ROLE: anonymous
      HASURA_GRAPHQL_JWT_SECRET: '{ some secret }'
    command:
      - graphql-engine
      - serve

  caddy:
    image: abiosoft/caddy:0.11.0
    depends_on:
      - "graphql-engine"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/Caddyfile
      - caddy_certs:/root/.caddy

volumes:
  db_data:
  caddy_certs:
The caddy file has the following configuration:
# replace :80 with your domain name to get automatic https via LetsEncrypt
:80 {
  proxy / graphql-engine:8080 {
    websocket
  }
}
What API endpoint should I use from another Docker container (not present in this docker-compose file) to access the Hasura API? From the browser I use http://<ip-address>/v1/graphql.
What does the caddy configuration actually do here?
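One approach (a sketch, assuming the Hasura stack's compose project is named hasura, so its default network is hasura_default; verify with docker network ls): attach the Node.js container to that network and call the graphql-engine service by name on its internal port:

version: '3.6'

services:
  nodeapp:
    image: <your-node-img>   # placeholder
    environment:
      # assumption: the app reads its endpoint from an env var like this
      GRAPHQL_URL: http://graphql-engine:8080/v1/graphql
    networks:
      - hasura-net

networks:
  hasura-net:
    external: true
    name: hasura_default   # assumed network name

As for Caddy: it simply reverse-proxies ports 80/443 on the host to graphql-engine:8080 (with WebSocket support), which is why the browser can use http://<ip-address>/v1/graphql while other containers cannot, unless they share a network with graphql-engine.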
I have several domains sharing one public IP (an EC2 instance). My setup is like this:
/home/ubuntu contains docker-compose.yml:
version: '3'

services:
  nginx-proxy:
    image: "jwilder/nginx-proxy"
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
    restart: "always"
This creates a network named ubuntu_default, which other compose instances can join. The nginx-proxy image creates reverse proxies for these other compose instances, so you can visit example.com and be routed to the appropriate UI within the appropriate compose instance.
/home/ubuntu/example.com/project-1 contains a docker-compose.yml like:
version: '3'

services:
  db:
    build: "./db" # mongo
    volumes:
      - "./data:/data/db"
    restart: "always"
  api:
    build: "./api" # a node backend
    ports:
      - "9005:9005"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8005:8005"
    restart: "always"
    environment:
      - VIRTUAL_HOST=project-1.example.com # this tells nginx-proxy which domain to proxy
      - VIRTUAL_PORT=8005 # this tells nginx-proxy which port to proxy

networks:
  default:
    external:
      name: ubuntu_default
/home/ubuntu/testing.com/project-2 contains a docker-compose.yml like:
version: '3'

services:
  db:
    build: "./db" # postgres
    volumes:
      - "./data:/var/lib/postgresql/data"
    restart: "always"
  api:
    build: "./api" # a python backend
    ports:
      - "9000:9000"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8000:8000"
    restart: "always"
    environment:
      - VIRTUAL_HOST=testing.com,www.testing.com # tells nginx-proxy which domains to proxy
      - VIRTUAL_PORT=8000 # tells nginx-proxy which port to proxy

networks:
  default:
    external:
      name: ubuntu_default
So basically:
project-1.example.com:80 forwards to the UI running on :8005
project-1.example.com:80/api forwards to the API running on :9005
testing.com forwards to the UI running on :8000
testing.com/api forwards to the API running on :9000
...and that all works perfectly as long as I only run one at a time. The moment I start both compose instances, the /api URLs start clashing. I can sit on one of them and refresh repeatedly; sometimes I'll see the one for example.com/api and sometimes the one for testing.com/api.
I have no idea what's going on at this point. Maybe the premise I'm working from is fundamentally flawed, but this seems like an intended use of Docker/Compose. I'm open to suggestions for accomplishing the same thing another way.
Docker containers communicate using DNS lookups on their network. If multiple containers have the same alias on the same network, Docker round-robin load balances between them with each new connection. If you don't want containers to talk to each other, you don't want them on the same Docker network. The good news is that you can solve this by using more than one network, and by not putting the api and db services on the front-end proxy network:
version: '3'

services:
  db:
    build: "./db" # postgres
    volumes:
      - "./data:/var/lib/postgresql/data"
    restart: "always"
  api:
    build: "./api" # a python backend
    ports:
      - "9000:9000"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8000:8000"
    restart: "always"
    networks:
      - default
      - proxy
    environment:
      - VIRTUAL_HOST=testing.com,www.testing.com # tells nginx-proxy which domains to proxy
      - VIRTUAL_PORT=8000 # tells nginx-proxy which port to proxy

networks:
  proxy:
    external:
      name: ubuntu_default
If you do not override the default network, docker will create one for your compose project and use it for any containers not assigned to another network.
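For completeness, the implicit default network behaves as if it were declared explicitly like this (a sketch):

networks:
  default:
    driver: bridge   # project-scoped; db and api are only reachable here
  proxy:
    external:
      name: ubuntu_default   # shared with the nginx-proxy container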