How to use zabbix-web-nginx-mysql with existing nginx container? - docker

I am trying to use docker on my Debian server. There are several sites using the Django framework. Every project runs in its own container with gunicorn, a single nginx container works as a reverse proxy, and the data is stored in a mariadb container. Everything works correctly. Now I need to add the Zabbix monitoring system to the server, so I use the zabbix-server-mysql image as the Zabbix backend and the zabbix-web-nginx-mysql image as the frontend. The backend runs successfully, but the frontend fails with errors such as "can't bind to 0.0.0.0:80: port is already allocated", and nginx refuses connections to the domains. As I understand it, zabbix-web-nginx-mysql creates another nginx container, which causes the problems. Is there a right way to use the Zabbix images with an existing nginx container?

I have an nginx reverse proxy installed on the host, which I use to proxy requests into the containers. I have a working configuration for dockerized Zabbix with the setup below (I have omitted the environment variables).
Port 80 of the Zabbix web application is published on another port (127.0.0.1:11011), which is the address the host nginx proxy_pass points at. Here is the configuration:
version: '2'
services:
  zabbix-server4:
    container_name: zabbix-server4
    image: zabbix/zabbix-server-mysql:alpine-4.0.5
    user: root
    networks:
      zbx_net:
        aliases:
          - zabbix-server4
          - zabbix-server4-mysql
        ipv4_address: 172.16.238.5
  zabbix-web4:
    container_name: zabbix-web4
    image: zabbix/zabbix-web-nginx-mysql:alpine-4.0.5
    ports:
      - 127.0.0.1:11011:80
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-web4
          - zabbix-web4-nginx-alpine
          - zabbix-web4-nginx-mysql
        ipv4_address: 172.16.238.10
  zabbix-agent4:
    container_name: zabbix-agent4
    image: zabbix/zabbix-agent:alpine-4.0.5
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-agent4
        ipv4_address: 172.16.238.15
networks:
  zbx_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1

Related

docker nginx reverse proxy 503 Service Temporarily Unavailable

I want to use nginx as a reverse proxy for remote access to my home automation.
My infrastructure yaml looks as follows:
# /infrastructure/docker-compose.yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: proxy
    networks:
      - raspberry_network
    ports:
      - 80:80
      - 443:443
    environment:
      - ENABLE_IPV6=true
      - DEFAULT_HOST=${RASPBERRY_IP}
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d
      - ./proxy/vhost.d:/etc/nginx/vhost.d
      - ./proxy/html:/usr/share/nginx/html
      - ./proxy/certs:/etc/nginx/certs
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
networks:
  raspberry_network:
My yaml containing the app configuration looks like this:
# /apps/docker-compose.yaml
version: '3'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/raspberrypi4-homeassistant:stable
    volumes:
      - ./homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    environment:
      - 'TZ=Europe/Berlin'
      - 'VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST}'
      - 'VIRTUAL_PORT=8123'
    deploy:
      resources:
        limits:
          memory: 250M
    restart: unless-stopped
    networks:
      - infrastructure_raspberry_network
    ports:
      - '8123:8123'
networks:
  infrastructure_raspberry_network:
    external: true
Via Portainer I validated that both containers are connected to the same network. However, when accessing the local IP of my Raspberry Pi, 192.168.0.10, I receive "503 Service Temporarily Unavailable". When I try accessing my app via the virtual host domain xxx.xxx.de, it doesn't work either.
Any idea what the issue might be, or how to debug this further?
You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure that both containers are on the same network (or the docker bridge network).
Also make sure that any containers that specify VIRTUAL_HOST are running before the nginx-proxy container starts. With docker-compose, this can be achieved by adding them to the depends_on config of the nginx-proxy container, as sketched below.
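A minimal sketch, assuming both services were defined in the same compose file (depends_on cannot reference a service from another compose project):
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    depends_on:
      - homeassistant   # start the app first so its VIRTUAL_HOST is registered
  homeassistant:
    image: homeassistant/raspberrypi4-homeassistant:stable
    environment:
      - VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST}
      - VIRTUAL_PORT=8123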

Local Communication Between Services

I have 2 services inside my docker cluster. frontend runs on port 8090, and backend runs on port 8000. How can I make frontend call backend via a local DNS name, like fetch('https://backend.local/')? If I use the docker hostname, I need to specify the port to call the backend. Do I need to run a local DNS server inside docker?
You have to create a user-defined network in docker; all containers running on that network can then communicate with each other using their container names, or you can define an alias for each and use that instead. A simple docker-compose file for a backend microservice and a MySQL database can be created using the configuration below.
version: '3.2'
networks:
  testNetwork:
services:
  mysql-dev:
    image: mysql:latest
    container_name: mysql-dev
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=root
    ports:
      - "3306:3306"
    networks:
      - testNetwork
  backend:
    image: backend:1.0
    container_name: backend
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_NAME=root
      - DB_HOST=mysql-dev
      - DB_DIALECT=mysql
    ports:
      - "4000:4000"
    working_dir: /backend
    command: npm start
    networks:
      - testNetwork
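Following the same pattern, a frontend service could join testNetwork and reach the backend by its service name; the image name and the API_URL variable below are assumptions for illustration:
services:
  frontend:
    image: frontend:1.0              # assumed image name
    container_name: frontend
    ports:
      - "8090:8090"
    networks:
      - testNetwork
    environment:
      # reaches the backend by service name via Docker's embedded DNS
      - API_URL=http://backend:4000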

docker-compose connection between containers

I have 3 containers: my bot, a server and a db. After docker-compose up, the server and db are working. The telegram bot makes a GET request and gets this error:
Get "http://localhost:8080/user/": dial tcp 127.0.0.1:8080: connect: connection refused
docker-compose.yml
version: "3"
services:
  db:
    image: postgres
    container_name: todo_postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      # TODO: Change it to environment variables
      POSTGRES_USER: user
      POSTGRES_DB: somedb
      POSTGRES_PASSWORD: pass
  server:
    depends_on:
      - db
    build: .
    restart: always
    ports:
      - 8080:8080
    environment:
      DB_NAME: somedb
      DB_USERNAME: user
      DB_PASSWORD: pass
  bot:
    depends_on:
      - server
    build: ./src/telegram_bot
    environment:
      BOT_TOKEN: TOKEN
    restart: always
    links:
      - server
When using compose, try using the container's hostname; in this case your bot should connect to
server:8080
Compose will handle the name resolution to the IP you need.
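A minimal sketch of the bot service using that hostname instead of localhost (SERVER_URL is an assumed variable name, not something from the original file):
  bot:
    depends_on:
      - server
    build: ./src/telegram_bot
    restart: always
    environment:
      BOT_TOKEN: TOKEN
      # assumed variable name; the bot would read this instead of hard-coding localhost
      SERVER_URL: http://server:8080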
What you are trying to do is access localhost from within your container (service) bot.
Maybe this answer will help you solve the problem. It sounds similar to your issue.
But I want to offer another solution to your problem:
In case you don't need to access the containers from outside (from your host), one approach would be to make use of the expose functionality and a docker network.
See docs.docker.com: network.
The expose functionality allows you to access your other containers within the network.
See docs.docker.com: expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Example
What is this example doing?
A couple of steps that are not mandatory:
Set a static IP within your docker containers.
These steps are not needed and can be omitted. However, I like to do this, since it gives you better control over the network. You can still access the containers by their hostname (which is the container name or service name) as well.
The steps that are needed are the following:
This exposes port 8080, but does not publish it:
expose:
  - 8080
The network that allows static IP configuration:
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
A complete file could look similar to this:
version: "3.8"
services:
  first-service:
    image: <your-image>
    networks:
      vpcbr:
        ipv4_address: 10.5.0.2
    expose:
      - 8080
  second-service:
    image: <your-image>
    networks:
      vpcbr:
        ipv4_address: 10.5.0.3
    depends_on:
      - first-service
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
Your bot container is up before your server & db containers.
When you use depends_on, it does not actually wait for them to finish setting themselves up.
You need some mechanism to wait until the other container has finished its setup.
I remember that when I used an nginx proxy I used something called wait-for-it.sh, as sketched below.
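A minimal sketch of that approach, assuming wait-for-it.sh is copied into the bot image and the bot's normal start command is ./bot (both assumptions):
  bot:
    build: ./src/telegram_bot
    depends_on:
      - server
    environment:
      BOT_TOKEN: TOKEN
    # wait until server answers on port 8080, then start the bot binary
    command: ["./wait-for-it.sh", "server:8080", "--", "./bot"]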

Docker Compose Nginx Link containers

I have a docker-compose setup with a container that runs Nginx. The hosted site is just a .test domain, like example.test.
Nginx also runs a location proxy in that container and redirects it to example.test:8000. But it's not able to connect to that, because it is actually being hosted from a different container on the same system (all bridged networks).
How can I let the containers communicate using example.test domain?
Or if I can't get them to communicate via example.test then how can I link them so they can use their docker-compose service name such as api or frontend?
Docker compose:
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  django:
    build: ./api
    command: ["./docker_up.sh"]
    restart: always
    volumes:
      - ./api:/app/api
      - api-static:/app/api/staticfiles
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - MODE=DEV
volumes:
  frontend-build:
  api-static:
  certificates:
2nd compose file (run together):
version: '3'
services:
  django:
    environment:
      - MODE=PROD
    #links:
    #  - hosting
  hosting:
    build: ./hosting
    restart: always
    network_mode: bridge
    volumes:
      - frontend-build:/var/www
    ports:
      - "80:80"
      - "443:443"
    environment:
      - MODE=PROD
    #links:
    #  - django
volumes:
  frontend-build:
With the current settings I get an error when I run it:
ERROR: for 92b89f848637_opensrd_hosting_1 Cannot start service hosting: Cannot link to /opensrd_django_1, as it does not belong to the default network
Edit: Altered docker-compose.prod.yml:
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
services:
  django:
    environment:
      - MODE=PROD
    networks:
      app_net:
        ipv4_address: 172.16.238.10
But this gives me an error.
ERROR: The Compose file './docker-compose.prod.yml' is invalid because:
networks.app_net value Additional properties are not allowed ('config' was unexpected)
networks.app_net.ipam contains an invalid type, it should be an object
So I tried the options given by #trust512 and #DimaL, and those didn't work.
However, after deleting the networks and links from my compose files, and removing the existing default network and built containers, it worked, and I can now refer between containers using db, django, and hosting.
The only other thing I changed was the compose file version, from 3 to 3.5.
These are the final files for anyone interested:
version: '3.5'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  django:
    build: ./api
    command: ["./docker_up.sh"]
    restart: always
    volumes:
      - ./api:/app/api
      - api-static:/app/api/staticfiles
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - MODE=DEV
volumes:
  frontend-build:
  api-static:
docker-compose.prod.yml:
version: '3.5'
services:
  django:
    environment:
      - MODE=PROD
  hosting:
    build: ./hosting
    restart: always
    volumes:
      - frontend-build:/var/www
    ports:
      - "80:80"
      - "443:443"
    environment:
      - MODE=PROD
volumes:
  frontend-build:
You can use external_links (https://docs.docker.com/compose/compose-file/#external_links) or try to put all containers on the same virtual network.
As far as I understand you just want them (django and nginx) to be linked across composes?
Then a native solution would be to use external_links, as documented here.
And use it like this:
services:
  [...]
  hosting:
    [...]
    external_links:
      - django_1:example
    [...]
Where django_1 stands for the container name created by the compose file you provided, and example is the alias under which that container will be visible inside the hosting container.
Alternatively, you can point the example.test domain to a specific address by editing your /etc/hosts (provided you work on Linux/macOS),
for example by adding a record like
172.16.238.10 example.test
Where the address above would point to your django application (container).
The above can be achieved without altering your /etc/hosts by using the native compose option extra_hosts, documented here.
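A minimal sketch of that option, assuming the django container is pinned to 172.16.238.10 as in the static-IP listing below:
services:
  hosting:
    extra_hosts:
      - "example.test:172.16.238.10"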
Additionally, if you stick to the /etc/hosts or extra_hosts solution and prefer a static IP address for your django/nginx containers, you can use another native compose feature that assigns a static IP to chosen services, as shown in the documentation linked here.
An adjusted listing from the linked documentation:
services:
  [...]
  django:
    [...]
    networks:
      app_net:
        ipv4_address: 172.16.238.10
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24

Customized port is not functional in Docker-compose

I have a dumb question regarding docker-compose.
The current scenario is that I'm trying to use my reverse_proxy to talk to my frontend_server. The reverse_proxy redirects to the frontend_server as follows:
Suppose I receive http://${REV_IP}:${REV_PORT}; it should redirect me to http://${FE_IP}:${FE_PORT}, but instead it redirects me to port 15000.
(PROXIED_FRONTEND is http://${FE_IP}:${FE_PORT}; this environment variable is used for the redirection.)
Here is the relevant snippet of my docker-compose.yml:
version: '3'
services:
  reverse_proxy:
    image: "${ARTIFACTORY}/template-reverse-proxy:${BRANCH}-${REV_TAG}"
    networks:
      nucleus-network:
        ipv4_address: ${REV_IP}
    ports:
      - "${REV_PORT}:15999"
    environment:
      - KEYFILE_REVPROXY=${REV_KEY}
      - CERTFILE_REVPROXY=${REV_CERT}
      - PUBLIC_URL=${PUBLIC_URL}
      - PUBLIC_API_URL=${PUBLIC_API_URL}
      - PROXIED_FRONTEND=${PROXIED_FRONTEND}
      - PROXIED_PDF=${PROXIED_PDF}
    depends_on:
      - frontend_server
  frontend_server:
    image: "${ARTIFACTORY}/fe_server:${BRANCH}-${PDF_TAG}"
    ports:
      - "${FE_PORT}:15000"
    networks:
      nucleus-network:
        ipv4_address: ${FE_IP}
    environment:
      - FILEPATH_FE_SERVER=${FILEPATH_FE_SERVER}
    volumes:
      - "/home/lluo/dist_share:/app/dist"
    depends_on:
      - frontend_static
networks:
  nucleus-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: ${SUB_NET}
There is no need to publish the frontend_server port to the outside world (via ports:), unless you want to access it directly (e.g. for debugging).
Since you use docker-compose, it creates an internal docker network in which the containers see each other.
The only thing you need to do is set up your reverse proxy so that the proxied backend points to http://frontend_server:15000, and you're good to go. The internal docker DNS will resolve the frontend_server service name to the appropriate container IP address.
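For this particular compose file, that would mean something like the following (a sketch only; PROXIED_FRONTEND taking a full URL is an assumption based on the question):
  reverse_proxy:
    environment:
      # assumption: PROXIED_FRONTEND takes a full URL, as described in the question
      - PROXIED_FRONTEND=http://frontend_server:15000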
For reference and more info, see this question and the links provided there: https://serverfault.com/questions/800689/how-to-use-haproxy-in-load-balancing-and-as-a-reverse-proxy-with-docker
