I have a simple question about using docker-compose.
The scenario: my reverse_proxy container needs to talk to my frontend_server container. Inside the reverse_proxy, requests are redirected to the frontend_server like this: a request to http://${REV_IP}:${REV_PORT} should be redirected to http://${FE_IP}:${FE_PORT}, but instead it redirects me to port 15000.
(PROXIED_FRONTEND is http://${FE_IP}:${FE_PORT}; this environment variable is what drives the redirection.)
Here is the relevant snippet of my docker-compose.yml:
version: '3'
services:
  reverse_proxy:
    image: "${ARTIFACTORY}/template-reverse-proxy:${BRANCH}-${REV_TAG}"
    networks:
      nucleus-network:
        ipv4_address: ${REV_IP}
    ports:
      - "${REV_PORT}:15999"
    environment:
      - KEYFILE_REVPROXY=${REV_KEY}
      - CERTFILE_REVPROXY=${REV_CERT}
      - PUBLIC_URL=${PUBLIC_URL}
      - PUBLIC_API_URL=${PUBLIC_API_URL}
      - PROXIED_FRONTEND=${PROXIED_FRONTEND}
      - PROXIED_PDF=${PROXIED_PDF}
    depends_on:
      - frontend_server
  frontend_server:
    image: "${ARTIFACTORY}/fe_server:${BRANCH}-${PDF_TAG}"
    ports:
      - "${FE_PORT}:15000"
    networks:
      nucleus-network:
        ipv4_address: ${FE_IP}
    environment:
      - FILEPATH_FE_SERVER=${FILEPATH_FE_SERVER}
    volumes:
      - "/home/lluo/dist_share:/app/dist"
    depends_on:
      - frontend_static
networks:
  nucleus-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: ${SUB_NET}
There's no need to publish the frontend_server port to the outside world (via ports:) unless you want to access it directly (e.g. for debugging).
Since you use docker-compose, it creates an internal Docker network in which the containers can see each other.
The only thing you need to do is configure your reverse proxy so that its proxied backend points to http://frontend_server:15000, and you're good to go. The internal Docker DNS will resolve the frontend_server service name to the appropriate container IP address.
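Applied to the compose file above, that means setting the environment variable to the service name instead of the static IP (a minimal sketch; the variable name comes from your file, and the image is assumed to use its value verbatim):
reverse_proxy:
  environment:
    # the service name resolves via the internal Docker DNS
    - PROXIED_FRONTEND=http://frontend_server:15000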
For reference and more info, see this question and the links provided there: https://serverfault.com/questions/800689/how-to-use-haproxy-in-load-balancing-and-as-a-reverse-proxy-with-docker
Related
I have 3 containers: my bot, a server, and a db. After docker-compose up, the server and db are working, but when the telegram bot makes a GET request it gets this error:
Get "http://localhost:8080/user/": dial tcp 127.0.0.1:8080: connect: connection refused
docker-compose.yml
version: "3"
services:
db:
image: postgres
container_name: todo_postgres
restart: always
ports:
- "5432:5432"
environment:
# TODO: Change it to environment variables
POSTGRES_USER: user
POSTGRES_DB: somedb
POSTGRES_PASSWORD: pass
server:
depends_on:
- db
build: .
restart: always
ports:
- 8080:8080
environment:
DB_NAME: somedb
DB_USERNAME: user
DB_PASSWORD: pass
bot:
depends_on:
- server
build:
./src/telegram_bot
environment:
BOT_TOKEN: TOKEN
restart: always
links:
- server
When using Compose, try using the container's hostname. In this case your bot should try to connect to server:8080. Compose will handle the name resolution to the IP you need.
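If the bot reads the server address from an environment variable, you could set it in the compose file (a sketch; SERVER_URL is a hypothetical variable name, adjust it to whatever your bot actually reads):
bot:
  depends_on:
    - server
  environment:
    BOT_TOKEN: TOKEN
    SERVER_URL: http://server:8080  # hypothetical variable; the key point is the "server" hostname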
What you are trying to do is access localhost from within your bot container (service).
Maybe this answer will help you solve the problem; it sounds similar to yours.
But I want to offer you another solution to your problem:
In case the containers don't need to be reachable from outside (from your host), one approach is to make use of the expose functionality and a Docker network.
See docs.docker.com: network.
The expose functionality allows the other containers on your network to reach the exposed port.
See docs.docker.com: expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Example
What is this example doing?
A couple of steps that are optional:
Set a static IP for your containers.
These steps are not needed and can be omitted. However, I like to do this, since it gives you better control over the network. You can still access the containers by their hostname (the container name or service name) as well.
The steps that are needed are the following:
This exposes port 8080 without publishing it:
expose:
  - 8080
The network definition that allows static IP configuration:
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
A complete file could look similar to this:
version: "3.8"
services:
first-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.2
expose:
- 8080
second-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.3
depends_on:
- first-service
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
Your bot container comes up before your server & db containers are ready.
When you use depends_on, Compose does not actually wait for the dependencies to finish their own setup; it only waits for them to start.
You need some mechanism that waits until the other container has finished its setup.
I remember that when I used an Nginx proxy I used a script called wait-for-it.sh.
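A minimal sketch of that approach, assuming wait-for-it.sh has been copied into the bot image and the bot binary is started as ./bot (both paths are placeholders for your actual setup):
bot:
  depends_on:
    - server
  # block until server:8080 accepts TCP connections, then start the bot
  entrypoint: ["./wait-for-it.sh", "server:8080", "--", "./bot"]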
I am trying to use Docker on my Debian server. There are several sites using the Django framework. Every project runs in its own container with gunicorn, a single nginx container works as a reverse proxy, and data is stored in a mariadb container. Everything works correctly. Now it is necessary to add the Zabbix monitoring system to the server. So, I use the zabbix-server-mysql image as the Zabbix backend and the zabbix-web-nginx-mysql image as the frontend. The backend runs successfully, but the frontend fails with errors such as "can't bind to 0.0.0.0:80: port is already allocated", and nginx refuses connections to the domains. As I understand it, zabbix-web-nginx-mysql creates another nginx container, and that causes the problems. Is there a right way to use the Zabbix images alongside an existing nginx container?
I have an nginx reverse proxy installed on the host, which I use to proxy requests into containers. I have a working configuration for dockerized Zabbix with the following setup (I have omitted the environment variables).
Port 80 of the web frontend is published on another address (127.0.0.1:11011), and that same address is what the host nginx points at in its proxy_pass directive (proxy_pass http://127.0.0.1:11011;). Here is the configuration:
version: '2'
services:
  zabbix-server4:
    container_name: zabbix-server4
    image: zabbix/zabbix-server-mysql:alpine-4.0.5
    user: root
    networks:
      zbx_net:
        aliases:
          - zabbix-server4
          - zabbix-server4-mysql
        ipv4_address: 172.16.238.5
  zabbix-web4:
    container_name: zabbix-web4
    image: zabbix/zabbix-web-nginx-mysql:alpine-4.0.5
    ports:
      - 127.0.0.1:11011:80
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-web4
          - zabbix-web4-nginx-alpine
          - zabbix-web4-nginx-mysql
        ipv4_address: 172.16.238.10
  zabbix-agent4:
    container_name: zabbix-agent4
    image: zabbix/zabbix-agent:alpine-4.0.5
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-agent4
        ipv4_address: 172.16.238.15
networks:
  zbx_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
I'm trying to pass a Redis URL to a Docker container, but so far I couldn't get it to work. I did a little research and none of the answers worked for me.
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
    container_name: redis
    hostname: redis
    expose:
      - 6379
    links:
      - api
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    environment:
      - REDIS_URL=redis
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=proxy'
networks:
  proxy:
Error: Redis connection to redis failed - connect ENOENT redis
You can only communicate between containers on the same Docker network. Docker Compose creates a default network for you, and absent any specific declaration your redis container is on that network. But you also declare a separate proxy network, and only attach the api container to that other network.
The single simplest solution to this is to delete all of the networks: blocks everywhere and just use the default network Docker Compose creates for you. You may need to format the REDIS_URL variable as an actual URL, maybe like redis://redis:6379.
If you have a non-technical requirement to have separate networks, add - default to the networks: listing for the api container, as sketched below.
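That would look like this (only the relevant part of the api service shown):
api:
  networks:
    - default  # shared with redis
    - proxy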
You have a number of other settings in your docker-compose.yml that aren't especially useful. expose: does almost nothing at all, and is usually also provided in a Dockerfile. links: is an outdated way to make cross-container calls, and as you've declared it, it would enable calls from Redis to your API server, not the other way around. hostname: has no effect outside the container itself and is usually totally unnecessary. container_name: does have some visible effects, but usually the container name Docker Compose picks is just fine.
This would leave you with:
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=default'
I have a C# application 'web_dotnet' in one container which downloads data from a PHP service 'web_php' in a second container. But what is the URL for the PHP service? The URL 'http://web_php:80' from the C# service doesn't work. This is my docker-compose.yml:
version: '3.5'
services:
  web_php:
    image: php:7.2.2-apache
    container_name: my_php_container
    volumes:
      - ./php/:/var/www/html/
    ports:
      - 3000:80
    networks:
      - mynet
  web_dotnet:
    build: .
    container_name: my_dotnet_container
    ports:
      - 2000:80
    networks:
      - mynet
networks:
  mynet:
    name: xyz_net
    driver: bridge
First, you can simplify your file by removing the unnecessary network declaration and port mappings. docker-compose creates a default user-defined bridge network for you and attaches all services to it; there is no need to do it manually. Also, inside that network all container ports are reachable between services automatically; publishing ports is only needed for access from the host.
Second, remove container_name. You are confusing yourself: services get host names equal to their service names by default.
version: '3.5'
services:
  web_php:
    image: php:7.2.2-apache
    volumes:
      - ./php/:/var/www/html/
  web_dotnet:
    build: .
Now, after all the useless stuff is cleaned up, just call web_php:80 from web_dotnet.
Afterwards, if you would like to access web_dotnet from outside docker-compose, add a ports directive to make it visible from the host, as sketched below.
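Reusing the 2000:80 mapping from your original file, that would be:
web_dotnet:
  build: .
  ports:
    - 2000:80  # host port 2000 -> container port 80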
I have a docker-compose service that runs Nginx. The site it hosts is just a .test domain, like example.test.
The Nginx in that container also has a location block that proxies to example.test:8000, but it can't connect, because that service is actually hosted from a different container on the same system (all bridged networks).
How can I let the containers communicate using the example.test domain?
Or, if I can't get them to communicate via example.test, how can I link them so they can use their docker-compose service names, such as api or frontend?
Docker compose:
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  django:
    build: ./api
    command: ["./docker_up.sh"]
    restart: always
    volumes:
      - ./api:/app/api
      - api-static:/app/api/staticfiles
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - MODE=DEV
volumes:
  frontend-build:
  api-static:
  certificates:
2nd compose file (run together):
version: '3'
services:
  django:
    environment:
      - MODE=PROD
    #links:
    #  - hosting
  hosting:
    build: ./hosting
    restart: always
    network_mode: bridge
    volumes:
      - frontend-build:/var/www
    ports:
      - "80:80"
      - "443:443"
    environment:
      - MODE=PROD
    #links:
    #  - django
volumes:
  frontend-build:
With these current settings I get an error when I run it:
ERROR: for 92b89f848637_opensrd_hosting_1 Cannot start service hosting: Cannot link to /opensrd_django_1, as it does not belong to the default network
Edit: Altered docker-compose.prod.yml:
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
services:
  django:
    environment:
      - MODE=PROD
    networks:
      app_net:
        ipv4_address: 172.16.238.10
But this gives me an error.
ERROR: The Compose file './docker-compose.prod.yml' is invalid because:
networks.app_net value Additional properties are not allowed ('config' was unexpected)
networks.app_net.ipam contains an invalid type, it should be an object
So I tried the options given by @trust512 and @DimaL, and those didn't work.
However, after deleting the networks and links from my compose files, and removing the existing default network and built containers, it worked, and I can now refer to the containers by db, django, and hosting.
The only other thing I changed was the compose file version, from 3 to 3.5.
These are the final files for anyone interested:
version: '3.5'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  django:
    build: ./api
    command: ["./docker_up.sh"]
    restart: always
    volumes:
      - ./api:/app/api
      - api-static:/app/api/staticfiles
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - MODE=DEV
volumes:
  frontend-build:
  api-static:
docker-compose.prod.yml:
version: '3.5'
services:
  django:
    environment:
      - MODE=PROD
  hosting:
    build: ./hosting
    restart: always
    volumes:
      - frontend-build:/var/www
    ports:
      - "80:80"
      - "443:443"
    environment:
      - MODE=PROD
volumes:
  frontend-build:
You can use external_links (https://docs.docker.com/compose/compose-file/#external_links) or try to put all containers on the same virtual network.
As far as I understand, you just want them (django and nginx) to be linked across the two compose files?
Then a native solution would be to use external_links, as exampled here, and use it like this:
services:
  [...]
  hosting:
    [...]
    external_links:
      - django_1:example
  [...]
Where django_1 stands for the container name created by the compose file you provided, and example is the alias under which that container will be visible inside the hosting container.
The other way round, you can just point the example.test domain to a specific address by editing your /etc/hosts (provided you work on Linux/Mac), for example by adding a record like:
172.16.238.10 example.test
where the address above points to your django application (container).
The above can be achieved without altering your /etc/hosts by using the native compose solution (extra_hosts), documented here, as sketched below.
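A sketch of that (reusing the static address from the example further down; adjust the IP to your network):
services:
  hosting:
    extra_hosts:
      # maps example.test to the django container's static IP inside this container
      - "example.test:172.16.238.10"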
Additionally, if you prefer a static IP address for your django/nginx containers (in case you stick with the /etc/hosts or extra_hosts solution), you can utilize another native compose feature that sets up a static IP for chosen services, properly exampled here.
An adjusted listing from the linked documentation:
services:
  [...]
  django:
    [...]
    networks:
      app_net:
        ipv4_address: 172.16.238.10
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24