Docker Compose Nginx Link containers - docker

I have a docker-compose setup with a container that runs Nginx. The hosted site is just a .test domain, like example.test.
Inside that container, Nginx has a location block that proxies requests to example.test:8000. But it can't connect, because that endpoint is actually served from a different container on the same system (all bridged networks).
How can I let the containers communicate using the example.test domain?
Or, if I can't get them to communicate via example.test, how can I link them so they can use their docker-compose service names, such as api or frontend?
Docker compose:
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  django:
    build: ./api
    command: ["./docker_up.sh"]
    restart: always
    volumes:
      - ./api:/app/api
      - api-static:/app/api/staticfiles
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - MODE=DEV
volumes:
  frontend-build:
  api-static:
  certificates:
2nd compose file (run together):
version: '3'
services:
  django:
    environment:
      - MODE=PROD
    #links:
    #  - hosting
  hosting:
    build: ./hosting
    restart: always
    network_mode: bridge
    volumes:
      - frontend-build:/var/www
    ports:
      - "80:80"
      - "443:443"
    environment:
      - MODE=PROD
    #links:
    #  - django
volumes:
  frontend-build:
With these current settings I get an error when I run them:
ERROR: for 92b89f848637_opensrd_hosting_1 Cannot start service hosting: Cannot link to /opensrd_django_1, as it does not belong to the default network
Edit: Altered docker-compose.prod.yml:
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
services:
  django:
    environment:
      - MODE=PROD
    networks:
      app_net:
        ipv4_address: 172.16.238.10
But this gives me an error.
ERROR: The Compose file './docker-compose.prod.yml' is invalid because:
networks.app_net value Additional properties are not allowed ('config' was unexpected)
networks.app_net.ipam contains an invalid type, it should be an object

So I tried the options given by #trust512 and #DimaL, but those didn't work.
However, after deleting the networks and links from my compose files, and removing the existing default network and built containers, it worked, and I can now refer to the other containers by their service names db, django, and hosting.
The only other thing I changed was the compose file version, from 3 to 3.5.
These are the final files for anyone interested:
version: '3.5'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  django:
    build: ./api
    command: ["./docker_up.sh"]
    restart: always
    volumes:
      - ./api:/app/api
      - api-static:/app/api/staticfiles
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - MODE=DEV
volumes:
  frontend-build:
  api-static:
docker-compose.prod.yml:
version: '3.5'
services:
  django:
    environment:
      - MODE=PROD
  hosting:
    build: ./hosting
    restart: always
    volumes:
      - frontend-build:/var/www
    ports:
      - "80:80"
      - "443:443"
    environment:
      - MODE=PROD
volumes:
  frontend-build:

You can use external_links (https://docs.docker.com/compose/compose-file/#external_links) or try to put all containers on the same virtual network.
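For the shared-network approach, here is a minimal sketch, assuming the two compose files are run as separate projects and that the network was created beforehand with docker network create shared_net (the name shared_net is made up). Declare the network as external in each file and attach the relevant service to it:

networks:
  shared_net:
    external: true
services:
  hosting:
    networks:
      - shared_net

With the django service attached to shared_net the same way in its own file, the two containers can reach each other by service name over that network.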

As far as I understand, you just want them (django and nginx) to be linked across compose files?
Then a native solution would be to use external_links, documented here.
And use it like this:
services:
  [...]
  hosting:
    [...]
    external_links:
      - django_1:example
  [...]
Where django_1 stands for the container name created by the compose file you provided, and example is the alias under which that container will be visible inside the hosting container.
The other way round, you can just point the example.test domain to a specific address by editing your /etc/hosts (provided you work on Linux/macOS),
for example by adding a record like
172.16.238.10 example.test
Where the address above would point to your django application (container).
The above can be achieved without altering your /etc/hosts by using the native extra_hosts option from Compose, documented here.
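A minimal sketch of that option, added to the service that needs to resolve the name (the IP here assumes the static-address setup shown below):

services:
  hosting:
    extra_hosts:
      - "example.test:172.16.238.10"

This writes the entry into the hosting container's /etc/hosts at startup, so Nginx inside it can resolve example.test without touching the host's own /etc/hosts.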
Additionally, if you prefer a static IP address for your django/nginx containers in case you stick with the /etc/hosts or extra_hosts solution, you can utilize another native Compose feature that assigns a static IP to a chosen service, properly exemplified here.
An adjusted listing from the linked documentation:
services:
  [...]
  django:
    [...]
    networks:
      app_net:
        ipv4_address: 172.16.238.10
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24

Related

How to allow a docker container to communicate with another container over localhost

I have a unique situation where I need to be able to access a container over a custom local domain (example.test), which I've added to my /etc/hosts file pointing to 127.0.0.1. The library I'm using for OIDC uses this domain for redirecting the browser, and if it is an internal Docker hostname, the browser obviously will not resolve it.
I've tried pointing it to example.test, but it says it cannot connect. I've also tried looking up the private IP of the Docker network, and that just times out.
Add network_mode: host to the service definition of the calling application in the docker-compose.yml file. This allows calls to localhost to be routed to the server's localhost and not the container's localhost.
E.g.
docker-compose.yml
version: '3.7'
services:
  mongodb:
    image: mongo:latest
    restart: always
    logging:
      driver: local
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DB_ADMIN_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${DB_ADMIN_PASSWORD}
    ports:
      - 27017:27017
    volumes:
      - mongodb_data:/data/db
  callingapp:
    image: <some-img>
    restart: always
    logging:
      driver: local
    env_file:
      - callingApp.env
    ports:
      - ${CALLING_APP_PORT}:${CALLING_APP_PORT}
    depends_on:
      - mongodb
    network_mode: host   # << Add this line
  app:
    image: <another-img>
    restart: always
    logging:
      driver: local
    depends_on:
      - mongodb
    env_file:
      - app.env
    ports:
      - ${APP_PORT}:${APP_PORT}
volumes:
  mongodb_data:
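As a hedged illustration of why this works: with network_mode: host the calling container shares the host's network namespace, so the MongoDB port published by the mongodb service is reachable on localhost. The variable name below is hypothetical, just to show the shape of the connection string:

services:
  callingapp:
    image: <some-img>
    network_mode: host
    environment:
      # hypothetical variable; MongoDB is reachable via the host's localhost
      - MONGO_URL=mongodb://localhost:27017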

Docker : Accessing another container by host

I have two containers defined in a docker-compose yaml file that need to talk to each other, but they can't.
version: "3.9"
networks:
  localdev:
    driver: 'bridge'
services:
  master-db:
    image: mysql:8.0
    container_name: master-db
    hostname: master-db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - "4000:3306"
    networks:
      - localdev
  page-store:
    hostname: page-store
    build:
      context: .
      dockerfile: Dockerfile.page_store
    container_name: page-store
    ports:
      - "2020:2020"
    networks:
      - localdev
    links:
      - master-db
In the page-store Python Flask microservice, I try to access the MySQL database by using its hostname of master-db, but the name cannot resolve.
You should be able to connect to each other using the respective service names master-db and page-store, after removing the hostname settings.
As per the official guide, you may have to define master-db and page-store in the container's /etc/hosts if you want to keep using hostname: page-store etc.
Please refer to this SO thread.
Also, using links may not be the best option, as it is a legacy feature.
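As a hedged sketch of how the Flask service might be configured once service-name resolution works (the variable name, credentials, and database name are hypothetical), note that it should connect to the container-internal port 3306, not the host-mapped 4000:

services:
  page-store:
    environment:
      # hypothetical values; 'master-db' is resolved by Docker's internal DNS,
      # and 3306 is the port inside the MySQL container
      - DATABASE_URL=mysql://docker:docker@master-db:3306/mydb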

How to create 2 different running app with the same docker-compose.yml file?

I already have a docker-compose.yml file like this:
version: "3.1"
services:
  memcached:
    image: memcached:alpine
    container_name: dl-memcached
  redis:
    image: redis:alpine
    container_name: dl-redis
  mysql:
    image: mysql:5.7.21
    container_name: dl-mysql
    restart: unless-stopped
    working_dir: /application
    environment:
      - MYSQL_DATABASE=dldl
      - MYSQL_USER=docker
      - MYSQL_PASSWORD=docker
      - MYSQL_ROOT_PASSWORD=docker
    volumes:
      - ./../:/application
    ports:
      - "8007:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: dl-phpmyadmin
    environment:
      - PMA_ARBITRARY=1
      - PMA_HOST=dl-mysql
      - PMA_PORT=3306
      - MYSQL_USER=docker
      - MYSQL_PASSWORD=docker
      - MYSQL_ROOT_PASSWORD=docker
    restart: always
    ports:
      - 8002:80
    volumes:
      - /application
    links:
      - mysql
  elasticsearch:
    build: phpdocker/elasticsearch
    container_name: dl-es
    volumes:
      - ./phpdocker/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "8003:9200"
  webserver:
    image: nginx:alpine
    container_name: dl-webserver
    working_dir: /application
    volumes:
      - ./../:/application:delegated
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./logs:/var/log/nginx:delegated
    ports:
      - "9003:80"
  php-fpm:
    build: phpdocker/php-fpm
    container_name: dl-php-fpm
    working_dir: /application
    volumes:
      - ./../:/application:delegated
      - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
      - ./../docker/php-fpm/certs/store_stock/:/usr/local/share/ca-certificates/
      - ./logs:/var/log:delegated # nginx logs
      - /application/var/cache
    environment:
      XDEBUG_CONFIG: remote_host=host.docker.internal
      PHP_IDE_CONFIG: "serverName=dl"
  node:
    build:
      dockerfile: dl/phpdocker/node/Dockerfile
      context: ./../
    container_name: dl-node
    working_dir: /application
    ports:
      - "8008:3000"
    volumes:
      - ./../:/application:cached
    tty: true
My goal is to have two isolated environments working at the same time on the same server with the same docker-compose file. I wonder if that's possible?
I want to be able to stop and update one environment while the other one is still running and receiving traffic.
Maybe I need another approach in my case?
There are a couple of problems with what you're trying to do. If your goal is to put things behind a load balancer, then rather than trying to start multiple instances of your project, a better solution would be to use the scaling features available in docker-compose; in particular, you probably don't want multiple instances of things like your database.
If you combine this with a dynamic front-end proxy like Traefik, you can make the configuration largely automatic.
Consider a very simple example consisting of a backend container running a simple webserver and a traefik frontend:
---
version: "3"
services:
  webserver:
    build:
      context: web
    labels:
      traefik.enable: true
      traefik.port: 80
      traefik.frontend.rule: "PathPrefix:/"
  frontend:
    image: traefik
    command:
      - --api
      - --docker
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "80:80"
      - "127.0.0.1:8080:8080"
If I start it like this, I get a single backend and a single frontend:
docker-compose up
But I can also ask docker-compose to scale out the backend:
docker-compose up --scale webserver=3
In this case, I get a single frontend and three backend servers. Traefik will automatically discover the backends and will round-robin connections between them. You can download this example and try it out.
Caveats
There are a few aspects of your configuration that would need to change in order to make this work (and in fact, you would need to change them even if you were to create multiple instances of your project as you have proposed in your question).
Conflicting paths
Take for example the configuration of your webserver container:
volumes:
  - ./logs:/var/log/nginx:delegated
If you start two instances of this service, both containers will mount ./logs on /var/log/nginx. If they both attempt to write to /var/log/nginx/access.log, you're going to have problems.
The easiest solution here is to avoid bind mounts for things like log directories (and any other directories to which you will be writing), and instead use named docker volumes.
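A minimal sketch of that change (the volume name nginx-logs is made up):

services:
  webserver:
    volumes:
      - nginx-logs:/var/log/nginx
volumes:
  nginx-logs:

Each Compose project then gets its own copy of the volume, prefixed with the project name, so two running instances no longer write into the same host directory.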
Hardcoding container names
In some places, you are hardcoding the container name, like this:
mysql:
  image: mysql:5.7.21
  container_name: dl-mysql
This will cause problems if you attempt to start multiple instances of this project or multiple instances of the mysql container. Don't statically set the container name.
Deprecated links syntax
Your configuration is using the deprecated links syntax:
links:
  - mysql
Don't do that. In modern docker, containers on the same network can simply refer to each other by name. In other words, if your compose configuration has:
mysql:
  image: mysql:5.7.21
  restart: unless-stopped
  working_dir: /application
  environment:
    - MYSQL_DATABASE=dldl
    - MYSQL_USER=docker
    - MYSQL_PASSWORD=docker
    - MYSQL_ROOT_PASSWORD=docker
  volumes:
    - ./../:/application
  ports:
    - "8007:3306"
Other containers in your compose stack can simply use the hostname mysql to refer to this service.
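For example, the phpmyadmin service from your file could drop its links entry and point at the service name instead (a sketch showing only the relevant keys):

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    - PMA_HOST=mysql   # service name instead of the hardcoded container name
    - PMA_PORT=3306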
You won't be able to run the same compose file twice on a host without changing the port mappings, because that would cause port conflicts. I'd recommend creating a base compose file and using extends to override the port mappings for different environments.
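As an alternative to extends, one sketch uses Compose's variable substitution to parameterize the host ports in a single base file, then starts each environment under its own project name (the variable name, port, and project names below are made up):

version: "3.1"
services:
  webserver:
    image: nginx:alpine
    ports:
      # host port comes from WEBSERVER_PORT, falling back to 9003
      - "${WEBSERVER_PORT:-9003}:80"

docker-compose -p env-a up -d
WEBSERVER_PORT=9004 docker-compose -p env-b up -d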

Docker hostnames are not resolved in a custom network

I have the following configuration in my docker-compose.yml file.
version: '3.3'
services:
  service-1:
    container_name: 'service-1'
    build: './service-1'
    depends_on:
      - 'mongo'
      - 'consul'
    networks:
      backend:
        aliases:
          - service-1
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      frontend:
      backend:
        aliases:
          - service-2
    depends_on:
      - 'mongo'
      - 'consul'
  consul:
    image: 'consul:latest'
    networks:
      backend:
        aliases:
          - consul
  mongo:
    image: 'mongo:latest'
    networks:
      backend:
        aliases:
          - mongo
networks:
  frontend:
  backend:
    internal: true
When my containers start, they are not able to communicate with each other using host names.
Most of the containers use the mongo db container, but they are not able even to reach it, and I am getting the following error.
Error connecting to mongo : no reachable servers
Please help me to solve the problem, I got stuck.
Thanks.
You've got a lot of unneeded settings in the compose file, here's a stripped down version that would work just as well:
version: '3.3'
services:
  service-1:
    build: './service-1'
    networks:
      - backend
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      - frontend
      - backend
  consul:
    image: 'consul:latest'
    networks:
      - backend
  mongo:
    image: 'mongo:latest'
    networks:
      - backend
networks:
  frontend:
  backend:
    internal: true
You automatically get an alias matching the service name for each container, so there is no need to duplicate that. You also lose the ability to scale a service if you give it a container name. I'd also recommend moving the build step out of the compose file and using an image name for the apps you're building locally.
Now for the likely issue: you have a depends_on in your compose file. At best, this will not do what you're looking for. All it checks is that the other container has been created and started, not that the application inside is ready to serve traffic, and a DB may take time to become available. At worst, you'll get an error that it's unsupported if you try to move this into swarm mode.
Instead of depending on docker for this, update your application entrypoint to check for the external dependencies and wait a minute or two for them to become available before failing. A very simple example tool for this is wait-for-it that is written as a bash shell script.
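A hedged sketch of that approach, with wait-for-it used as the container's command (this assumes wait-for-it.sh has been copied into the service-1 image, and ./start-service is a hypothetical stand-in for the real entrypoint):

services:
  service-1:
    build: './service-1'
    networks:
      - backend
    # block until mongo accepts TCP connections, then start the app
    command: ["./wait-for-it.sh", "mongo:27017", "--", "./start-service"]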

Docker compose set container name for stacks

I am deploying a small stack onto a UCP.
One of the issues I am facing is naming the container for service1.
I need to have a static name for that container, since it's utilized by mycustomimageforservice2.
The container_name option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
I have to use version: 3 compose files.
version: "3"
services:
  service1:
    image: dockerhub/service1
    ports:
      - "8080:8080"
    container_name: service1container
    networks:
      - mynet
  service2:
    image: myrepo/mycustomimageforservice2
    networks:
      - mynet
    restart: on-failure
networks:
  mynet:
What are my options?
You can't force a container name in Compose, as it's designed to allow things like scaling a service (by updating the number of replicas), and that wouldn't work with fixed names.
Instead, one service can access another using its service name (http://serviceName:internalServicePort), and Docker will do the rest for you (such as resolving the name to an actual container address and load balancing between replicas).
This works with the default network type for swarm mode, which is overlay.
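As a sketch, service2 could simply be handed the service1 URL through its environment (the variable name is hypothetical), with no static container name required:

services:
  service2:
    image: myrepo/mycustomimageforservice2
    networks:
      - mynet
    environment:
      # hypothetical variable; 'service1' is resolved by Docker's DNS
      - SERVICE1_URL=http://service1:8080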
You can address your problem by linking services in the docker-compose.yml file.
Something like:
version: "3"
services:
  service1:
    image: dockerhub/service1
    ports:
      - "8080:8080"
    networks:
      - mynet
  service2:
    image: myrepo/mycustomimageforservice2
    networks:
      - mynet
    restart: on-failure
    links:
      - service1
networks:
  mynet:
Using the links argument in your docker-compose.yml allows one service to reach another using the container name; in this case, service2 would establish a connection to service1 thanks to the links parameter. I'm not sure why you use a network, but with the links parameter it would not be necessary.
container_name option is ignored when deploying a stack in swarm mode since container names need to be unique.
https://docs.docker.com/compose/compose-file/#container_name
If you do have to use version 3 but don't work with swarms, you can add --compatibility to your commands.
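A sketch of the invocation:

docker-compose --compatibility up -d

The flag tells docker-compose to translate version 3 deploy keys into their non-swarm equivalents where possible.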
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
You can see this in the full docker-compose file below:
version: '3.9'
services:
  node-ecom:
    build: .
    image: "node-ecom-image:1.0.0"
    container_name: my-web-container
    ports:
      - "4000:3000"
    volumes:
      - ./:/app:ro
      - /app/node_modules
      - /config/.env
    env_file:
      - ./config/.env
know more
