Here is my v3.5 docker-compose.yml definition file. It defines an analytics network (with an alias of the same name), and both included services connect to that network to communicate with one another. This works.
However, I want these services (ports) exposed to the HOST machine as well. There's a way to do that by defining an additional network and/or specifying additional ports: entries within the services themselves, but I can't figure out exactly how, because the documentation is confusing and version-specific (a moving target).
Without destroying the below (because it works internally), what additions do I make (and where) to expose both services to the HOST machine as well?
Thank you!
version: '3.5'

networks:
  analytics:
    name: analytics
    driver: bridge

services:
  # ===========================================
  # Service: Zookeeper
  # ===========================================
  zookeeper:
    image: 'wurstmeister/zookeeper:latest'
    container_name: analytics-ZooKeeper
    networks:
      - analytics
    ports:
      - "2181:2181"
    volumes:
      - ./data.d/zookeeper.d:/opt/zookeeper-3.4.9/data
  # ===========================================

  # ===========================================
  # Service: Kafka
  # ===========================================
  kafka:
    build:
      context: ./kafka.d
      dockerfile: Dockerfile
    image: nmvega/kafka:latest
    networks:
      - analytics
    ports:
      - 9092-9094:9092 # For one to three Kafka brokers.
    environment:
      #KAFKA_ADVERTISED_HOST_NAME: vps00 # Docker host name. <--- BEFORE
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.180 # Docker host IP. <--- AFTER
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data.d/kafka.d:/kafka
    depends_on:
      - zookeeper
  # ===========================================
EDIT:
Upon further investigation, the above configuration, as originally posted, is correct, with one small modification: use the IP of the Docker host rather than its name (as prescribed by the readme for the image I'm using). Accidentally using the name didn't matter until I attempted to access the services from the host. The ports: entries above already publish the container ports on the host.
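As a quick sanity check from the host (a sketch; it assumes nc is available and uses the Docker host IP from the config above — ruok is ZooKeeper's built-in liveness probe, answered with imok):

docker-compose up -d
echo ruok | nc 192.168.0.180 2181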
Hopefully this example will be valuable to others wanting to see one.
Thank you to the commenters below.
Related
I have a DDEV environment for Magento 2 running locally on macOS (Ventura)
https://ddev.readthedocs.io/en/stable/users/quickstart/#magento-2
For testing purposes I included NiFi via a docker-compose.yaml inside my DDEV project: .ddev/docker-compose.nifi.yaml
Below you can see the docker-compose file, which is really minimal at this point. NiFi works as expected, because I can log in etc., although it is not persistent yet, but that's a different problem.
version: '3'
services:
  nifi:
    image: apache/nifi:latest
    container_name: ddev-${DDEV_SITENAME}-nifi
    ports:
      # HTTP
      - "8080:8080"
      # HTTPS
      - "8443:8443"
    volumes:
      # - ./nifi/database_repository:/opt/nifi/nifi-current/database_repository
      # - ./nifi/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      # - ./nifi/content_repository:/opt/nifi/nifi-current/content_repository
      # - ./nifi/provenance_repository:/opt/nifi/nifi-current/provenance_repository
      # - ./nifi/state:/opt/nifi/nifi-current/state
      # - ./nifi/logs:/opt/nifi/nifi-current/logs
      # - ./nifi/conf/login-identity-providers.xml:/opt/nifi/nifi-current/conf/login-identity-providers.xml
      - ".:/mnt/ddev_config"
All I want to do is send a POST request from NiFi to my Magento 2 module.
I tried several IPs, which I got from docker inspect ddev-ddev-magento2-web, but I always receive "Connection refused".
My output from docker network ls:
NETWORK ID     NAME                         DRIVER    SCOPE
95bea4031396   bridge                       bridge    local
692b58ca294e   ddev-ddev-magento2_default   bridge    local
46be47991abe   ddev_default                 bridge    local
7e19ae1626f1   host                         host      local
f8f4f1aeef04   nifi_docker_default          bridge    local
dbdba30546d7   nifi_docker_mynetwork        bridge    local
ca12e667b773   none                         null      local
My Magento 2 module is working properly, because sending requests to it from Postman works fine.
You don't want most of what you have. Remove the ports statement; you shouldn't need it at all. If you need anything, it's an expose entry, but I doubt you even need that in this case.
You'll want to look at the docs:
Additional services and add-ons
Additional services with docker-compose
Then create a .ddev/docker-compose.nifi.yaml with something like
services:
  nifi:
    image: apache/nifi:latest
    container_name: "ddev-${DDEV_SITENAME}-nifi"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: ${DDEV_APPROOT}
    expose:
      - "8080"
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=8080:8080
      - HTTPS_EXPOSE=9999:8080
    volumes:
      - ".:/mnt/ddev_config"
From inside your nifi container, the hostname of the "web" container will be just web, so curl http://web:8080, assuming something there is listening on port 8080.
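For example, you could test that name resolution from the host (the container name here follows the container_name pattern above with a placeholder site name, and it assumes curl is present in the image):

docker exec -it ddev-<sitename>-nifi curl -v http://web:8080/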
I don't know what you're trying to accomplish, but this may get you started. Feel free to come over to the DDEV Discord channel for more interactive help.
I have 3 containers: my bot, a server, and a db. After docker-compose up, the server and db are working. The Telegram bot makes a GET request and gets this error:
Get "http://localhost:8080/user/": dial tcp 127.0.0.1:8080: connect: connection refused
docker-compose.yml
version: "3"
services:
  db:
    image: postgres
    container_name: todo_postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      # TODO: Change it to environment variables
      POSTGRES_USER: user
      POSTGRES_DB: somedb
      POSTGRES_PASSWORD: pass
  server:
    depends_on:
      - db
    build: .
    restart: always
    ports:
      - 8080:8080
    environment:
      DB_NAME: somedb
      DB_USERNAME: user
      DB_PASSWORD: pass
  bot:
    depends_on:
      - server
    build: ./src/telegram_bot
    environment:
      BOT_TOKEN: TOKEN
    restart: always
    links:
      - server
When using Compose, try using the container's hostname. In this case your bot should try to connect to
server:8080
Compose will handle resolving that name to the IP you need.
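For example, instead of hardcoding localhost in the bot, the server address could be passed in through the environment (SERVER_URL is a hypothetical variable your bot code would have to read):

bot:
  build: ./src/telegram_bot
  environment:
    BOT_TOKEN: TOKEN
    SERVER_URL: http://server:8080  # hypothetical variable; "server" resolves via Compose DNS
  depends_on:
    - server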
What you are trying to do is access localhost from within your bot container (service).
Maybe this answer will help you solve the problem. It sounds similar to yours.
But I want to offer another solution to your problem:
In case the containers don't need to be accessible from outside (from your host), one approach would be making use of the expose functionality and a Docker network.
See docs.docker.com: network.
The expose functionality allows you to access your other containers within your network.
See docs.docker.com: expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Example
What is this example doing?
A couple of steps are not mandatory:
Setting a static IP within your Docker containers.
These steps are not needed and can be omitted. However, I like to do this, since it gives you better control over the network. You can still access the containers by their hostname (which is the container name or service name) as well.
The steps that are needed are the following:
This exposes port 8080, but does not publish it:
expose:
  - 8080
The network, which allows static IP configuration:
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
A complete file could look similar to this:
version: "3.8"
services:
  first-service:
    image: <your-image>
    networks:
      vpcbr:
        ipv4_address: 10.5.0.2
    expose:
      - 8080
  second-service:
    image: <your-image>
    networks:
      vpcbr:
        ipv4_address: 10.5.0.3
    depends_on:
      - first-service
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
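With that file up, either of these should work from inside second-service (assuming first-service actually listens on 8080 and curl is present in the image):

curl http://10.5.0.2:8080/
curl http://first-service:8080/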
Your bot container is up before your server & db containers are ready.
When you use depends_on, it doesn't actually wait for them to finish setting themselves up.
You should use some mechanism that waits for the other containers to finish their setup.
I remember that when I used an Nginx proxy, I used a script called wait-for-it.sh.
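A minimal sketch of the same idea using a healthcheck plus the long form of depends_on (supported by recent Compose versions, not by the classic version-3 file format; the pg_isready probe assumes the Postgres credentials from the question):

services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d somedb"]
      interval: 5s
      retries: 5
  server:
    build: .
    depends_on:
      db:
        condition: service_healthy  # wait until the healthcheck passes, not just until the container starts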
Overview & Goal:
I have a docker-compose configuration for running multiple services and one of the features I'm trying to implement requires that the containers can refer to each other as localhost.
What I've tried:
I attempted using network_mode: host, which did the job as far as my testing showed, but it wouldn't work for team members who have macOS.
I also tried using network aliases. In the example below, I fail to wget localhost:6000 from the service_1 container:
services:
  service_1:
    #other attributes
    ports:
      - 5000:5000
    networks:
      shared_net:
        aliases:
          - localhost
  service_2:
    #other attributes
    ports:
      - 6000:6000
    networks:
      shared_net:
        aliases:
          - localhost
networks:
  shared_net:
Many thanks
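One approach worth sketching (untested against this exact setup): Compose's network_mode: "service:..." joins one container into another's network namespace, so the two genuinely share localhost. Note that the joined service can no longer publish its own ports; they must be published by the service that owns the namespace:

services:
  service_1:
    #other attributes
    ports:
      - 5000:5000
      - 6000:6000  # service_2's port is published here, since the network stack is shared
  service_2:
    #other attributes
    network_mode: "service:service_1"  # share service_1's network namespace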
I'm fairly new to Docker and Docker Compose, so forgive me if this is a stupid question...
I have a compose file with 2 containers: a Home Assistant container with port 8123 exposed and a database with 5432. Home Assistant can access the database using the URL postgresql://user:password@db:5432/homeassistant_db. I think this is because Docker has created a db binding on the host, and that's why I can connect to db.
However, I need to bind Home Assistant to the host, which I can do with network_mode: "host" (you can see it commented out in my config). When I do this I can indeed bind to the host, and Home Assistant can do its discovery of network devices etc...
Unfortunately this breaks the connection with the database, so I can't use the postgresql://user:password@db:5432/homeassistant_db URL any longer.
How do I attach Home Assistant to the host AND keep the database connection working? I guess I could change the database host from db to the Pi's IP or network name (e.g. postgresql://user:password@192.168.0.100:5432/homeassistant_db or postgresql://user:password@homeassistant.local:5432/homeassistant_db), but this doesn't feel as clean or as robust as it could be.
I don't really understand the network bindings, so I want to try and learn so I can fix this myself going forward.
compose file below:
version: '3'
services:
  db:
    restart: always
    container_name: "homeassistant_db_container"
    # image: postgres:latest
    image: tobi312/rpi-postgresql
    ports:
      - "5432:5432"
    volumes:
      - ./data/postgres/data:/var/lib/postgresql/data/pgdata
    env_file:
      - ./envs/database.env
  home_assistant:
    container_name: "homeassistant_container"
    restart: always
    image: homeassistant/raspberrypi3-homeassistant
    ports:
      - "8123:8123"
    # network_mode: "host"
    env_file:
      - ./envs/homeassistant.env
    volumes:
      - ./configs/homeassistant:/config
    depends_on:
      - db
volumes:
  data:
    driver_opts:
      type: none
      o: bind
      device: "${PWD}/data/postgres"
You can add both containers to the same network, as shown below; then you can connect the way you want to. Just add the code below to your compose file; it attaches both containers to the (pre-existing) tools network. This also gives you a layer of security, since no other containers can talk to your db container.
Second, remove container_name. You are confusing yourself: services get hostnames equal to their service names by default.
networks:
  default:
    external:
      name: "tools"
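Because the network is declared external, Compose will not create it; it has to exist before docker-compose up. Create it once on the host:

docker network create tools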
I have a couple of app containers that I want to connect to a MongoDB container. I tried external_links, but I cannot connect to the MongoDB instance.
I get
MongoError: failed to connect to server [mongodb:27017] on first connect
Do I have to add the containers to the same network to get external_links working?
MongoDB:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - data:/data/db
volumes:
  data:
App:
version: '2'
services:
  app-dev:
    restart: always
    build: repository/
    ports:
      - "3000:80"
    env_file:
      - ./environment.env
    external_links:
      - mongodb_mongodb_1:mongodb
Networks:
# sudo docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
29f8bae3e136   bridge            bridge    local
67d5519cb2e6   dev_default       bridge    local
9e7097c844cf   host              host      local
481ee4301f7c   mongodb_default   bridge    local
4275508449f6   none              null      local
873a46298cd9   prod_default      bridge    local
Documentation at https://docs.docker.com/compose/compose-file/#/externallinks says
If you’re using the version 2 file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them.
Example:
Create a new Docker network:
docker network create -d bridge custom
docker-compose-1.yml
version: '2'
services:
  postgres:
    image: postgres:latest
    ports:
      - 5432:5432
    networks:
      - custom
networks:
  custom:
    external: true
docker-compose-2.yml
version: '2'
services:
  app:
    image: training/webapp
    networks:
      - custom
    external_links:
      - postgres:postgres
networks:
  custom:
    external: true
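With the custom network already created (see above), start each stack from its own file (this assumes the two files live in separate project directories, so their default Compose project names differ):

docker-compose -f docker-compose-1.yml up -d
docker-compose -f docker-compose-2.yml up -d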
Yuva's answer above for version 2 holds good for version 3 as well.
The documentation for external_links isn't clear enough.
For more clarity, I've pasted the version 3 variation with annotations:
version: '3'
services:
  app:
    image: training/webapp
    networks:
      - <<network created by other compose file>>
    external_links:
      - postgres:postgres
networks:
  <<network created by other compose file>>:
    external: true
Recently I faced a name-resolution failure trying to link 2 containers handled by docker-compose v3 (a gRPC server and client in my case), and failed even with external_links.
I'll probably duplicate some of the info posted here, but I'll try to summarize,
as all of it helped me solve the issue.
From external_links docs (as mentioned in earlier answer):
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service that is linking to them.
The following configuration solved the issue.
project-grpc-server/docker-compose.yml
version: '3'
services:
  app:
    networks:
      - some-network
networks:
  some-network:
Server container configured as expected.
project-grpc-client/docker-compose.yml
services:
  app:
    external_links:
      # Assigning an easy alias to the target container
      - project-grpc-server_app_1:server
    networks:
      # Joining the target network as a member
      - project-grpc-server_some-network
networks:
  # Announcing the target network (where the server resides)
  project-grpc-server_some-network:
    # Telling Compose that the announced network already exists (it should be used, not created)
    external: true
When using the defaults (no container_name configured), the trick to configuring the client container is in the prefixes. In my case the network name had the prefix project-grpc-server_ when working with docker-compose, followed by the name itself, some-network (giving project-grpc-server_some-network). So fully qualified network names should be passed when dealing with separate builds.
While the container name is obvious, as it appears from time to time on screen, the full network name is not an easy-to-guess candidate when first facing this section of Docker, unless you run docker network ls.
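For example, to find the fully qualified network name (output illustrative):

$ docker network ls --filter name=project-grpc-server
NETWORK ID     NAME                               DRIVER    SCOPE
1a2b3c4d5e6f   project-grpc-server_some-network   bridge    local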
I'm not a Docker expert, so please don't judge too strictly if all this is obvious and essential in the Docker world.