My dockerized app needs to access something on the host's localhost network, which is not possible without network_mode: "host".
version: '3.4'
services:
  app:
    network_mode: "host"
    image: node:latest
    volumes:
      - .:/usr/app
      - node_modules:/usr/app/node_modules
    working_dir: /usr/app
    ports:
      - 3000:3000
volumes:
  node_modules:
If I comment out network_mode: "host", my app works perfectly on http://localhost:3000. If I re-add it, the app still runs but is no longer accessible on http://localhost:3000.
Edit: I just tested a hello world on Ubuntu and it works, but not on Mac. macOS doesn't seem to work with network_mode: "host".
Yes, at the moment of writing this there is definitely an issue with Mac. The issue is that Docker on Mac runs containers inside a Linux virtual machine, so when you use network_mode: "host" it is only valid for the VM's network, not for your Mac :( more info here.
As an alternative, do not use network_mode: "host"; keep the default bridge network and configure your service to reach host.docker.internal:xxxx instead of localhost:xxxx.
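For example, a minimal sketch of the file above with bridge networking kept. The extra_hosts entry is only needed on Linux hosts (Docker Engine 20.10 or newer); Docker Desktop on Mac and Windows resolves host.docker.internal out of the box:

version: '3.4'
services:
  app:
    image: node:latest
    volumes:
      - .:/usr/app
      - node_modules:/usr/app/node_modules
    working_dir: /usr/app
    ports:
      - 3000:3000
    # On Linux, map host.docker.internal to the host explicitly;
    # Docker Desktop on Mac/Windows provides this name automatically
    extra_hosts:
      - "host.docker.internal:host-gateway"
volumes:
  node_modules:

The app's own configuration then points at host.docker.internal:<port> wherever it previously used localhost:<port>.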
My docker-compose.yml contains this:
version: '3.2'
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    restart: always
    network_mode: "host"
    hostname: localhost
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    volumes:
      - $HOME/data/datasql:/var/lib/mysql
    ports:
      - 3306:3306
  user-management-service:
    build: user-management-service/
    container_name: user-management-service
    restart: always
    depends_on:
      - mysql
      - rabbitmq
      - eureka
    network_mode: "host"
    hostname: localhost
    ports:
      - 8089:8089
When I try to do docker-compose up, I get the following error:
"host" network_mode is incompatible with port_bindings
Can anyone help me with the solution?
network_mode: host is almost never necessary. For straightforward servers, like the MySQL server you show or what looks like a normal HTTP application, it's enough to use normal (bridged) Docker networking and ports:, like you show.
If you do set up host networking, it completely disables Docker's networking stack. You can't call other containers using their host names, and you can't remap a container's port using ports: (or choose not to publish it at all).
You should delete the network_mode: lines you show in your docker-compose.yml file. The container_name: and hostname: lines are also unnecessary, and you can delete those too (specific exception: RabbitMQ needs a fixed hostname:).
The two places I see host networking endorsed are either to call back to the host machine (see From inside of a Docker container, how do I connect to the localhost of the machine?), or because the application code has hard-coded localhost as the host name of the database or other components (in which case Docker and a non-Docker development setup fundamentally act differently, and you should make these locations configurable via environment variables or another mechanism).
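A hedged sketch of what the corrected file could look like under those changes (bridge networking, no network_mode:, container_name:, or hostname:; DB_HOST is a hypothetical variable name the application would read; rabbitmq and eureka are omitted because they are not defined in the file shown):

version: '3.2'
services:
  mysql:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    volumes:
      - $HOME/data/datasql:/var/lib/mysql
    ports:
      - 3306:3306
  user-management-service:
    build: user-management-service/
    restart: always
    depends_on:
      - mysql
    environment:
      # hypothetical variable; the app reads it instead of hard-coding localhost
      DB_HOST: mysql
    ports:
      - 8089:8089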
Quick solution:
Downgrade docker-compose and you'll be fine. The issue is with the latest docker-compose versions in combination with network_mode: "host".
I faced the same issue on v1.29.2, while everything worked smoothly on v1.27.4.
I had the same problem with network_mode: 'host'.
When I downgraded docker-compose from 1.29.2 to 1.25.4, it worked fine. Maybe a bug was introduced in the newer versions?
Get rid of the ports parameter in any service that uses network_mode; it's like doing the mapping twice.
  mysql:
    image: mysql:latest
    container_name: mysql
    restart: always
    network_mode: "host"
    hostname: localhost
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    volumes:
      - $HOME/data/datasql:/var/lib/mysql
    ....
    ....
To access the host's http://localhost from inside your container, you need to replace:
network_mode: host
with:
ports:
  - 80:80
You can do the same with any other port.
If you want to connect to a local database, then when connecting to that database don't use "localhost" or "127.0.0.1". Instead use "host.docker.internal", which allows traffic between your container and the database.
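In a compose file, that substitution usually means passing the host name in as configuration rather than hard-coding it. A minimal sketch (the image and variable names are hypothetical):

services:
  app:
    image: myapp   # hypothetical image name
    environment:
      # host.docker.internal resolves to the host machine on Docker Desktop
      - DB_HOST=host.docker.internal
      - DB_PORT=5432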
This is basically the same question as this one, except that on a Mac, setting the network mode to host has no effect whatsoever.
I'm trying to give a Docker container, running on MacOS, access to its host ARP table. My docker-compose.yaml:
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant
    environment:
      # This is the required way to set a timezone on macOS and differs from the Linux compose file
      - TZ=XX/XXXX
    volumes:
      - ./config:/config
    restart: unless-stopped
    privileged: true
    ports:
      # Also required for macOS since the network directive in docker-compose does not work
      - "8123:8123"

# Add this or docker-compose will complain that it did not find the key for locally mapped volume
volumes:
  config:
How can I make my Spring Boot application, running inside Docker containers, connect to a PostgreSQL database running on a remote server (a non-Docker environment)? Here is my docker-compose.yml file:
version: "3.3"
services:
app1:
image: repo/app1:latest
ports:
- 8000:8000
restart: always
network_mode: "host"
extra_hosts:
- 'postgresdb:192.168.2.50'
app2:
image: repo/app2:latest
ports:
- 8001:8001
restart: always
network_mode: "host"
extra_hosts:
- 'postgresdb:192.168.2.50'
The IP of the remote PostgreSQL database machine is 192.168.2.50 (hostname: postgresdb).
I am using the network_mode: "host" option and it works without any problem, but I believe this defeats the purpose of using a Docker network. What other options are available to make this work without network_mode? The IP addresses and necessary ports on both the Docker machine and the remote database server are whitelisted and allowed through the firewalls.
Such an implementation obviously will not work.
Since your database is deployed remotely, the working solution is to provide its address through environment variables.
version: "3.3"
services:
app1:
image: repo/app1:latest
ports:
- 8000:8000
restart: always
network_mode: "host"
environment:
- DBHOST: "192.168.2.50"
All you need in your application is to read this variable.
Python example:

import os

dbhost = os.getenv("DBHOST")
My app has 2 dependencies which I specify in my docker-compose, a postgres and kafka service:
services:
  postgres:
    image: postgres:9.6-alpine
    ports:
      - "5432:5432"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
I run my code and tests outside the docker network, and use these two containers as my dependencies.
As these both expose ports, I can configure my app to hit them via: localhost:5432, localhost:9092. This works.
The problem I have is when I want to test the app image itself, I add this as a service to the docker-compose file:
  app:
    image: myapp
    links:
      - postgres
      - kafka
The app is still configured to use localhost, so I allow the app container to access my network using --net=host
Whilst the app container can now reach localhost:5432 and localhost:9092 (confirmed by curling from inside the container), the host names fail to resolve when the code runs and the dependencies are unreachable, possibly because using localhost from inside the container confuses the client libraries? I'm really not sure.
It feels like the use of localhost in the app configuration isn't the right approach here. Is it possible to refer to the service names 'postgres' and 'kafka' from outside the docker network?
Why do you want to continue using localhost:xxx in your app?
The best approach is to change the connection strings in your application when it is launched from docker-compose. Just use postgres:5432 and kafka:9092 and everything will work, because inside a docker-compose network all services are visible to each other under their service names, as in the sketch below.
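A minimal sketch of the app service joined to the same compose network (the environment variable names are hypothetical; the point is that the connection targets become the service names, and the Kafka broker configuration details are elided):

services:
  postgres:
    image: postgres:9.6-alpine
  kafka:
    image: wurstmeister/kafka
  app:
    image: myapp
    depends_on:
      - postgres
      - kafka
    environment:
      # hypothetical variable names; the app reads these instead of
      # hard-coding localhost
      - POSTGRES_URL=postgres:5432
      - KAFKA_BROKERS=kafka:9092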
If for some reason you insist on using localhost as the connection target, you need to switch all services to host mode. But remember: in this case ports are not exposed, so you access the services on their original port values.
version: '3'
services:
  postgres:
    image: postgres:9.6-alpine
    network_mode: "host"
  kafka:
    image: wurstmeister/kafka
    network_mode: "host"
  app:
    image: myapp
    network_mode: "host"
And by the way, forget about links. They are deprecated.
I have two different Docker containers and each has a different image. Each app in the containers uses non-conflicting ports. See the docker-compose.yml:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
hostname: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
I can cURL each image successfully from the host machine (Mac OS), e.g. curl -k https://localhost:6473/service_a/api/version works. What I'd like to do is to be able to refer to postgres container from the service_a container via localhost as if these two containers were one and they share the same localhost. I know that it's possible if I use the hostname postgres.dev from inside the service_a container, but I'd like to be able to use localhost. Is this possible? Please note that I am not very well versed in networking or Docker.
Mac version: 10.12.4
Docker version: Docker version 17.03.0-ce, build 60ccb22
I have done quite some prior research, but couldn't find a solution.
Relevant: https://forums.docker.com/t/localhost-and-docker-compose-networking-issue/23100/2
The right way: don't use localhost. Instead, use Docker's built-in DNS networking and reference the containers by their service names. You shouldn't even be setting the container name, since that breaks scaling.
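A hedged sketch of what that looks like for this file (DB_HOST is a hypothetical variable name; the application would read it instead of hard-coding localhost):

version: "2"
services:
  service_a:
    image: service_a.dev
    ports:
      - "6473:6473"
      - "6474:6474"
      - "1812:1812"
    depends_on:
      - postgres
    environment:
      # hypothetical variable; points the app at the postgres service by name
      - DB_HOST=postgres
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
  postgres:
    image: postgres:9.6
    ports:
      - "5432:5432"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/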
The bad way: if you don't want to use the Docker networking feature, you can switch to host networking, but that turns off a key part of Docker, and other capabilities, like the option to connect containers together in their own isolated networks, will no longer work. With that disclaimer, the result would look like:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "host"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
image: postgres:9.6
network_mode: "host"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Note that I removed port publishing from the container to the host, since you're no longer in a container network. And I removed the hostname setting since you shouldn't change the hostname of the host itself from a docker container.
The forum post you linked shows that when Docker runs inside a VM, the host cannot communicate with the containers as localhost. This is an expected limitation, but the containers themselves will be able to talk to each other as localhost. If you use a VirtualBox-based install with Docker Toolbox, you should be able to reach the containers via the VirtualBox IP.
The really wrong way: abuse the container network mode. That mode is available for debugging container networking issues and for specialized use cases, and it really shouldn't be used just to avoid reconfiguring an application to use DNS. Also, when you stop the database, you'll break your other container, since it will lose its network namespace.
For this, you'll likely need to run two separate docker-compose.yml files because docker-compose will check for the existence of the network before taking any action. Start with the postgres container:
version: "2"
services:
postgres:
container_name: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Then you can make a second service in that same network namespace:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "container:postgres.dev"
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
Specifically for Mac and during local testing, I managed to get multiple containers working using the docker.for.mac.localhost approach. I documented it at http://nileshgule.blogspot.sg/2017/12/docker-tip-workaround-for-accessing.html
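For reference, a minimal sketch of that approach (the image and variable names are hypothetical; newer Docker Desktop releases use host.docker.internal for the same purpose):

services:
  app:
    image: myapp   # hypothetical image name
    environment:
      # docker.for.mac.localhost resolved to the host on older Docker for Mac
      # releases; on newer Docker Desktop use host.docker.internal instead
      - API_HOST=docker.for.mac.localhost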