How to make a Docker container use the host network? - docker

I am running Docker for Mac. My docker-compose configuration file is:
version: "2.3"
services:
base:
build:
context: .
dev:
network_mode: "host"
extends:
service: base
When the container is launched via docker-compose run --rm dev sh, it can't ping an IP address (172.25.36.32), but I can ping this address from the host. I have set network_mode: "host" in the configuration file. How can I make the Docker container share the host network?
I found that the host network doesn't work on Mac. Is there a solution for that on Mac?
Below is the docker network inspect output:
[
{
"Name": "my_container_default",
"Id": "0441cf2b99b692d2047ded88d29a470e2622a1669a7bfce96804b50d609dc3b0",
"Created": "2019-08-27T06:06:30.984427063Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"22d3e7500ccfdc7fcd192a9f5977ef32e086e340908b1c0ff007e4144cc91f2e": {
"Name": "time-series-api_dev_run_b35174fdf692",
"EndpointID": "23924b4f68570bc99e01768db53a083533092208a3c8c92b20152c7d2fefe8ce",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "time-series-api",
"com.docker.compose.version": "1.24.1"
}
}
]

I believe you need to add the network option during the build. Try:
version: "2.3"
services:
base:
build:
context: .
network: host
dev:
network_mode: "host"
extends:
service: base
EDIT: This works on Linux; for Mac, please see the documentation:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
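For reference, the same build-time setting can also be passed directly on the command line; a minimal sketch, assuming the Dockerfile sits in the current directory and the image tag is arbitrary:

docker build --network=host -t base .

Note that this only affects network access while the image is being built (for example, package downloads in RUN steps); it does not change the network the running container is attached to.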

I think you need to start the container with the up option, not run, since run overrides many options:
docker-compose up dev
or you may try --use-aliases with run:
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
see this
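For example, a minimal sketch of the run invocation with that flag, using the service and command from the question:

docker-compose run --rm --use-aliases dev sh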
P.S. After your update, the following will work on Mac:
dev:
  network: host
  extends:
    service: base

Related

Docker containers in the same network can't communicate (ARP is possible but not upper-layer messages)

I have been trying to split a simple REST API into different services using Docker. Unfortunately, I have not been able to make it work. I have read the Docker docs several times and have followed multiple Stack Overflow and Docker forum threads, but none of the answers worked for me. I am new to Docker, so I might be missing something.
I detected that host-to-container communication was fine but container-to-container communication wasn't, so in order to see what was going on I installed ping in the get and post services (which run on a debian:bullseye-slim based image) and Wireshark on my host machine. What I found is that I can ping the host (172.22.0.1) and name resolution is also okay (when I run ping post its IP is displayed), but for some reason when I send a ping request from post to get, no reply is received.
My docker-compose.yaml file is the following:
version: '3.9'
services:
  mydb:
    image: mariadb:latest
    environment:
      MYSQL_DATABASE: 'cars'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'true'
    ports:
      - "3306:3306"
    container_name: mydb
    networks:
      - mynw
  post:
    build: ./post-service
    ports:
      - "8081:8081"
    container_name: post
    networks:
      - mynw
    privileged: true
  get:
    build: ./get-service
    ports:
      - "8080:8080"
    container_name: get
    networks:
      - mynw
    privileged: true
  nginx2:
    build: ./nginx2
    ports:
      - "80:80"
    container_name: nginx2
    networks:
      - mynw
networks:
  mynw:
    external: true
Initially, I was using the default network, but I read that this might cause internal DNS problems, so I changed it. I created the network via the CLI without any special parameters (docker network create mynw). The JSON displayed when running docker network inspect mynw is the following:
[
{
"Name": "mynw",
"Id": "f925467f7efee99330f0eaaa82158006ac645cc92e7abda693f052c10da485bd",
"Created": "2022-10-14T18:42:14.145569533+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4eb6e348d84b2433199e6581b4406eb74fb93c1fc2269691b81b34c13c723db5": {
"Name": "nginx2",
"EndpointID": "b19fab264c1489b616d919f09a5b80a1774561ea6f2538beb86157065c1e787b",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"5f20802a59708bf4a592e137f52fca29dc857734983abc1c61548783e2e61896": {
"Name": "mydb",
"EndpointID": "3ef7b5d619b5b9ad9441dbc2efabd5a0e5a6bb2ea68bbd58fae8f7dfd2ac36ed",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
},
"dee816dd62aa08773134bb7a7a653544ab316275ec111817e11ba499552dea5b": {
"Name": "post",
"EndpointID": "cca2cbe801160fa6c35b3a34493d6cc9a10689cd33505ece36db9ca6dcf43900",
"MacAddress": "02:42:ac:16:00:04",
"IPv4Address": "172.22.0.4/16",
"IPv6Address": ""
},
"e23dcd0cecdb609e4df236fd8aed0999c12e1adc7b91b505fc88c53385a81292": {
"Name": "get",
"EndpointID": "83b73045887827ecbb1779cd27d5c4dac63ef3224ec42f067cfc39ba69b5484e",
"MacAddress": "02:42:ac:16:00:05",
"IPv4Address": "172.22.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Curiously, when sniffing the network using Wireshark I see that the ARP messages between the containers are exchanged without problem (the get service asks for post's MAC address and the latter replies with its MAC, and then this information is processed correctly to send the ICMP request).
I thought that maybe the network layer was dropping the replies for some reason, so I installed iptables in both services and added an ACCEPT rule for ICMP messages to both INPUT and OUTPUT, but that didn't change anything either. If someone knows what else I could try or what I am missing, it would be very helpful.
Finally, the solution was to delete everything and reinstall Docker and Docker Compose.
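For anyone hitting the same symptom (ARP exchanged, but no ICMP/TCP replies between containers), a less drastic check worth trying before a full reinstall is whether the host firewall is dropping forwarded bridge traffic. A hedged sketch of the host-side commands (these are standard iptables/Docker chains and a standard sysctl, not something taken from the question):

sudo iptables -L FORWARD -n -v   # is the default policy DROP?
sudo iptables -L DOCKER-USER -n -v   # rules Docker evaluates before its own
sudo sysctl net.bridge.bridge-nf-call-iptables   # is bridged traffic passed through iptables?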

Cannot access Postgres (in a container) from another container in the same network

I have learned a lot about Docker on my own, but I am still facing one major problem and I need your help, please.
This is my docker-compose file:
version: '3.3'
services:
  postgres:
    container_name: postgres-tc
    networks:
      - tools-net
    image: postgres
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=Admin10..
      - POSTGRES_HOST_AUTH_METHOD=trust
  #service
  teamcity-server:
    ports:
      - '8112:8111'
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
      - '/data/teamcity_server/datadir:/data/envdata/tc/datadir'
      - '/opt/teamcity/logs:/data/envdata/tc/logs'
    links:
      - "postgres:postgres"
    logging:
      options:
        max-size: 1g
    container_name: tc-server
    networks:
      - tools-net
    image: jetbrains/teamcity-server
networks:
  tools-net:
    external: true
teamcity-server needs to access postgres on its port to start working.
Both are in the same network, created by this command:
docker network create -d bridge --subnet 172.50.0.0/16 tools-net
Here is the network inspect output after running docker-compose up:
[
{
"Name": "tools-net",
"Id": "74708d3d114394032cbeb5f0a2a93893da38ce5dae2a555a451a189b00b52b2e",
"Created": "2021-07-04T07:04:39.105791768Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.50.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"15a57ccfc229e361e40e940d01b6d6025820fee5ad50db4a61d0c411d4d61750": {
"Name": "postgres-tc",
"EndpointID": "8d5b9d192ed90545abe958134b9853d0aecba33cabd56f31a2c9681106ccdf6e",
"MacAddress": "02:42:ac:32:00:02",
"IPv4Address": "172.50.0.2/16",
"IPv6Address": ""
},
"94eaa0ea0524ca4419ba8e300e80687db487e4f46b6623dabcc15d65c60bdde6": {
"Name": "tc-server",
"EndpointID": "90825befcc5633c3c59c5ec9d58b188d2862cd65cd2283b5c56ec3ecf5a95fd6",
"MacAddress": "02:42:ac:32:00:03",
"IPv4Address": "172.50.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Now when I try to access postgres from teamcity-server I get this error:
Please help me solve this issue, guys.
Thanks in advance
UPDATE:
It seems to be better with your help, @Hans.
For now I have another issue:
Maybe I have to add a permission in postgres's pg_hba.conf, which I cannot locate within the container.
Can you help please?
Thanks
You're telling TeamCity to connect to a Postgres server running on the default localhost address. In Docker terms, that means one running in the same container as TeamCity. It isn't.
You need to tell TeamCity to connect to the database host postgres-tc. That's the network name of the Postgres container on your network.
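As an illustration, a hedged sketch of a TeamCity JDBC connection URL using that container name (the port is the Postgres default and the database name is an assumption, not taken from the question), plus a way to locate the pg_hba.conf mentioned in the update:

jdbc:postgresql://postgres-tc:5432/teamcity

docker exec -it postgres-tc psql -U postgres -c 'SHOW hba_file;'
# usually /var/lib/postgresql/data/pg_hba.conf in the official postgres image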

Docker network affecting host network on container restart: ERR_NETWORK_CHANGED in Chrome

I am facing a weird issue with Docker networks. I am using an external bridge network named extenal_network in my Docker containers with auto-restart enabled. I am not able to access my host network if any of the containers is restarting due to some error, maybe code- or infra-related.
Please refer to the attached screenshot for more clarity.
I've tried the links below, but with no luck.
https://superuser.com/questions/1336567/installing-docker-ce-in-ubuntu-18-04-breaks-internet-connectivity-of-host
https://success.docker.com/article/how-do-i-influence-which-network-address-ranges-docker-chooses-during-a-docker-network-create
https://forums.docker.com/t/cant-access-internet-after-installing-docker-in-a-fresh-ubuntu-18-04-machine/53416
https://superuser.com/questions/747735/regularly-getting-err-network-changed-errors-in-chrome/773971
Dockerfile
FROM node:10.20.1-alpine
RUN apk add --no-cache python make g++
WORKDIR /home/app
COPY package.json package-lock.json* ./
RUN npm install
COPY . .
Docker-Compose
version: "3"
services:
app:
container_name: app
build:
context: ./
dockerfile: Dockerfile
image: 'network_poc:latest'
ports:
- 8080:8080
deploy:
resources:
limits:
memory: 2G
networks:
- extenal_network
restart: always
command: node index.js
networks:
shared_network:
external:
name: extenal_network
docker inspect extenal_network
[
{
"Name": "extenal_network",
"Id": "96476c227ddc14aa23d376392d380b2674fcbad109c90e7436c0cddd5c0a9ac5",
"Created": "2020-04-14T00:17:10.89980675+05:30",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7621a2b5a7e460a905bf86c427aea38b6374ac621c0c1a2b9eca4b671aea4dfe": {
"Name": "app",
"EndpointID": "04e9d14a17af05eb7a2b478526365cbce7f726a62f5e2cd315244c2639891b1e",
"MacAddress": "**:**:**:**:**:**",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Any help or suggestion is highly appreciated.
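No answer is shown for this question, but one mitigation suggested by the linked articles is to recreate the external network with an explicit subnet that does not overlap the host's LAN or VPN ranges; a hedged sketch (the subnet value here is only an example):

docker network rm extenal_network
docker network create --driver bridge --subnet 192.168.200.0/24 extenal_network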

How to set up a MySQL host limitation that works with a Docker container

I'm setting up a mysql:5 container (image from Docker Hub) in a sub-network shared with another container (a web server).
I'd like to limit my MySQL user to the host that corresponds to the web server container.
If I limit it directly with the web-server container IP it works, but this IP can change because of the Docker environment, so I'd like to have something like:
GRANT ALL PRIVILEGES ON `db`.* TO 'user'@'container-name'
And when I try to connect, the server responds:
Access denied for user 'user'@'172.18.0.4'
Where 172.18.0.4 is the correct IP of the web-server container.
Example:
docker-compose.yaml
version: '2'
services:
  mysql1:
    image: mysql
    container_name: mysql1
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
  mysql2:
    image: mysql
    container_name: mysql2
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
Bring up the containers:
docker-compose up -d
Create the user in mysql1:
docker-compose exec mysql1 mysql -u root --password="rootpwd" -e "CREATE USER user@mysql2 IDENTIFIED BY 'pwd'; GRANT ALL PRIVILEGES ON * TO user@mysql2" sys
Try to access mysql1 from mysql2 as that user:
docker-compose exec mysql2 mysql -u user --password="pwd" -h mysql1 sys
ERROR 1045 (28000): Access denied for user 'user'@'172.18.0.3' (using password: YES)
Docker network info
docker network inspect test-mysql-user-host_default
{
"Name": "test-mysql-user-host_default",
"Id": "305f4da33e0b79d899ac289e6b3fc1ebf2733baf0bf3d60a53cc94cec44176d1",
"Created": "2019-04-26T09:53:23.3237197Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"133ebd1e6ba485212e5448670c66c1718917650bc217264183c725fb1a928118": {
"Name": "mysql1",
"EndpointID": "ce89aa1674e9c46fad50b2f36aec8d1eecf2227f597a785be67785ade770fef7",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"d67072a5c486752b7018fa321e47ae09fb873199604c2b520f2305968d43b577": {
"Name": "mysql2",
"EndpointID": "e6b62c6ce9e266d38be383fa6029f378e1ca67a18420dd3f508a3089200c0d98",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Comment from bellackn:
It seems to be a problem with MySQL's skip-name-resolve option, which is currently hard-coded in the Dockerfile (see this GitHub issue). TL;DR: You could create your own image based on mysql:5 and sed this option away, or you could use the % wildcard instead of mysql2.
He's right. Here is the official Dockerfile for mysql:5.7, and at line 70 we can find:
#don't reverse lookup hostnames, they are usually another container
&& echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
I made a new Dockerfile which removes this configuration file:
FROM mysql:5.7
RUN rm /etc/mysql/conf.d/docker.cnf
EXPOSE 3306 33060
CMD ["mysqld"]

How can a Docker container on two subnets access the internet? (using docker-compose)

I have a container with two subnets:
one is the reverse-proxy subnet
the second one is the internal subnet for the different containers of that project
The container needs to access an external SMTP server (on mailgun.com), but it doesn't look like, with docker-compose, you can both put a container on one or more subnets and give it access to the host network at the same time.
Is there a way to allow this container to initiate connections to the outside world?
And, if not, what common workarounds are used? (for example, adding an extra IP to the container to be on the host network, etc.)
This is the docker compose file:
version: '2.3'
services:
  keycloak:
    container_name: keycloak
    image: jboss/keycloak
    restart: unless-stopped
    volumes:
      - '/appdata/keycloak:/opt/jboss/keycloak/standalone/data'
    expose:
      - 8080
    external_links:
      - auth
    networks:
      - default
      - nginx
    environment:
      KEYCLOAK_USER: XXXX
      KEYCLOAK_PASSWORD: XXXX
      PROXY_ADDRESS_FORWARDING: 'true'
      ES_JAVA_OPTS: '-Xms512m -Xmx512m'
      VIRTUAL_HOST: auth.XXXX.com
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: auth.XXXX.com
      LETSENTRYPT_EMAIL: admin@XXXX.com
networks:
  default:
    external:
      name: app-network
  nginx:
    external:
      name: nginx-proxy
The networks are as follows:
$ dk network ls
NETWORK ID NAME DRIVER SCOPE
caba49ae8b1c bridge bridge local
2b311986a6f6 app-network bridge local
67f70f82aea2 host host local
9e0e2fe50385 nginx-proxy bridge local
dab9f171e37f none null local
and nginx-proxy network info is:
$ dk network inspect nginx-proxy
[
{
"Name": "nginx-proxy",
"Id": "9e0e2fe503857c5bc532032afb6646598ee0a08e834f4bd89b87b35db1739dae",
"Created": "2019-02-18T10:16:38.949628821Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"360b49ab066853a25cd739a4c1464a9ac25fe56132c596ce48a5f01465d07d12": {
"Name": "keycloak",
"EndpointID": "271ed86cac77db76f69f6e76686abddefa871b92bb60a007eb131de4e6a8cb53",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"379dfe83d6739612c82e99f3e8ad9fcdfe5ebb8cdc5d780e37a3212a3bf6c11b": {
"Name": "nginx-proxy",
"EndpointID": "0fcf186c6785dd585b677ccc98fa68cc9bc66c4ae02d086155afd82c7c465fef",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"4c944078bcb1cca2647be30c516b8fa70b45293203b355f5d5e00b800ad9a0d4": {
"Name": "adminmongo",
"EndpointID": "65f1a7a0f0bcef37ba02b98be8fa1f29a8d7868162482ac0b957f73764f73ccf",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"671cc99775e09077edc72617836fa563932675800cb938397597e17d521c53fe": {
"Name": "portainer",
"EndpointID": "950e4b5dcd5ba2a13acba37f50e315483123d7da673c8feac9a0f8d6f8b9eb2b",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"90a98111cbdebe76920ac2ebc50dafa5ea77eba9f42197216fcd57bad9e0516e": {
"Name": "kibana",
"EndpointID": "fe1768274eec9c02c28c74be0104326052b9b9a9c98d475015cd80fba82ec45d",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Update:
The following test was done to try out the solution proposed by lbndev.
A test network was created:
# docker network create \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
-o"com.docker.network.driver.mtu"="1500" \
test_network
e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9
We can display its contents:
# docker inspect test_network
[
{
"Name": "test_network",
"Id": "e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9",
"Created": "2019-02-24T21:52:44.678870135+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Then we can inspect the container:
I put the contents on pastebin: https://pastebin.com/5bJ7A9Yp since it's quite large and would make this post unreadable.
and testing:
# docker exec -it 5d09230158dd sh
sh-4.2$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10006ms
So, we couldn't get this solution to work.
Looks like your bridge network is missing a few options that allow it to reach the outside world.
Try executing docker network inspect bridge (the default bridge network). You'll see this in the options:
...
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
...
On your nginx-proxy network, these are missing.
You should delete your network and re-create it with these additional options. From the documentation on user-defined bridge networks and the docker network create command:
docker network create \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
-o"com.docker.network.driver.mtu"="1500" \
nginx-proxy
Enabling ICC or not is up to you.
What will enable you to reach your mail server is having ip_masquerade enabled. Without this setting, your physical infrastructure (i.e. network routers) would need to properly route the IPs of the Docker network subnet (which I assume is not the case).
Alternatively, you could configure your Docker network's subnet, IP range and gateway to match those of your physical network.
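If the network is managed by docker-compose instead of being created by hand, roughly the same options can be declared under the top-level networks key via driver_opts; a hedged sketch (the option names mirror the CLI flags above, the rest of the compose file is omitted):

networks:
  nginx-proxy:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"
      com.docker.network.bridge.host_binding_ipv4: "0.0.0.0"
      com.docker.network.driver.mtu: "1500"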
In the end, the problem turned out to be very simple:
In the daemon.json file of the Docker config, there was the following line:
{"iptables": false, "dns": ["1.1.1.1", "1.0.0.1"]}
It came from the setup scripts we'd been using, and we didn't know about iptables: false.
It prevents Docker from updating the host's iptables; while the bridge networks were set up correctly, no communication with the outside was possible.
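For completeness, a hedged sketch of the corrected daemon.json (keeping the DNS entries from the original file; simply removing the iptables key also works, since true is the default) followed by the restart that applies it, assuming a systemd-based host:

{"iptables": true, "dns": ["1.1.1.1", "1.0.0.1"]}

sudo systemctl restart docker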
While simple in nature, it proved very long to find, so I’m posting it as an answer with the hope it might help someone.
Thanks to everyone involved for trying to solve this issue!
