Cannot access Neo4j container with Neomodel - docker

I have 2 different containers running with docker-compose. Here is my docker-compose.yml:
version: '3'
services:
  # Create backend container
  backend:
    build: ./backend # path to Dockerfile
    ports: # Port binding to host from docker container
      - "5000:5000"
    container_name: buzzworks-backend
    volumes:
      - ${PWD}/backend:/app
    depends_on:
      - db
    environment:
      FLASK_APP: flaskr
      FLASK_ENV: development
      NEO_USER: ${NEO_USER}
      NEO_PW: ${NEO_PW}
  db:
    image: neo4j:4.1.1
    container_name: buzzworks-neo4j
    ports:
      - "7474:7474"
      - "7687:7687"
    volumes:
      - ${HOME}/neo4j/data:/data
      - ${HOME}/neo4j/logs:/logs
      - ${HOME}/neo4j/import:/var/lib/neo4j/import
      - ${HOME}/neo4j/plugins:/plugins
    environment:
      NEO4J_AUTH: ${NEO_USER}/${NEO_PW}
      NEO4J_dbms_logs_debug_level: ${NEO_DEBUG_LEVEL}
The corresponding network it generates looks right to me:
[
    {
        "Name": "buzzworksai_default",
        "Id": "db4efc0286a9464cadde13cf1306f241b7a353295904b15b163e761289ba9d3f",
        "Created": "2020-08-27T11:23:15.925483629-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "640650c163f746e480bf677abdeaf8edf6483b7dac2a260c2e3b3bc3319dffef": {
                "Name": "buzzworks-neo4j",
                "EndpointID": "ddbad1a179cc51655a779b07c91d6d949b0612bf985abc9c45e1794b35f4a565",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "ba47173d1dbc31e4e416eaf30d2314e6d2a20a36b389cb76cd1edcbea489184e": {
                "Name": "buzzworks-backend",
                "EndpointID": "17ff278f3db5ad609be682cdf912ca755587e07ef08d6023bf3ecb33a6c4bc31",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "buzzworksai",
            "com.docker.compose.version": "1.26.2"
        }
    }
]
I can access the web interface of the Neo4j database just fine. The problem occurs when I try to connect to the database with neomodel. I have tried running neomodel_remove_labels --db bolt://<user>:<password>@db:7687 with the appropriate credentials from the shell of the Python container. I get this error:
Traceback (most recent call last):
  File "/usr/local/bin/neomodel_remove_labels", line 35, in <module>
    main()
  File "/usr/local/bin/neomodel_remove_labels", line 30, in main
    db.set_connection(bolt_url)
  File "/usr/local/lib/python3.8/dist-packages/neomodel/util.py", line 93, in set_connection
    self.driver = GraphDatabase.driver(u.scheme + '://' + hostname,
  File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 108, in driver
    return Driver(uri, **config)
  File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 147, in __new__
    return subclass(uri, **config)
  File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 221, in __new__
    pool.release(pool.acquire())
  File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 715, in acquire
    return self.acquire_direct(self.address)
  File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 608, in acquire_direct
    connection = self.connector(address, error_handler=self.connection_error_handler)
  File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 218, in connector
    return connect(address, **dict(config, **kwargs))
  File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 972, in connect
    raise last_error
  File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 963, in connect
    s, der_encoded_server_certificate = _secure(s, host, security_plan.ssl_context, **config)
  File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 854, in _secure
    s = ssl_context.wrap_socket(s, server_hostname=host if HAS_SNI and host else None)
  File "/usr/lib/python3.8/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.8/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.8/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
OSError: [Errno 0] Error
I have tried to access the container from my host with bolt://<user>:<password>@localhost:7687, but I still get the same error.

I believe this may be your issue; it is related to Neo4j: https://github.com/neo4j/neo4j/issues/12392
The suggestion there is:
Neo4j 4.0 has encryption turned off by default. You need to explicitly turn encryption back on for the Bolt server in the server config file. Then you shall be able to connect using the 1.7 Python driver with default settings.
The following is an example of how you can turn encryption back on for the Bolt server, with private.key and public.crt in the directory $neo4jHome/certificates/bolt:
dbms.connector.bolt.enabled=true
# allows both encrypted and unencrypted driver connections
dbms.connector.bolt.tls_level=OPTIONAL
dbms.ssl.policy.bolt.enabled=true
dbms.ssl.policy.bolt.base_directory=certificates/bolt
#dbms.ssl.policy.bolt.private_key=private.key
#dbms.ssl.policy.bolt.public_certificate=public.crt
You can choose any trusted key and certificate service to generate the private key and public certificate used here.
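Alternatively, if encryption stays off, the bolt URL itself is worth double-checking: it needs an @ separator (not #) and, from inside the backend container, the compose service name db as host. A small sketch with hypothetical credentials that assembles and inspects such a URL:

```python
import os
from urllib.parse import urlparse

# Hypothetical credentials; in the compose file above they come from
# the NEO_USER / NEO_PW environment variables.
user = os.environ.get("NEO_USER", "neo4j")
password = os.environ.get("NEO_PW", "secret")

# From inside the backend container the host is the compose service
# name "db"; from the host machine it would be localhost instead.
bolt_url = f"bolt://{user}:{password}@db:7687"

parsed = urlparse(bolt_url)
print(parsed.scheme)    # bolt
print(parsed.hostname)  # db
print(parsed.port)      # 7687
```

If urlparse does not report the expected hostname and port, the driver will not reach the right container regardless of the encryption settings.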

You need to set the networks option in your docker-compose file. Please read this link and then you will understand it well.
You should do something like this:
version: '3'
services:
  # Create backend container
  backend:
    build: ./backend # path to Dockerfile
    ports: # Port binding to host from docker container
      - "5000:5000"
    container_name: buzzworks-backend
    volumes:
      - ${PWD}/backend:/app
    depends_on:
      - db
    environment:
      FLASK_APP: flaskr
      FLASK_ENV: development
      NEO_USER: ${NEO_USER}
      NEO_PW: ${NEO_PW}
    networks:
      - mynetwork
  db:
    image: neo4j:4.1.1
    container_name: buzzworks-neo4j
    ports:
      - "7474:7474"
      - "7687:7687"
    volumes:
      - ${HOME}/neo4j/data:/data
      - ${HOME}/neo4j/logs:/logs
      - ${HOME}/neo4j/import:/var/lib/neo4j/import
      - ${HOME}/neo4j/plugins:/plugins
    environment:
      NEO4J_AUTH: ${NEO_USER}/${NEO_PW}
      NEO4J_dbms_logs_debug_level: ${NEO_DEBUG_LEVEL}
    networks:
      - mynetwork
networks:
  mynetwork:
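Whichever network layout you use, DNS and reachability between the two containers can be verified independently of neomodel. A minimal sketch (the port_open helper is my own, not part of any library) that could be run from the backend container's shell:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From inside the backend container, service names are DNS names on the
# shared compose network, so this should succeed once Neo4j is up:
# port_open("db", 7687)
```

If this returns False for db:7687, the problem is networking; if it returns True while neomodel still fails, the problem is the driver/TLS configuration.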

Related

Docker containers in the same network can't communicate (ARP is possible but not upper layers messages)

I have been trying to split a simple REST API into different services using Docker. Unfortunately, I have not been able to make it work. I have read the Docker docs several times and have followed multiple Stack Overflow and Docker forum threads, but none of the answers worked for me. I am new to Docker, so I might be missing something.
I detected that host-to-container communication was OK but container-to-container wasn't, so in order to see what was going on I installed ping on the get and post services (which run on a debian:bullseye-slim based image) and also Wireshark on my host machine. What I have detected is that I can ping the host (172.22.0.1) and name resolution is also fine (when I run ping post its IP is displayed), but for some reason when I send a ping request from post to get, no reply is received.
My docker-compose.yaml file is the following:
version: '3.9'
services:
  mydb:
    image: mariadb:latest
    environment:
      MYSQL_DATABASE: 'cars'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'true'
    ports:
      - "3306:3306"
    container_name: mydb
    networks:
      - mynw
  post:
    build: ./post-service
    ports:
      - "8081:8081"
    container_name: post
    networks:
      - mynw
    privileged: true
  get:
    build: ./get-service
    ports:
      - "8080:8080"
    container_name: get
    networks:
      - mynw
    privileged: true
  nginx2:
    build: ./nginx2
    ports:
      - "80:80"
    container_name: nginx2
    networks:
      - mynw
networks:
  mynw:
    external: true
Initially, I was using the default network, but I read that this might cause internal DNS problems, so I changed it. I created the network via the CLI without any special parameters (docker network create mynw). The JSON displayed when running docker network inspect mynw is the following:
[
    {
        "Name": "mynw",
        "Id": "f925467f7efee99330f0eaaa82158006ac645cc92e7abda693f052c10da485bd",
        "Created": "2022-10-14T18:42:14.145569533+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "4eb6e348d84b2433199e6581b4406eb74fb93c1fc2269691b81b34c13c723db5": {
                "Name": "nginx2",
                "EndpointID": "b19fab264c1489b616d919f09a5b80a1774561ea6f2538beb86157065c1e787b",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            },
            "5f20802a59708bf4a592e137f52fca29dc857734983abc1c61548783e2e61896": {
                "Name": "mydb",
                "EndpointID": "3ef7b5d619b5b9ad9441dbc2efabd5a0e5a6bb2ea68bbd58fae8f7dfd2ac36ed",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            },
            "dee816dd62aa08773134bb7a7a653544ab316275ec111817e11ba499552dea5b": {
                "Name": "post",
                "EndpointID": "cca2cbe801160fa6c35b3a34493d6cc9a10689cd33505ece36db9ca6dcf43900",
                "MacAddress": "02:42:ac:16:00:04",
                "IPv4Address": "172.22.0.4/16",
                "IPv6Address": ""
            },
            "e23dcd0cecdb609e4df236fd8aed0999c12e1adc7b91b505fc88c53385a81292": {
                "Name": "get",
                "EndpointID": "83b73045887827ecbb1779cd27d5c4dac63ef3224ec42f067cfc39ba69b5484e",
                "MacAddress": "02:42:ac:16:00:05",
                "IPv4Address": "172.22.0.5/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Curiously, when sniffing the network using Wireshark I see that the ARP messages between the containers are exchanged without problem (the get service asks for post's MAC address, post replies with its MAC, and that information is then used correctly to send the ICMP request).
I thought that maybe the network layer was dropping the replies for some reason, so I installed iptables in both services and added an ACCEPT rule for ICMP messages to both INPUT and OUTPUT, but that also didn't change anything. If someone knows what else I could do, or what I am missing, it would be very helpful.
In the end, the solution was to delete everything and reinstall Docker and Docker Compose.

Bad Gateway when using Traefik in docker swarm

I'm currently struggling a lot to spin up a small traefik example on my Docker Swarm instance.
I started with a docker-compose file for local development, and everything worked as expected.
But when I define this as a swarm file to bring that environment into production, I always get a Bad Gateway from traefik.
After searching a lot, this seems to be related to a networking issue in traefik, since it tries to request between two different networks, but I'm not able to find the issue.
After several iterations I tried to reproduce the issue with "official" containers, to provide a better example for other people.
So this is my traefik.yml
version: "3.7"
networks:
  external:
    external: true
services:
  traefik:
    image: "traefik:v2.8.1"
    command:
      - "--log.level=INFO"
      - "--accesslog=true"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=external"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.forwardedHeaders.insecure"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - external
    deploy:
      placement:
        constraints: [node.role == manager]
  host-app:
    image: traefik/whoami
    ports:
      - "9000:80"
    networks:
      - external
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
        - "traefik.http.services.host-app.loadbalancer.server.port=9000"
        - "traefik.http.routers.host-app.entrypoints=web"
        - "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
        - "traefik.http.routers.host-app.middlewares=host-app-stripprefix@docker"
        - "traefik.docker.network=external"
The network is created with: docker network create -d overlay external
and I deploy the stack with docker stack deploy -c traefik.yml server
Until here no issues and everything spins up fine.
When I curl localhost:9000 I get the correct response:
curl localhost:9000
Hostname: 7aa77bc62b44
IP: 127.0.0.1
IP: 10.0.0.8
IP: 172.25.0.4
IP: 10.0.4.6
RemoteAddr: 10.0.0.2:35068
GET / HTTP/1.1
Host: localhost:9000
User-Agent: curl/7.68.0
Accept: */*
but on
curl localhost/whoami
Bad Gateway%
I always get the Bad Gateway error.
So I checked my network with docker network inspect external to ensure that both are running in the same network and this is the case.
[
    {
        "Name": "external",
        "Id": "iianul6ua9u1f1bb8ibsnwkyc",
        "Created": "2022-08-09T19:32:01.4491323Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.4.0/24",
                    "Gateway": "10.0.4.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7aa77bc62b440e32c7b904fcbd91aea14e7a73133af0889ad9e0c9f75f2a884a": {
                "Name": "server_host-app.1.m2f5x8jvn76p2ssya692f4ydp",
                "EndpointID": "5d5175b73f1aadf2da30f0855dc0697628801a31d37aa50d78a20c21858ccdae",
                "MacAddress": "02:42:0a:00:04:06",
                "IPv4Address": "10.0.4.6/24",
                "IPv6Address": ""
            },
            "e23f5c2897833f800a961ab49a4f76870f0377b5467178a060ec938391da46c7": {
                "Name": "server_traefik.1.v5g3af00gqpulfcac84rwmnkx",
                "EndpointID": "4db5d69e1ad805954503eb31c4ece5a2461a866e10fcbf579357bf998bf3490b",
                "MacAddress": "02:42:0a:00:04:03",
                "IPv4Address": "10.0.4.3/24",
                "IPv6Address": ""
            },
            "lb-external": {
                "Name": "external-endpoint",
                "EndpointID": "ed668b033450646629ca050e4777ae95a5a65fa12a5eb617dbe0c4a20d84be28",
                "MacAddress": "02:42:0a:00:04:04",
                "IPv4Address": "10.0.4.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4100"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "3cb3e7ba42dc",
                "IP": "192.168.65.3"
            }
        ]
    }
]
and by checking the traefik logs I get the following:
10.0.0.2 - - [09/Aug/2022:19:42:34 +0000] "GET /whoami HTTP/1.1" 502 11 "-" "-" 4 "host-app@docker" "http://10.0.4.9:9000" 0ms
which is the correct server:port for the whoami service. Even connecting into the traefik container and pinging 10.0.4.9 works fine:
PING 10.0.4.9 (10.0.4.9): 56 data bytes
64 bytes from 10.0.4.9: seq=0 ttl=64 time=0.066 ms
64 bytes from 10.0.4.9: seq=1 ttl=64 time=0.057 ms
^C
--- 10.0.4.9 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.061/0.066 ms
These logs and snippets are all from my local swarm on Docker for Windows with a WSL2 Ubuntu distribution. But I tested this on a CentOS swarm available within my company, and also on https://labs.play-with-docker.com/, and it all leads to the same error.
So can anybody please tell me what configuration I'm missing, or what mistake I made, to get this running?
After consulting a coworker and creating another example, we finally found the solution ourselves.
It was simply my own mistake: I used the published port for load balancing from traefik to the service, which is wrong.
host-app:
  image: traefik/whoami
  ports:
    - "9000:80"
  networks:
    - external
  deploy:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
      - "traefik.http.services.host-app.loadbalancer.server.port=80" # <--- fixed: the container port, not the published port 9000
      - "traefik.http.routers.host-app.entrypoints=web"
      - "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
      - "traefik.http.routers.host-app.middlewares=host-app-stripprefix@docker"
      - "traefik.docker.network=external"
and that's the reason for the Bad Gateway, since traefik tries to reach the published port on the service, which is not open inside the overlay network.
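To restate the distinction as a sketch (the values are the ones from the stack file above): the ports: mapping only matters for traffic entering via the host, while the loadbalancer label must name the port the process listens on inside the container:

```yaml
ports:
  - "9000:80"   # host:container -- only for traffic entering via the host
deploy:
  labels:
    # traefik talks to the task over the overlay network,
    # so this must be the container-internal port:
    - "traefik.http.services.host-app.loadbalancer.server.port=80"
```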

Update docker mountpoint

I've mounted a docker-compose volume to a mounted drive
geoserver:
  build: ./geoserver
  env_file:
    - .env
    - .geoserver
  links:
    - db
  expose:
    - 8080
    - 8443
  volumes:
    - ./geoserver-exts:/var/local/geoserver-exts/
    - geoserver_data:/var/local/geoserver
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.traefik.rule=Host(`${SITE_URL}`)"
    - "traefik.http.routers.traefik.entrypoints=websecure"
    - "traefik.http.routers.traefik.tls=true"
    - "traefik.http.routers.traefik.tls.certresolver=leresolver"

volumes:
  geo-db-data:
  le-certs:
  geoserver_data:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/media/base_geoserver_geoserver_data/_data'
But after docker-compose down, rebuilding the container, and a fresh up -d, the volume still points to the /var/lib/docker/volumes/base_geoserver_geoserver_data/_data path instead of /media/base_geoserver_geoserver_data/_data (and takes up space there):
sudo docker volume inspect base_geoserver_geoserver_data
[
    {
        "CreatedAt": "2022-01-27T17:08:44Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "base_geoserver",
            "com.docker.compose.version": "1.29.2",
            "com.docker.compose.volume": "geoserver_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/base_geoserver_geoserver_data/_data",
        "Name": "base_geoserver_geoserver_data",
        "Options": {
            "device": "/media/base_geoserver_geoserver_data/_data",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
How can I force the mount point path to be refreshed so that data is written to my /media volume?

Cannot access Postgres (in container) from another container in the same network

I have learned a lot about Docker on my own, but I am still facing one major problem and need your help, please.
This is my docker-compose file:
version: '3.3'
services:
  postgres:
    container_name: postgres-tc
    networks:
      - tools-net
    image: postgres
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=Admin10..
      - POSTGRES_HOST_AUTH_METHOD=trust
  # service
  teamcity-server:
    ports:
      - '8112:8111'
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
      - '/data/teamcity_server/datadir:/data/envdata/tc/datadir'
      - '/opt/teamcity/logs:/data/envdata/tc/logs'
    links:
      - "postgres:postgres"
    logging:
      options:
        max-size: 1g
    container_name: tc-server
    networks:
      - tools-net
    image: jetbrains/teamcity-server
networks:
  tools-net:
    external: true
teamcity-server needs to access Postgres on its port to start working.
Both are in the same network, created by this command:
docker network create -d bridge --subnet 172.50.0.0/16 tools-net
Here is the network inspect output after running docker-compose up:
[
    {
        "Name": "tools-net",
        "Id": "74708d3d114394032cbeb5f0a2a93893da38ce5dae2a555a451a189b00b52b2e",
        "Created": "2021-07-04T07:04:39.105791768Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.50.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "15a57ccfc229e361e40e940d01b6d6025820fee5ad50db4a61d0c411d4d61750": {
                "Name": "postgres-tc",
                "EndpointID": "8d5b9d192ed90545abe958134b9853d0aecba33cabd56f31a2c9681106ccdf6e",
                "MacAddress": "02:42:ac:32:00:02",
                "IPv4Address": "172.50.0.2/16",
                "IPv6Address": ""
            },
            "94eaa0ea0524ca4419ba8e300e80687db487e4f46b6623dabcc15d65c60bdde6": {
                "Name": "tc-server",
                "EndpointID": "90825befcc5633c3c59c5ec9d58b188d2862cd65cd2283b5c56ec3ecf5a95fd6",
                "MacAddress": "02:42:ac:32:00:03",
                "IPv4Address": "172.50.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Now when I try to access Postgres from the teamcity-server I get this error:
Please help me solve this issue.
Thanks in advance.
UPDATE:
It seems to be better with your help, @Hans.
For now I have another issue:
Maybe I have to add a permission in Postgres' pg_hba.conf, which I cannot locate within the container.
Can you help please?
Thanks
You're telling TeamCity to connect to a Postgres server running on the default localhost address. In Docker terms, that means running in the same container as TeamCity. It isn't.
You need to tell TeamCity to connect to the database host postgres-tc. That's the network name of the Postgres container on your network.
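Concretely, the database connection URL configured in TeamCity would then point at postgres-tc rather than localhost. A sketch (the database name teamcity is hypothetical; the password is the one from the compose file):

```python
from urllib.parse import urlparse

# Hypothetical database name; the key point is the host part:
# the compose container name "postgres-tc", not localhost/127.0.0.1.
db_url = "postgresql://postgres:Admin10..@postgres-tc:5432/teamcity"

p = urlparse(db_url)
print(p.hostname)  # postgres-tc
print(p.port)      # 5432
```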

How to set up mysql host limitation working with docker container

I'm setting up a mysql:5 container (image from Docker Hub) on a sub-network shared with another container (a web server).
I'd like to limit my MySQL user to the host that corresponds to the web server container.
If I limit it directly with the web server container's IP it works, but this IP can change in the Docker environment, so I'd like to have something like:
GRANT ALL PRIVILEGES ON `db`.* TO 'user'@'container-name'
And when I try to connect, the server responds:
Access denied for user 'user'@'172.18.0.4'
where 172.18.0.4 is the correct IP for the web server container.
Example:
docker-compose.yaml
version: '2'
services:
  mysql1:
    image: mysql
    container_name: mysql1
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
  mysql2:
    image: mysql
    container_name: mysql2
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
Bring up the containers:
docker-compose up -d
Create the user in mysql1:
docker-compose exec mysql1 mysql -u root --password="rootpwd" -e "CREATE USER user@mysql2 IDENTIFIED BY 'pwd'; GRANT ALL PRIVILEGES ON * TO user@mysql2" sys
Try to access mysql1 from mysql2 as that user:
docker-compose exec mysql2 mysql -u user --password="pwd" -h mysql1 sys
ERROR 1045 (28000): Access denied for user 'user'@'172.18.0.3' (using password: YES)
Docker network info
docker network inspect test-mysql-user-host_default
[
    {
        "Name": "test-mysql-user-host_default",
        "Id": "305f4da33e0b79d899ac289e6b3fc1ebf2733baf0bf3d60a53cc94cec44176d1",
        "Created": "2019-04-26T09:53:23.3237197Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "133ebd1e6ba485212e5448670c66c1718917650bc217264183c725fb1a928118": {
                "Name": "mysql1",
                "EndpointID": "ce89aa1674e9c46fad50b2f36aec8d1eecf2227f597a785be67785ade770fef7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "d67072a5c486752b7018fa321e47ae09fb873199604c2b520f2305968d43b577": {
                "Name": "mysql2",
                "EndpointID": "e6b62c6ce9e266d38be383fa6029f378e1ca67a18420dd3f508a3089200c0d98",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Comment from bellackn:
It seems to be a problem with MySQL's skip-name-resolve option, which is currently hard-coded in the Dockerfile (see this GitHub issue). TL;DR: You could create your own image based on mysql:5 and sed this option away, or you use the % wildcard instead of mysql2.
He's right. Here is the official Dockerfile for mysql:5.7, and at line 70 we can find:
#don't reverse lookup hostnames, they are usually another container
&& echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
I made a new Dockerfile which removes this configuration file:
FROM mysql:5.7
RUN rm /etc/mysql/conf.d/docker.cnf
EXPOSE 3306 33060
CMD ["mysqld"]
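The compose file can then build this patched image in place of pulling mysql directly (the directory name below is my own illustration; it should be wherever you put the Dockerfile above):

```yaml
services:
  mysql1:
    build: ./mysql-noskip   # directory containing the patched Dockerfile
    container_name: mysql1
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
```

With name resolution re-enabled, MySQL can match the connecting container's reverse-DNS name against host-limited grants instead of seeing only its IP.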
