I've pointed a docker-compose named volume at a mounted drive:
geoserver:
  build: ./geoserver
  env_file:
    - .env
    - .geoserver
  links:
    - db
  expose:
    - 8080
    - 8443
  volumes:
    - ./geoserver-exts:/var/local/geoserver-exts/
    - geoserver_data:/var/local/geoserver
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.traefik.rule=Host(`${SITE_URL}`)"
    - "traefik.http.routers.traefik.entrypoints=websecure"
    - "traefik.http.routers.traefik.tls=true"
    - "traefik.http.routers.traefik.tls.certresolver=leresolver"

volumes:
  geo-db-data:
  le-certs:
  geoserver_data:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/media/base_geoserver_geoserver_data/_data'
But after docker-compose down, rebuilding the container, and a fresh docker-compose up -d, the volume still points at /var/lib/docker/volumes/base_geoserver_geoserver_data/_data instead of /media/base_geoserver_geoserver_data/_data (and takes up space there).
sudo docker volume inspect base_geoserver_geoserver_data
[
{
"CreatedAt": "2022-01-27T17:08:44Z",
"Driver": "local",
"Labels": {
"com.docker.compose.project": "base_geoserver",
"com.docker.compose.version": "1.29.2",
"com.docker.compose.volume": "geoserver_data"
},
"Mountpoint": "/var/lib/docker/volumes/base_geoserver_geoserver_data/_data",
"Name": "base_geoserver_geoserver_data",
"Options": {
"device": "/media/base_geoserver_geoserver_data/_data",
"o": "bind",
"type": "none"
},
"Scope": "local"
}
]
How can I force the mount point path to refresh so that data is written to my /media volume?
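Note: a named volume's driver_opts are only read when the volume is first created, and docker-compose down (without -v) keeps existing volumes, so the stale definition survives rebuilds. A sketch of forcing recreation (back up the data first, and make sure the /media path already exists):

```shell
# stop the stack; named volumes are kept by default
docker-compose down
# delete the stale volume so its driver_opts are re-read on creation
docker volume rm base_geoserver_geoserver_data
# recreate everything; the volume is now created with the bind options
docker-compose up -d
# inspect: the Mountpoint still lives under /var/lib/docker/volumes/, but with
# o=bind the device path is mounted there, so data actually lands on /media
docker volume inspect base_geoserver_geoserver_data
```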
I'm currently struggling to spin up a small traefik example on my Docker Swarm instance.
I started with a docker-compose file for local development, and everything works as expected.
But when I deploy the same definition as a swarm stack to bring that environment into production, I always get a Bad Gateway from traefik.
After a lot of searching, this seems to be related to a networking issue in traefik, since it makes requests between two different networks, but I'm not able to find the problem.
After several iterations I reproduced the issue with "official" containers to provide a better example for other people.
So this is my traefik.yml
version: "3.7"
networks:
  external:
    external: true
services:
  traefik:
    image: "traefik:v2.8.1"
    command:
      - "--log.level=INFO"
      - "--accesslog=true"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=external"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.forwardedHeaders.insecure"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - external
    deploy:
      placement:
        constraints: [node.role == manager]
  host-app:
    image: traefik/whoami
    ports:
      - "9000:80"
    networks:
      - external
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
        - "traefik.http.services.host-app.loadbalancer.server.port=9000"
        - "traefik.http.routers.host-app.entrypoints=web"
        - "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
        - "traefik.http.routers.host-app.middlewares=host-app-stripprefix@docker"
        - "traefik.docker.network=external"
The network is created with docker network create -d overlay external,
and I deploy the stack with docker stack deploy -c traefik.yml server.
Up to this point there are no issues and everything spins up fine.
When I curl localhost:9000 I get the correct response:
curl localhost:9000
Hostname: 7aa77bc62b44
IP: 127.0.0.1
IP: 10.0.0.8
IP: 172.25.0.4
IP: 10.0.4.6
RemoteAddr: 10.0.0.2:35068
GET / HTTP/1.1
Host: localhost:9000
User-Agent: curl/7.68.0
Accept: */*
but on
curl localhost/whoami
Bad Gateway
I always get the Bad Gateway error.
So I checked my network with docker network inspect external to ensure that both services run in the same network, and this is the case:
[
{
"Name": "external",
"Id": "iianul6ua9u1f1bb8ibsnwkyc",
"Created": "2022-08-09T19:32:01.4491323Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.4.0/24",
"Gateway": "10.0.4.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7aa77bc62b440e32c7b904fcbd91aea14e7a73133af0889ad9e0c9f75f2a884a": {
"Name": "server_host-app.1.m2f5x8jvn76p2ssya692f4ydp",
"EndpointID": "5d5175b73f1aadf2da30f0855dc0697628801a31d37aa50d78a20c21858ccdae",
"MacAddress": "02:42:0a:00:04:06",
"IPv4Address": "10.0.4.6/24",
"IPv6Address": ""
},
"e23f5c2897833f800a961ab49a4f76870f0377b5467178a060ec938391da46c7": {
"Name": "server_traefik.1.v5g3af00gqpulfcac84rwmnkx",
"EndpointID": "4db5d69e1ad805954503eb31c4ece5a2461a866e10fcbf579357bf998bf3490b",
"MacAddress": "02:42:0a:00:04:03",
"IPv4Address": "10.0.4.3/24",
"IPv6Address": ""
},
"lb-external": {
"Name": "external-endpoint",
"EndpointID": "ed668b033450646629ca050e4777ae95a5a65fa12a5eb617dbe0c4a20d84be28",
"MacAddress": "02:42:0a:00:04:04",
"IPv4Address": "10.0.4.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4100"
},
"Labels": {},
"Peers": [
{
"Name": "3cb3e7ba42dc",
"IP": "192.168.65.3"
}
]
}
]
and by checking the traefik logs I get the following
10.0.0.2 - - [09/Aug/2022:19:42:34 +0000] "GET /whoami HTTP/1.1" 502 11 "-" "-" 4 "host-app@docker" "http://10.0.4.9:9000" 0ms
which is the correct server:port for the whoami service. And even connecting into the traefik container and ping 10.0.4.9 works fine.
PING 10.0.4.9 (10.0.4.9): 56 data bytes
64 bytes from 10.0.4.9: seq=0 ttl=64 time=0.066 ms
64 bytes from 10.0.4.9: seq=1 ttl=64 time=0.057 ms
^C
--- 10.0.4.9 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.061/0.066 ms
These logs and snippets are all from my local swarm on Docker for Windows with a WSL2 Ubuntu distribution, but I tested this on a CentOS swarm available within my company, and also on https://labs.play-with-docker.com/, and all lead to the same error.
So can anybody please tell me what configuration I'm missing, or what mistake I made, to get this running?
After consulting a coworker and building another example, we finally found the solution ourselves.
It was simply my own mistake: I had used the published port for load balancing from traefik to the service, which is wrong.
host-app:
  image: traefik/whoami
  ports:
    - "9000:80"
  networks:
    - external
  deploy:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
      - "traefik.http.services.host-app.loadbalancer.server.port=80" # <-- changed from 9000, which was wrong
      - "traefik.http.routers.host-app.entrypoints=web"
      - "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
      - "traefik.http.routers.host-app.middlewares=host-app-stripprefix@docker"
      - "traefik.docker.network=external"
And that's the reason for the Bad Gateway: traefik tries to reach the published port on the service, which does not exist inside the overlay network. The loadbalancer.server.port label must name the container port (here 80), not the published one.
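This kind of mismatch can be checked from inside the traefik container before touching any labels; a sketch using the task IP from the access log above (busybox wget ships in the traefik image; the service name and IP will differ per deployment):

```shell
# shell into the running traefik task
docker exec -it $(docker ps -q -f name=server_traefik) sh
# the container port answers:
wget -qO- http://10.0.4.9:80
# the published port does not exist inside the overlay network:
wget -qO- http://10.0.4.9:9000
```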
I have learned a lot about Docker on my own, but I'm still facing one major problem and I need your help, please.
This is my docker-compose file:
version: '3.3'
services:
  postgres:
    container_name: postgres-tc
    networks:
      - tools-net
    image: postgres
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=Admin10..
      - POSTGRES_HOST_AUTH_METHOD=trust
  #service
  teamcity-server:
    ports:
      - '8112:8111'
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
      - '/data/teamcity_server/datadir:/data/envdata/tc/datadir'
      - '/opt/teamcity/logs:/data/envdata/tc/logs'
    links:
      - "postgres:postgres"
    logging:
      options:
        max-size: 1g
    container_name: tc-server
    networks:
      - tools-net
    image: jetbrains/teamcity-server
networks:
  tools-net:
    external: true
teamcity-server needs to access postgres on its port to start working.
Both are in the same network, created with this command:
docker network create -d bridge --subnet 172.50.0.0/16 tools-net
Here is the network inspect output after running docker-compose up:
[
{
"Name": "tools-net",
"Id": "74708d3d114394032cbeb5f0a2a93893da38ce5dae2a555a451a189b00b52b2e",
"Created": "2021-07-04T07:04:39.105791768Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.50.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"15a57ccfc229e361e40e940d01b6d6025820fee5ad50db4a61d0c411d4d61750": {
"Name": "postgres-tc",
"EndpointID": "8d5b9d192ed90545abe958134b9853d0aecba33cabd56f31a2c9681106ccdf6e",
"MacAddress": "02:42:ac:32:00:02",
"IPv4Address": "172.50.0.2/16",
"IPv6Address": ""
},
"94eaa0ea0524ca4419ba8e300e80687db487e4f46b6623dabcc15d65c60bdde6": {
"Name": "tc-server",
"EndpointID": "90825befcc5633c3c59c5ec9d58b188d2862cd65cd2283b5c56ec3ecf5a95fd6",
"MacAddress": "02:42:ac:32:00:03",
"IPv4Address": "172.50.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Now when I try to access postgres from the teamcity-server I get this error:
Please help me solve this issue.
Thanks in advance
UPDATE:
It seems to be better with your help, @Hans.
Now I have another issue:
Maybe I have to add a permission entry to postgres's pg_hba.conf, which I cannot locate within the container.
Can you help, please?
Thanks
You're telling TeamCity to connect to a postgres server running on the default localhost address. In Docker terms, that means running in the same container as TeamCity. It isn't.
You need to tell TeamCity to connect to the database host postgres-tc. That's the network name of the Postgres container on your network.
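Concretely, TeamCity's database configuration (entered in the first-run UI, or in <datadir>/config/database.properties) would point at that hostname; the database name teamcity below is just an example:

```properties
# hypothetical database.properties for the compose setup above
connectionUrl=jdbc:postgresql://postgres-tc:5432/teamcity
connectionProperties.user=postgres
connectionProperties.password=Admin10..
```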
I have two different containers running with docker-compose. Here is my docker-compose.yml:
version: '3'
services:
  #Create backend container
  backend:
    build: ./backend # path to Dockerfile
    ports: # Port binding to host from docker container
      - "5000:5000"
    container_name: buzzworks-backend
    volumes:
      - ${PWD}/backend:/app
    depends_on:
      - db
    environment:
      FLASK_APP: flaskr
      FLASK_ENV: development
      NEO_USER: ${NEO_USER}
      NEO_PW: ${NEO_PW}
  db:
    image: neo4j:4.1.1
    container_name: buzzworks-neo4j
    ports:
      - "7474:7474"
      - "7687:7687"
    volumes:
      - ${HOME}/neo4j/data:/data
      - ${HOME}/neo4j/logs:/logs
      - ${HOME}/neo4j/import:/var/lib/neo4j/import
      - ${HOME}/neo4j/plugins:/plugins
    environment:
      NEO4J_AUTH: ${NEO_USER}/${NEO_PW}
      NEO4J_dbms_logs_debug_level: ${NEO_DEBUG_LEVEL}
The corresponding network it generates looks right to me:
[
{
"Name": "buzzworksai_default",
"Id": "db4efc0286a9464cadde13cf1306f241b7a353295904b15b163e761289ba9d3f",
"Created": "2020-08-27T11:23:15.925483629-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"640650c163f746e480bf677abdeaf8edf6483b7dac2a260c2e3b3bc3319dffef": {
"Name": "buzzworks-neo4j",
"EndpointID": "ddbad1a179cc51655a779b07c91d6d949b0612bf985abc9c45e1794b35f4a565",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"ba47173d1dbc31e4e416eaf30d2314e6d2a20a36b389cb76cd1edcbea489184e": {
"Name": "buzzworks-backend",
"EndpointID": "17ff278f3db5ad609be682cdf912ca755587e07ef08d6023bf3ecb33a6c4bc31",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "buzzworksai",
"com.docker.compose.version": "1.26.2"
}
}
]
I can access the web interface of the neo4j database just fine. The problem occurs when I try to connect to the database with neomodel. I have tried running neomodel_remove_labels --db bolt://<user>:<password>@db:7687 with the appropriate credentials from a shell in the python container. I get this error:
Traceback (most recent call last):
File "/usr/local/bin/neomodel_remove_labels", line 35, in <module>
main()
File "/usr/local/bin/neomodel_remove_labels", line 30, in main
db.set_connection(bolt_url)
File "/usr/local/lib/python3.8/dist-packages/neomodel/util.py", line 93, in set_connection
self.driver = GraphDatabase.driver(u.scheme + '://' + hostname,
File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 108, in driver
return Driver(uri, **config)
File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 147, in __new__
return subclass(uri, **config)
File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 221, in __new__
pool.release(pool.acquire())
File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 715, in acquire
return self.acquire_direct(self.address)
File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 608, in acquire_direct
connection = self.connector(address, error_handler=self.connection_error_handler)
File "/usr/local/lib/python3.8/dist-packages/neo4j/__init__.py", line 218, in connector
return connect(address, **dict(config, **kwargs))
File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 972, in connect
raise last_error
File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 963, in connect
s, der_encoded_server_certificate = _secure(s, host, security_plan.ssl_context, **config)
File "/usr/local/lib/python3.8/dist-packages/neobolt/direct.py", line 854, in _secure
s = ssl_context.wrap_socket(s, server_hostname=host if HAS_SNI and host else None)
File "/usr/lib/python3.8/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.8/ssl.py", line 1040, in _create
self.do_handshake()
File "/usr/lib/python3.8/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error
I have tried to access the container from my host with bolt://<user>:<password>@localhost:7687, but I still get the same error.
I believe this may be your issue; it is on the neo4j side: https://github.com/neo4j/neo4j/issues/12392
The suggestion there is:
Neo4j 4.0 has encryption turned off by default. You need to explicitly
turn encryption back on for Bolt server in server config file. Then
you shall be able to connect using 1.7 python driver with default
settings.
The following is an example of how you can turn encryption back on
for the Bolt server, with private.key and public.crt in the directory
$neo4jHome/certificates/bolt.
dbms.connector.bolt.enabled=true
# allows both encrypted and unencrypted driver connections
dbms.connector.bolt.tls_level=OPTIONAL
dbms.ssl.policy.bolt.enabled=true
dbms.ssl.policy.bolt.base_directory=certificates/bolt
#dbms.ssl.policy.bolt.private_key=private.key
#dbms.ssl.policy.bolt.public_certificate=public.crt
You can choose any trusted key and certificate service to generate the
private key and public certificate used here.
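If you'd rather not edit neo4j.conf inside the container, the official neo4j image translates environment variables into config settings (dots become underscores, existing underscores are doubled), so a sketch of the same settings in the compose file's db service would be (assuming the certificates are mounted under /var/lib/neo4j/certificates/bolt):

```yaml
  db:
    image: neo4j:4.1.1
    environment:
      # dbms.connector.bolt.tls_level=OPTIONAL
      NEO4J_dbms_connector_bolt_tls__level: "OPTIONAL"
      # dbms.ssl.policy.bolt.enabled=true
      NEO4J_dbms_ssl_policy_bolt_enabled: "true"
      # dbms.ssl.policy.bolt.base_directory=certificates/bolt
      NEO4J_dbms_ssl_policy_bolt_base__directory: "certificates/bolt"
```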
You need to set the networks option in your docker-compose file. Please read this link for the details.
You should do something like this:
version: '3'
services:
  #Create backend container
  backend:
    build: ./backend # path to Dockerfile
    ports: # Port binding to host from docker container
      - "5000:5000"
    container_name: buzzworks-backend
    volumes:
      - ${PWD}/backend:/app
    depends_on:
      - db
    environment:
      FLASK_APP: flaskr
      FLASK_ENV: development
      NEO_USER: ${NEO_USER}
      NEO_PW: ${NEO_PW}
    networks:
      - mynetwork
  db:
    image: neo4j:4.1.1
    container_name: buzzworks-neo4j
    ports:
      - "7474:7474"
      - "7687:7687"
    volumes:
      - ${HOME}/neo4j/data:/data
      - ${HOME}/neo4j/logs:/logs
      - ${HOME}/neo4j/import:/var/lib/neo4j/import
      - ${HOME}/neo4j/plugins:/plugins
    environment:
      NEO4J_AUTH: ${NEO_USER}/${NEO_PW}
      NEO4J_dbms_logs_debug_level: ${NEO_DEBUG_LEVEL}
    networks:
      - mynetwork
networks:
  mynetwork:
I am running my containers on Docker Swarm. asset-frontend is my frontend service; it runs Nginx inside the container and exposes port 80. Now if I do
curl http://10.255.8.21:80
or
curl http://127.0.0.1:80
from the host where I am running these containers, I can see my asset-frontend application, but it is not accessible outside of the host; I am not able to access it from another machine. My host machine's operating system is CentOS 8.
This is my docker-compose file:
version: "3.3"
networks:
  basic:
services:
  asset-backend:
    image: asset/asset-management-backend
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic
  asset-mongodb:
    image: mongo
    restart: always
    env_file: .env
    ports:
      - "27017:27017"
    volumes:
      - $HOME/asset/mongodb:/data/db
    networks:
      - basic
  asset-postgres:
    image: asset/postgresql
    restart: always
    env_file: .env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=asset-management
    volumes:
      - $HOME/asset/postgres:/var/lib/postgresql/data
    networks:
      - basic
  asset-frontend:
    image: asset/asset-management-frontend
    restart: always
    ports:
      - "80:80"
    environment:
      - ENV=dev
    depends_on:
      - asset-backend
    deploy:
      replicas: 1
    networks:
      - basic
  asset-autodiscovery-cron:
    image: asset/auto-discovery-cron
    restart: always
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic
This is my docker service ls output:
ID NAME MODE REPLICAS IMAGE PORTS
auz640zl60bx asset_asset-autodiscovery-cron replicated 1/1 asset/auto-discovery-cron:latest
g6poofhvmoal asset_asset-backend replicated 1/1 asset/asset-management-backend:latest
brhq4g4mz7cf asset_asset-frontend replicated 1/1 asset/asset-management-frontend:latest *:80->80/tcp
rmkncnsm2pjn asset_asset-mongodb replicated 1/1 mongo:latest *:27017->27017/tcp
rmlmdpa5fz69 asset_asset-postgres replicated 1/1 asset/postgresql:latest *:5432->5432/tcp
Port 80 is open in my firewall; the following is the output of firewall-cmd --list-all:
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: cockpit dhcpv6-client ssh
ports: 22/tcp 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
If I inspect the created network, the output is the following:
[
{
"Name": "asset_basic",
"Id": "zw73vr9xigfx7hy16u1myw5gc",
"Created": "2019-11-26T02:36:38.241352385-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.3.0/24",
"Gateway": "10.0.3.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"9348f4fc6bfc1b14b84570e205c88a67aba46f295a5e61bda301fdb3e55f3576": {
"Name": "asset_asset-frontend.1.zew1obp21ozmg8r1tzmi5h8g8",
"EndpointID": "27624fe2a7b282cef1762c4328ce0239dc70ebccba8e00d7a61595a7a1da2066",
"MacAddress": "02:42:0a:00:03:08",
"IPv4Address": "10.0.3.8/24",
"IPv6Address": ""
},
"943895f12de86d85fd03d0ce77567ef88555cf4766fa50b2a8088e220fe1eafe": {
"Name": "asset_asset-mongodb.1.ygswft1l34o5vfaxbzmnf0hrr",
"EndpointID": "98fd1ce6e16ade2b165b11c8f2875a0bdd3bc326c807ba6a1eb3c92f4417feed",
"MacAddress": "02:42:0a:00:03:04",
"IPv4Address": "10.0.3.4/24",
"IPv6Address": ""
},
"afab468aefab0689aa3488ee7f85dbc2cebe0202669ab4a58d570c12ee2bde21": {
"Name": "asset_asset-autodiscovery-cron.1.5k23u87w7224mpuasiyakgbdx",
"EndpointID": "d3d4c303e1bc665969ad9e4c9672e65a625fb71ed76e2423dca444a89779e4ee",
"MacAddress": "02:42:0a:00:03:0a",
"IPv4Address": "10.0.3.10/24",
"IPv6Address": ""
},
"f0a768e5cb2f1f700ee39d94e380aeb4bab5fe477bd136fd0abfa776917e90c1": {
"Name": "asset_asset-backend.1.8ql9t3qqt512etekjuntkft4q",
"EndpointID": "41587022c339023f15c57a5efc5e5adf6e57dc173286753216f90a976741d292",
"MacAddress": "02:42:0a:00:03:0c",
"IPv4Address": "10.0.3.12/24",
"IPv6Address": ""
},
"f577c539bbc3c06a501612d747f0d28d8a7994b843c6a37e18eeccb77717539e": {
"Name": "asset_asset-postgres.1.ynrqbzvba9kvfdkek3hurs7hl",
"EndpointID": "272d642a9e20e45f661ba01e8731f5256cef87898de7976f19577e16082c5854",
"MacAddress": "02:42:0a:00:03:06",
"IPv4Address": "10.0.3.6/24",
"IPv6Address": ""
},
"lb-asset_basic": {
"Name": "asset_basic-endpoint",
"EndpointID": "142373fd9c0d56d5a633b640d1ec9e4248bac22fa383ba2f754c1ff567a3502e",
"MacAddress": "02:42:0a:00:03:02",
"IPv4Address": "10.0.3.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4100"
},
"Labels": {
"com.docker.stack.namespace": "asset"
},
"Peers": [
{
"Name": "8170c4487a4b",
"IP": "10.255.8.21"
}
]
}
]
Ran into this same issue, and it turned out to be a clash between my local network's subnet and the subnet of the automatically created ingress network. This can be verified with docker network inspect ingress, checking whether the IPAM.Config.Subnet value overlaps with your local network.
To fix, you can update the configuration of the ingress network as described in Customize the default ingress network; in summary:
Remove services that publish ports
Remove the existing network: docker network rm ingress
Recreate it using a non-conflicting subnet (use whatever subnet doesn't overlap your LAN; note that an inline comment after a trailing backslash would break the command):
docker network create \
  --driver overlay \
  --ingress \
  --subnet 172.16.0.0/16 \
  --gateway 172.16.0.1 \
  ingress
Restart services
You can avoid a clash to begin with by specifying the default subnet pool when initializing the swarm using the --default-addr-pool option.
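For example (the pool below is an arbitrary choice; pick ranges that don't overlap your LAN):

```shell
# set the address pool that swarm networks are carved from, at init time
docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 24
```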
You can publish ports by updating the service:
docker service update your-service --publish-add 80:80
Can you try this hostname instead of the IP address? host.docker.internal, so something like http://host.docker.internal:80
I suggest you first verify the expected behavior using docker-compose. Then try docker swarm without any network specification, just to rule out network interface problems.
Also, you could use the below command to verify your LISTEN ports:
netstat -tulpn
EDIT: I faced this same issue, but I was able to access my services through 127.0.0.1.
When running docker, provide a port mapping, like
docker run -p 8081:8081 your-docker-image
or provide the port mapping in Docker Desktop when starting the container.
I ran into this same issue. It turned out that my iptables filter was causing external connections to fail.
In docker swarm mode, docker creates a virtual network bridge device, docker_gwbridge, to reach the overlay network. My iptables had the following policy, dropping forwarded packets:
:FORWARD DROP
That prevents network packets arriving on the physical NIC from reaching the docker ingress network, so my docker service only worked on localhost.
Changing the iptables policy to
:FORWARD ACCEPT
solved the problem without touching docker.
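The runtime equivalent of that policy line is the following; note that a blanket ACCEPT policy loosens host firewalling, so rules scoped to docker_gwbridge are a tighter alternative:

```shell
# set the default policy for the FORWARD chain
iptables -P FORWARD ACCEPT
# or, more narrowly, only accept traffic forwarded via the swarm bridge
iptables -A FORWARD -i docker_gwbridge -j ACCEPT
iptables -A FORWARD -o docker_gwbridge -j ACCEPT
```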
I am running multiple docker-compose projects on one host (identical images for different use cases).
For that reason I use different HTTPS (+REST) ports so each project is reachable remotely. However, docker lists the port range of the first project in every other project as well, without using it. Although I cannot see any negative implication at the moment, I would like to get rid of it, fearing that some implication might eventually arise.
docker ps shows this:
PORTS
Second container:
**8643-8644/tcp**, 0.0.0.0:8743-8744->8743-8744/tcp
0.0.0.0:27020->27020/tcp
First container:
0.0.0.0:**8643-8644->8643-8644**/tcp
0.0.0.0:27019->27019/tcp
First docker-compose file (excerpt):
version: '2'
services:
  mongo:
    image: *****
    ports:
      - "27019:27019"
    tty: true
    volumes:
      - /data/mongodb
      - /data/db
      - /var/log/mongodb
    entrypoint: [ "/usr/bin/mongod", "--port", "27019" ]
  rom:
    image: *****
    links:
      - mongo
    ports:
      - "8643:8643"
      - "8644:8644"
    environment:
      WEB_PORT_SECURE: 8643
      REST_PORT_SECURE: 8644
      MONGO_PORT: 27019
      MONGO_INST: mongod
    entrypoint: [ "node", "/usr/src/app/app.js" ]
Second docker-compose file (excerpt):
version: '2'
services:
  mongo:
    image: *****
    ports:
      - "27020:27020"
    tty: true
    volumes:
      - /data/mongodb
      - /data/db
      - /var/log/mongodb
    entrypoint: [ "/usr/bin/mongod", "--port", "27020" ]
  rom:
    image: *****
    links:
      - mongo
    ports:
      - "8743:8743"
      - "8744:8744"
    environment:
      WEB_PORT_SECURE: 8743
      REST_PORT_SECURE: 8744
      MONGO_PORT: 27020
      MONGO_INST: mongod
    entrypoint: [ "node", "/usr/src/app/app.js" ]
And finally, docker inspect shows this for the second container:
"Config": {
"Hostname": *****,
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8643/tcp": {},
"8644/tcp": {},
"8743/tcp": {},
"8744/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"MONGO_PORT=27020",
"MONGO_INST=mongodb",
"WEB_PORT_SECURE=8743",
"REST_PORT_SECURE=8744",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=4.7.2",
"WORK_DIR=/usr/src/app"
],
"NetworkSettings": {
"Ports": {
"8643/tcp": null,
"8644/tcp": null,
"8743/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8743"
}
],
"8744/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8744"
}
]
},
The last block clearly shows that docker is not doing anything with ports 8643 and 8644, but still lists them:
"8643/tcp": null,
"8644/tcp": null,
Any idea why this happens and how to avoid it?
They are there because the image exposes them (built with EXPOSE).
This is not a problem; it's completely normal. You will only have a problem if you try to publish the same port on the host more than once. Here, none of your published ports are in conflict:
0.0.0.0:8743-8744->8743-8744/tcp
0.0.0.0:27020->27020/tcp
0.0.0.0:8643-8644->8643-8644/tcp
0.0.0.0:27019->27019/tcp
You are publishing 8643-8644, 8743-8744, 27019 and 27020: no conflicts.
A container can expose whatever ports it wants; it is only important that the published host ports do not conflict with one another.
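The split is between image metadata and the compose file: an EXPOSE line baked into the shared image (hypothetical Dockerfile fragment below) is what surfaces as 8643-8644/tcp with no host mapping, while only the ports: entries in each compose file create the 0.0.0.0:... bindings:

```dockerfile
# In the shared image's Dockerfile (hypothetical): metadata only, no binding.
# Every container built from this image lists all four ports in ExposedPorts,
# regardless of which ones its compose file actually publishes.
EXPOSE 8643 8644 8743 8744
```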