Can't access container IP from host in a Docker for Windows stack

I'm using the following docker-compose file to build my docker swarm stack, which has Windows containers deployed on Windows 10:
version: '3.2'
services:
  service1:
    image: myrepository/dotnet-framework:3.5-windowsservercore
    environment:
      - my_path="C:/app/build/app.exe"
      - my_arg=1
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.id == asdfasdgasgasg
    volumes:
      - service1:C:/app
  service2:
    image: myrepository/dotnet-framework:3.5-windowsservercore
    ports:
      - target: 7878
        published: 7878
        mode: host
    environment:
      - my_path="C:/app/app.exe"
      - my_arg=2
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.id == asdfasdgasgasg
    volumes:
      - service1:C:/app
volumes:
  service2:
    external:
      name: service1
  service1:
    external:
      name: service1
As you can see, service2 is listening on port 7878. I know, as is shown in this post, that I can't reach this port using localhost:7878, so I ran docker inspect containerID to figure out the IP address of the container.
If I ping the service2 container from service1, it responds. But if I try to access 10.0.3.18:7878 from the host, there's no response. How can I reach port 7878 from the host? On the other hand, I have Linux containers that must reach the service2 Windows container.

Each of the docker containers in the stack can communicate with the others by default, as they are started up on their own private network. That is why you can ping between the service containers.
The port 7878 you published will also be accessible to the host Windows 10 OS, but via the host machine's IP address, not the container's IP address. The container's IP address is private, even to the host OS.
Ping may not work because ICMP is not a port that can be published, and the image may not respond to ICMP echo requests at all. Either way, ping is not a good method to verify whether a container is working.
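For example, here is a minimal sketch of the client side, assuming the Windows host's LAN address is 192.168.0.10 (a made-up placeholder) and that service2 is published with mode: host as in the compose file above:
services:
  linux-client:
    image: alpine:3.19
    # Reach the Windows node's own address, not the container's 10.0.3.x
    # address; host-mode published ports are bound on the node itself.
    command: sh -c "wget -qO- http://192.168.0.10:7878 || true"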

After a Windows update, everything works as expected.

Related

Docker Swarm Routing Mesh

Assuming that I have 2 nodes in the swarm (node1 is a manager, node2 is a worker) and launch the stack with the following compose file:
version: "3.9"
services:
app1:
image: app1image
ports:
- 8080:8080
deploy:
mode: global
app2:
image: app2image
ports:
- 9080:9080
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- "node.role==manager"
My questions are:
If I try to access app1 through node1, could I be routed to the app1 container on node2?
Since app2 only deploys to node1, if I try to access it through node2 on port 9080, will I be able to?
Besides the ports referenced by the Docker documentation (TCP port 2377 for cluster management communications, TCP and UDP port 7946 for communication among nodes, and UDP port 4789 for overlay network traffic), are there any other ports that need to be opened? For example, in case app1 wants to call app2.
So, to understand what's actually going on:
version: "3.9"
networks:
default:
driver: overlay
ingress:
external: true
services:
app1:
image: app1image
ports:
- 8080:5000
deploy:
mode: global
networks:
- default
app2:
image: app2image
ports:
- 9080:5000
networks:
- default
deploy:
placement:
constraints:
- node.role==manager
In this configuration my expectation is that each app is listening on 0.0.0.0:5000.
What Docker has done is create two networks: an ingress network that bridges the published ports on each host to the containers:
node1:8080 and node2:8080 will be routed and load-balanced to the app1 containers,
and
node1:9080 and node2:9080 will be routed and load-balanced to the app2 containers.
The service containers, or tasks, have also been attached to an implicit default network for the compose stack. It's an overlay (software-defined) network, so each container has an IP on that network that is unrelated to the node it's on. I have assumed that the actual listen port is :5000 for both services, so any service attached to {stack}_default will be able to use the service name and the actual port:
app1:5000 will route via a VIP to load-balance traffic across instances of app1, and tasks.app1 is a DNS-RR record that returns each container IP.
Likewise app2:5000 will route to the app2 container on the manager node.
The app1 and app2 DNS names are entirely private to services that are part of the stack / attached to the {stack}_default network, so the app1:5000 names are not available outside the swarm, or even to other stacks or containers that are not explicitly attached.
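If you want to make that DNS behaviour explicit, compose lets you pick it per service via deploy.endpoint_mode; a short sketch reusing the images from the question:
services:
  app1:
    image: app1image
    deploy:
      endpoint_mode: vip    # default: app1 resolves to a single virtual IP
  app2:
    image: app2image
    deploy:
      endpoint_mode: dnsrr  # app2 resolves directly to the task IPs, no VIP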
So:
yes.
yes.
no, but:
If you use ports: to publish ports, those ports are handled outside the overlay network. You would need to add every published port to the firewalls if it is required for node-to-node comms, e.g. 8080 and 9080 need to be open.
However, because the overlay network tunnels its connections over UDP port 4789 at the physical link layer, the traffic going to the app1 and app2 IPs (the :5000 traffic) is encapsulated and those ports do not need to be opened.
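As an aside on the "not explicitly attached" point above: if another stack or a standalone container did need to resolve app1:5000, the usual approach is a shared attachable overlay. A sketch, where the network name shared is made up:
networks:
  shared:
    external: true  # created once with: docker network create -d overlay --attachable shared
services:
  app1:
    image: app1image
    networks:
      - shared      # anything else attached to "shared" can now use app1:5000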

In a docker service, how do I share one port publicly and one port privately?

I am building a docker service which includes a squid and an icap service.
Squid runs on port 3128 and this port is public.
The ICAP service runs on port 1344, which I do not want to be public, as this will contain decrypted web traffic. I want this accessible only to squid, which is the icap client.
My question is, how do I set this up so that port 1344 on the e2guardian service is running on a private network that is accessible by squid, but not published where anyone on the "customer" network can use it?
I am including my docker compose file.
The "squidnet" network is really kind of a leftover. I wonder if I can make squidnet private and then share 1344 on squidnet only, but still have 3128 public for the squid service public on the local LAN. How would I change the docker compose file to accommodate this?
Thanks
version: "3"
services:
squid:
# replace username/repo:tag with your name and image details
image: jusschwa/docker-squid-sslbump-rpi
deploy:
replicas: 1
restart_policy:
condition: on-failure
volumes:
- "/workspace/etc/squid/squid.conf:/usr/local/squid/etc/squid.conf"
- "/workspace/certs:/usr/local/squid/ssl"
ports:
- "3128:3128"
networks:
- squidnet
e2guardian:
image: jusschwa/e2guardian-rpi
ports:
- "1344:1344"
volumes:
- "/workspace/etc/e2guardian:/etc/e2guardian"
deploy:
replicas: 1
restart_policy:
condition: on-failure
networks:
- squidnet
networks:
squidnet:
Use expose if you don't want to publish the ports to the host machine. When you use ports, the ports are published to the host machine.
Mapping the container's 3306 to the host machine's 3306:
ports:
  - 3306:3306
Exposing the container's 3306 to the network only:
expose:
  - 3306
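Applied to the compose file in the question, that amounts to replacing e2guardian's ports: mapping with expose: (a sketch; the images and the squidnet network are from the question):
services:
  squid:
    image: jusschwa/docker-squid-sslbump-rpi
    ports:
      - "3128:3128"   # published: LAN clients use <host>:3128
    networks:
      - squidnet
  e2guardian:
    image: jusschwa/e2guardian-rpi
    expose:
      - "1344"        # reachable only from containers on squidnet, e.g. icap://e2guardian:1344
    networks:
      - squidnet
networks:
  squidnet: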

Cannot ping docker container created with docker-compose

I want to create a PostgreSQL cluster composed of a master and two slaves within three containers. I want to do that with docker-compose. Everything works fine, but I cannot ping the containers from my Mac.
Here is the code of my docker-compose.yml.
On Stack Overflow there is the thread How could I ping my docker container from my host, but it addresses standalone Docker, not docker-compose.
version: '3.6'
volumes:
  pgmaster_volume:
  pgslave1_volume:
  pgslave2_volume:
services:
  pgmaster:
    container_name: pgmaster
    build:
      context: ../src
      dockerfile: Dockerfile
    image: docker-postgresql:latest
    environment:
      NODE_NAME: pgmaster # Node name
    ports:
      - 5422:5432
    volumes:
      - pgmaster_volume:/home/postgres/data
    networks:
      cluster:
        ipv4_address: 10.0.2.31
        aliases:
          - pgmaster.domain.com
  pgslave1:
    container_name: pgslave1
    build:
      context: ../src
      dockerfile: Dockerfile
    image: docker-postgresql:latest
    environment:
      NODE_NAME: pgslave1 # Node name
    ports:
      - 5441:5432
    volumes:
      - pgslave1_volume:/home/postgres/data
    networks:
      cluster:
        ipv4_address: 10.0.2.32
        aliases:
          - pgslave1.domain.com
  pgslave2:
    container_name: pgslave2
    build:
      context: ../src
      dockerfile: Dockerfile
    image: docker-postgresql:latest
    environment:
      NODE_NAME: pgslave2 # Node name
    ports:
      - 5442:5432
    volumes:
      - pgslave2_volume:/home/postgres/data
    networks:
      cluster:
        ipv4_address: 10.0.2.33
        aliases:
          - pgslave2.domain.com
networks:
  cluster:
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.2.1/24
On my Mac, I have a 192.168.0.0 local network. I expected that by running ping 10.0.2.31 I could reach my container, but this is not possible. I think this is due to the Linux VM created inside the Mac where the containers live; the IPs are not reachable outside this VM.
Can someone help me understand how to make the above three IPs reachable? The IPs are reachable from one container to another.
Here my full code:
https://github.com/sasadangelo/docker-postgres
You should be able to ping your containers from your host.
Via public IP:
Just use their public IP (you had been trying to ping your container's local IP, inside the docker network).
How to find the container's public IP?
You can get it by running ifconfig inside the container,
or by running docker container inspect <container_id> on your host;
it should be there under NetworkSettings.Networks.<network_name>.IPAddress.
Via container name/id:
Docker runs a DNS service for its containers, so you can also use the container name or id: ping <container_name/id>.
Note:
The way to access your containers from outside the docker network is via their published ports. You have bound the container's port 5432 to port 5442 on your host, therefore the container should listen and accept traffic at 127.0.0.1:5442 (that's your localhost at the port you've bound).
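Putting that together for the compose file in the question, the host-side endpoints are fixed by the port mappings (a sketch; the psql commands in the comments are illustrative):
services:
  pgmaster:
    ports:
      - 5422:5432   # from the Mac: psql -h 127.0.0.1 -p 5422
  pgslave1:
    ports:
      - 5441:5432   # psql -h 127.0.0.1 -p 5441
  pgslave2:
    ports:
      - 5442:5432   # psql -h 127.0.0.1 -p 5442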

Docker Per Network Port Mapping

I'm looking for a way to map the same port to 2 different ports, one for each of two containers in different networks.
Consider the docker-compose scenario below:
services:
  web:
    build: .
    ports:
      - "8080:8080"
    networks:
      Net1:
      Net2:
  serv1:
    image: tomcat:7.0.92-jre8
    networks:
      Net1:
  serv2:
    image: tomcat:7.0.92-jre8
    networks:
      Net2:
Now what I would really like to do is map the "web" service's port 8080 so that serv1 could consume it as 8081 and serv2 would use it as 8082.
Is that even possible?
Thanks
Ports are published to the host, not to docker networks and not to other docker containers. So the above "8080:8080" maps port 8080 on the docker host to that container's port 8080.
Container-to-container communication happens using docker's internal DNS for service discovery and the container port. So both serv1 and serv2 can connect to http://web:8080 to reach the web service on its container port. That in no way prevents serv1 and serv2 from listening within their own containers on any ports they wish.
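In other words, no remapping is needed; both consumers address the service by name on the container port. A sketch based on the question's file (the top-level Net1/Net2 definitions are added here as an assumption, since compose requires them to deploy):
services:
  web:
    build: .
    ports:
      - "8080:8080"   # only this line publishes anything, and only to the host
    networks:
      - Net1
      - Net2
  serv1:
    image: tomcat:7.0.92-jre8
    networks:
      - Net1          # reaches the web service at http://web:8080
  serv2:
    image: tomcat:7.0.92-jre8
    networks:
      - Net2          # also http://web:8080, over Net2
networks:
  Net1:
  Net2: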

Traefik as a proxy for Docker container with host machines network

I would like to set up the following scenario:
One physical machine with Docker containers.
traefik in a container attached to the backend network.
Another container which is using the host machine's network (network_mode: host).
Traefik successfully finds the container and adds it with the IP address 127.0.0.1, which is obviously not accessible from the traefik container (different network/bridge).
docker-compose.yml:
version: '3'
services:
  traefik:
    image: traefik
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/etc/traefik/traefik.toml
    networks:
      - backend
  app:
    image: my_app
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:myapp.example"
      - "traefik.port=8080"
    network_mode: host
networks:
  backend:
    driver: bridge
The app container is added with:
Server       URL                     Weight
server-app   http://127.0.0.1:8080   0
Load Balancer: wrr
Of course, I can access the app at http://127.0.0.1:8080 on the host machine or at http://$HOST_IP:8080 from the traefik container.
Can I somehow convince traefik to use another IP for the container?
Thanks!
Without a common docker network, traefik won't be able to route to your container. Since you're using host networking, there's little need for traefik to proxy the container; just access it directly. If you need to access it only through the proxy, then place it on the backend network. And if you need some ports published on the host and others proxied through traefik, place it on the backend network and publish the ports you need to publish, rather than using the host network directly.
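A sketch of that last option, assuming my_app listens on 8080 and that some hypothetical port 9000 must stay directly exposed on the host:
services:
  app:
    image: my_app
    networks:
      - backend           # traefik can now reach the container IP directly
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:myapp.example"
      - "traefik.port=8080"
    ports:
      - "9000:9000"       # hypothetical port published on the host, bypassing traefik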
