Access Dask scheduler from an external docker container - docker

I have started a dask-scheduler on host A. Host A has the Docker engine installed, so it has multiple network interfaces:
192.168.10.250 (default IP for host A)
172.17.0.1 (host A IP address in bridge network (i.e., docker0))
I tested a simple client from within host A against both IP addresses, and it works well.
Now, I started a Docker container on the same host A without specifying any network, so the container connects to the default bridge network and receives the IP address 172.17.0.2. Within the container, I try to start a client that connects to the Dask scheduler on host A as follows:
client = Client('172.17.0.1:8786')
but each time I receive the following error:
IOError: Timed out trying to connect to 'tcp://172.17.0.1:8786' after 10 s: connect() didn't finish in time
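Before digging into Docker networking, it can help to rule out basic TCP reachability from inside the container. A minimal sketch (the address and port mirror the question; the `can_connect` helper is my own, not part of Dask):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the container, check the scheduler port before creating the Client:
# if can_connect('172.17.0.1', 8786):
#     from dask.distributed import Client
#     client = Client('172.17.0.1:8786')
```

If `can_connect` returns False, the problem is network-level (firewall, bind address, bridge setup) rather than anything Dask-specific.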
I tried to change the network driver for the container to "host" instead of "bridge", but then I receive the following error:
distributed.comm.core.CommClosedError: in : Stream is closed
Please help.
Regards

Thanks guys. Problem solved.
I realized the problem was that Python 2.7 was being used inside the Docker image. When I switched to Python 3.6, it worked (even without --net host).
Regards

Related

How do I access an API on my host machine from a Docker container?

I have a docker-compose that spins up 2 Docker containers. One is an application (on port:8090) and the other is a database (on port:5432). I also have a Windows application that has an API accessible through localhost:8002. I want to use my container that has the application to read data from the localhost:8002 API, then move that data to my database in my other Docker container.
For docker-compose, I mapped port 5432:5432 and port 8090:8090 for the database and application containers, respectively. I have tested my application non-dockerized where I call the Windows API and then write it to port:5432 and it works properly.
However, after Dockerizing my application, localhost:8002 no longer refers to the host from inside my containerized application, so the API is unreachable. I am wondering how I can reach my host's localhost:8002, hit that API, then move that data to my other container.
After 16 hours of blood, sweat, and tears (mostly tears), here's the answer for whoever may need it in the future.
TL;DR: expose your local nat network to your Docker containers in docker-compose:
networks:
  default:
    external: true
    name: nat
Note: on Windows, running docker network ls should give something like:
NETWORK ID     NAME             DRIVER   SCOPE
6b30a7dcf6e0   Default Switch   ics      local
26305680ad62   WSL              ics      local
b52f5e497eba   nat              nat      local
4a4fd550398f   none             null     local
The docker-compose file simply connects the Docker network to your host network (this is also required for auto-restart of containers after the computer restarts, but that's off-topic).
Afterwards, I had to run ipconfig to find the IPv4 address that corresponds to my nat network. There can be many IPv4 addresses here; find the one listed under "Ethernet adapter vEthernet (nat)".
Then, use the IPv4 Address corresponding to the nat network for any applications running on your local machine.
For example, my application ran on localhost:8002. Here, I changed my host to http://172.31.160.1:8002 and it worked.
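Putting the answer's pieces together, a minimal docker-compose sketch might look like the following (the service and image names and the environment variable are hypothetical; the nat network name and the 172.31.160.1 address come from the answer above and will differ on your machine):

```yaml
version: "3.8"
services:
  app:
    image: my-app                 # hypothetical image name
    ports:
      - "8090:8090"
    environment:
      # reach the Windows API via the nat adapter's IPv4 address
      WINDOWS_API_URL: "http://172.31.160.1:8002"
  db:
    image: postgres:13
    ports:
      - "5432:5432"

networks:
  default:
    external: true
    name: nat
```

The key part is the networks block at the bottom, which attaches both services to the pre-existing nat network instead of a compose-created bridge.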

How to access docker container in a custom network from another docker container running with host network

My program consists of a network of ROS1 and ROS2 nodes, which communicate in a publish/subscribe fashion.
Assume there are 4 nodes running inside a custom network: onboard_network.
Those 4 nodes (ROS1) can only communicate with each other, so we have a bridge node (ROS1 & ROS2) that needs to sit on the edge of both the onboard_network and the host network. The reason we need the host network is that the host is inside a VPN (Zerotier), and our server (ROS2) is also inside that VPN.
We also need the bridge node to use the host network because ROS2 relies on multicast, which only works in host mode.
So basically, I want a docker-compose file running 4 containers inside an onboard_network and one container on the host network. The last container needs to be reachable from the containers in the onboard_network and be able to reach them too. How could I do it? Is it even possible?
If you're running a container on the host network, its network setup is identical to a non-container process running on the host.
A container can't be set to use both host networking and Docker networking.
That means your network_mode: host container can call other containers using localhost as the hostname plus their published ports: (because its network is the host's network). Your bridge-network containers can call the host-network container using the special hostname host.docker.internal on macOS or Windows hosts; on Linux they need to find some reachable IP address (this is discussed further in From inside of a Docker container, how do I connect to the localhost of the machine?).
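Under those constraints, a compose sketch might look like the following (the service and image names are hypothetical, as is the 11311 port choice; the point is that the bridge container joins the host network and reaches the others only through their published ports, as described above):

```yaml
version: "3.8"
services:
  node1:
    image: ros1-node            # hypothetical image
    networks: [onboard_network]
    ports:
      - "11311:11311"           # publish so the host-network container can reach it
  # ... node2 through node4 follow the same pattern ...
  bridge:
    image: ros-bridge           # hypothetical image
    network_mode: host          # shares the host (and Zerotier VPN) network
    # reaches node1 via localhost:11311; it cannot also join onboard_network

networks:
  onboard_network: {}
```

Note that network_mode: host and networks: are mutually exclusive on a service, which is exactly the limitation the answer describes.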

Create redis cluster (v5) with docker compose

I've been trying for several days now to create a Redis cluster with docker-compose, but it doesn't work because Redis doesn't announce a good IP address when my client sends a request (it announces the Docker-internal IP, but I want it to announce the host IP).
I've looked into "cluster-announce-ip", but with no success.
I've also tried host mode, but it doesn't work... I don't understand why.
Now redis-cli shows:
Waiting for the cluster to join
You could find my work here: https://github.com/fhebuterne/redis-cluster
If someone has a solution, I'm interested
Thanks
After some tests, it is not possible to use an internal Docker network with multiple containers and a Redis cluster (even with "cluster-announce-ip"), so the only solution I found is to set the following option on each service (in the compose file):
network_mode: "host"
Then use the bridge IP between the host and the containers. On Windows I found it with ipconfig, looking for "vEthernet (DockerNAT)"; the base IP is 10.0.75.1. In my redis-cli and redis.conf I put 10.0.75.2, so the containers can connect to each other and the cluster sends the right response when I send a request from my host computer. I'm sorry if this is not clear; I have pushed my solution to my repository (the link is in my previous message).
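For reference, the cluster announce directives (available since Redis 4.0) look like this in redis.conf; the 10.0.75.2 address is the DockerNAT value from the answer above and will vary by setup:

```
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-announce-ip 10.0.75.2
cluster-announce-port 6379
cluster-announce-bus-port 16379
```

These tell each node which address and ports to advertise to clients and to the other cluster nodes, instead of the container-internal ones.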

Can Consul be run inside a Docker container using Docker for Windows?

I am trying to make Consul work inside a Docker container, but using Docker for Windows and Linux containers. I am using the official Consul Docker image. The documentation states that the container must use --net=host for Consul's consensus and gossip protocols.
The problem is, as far as I can tell, that Docker for Windows uses a Linux VM under the hood, and the "host" of the container is not the actual host machine, but that VM. I could not find a combination of -bind, -client and -advertise parameters (IP addresses), so that:
Other Consul agents on other hosts can connect to the local agent using the host machine's IP address.
Other containerized services on the same host can query the local agent's REST interface.
Whenever I pass the host machine's LAN IP address through -advertise, I get these errors inside the container:
2018/04/03 15:15:55 [WARN] consul: error getting server health from "linuxkit-00155d02430b": rpc error getting client: failed to get conn: dial tcp 127.0.0.1:0->10.241.2.67:8300: connect: invalid argument
2018/04/03 15:15:56 [WARN] consul: error getting server health from "linuxkit-00155d02430b": context deadline exceeded
Also, other agents on other hosts cannot connect to that agent.
Using -bind on that address fails; my guess is that, since the container is inside the Linux VM, the host machine's address is not the container's host's address and therefore cannot be bound.
I have tried various combinations of -bind, -client and -advertise, using addresses like 0.0.0.0, 127.0.0.1, 10.0.75.2 (the address on the Docker virtual switch) and the host machine's IP, but to no avail.
I am now wondering whether this is achievable at all. I have been trying this for quite some time, and I am despairing. Any advice would be appreciated!
I have tried the whole process without using --net=host, and everything works fine. I can connect agents across hosts, and I can query the local agent's REST interface from other containerized applications... Is --net=host really crucial to the functioning of Consul?

Run docker container on localhost via VM

I'm new to Docker and Containers, and I'm trying to run a simple asp.net web app in a container but running into issues. My OS is Windows 10 Home, so I have to use the Docker Toolbox, which runs on a VM that only includes a basic Linux OS. When I spin up the container, it seems to start fine, but I can't view the app on the localhost.
$ docker run -p 8342:5000 -it jwarren:project
Hosting environment: Production
Content root path: /app
Now listening on: http://*:5000
Application started. Press Ctrl+C to shut down.
$ docker ps -a
CONTAINER ID   IMAGE             COMMAND        CREATED         STATUS         PORTS                    NAMES
98cc4aed7586   jwarren:project   "dotnet run"   8 minutes ago   Up 8 minutes   0.0.0.0:8342->5000/tcp   naughty_brattain
I've tried several different recommendations that I found on the web, but none have helped so far. My knowledge of networking is limited, so maybe I'm not fully understanding what needs to be done. I've tried accessing the app with the default VM IP and with the container IP, and I understand that the port forwarding does not carry over to the container. Any assistance would be great: this project is due on Tuesday, and this is the last roadblock before finishing.
I found the following post that was really helpful: How to connect to a docker container from outside the host (same network) [Windows]. Following the steps below worked perfectly:
Open Oracle VM VirtualBox Manager
Select the VM used by Docker
Click Settings -> Network -> Adapter 1 should (default?) be "Attached to: NAT"
Click Advanced -> Port Forwarding, then add a rule: Protocol TCP, Host Port 8080, Guest Port 8080 (leave Host IP and Guest IP empty)
You should now be able to browse to your container via localhost:8080 and your-internal-ip:8080.
Started up the container (Dockerfile EXPOSES 5000):
docker run -p 8080:5000 -it jwarren:project
Was able to connect with http://localhost:8080
There are a few things to consider when working with VM networking.
VirtualBox has 3 types of networking options: NAT, Bridged, and Host-only.
NAT allows your VM to access the internet through your host's connection, but won't allow your host machine to access the VM.
A Host-only network creates a network where the VM can reach the host machine and the host can reach the VM, but there is no internet access on this network.
A Bridged network allows your VM to obtain another IP from your Wi-Fi router or the main network. This IP gives the VM internet access as well as access to other machines on the network, and it lets the host machine reach the VM too.
In most cases, when you want to run Docker inside a VM and access that VM from the host machine, you want the VM to have both a NAT and a Host-only adapter.
Now, accessing your app on port 8342 needs a few things checked:
SELinux, firewalld, and ufw are disabled on your VM (or properly configured to allow the port)
Your VM has a host-only network or bridged network
iptables -S should not show REJECT rules
Some VMs come pre-configured to only allow port 22 from the external network, so you should try accessing the app on <hostonlyip>:8342 or <bridgedip>:8342.
If you want to test whether the app is up, you can do the following:
docker inspect <containerid> | grep IPA
Get the IP from this and run:
curl http://<containerip>:5000/
This command needs to be executed inside the VM, not on your machine. If it doesn't work, then your container is not listening on 5000. Sometimes an app listens only on 127.0.0.1 inside the container, which means it works only from inside the container and not outside. The app inside the container needs to listen on 0.0.0.0.
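The 127.0.0.1-vs-0.0.0.0 point is easy to demonstrate outside Docker with a plain socket server; the `serve_once` helper below is my own illustration, not part of any app in the question. A server bound to 0.0.0.0 accepts connections on every interface, while one bound to 127.0.0.1 is reachable only via loopback:

```python
import socket
import threading

def serve_once(bind_addr):
    """Bind a TCP server to bind_addr on an ephemeral port, accept one
    connection in a background thread, reply b'ok', then shut down."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def _accept():
        conn, _ = srv.accept()
        conn.sendall(b'ok')
        conn.close()
        srv.close()

    threading.Thread(target=_accept, daemon=True).start()
    return port

# A server bound to 0.0.0.0 is reachable via loopback (and any other interface):
port = serve_once('0.0.0.0')
with socket.create_connection(('127.0.0.1', port), timeout=3) as c:
    assert c.recv(2) == b'ok'
```

Inside a container the same rule applies: an app bound to 127.0.0.1 is invisible to Docker's port forwarding, so it must bind to 0.0.0.0 to be reachable from outside.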
If nothing works, you can try an SSH tunnel approach:
ssh -L 8342:127.0.0.1:8342 user@<VMIP>
And then you should be able to access the app on localhost:8342