Can't connect to my broker in a Docker container [duplicate] - docker

This question already has answers here:
Mosquitto: Starting in local only mode
(6 answers)
Closed 1 year ago.
I'm trying to set up an MQTT broker in a Docker container. I pulled the following Docker image (https://hub.docker.com/_/eclipse-mosquitto) on my machine, and I can successfully launch the container with the following command:
docker run -it -p 1883:1883 -p 9001:9001 --network=host eclipse-mosquitto
If I run it with that command I get the following output:
WARNING: Published ports are discarded when using host network mode
1616081533: mosquitto version 2.0.9 starting
1616081533: Config loaded from /mosquitto/config/mosquitto.conf.
1616081533: Starting in local only mode. Connections will only be possible from clients running on this machine.
1616081533: Create a configuration file which defines a listener to allow remote access.
1616081533: Opening ipv4 listen socket on port 1883.
1616081533: Opening ipv6 listen socket on port 1883.
1616081533: mosquitto version 2.0.9 running
So then I start MQTT.fx and set up a connection to 127.0.0.1 on port 1883, but the MQTT client is unable to connect to my broker. What am I doing wrong?

Okay, let's read those logs:
WARNING: Published ports are discarded when using host network mode
In host mode, the ports exposed by the container are directly accessible from your local machine's IP (the container uses your host machine's IP address), so you do not need the -p option when launching the container.
1616081533: Starting in local only mode. Connections will only be possible from clients running on this machine.
1616081533: Create a configuration file which defines a listener to allow remote access.
It seems that you need to change some configuration for Mosquitto: create a mosquitto.conf file and look more closely at options like bind_address and listener.
You can find more about that in the Mosquitto documentation.
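A minimal mosquitto.conf that gets the broker out of local-only mode might look like this (a sketch; allow_anonymous true disables authentication, which is only appropriate for local testing):

```
# Define a listener so the broker accepts non-local connections
# (Mosquitto 2.x starts in local-only mode without one)
listener 1883

# Mosquitto 2.x refuses remote connections without authentication
# unless anonymous access is explicitly allowed
allow_anonymous true
```

Mount it into the container, e.g. `docker run -it -p 1883:1883 -v "$PWD/mosquitto.conf:/mosquitto/config/mosquitto.conf" eclipse-mosquitto` (without --network=host, so the -p mapping applies).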

Related

Bidirectional socket communication between docker container and host

I want to establish TCP socket-based communication between a client and a server, hosted in a Docker container and on the host respectively.
I am trying to run a GCC-based socket agent in an Ubuntu container on Docker Desktop, installed on a Windows 10 host. I have done port mapping (-p) to a port where a server runs on Windows 10.
docker run -it --name ubuntu1 -p 5997:5997 ubuntu /bin/bash
Now when I try to run a Java socket server on the Windows 10 host, it shows an error that the port is already bound. But I have checked that no other application is binding port 5997.
I found that -p itself binds the host port, so another service cannot bind it. If I run the socket server on the host first, then starting the container fails:
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:5997: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
Error: failed to start containers: ubuntu1
What is the correct way to establish bidirectional socket communication between a container and the host, where the socket client runs in the container and the socket server on the host?
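The usual pattern here is to drop the -p mapping entirely (the client connects out; nothing listens inside the container), bind the server on the host, and have the container connect to host.docker.internal:5997, which Docker Desktop for Windows resolves to the host. A self-contained Python sketch of the two sides, run over localhost here since the only thing that changes inside a container is the hostname (the echo behaviour and port choice are illustrative assumptions):

```python
import socket
import threading

# Host side: the real server would bind ("0.0.0.0", 5997); port 0 here
# just picks a free port so the sketch runs anywhere.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))  # echo the payload back
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Container side: under Docker Desktop you would use
# socket.create_connection(("host.docker.internal", 5997)) instead.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
t.join()
print(reply.decode())  # ping
```

The connection is bidirectional once established: either end can send and receive over it, so a single outbound connection from the container covers both directions.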

Connect to remote Docker Daemon

I have a Windows 10 Home machine (local machine, IP 192.168.0.6) where I have installed Docker Toolbox, which runs Docker inside a VM.
There is also a Windows 10 Pro machine (remote machine, IP 192.168.0.13) that has Docker Desktop installed. In the Docker Desktop settings, I have enabled "Expose daemon on tcp://localhost:2375 without TLS". At this point I do not care about the TLS part, since both machines are on the local network. In the firewall settings, I have allowed inbound connections on port 2375.
Now I would like to run docker-compose from the local machine so that it connects to and runs Docker on the remote machine. To test the connection, the command used on the local machine is:
docker -H tcp://192.168.0.13:2375 version
The response is
Cannot connect to the Docker daemon at tcp://192.168.0.13:2375. Is the docker daemon running?
I see that it calls https://192.168.0.13:2375/v1.40/info and not http://192.168.0.13:2375.
And on my remote machine, if I browse to http://localhost:2375/v1.40/info I get a response, but there is no response when I use the IP instead, like http://192.168.0.13:2375/v1.40/info.
I assume your Docker daemon is only listening on localhost (127.0.0.1).
You are trying to connect from another machine, which reaches your machine via your internal network IP, 192.168.0.13.
This means you need to configure your Docker daemon to listen on one of:
192.168.0.13 (internal network only): tcp://192.168.0.13:2375
0.0.0.0 (all IP addresses): tcp://0.0.0.0:2375
On Windows you need to create a Docker daemon config file at:
C:\ProgramData\docker\config\daemon.json
with the following content (note that JSON does not allow comments, and the port must match the one you exposed, 2375 here):
{
  "hosts": ["tcp://0.0.0.0:2375"]
}
You can probably restrict this to a specific subnet, but I am not sure about that.
This is because the VM network interface is only bound to your localhost.
Try forwarding the port. From PowerShell or a command prompt with admin privileges:
netsh interface portproxy add v4tov4 listenport=2375 listenaddress=192.168.0.13 connectaddress=127.0.0.1 connectport=2375

Local mosquitto not visible from docker

I have a local mosquitto broker running on Ubuntu with bind_address localhost. If I try to access this broker from a Docker container running node-red on the same host, it is not reachable. If I don't bind mosquitto to localhost, everything works.
What can I do to make mosquitto visible only on local machine but also accessible for local docker containers?
localhost in the docker container is not the same localhost as the machine running the Docker engine.
If you want to access the broker you will need to use the address of the host machine on the Docker virtual network (e.g. 172.17.0.1 bound to device docker0 is the default I think).
You can keep the bind_address entry, but you will need to add a second listener entry for the address bound to the docker0 interface.
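For example, the listener option takes an optional bind address, so the broker can be reachable from loopback and from the Docker bridge but nothing else (a sketch; check the actual address of your docker0 interface with ip addr, since 172.17.0.1 is only the common default, and allow_anonymous is an assumption for a broker with no authentication configured):

```
# local clients on the host
listener 1883 127.0.0.1

# clients on the default Docker bridge network
listener 1883 172.17.0.1

allow_anonymous true
```

Containers then connect to 172.17.0.1:1883, while the broker remains invisible to other machines on the LAN.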

Port binding error when connecting to localhost Kafka broker from localhost Consumer in Docker container

I have a Zookeeper running on port 2181 (the default) and a Kafka server listening on port 9092 on my local machine.
When I run kafka CLI locally, or consumer/producer apps locally, I have no problem connecting.
I then try to bundle a Kafka consumer into a Docker container, and run that Docker container locally, e.g.:
docker run -p 9092:9092 --rm <DOCKER_IMAGE>
This gives the error:
(Error starting userland proxy: Bind for 0.0.0.0:9092 failed: port is already allocated.)
This makes sense, since the Kafka server is bound to 9092, as shown by nmap -p 9092 localhost:
PORT STATE SERVICE
9092/tcp open XmlIpcRegSvc
I'd have no problem mapping the Docker container to a different port via -p XXX:9092, but how do I get the local Kafka server to listen on that new port without binding to it?
So after some digging I found a couple of options. (Note: I'm on a Mac, so #2 may not be applicable to everyone.)
1. Include --network=host in the docker run command.
2. Don't change the docker run command at all; instead connect to the broker at host.docker.internal:9092 inside the container's consumer/publisher code.
I wasn't able to get #1 to work for me (I'm sure it's user error). However, #2 worked perfectly and just required a config change inside the container.
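That config change can be as small as parameterizing the bootstrap address the consumer uses; a sketch (the KAFKA_BOOTSTRAP variable name and the default value are assumptions for illustration, not from the original post):

```python
import os

def resolve_bootstrap_servers(default: str = "host.docker.internal:9092") -> str:
    """Inside a Docker Desktop container, host.docker.internal resolves to
    the host machine, so the consumer reaches the Kafka broker bound on the
    host. An env-var override keeps the same code usable outside Docker
    (e.g. KAFKA_BOOTSTRAP=localhost:9092 when running directly on the host)."""
    return os.environ.get("KAFKA_BOOTSTRAP", default)

print(resolve_bootstrap_servers())
```

Whatever Kafka client library the consumer uses would then be handed this value as its bootstrap-server setting.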

By default, can a docker container call host's localhost UDP?

I have a Docker container, and a daemon listening for UDP on port 8125 is also installed on the VM. The container sends data over UDP to this port 8125.
I was trying to open the port by starting the container with -p 8125:8125/udp, but I'm getting the following error:
Error starting userland proxy: listen udp 0.0.0.0:8125: bind: address already in use
Which makes sense because the daemon is already listening on this port.
So how can I configure Docker so that the container can send UDP payloads to the external daemon?
Publishing ports is only needed when you want to listen for requests, not when you are sending them. By default, Docker provides the network namespace your container needs to communicate with the host or the outside world.
So, you could do it in one of two ways:
1. Use --net host in your docker run command and send requests to localhost:8125. In this case your containerized app effectively shares the host's network stack, so localhost points to the daemon that's already running on your host.
2. Talk to the gateway of the container network (which is usually 172.17.0.1), or use your host's hostname, from your container. Then you are able to send packets to the daemon on your host.
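The point that outbound UDP needs no -p mapping can be shown with plain sockets; a self-contained Python sketch (run over loopback here, with a stand-in for the daemon, since from a real container you would target 172.17.0.1 or the host's name instead of 127.0.0.1; the statsd-style payload is just an example):

```python
import socket

# Stand-in for the host daemon: a UDP socket (port 0 = pick a free port,
# so the sketch runs anywhere; the real daemon would be on 8125).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

# Container side: plain outbound UDP, no -p mapping required.
# From a real container you would send to the docker0 gateway
# (usually 172.17.0.1) instead of 127.0.0.1.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"my.metric:1|c", ("127.0.0.1", port))

data, _ = server.recvfrom(1024)
print(data.decode())  # my.metric:1|c
```

Only the daemon needs to bind the port; the sender just fires datagrams at it, which is why -p on the container side both fails and is unnecessary.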
