docker eclipse-mosquitto run only on localhost

I want to use the eclipse-mosquitto Docker image just for communication on the local machine. Which settings do I need in mosquitto.conf to make the mosquitto broker visible only on localhost and not from outside? Since a second mosquitto instance is already running, port 1883 is taken and I'm using port 1884.
This is what I have:
port 1884
bind_address 127.0.0.1
but the broker is still visible from outside.
port 1884
bind_address localhost
gives the error Error: Address not available.
Binding to the Docker bridge IP:
port 1884
bind_address 172.17.0.1
gives the same error Error: Address not available.
What can I do?

Your answer is the wrong approach: you should only really be using --network="host" for things that need to open raw sockets or receive broadcast messages from the local network.
The correct answer is to not use the bind_address option in mosquitto.conf and to use the docker -p option to do the port mapping correctly (docs).
e.g.
docker run --rm -p 127.0.0.1:1884:1884/tcp eclipse-mosquitto
Here the -p 127.0.0.1:1884:1884 maps port 1884 in the container to port 1884 bound to the loopback IP (127.0.0.1) on the host.
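As a quick check (a sketch, assuming the mosquitto clients are installed on the host), the broker should then answer on the loopback address, and the published port should show up as bound to 127.0.0.1 only:
mosquitto_pub -h 127.0.0.1 -p 1884 -t test -m hello
ss -plnt | grep 1884
The second command should list the listener as 127.0.0.1:1884 rather than 0.0.0.0:1884, so other machines cannot reach it.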

Ok, solved it myself:
Running docker with the additional option --network="host" and then in mosquitto.conf:
port 1884
bind_address 127.0.0.1
does the job.
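For reference, a minimal sketch of the full command this describes (the host path to mosquitto.conf is an assumption, adjust it to your setup; /mosquitto/config/mosquitto.conf is the config location documented for the eclipse-mosquitto image):
docker run --rm --network=host -v "$(pwd)/mosquitto.conf:/mosquitto/config/mosquitto.conf" eclipse-mosquitto
With host networking the container shares the host's network stack, so bind_address 127.0.0.1 takes effect directly on the host's loopback interface.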

Related

Cannot connect to Protonmail Bridge SMTP (host machine) from a Docker container

My setup is:
Debian, Docker
Host machine running Protonmail Bridge as a service
Docker container running Discourse with their default recommended setup
Issue: From the Docker container, I cannot connect to the SMTP server exposed by the Protonmail Bridge on the host machine.
I checked open ports on the host machine, all good:
ss -plnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.1:1025 0.0.0.0:* users:(("proton-bridge",pid=953,fd=12))
How I test
Host machine:
openssl s_client -connect 127.0.0.1:1025 -starttls smtp
Works.
Docker container:
openssl s_client -connect 172.17.0.1:1025 -starttls smtp
Connection refused.
I'm wondering whether the Protonmail Bridge service listening on 127.0.0.1:1025 is not accepting connections from the Docker container because they are not coming from 127.0.0.1 exactly. If this is the problem, how do I validate and fix it? If this is not the problem, what am I doing wrong?
Other tests
nmap 127.0.0.1 on the host machine outputs:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
1025/tcp open NFS-or-IIS
1042/tcp open afrog
Note that it lists the open port 1025.
nmap 172.17.0.1 in the Docker container does not list port 1025 at all. I'm not sure if this is the problem either.
Output of route in the Docker container:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
This may be impossible currently, but should be solved by this pull request.
If you're comfortable compiling the proton-bridge package from source, you only have to change one line in the internal/bridge/constants.go file, from
Host = "127.0.0.1"
to
Host = "0.0.0.0"
Then recompile with make build-nogui (to build the "headless" version).
And you should be good to go!
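A quick way to verify the rebuilt bridge (a sketch, reusing the commands from the question and assuming 172.17.0.1 is still the Docker bridge gateway):
ss -plnt | grep 1025
openssl s_client -connect 172.17.0.1:1025 -starttls smtp
The first command, run on the host, should now show the bridge listening on 0.0.0.0:1025 instead of 127.0.0.1:1025; the second, run inside the container, should no longer be refused.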

Mosquitto - Unable to connect over network other than on the default port

I am running Mosquitto 1.4.8 on Ubuntu successfully on port 1883 (tested from another machine with mosquitto_sub/mosquitto_pub). However, I am encountering issues when attempting to use another port, e.g.:
mosquitto -p 1884 -c moddebug.conf
This works OK if I access it from the same machine e.g.:
mosquitto_pub -h 127.0.0.1 -p 1884 -t exampleTopic -m test
but if I attempt to connect from another machine I get an error:
mosquitto_pub -h IP_ADDRESS -t exampleTopic -m test -p 1884
Connection timed out
My moddebug.conf file is:
log_type all
log_dest file mosquitto2_log.log
The log does not provide any extra information:
Config loaded from moddebug.conf.
Opening ipv4 listen socket on port 1884.
Opening ipv6 listen socket on port 1884.
mosquitto version 1.4.8 terminating
I have tried altering the firewall rules (but this did not help):
ufw allow 1884/tcp
Rules updated
Rules updated (v6)
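Two things that may be worth checking (a diagnostic sketch, not a confirmed fix): whether the second instance is still running at all, given that the log above ends with the broker terminating, and which address port 1884 is actually bound to:
ss -plnt | grep 1884
mosquitto -v -p 1884 -c moddebug.conf
The second command runs the broker in the foreground with verbose logging, which should show why it exits or whether it stays up.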

VisualVM cannot connect to any port except for 1099

I have a remote JVM application running inside a Docker container managed by Kubernetes:
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Dcom.sun.management.jmxremote.port=1099 \
  -Dcom.sun.management.jmxremote.rmi.port=1099 \
  -Djava.rmi.server.hostname=127.0.0.1 \
  -jar /path/to/app.jar
When I try to debug using port forwarding and VisualVM, it works only when I use port 1099 on the local machine. Ports 1098, 10900, or any others don't work. This one works for VisualVM: kubectl port-forward <pod-name> 1099:1099. This one doesn't: kubectl port-forward <pod-name> 1098:1099
I use "Add JMX Connection" option in VisualVM, connecting to localhost:1099 or localhost:1098. The former works, the latter doesn't.
Why can't I use non-1099 ports with VisualVM?
UPD
I believe the issue is related to VisualVM, because port forwarding seems to work fine whatever local port I choose:
$ kubectl port-forward <pod> 1098:1099
Forwarding from 127.0.0.1:1098 -> 1099
Forwarding from [::1]:1098 -> 1099
Handling connection for 1098
Handling connection for 1098
The full JMX URL for connecting to localhost is as follows:
service:jmx:rmi://localhost:<port1>/jndi/rmi://localhost:<port2>/jmxrmi
...where <port1> is the port number on which the RMIServer and RMIConnection remote objects are exported and <port2> is the port number of the RMI Registry.
For port 1098 you could try
service:jmx:rmi://localhost:1098/jndi/rmi://localhost:1098/jmxrmi
I'd guess that both ports default to 1099 if not explicitly configured.
EDIT: Per the comments, the JMX URL that worked was:
service:jmx:rmi://localhost:1098/jndi/rmi://localhost:1099/jmxrmi
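Putting the pieces above together (a sketch that only restates what is reported to work above; <pod-name> is a placeholder):
kubectl port-forward <pod-name> 1098:1099
Then in VisualVM's "Add JMX Connection" dialog, enter the full service URL instead of a plain host:port, i.e. service:jmx:rmi://localhost:1098/jndi/rmi://localhost:1099/jmxrmi.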

When to perform host-ip based port mapping like "-p host-ip:port:port"

Docker provides a way to map ports between the container and host.
As per the official documentation, it's also possible to specify a host IP when mapping a port.
-p 192.168.1.100:8080:80 - Map TCP port 80 in the container to port 8080 on the Docker host for connections to host IP 192.168.1.100.
I tried this option to figure out the difference with and without the host IP.
Using just -p 80:80
$ docker run -itd -p 80:80 nginx:alpine
$ curl localhost:80
$ curl 127.0.0.1:80
$ curl 0.0.0.0:80
$ curl 192.168.0.13:80
$ ps -ef | grep docker-proxy
16723 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.1 -container-port 80
$
All the curl commands return the output.
Using host-ip like -p 192.168.0.13:80:80
$ docker run -itd -p 192.168.0.13:80:80 nginx:alpine
$ curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
$ curl 127.0.0.1:80
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
$ curl 0.0.0.0:80
curl: (7) Failed to connect to 0.0.0.0 port 80: Connection refused
$ curl 192.168.0.13:80 # return output
$ ps -ef | grep docker-proxy
4914 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 192.168.0.13 -host-port 80 -container-ip 172.17.0.2 -container-port 80
$
All the curl commands failed except 192.168.0.13:80.
Is there any other difference apart from the one I mentioned here?
I'm wondering when to use host-IP based port mapping. Any use cases?
A docker host may have multiple NICs. In a data center, this may be to segregate traffic, e.g. management, storage, and application/public. On your laptop, this may be for the wireless and wired interfaces. There are also virtual NICs for things like loopback (127.0.0.1) and VPN tunnels.
When you do not specify an IP in the port publish command, by default docker will bind to all interfaces on the host. In IPv4, this is commonly notated as 0.0.0.0, which means listen on any interface (which is also why connecting to this address doesn't really make sense; there's no such thing as connecting to "any" IP). With an IP address specified, you choose exactly which interface to listen on. Why would you want to specify this? Several reasons I can think of:
Listening on only 127.0.0.1 to prevent external access
Listening on 0.0.0.0 to explicitly bind to all IPv4 interfaces (it is possible to change docker's default behavior, so this could be necessary for some).
Listening on one physical NIC, allowing other NICs to be bound by other services on the same port.
Listening on only IPv4 interfaces if the app does not work for IPv6.
While there are lots of possible reasons, other than listening on loopback for security, these use cases are very rare and most users leave docker to listen on all interfaces.
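For the most common of these cases, restricting a published port to the host itself, a minimal sketch (using the same nginx image as above and an arbitrary host port 8080):
docker run -d -p 127.0.0.1:8080:80 nginx:alpine
curl 127.0.0.1:8080
curl 192.168.0.13:8080
The first curl succeeds on the host, while the second (and any request from another machine) is refused, because the port is only bound to the loopback interface.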

Bridge docker container port to host port

I run a docker container with the following command:
docker run -d --name frontend_service --net=host --publish=3001:3000 frontend_service
As I understand it, this maps the host port 3001 to the container port 3000.
I already ssh'd into the container and checked curl localhost:3000 - it works. But outside, on the host, I can't curl localhost:3001.
I checked with nmap. The port shows as open:
nmap -v -sT localhost
Starting Nmap 6.47 ( http://nmap.org ) at 2016-10-19 01:24 UTC
Initiating Connect Scan at 01:24
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 5051/tcp on 127.0.0.1
Discovered open port 3001/tcp on 127.0.0.1
Completed Connect Scan at 01:24, 0.06s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0011s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
3001/tcp open nessus
5051/tcp open ida-agent
How can I connect the container port to my host port?
When you specify --net=host, you are completely turning off Docker's network setup steps. The container won't get its own network namespace, won't get its own interfaces, and the port publishing system will have nothing to route to.
If you want your -p 3001:3000 to work, don't use --net=host.
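A sketch of the corrected invocation (same names as in the question), after which the published port should answer on the host:
docker run -d --name frontend_service -p 3001:3000 frontend_service
curl localhost:3001
Without --net=host, the container gets its own network namespace again and docker's port publishing maps host port 3001 to container port 3000 as intended.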
