Can I publish and subscribe to messages on a Mosquitto broker using an ESP8266? Is Mosquitto mainly meant for localhost use?
My requirement is: I have an ESP8266 and an STM32 board. Can I use Mosquitto as the broker?
Mosquitto is a fully functional MQTT broker.
The default configuration will bind to all available interfaces on the host so it can be reached from any host on the network that knows the correct address.
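As a quick sanity check, independent of the ESP8266/STM32 firmware, the stock mosquitto_sub and mosquitto_pub command-line clients can subscribe and publish against such a broker from any machine on the network (192.168.1.50 below is just a placeholder for the broker's actual address, and the topic names are arbitrary):
mosquitto_sub -h 192.168.1.50 -t "sensors/#" -v
mosquitto_pub -h 192.168.1.50 -t "sensors/stm32/temperature" -m "23.5"
Any MQTT client library on the ESP8266 would point at the same host and port (1883 by default).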
I have a problem with the IP address of the Mosquitto broker. Currently I'm trying to get the Mosquitto broker running locally. I used the Siemens PLCSIM Virtual Ethernet Adapter as the connection and set its IP address to 192.168.0.10. The version of Mosquitto I am using is 2.0.15. I added the following two lines to mosquitto.conf:
listener 1883
allow_anonymous true
and entered the following command in the command prompt:
mosquitto.exe -c mosquitto.conf -v
After that, when I tested the local connection, everything worked fine. The IP address of the Mosquitto broker is the IP address of the Siemens PLCSIM Virtual Ethernet Adapter, i.e. the 192.168.0.10 I set before.
Now suppose I have an actual PLC and want to pass data through the Mosquitto broker, and the IP address of the network I am connected to is 192.168.0.103. What should I do if I want the Mosquitto broker to be reachable over the network instead of only locally? Do I need to make any changes to mosquitto.conf? And if the broker is running on the network, is its IP address 192.168.0.103?
As configured, Mosquitto will bind to ALL the IP addresses on the machine it is running on, so there is no need to change its configuration at all.
You need to configure any MQTT client that wants to connect to the broker to use the IP address of whichever interface on the broker machine is on the same subnet as the client device (assuming no routing is taking place).
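For example, from a client machine on that subnet, a quick test against the broker address from the question would look something like this (topic name is arbitrary):
mosquitto_sub -h 192.168.0.10 -p 1883 -t "plc/test"
mosquitto_pub -h 192.168.0.10 -p 1883 -t "plc/test" -m "hello"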
I would like to access the Docker API (running on Windows Server).
Sadly a TCP connection is not possible in our network (at least for this case).
Here I found a solution to change the port, but I am not sure whether changing the protocol is possible.
{
"hosts": ["tcp://0.0.0.0:4243"]
}
From the docs:
The Docker daemon can listen for Docker Engine API requests via three different types of Socket: unix, tcp, and fd.
... udp is not an option.
I have a few Docker containers (using docker-compose and a single network, network-sol).
One of the containers is a Spring Boot application that sends UDP broadcasts to the local network. Broadcasting to 255.255.255.255 fails because it ends up as the local broadcast address of network-sol.
How can I broadcast UDP messages so that the "top" local network (the host's LAN) will get those packets? Do I have to use a directed broadcast address for that?
P.S. The broadcast works if the application is deployed outside of Docker (as part of the local network).
You can either run the service defined in your docker-compose.yml file with network_mode: host.
Alternatively, you can publish the port of the container you intend to communicate with using the following configuration. Note that the /udp suffix is required for UDP communication to work.
service:
ports:
- "8080:8080/udp"
I have gotten some luck out of this. The guide specifies the sysctl parameters that are needed for broadcast forwarding from a Docker network; you should then be able to either use the author's script or specify these parameters when running Docker.
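Purely as an illustration of how such parameters are passed (the actual sysctl names come from the guide above; net.ipv4.conf.all.bc_forwarding=1 below is only a stand-in), per-container sysctls can be set on the command line or in compose:
docker run --sysctl net.ipv4.conf.all.bc_forwarding=1 my-spring-boot-app
service:
  sysctls:
    - net.ipv4.conf.all.bc_forwarding=1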
I have installed the Mosquitto broker on an EC2 instance, but the MQTT client cannot access the broker.
What's the solution?
Most likely you need to add the MQTT port 1883 (or 8883 for TLS) to your security group to allow access from external clients.
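For example, with the AWS CLI an ingress rule for port 1883 can be added along these lines (the security group ID and source CIDR are placeholders; restrict the source range to what you actually need):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1883 --cidr 203.0.113.0/24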
A few days ago I tried to configure a Kafka Docker container with Docker Compose and port mapping, and discovered interesting behaviour that I do not fully understand:
the Kafka broker seems to connect to itself. Why?
My set up is:
Ubuntu 14.04, Docker 1.13.1, Docker-Compose 1.5.2
Kafka 0.10 listens on port 9092, and this port is exposed by the container.
In Docker Compose I have a port mapping from container port 9092 to local port 4005.
I configured the hostname of my Docker host machine and the local port from Compose in advertised.listeners (docker-host:4005), since the broker should be visible from my company network (see the compose sketch below).
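A rough sketch of that compose service (the image name is a placeholder; advertised.listeners is set inside the container as described above):
kafka:
  image: my-kafka-0.10-image   # placeholder
  ports:
    - "4005:9092"              # host port 4005 -> container port 9092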
With this setup, when I try to send/fetch data to/from Kafka, all attempts end up with:
Topic metadata fetch included errors: {topic_name=LEADER_NOT_AVAILABLE}
After trying various combinations of ports and hostnames in advertised.listeners, I discovered that the sole working combination is localhost:9092. Any attempt to change the hostname or port led to the error mentioned above.
This made me think that Kafka tries to connect to the address configured in advertised.listeners, and that this is somehow related to topic metadata.
So inside the Docker container I did the following:
redirect traffic for "docker-host" to loopback
echo "127.0.0.1 $ADVERTISED_HOST" >> /etc/hosts
configure Kafka to listen on all interfaces, on exactly the port that is advertised
sed -r -i "s/#(listeners)=(.*)/\1=PLAINTEXT:\/\/0.0.0.0:4005/g" $KAFKA_HOME/config/server.properties
advertise "docker-host" and external port
sed -r -i "s/#(advertised.listeners)=(.*)/\1=PLAINTEXT:\/\/$ADVERTISED_HOST:4005/g" $KAFKA_HOME/config/server.properties
And now it works like a charm.
However I still do not understand:
Why might the Kafka broker need to connect to itself via the address configured in advertised.listeners?
Is there a way to disable this, or at least to configure it to use the address from the 'listeners' property (with the default Kafka port)?
UPD
Worth mentioning: the following setup does not work: Kafka listens on 0.0.0.0:9092 and the advertised listener is configured as docker-host:4005.
In this case, whenever a consumer or producer connects to Kafka, it receives LEADER_NOT_AVAILABLE.
netstat (within the container) also shows a connection to docker-host:4005 in the SYN_SENT state.
UPD 2
It looks like a similar problem with Kafka, but inside AWS, is described here.
The difference is that in my case I want to use a different Kafka port.
UPD 3
OK, the reason why the setup mentioned in the first UPD paragraph does not work is UFW: for some reason it blocks traffic that goes from the Docker container back to itself via the host machine.
Why might the Kafka broker need to connect to itself via the address configured in advertised.listeners?
When a client first connects to a Kafka broker, the broker replies with the address that it expects the client to use for all subsequent requests. This is what is set in the advertised.listeners property. If you don't set this property, the value from listeners will be used instead (which answers your second question).
So your "issue" is that remote clients connect to yourhost:9092 and reach the Kafka broker because you forwarded the port; the broker then responds with "you can reach me at localhost:9092", and when the client sends its next request there, it just connects back to itself.
The metadata is not really related here; it's just the first request that gets made.
Your solution is correct for this setup, I think: have Kafka listen on the local interfaces and set advertised.listeners to the host that someone from your company network would connect to.
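For instance, a client on the company network would then simply be pointed at the advertised address, e.g. with the console producer that ships with Kafka 0.10 (the topic name is arbitrary):
kafka-console-producer.sh --broker-list docker-host:4005 --topic test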
I don't know 100% whether the broker needs to connect to itself as well, but I'm pretty sure that's not the case. I think your setup would also work without the entry for the external hostname in your /etc/hosts file.
Is there a way to disable this, or at least to configure it to use the address from the 'listeners' property (with the default Kafka port)?
see above