I am using the Python Paho MQTT library and the Mosquitto broker, both with MQTT 3.1.1.
If two clients connect with the same client ID (for whatever reason), both clients keep getting disconnected and then reconnect in a loop forever.
on_disconnect fires with rc=1, but on_connect reports rc=0 (and then the client gets disconnected again).
Is there any way to detect that the issue is a duplicated client ID?
Not yet. Hopefully this will be addressed in a future version of the spec.
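The broker sends no reason code for this in MQTT 3.1.1, so the best you can do is a heuristic: count unexpected disconnects over a short window and treat a rapid loop as a likely duplicate client ID. A rough sketch (host, client ID, and threshold values are placeholders to tune):
import time
import paho.mqtt.client as mqtt

WINDOW = 10     # seconds to watch for repeated drops
LIMIT = 3       # unexpected drops within the window before we complain

drops = []

def on_disconnect(client, userdata, rc):
    # rc != 0 means the connection was closed unexpectedly, not by us
    if rc != 0:
        now = time.time()
        drops.append(now)
        drops[:] = [t for t in drops if now - t < WINDOW]
        if len(drops) >= LIMIT:
            print("rapid connect/disconnect loop: client ID may be in use elsewhere")

mqttc = mqtt.Client("my_client_id")          # placeholder client id
mqttc.on_disconnect = on_disconnect
mqttc.connect("broker.example.com", 1883)    # placeholder host
mqttc.loop_forever()                         # paho reconnects, so a loop will repeat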
I have an application where I'm sending MQTT messages to an IoT platform, which runs its own broker. The problem arose when the broker went down for 2-3 days, and I lost 2-3 days' worth of data.
I was wondering if there is a way to ensure that all data points are stored, and then sent in order when the broker comes back online. I've been testing this with Mosquitto, but I can't seem to get it to work.
Is it a matter of using Quality of Service (QoS)? Does this work even when the broker is down, or does it need a broker to communicate with? Or do I need to use persistence or retain?
Yes, you are on the right track: it requires QoS used together with a few other settings. You can test under the following conditions:
Initialize your MQTT client with the clean session flag set to False and a unique client ID.
Here is an example using the Paho Python library:
mqttc = mqtt.Client("specify_a_unique_client_id", clean_session=False)
Subscribe to a topic with QoS >= 1;
Publish to a topic with QoS >= 1;
NOTICE: you must specify a unique client ID, so that the broker can still recognize the previous client session when it reconnects. Leaving the client ID empty will auto-generate a new one.
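Putting those steps together, here is a minimal sketch (the host and topic names are placeholders):
import paho.mqtt.client as mqtt

# clean_session=False asks the broker to keep the session (subscriptions and
# queued QoS >= 1 messages) across reconnects; the fixed client ID is what
# lets the broker match a reconnecting client back to its stored session
mqttc = mqtt.Client("specify_a_unique_client_id", clean_session=False)
mqttc.connect("broker.example.com", 1883)    # placeholder host

mqttc.subscribe("sensors/data", qos=1)       # QoS >= 1 so missed messages are queued
mqttc.publish("sensors/data", "42", qos=1)   # QoS >= 1 so delivery is retried

mqttc.loop_forever()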
Bonus: here is a good series of articles explaining all the configuration options in MQTT, in case you want to understand the details.
I wrote an MQTT client program which runs on one computer (computer 1). The client connects to an MQTT broker with QoS=1 and publishes data to it periodically. From another computer (computer 2), I subscribe to the broker (QoS=1) using the mosquitto_sub utility. I found that data published to the broker is delivered with a delay of about 3 seconds, which is too long. I checked the code and found that the 3-second delay comes from read_packet(), which reads back the acknowledgement from the broker. Why is there such a long delay, and how can I track it down? The broker (MQTT server) is managed by my coworker. If the broker is the cause, I can ask them for help, but I need to know the likely trouble source so that I can check with them.
I can confirm the delay occurs while reading back the acknowledgement from the broker by watching the debug messages from the MQTT client program on computer 1. For QoS=1, the client must read back an acknowledgement after sending (publishing) packets, and I see a 3-second gap between sending a packet and reading back the acknowledgement. I also see the delay in the output of mosquitto_sub.
Assuming near-instant network comms and nothing else strange going on, the fact that you have recreated the problem with mosquitto_sub points to the MQTT broker as the source of the problem.
Without knowing which broker you are using and how heavily it is loaded it's hard to say more, but you should start by looking at the broker logs.
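To isolate it, you can measure the publish-to-PUBACK round trip directly from the client side. A rough probe with Paho (host, client ID, and topic are placeholders):
import time
import paho.mqtt.client as mqtt

mqttc = mqtt.Client("latency_probe")         # placeholder client id
mqttc.connect("broker.example.com", 1883)    # placeholder host
mqttc.loop_start()                           # network loop in a background thread

for i in range(10):
    t0 = time.time()
    info = mqttc.publish("test/latency", "ping %d" % i, qos=1)
    info.wait_for_publish()                  # for QoS 1, returns once the PUBACK arrives
    print("publish %d acknowledged after %.3f s" % (i, time.time() - t0))
    time.sleep(1)

mqttc.loop_stop()
If the acknowledgement consistently takes ~3 seconds here too, that number is something you can take to the broker's operators.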
We use ActiveMQ 5.10 for our JMS middleware, but we have found that connections accumulate and sometimes reach the limit, and we didn't find any connection-leak problem in our code.
Searching around, we found that ActiveMQ Artemis has a feature named "Detecting Failure from the Client", but we didn't find such a feature in ActiveMQ:
https://activemq.apache.org/components/artemis/documentation/latest/connection-ttl.html
Can anyone tell me whether ActiveMQ 5.10 has any feature like Artemis's "Detecting Failure from the Client"?
Thanks
The ActiveMQ 5.x broker has features to detect that a client has dropped, mainly in the form of heartbeat messages to the connected clients. If you have changed the clients' default configuration and set the heartbeat intervals to large values, it can become hard for the broker to detect drops in a timely manner.
It also depends a bit on the client you are using and the protocol it speaks, but most have some form of heartbeat capability, so you would need to investigate your clients and the configuration you are operating under.
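For example, if some of your clients happen to talk to ActiveMQ over MQTT (which 5.x supports), the keepalive interval is what drives drop detection; per the MQTT spec, the broker may close a connection after roughly 1.5x the keepalive of silence. A sketch with Paho (host and client ID are placeholders):
import paho.mqtt.client as mqtt

mqttc = mqtt.Client("monitored_client")           # placeholder client id
# keepalive=30: the client pings at least every 30 seconds, and the broker
# can consider it dead after ~1.5x that interval with no traffic
mqttc.connect("activemq.example.com", 1883, keepalive=30)
mqttc.loop_forever()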
5.10.0 is also a very old broker release; upgrading to the latest release will give you the numerous bug fixes that have been made since then.
I set up an MQTT message broker using Mosquitto on my network. I have one web app publishing things to the broker and several servers subscribed to the same topic, so I have a redundancy scenario.
My question is: using Mosquitto alone, is there any way to configure it to deliver data only to the first subscriber? Otherwise, all of them will do the same work.
I don't think that is possible.
But you can do this.
Have the first subscriber respond with an ack on the channel as soon as it gets the message, and have the redundant subscribers look for that ack for a short time after the initial message; see the sketch below.
If the ack is received, the redundant subscribers should not do anything.
So if the first subscriber gets and uses the message, the others won't do anything even though they also receive it.
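A rough sketch of that scheme with Paho (the topic names, timings, and primary/backup flag are all made up for illustration; real code would also need thread-safe shared state):
import time
import paho.mqtt.client as mqtt

WORK_TOPIC = "jobs/incoming"   # hypothetical topic names
ACK_TOPIC = "jobs/ack"
ACK_WAIT = 0.5                 # seconds a backup waits for the primary's ack
IS_PRIMARY = True              # set to False on the backup servers

acked = set()
pending = []                   # (job, arrival_time) entries a backup is holding

def on_message(client, userdata, msg):
    payload = msg.payload.decode()
    if msg.topic == ACK_TOPIC:
        acked.add(payload)
    elif IS_PRIMARY:
        client.publish(ACK_TOPIC, payload, qos=1)  # claim the job immediately
        print("processing", payload)
    else:
        pending.append((payload, time.time()))     # wait and see if it gets claimed

mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.connect("broker.example.com", 1883)          # placeholder host
mqttc.subscribe([(WORK_TOPIC, 1), (ACK_TOPIC, 1)])
mqttc.loop_start()                                 # network loop in the background

while True:                                        # the backup's take-over check
    now = time.time()
    for job, t in list(pending):
        if job in acked:
            pending.remove((job, t))               # primary claimed it; do nothing
        elif now - t > ACK_WAIT:
            pending.remove((job, t))
            print("primary silent, processing", job)
    time.sleep(0.1)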
No, this is not possible with mosquitto at the moment (without communication between the two subscribers, as described in the other answer).
The new release of the MQTT spec (v5)* adds a mode called "Shared Subscriptions". It allows multiple clients to subscribe to a single topic and have messages delivered round-robin to them. This is intended more for load balancing than for master/slave failover.
*There are some brokers (HiveMQ, IBM MessageSight) that already support some version of Shared Subscriptions on MQTT v3.1.1, but they implement it in slightly different ways (different topic prefixes), so they are not cross-compatible.
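On a broker that does support MQTT v5 shared subscriptions, the Paho side looks roughly like this (share name, topic, and host are placeholders):
import paho.mqtt.client as mqtt

mqttc = mqtt.Client("worker_1", protocol=mqtt.MQTTv5)
mqttc.connect("broker.example.com", 1883)    # placeholder host
# every client subscribing under the same share name ("workers") gets
# messages from jobs/incoming distributed round-robin among them
mqttc.subscribe("$share/workers/jobs/incoming", qos=1)
mqttc.loop_forever()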
I'm building a chat server with .NET. I have tried opening about 2000 client connections, and my Linksys WRT54GL router (with Tomato firmware) drops dead each time. The same thing happens when I have several connections open in my Azureus BitTorrent client.
I have three questions:
Is there a limit on the number of open sockets I can have in Windows Server 2003?
Is the Linksys router the problem? If so is there better hardware recommended?
Is there a way to possibly share sockets so that I can handle more open client connections with fewer resources?
As I've mentioned before, Raymond Chen has good advice on this sort of question: if you have to ask about OS limits, you're probably doing something wrong. TCP only allows for a maximum of 65535 ports per IP address, and many of these are reserved and not available for general use. I would suggest that your messaging protocol needs to be thought out in more detail so that OS limits are not an issue. I'm sure there are many good resources describing such systems, and there are certainly people here who would have good ideas about it.
EDIT: I'm going to put some thoughts about implementing a scalable chat server.
First off, designate a single port on the server for clients to communicate through. Whenever a client needs to update the chat state (a new user message for example) do the following:
create message packet
open port to server
send packet
close port
The server then does the following:
connection request received
get packet
close connection
process packet
for each client that requires updating
    open connection to client
    send update packet
    close connection
When a new chat session is started, the client starting the session sends a 'new session' message to the server with the client's user details and IP address for responses. The server creates a new chat session and responds with the session ID. The client then sends packets containing the messages the user types; the server processes them and forwards them to the other clients in the same session. When a client leaves the chat, it sends an 'end session' message to the server. The server removes the client from the session and destroys the session when no clients remain in it.
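A minimal sketch of that open/send/close exchange in Python (the port and wire format are made up for illustration):
import socket

ADDR = ("0.0.0.0", 5000)               # placeholder listen address

# client side: open, send, close - no long-lived socket per client
def send_packet(addr, payload):
    with socket.create_connection(addr) as sock:
        sock.sendall(payload)

# server side: accept, read the whole packet, close, then process
def serve_once(listener):
    conn, _ = listener.accept()
    with conn:
        packet = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            packet += chunk
    handle(packet)                     # process only after the socket is closed

def handle(packet):
    print("got", packet)

listener = socket.socket()
listener.bind(ADDR)
listener.listen()
while True:
    serve_once(listener)
The point of the design is that sockets are held open only for the duration of one packet, so the number of simultaneous connections stays far below the number of active chat clients.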
Hope that gets you thinking.
I have found some answers to this that I feel I should share:
Windows Server 2003 has a limit on the number of ports that may be used, but it is configurable via a registry tweak: change the MaxUserPort setting from the default of 5000 up to the maximum of about 64k (65534).
Exploring further, I realized that the 64k port restriction is actually per IP address, so a single server can obtain far more ports, and hence TCP connections, by installing multiple network cards or binding more than one IP address to a network card. That way you can scale the system to handle n x 64k ports.
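For illustration, this is how an outgoing connection can be pinned to a second local IP address in Python (the addresses are hypothetical):
import socket

# outgoing connections normally draw from one local IP's ~64k ephemeral ports;
# binding a different local address before connecting draws from that
# address's own port range instead
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("192.168.1.11", 0))         # second IP on the NIC; port 0 = any free port
sock.connect(("192.168.1.50", 5000))   # hypothetical chat server address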
For days I had a problem with the available sockets on my Windows 7 machine. After reading some articles about socket leaks in Windows 7, I applied a Windows patch, but nothing changed.
The article below describes Windows connection problems in great detail:
http://technet.microsoft.com/en-us/magazine/2007.12.network.aspx
The following worked for me:
Open Regedit.
Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters, create TcpNumConnections (REG_DWORD, decimal value 500; set this according to your needs) and EnableConnectionRateLimiting (REG_DWORD, value 0).
Under the same Tcpip\Parameters key, create MaxUserPort (REG_DWORD, decimal value 65534).
Restart Windows.