I'm using an esp8266 with firmware produced by Marcel's NodeMCU custom builds http://frightanic.com/nodemcu-custom-build/
I tested both the "dev" and the "master" branch.
I slightly modified the "Connect to MQTT Broker" example code found here https://github.com/nodemcu/nodemcu-firmware
-- init mqtt client with keepalive timer 120sec
m = mqtt.Client("clientid", 120, "user", "password")
m:on("connect", function(con) print("connected") end)
m:on("offline", function(con) print("offline") end)
-- m:connect( host, port, secure, auto_reconnect, function(client) )
-- for secure: m:connect("192.168.11.118", 1880, 1, 0)
-- for auto-reconnect: m:connect("192.168.11.118", 1880, 0, 1)
m:connect("192.168.11.118", 1880, 0, 0, function(conn) print("connected") end)
-- publish a message with data = hello, QoS = 0, retain = 0
local i = 1
while i < 10 do
    m:publish("/topic", "hello", 0, 0, function(conn) print("sent") end)
    i = i + 1
end
m:close()
I'm using mosquitto as the MQTT broker and I have launched a subscriber on the wildcard topic #.
The result: the messages arrive correctly, but they are really slow to reach the subscriber (around 1 second each)... why?
I also tried replacing MQTT with UDP: the esp8266 sends the 100 messages fast.
UPDATE #1:
I have done some more experiments:
- Testing the broker and the subscriber with an [Android phone + an MQTT publisher], the subscriber receives messages immediately.
- I loaded a NodeMCU build with "debug" enabled and made an interesting discovery: read on.
From what I have understood by reading the debug log and the source code:
There is a queue that stores outgoing messages in memory, and a timer (I don't know its interval) takes one message at a time from the queue and sends it over MQTT.
If you try to send 100 messages, the queue grows, but messages are not delivered at the rate they are enqueued (maybe there is a race condition?).
There is a second problem: after more than 15 messages have been enqueued, the firmware crashes and the device reboots, which looks like a symptom of running out of memory.
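To illustrate the first problem, here is a toy model in plain Python (just a sketch of the pattern I think I'm seeing, not the firmware's actual code): a producer enqueues far faster than a fixed-interval timer drains, so the queue, and memory use, keep growing.
import queue
import threading
import time

q = queue.Queue()

def drain():
    # One message per timer tick, like the firmware's internal send timer.
    while True:
        msg = q.get()
        print("sent:", msg, "| still queued:", q.qsize())
        time.sleep(1.0)  # assumed ~1 s interval, matching what I observe

threading.Thread(target=drain, daemon=True).start()

# Enqueue 100 messages "instantly", like publishing in a tight loop.
for i in range(100):
    q.put("hello %d" % i)

time.sleep(5)  # watch only ~5 of the 100 messages go out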
It may not be the answer you're looking for, but yes, NodeMCU MQTT uses an internal queue for messages. It was added at the end of March 2015 because of the asynchronous nature of the NodeMCU API.
If you have two calls to m.publish in quick succession, remember they're asynchronous: there isn't enough time for the 1st message to be delivered before the 2nd is triggered. Before the introduction of that queue, the firmware would simply have crashed if you had published in a loop.
I simplified your code even more and added some debugging statements:
m = mqtt.Client("clientid", 120, "user", "password")
m:connect("m20.cloudmqtt.com", port, 0, function(conn)
    print("MQTT connected")
    for i = 1, 10 do
        print("MQTT publishing...")
        m:publish("/topic", "hello", 0, 0, function(conn)
            print("MQTT message sent")
            print("  heap is " .. node.heap() .. " bytes")
        end)
        print("  heap is " .. node.heap() .. " bytes in loop " .. i)
    end
end)
Knowing that the calls to m.publish are asynchronous, the output shouldn't be too surprising:
MQTT connected
MQTT publishing...
heap is 37784 bytes in loop 1
MQTT publishing...
heap is 37640 bytes in loop 2
MQTT publishing...
heap is 37520 bytes in loop 3
MQTT publishing...
heap is 37448 bytes in loop 4
MQTT publishing...
heap is 37344 bytes in loop 5
MQTT publishing...
heap is 37264 bytes in loop 6
MQTT publishing...
heap is 37192 bytes in loop 7
MQTT publishing...
heap is 37120 bytes in loop 8
MQTT publishing...
heap is 37048 bytes in loop 9
MQTT publishing...
heap is 36976 bytes in loop 10
sent
heap is 38704 bytes
sent
heap is 38792 bytes
sent
heap is 38856 bytes
sent
heap is 38928 bytes
sent
heap is 39032 bytes
sent
heap is 39112 bytes
sent
heap is 39184 bytes
sent
heap is 39256 bytes
sent
heap is 39328 bytes
sent
heap is 39400 bytes
You see that the available heap space is decreasing while publishing and increasing again as the queue is emptied.
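The takeaway: if you need back-pressure, publish the next message from the previous message's callback instead of in a loop. On a desktop client the same pattern looks like this; a minimal sketch using the Python paho-mqtt 1.x package, which is my own choice for illustration and not part of the NodeMCU API:
import paho.mqtt.client as mqtt

client = mqtt.Client("clientid")  # paho-mqtt 1.x constructor
client.username_pw_set("user", "password")
client.connect("192.168.11.118", 1880)
client.loop_start()  # network I/O runs in a background thread

for i in range(10):
    # publish() is asynchronous here too; wait_for_publish() blocks
    # until this particular message has actually been handed off.
    info = client.publish("/topic", "hello", qos=1)
    info.wait_for_publish()
    print("sent", i)

client.loop_stop()
client.disconnect()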
I have the following setup:
- An FPGA sending out data on UART at a baud rate of 3 Mbps. The data transmitted is a chunk of 1024 bytes sent at a variable periodicity ranging from 20 ms to 200 ms (so even in the worst case, the data rate is far below 3 Mbps).
- An FTDI 232RG.
- A piece of Python running on my computer (Windows) that basically opens a COM port with pyserial at 3 Mbps, polls in_waiting until it reaches the size of a packet (1024 bytes), then formats the received packet and prints it on the screen.
The script works well at low repetition frequencies, but I face issues at higher ones (typically 20 ms). With a 20 ms periodicity I eventually get a buffer overflow somewhere before the in_waiting check. I timed my Python loop and it takes about 4 ms, so it looks like something upstream (in the FTDI or in Windows) feeds the pyserial buffer with more than one packet within the 4 ms following a packet.
I tried changing the FTDI latency timer in the driver (from the 16 ms default down to a few ms) but it does not seem to help.
I am currently clueless about what is happening. Do you have any advice on how to understand it better?
Thanks for your help!
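One thing worth trying, sketched below under assumptions from the question (Windows, pyserial, 1024-byte packets; the COM port name is a placeholder): enlarge the driver-side receive buffer and use fixed-size blocking reads instead of polling in_waiting, so a burst of packets can't overflow the buffer while the loop is busy elsewhere.
import serial

# Placeholder port; 3 Mbps as in the question.
ser = serial.Serial("COM7", 3000000, timeout=1)

# pyserial, Windows only: ask the driver for a larger RX buffer so
# several 1024-byte packets can pile up without overflowing.
ser.set_buffer_size(rx_size=64 * 1024)

while True:
    packet = ser.read(1024)  # blocks until a full packet (or timeout)
    if len(packet) == 1024:
        print(packet[:16].hex(), "...")  # placeholder formatting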
You could create a "loop" between TX and RX and run the following code (tested with an FT2232H, so most likely you need to change the identifier string):
import time
import serial
import serial.tools.list_ports
print([(x[0],x[2]) for x in serial.tools.list_ports.comports()])
port = [x[0] for x in serial.tools.list_ports.comports() if "FT4Q1LJFB" in x[2]][0]
ser = serial.Serial(port,12000000)
while True:
    t0 = time.time()
    counter = 0
    for i in range(1000):
        ser.write([1]*3000)
        recv = ser.read(ser.inWaiting())
        delta_t = time.time() - t0
        counter += len(recv)
    print(counter / delta_t)
For me, the following output is shown:
[('COM7', 'USB VID:PID=0403:6010 SER=FT4Q1LJFA'), ('COM8', 'USB VID:PID=0403:6010 SER=FT4Q1LJFB')]
0.0
0.0
0.0
0.0
96787.81184093593
1201991.0268273412
1201197.0857713912
1201166.9350959768
1201445.4072856384
You will notice that it is 0.0 in the beginning. This is because I connected RX and TX after starting the program, resulting in a ramp-up of the received bytes. This is the "default" framing, meaning 8 data bits + 1 start bit + 1 stop bit = 10 bits per word, which explains why "only" 1.2 Mbytes per second are transmitted.
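Spelled out (a trivial check of the numbers above):
baud = 12_000_000       # as configured in the script above
bits_per_word = 10      # 1 start bit + 8 data bits + 1 stop bit
print(baud / bits_per_word)  # 1200000.0 bytes/s, matching the ~1201991 measured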
I'm developing a forwarding protocol using Contiki OS. The protocol runs on top of IEEE 802.15.4 TSCH mode. It requires adding a certain number of packets to the queue within a short period of time, and very often I get the following error:
[RLL]:Send to Parent 0 base timeslot: 40, currentTimeslot: 1, send timeslot: 45 at: asn-0.46c41d
TSCH: send packet to 255 with seqno 0, queue 0 1, len 8 120
[RLL]:Send to CS base timeslot: 40, currentTimeslot: 2, send timeslot: 50 at: asn-0.46c41e
TSCH-queue:! add packet failed: 0 #0x20003004 8 #0x0 #0x0
TSCH:! can't send packet to 255 with seqno 0, queue 1 1
While it adds the first packet, it can't add the second one. The queue is not full; I checked that.
The error simply says it's not possible to allocate memory for another packet, while there should be more than enough space.
Probably it's just a simple setting I overlooked, but I can't find it.
If anyone has a suggestion, please let me know.
Conrad
Twilio's Message resource has a "status" property that indicates whether an SMS message is "queued", "sending", "failed", etc. If a Message instance fails to deliver, one possible error message is "Queue overflow". In the Twilio documentation, the description for this error case is: "You tried to send too many messages too quickly and your message queue overflowed. Try sending your message again after waiting some time."
Is the queue referenced in error code 30001 an instance of this resource? https://www.twilio.com/docs/api/rest/queue
Or is the queue (in the case of a 30001 error code) something that Twilio maintains on their end? If Twilio does throttling behind the scenes (queueing SMS messages per sending phone number), what is the size of that queue? How much would we have to exceed the rate limit (per phone number) before the queue overflow referenced in error code 30001 occurs?
Emily, the message queue is not related to the Queue resource you linked to above; it is something maintained on our end.
Twilio can queue up to 4 hours' worth of SMS. For a long code number we send out 1 SMS per second, so if there are more than 14,400 messages in the queue, all the messages queued up after that will fail with the 30001 "queue overflow" error and will not be sent. This info is for long code numbers; the link above explains the processing for other scenarios.
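Spelled out as arithmetic (just the numbers above):
send_rate = 1                 # segments per second for a long code number
queue_hours = 4

capacity = queue_hours * 60 * 60 * send_rate
print(capacity)               # 14400 segments

# Example: a burst of 20,000 queued segments would overflow by
print(20_000 - capacity)      # 5600 -- those fail with error 30001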
A few suggestions to avoid the error:
- Keep messages to at most 160 characters if possible. If that's not possible, calculate how many SMS segments each message will use (if you are not sure, you can always send one test message and see how much you are charged for it).
- Based on the assumption that your messages are 160 characters, throttle the sending rate to 3,600 messages per hour (1 message/sec × 60 sec/min × 60 min/hr), as in the sketch below.
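A minimal throttling sketch with the Twilio Python helper library (credentials and phone numbers are placeholders; the fixed 1 message/sec pacing is the rate derived above):
import time
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

recipients = ["+15551230001", "+15551230002"]  # placeholder numbers

for to in recipients:
    client.messages.create(
        to=to,
        from_="+15557654321",   # your Twilio long code number
        body="hello",           # <= 160 GSM characters = 1 segment
    )
    time.sleep(1.0)  # stay at <= 1 segment/sec so the queue never grows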
Please let me know if you've got any other questions.
Each of the Twilio phone numbers (senders) has a separate queue in which 14,400 (4 hr × 60 min × 60 sec) message segments can be queued. 1 segment is sent per second.
What is a message segment?
A message segment is not a complete message but a part of one. An SMS is normally sent as message segments, and all segments of the same SMS are combined on the user's mobile to recreate the actual message.
Twilio message segment config:
- GSM encoding = 7 bits per character
- UCS-2 encoding = 16 bits per character
- Data header = 6 bytes per segment
Summary: the payload of a segment is 140 bytes (1120 bits). With GSM encoding each character takes 7 bits; with UCS-2 encoding each character takes 16 bits. In the case of multiple segments, 6 bytes per segment are used for data headers (responsible for combining all segments of the same SMS on the user's mobile).
Characters per message segment:
- GSM encoding, single segment = (140 bytes × 8 bits) / 7 bits = 160 characters
- UCS-2 encoding, single segment = (140 bytes × 8 bits) / 16 bits = 70 characters
- GSM encoding, multiple segments = ((140 bytes − 6 header bytes) × 8 bits) / 7 bits = 153 characters
- UCS-2 encoding, multiple segments = ((140 bytes − 6 header bytes) × 8 bits) / 16 bits = 67 characters
Based on what encoding is used for your message (check via the Twilio console), you can calculate how many segments each SMS will occupy in the queue.
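For example, a small segment-count calculator (plain Python; the per-segment constants are the ones derived above, and the GSM detection is a simplification that ignores the GSM extended character set):
def sms_segments(text: str) -> int:
    # Simplification: treat pure ASCII as GSM-encodable, anything else as UCS-2.
    is_gsm = all(ord(c) < 128 for c in text)
    single, multi = (160, 153) if is_gsm else (70, 67)
    if len(text) <= single:
        return 1
    return -(-len(text) // multi)  # ceiling division

print(sms_segments("hello"))    # 1
print(sms_segments("x" * 161))  # 2 (GSM, 153 characters per segment)
print(sms_segments("ü" * 71))   # 2 (UCS-2, 67 characters per segment)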
References:
https://support.twilio.com/hc/en-us/articles/115002943027-Understanding-Twilio-Rate-Limits-and-Message-Queues
https://www.twilio.com/blog/2017/03/what-the-heck-is-a-segment.html
I was told to ask this here:
10:53:04.042608 IP 172.17.2.12.42654 > 172.17.2.6.6000: Flags [FPU], seq 3891587770, win 1024, urg 0, length 0
10:53:04.045939 IP 172.17.2.6.6000 > 172.17.2.12.42654: Flags [R.], seq 0, ack 3891587770, win 0, length 0
This states that the flags set are FPU and R. What do these flags stand for, and what kind of exchange is this?
The flags are:
F - FIN, used to terminate an active TCP connection from one end.
P - PUSH, asks that any data the receiving end is buffering be sent to the receiving process.
U - URGENT, indicating that there is data referenced by the urgent "pointer."
R - RESET, indicating that a packet was received that was NOT part of an existing connection. (The "." in tcpdump's [R.] is the ACK flag, so the reply is RST+ACK.)
It looks like the first packet was manufactured, or possibly delayed. The argument for it being manufactured is the URG flag being set with no urgent data (the FIN+PSH+URG combination is, for instance, what nmap's "Xmas" scan sends). If it was delayed, it indicates the normal end of a connection between .12 and .6 on port 6000, along with a request that the last of any pending data sent across the wire be flushed to the service on .6.
.6 has clearly forgotten about this connection, if it ever existed: it is indicating that while it got the FIN packet, it believes the connection that the FIN refers to does not exist.
If .6 had a current matching connection, it would have replied with FIN-ACK instead of RST, acknowledging the termination of the connection.
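If you want to reproduce the exchange, here is a short sketch with scapy (my assumption, not something from the original capture; it needs root privileges):
from scapy.all import IP, TCP, sr1

# Craft a probe with FIN, PSH and URG set -- tcpdump's "[FPU]" above.
probe = IP(dst="172.17.2.6") / TCP(dport=6000, flags="FPU")

# A host with no matching connection answers RST; tcpdump shows that
# reply as "[R.]", i.e. RST plus ACK.
reply = sr1(probe, timeout=2, verbose=False)
if reply is not None and reply.haslayer(TCP):
    print("reply flags:", reply[TCP].flags)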
I'm trying to write a dissector for the Safari Remote Debug protocol which is based on bplists and have been reasonably successful (current code is here: https://github.com/andydavies/bplist-dissector).
I'm running into difficultly with reassembling packets though.
Normally the protocol sends a packet with 4 bytes containing the length of the next packet, then the packet with the bplist in it.
Unfortunately some packets from the iOS simulator don't follow this convention: the four bytes are either tacked onto the front of the bplist packet, or onto the end of the previous bplist packet, or the data is multiple bplists.
I've tried reassembling them using desegment_len and desegment_offset as follows:
function p_bplist.dissector(buf, pkt, root)
    -- length of data packet
    local dataPacketLength = tonumber(buf(0, 4):uint())
    local desiredPacketLength = dataPacketLength + 4
    -- if not enough data, indicate how much more we need
    if desiredPacketLength > buf:len() then
        pkt.desegment_len = dataPacketLength
        pkt.desegment_offset = 0
        return
    end
    -- have more than needed, so set offset for next dissection
    if buf:len() > desiredPacketLength then
        pkt.desegment_len = DESEGMENT_ONE_MORE_SEGMENT
        pkt.desegment_offset = desiredPacketLength
    end
    -- copy the data needed
    buffer = buf:range(4, dataPacketLength)
    ...
What I'm attempting to do here is always force the size bytes to be the first four bytes of a packet to be dissected, but it doesn't work: I still see a 4-byte packet followed by an x-byte packet.
I can think of other ways of managing the extra four bytes on the front, but the protocol contains a lookup table that's 32 bytes from the end of the packet, so I need a way of accurately splicing the packet into bplists.
Here's an example cap: http://www.cloudshark.org/captures/2a826ee6045b #338 is an example of a packet where the bplist size is at the start of the data and there are multiple plists in the data.
Am I doing this right (looking at other questions on SO and examples around the web, I seem to be) or is there a better way?
The TCP dissector packet-tcp.c has tcp_dissect_pdus(), which:
Loop for dissecting PDUs within a TCP stream; assumes that a PDU
consists of a fixed-length chunk of data that contains enough information
to determine the length of the PDU, followed by rest of the PDU.
There is no such function in the Lua API, but it is a good example of how to do it.
One more example. I used this a year ago for tests:
local slicer = Proto("slicer", "Slicer")

function slicer.dissector(tvb, pinfo, tree)
    local offset = pinfo.desegment_offset or 0
    local len = get_len() -- for tests I used a constant, but it can be taken from tvb
    while true do
        local nxtpdu = offset + len
        if nxtpdu > tvb:len() then
            pinfo.desegment_len = nxtpdu - tvb:len()
            pinfo.desegment_offset = offset
            return
        end
        tree:add(slicer, tvb(offset, len))
        offset = nxtpdu
        if nxtpdu == tvb:len() then
            return
        end
    end
end

local tcp_table = DissectorTable.get("tcp.port")
tcp_table:add(2506, slicer)