Corona SDK position prediction - lua

I am using Corona SDK to create a multiplayer game. Each device sends 15 messages per second containing the position of its character. This results in choppy movement, because some messages take slightly different times to be received and only 15 are sent per second in a 30 fps game. How could I take the difference in the position or rotation of the character from the previous frame to the current frame to predict the position if no data is received in the next frame? I am completely open to other solutions too! Thanks!

If the send rate is fixed and known to both parties (sender and receiver), then the receiver can assume it. In that case, dead reckoning is a well-established technique that is easy to apply. For example, if the sender sends data every 1/15th of a second, then the receiver can assume that, after the first packet has been received, any other packets will be 1/15th of a second apart. If nothing is received after 1/15th of a second, the receiver makes a guess as to what the data would have been at that point if it had arrived. Then, when the packet actually comes, it corrects the guess.
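As a rough illustration of that idea, a dead-reckoning receiver only needs the last known position, an estimated velocity, and the time since the last packet arrived. Below is a minimal sketch in plain C (not Corona/Lua; the names and the fixed 15 Hz rate are assumptions for illustration, not the only way to structure it):

/* Minimal dead-reckoning sketch. Names and the 15 Hz rate are illustrative. */
#include <stdio.h>

typedef struct { double x, y; } Vec2;

static Vec2   last_pos;    /* position from the most recent packet       */
static Vec2   velocity;    /* estimated velocity in units per second     */
static double last_time;   /* local time the most recent packet arrived  */

/* Call whenever a position packet arrives: snap to it and re-estimate velocity. */
void on_packet(Vec2 pos, double now) {
    double dt = now - last_time;
    if (dt > 0.0) {
        velocity.x = (pos.x - last_pos.x) / dt;
        velocity.y = (pos.y - last_pos.y) / dt;
    }
    last_pos = pos;
    last_time = now;
}

/* Call every rendered frame: extrapolate from the last packet when no new data exists. */
Vec2 predicted_position(double now) {
    double dt = now - last_time;
    Vec2 p = { last_pos.x + velocity.x * dt, last_pos.y + velocity.y * dt };
    return p;
}

int main(void) {
    on_packet((Vec2){ 0.0, 0.0 }, 0.0);
    on_packet((Vec2){ 1.0, 0.0 }, 1.0 / 15.0);            /* moving right at 15 units/s */
    Vec2 guess = predicted_position(2.0 / 15.0);          /* no packet yet for this frame */
    printf("predicted x=%.2f y=%.2f\n", guess.x, guess.y); /* ~2.00, 0.00 */
    return 0;
}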
The only things to guarantee are the order of packets and their delivery. If packets never drop and always arrive in order (i.e., TCP), you're laughing.
If order is guaranteed but some packets get dropped, then the receiver needs to know when a drop occurred. This can be handled with a counter that is part of each packet: if the receiver gets a packet with counter = X and the next packet has counter = X+2, it knows that a packet was dropped (because order is guaranteed), so the time delta between the two is 2/15th of a second.
If nothing is dropped but order is not guaranteed, the counter helps there too: the receiver is guaranteed that if X has arrived, X+1 will arrive eventually, so even if it gets X+2 first, it should save X+2 and wait until X+1 arrives.
Finally, if neither delivery nor order is guaranteed (the case of UDP packets on a WAN), the counter is once more sufficient, but the algorithm changes: if the receiver gets a packet with counter = X and then gets X+2, it could mean that X+1 has been dropped, or that it will arrive shortly, and there is no way to know which. So the receiver could wait a small amount (maybe another 1/15th of a second); if X+1 hasn't arrived by then, declare it dropped (so if it does eventually arrive, it will be ignored).
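To make that counter bookkeeping concrete, here is a hedged sketch in C; the single-slot buffer, the function names, and the 1/15 s grace period are illustrative assumptions only:

/* Counter bookkeeping for the unordered/unreliable (UDP) case above. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned int counter; double x, y; } Packet;

static unsigned int next_expected = 0;
static Packet       held;                 /* one buffered out-of-order packet */
static bool         holding = false;
static double       hold_since = 0.0;
#define GRACE (1.0 / 15.0)                /* extra wait before declaring a drop */

static void process(const Packet *p) {    /* stand-in for applying it to the game state */
    printf("apply packet %u\n", p->counter);
}

void on_udp_packet(const Packet *p, double now) {
    if (p->counter < next_expected) return;       /* late arrival of a packet we gave up on */
    if (p->counter == next_expected) {
        process(p);
        next_expected++;
        if (holding && held.counter == next_expected) {  /* gap filled: release the buffered one */
            process(&held);
            next_expected++;
            holding = false;
        }
        return;
    }
    /* counter > next_expected: a gap; hold this packet and wait briefly for the missing one */
    if (!holding) { held = *p; holding = true; hold_since = now; }
}

/* Call once per frame: if the missing packet never showed up, declare it dropped. */
void check_timeout(double now) {
    if (holding && now - hold_since >= GRACE) {
        process(&held);
        next_expected = held.counter + 1;
        holding = false;
    }
}

int main(void) {
    Packet a = { 0, 0, 0 }, c = { 2, 0, 0 }, b = { 1, 0, 0 };
    on_udp_packet(&a, 0.00);   /* counter 0 processed immediately        */
    on_udp_packet(&c, 0.07);   /* counter 2 arrives early: buffered      */
    on_udp_packet(&b, 0.09);   /* counter 1 fills the gap, 2 is released */
    check_timeout(0.20);       /* nothing left pending                   */
    return 0;
}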
The near-constant frequency of the sender is critical in all of the above.

Related

CAN BUS - ACK field

When a receiving node wants to acknowledge (ACK) the receipt of a frame, what exactly is it supposed to transmit?
The same frame just with a dominant bit for ACK?
No. Every CAN node's controller on the bus will usually listen to a message being transferred and will automatically check the frame for correctness (CRC).
It will usually also acknowledge the message by overwriting the recessive ACK = 1 (sent by the transmitter) with a dominant ACK = 0 during the message transfer, so no second message is needed to acknowledge the first one.
That's also why you can't have any CAN bus with just one node, because there is no one else to acknowledge and check its sent frames.
Of course, in some controllers these checks can be deactivated or ignored, but not in the common use case.

how to detect XMIT FIFO is full on a UART 16550 or higher

I have already read a lot of specs and code about the UART, but I cannot find any indication of how to detect, via the software interface, that the transmit FIFO is full. There is an interrupt when the FIFO is empty; then I can write at least N characters, where N is the FIFO size. But by the time I have written these N characters, a number of them have already been sent, so I can in fact write more than N characters, yet there is no FIFO-full interrupt. The spec says that when the FIFO is full, the TXREADY pin on the chip is inverted. Is there a way to find this out by software? The Line Status Register bit only says that the FIFO is not empty, which does not mean it is full...
Can anyone help? I want to write characters until the FIFO is full...
It looks to me as if they neglected this, but most people get by with the chip as it is. The usual way to use it is to get an interrupt, fill the FIFO (normally very fast compared to the serial data rate), and then return.
There is a situation where what you are asking for could be nice: transmitting in polled mode. Say you want to send 10 bytes and your polling shows the FIFO is not empty; you then have no way to know whether you can send them all or not. Either you wait there until it is empty, which sort of defeats the purpose of the FIFO, or you continue polling other things until you get back to checking for FIFO empty, and maybe that slows your overall transmission rate. I guess it is not a very common way to operate, so nobody worries about it.
The 16550D datasheet says the following:
The transmitter holding register interrupt (02) occurs when the XMIT FIFO is empty; it is cleared as soon as the transmitter holding register is written to (1 to 16 characters may be written to the XMIT FIFO while servicing this interrupt) or the IIR is read.
This means that when the Line Status Register (port base + 5) indicates the Transmitter Empty condition (bit 5), the transmit FIFO is completely empty and you may write up to 16 bytes to the transmitter holding register (port base + 0). It is important not to write more than 16 bytes between occurrences of the transmitter empty bit being set.
If you don't need to write 16 bytes at the point when you receive the IRQ (or see the transmitter empty bit set, if polling), you can either keep track of how many bytes you have written since the last transmitter-empty state, or just defer writing further bytes until the next transmitter-empty state.
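As a rough illustration of the approach just described, the sketch below writes at most 16 bytes each time the LSR reports the transmitter holding register empty. The outb/inb port-I/O helpers are assumed to be provided by the platform, and the queue handling is left to the caller:

/* Hedged sketch: fill the XMIT FIFO with at most 16 bytes per
 * transmitter-empty indication. `base` is the UART's I/O base address. */
#include <stdint.h>
#include <stddef.h>

#define LSR_OFFSET 5
#define LSR_THRE   0x20   /* bit 5: transmitter holding register empty */
#define FIFO_DEPTH 16

extern void    outb(uint16_t port, uint8_t val);  /* platform-specific port I/O */
extern uint8_t inb(uint16_t port);                /* platform-specific port I/O */

/* Returns how many bytes were handed to the FIFO; the caller advances
 * its own transmit queue by that amount. */
size_t uart_kick_tx(uint16_t base, const uint8_t *buf, size_t len) {
    if (!(inb(base + LSR_OFFSET) & LSR_THRE))
        return 0;                     /* FIFO not known to be empty; try again later */
    size_t n = (len < FIFO_DEPTH) ? len : FIFO_DEPTH;
    for (size_t i = 0; i < n; i++)
        outb(base + 0, buf[i]);       /* each write to the THR fills one FIFO slot */
    return n;
}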

Measuring end-to-end latency with the Paho sample pub/sub app

My aim is to measure MQTT device-to-device message latency (not throughput), and I'm looking for feedback on my code hacks. The setup is simple: just one device serving as two end-points (an old Linux PC with two terminal sessions, one running the subscriber and the other running the publisher sample app) and the default broker at tcp://m2m.eclipse.org:1883. I inserted time-capturing code fragments into the C-language publish/subscribe sample apps in the src/samples folder.
Below are the changes. Please provide feedback.
Changes to the subscribe sample app (MQTTAsync_subscribe.c)
Inserted the lines below at the top of the msgarrvd (message arrived) function
// print arrival time (gettimeofday is declared in <sys/time.h>)
struct timeval tv;
gettimeofday (&tv, NULL);
printf("Message arrived: %ld.%06ld\n", tv.tv_sec, tv.tv_usec);
Changes to the publish sample app (MQTTAsync_publish.c)
Inserted the lines below at the top of the onSend (callback) function
// print delivery-confirmation time (gettimeofday is declared in <sys/time.h>)
struct timeval tv;
gettimeofday (&tv, NULL);
printf("Message with token value %d delivery confirmed at %ld.%06ld\n",
response->token, tv.tv_sec, tv.tv_usec);
With these changes (after subtracting the time the message arrived at the subscriber from the time the delivery was confirmed at the publisher), I get a time anywhere between 0.5 and 1 millisecond.
Questions
Does this make sense as a rough benchmark on latency?
Is this the round-trip time?
Is the round-trip time in the right ball-park? Should be less? more?
Is it the one-way time?
Should I design the latency benchmark in a different way? I need a rough measurement (I'm comparing with XMPP).
I'm using the default QoS value (1). Should I change it?
The publisher takes a finite amount of time to connect (and disconnect). Should these be added?
The 200ms latency is high! Can you please upload your code here?
Does this make sense as a rough benchmark on latency?
-- Yes, it makes sense. But a better approach is to subtract the times automatically using the subscribed message itself, with both ends synchronized via NTP (a minimal sketch appears after this answer).
Is this the round-trip time? Is it the one-way time?
-- The message got published, you received the ACK at the publisher, and the same message was transferred to the subscribed client.
Is the round-trip time in the right ball-park? Should be less? more?
-- It should be less.
Should I design the latency benchmark in a different way? I need a rough measurement (I'm comparing with XMPP).
I'm using the default QoS value (1). Should I change it?
-- Try with QoS 0 (fire and forget).
The publisher takes a finite amount of time to connect (and disconnect). Should these be added?
-- Yes, it needs to be added, but this time should be very small.
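For what it's worth, here is a minimal sketch of the automated subtraction mentioned above: the publisher embeds its gettimeofday() reading in the payload and the subscriber subtracts it from its own clock on arrival (e.g., inside msgarrvd). The helper names are illustrative, and the result is only meaningful if both clocks are NTP-synchronized or both ends run on the same machine, as in this setup:

/* Hedged sketch of timestamp-in-payload latency measurement. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* Publisher side: write the current time into the outgoing payload. */
int make_timestamped_payload(char *buf, size_t buflen) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return snprintf(buf, buflen, "%ld.%06ld", (long)tv.tv_sec, (long)tv.tv_usec);
}

/* Subscriber side: parse the embedded send time, return one-way latency in seconds. */
double latency_from_payload(const char *payload) {
    struct timeval now;
    gettimeofday(&now, NULL);
    double sent    = atof(payload);                   /* "seconds.microseconds" */
    double arrived = now.tv_sec + now.tv_usec / 1e6;
    return arrived - sent;
}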

time measurements between CBPeripheralManager advertisment

I know that from the central side you can find the time between a peripheral's advertisements, but I would like to know if you can do this on the peripheral side.
For instance, let's say I have an algorithm that uses time as an input. Now let's look at two consecutive advertisements: the first advertisement will conveniently be from 0-20 ms and the second advertisement will be from 21-40 ms. My algorithm's results are:
result 1 = algorithm output for 0-20 ms or the time interval between the start advertising call and the actual time the first advertisement is sent.
result 2 = algorithm output for 21-40 ms or the time interval between the first advertisement and the second advertisement.
I know there is going to be some lag due to algorithm processing time and advertisement packet building time, but I would like to know if it's possible to record the algorithm's output as a string from 0-19 (or almost 20) ms until an advertisement is ready to be sent out at 20 ms with that string. Then the second advertisement would pick up the algorithm's output where the first advertisement left off, such as at 20 or 21 ms, until it's time for that second advertisement to be sent out at 40 ms. I cannot find this in Apple's documentation. The thing is, I know I will not be able to predict when the advertisement will be sent out. I basically just want to say:
-When advertisement 1 is sent out at t1
{
send out data gathered from 0 - t1;
}
-When advertisement 2 is sent out at t2
{
send out data gathered from t1 - t2;
}
Again, I know the time you end the interval cannot be the exact same time you send out the advertisement; I just want something very close to it. I also know I could do the algorithm on the central side using the time elapsed between advertisements received, but I actually want to use this with the peripheral's sensor readings as well, such as total magnetometer readings between advertisement intervals.
Thanks and sorry for the long explanation.

libpcap: what is the efficiency of pcap_dispatch or pcap_next

I use libpcap to capture a lot of packets, and then process/modify these packets and send them to another host.
First, I create a libpcap handle handle, set it to non-blocking mode, and use pcap_get_selectable_fd(handle) to get a corresponding file descriptor pcap_fd.
Then I add an event for this pcap_fd to a libevent loop (similar to select() or epoll()).
In order to avoid polling this file descriptor too frequently, each time there is a packet-arrival event I use pcap_dispatch to collect a buffer's worth of packets and put them into a queue packet_queue, and then call process_packet to process/modify/send each packet in the queue packet_queue:
pcap_dispatch(handle, -1, collect_pkt, (u_char *)packet_queue);
process_packet(packet_queue);
I use tcpdump to capture the packets that are sent by process_packet(packet_queue), and I notice:
at the very beginning, the interval between sent packets is small
after several packets have been sent, the interval becomes around 0.055 seconds
after 20 packets have been sent, the interval becomes 0.031 seconds and stays at 0.031 seconds
I carefully checked my source code and found no suspicious blocking calls or logic that would lead to such big intervals. So I wonder whether it is due to a problem with the function pcap_dispatch.
Are there any efficiency problems with pcap_dispatch, pcap_next, or even the libpcap file descriptor?
thanks!
On many platforms libpcap uses platform-specific implementations for faster packet capture, so YMMV. Generally they involve a shared buffer between the kernel and the application.
At the very beginning you have a time window between the moment packets start piling up on the RX buffer and the moment you start processing. The accumulation of these packets may cause the higher frequency here. This part is true regardless of implementation.
I haven't found a satisfying explanation for this. Maybe you fell behind and missed a few packets, so the time between sent packets becomes higher.
This is what you'd expect in normal operation, I think.
pcap_dispatch is pretty much as good as it gets, at least in libpcap. pcap_next, on the other hand, incurs two penalties (at least on Linux, but I think it does on other mainstream platforms too): a syscall per packet (libpcap calls poll for error checking, even in non-blocking mode) and a copy (libpcap releases the "slot" in the shared buffer ASAP, so it can't just return that pointer). An implementation detail is that, on Linux, pcap_next just calls pcap_dispatch for one packet with a copy callback.
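For comparison, here is a hedged sketch of the batched pattern discussed in this thread: wait on the selectable descriptor, then drain the buffer with a single pcap_dispatch() call rather than per-packet pcap_next() calls. The device name "eth0", the snaplen, and the callback body are illustrative assumptions:

/* Event-driven capture with one pcap_dispatch() per wakeup. Build with -lpcap. */
#include <pcap.h>
#include <stdio.h>
#include <sys/select.h>

static void collect_pkt(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *bytes) {
    (void)user; (void)bytes;
    printf("captured %u bytes\n", h->caplen);   /* queue/process the packet here */
}

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!handle) { fprintf(stderr, "%s\n", errbuf); return 1; }
    pcap_setnonblock(handle, 1, errbuf);

    int fd = pcap_get_selectable_fd(handle);
    if (fd < 0) { fprintf(stderr, "no selectable fd on this platform\n"); return 1; }

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;
        /* Drain everything currently buffered in one call, instead of paying
         * a syscall (and a copy) per packet as pcap_next does. */
        pcap_dispatch(handle, -1, collect_pkt, NULL);
    }
    pcap_close(handle);
    return 0;
}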
