What is the advantage of a CAN Rx FIFO over plain message buffers?

I am very interested in AUTOSAR.
I have been studying CAN message buffers and the CAN Rx FIFO.
I understand that a CAN Rx FIFO is a collection of message boxes.
I have a question.
When using CAN message buffers, I know that messages whose CAN IDs fall within the configured range are first selected by message filtering.
If a FIFO is just a collection of message buffers, I think the only advantage is memory.
Is that all? I'm really curious about the reason to use a CAN Rx FIFO.

This depends a lot on the specific CAN controller hardware used. But in general, yes, message ID filtering and acceptance masking (if used) are applied before a message ends up in the Rx FIFO.
The reason for having an Rx FIFO is simply to allow your program some time to do other things while messages keep arriving. When you inspect the FIFO, it is often best to keep reading until you've read all the messages and emptied it.
More modern/advanced CAN controllers use something called "mailboxes", where specific CAN identifiers you are interested in end up in their own dedicated "mailbox" message buffer. The usual setup is to have a dedicated mailbox for every high-priority message you expect, and to leave the Rx FIFO for low-priority traffic and/or messages you aren't otherwise interested in.
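In code, the difference roughly looks like this. Below is a minimal C sketch of draining an Rx FIFO from an interrupt or periodic task; the HAL functions (can_rx_fifo_level(), can_rx_fifo_read()) and the frame layout are hypothetical stand-ins, since the real interface depends entirely on your CAN controller and driver stack:

```c
/* Minimal sketch of draining an Rx FIFO, using hypothetical HAL names. */
#include <stdint.h>

typedef struct {
    uint32_t id;       /* CAN identifier (already filtered by hardware) */
    uint8_t  dlc;      /* data length code */
    uint8_t  data[8];
} can_frame_t;

/* Hypothetical hardware-access functions. */
extern uint32_t can_rx_fifo_level(void);              /* frames currently queued */
extern void     can_rx_fifo_read(can_frame_t *frame); /* pop the oldest frame    */

/* Called from the Rx interrupt or a periodic task: empty the FIFO in one go,
 * so the hardware keeps buffering for you between calls. */
void can_drain_rx_fifo(void (*handle)(const can_frame_t *frame))
{
    can_frame_t frame;
    while (can_rx_fifo_level() > 0U) {
        can_rx_fifo_read(&frame);
        handle(&frame);
    }
}
```

A dedicated mailbox, by contrast, is simply read (or delivered via its own interrupt) whenever that one identifier arrives; no draining loop is needed.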

Related

Bluetooth Low Energy data transmission on iOS

I'm currently working on a project which uses Bluetooth Low Energy. I've implemented most of the communication protocol, but I've started to have concerns that I don't actually know how the data transmission works, and whether the solution I implemented will behave the same way with all devices.
My main concern is: what chunk of data have I received when I get a notification from peripheral(_:didUpdateValueFor:error:)? Is it only as big as the negotiated MTU size? Or does iOS receive information about the chunk size and wait until it has received all of it before triggering peripheral(_:didUpdateValueFor:error:)?
When a peripheral sends chunks of, say, 100 bytes each, can I assume that I will always get 100 bytes in a single notification? Or could it be the last 50 bytes of the previous chunk and the first 50 bytes of the next one? That would be quite tricky, and hard to detect where the beginning of my frame is.
I tried to find more information in Apple's documentation, but there is nothing about it.
My guess is that I always receive a single state of the characteristic. That would mean the chunking depends on the implementation on the peripheral side. But what if the characteristic is bigger than the MTU size?
First, keep in mind that sending streaming data over a characteristic is not what characteristics are designed for. The point of a characteristic is to represent some small (~20 byte) piece of information like the current battery level, the device name, or the current heartbeat. The idea is that a characteristic changes only when the underlying value changes. It was never designed to be a serial protocol. So your default assumption should be that it's up to you to manage everything about that.
You should not write more data to a characteristic than the value you get from maximumWriteValueLength(for:). Chunking is your job.
Each individual value you write will appear to the receiver atomically. Remember, these are intended to be individual values, not chunks out of a larger data stream, so it would make no sense to overlap values from the same characteristic. "Atomically" means it all arrives or none of it. So if your MTU can handle 100 bytes, and you write 100 bytes, the other side will receive 100 bytes or nothing.
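If it helps, here is the chunking idea as a short sketch, written in C purely for illustration (on iOS you would do the equivalent in Swift and hand each chunk to writeValue(_:for:type:)); the send_chunk callback is a hypothetical stand-in for the actual BLE write:

```c
/* Illustration only: split a payload into MTU-sized chunks.
 * send_chunk() is a hypothetical callback standing in for the actual BLE write. */
#include <stddef.h>
#include <stdint.h>

typedef void (*send_chunk_fn)(const uint8_t *chunk, size_t len);

void send_in_chunks(const uint8_t *payload, size_t total,
                    size_t mtu,              /* e.g. from maximumWriteValueLength(for:) */
                    send_chunk_fn send_chunk)
{
    size_t offset = 0;
    while (offset < total) {
        size_t len = total - offset;
        if (len > mtu) {
            len = mtu;
        }
        send_chunk(payload + offset, len);   /* each write arrives whole or not at all */
        offset += len;
    }
}
```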
That said, there is very little error detection in BLE, and you absolutely can drop packets. It's up to you to verify that the data arrived correctly.
If you're able to target iOS 11+, do look at L2CAP channels, which are designed for serial protocols, rather than using GATT.
If you can't do that, I recommend watching WWDC 2013 Session 703, which covers this use case in detail. (I am having trouble finding a link to it anymore, however.)

RDMA memory buffer

I know that RDMA requires both the sender and the receiver to register their memory before a data transfer. I am wondering whether the registered memory on the sender and the receiver has to be the same or not. If it does, I think RDMA wastes a lot of memory, since both sides are basically storing identical data. Is there any way to reduce this problem?
After a network transfer, both the sender and the receiver contain copies of the same information. However, depending on the application and communication pattern, the buffers on both sides can be reused. For example, the initiator of a remote read operation can reuse the same buffers for the results of the next read once it is done with the previous results.
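As a rough illustration, here is a libibverbs-style C sketch of registering one local buffer and reusing it (and its memory region) across repeated RDMA reads. Connection setup, completion-queue polling, and error handling are omitted, and the exact flow naturally depends on your application:

```c
/* Sketch: reuse one registered buffer for repeated RDMA reads (libibverbs).
 * Assumes the protection domain (pd), queue pair (qp), remote address and
 * rkey have already been obtained via your usual connection setup. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdlib.h>

#define BUF_SIZE 4096

int read_repeatedly(struct ibv_pd *pd, struct ibv_qp *qp,
                    uint64_t remote_addr, uint32_t rkey, int iterations)
{
    /* Register the local buffer once... */
    void *buf = malloc(BUF_SIZE);
    if (!buf)
        return -1;
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE, IBV_ACCESS_LOCAL_WRITE);
    if (!mr) {
        free(buf);
        return -1;
    }

    for (int i = 0; i < iterations; i++) {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = BUF_SIZE,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .opcode     = IBV_WR_RDMA_READ,
            .sg_list    = &sge,
            .num_sge    = 1,
            .send_flags = IBV_SEND_SIGNALED,
        };
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        struct ibv_send_wr *bad_wr = NULL;
        if (ibv_post_send(qp, &wr, &bad_wr))
            break;

        /* ...poll the completion queue, consume the data in buf,
         * then reuse the same buffer (and MR) for the next read. */
    }

    ibv_dereg_mr(mr);
    free(buf);
    return 0;
}
```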

LAN Driver Interruptions

I need to know how the computer handles interrupts from a Local Area Network input/output processor. I have been looking for a while but can't seem to find anything; I came across some RJ-45 port information, but not much of what I specifically need. If someone has some information on how the CPU interrupts a process to call the pointer (the interrupt vector) and therefore the driver, plus how this process works, it would be much appreciated.
Thanks
Typically, the driver for the LAN card configures the card to issue an interrupt when the receive buffer gets close to full or the send buffer gets close to empty. These buffers typically live in system memory, and the network hardware uses DMA to pull packets to transmit from, and to store received packets into, system memory.
When the interrupt triggers, some process on some core is interrupted and the network code begins executing. If it's a send interrupt and there are more packets to send, more packets are attached to the send buffer. If it's a receive interrupt, typically more packet buffers are attached to the receive buffer. The driver then typically arranges for a "bottom half" to be dispatched to handle whatever other work needs to be done (such as processing the received packets), and the interrupt completes.
There's a ton of possible variation based upon many factors, but this is the basic idea.
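As a very rough sketch of that basic idea, a simplified interrupt handler might look something like the C below; all of the nic_* helpers are hypothetical, and a real driver (a Linux NAPI driver, for example) is considerably more involved:

```c
/* Highly simplified sketch of a NIC interrupt handler, with hypothetical
 * register-access helpers. */
#include <stdint.h>

#define IRQ_RX_DONE  (1u << 0)   /* receive buffer has (nearly) filled */
#define IRQ_TX_DONE  (1u << 1)   /* send buffer has (nearly) drained   */

extern uint32_t nic_read_status(void);          /* read & clear IRQ causes  */
extern void     nic_refill_rx_ring(void);       /* attach fresh rx buffers  */
extern void     nic_queue_pending_tx(void);     /* attach more tx packets   */
extern void     schedule_bottom_half(void);     /* defer packet processing  */

void nic_interrupt_handler(void)
{
    uint32_t status = nic_read_status();

    if (status & IRQ_RX_DONE) {
        /* Hardware DMA'd received frames into system memory; give it
         * fresh buffers so it can keep receiving while we process. */
        nic_refill_rx_ring();
    }
    if (status & IRQ_TX_DONE) {
        /* Transmit ring has room again; queue any packets waiting to go. */
        nic_queue_pending_tx();
    }

    /* The heavy lifting (protocol processing, waking sockets) happens
     * outside interrupt context, in the "bottom half". */
    schedule_bottom_half();
}
```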

Is there a UDP-based protocol that offers more robust sending of large data elements without datagram reliability?

On one end, you have TCP, which guarantees that packets arrive and that they arrive in order. It's also designed for the commodity Internet, with congestion control algorithms that "play nice" in traffic. On the other end of the spectrum, you have UDP, which guarantees neither arrival nor ordering of packets, but lets you send data to a receiver with minimal overhead. Somewhere in the middle, you have reliable UDP-based protocols, such as UDT, that offer customized congestion control algorithms and reliability, but with greater speed and flexibility.
However, what I'm looking for is the capability to send large chunks of data over UDP (greater than the 64k datagram size of UDP), but without a concern for reliability of each individual datagram. The idea is that the large data is broken down into datagrams of a specified size (<= 64,000 bytes), probably with some header data stuck on the front and sent over the network. On the receiving side, these datagrams are read in and stored. If a datagram doesn't arrive, all of the datagrams associated with that transfer are simply thrown out by the client.
Most of the "reliable UDP" implementations try to maintain the reliability of each datagram, but I'm only interested in the whole; if I don't get the whole thing, it doesn't matter - throw it all away and wait for the next one. I'd have to dig deeper, but it might be possible with custom congestion control algorithms in UDT. Are there, however, any existing protocols that take this approach?
You could try ENet. While it's not specifically aimed at what you're trying to do, it does have the concept of 'fragmented data blocks', whereby you send data larger than its MTU and it goes out as a sequence of datagrams of its MTU, with header details that relate each part of the sequence to the rest. The version I'm using only supports 'reliable' fragments (that is, the ENet reliability layer will kick in to resend missing fragments), but I seem to remember seeing discussion on the mailing list about unreliable fragments, which would likely do exactly what you want; i.e. deliver the whole payload if it all arrives and throw away the pieces if it doesn't.
See http://enet.bespin.org/
Alternatively take a look at the answers to this question: What do you use when you need reliable UDP?
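If you do end up rolling this yourself, the scheme described in the question is not much code. Here is a C sketch of the sending side, with a purely illustrative header layout (transfer id, fragment index, fragment count); byte-order conversion, timeouts, and the receive-side bookkeeping are left out:

```c
/* Sketch: break a large payload into UDP datagrams, each carrying a small
 * header so the receiver can tell which transfer a fragment belongs to.
 * Fields are copied in host byte order for brevity; a real protocol would
 * use htonl()/htons(). */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define FRAG_PAYLOAD 1400   /* keep each datagram under a typical MTU */

struct frag_header {
    uint32_t transfer_id;   /* identifies which large message this is */
    uint16_t index;         /* fragment number within the transfer    */
    uint16_t count;         /* total fragments in the transfer        */
};

int send_large(int sock, const struct sockaddr *dst, socklen_t dst_len,
               uint32_t transfer_id, const uint8_t *data, size_t total)
{
    /* Assumes the payload fits in 65535 fragments. */
    uint16_t count = (uint16_t)((total + FRAG_PAYLOAD - 1) / FRAG_PAYLOAD);
    uint8_t packet[sizeof(struct frag_header) + FRAG_PAYLOAD];

    for (uint16_t i = 0; i < count; i++) {
        size_t offset = (size_t)i * FRAG_PAYLOAD;
        size_t len = total - offset;
        if (len > FRAG_PAYLOAD)
            len = FRAG_PAYLOAD;

        struct frag_header hdr = { transfer_id, i, count };
        memcpy(packet, &hdr, sizeof(hdr));
        memcpy(packet + sizeof(hdr), data + offset, len);

        if (sendto(sock, packet, sizeof(hdr) + len, 0, dst, dst_len) < 0)
            return -1;
    }
    return 0;
}
```

On the receive side you would collect fragments keyed by transfer id and only deliver the reassembled payload once all `count` fragments have arrived, discarding any partial transfer after a timeout.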

Is UDP suitable for transmitting discrete events from iOS

I'm sending messages upon detection of both discrete and continuous gestures. For continuous gestures, UDP should be fine: even if a couple of packets are lost, there are so many change events that it shouldn't matter.
I'm wondering about discrete events though, for example tap or swipe. Since there would only be a single packet sent, what is the risk that it doesn't arrive, and the application at the other end isn't notified of the gesture?
I understand that TCP guarantees delivery, but I'm thinking it might be too much overhead for the high frequency of messages generated from continuous gestures.
If your only concern with TCP is the extra overhead, then I wouldn't worry too much. Certainly, TCP has more overhead than UDP. However, the overhead isn't that much, especially for the modest amount of data you are likely to be sending. Some quick back-of-the-envelope calculations:
Assume you want to send status information every millisecond. (Likely more often than you really need to.)
Assume your individual messages can easily fit within 50 bytes / each. (Likely larger than you really need.)
Total bandwidth: 50 bytes/ms = 400 bits/ms = 400 kbps
Even with these larger-than-necessary messages, and faster-than-needed updates, your total bandwidth is only around 5% of a slowish 802.11b wireless network. The extra overhead of TCP isn't likely to make a big difference here.
Personally, I tend to stick with TCP unless I have a strong reason not to. Sure, you could save some extra bits by using UDP, but to me, having reliable delivery (including correctly ordered, non-duplicated data) is worth the extra overhead. One less thing to worry about.
EDIT: TCP does have some other drawbacks. In particular, it can take a bit more programming effort to create the initial connection, and to parse individual messages from the byte stream. UDP can certainly make these tasks easier. However, you didn't list programming complexity as one of your criteria, so I focused on your overhead questions instead.
LATENCY: As noted in comments below, latency is a critical factor. In this case, UDP has some significant advantages: If a packet is dropped, TCP will wait for that packet to be retransmitted before sending others. The benefit of this approach is that data are guaranteed to arrive in the original order. (A big plus when messages must be processed sequentially.) The drawback, of course, is that there could be a significant delay for new data.
UDP, on the other hand, will continue sending subsequent packets even if one is dropped. The good news is that UDP reduces the latency of the remaining packets. The bad news is that you must add some sort of "retry" in case discrete events are lost; if you do so, you now have cases of gestures arriving out of order, perhaps significantly so. Can your app handle a case where a "move then click" gets changed to a "click then move"? If you choose to go the UDP route, you'll need to carefully think through all these cases to make sure they won't cause (perhaps subtle) problems in your app.
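For what it's worth, one common way to keep UDP gesture events sane is to stamp each one with a sequence number so the receiver can detect duplicates and out-of-order arrivals. A minimal C sketch, with purely illustrative names and layout:

```c
/* Sketch: sequence-stamped gesture events over UDP. The receiver simply
 * discards anything that arrives out of order or duplicated; wraparound
 * handling is omitted for brevity. */
#include <stdbool.h>
#include <stdint.h>

struct gesture_event {
    uint32_t sequence;   /* monotonically increasing, set by the sender */
    uint8_t  kind;       /* e.g. 0 = move, 1 = tap, 2 = swipe           */
    int16_t  x, y;
};

static uint32_t last_seen = 0;

/* Returns true if the event should be processed, false if it is stale. */
bool accept_event(const struct gesture_event *ev)
{
    if (ev->sequence <= last_seen)
        return false;   /* duplicate, or it arrived after a newer event */
    last_seen = ev->sequence;
    return true;
}
```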

Resources