Can I read the bytes transmitted over the CAN-bus on the transmitting side? - can-bus

Sorry for my question (and my English too). I am a newbie to CAN bus and have a theoretical question: if I write data to a CAN socket, will I be able to read the same data back from that same socket? After all, the transmitted data appears not only on the other nodes of the CAN bus, but also on the node that transmitted it.

Thank you all, I understand. I wrote a small program in C, and it turned out that after successfully (this is important!) sending data to the socket corresponding to the CAN interface, you can immediately read back the same data that was transmitted to the CAN bus. This is presumably down to how the CAN driver is implemented on Linux (can4linux).
Moreover, to be able to read the sent data back, it matters that the CAN interface is not alone on the network. The transmitting node sends the ACK slot of the frame in the recessive state and listens to its own transmission, waiting for a receiving node to drive the ACK slot dominant, which confirms that the frame was received without error. When the transmitter sees the ACK bit in the dominant state (meaning the transmission succeeded), the driver places the transmitted frame into the input buffer, and it can be read back from our socket.
If the transmission fails (the ACK slot stays recessive), the CAN controller retransmits the frame in a loop. In the program this looks as though the write() call has completed, but an immediate attempt to read the data blocks inside read() if the socket is in blocking mode.
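For what it's worth, the read-back behaviour is easy to reproduce with the SocketCAN API that mainline Linux uses (shown here instead of can4linux, so this is a sketch of the same idea, not the exact setup from my test). With SocketCAN a socket only sees its own frames again if CAN_RAW_RECV_OWN_MSGS is enabled; "can0" is an assumed interface name and error checking is omitted:

/* Send a frame and read it back on the same socket (SocketCAN sketch). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");          /* assumed interface name */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    int own = 1;                            /* loop our own frames back to us */
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_RECV_OWN_MSGS, &own, sizeof(own));

    struct can_frame tx = { 0 };
    tx.can_id  = 0x123;
    tx.can_dlc = 2;
    tx.data[0] = 0xDE;
    tx.data[1] = 0xAD;
    write(s, &tx, sizeof(tx));

    /* Blocks until the frame has actually been ACKed on the bus. */
    struct can_frame rx;
    read(s, &rx, sizeof(rx));
    printf("read back id=0x%X dlc=%u\n", (unsigned)rx.can_id, (unsigned)rx.can_dlc);
    return 0;
}

If the interface is alone on the bus, the frame is never ACKed, so the final read() simply blocks, matching the behaviour described above.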

Related

How does error handling work in SCTP Sockets API Extensions?

I have been trying to implement a wrapper library for the Linux interface to SCTP sockets, and I am not sure how to integrate the asynchronous style of errors (where they are delivered via events). All the example code I have seen, if it deals with the errors at all, simply prints out the information related to the error when it is received; but inserting error-handling code there seems impractical, because by that point all of the context related to the original message that was sent has been lost, and only a 32-bit integer, sinfo_context, remains. It also seems that there is no way to directly tell when a given message has been acknowledged successfully by the remote peer, which would make it impossible to implement an approach that listens for errors after sending a message, because the context information for successfully-delivered messages could never be freed.
Is there a way to handle the errors related to a given sending operation as part of the call to a send function, or is there a different way to approach error handling for SCTP which does not lose the context of the error?
One solution I have considered is using the SCTP_SENDER_DRY notification to tell when packets have been sent; however, this requires sending only one packet at a time. Another idea is to use the peer's receiver window size together with the sinfo_cumtsn field of sctp_sndrcvinfo to calculate how much data has been acknowledged as fully received via the cumulative TSN, but there are a couple of disadvantages to this: first, it requires bookkeeping overhead to calculate the number of bytes received by the peer based on the cumulative TSN (especially if the peer's window size may change); second, it requires waiting until all earlier packets have been received before reporting success, which seems to defeat the purpose of SCTP's multistreaming; and third, it seems like it would not work for unordered packets.
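One more idea I am toying with: since sinfo_context survives into the error notifications, the application could assign those context values itself and use them as indices into a table of in-flight messages, so that an SCTP_SEND_FAILED notification can be mapped back to the message it refers to. A minimal sketch, assuming lksctp-tools; msg_table, TABLE_SIZE and pending_msg are hypothetical names, not part of the API:

#include <string.h>
#include <sys/socket.h>
#include <netinet/sctp.h>

#define TABLE_SIZE 65536 /* contexts are assumed to be assigned modulo this */

struct pending_msg {
    void *user_ctx; /* whatever must survive until the message is acked or fails */
};
static struct pending_msg msg_table[TABLE_SIZE];

static void subscribe_send_failures(int sd)
{
    struct sctp_event_subscribe ev;
    memset(&ev, 0, sizeof(ev));
    ev.sctp_send_failure_event = 1; /* deliver SCTP_SEND_FAILED notifications */
    setsockopt(sd, IPPROTO_SCTP, SCTP_EVENTS, &ev, sizeof(ev));
}

/* Call this for messages received with the MSG_NOTIFICATION flag set. */
static void handle_notification(const union sctp_notification *n)
{
    if (n->sn_header.sn_type == SCTP_SEND_FAILED) {
        const struct sctp_send_failed *ssf = &n->sn_send_failed;
        struct pending_msg *m =
            &msg_table[ssf->ssf_info.sinfo_context % TABLE_SIZE];
        /* m->user_ctx is the full context of the failed message;
           ssf->ssf_error holds the error cause. */
        (void)m;
    }
}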

Is it possible to set up a timeout for receiving data over USB in STM32 MCUs?

I'm wondering if it is possible to set up a timeout for receiving data over the USB interface in STM32 microcontrollers. Such an approach is possible, for example, for UART connections (please refer to AN3109, section 2, Receive DMA timeout).
I can't find anything similar for the USB interface. What's more, it is said that DMA for USB should be enabled only if really necessary, because data transfers must be aligned to 32-bit words.
You have a receive callback function (if you use the HAL) in your ...._if.c file. Copy the received chars to a buffer and implement the timeout there.
What you refer to in the UART case is either the DMA receive timeout, as you've said, or (when not using DMA) the IDLE interrupt. I'm not aware of such a thing coming "out of the box" for USB CDC - you'd have to implement this timeout yourself, which shouldn't be too hard. Have a timer (hardware or software) that you re-trigger every time you receive data. Set its period to the timeout value of your choice and do the protocol parsing after the timeout elapses.
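A minimal sketch of that software timeout, assuming the STM32CubeMX USB CDC template (CDC_Receive_FS, hUsbDeviceFS, HAL_GetTick()); the buffer sizes and the parse_frame() helper are made up for illustration:

#include "usbd_cdc_if.h"

#define RX_TIMEOUT_MS 50U

extern USBD_HandleTypeDef hUsbDeviceFS;                    /* CubeMX handle */
extern void parse_frame(const uint8_t *buf, uint32_t len); /* hypothetical */

static uint8_t  rx_buf[512];
static volatile uint32_t rx_len;
static volatile uint32_t last_rx_tick;
static volatile uint8_t  rx_active;

/* Called by the USB stack for every chunk of data received. */
static int8_t CDC_Receive_FS(uint8_t *Buf, uint32_t *Len)
{
    for (uint32_t i = 0; i < *Len && rx_len < sizeof(rx_buf); i++)
        rx_buf[rx_len++] = Buf[i];
    last_rx_tick = HAL_GetTick(); /* re-trigger the timeout */
    rx_active = 1;
    USBD_CDC_SetRxBuffer(&hUsbDeviceFS, Buf);
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);
    return USBD_OK;
}

/* Poll from the main loop: after RX_TIMEOUT_MS of silence, treat the
   buffered bytes as one complete frame and parse them. */
void cdc_poll_timeout(void)
{
    if (rx_active && (HAL_GetTick() - last_rx_tick) > RX_TIMEOUT_MS) {
        rx_active = 0;
        parse_frame(rx_buf, rx_len);
        rx_len = 0;
    }
}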
If I had to add anything: these kinds of problems (not knowing how many bytes to receive) are typically solved at the protocol level. Assuming a binary protocol, one way of achieving this is to have frame start and end bytes that never occur in the data (and if they do, you escape them), in which case you receive everything after the "start byte" until you receive the "end byte". Yet another way is to have a "start byte" plus a field indicating how many bytes follow. All of it should of course be checksummed in some way.
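For illustration, a sketch of the start/end-byte variant; the byte values (0x7E flag, 0x7D escape, XOR with 0x20) follow the common HDLC-style convention and are my own choice, and the checksum step is omitted:

#include <stdint.h>
#include <stddef.h>

#define FLAG_BYTE   0x7E /* marks both the start and the end of a frame */
#define ESCAPE_BYTE 0x7D
#define ESCAPE_XOR  0x20

/* Feed received bytes one at a time. Returns the decoded frame length
   once a complete frame has accumulated in out, 0 otherwise. */
size_t frame_decode(uint8_t byte, uint8_t *out, size_t cap)
{
    static size_t len;
    static int in_frame, escaped;

    if (byte == FLAG_BYTE) {
        size_t done = (in_frame && !escaped) ? len : 0;
        in_frame = 1;   /* this flag also starts the next frame */
        escaped = 0;
        len = 0;
        return done;    /* non-zero: the previous frame is complete */
    }
    if (!in_frame)
        return 0;
    if (byte == ESCAPE_BYTE) {
        escaped = 1;
        return 0;
    }
    if (escaped) {
        byte ^= ESCAPE_XOR; /* restore the escaped data byte */
        escaped = 0;
    }
    if (len < cap)
        out[len++] = byte;
    return 0;
}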
Having said that, if you have the option to change the protocol, you really should do so. Relying on timing in your communication, especially at such a low level, only invites problems and headaches in the long run. You introduce tight coupling between your protocol layer and your interface layer. This is going to backfire on you every time you decide to use a different interface, as you'll have to re-invent the same thing again. Not to mention how painful it's going to be when you decide to move to TCP/IP, with all its greatness: network jitter, dropped packets, etc.

Zero byte receives: purpose clarification

I am learning server development with IO Completion Ports. My book, "Network Programming for Microsoft Windows - Second Edition", states the following:
With every overlapped send or receive operation, it is probable that the data buffers submitted will be locked. When memory is locked, it cannot be paged out of physical memory. The operating system imposes a limit on the amount of memory that may be locked. When this limit is reached, overlapped operations will fail with the WSAENOBUFS error. If a server posts many overlapped receives on each connection, this limit will be reached as the number of connections grow. If a server anticipates handling a very high number of concurrent clients, the server can post a single zero byte receive on each connection. Because there is no buffer associated with the receive operation, no memory needs to be locked. With this approach, the per-socket receive buffer should be left intact because once the zero-byte receive operation completes, the server can simply perform a non-blocking receive to retrieve all the data buffered in the socket's receive buffer. There is no more data pending when the non-blocking receive fails with WSAEWOULDBLOCK.
Now, I'm trying to understand this paragraph; I think I've got it but want to make sure please.
I understand about memory being locked if I make multiple WSARecv() calls with large buffers. But I am not entirely sure how a zero-byte buffer prevents this.
I am thinking it is this (and would like confirmation please):
If I have n connections and I post 50 WSARecv() calls with a 1KB buffer on each connection, that is n * 50KB of total memory locked. All of that memory is locked regardless of whether it is actually being used (i.e. whether anything is being copied into it from the TCP buffers). Hence if I keep adding connections, I will keep locking more memory that may or may not ever be used. Thus I can run out, with the WSAENOBUFS error.
If, however, I post a zero-byte receive on each connection, a completion packet will be generated on that connection only when there is data available for reading. (That is my first assumption; is that correct?)
Now, when I know there is some data, I can post a WSARecv() with a buffer of 1KB (or however much), or indeed loop repeatedly to read it all as my book suggests, knowing that the buffer will be filled immediately and hence will not remain unused and locked. (That is my second assumption; is that correct?)
Question 1
Thus, if my two assumptions are correct, then I have understood my book :) This means that my server could, in theory, post a zero-byte receive when a new connection is established; then, when a completion packet is generated, read all of the data until there is no more; then post another zero-byte receive. Is that correct?
Question 2
However, isn't there still a risk that if I receive completion packets for lots of my zero-byte receive posts at once, and I then go on to make multiple WSARecv() calls, some of them will still fail with WSAENOBUFS?
Hopefully someone can clarify these two assumptions and two questions for me.
OK, I've done some research into this, along with experimentation, and have found the following:
Assumptions:
1) Assumption 1 is correct.
2) I believe assumption 2 is correct as well.
Questions:
1) I have tested this and it seems to work.
2) This remains a possibility, I guess, but it is much less likely than if I posted receives with a non-zero buffer.
Note that WSAENOBUFS can still be raised when sending too fast.
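To make the pattern concrete, here is a minimal C sketch of the zero-byte-receive loop (my own illustration, not from the book): the IOCP association, error handling, and per-connection bookkeeping are omitted, and the socket is assumed to have been put into non-blocking mode with ioctlsocket(s, FIONBIO, ...) so that recv() can fail with WSAEWOULDBLOCK:

#include <winsock2.h>

/* Post an overlapped receive with a zero-length buffer: it completes when
   data arrives, but locks no memory while pending. */
void post_zero_byte_recv(SOCKET s, WSAOVERLAPPED *ov)
{
    WSABUF buf = { 0, NULL };
    DWORD flags = 0;
    WSARecv(s, &buf, 1, NULL, &flags, ov, NULL);
}

/* On completion of the zero-byte receive, drain the socket's buffer with
   non-blocking recv() until WSAEWOULDBLOCK, then re-post. */
void on_zero_byte_completion(SOCKET s, WSAOVERLAPPED *ov)
{
    char data[4096];
    for (;;) {
        int n = recv(s, data, sizeof(data), 0);
        if (n > 0) {
            /* process n bytes here */
        } else if (n == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK) {
            break; /* no more data pending */
        } else {
            break; /* connection closed or a real error */
        }
    }
    post_zero_byte_recv(s, ov);
}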

CoreBluetooth: terminate a large data transfer without terminating the Bluetooth connection

I am developing an app that needs to send large amounts of data between an iPhone and a device (it takes approximately 10 seconds to send the data), but I want to be able to cancel the transfer at any time. I am aware that I can simply drop the connection to the device at any time with
centralManager.cancelPeripheral(peripheral)
but that is not what I am actually looking for, as I want to stop sending data without terminating the Bluetooth connection.
Is there a way to terminate the data transmission without dropping the connection to the device?
The code that sends the data is as follows:
// Sends packets 0x01 through 0x14 (20 in total). Note that outbuffer is
// 16 bytes but only the first 7 are included in each write.
for hex: UInt8 in 0x01...0x14 {
    var outbuffer = [UInt8](count: 16, repeatedValue: 0x00)
    outbuffer[0] = 0x68   // fixed header byte
    outbuffer[1] = hex    // packet sequence number
    let data = NSData(bytes: outbuffer, length: 7)
    print("data \(data)")
    connectingPeripheral.writeValue(data, forCharacteristic: connectingCharacteristicPassword, type: CBCharacteristicWriteType.WithResponse)
}
I figured I would go ahead and give my input on this. There is no way in CoreBluetooth to stop the transmission of a data packet that has already been written to the output buffer, simply because such a feature is not needed. The only reason you are having this issue is, in my opinion, that your methodology is wrong: do not put everything in a for-loop and push all the data at once. Instead, implement some sort of flow-control mechanism.
In Bluetooth LE there are two main ways of writing data to a peripheral: "write commands" and "write requests". You can look at it a bit like TCP vs UDP. With write commands you are just sending data without knowing whether or not the data was received by the application on the other side of the Bluetooth link. With write requests you are sending data and asking to be notified (ack'ed) that the data was in fact received. In CoreBluetooth these two types are called CBCharacteristicWriteWithResponse and CBCharacteristicWriteWithoutResponse. When writing with CBCharacteristicWriteWithResponse (as your code does), you get a peripheral:didWriteValueForCharacteristic:error: callback verifying that the data has arrived at the other side. At that point you can choose to send the next packet, or, if for some reason you want to stop sending data, simply stop. Done this way, you are in control of the whole flow instead of pushing everything through a for-loop.
But wait, why would you ever want to use write commands then? Well, since write requests require the receiver to respond to the sender, data must flow in both directions, and because the ack is sent by the application layer, it has to wait for the next connection interval. This means that when sending large amounts of data you can only send one packet every two connection intervals, which gives a very poor overall bit rate (with, say, a 30 ms connection interval and the typical 20-byte payload, that is 20 bytes per 60 ms, roughly 330 bytes per second).
With write commands, since they are not ack'ed, you can send as many packets as possible within one connection event window; in most cases you should be able to send about 10-20 packets per window. But be aware that if you send too many packets, you will fill the outgoing buffer and packets will be lost. So, something you can try is to send 9 packets with the WriteWithoutResponse type, followed by 1 packet with the WriteWithResponse type. Then wait for the peripheral:didWriteValueForCharacteristic:error: callback, in which you send 10 more packets the same way. This way you manage to send 10 packets every 2 connection intervals while still keeping control of the flow.
You can of course experiment with the ratio a bit, but remember that the buffer is shared between multiple applications on the iOS device, so you don't want to get too close to the limit.

Missing bytes on IdUDPServer.OnRead event in buffer array - Delphi XE3

I can't seem to find any information about this anywhere, but does the TIdUDPServer.OnUDPRead event pass everything that comes in to the AData array, or not?
According to Wireshark captures, I'm missing 42 bytes of data: while I should be getting 572 bytes on each read, the AData size is always 530, and it seems that the same bytes are always missing.
The device that sends the data is broadcasting it, and I can get everything I need except for 2 bytes, which seem to be among those that are missing.
Any hints on this one?
Edit:
I should mention that these are the very first 42 bytes; everything afterwards is received fine.
The OnUDPRead event passes everything the socket receives from the OS. UDP operates on messages; unlike TCP, a UDP read is an all-or-nothing operation: either a whole UDP message is read or an error occurs, there is no in-between.
If you are missing data, then either the OS is not providing it (for instance, because it belongs to the UDP and/or IP headers), or you are not reading the AData parameter correctly. If you think this is not the case, then you need to update your question to show your actual OnUDPRead handler code, an example Wireshark dump showing the data being captured from the network, and the data that makes it to your OnUDPRead handler.
Update: The OS does not provide access to the packet headers (unless you are using a RAW socket, which TIdUDPServer does not use, but that is a whole other topic of discussion). The AData parameter of the OnUDPRead event provides only the application-data portion of a packet, as that is all the OS provides; you cannot access the packet headers. Note that a 14-byte Ethernet header plus a 20-byte IPv4 header plus an 8-byte UDP header is exactly 42 bytes, which is why Wireshark reports a 572-byte frame while only the 530-byte payload reaches your handler.
That being said, you can at least get the packet's source IP:Port via the ABinding.PeerIP and ABinding.PeerPort properties of the OnUDPRead event. However, there is no way to retrieve the other packet header values (nor should you ever need them in most situations), unless you sniff the network yourself, such as with a pcap library.
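For completeness, a minimal libpcap sketch of that last option, in C for illustration (a Delphi application would go through a pcap binding); "eth0" is an assumed capture device:

#include <stdio.h>
#include <pcap.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    /* bytes[0..13]  : Ethernet header (14 bytes)
       bytes[14..33] : IPv4 header (20 bytes, without options)
       bytes[34..41] : UDP header (8 bytes)
       bytes[42..]   : application data, i.e. what AData contains */
    (void)user;
    (void)bytes;
    printf("captured %u of %u bytes\n", h->caplen, h->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap: %s\n", errbuf);
        return 1;
    }
    pcap_loop(p, -1, on_packet, NULL);
    pcap_close(p);
    return 0;
}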
