Sometimes when I try to send packets continuously (I am using the send() API), I receive an EAGAIN error. I am not sure what I should do then. I have these questions:
1) Can I re-send again? If yes, after how much time should I try again? Is there any particular strategy to be followed?
2) Is the send buffer being full the only reason for this error?
3) Can someone please give me a better idea, or some code, for how to handle such a scenario?
Thanks.
Sambit.
From send(2): "EAGAIN -- The socket is marked non-blocking and the requested operation would block." And also: "When the message does not fit into the send buffer of the socket, send() normally blocks, unless the socket has been placed in non-blocking I/O mode. In non-blocking mode it would return EAGAIN in this case. The select(2) call may be used to determine when it is possible to send more data."
This thread has a simple example of using select() to deal with EAGAIN, and is followed by significant discussion about what sorts of surprises lurk beneath the surface.
EAGAIN is usually returned when there is no outbound buffer space left. How long to wait depends on the speed of the underlying connection. The normal way is to wait until select() or poll() tells you that the socket is available for writing. If on Linux, take a look at the select_tut(2) manpage, and of course the send(2) manpage.
You could change to blocking operation (which is the default) if you want the call to wait until there is space available. Or you could call select(2) to wait until the socket is writeable and then try again.
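For illustration, here is a minimal sketch of that second approach, assuming a non-blocking stream socket fd (send_all() is just a made-up helper name):

```c
#include <errno.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send all `len` bytes of `buf`, waiting with select() whenever the
 * socket's send buffer is full. Assumes `fd` is non-blocking.
 * Returns 0 on success, -1 on error. */
static int send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;

    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n >= 0) {
            sent += (size_t)n;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /* No buffer space left: sleep until the socket is writable. */
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0 && errno != EINTR)
                return -1;
        } else if (errno != EINTR) {
            return -1; /* real error, e.g. EPIPE or ECONNRESET */
        }
    }
    return 0;
}
```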
There is one other important consideration. If you are sending UDP packets, then keep in mind that there is no guarantee of congestion control, and if you're sending packets over the Internet you will almost certainly get packet loss if you just try sending UDP packets as fast as possible (this doesn't necessarily apply to other datagram sockets such as Unix sockets).
I have been trying to implement a wrapper library for the Linux interface to SCTP sockets, and I am not sure how to integrate the asynchronous style of errors (where they are delivered via events). All example code I have seen, if it deals with the errors at all, simply prints out the information related to the error when it is received, but inserting error-handling code there seems like it would be ineffective, because by that point all of the context related to the original message which was sent has been lost and only a 32-bit integer sinfo_context remains. It also seems that there is no way to directly tell when a given message has been acknowledged successfully by the remote peer, which would make it impossible to implement an approach which listens for errors after sending a message, because the context information for successfully-delivered messages could never be freed.
Is there a way to handle the errors related to a given sending operation as part of the call to a send function, or is there a different way to approach error handling for SCTP which does not lose the context of the error?
One solution I have considered is using the SCTP_SENDER_DRY notification to tell when packets have been sent; however, this requires sending only one packet at a time. Another idea is to use the peer's receiver window size together with the sinfo_cumtsn field of sctp_sndrcvinfo to calculate how much data has been acknowledged as fully received using the cumulative TSN. However, there are a few disadvantages to this: first, it requires bookkeeping overhead to calculate the number of bytes received by the peer based on the cumulative TSN (especially if the peer's window size may change); second, it requires waiting until all earlier packets have been received before reporting success, which seems to defeat the purpose of SCTP's multistreaming; and third, it seems like it would not work for unordered packets.
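For reference, here is a rough sketch of the context-lookup approach implied above, assuming the Linux lksctp-tools API (<netinet/sctp.h>); pending_lookup() and pending_remove() are hypothetical helpers keyed by the 32-bit context passed at send time. Note it only shows how to recover the failure context; it does not solve the "when can I free on success" problem raised in the question.

```c
#include <string.h>
#include <sys/socket.h>
#include <netinet/sctp.h>   /* lksctp-tools */

/* Hypothetical table mapping a 32-bit context id to the application
 * state of a message that is still in flight. */
extern void *pending_lookup(uint32_t ctx);
extern void  pending_remove(uint32_t ctx);

/* Subscribe to send-failure events so the kernel delivers them in-band. */
static int enable_send_failed_events(int sd)
{
    struct sctp_event_subscribe ev;
    memset(&ev, 0, sizeof(ev));
    ev.sctp_data_io_event      = 1;  /* sctp_sndrcvinfo with each read */
    ev.sctp_send_failure_event = 1;  /* SCTP_SEND_FAILED notifications */
    return setsockopt(sd, IPPROTO_SCTP, SCTP_EVENTS, &ev, sizeof(ev));
}

/* Called when recvmsg() sets MSG_NOTIFICATION: recover the original
 * message's state from the sinfo_context echoed back in the event. */
static void handle_notification(const union sctp_notification *sn)
{
    if (sn->sn_header.sn_type == SCTP_SEND_FAILED) {
        const struct sctp_send_failed *ssf = &sn->sn_send_failed;
        void *msg_state = pending_lookup(ssf->ssf_info.sinfo_context);
        /* ... retry or report the failure using msg_state ... */
        pending_remove(ssf->ssf_info.sinfo_context);
    }
}
```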
I'm wondering if it is possible to set up a timeout for receiving data over the USB interface in STM32 microcontrollers. Such an approach is possible, for example, in a UART connection (please refer to AN3109, section 2, "Receive DMA timeout").
I can't find anything similar for the USB interface. What's more, it is said that DMA for USB should be enabled only if really necessary, because data transfers must be aligned to 32-bit words.
You have a receive callback function (if you use the HAL) in your ...._if.c file. Copy the received chars to a buffer and implement the timeout there.
What you refer to in the case of UART is either the DMA receive timeout, as you've said, or (when not using DMA) an IDLE interrupt. I'm not aware of such a thing coming "out of the box" for USB CDC - you'd have to implement this timeout yourself, which shouldn't be too hard. Have a timer (hardware or software) that you re-trigger every time you receive data. Set its period to the timeout value of your choice and do protocol parsing after the timeout elapses.
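A minimal sketch of that software-timer idea, assuming an STM32 HAL project (HAL_GetTick() is the HAL's millisecond tick); rx_on_data(), rx_poll(), parse_frame(), and the 20 ms timeout are all made-up names/values:

```c
#include <stdint.h>
#include <string.h>

extern uint32_t HAL_GetTick(void);           /* provided by the STM32 HAL */
extern void parse_frame(const uint8_t *p, uint16_t len); /* your parser   */

#define RX_TIMEOUT_MS 20u                    /* assumed inter-chunk timeout */

static uint8_t           rx_buf[256];
static volatile uint16_t rx_len;
static volatile uint32_t rx_last_tick;
static volatile uint8_t  rx_pending;

/* Call from the CDC receive callback for every chunk that arrives. */
void rx_on_data(const uint8_t *data, uint16_t len)
{
    if (rx_len + len <= sizeof(rx_buf)) {
        memcpy(rx_buf + rx_len, data, len);
        rx_len += len;
    }
    rx_last_tick = HAL_GetTick();            /* re-trigger the "timer" */
    rx_pending = 1;
}

/* Poll from the main loop: the message is considered complete once the
 * line has been idle for RX_TIMEOUT_MS. */
void rx_poll(void)
{
    if (rx_pending && (HAL_GetTick() - rx_last_tick) >= RX_TIMEOUT_MS) {
        parse_frame(rx_buf, rx_len);
        rx_len = 0;
        rx_pending = 0;
    }
}
```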
If I had to add anything - this kind of problem (not knowing how many bytes to receive) is typically solved at the protocol level. Assuming a binary protocol, one way of achieving this is having frame start and end bytes which never occur in the data (and if they do - you escape them), in which case you receive everything starting after the "start byte" until you receive the "end byte". Yet another way is having a "start byte" and a field indicating how many bytes there are to receive. All of it should of course be checksummed in some way.
Having said that, if you have an option to change the protocol, you really should do so. Relying on timings in your communication, especially at such a low level, only invites problems and headaches in the long run. You introduce tight coupling between your protocol layer and your interface layer. This is going to backfire on you every time you decide to use a different interface, as you'll have to re-invent the same thing again. Not to mention how painful it's going to be when you decide to move to TCP/IP with all its greatness - network jitter, dropped packets, etc.
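To illustrate the length-field variant described above: a byte-wise parser for a hypothetical frame format of [0x02 start][1-byte length][payload][XOR checksum] (parser_feed() and frame_received() are made-up names):

```c
#include <stdint.h>

enum rx_state { WAIT_START, WAIT_LEN, WAIT_PAYLOAD, WAIT_CSUM };

static enum rx_state state = WAIT_START;
static uint8_t payload[255];
static uint8_t need, got, csum;

extern void frame_received(const uint8_t *p, uint8_t len); /* app hook */

/* Feed one received byte at a time; delivers complete, verified frames. */
void parser_feed(uint8_t b)
{
    switch (state) {
    case WAIT_START:
        if (b == 0x02) state = WAIT_LEN;          /* hunt for start byte */
        break;
    case WAIT_LEN:
        need = b; got = 0; csum = 0;
        state = need ? WAIT_PAYLOAD : WAIT_CSUM;
        break;
    case WAIT_PAYLOAD:
        payload[got++] = b;
        csum ^= b;
        if (got == need) state = WAIT_CSUM;
        break;
    case WAIT_CSUM:
        if (b == csum)
            frame_received(payload, need);        /* checksum OK: deliver */
        state = WAIT_START;                       /* resync either way */
        break;
    }
}
```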
I am writing a bulletin board system (BBS) reader on iOS. I use the GCDAsyncSocket library to handle sending and receiving packets. The issue I have is that the server always splits the data it sends into multiple packets. I can see that happening by printing out the received string in the didReceiveData() function.
From the GCDAsyncSocket readme, I understand TCP is a stream. I also know there are some end-of-stream mechanisms, such as double CR LFs at the end. I have used Wireshark to parse the packets, but there is no sign of any pattern in the last data packet. The site is not owned by me, so I can't make it send certain bytes. There must be some way to detect the last packet; otherwise, how do BBS clients handle displaying data?
Double CR LFs are not end of stream. That is just a detail of the HTTP protocol, for example, and has nothing to do with closing the stream. HTTP 1.1 allows me to send multiple responses on a single stream, each with a double CR LF after its HTTP header, without ending the stream.
The TCP socket stream will return 0 on a read when it is closed from the other end, blocking or non-blocking.
So, assuming the server will close the socket when it is done sending to you, you can loop and perform a blocking read: if it returns > 0, process the data, then read again; if it returns < 0, process the error code (which may or may not be fatal); and if it returns == 0, the socket has been closed from the other side, so don't read again.
For non-blocking sockets, you can use select() or some other API to detect when the stream becomes ready to read. I'm not familiar with the specific lib you are using but if it is a POSIX / Berkeley sockets API, it will work that way.
In either case, you should build a buffer of input, concatenating the results of each read until you are ready to process. As you've found, you can't assume that a single read will return a complete application level packet. But as to your question, unless the server wants you to close the socket, you should wait for read to return 0.
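As a sketch of that pattern with plain POSIX/Berkeley sockets (GCDAsyncSocket wraps this for you, but the logic is the same; read_until_close() is a made-up helper):

```c
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Blocking read loop: accumulate everything the peer sends until it
 * closes the connection (read() returns 0). `fd` is a connected socket.
 * Returns a malloc'd buffer and sets *out_len, or NULL on error. */
static char *read_until_close(int fd, size_t *out_len)
{
    size_t cap = 4096, len = 0;
    char *buf = malloc(cap);

    if (!buf)
        return NULL;
    for (;;) {
        if (len == cap) {                     /* grow the buffer as needed */
            char *tmp = realloc(buf, cap * 2);
            if (!tmp) { free(buf); return NULL; }
            buf = tmp;
            cap *= 2;
        }
        ssize_t n = read(fd, buf + len, cap - len);
        if (n > 0)
            len += (size_t)n;                 /* partial data: keep going */
        else if (n == 0)
            break;                            /* peer closed: complete */
        else if (errno != EINTR) {
            free(buf);                        /* fatal error */
            return NULL;
        }
    }
    *out_len = len;
    return buf;
}
```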
In a Linux network driver, I must provide a function, hard_start_xmit(), that actually sends packets on the wire. I understand that if it can't send the packet, hard_start_xmit() should return an error, which will cause the packet to be retried later. However, since hard_start_xmit() may be called with IRQs disabled, it cannot wait very long to determine whether the packet could be sent.
How do I deal with a transmission error that happens after hard_start_xmit() has already returned success? Is it correct to simply drop the packet, free the skb, and count a transmit error?
Yes. Many transmit errors are only detectable after the NIC has actually tried to transmit the frame. Note that there are several different error counters that you can increment, if your device returns sufficient information on the error - see struct net_device_stats.
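For instance, a hypothetical TX-completion handler in the driver's interrupt path might count errors like this (mydrv_tx_complete() is a made-up name; the counters come from struct net_device_stats):

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical completion handler, called after the NIC reports the
 * outcome of a transmit attempt that hard_start_xmit() already accepted. */
static void mydrv_tx_complete(struct net_device *dev, struct sk_buff *skb,
                              bool ok, bool carrier_lost)
{
    if (ok) {
        dev->stats.tx_packets++;
        dev->stats.tx_bytes += skb->len;
    } else {
        /* Too late to retry: count the error and drop the frame. */
        dev->stats.tx_errors++;
        if (carrier_lost)
            dev->stats.tx_carrier_errors++;
    }
    dev_kfree_skb_irq(skb);   /* safe way to free an skb in IRQ context */
}
```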
I'm using IOCP on a UDP socket, and the UDP socket may be closed in another thread. So, how can I safely free the per-socket context and per-I/O context associated with the SOCKET?
When I close the socket, there will still be uncompleted I/O requests in the kernel queue.
If I free the context as soon as the socket is closed, GetQueuedCompletionStatus may fail.
Now, my question is: when should I free the context?
I use reference counting on all of my per-socket and per-I/O data structures. It makes this kind of thing easy, as they are deleted when their reference counts drop to 0. For some example code which shows one way to do this, you could take a look at my free IOCP framework, which you can download from here.
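A bare-bones sketch of such a reference-counting scheme with Winsock overlapped I/O (SOCK_CTX, IO_CTX, and the helpers are made-up names):

```c
#include <winsock2.h>
#include <windows.h>
#include <stdlib.h>

/* Per-socket context: one reference for the socket itself, plus one per
 * outstanding overlapped operation (taken before each WSARecv/WSASend). */
typedef struct {
    SOCKET        sock;
    volatile LONG refs;
    /* ... per-connection state ... */
} SOCK_CTX;

typedef struct {
    WSAOVERLAPPED ov;    /* first member, so the completion casts back */
    SOCK_CTX     *ctx;
    /* ... per-operation buffers ... */
} IO_CTX;

static void ctx_addref(SOCK_CTX *c)
{
    InterlockedIncrement(&c->refs);
}

static void ctx_release(SOCK_CTX *c)
{
    /* The last completion (including aborted ones) frees the context,
     * so closesocket() in another thread is safe. */
    if (InterlockedDecrement(&c->refs) == 0)
        free(c);
}

/* Worker thread: every dequeued completion, successful or not, drops the
 * reference taken when the operation was issued. */
static void on_completion(BOOL ok, DWORD bytes, IO_CTX *io)
{
    /* ERROR_OPERATION_ABORTED after closesocket() also lands here. */
    (void)ok; (void)bytes;
    ctx_release(io->ctx);
    free(io);
}
```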
Use a mutex to enforce mutual exclusion in a critical section of your code that will check the availability of the socket, and open it if necessary. Lock the socket to that thread, and release it appropriately when finished.
I reuse my per-socket structures. After I have received completion events for all of the read and write operations that are required for that connection, I call TransmitFile with the TF_DISCONNECT and TF_REUSE_SOCKET flags to reset the socket without having to close it. I also reset the per-connection data once the completion event for the TransmitFile call comes through.
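The reset itself is a single call, assuming you include mswsock.h and link Mswsock.lib; the OVERLAPPED is whatever per-I/O structure you use to recognize the completion:

```c
#include <winsock2.h>
#include <mswsock.h>

/* Disconnect the socket and mark it for reuse (e.g. by a later AcceptEx)
 * instead of closing it; completion arrives through the IOCP as usual. */
static BOOL reset_for_reuse(SOCKET s, OVERLAPPED *ov)
{
    return TransmitFile(s, NULL, 0, 0, ov, NULL,
                        TF_DISCONNECT | TF_REUSE_SOCKET);
}
```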
Close the socket first. You will get an error (I think it is ERROR_OPERATION_ABORTED) from GetQueuedCompletionStatus, and then it is the right time to free the structure. There are no other uncompleted requests on this connection in the kernel queue by then; completion packets are maintained in FIFO order, and the error packet will definitely be the last one for this connection.