Sending and receiving UDP packets in LwIP over same connection? - network-programming

I'm developing an application that should be able to asynchronously transmit and receive UDP messages on the same port number, and I'm a little confused about the best way of doing this. I'm using LwIP and FreeRTOS on an STM32 platform and want to use the netconn API.
My application should:
Transmit messages after a certain ISR fires. I have it set up so the ISR releases a semaphore, which my UDP task consumes.
Receive messages all the time
If I were developing this on Linux, I think it'd make sense to have one thread to send and one to receive, or maybe to use the select() call. As far as I can tell, neither of these is feasible with LwIP.
The only option I've thought of is to do something like this in my UDP task.
void my_task(void *arg)
{
    struct netbuf *mybuf;

    // setup netconn connection here (netconn_new, netconn_bind, ...)
    netconn_set_recvtimeout(conn, 1);
    while (1)
    {
        // Only wait 1 ms to take the semaphore released by the ISR
        if (xSemaphoreTake(isr_semaphore, pdMS_TO_TICKS(1)) == pdTRUE)
        {
            netconn_send(conn, nbuf);
        }
        // Only block for 1 ms waiting for an incoming UDP message
        if (netconn_recv(conn, &mybuf) == ERR_OK)
        {
            // process incoming data
            netbuf_delete(mybuf);
        }
    }
}
However, this seems fairly inelegant to me, as each iteration polls and wastes up to 1 ms per call. Is there a better way to achieve the same thing? This feels like a really common requirement, yet I don't see any examples of it out there.

As the LwIP documentation mentions, the netconn API is sequential, and therefore blocking.
If you want to make it asynchronous you should use the raw API, which is callback based.
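For illustration, a minimal sketch of the callback-based raw API (PORT and the helper names are assumptions, not from the question; under FreeRTOS, raw-API calls must be serialized with the tcpip thread, e.g. via tcpip_callback() or LWIP_TCPIP_CORE_LOCKING):

#include <string.h>
#include "lwip/udp.h"
#include "lwip/pbuf.h"

#define PORT 5000   /* hypothetical port number */

static struct udp_pcb *pcb;

/* Called from the lwIP core whenever a datagram arrives on the bound PCB. */
static void udp_recv_cb(void *arg, struct udp_pcb *upcb, struct pbuf *p,
                        const ip_addr_t *addr, u16_t port)
{
    /* process incoming data in p->payload, length p->len */
    pbuf_free(p);
}

void udp_setup(void)
{
    pcb = udp_new();
    udp_bind(pcb, IP_ADDR_ANY, PORT);
    udp_recv(pcb, udp_recv_cb, NULL);   /* register the receive callback */
}

/* Transmit path, e.g. run after the ISR's semaphore has been taken. */
void udp_transmit(const void *data, u16_t len, const ip_addr_t *dst)
{
    struct pbuf *p = pbuf_alloc(PBUF_TRANSPORT, len, PBUF_RAM);
    if (p != NULL) {
        memcpy(p->payload, data, len);
        udp_sendto(pcb, p, dst, PORT);
        pbuf_free(p);
    }
}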

Since you are using FreeRTOS, create two separate tasks, one to send and one to receive, and set the priority of the sending task higher than that of the listener.
In other words, create a client task and a server task.
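For reference, a minimal sketch of that split, reusing conn, nbuf and isr_semaphore from the question. Whether one netconn may safely be shared between a sending and a receiving task depends on the LwIP version; newer versions provide LWIP_NETCONN_FULLDUPLEX for exactly this.

#include "lwip/api.h"
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

extern struct netconn *conn;            /* created and bound during init */
extern SemaphoreHandle_t isr_semaphore; /* released by the ISR */
extern struct netbuf *nbuf;             /* outgoing message, prepared elsewhere */

static void udp_send_task(void *arg)
{
    for (;;) {
        /* Block indefinitely until the ISR releases the semaphore. */
        if (xSemaphoreTake(isr_semaphore, portMAX_DELAY) == pdTRUE)
            netconn_send(conn, nbuf);
    }
}

static void udp_recv_task(void *arg)
{
    struct netbuf *rxbuf;
    for (;;) {
        /* Block indefinitely until a datagram arrives. */
        if (netconn_recv(conn, &rxbuf) == ERR_OK) {
            /* process incoming data */
            netbuf_delete(rxbuf);
        }
    }
}

/* At startup, e.g. (sender at the higher priority, as suggested above):
   xTaskCreate(udp_send_task, "udp_tx", 512, NULL, tskIDLE_PRIORITY + 2, NULL);
   xTaskCreate(udp_recv_task, "udp_rx", 512, NULL, tskIDLE_PRIORITY + 1, NULL); */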

Related

How does error handling work in SCTP Sockets API Extensions?

I have been trying to implement a wrapper library for the Linux interface to SCTP sockets, and I am not sure how to integrate the asynchronous style of errors (where they are delivered via events). All example code I have seen, if it deals with the errors at all, simply prints out the information related to the error when it is received, but inserting error-handling code there seems like it would be ineffective, because by that point all of the context related to the original message which was sent has been lost and only a 32-bit integer sinfo_context remains. It also seems that there is no way to directly tell when a given message has been acknowledged successfully by the remote peer, which would make it impossible to implement an approach which listens for errors after sending a message, because the context information for successfully-delivered messages could never be freed.
Is there a way to handle the errors related to a given sending operation as part of the call to a send function, or is there a different way to approach error handling for SCTP which does not lose the context of the error?
One solution which I have considered is using the SCTP_SENDER_DRY notification to tell when packets have been sent; however, this requires sending only one packet at a time. Another idea is to use the peer's receiver window size together with the sinfo_cumtsn field of sctp_sndrcvinfo to calculate how much data has been acknowledged as fully received using the cumulative TSN, but there are a few disadvantages to this: first, it requires bookkeeping overhead to calculate the number of bytes received by the peer based on the cumulative TSN (especially if the peer's window size may change); second, it requires waiting until all earlier packets have been received before reporting success, which seems to defeat the purpose of SCTP's multistreaming; and third, it seems like it would not work for unordered packets.
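For context, a minimal sketch of the event mechanism the question refers to (Linux, lksctp-tools, link with -lsctp): subscribing to send-failure notifications and recovering sinfo_context from them. The sock handle is an assumed, already-created SCTP socket.

#include <netinet/sctp.h>
#include <sys/socket.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Subscribe to send-failure events so errors come back as notifications. */
static int enable_send_failure_events(int sock)
{
    struct sctp_event_subscribe ev;
    memset(&ev, 0, sizeof(ev));
    ev.sctp_data_io_event = 1;        /* deliver sctp_sndrcvinfo with data */
    ev.sctp_send_failure_event = 1;   /* deliver SCTP_SEND_FAILED events */
    return setsockopt(sock, IPPROTO_SCTP, SCTP_EVENTS, &ev, sizeof(ev));
}

/* In the receive loop, notifications arrive interleaved with data. */
static void handle_message(int sock)
{
    char buf[2048];
    struct sctp_sndrcvinfo sinfo;
    int flags = 0;
    int n = sctp_recvmsg(sock, buf, sizeof(buf), NULL, NULL, &sinfo, &flags);
    if (n <= 0)
        return;
    if (flags & MSG_NOTIFICATION) {
        union sctp_notification *sn = (union sctp_notification *)buf;
        if (sn->sn_header.sn_type == SCTP_SEND_FAILED) {
            /* The 32-bit context passed at send time comes back here. */
            uint32_t ctx = sn->sn_send_failed.ssf_info.sinfo_context;
            printf("send failed, context %u\n", ctx);
        }
    } else {
        /* ordinary data in buf[0..n) */
    }
}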

CAN bus delay between sending remote requests

What is the proper way to send multiple remote requests in CAN 2.0B/A? Is it usual to have some delay between them, like 1/50 s, for the receiver to react? I know it shouldn't be needed if proper interrupts are used; I just want to do it the way it was originally designed for.
For future readers: no delay is usually needed, but be aware that an answer can be received while you are still sending remote requests.
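A minimal Linux SocketCAN sketch of back-to-back remote requests; the can0 interface, the 0x123 identifier and the DLC are assumptions, and on a bare-metal CAN controller the same idea applies to its TX mailboxes.

#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_can addr;
    struct ifreq ifr;
    struct can_frame frame;

    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    strcpy(ifr.ifr_name, "can0");          /* hypothetical interface name */
    ioctl(s, SIOCGIFINDEX, &ifr);
    memset(&addr, 0, sizeof(addr));
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    /* Remote request: set the RTR bit, DLC = expected length, no payload. */
    memset(&frame, 0, sizeof(frame));
    frame.can_id = 0x123 | CAN_RTR_FLAG;   /* hypothetical identifier */
    frame.can_dlc = 8;

    /* Queue several remote requests back to back, with no artificial delay. */
    for (int i = 0; i < 3; i++)
        write(s, &frame, sizeof(frame));

    /* Answer frames may already arrive while requests are still going out. */
    while (read(s, &frame, sizeof(frame)) > 0) {
        /* process answer frames here */
    }
    close(s);
    return 0;
}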

Unordered socket read & close notification using IOCP

Most server frameworks/examples using sockets and I/O completion ports deliver notifications in a way whose purpose I couldn't completely figure out.
Upon a read, packets are processed; usually they are re-sequenced to work around thread-scheduling issues that cause packets to be handled out of order, even though IOCP itself ensures a FIFO queue.
The problem is when a socket is closed, gracefully or by an error. I have seen, in both situations (and again because of the OS thread scheduler), the close notification delivered to the application (e.g. an HTTP server using the framework) before the notification for data read earlier.
I think that the close notification should be queued in such a way that the application receives it after the previous reads.
Is there an intended reason for this in most of the code I saw, or is the behavior I expect correct, depending on the situation?
What you suggest makes sense, and I would imagine that any code that handles graceful close (a read returning 0 bytes) would do so by processing it after any preceding successful read. Errors coming out of GetQueuedCompletionStatus(), such as connection-reset errors, are harder to integrate into the receive flow as they occur out of band as far as the receive data is concerned. Your question's a bit vague and depends very much on the code you're using and how you (or the people who wrote that code) want to handle these things. There is no single correct way, IMHO.
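One way to get the ordering the question asks about, sketched below under assumed names: stamp every posted read with a per-socket sequence number, park completions that arrive out of turn, and treat a 0-byte completion (graceful close) like any other sequenced item, so it can never overtake earlier data. A per-connection lock is assumed around deliver_in_order() when several threads call GetQueuedCompletionStatus().

#include <winsock2.h>
#include <windows.h>

typedef struct IO_CTX {
    OVERLAPPED ov;             /* must be first: completions hand back this pointer */
    DWORD      seq;            /* assigned when the read was posted */
    DWORD      bytes;          /* filled on completion; 0 means graceful close */
    char       buf[4096];
    struct IO_CTX *next;
} IO_CTX;

typedef struct {
    SOCKET  s;
    DWORD   next_to_process;   /* sequence number the application expects next */
    IO_CTX *parked;            /* completions that arrived out of turn */
} CONN;

/* Deliver ctx, then any parked successors, strictly in sequence order. */
static void deliver_in_order(CONN *c, IO_CTX *ctx)
{
    if (ctx->seq != c->next_to_process) {   /* not its turn yet: park it */
        ctx->next = c->parked;
        c->parked = ctx;
        return;
    }
    for (;;) {
        if (ctx->bytes == 0) {
            /* Graceful close: every read posted before it has been processed. */
            closesocket(c->s);
            return;
        }
        /* ... hand ctx->buf[0..ctx->bytes) to the application here ... */
        c->next_to_process++;
        /* Unpark the next sequence number, if it has already completed. */
        IO_CTX **pp = &c->parked;
        while (*pp != NULL && (*pp)->seq != c->next_to_process)
            pp = &(*pp)->next;
        if (*pp == NULL)
            return;
        ctx = *pp;
        *pp = ctx->next;
    }
}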

semaphore_wait_trap, GCD and CocoaAsyncSocket

I am currently building an App using CocoaAsyncSocket. I connect to a TCP server and read/write some data.
I create the socket using
self.socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
When data is received, I use FMDB to save it into a database. Everything works fine until I send the app to the background (using the Home button) and then resume it. The UI is frozen and unresponsive, and the debugger shows that it is waiting at semaphore_wait_trap.
Don't use the main queue as an argument to the delegateQueue parameter. Use one of the global concurrent queues or a serial/parallel queue you create yourself.
Update: I just looked at the implementation for GCDAsyncSocket and now realize that the delegate queue and methods are fired async to the actual read/write operations, which happen on an internal queue, so my suggestion was either irrelevant (depending on what you're actually doing in the completion methods) or, at the very least, not pertinent to the problem you're having. I think what's happening is that the internal socket(s) are being closed, as per the iOS App Programming Guide. To wit:
Be prepared to handle connection failures in your network-based sockets. The system may tear down socket connections while your app is suspended for any number of reasons. As long as your socket-based code is prepared for other types of network failures, such as a lost signal or network transition, this should not lead to any unusual problems. When your app resumes, if it encounters a failure upon using a socket, simply reestablish the connection.
The GCDAsyncSocket class you're using has some methods which seem to be aimed at dealing with this, such as -autoDisconnectOnClosedReadStream, and I think you just need to add some code to handle the disconnection / connection re-establishment case.

EAGAIN Error: Using Berkeley Socket API

Sometimes when I try to send some packets continuously (I am using the send() API), I receive this error, and I am not sure what I should do then. I have these questions:
1) Can I re-send again? If yes, after how much time should I try? Is there any particular strategy to follow?
2) Is the send buffer exceeding its limit the only reason for this error?
3) Can someone please give me a better idea/code for how to handle such a scenario?
From send(2): "EAGAIN -- The socket is marked non-blocking and the requested operation would block." And also: "When the message does not fit into the send buffer of the socket, send() normally blocks, unless the socket has been placed in non-blocking I/O mode. In non-blocking mode it would fail with EAGAIN in this case. The select(2) call may be used to determine when it is possible to send more data."
This thread has a simple example of using select() to deal with EAGAIN, and is followed by significant discussion about what sorts of surprises lurk beneath the surface.
EAGAIN is usually returned when there is no outbound buffer space left. How long to wait depends on the speed of the underlying connection. The normal way is to wait until select() or poll() tells you that the socket is available for writing. If you're on Linux, take a look at the select_tut(2) man page, and of course the send(2) man page.
You could change to blocking operation (which is the default) if you want the call to wait until there is space available. Or you could call select(2) to wait until the socket is writeable and then try again.
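A minimal sketch of that pattern for a non-blocking TCP socket; the send_all() helper is illustrative, not a standard API.

#include <errno.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send the whole buffer, waiting with select() whenever EAGAIN is returned. */
ssize_t send_all(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(fd, buf + off, len - off, 0);
        if (n >= 0) {
            off += (size_t)n;
            continue;
        }
        if (errno == EINTR)
            continue;                    /* interrupted, just retry */
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            /* block until the kernel has buffer space for this socket */
            if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0 && errno != EINTR)
                return -1;
            continue;
        }
        return -1;                       /* real error */
    }
    return (ssize_t)off;
}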
There is one other important consideration. If you are sending UDP packets, then keep in mind that there is no guarantee of congestion control, and if you're sending packets over the Internet you will almost certainly get packet loss if you just try sending UDP packets as fast as possible (this doesn't necessarily apply to other datagram sockets such as Unix sockets).
