How To Disable Nagle on iOS POSIX Socket

I'll start by saying that I definitely want to disable Nagle's Algorithm. The application I am testing is a real-time P2P app in which packets are small and extremely time-sensitive. This test also serves to compare UDP and TCP as possible networking solutions.
I am able to open the TCP socket and send messages back and forth, but I have not been able to disable Nagle's Algorithm. I have tried:
static const int yes = 1;
if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &yes, sizeof(yes)) < 0) {
    fprintf(stderr, "Error setting tcp nodelay\n");
    return -2;
}
on both the listening and connecting sockets. This does not fail. I also elevated the priority of the receiving thread to DISPATCH_QUEUE_PRIORITY_HIGH.
The test I am running starts with a packet that contains "start", followed by numerous packets that contain "data" and one final packet containing "stop". These messages are almost always combined into one or a few packets, such as:
Received Peer Message: startdatadatadatadatadatadatastop
or
Received Peer Message: startdatadatadata
Received Peer Message: datadatadatastop
Is there a difference in the way that TCP_NODELAY is set on iOS? I was able to set it on Linux with the above code successfully.
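For comparison, here is a minimal sketch of how the sender side of that test might look (send_msg is a hypothetical helper; connfd is assumed to be the already-connected socket descriptor):

#include <errno.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Disable Nagle on the *connected* socket, then send one small message.
   Setting the option on the connected descriptor, rather than only on the
   listener, avoids relying on option inheritance across accept(). */
static int send_msg(int connfd, const char *msg)
{
    static const int yes = 1;
    if (setsockopt(connfd, IPPROTO_TCP, TCP_NODELAY, &yes, sizeof(yes)) < 0) {
        fprintf(stderr, "setsockopt(TCP_NODELAY): %s\n", strerror(errno));
        return -1;
    }
    return (int)send(connfd, msg, strlen(msg), 0);
}

Note that even with Nagle disabled on both ends, TCP remains a byte stream: a single recv() on the peer can return the contents of several send() calls, so coalesced messages at the receiver do not by themselves prove that TCP_NODELAY failed.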

Related

Is NSStream.close() synchronous wrt TCP?

I have input and output NSStreams as part of a TCP connection, created using NSStream.getStreamsToHostWithName(). If I call the close() method on those input and output streams, will my TCP connection be in the CLOSED state by the time the calls return?
If not, how could I determine the time at which the underlying TCP connection actually closes?
Closing the stream terminates the flow of bytes and releases system
resources that were reserved for the stream when it was opened. If the
stream has been scheduled on a run loop, closing the stream implicitly
removes the stream from the run loop. A stream that is closed can
still be queried for its properties.
var streamStatus: NSStreamStatus { get }
The receiver’s status
enum NSStreamStatus : UInt {
    case NotOpen
    case Opening
    case Open
    case Reading
    case Writing
    case AtEnd
    case Closed
    case Error
}
As far as I can tell, the answer is no, the TCP connection will not necessarily be in the CLOSED state. To make sure the TCP connection does a graceful close and find out when that happens, one must use a lower-level API than NSStream.
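At the BSD-socket level, a graceful close that lets you observe the shutdown handshake could look like the following sketch (fd is assumed to be the connection's native socket descriptor, e.g. obtained via the stream's kCFStreamPropertySocketNativeHandle property):

#include <sys/socket.h>
#include <unistd.h>

/* Gracefully close a connected TCP socket, returning only after the
   peer's FIN has been seen. */
static void graceful_close(int fd)
{
    char buf[1024];

    shutdown(fd, SHUT_WR);                 /* send our FIN; no more writes */
    while (read(fd, buf, sizeof(buf)) > 0)
        ;                                  /* drain until read() returns 0, i.e. the peer's FIN */
    close(fd);                             /* release the descriptor; the kernel may still
                                              hold the connection in TIME_WAIT before CLOSED */
}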

Unable to reset the CAN error using the CAN bus error recovery sequence when the CAN bus is not connected to another node

Description:
We are using an Infineon XE164 microcontroller and Keil uVision4 to compile the code.
BACKGROUND AND THE HARDWARE SETUP
Our product setup is as follows:
We are using the XE164 microcontroller to control peripherals; it basically controls a servo motor, a stepper motor, an LCD, and a keypad.
We want to transmit data from the Infineon XE164 node to a PIC18F2480 node.
PROBLEM
There is no issue transmitting and receiving data between both nodes on the CAN bus.
When the bus is not connected and data is transmitted by the XE164 board, our CAN bus goes into the error state. What is the recovery sequence to change the CAN bus from the error state to the idle state?
How can we avoid this without a hardware reset of the microcontroller?
In the CAN protocol, an ACK is a must!
If there is no other node on the bus, CAN transmission will not work, and it is correct behaviour that the controller goes into the error state.
The only way to get rid of that error state is to re-initialize your CAN module (for example, by calling CANInit() again): even after the error recovery sequence, the CAN controller retries the transmission, gets stuck in the error state again, and this goes on indefinitely. Re-initialization stops these attempts and returns the CAN module to the normal state.
EDIT after comment from OP:
If you want to poll whether there is a device on the bus, you can set a timer interrupt of, say, X msec and do the following in the timer ISR (see the sketch after the list):
1. Initialize CAN
2. Send a CAN Message
3. If no error interrupt is generated and the message is successfully transmitted, stop the timer; otherwise keep it going.
You can also try different baud rates.
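A sketch of that polling sequence (CANInit() is from the answer above; the other names -- CANSendMessage(), StopProbeTimer(), the flags, and probe_frame -- are placeholders for whatever your XE164 CAN driver provides):

/* Flags set by the CAN driver's interrupt handlers (placeholder names). */
volatile int can_error   = 0;     /* set by the CAN error ISR              */
volatile int can_tx_done = 0;     /* set when a frame was ACKed on the bus */

/* Timer ISR, fired every X msec: probe the bus until a frame gets through. */
void CAN_Probe_Timer_ISR(void)
{
    if (can_tx_done) {            /* another node ACKed our last probe */
        StopProbeTimer();         /* bus is alive again: stop polling  */
        return;
    }
    CANInit();                    /* re-initialize to leave the error state */
    can_error = can_tx_done = 0;
    CANSendMessage(&probe_frame); /* transmit one test frame */
}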

path of packets through network stack

I'm trying to study and understand the operation of the Linux TCP/IP stack, specifically how 'ping' sends packets down the stack and receives them.
Ping creates a raw socket in the AF_INET family, so I placed a printk in inet_sendmsg() at net/ipv4/af_inet.c to print the socket protocol name (RAW, UDP, etc.) and the address of the protocol-specific sendmsg function, which correctly appears to be raw_sendmsg() from net/ipv4/raw.c.
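For reference, the instrumentation could look something like this inside inet_sendmsg() (a sketch against a 2.6-era tree; struct proto carries both the protocol name and the sendmsg pointer):

/* net/ipv4/af_inet.c, inside inet_sendmsg() */
struct sock *sk = sock->sk;

printk(KERN_INFO "inet_sendmsg: proto=%s sendmsg=%p\n",
       sk->sk_prot->name, sk->sk_prot->sendmsg);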
Now, I'm sending a single packet and observe that I get the printk from inet_sendmsg() twice. This puzzles me -- is this normal (does it have something to do with interrupts, etc.?) or is something broken in the kernel?
Platform - ARM5te, kernel 2.6.31.8
Looking forward to hearing from you!
Mark

How to send data immediately in epoll ET mode when connect established

My server needs to send data as soon as a client connects to it. I am using epoll ET (edge-triggered) mode.
But how do I do it? Could anyone give me a simple example?
Assuming you are listening on your socket (socket, bind, listen) and have added its descriptor to epoll (epoll_create and epoll_ctl), then epoll_wait will tell you when there is a new connection to accept.
First you accept the connection (sockfd is descriptor of socket you're listening on, efd is epoll instance) and add it to your epoll instance:
int connfd = accept4(sockfd, NULL, NULL, SOCK_NONBLOCK);  /* accept non-blocking */
struct epoll_event ev;
ev.events = EPOLLOUT | EPOLLET;   /* edge-triggered write readiness */
ev.data.fd = connfd;
epoll_ctl(efd, EPOLL_CTL_ADD, connfd, &ev);  /* check the return value in real code */
Then you go back to your main loop and call epoll_wait again. It will tell you when the socket is ready for writing, and you just happily write or sendfile away.
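The wait-and-write step might look like this sketch (buf and len stand for whatever data you want to push on connect, and a real server would track partial writes):

struct epoll_event events[64];
int n = epoll_wait(efd, events, 64, -1);

for (int i = 0; i < n; i++) {
    if (events[i].events & EPOLLOUT) {
        ssize_t sent = write(events[i].data.fd, buf, len);
        /* ET caveat: keep calling write() until it returns -1 with
           errno == EAGAIN, then wait for the next EPOLLOUT edge. */
        if (sent < 0 && errno != EAGAIN)
            close(events[i].data.fd);     /* hard error: drop the connection */
    }
}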
Add lots of error checking, and probably TCP_CORK and you're done. There's a working example on github.com/grahamking/netshare/.
I hope this gives you enough information to get started.

MQ Connection - 2009 error

I am connecting to MQ with the code below, and I am able to connect successfully. My application puts messages to MQ once every minute. After disconnecting the cable I get a ReasonCode error, but the IsConnected property still shows true. Is this the right way to check whether the connection is still alive? Or are there any best practices around that?
I would like to open the connection when the application starts and keep it open forever.
public static MQQueueManager ConnectMQ()
{
    if ((queueManager == null) || (!queueManager.IsConnected) || (queueManager.ReasonCode == 2009))
    {
        queueManager = new MQQueueManager();
    }
    return queueManager;
}
The behavior of the WMQ client connection is that, when idle, it will appear to be connected until an API call fails or the connection times out. So IsConnected will likely report true until a get, put, or inquire call is attempted and fails, at which point the connection will be reported as disconnected.
The other thing to consider here is that 2009 is not the only code you might get. It happens to be the one you get when the connection is severed but there are connection codes for QMgr shutting down, channel shutting down, and a variety of resource and other errors.
Typically for a requirement to maintain a constant connection you would want to wrap the connect and message processing loop inside a try/catch block nested inside a while statement. When you catch an exception other than an intentional exit, close the objects and QMgr, sleep at least 5 seconds, then loop around to the top of the while. The sleep is crucial because if you get caught in a tight reconnect loop and throw hundreds of connection attempts at the QMgr, you can bring even a mainframe QMgr to its knees.
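In outline, that loop might look like this sketch, shown here against the MQI C client since the same pattern applies in any WMQ client language (the queue manager name and the processing body are placeholders):

#include <cmqc.h>      /* IBM WMQ MQI header */
#include <unistd.h>

void run_connection_loop(void)
{
    MQHCONN  hConn;
    MQLONG   compCode, reason;
    MQCHAR48 qmName = "QMGR.NAME";       /* placeholder queue manager name */

    for (;;) {                           /* the outer "while" that never gives up */
        MQCONN(qmName, &hConn, &compCode, &reason);
        if (compCode != MQCC_FAILED) {
            /* ... open queues and put/get messages here until an MQI call
               fails, e.g. with MQRC_CONNECTION_BROKEN (2009) ... */
            MQDISC(&hConn, &compCode, &reason);
        }
        sleep(5);                        /* crucial: back off so a reconnect storm
                                            cannot hammer the queue manager */
    }
}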
An alternative is to use a v7 WMQ client and QMgr. With this combination, automatic reconnection can be enabled in the channel configuration.
