I use file descriptor sets to find the readable sockets and then go on to read them. For some reason, a socket that has no data on the wire still proceeds to the read, which never returns. Is there a way I can come out of the receive after a timeout?
I am using the Winsock library.
http://tangentsoft.net/wskfaq/newbie.html#timeout
2.15 - How can I change the timeout for a Winsock function?
Some of the blocking Winsock functions (e.g. connect()) have a timeout embedded into them. The theory behind this is that only the stack has all the information necessary to set a proper timeout. Yet, some people find that the value the stack uses is too long for their application; it can be a minute or longer.
You can adjust the send() and recv() timeouts with the SO_SNDTIMEO and SO_RCVTIMEO setsockopt() options.
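For example, a minimal sketch (the socket s is assumed to already exist; on Windows this option takes a DWORD in milliseconds):

    DWORD timeoutMs = 5000; // arbitrary example value: 5-second receive timeout
    if (setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                   (const char*)&timeoutMs, sizeof(timeoutMs)) == SOCKET_ERROR) {
        // handle the error via WSAGetLastError()
    }
    // A recv() that waits longer than this now fails with WSAETIMEDOUT.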
For other Winsock functions, the best solution is to avoid blocking sockets altogether. All of the non-blocking socket methods provide ways for you to build custom timeouts:
Non-blocking sockets with select() – The fifth parameter to the select() function is a timeout value (see the sketch after this list).
Asynchronous sockets – Use the Windows API SetTimer().
Event objects – WSAWaitForMultipleEvents() has a timeout parameter.
Waitable Timers – Call CreateWaitableTimer() to make a waitable timer, which you can then pass to a function like WSAWaitForMultipleEvents() along with your sockets: if none of the sockets is signalled before the timer goes off, the blocking function will return anyway.
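For example, here is a minimal sketch of the select() approach (the socket s, buffer buf, and BUF_LEN are assumed from your own code):

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(s, &readfds);

    timeval tv;
    tv.tv_sec = 5;   // wait at most 5 seconds (arbitrary example value)
    tv.tv_usec = 0;

    int rc = select(0, &readfds, NULL, NULL, &tv); // first argument is ignored by Winsock
    if (rc == 0) {
        // timed out: no data arrived in time
    } else if (rc == SOCKET_ERROR) {
        // handle the error via WSAGetLastError()
    } else {
        int len = recv(s, buf, BUF_LEN, 0); // will not block waiting for data
    }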
Note that with asynchronous and non-blocking sockets, you may be able to avoid handling timeouts altogether. Your program continues working even while Winsock is busy. So, you can leave it up to the user to cancel an operation that’s taking too long, or just let Winsock’s natural timeout expire rather than taking over this functionality in your code.
Your problem is in the while loop where you try to fill the buffer. Just add an if statement that checks the last index of each received chunk for '\0', then break out of your while loop:
do {
    len = recv(s, buf, BUF_LEN, 0);
    for (int i = 0; i < len; ++i) {
        if (buf[i] >= 32 || buf[i] == '\n' || buf[i] == '\r') { // only keep printable chars and line breaks
            result += buf[i]; // accumulate the final string
        }
    }
    if (len > 0 && buf[len - 1] == '\0') { // buf[len - 1] is the last byte of the received chunk
        break;
    }
} while (len > 0);
I am still confused about the NumberOfConcurrentThreads parameter of CreateIoCompletionPort(). I have read and re-read the MSDN docs, but the quote
This value limits the number of runnable threads associated with the
completion port.
still puzzles me.
Question
Let's assume that I specify this value as 4. In this case, does this mean that:
1) a thread can call GetQueuedCompletionStatus() (at which point I can allow a further 3 threads to make this call), and as soon as that call returns (i.e. we have a completion packet) I can have 4 threads call this function again,
or
2) a thread can call GetQueuedCompletionStatus() (at which point I can allow a further 3 threads to make this call), and as soon as that call returns (i.e. we have a completion packet) I go on to process that packet; only when I have finished processing the packet do I call GetQueuedCompletionStatus() again, at which point 4 threads can call this function again.
See my confusion? It's the use of the phrase 'runnable threads'.
I think it might be the latter, because the link above also quotes
If your transaction required a lengthy computation, a larger
concurrency value will allow more threads to run. Each completion
packet may take longer to finish, but more completion packets will be
processed at the same time.
This will ultimately affect how we design servers. Consider a server that receives data from clients, then echoes that data to logging servers. Here is what our thread routine could look like:
DWORD WINAPI ServerWorkerThread(HANDLE hCompletionPort)
{
    DWORD BytesTransferred;
    CPerHandleData* PerHandleData = nullptr;
    CPerOperationData* PerIoData = nullptr;
    while (TRUE)
    {
        if (GetQueuedCompletionStatus(hCompletionPort, &BytesTransferred,
                                      (PULONG_PTR)&PerHandleData, (LPOVERLAPPED*)&PerIoData, INFINITE))
        {
            // OK, we have 'BytesTransferred' of data in 'PerIoData', process it:
            // send the data onto our logging servers, then loop back around
            send(...);
        }
    }
    return 0;
}
Now assume I have a four core machine; if I leave NumberOfConcurrentThreads as zero within my call to CreateIoCompletionPort() I will have four threads running ServerWorkerThread(). Fine.
My concern is that the send() call may take a long time due to network traffic. Hence, I could be receiving a load of data from clients that cannot be dequeued because all four threads are busy sending data on?!
Have I missed the point here?
Update 07.03.2018 (This has now been resolved: see this comment.)
I have 8 threads running on my machine, each one runs the ServerWorkerThread():
DWORD WINAPI ServerWorkerThread(HANDLE hCompletionPort)
{
    DWORD BytesTransferred;
    CPerHandleData* PerHandleData = nullptr;
    CPerOperationData* PerIoData = nullptr;
    while (TRUE)
    {
        if (GetQueuedCompletionStatus(hCompletionPort, &BytesTransferred,
                                      (PULONG_PTR)&PerHandleData, (LPOVERLAPPED*)&PerIoData, INFINITE))
        {
            switch (PerIoData->Operation)
            {
            case CPerOperationData::ACCEPT_COMPLETED:
                {
                    // This case is fired when a new connection is made;
                    // deliberately spin so the thread never enters a wait state
                    while (1) {}
                }
            }
        }
    }
    return 0;
}
I only have one outstanding AcceptEx() call; when that gets filled by a new connection I post another one. I don't wait for data to be received in AcceptEx().
I create my completion port as follows:
CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 4)
Now, because I only allow 4 threads in the completion port, I thought that because I keep the threads busy (i.e. they do not enter a wait state), when I try to make a fifth connection the completion packet would not be dequeued and the connection would hang! However, this is not the case: I can make 5 or even 6 connections to my server. This shows that I can still dequeue packets even though my maximum allowed number of threads (4) is already running. This is why I am confused!
The completion port is really a KQUEUE object. The NumberOfConcurrentThreads parameter corresponds to its MaximumCount:
Maximum number of concurrent threads the queue can satisfy waits for.
From the I/O Completion Ports documentation:
When the total number of runnable threads associated with the
completion port reaches the concurrency value, the system blocks the
execution of any subsequent threads associated with that completion
port until the number of runnable threads drops below the concurrency
value.
That is badly and imprecisely worded. When a thread calls KeRemoveQueue (GetQueuedCompletionStatus calls it internally), the system hands a packet to the thread only if Queue->CurrentCount < Queue->MaximumCount, even if packets exist in the queue. The system does not block any threads, of course. From the other side, look at KiInsertQueue: even if some threads are waiting for packets, a waiter is activated only if Queue->CurrentCount < Queue->MaximumCount.
Also look at how and when Queue->CurrentCount is changed, in KiActivateWaiterQueue (called when the current thread is about to enter a wait state) and in KiUnlinkThread. In general, when a thread begins waiting on any object (or on another queue), the system calls KiActivateWaiterQueue: it decrements CurrentCount and possibly (if packets exist in the queue, Queue->CurrentCount has dropped below Queue->MaximumCount, and threads are waiting for packets) hands a packet to a waiting thread. From the other side, when a thread stops waiting, KiUnlinkThread is called, which increments CurrentCount.
Both of your variants are wrong. Any number of threads can call GetQueuedCompletionStatus(), and the system of course does not block the execution of any subsequent threads. For example: you have a queue with MaximumCount = 4. You can queue 10 packets to it and call GetQueuedCompletionStatus() from 7 threads concurrently, but only 4 of them get packets; the others wait, despite 6 packets remaining in the queue. If one of the threads that removed a packet from the queue begins to wait, the system simply unwaits another thread waiting on the queue and hands it a packet. And if a thread that previously removed a packet from this queue (Thread->Queue == Queue, i.e. an active thread) calls KeRemoveQueue again, Queue->CurrentCount is decremented by 1.
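You can observe this from user mode with a small self-contained test (a sketch; the thread count, packet count, and busy-spin are arbitrary choices to keep the dequeuing threads runnable):

    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_port;

    static DWORD WINAPI Worker(LPVOID)
    {
        DWORD bytes; ULONG_PTR key; LPOVERLAPPED ov;
        if (GetQueuedCompletionStatus(g_port, &bytes, &key, &ov, INFINITE))
        {
            printf("thread %lu got packet %u\n", GetCurrentThreadId(), (unsigned)key);
            for (volatile long spin = 0;; ++spin) {} // stay runnable: never enter a wait state
        }
        return 0;
    }

    int main()
    {
        // Concurrency value of 2: at most 2 runnable threads are handed packets.
        g_port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 2);

        for (int i = 0; i < 4; ++i)
            CreateThread(NULL, 0, Worker, NULL, 0, NULL);

        for (ULONG_PTR k = 1; k <= 4; ++k)
            PostQueuedCompletionStatus(g_port, 0, k, NULL); // queue 4 packets

        Sleep(2000); // only 2 "got packet" lines appear; 2 packets stay queued
        return 0;
    }

Because the two dequeuing threads never wait, CurrentCount never drops below MaximumCount, so the remaining packets stay in the queue even though two more threads are blocked in GetQueuedCompletionStatus().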
I am using the native NT API in my application to access files (NtCreateFile etc.). In order to avoid dealing with STATUS_PENDING I am using the FILE_SYNCHRONOUS_IO_NONALERT flag when opening the related file, so opening a file looks like this:
UNICODE_STRING fname = toNtUnicode(ntpath);
OBJECT_ATTRIBUTES oa;
InitializeObjectAttributes(&oa, &fname, 0, at.handle(), NULL);

HANDLE h;
IO_STATUS_BLOCK io_status;
NTSTATUS r = NtOpenFile(&h, GENERIC_READ|SYNCHRONIZE, &oa, &io_status,
                        FILE_SHARE_READ, FILE_SYNCHRONOUS_IO_NONALERT|FILE_DIRECTORY_FILE);
if (r != STATUS_SUCCESS)
    ...; // error handling
Unfortunately, it causes kernel to serialize all operations on given handle. I.e. if I try to execute multiple reads in parallel (using multiple threads) -- only one request will be processed at any point in time.
I could get rid of serialization:
HANDLE h;
IO_STATUS_BLOCK io_status;
NTSTATUS r = NtOpenFile(&h, GENERIC_READ|SYNCHRONIZE, &oa, &io_status,
                        FILE_SHARE_READ, FILE_DIRECTORY_FILE);
if (r == STATUS_PENDING)
    ...; // what to do here???
but how exactly should I wait for completion -- WaitForSingleObject() on the file handle? As far as I know, it can change to the signaled state for many reasons -- is there any way to tell that my open file (or dir) operation has completed?
Similarly, if I submit multiple reads (from multiple threads) -- how can I tell which one (if any) has finished?
NtOpenFile is a synchronous API; it never returns STATUS_PENDING to you. Even if the driver returns STATUS_PENDING for IRP_MJ_CREATE, the I/O subsystem waits for the IRP to complete:
https://github.com/Zer0Mem0ry/ntoskrnl/blob/master/Io/iomgr/parse.c#L1404
So you never need to check for STATUS_PENDING after NtOpenFile, and you never need to wait. (In principle we could not wait here anyway: we do not yet have a file handle, so we cannot wait on it or bind it to, say, an IOCP, and we do not pass any event or other callback mechanism to NtOpenFile.)
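As for the reads themselves, one common approach (a sketch under assumptions: the handle was opened without FILE_SYNCHRONOUS_IO_*, you link against ntdll and declare the NtReadFile prototype yourself, since it is not in the SDK headers) is to give each read its own event; the event then tells you exactly which read finished:

    HANDLE ev = CreateEventW(NULL, TRUE, FALSE, NULL); // one event per outstanding read
    IO_STATUS_BLOCK iosb = {};
    LARGE_INTEGER offset;
    offset.QuadPart = 0; // asynchronous handles need an explicit file offset

    NTSTATUS r = NtReadFile(h, ev, NULL, NULL, &iosb, buf, bufLen, &offset, NULL);
    if (r == STATUS_PENDING)
    {
        WaitForSingleObject(ev, INFINITE); // signalled only when *this* read completes
        r = iosb.Status;
    }
    if (NT_SUCCESS(r))
    {
        // iosb.Information holds the number of bytes actually read
    }
    CloseHandle(ev);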
My Environment:
Windows 7 Pro (32bit)
C++ Builder XE4
I would like to know whether I need to wait after WriteLn().
Following is my sample code.
void __fastcall TForm1::IdTCPServer1Execute(TIdContext *AContext)
{
    UTF8String rcvdStr;
    rcvdStr = AContext->Connection->IOHandler->ReadLn(
        IndyTextEncoding(TEncoding::UTF8) );

    TList *threads;
    TIdContext *ac;
    threads = IdTCPServer1->Contexts->LockList();
    ac = reinterpret_cast<TIdContext *>(threads->Items[0]);

    UTF8String sendStr;
    sendStr = "send:" + rcvdStr;
    ac->Connection->IOHandler->WriteLn(sendStr);

    for (int idx = 0; idx < 10; idx++) {
        Sleep(100);
        Application->ProcessMessages();
    }

    ac->Connection->Disconnect();
    IdTCPServer1->Contexts->UnlockList();
}
//---------------------------------------------------------------------------
I am putting a wait (the for(int idx=0;...) loop) after WriteLn() so that the send is completed before disconnecting. However, I am not sure whether this is a correct way to wait, and I have no idea how long I should wait (in this sample, I wait 1000 msec).
Question: Is there any function that reports the completion of WriteLn()?
You don't need to wait at all. WriteLn() is a blocking function. It does not exit until the entire string has been placed into the socket's outbound buffer. By default, a socket's LINGER option is enabled, which means a closed socket will attempt to send pending outbound data in the background before fully closing the port, even after your code has moved on.
Refer to MSDN for more details:
Graceful Shutdown, Linger Options, and Socket Closure
For the record, Indy's Disconnect() does use both shutdown() and closesocket().
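In other words, the handler above can simply drop the wait loop (the same code, minus the Sleep()/ProcessMessages() block):

    ac->Connection->IOHandler->WriteLn(sendStr); // returns once the data is in the outbound buffer
    ac->Connection->Disconnect();                // the lingering close sends any still-pending bytes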
I looked at the code in GCDAsyncSocket.m that handles read timeouts. If I don't extend the timeout, it seems that the socket gets closed, and there is no option to keep the socket alive. I can't use an infinite timeout (timeout = -1) because I still need to know when it has timed out, but I also don't want it to disconnect. I'm not sure there is a reason behind this. Does anyone know?
- (void)doReadTimeoutWithExtension:(NSTimeInterval)timeoutExtension
{
    if (currentRead)
    {
        if (timeoutExtension > 0.0)
        {
            currentRead->timeout += timeoutExtension;

            // Reschedule the timer
            dispatch_time_t tt = dispatch_time(DISPATCH_TIME_NOW, (timeoutExtension * NSEC_PER_SEC));
            dispatch_source_set_timer(readTimer, tt, DISPATCH_TIME_FOREVER, 0);

            // Unpause reads, and continue
            flags &= ~kReadsPaused;
            [self doReadData];
        }
        else
        {
            LogVerbose(@"ReadTimeout");
            [self closeWithError:[self readTimeoutError]];
        }
    }
}
FYI, there is a pull request at https://github.com/robbiehanson/CocoaAsyncSocket/pull/126 that adds this keep-alive feature, but it has not been merged yet.
I am the original author of AsyncSocket, and I can tell you why I did it that way: there are too many ways for protocols to handle timeouts. So I implemented a "hard" timeout and left "soft" timeouts up to the application author.
The usual way to do a "soft" timeout is with an NSTimer or dispatch_after. Set one of those up, and when the timer fires, do whatever you need to do. Meanwhile, use an infinite timeout on the actual readData call. Note that infinite timeouts aren't actually infinite. The OS will still time out after, say, 10 minutes without successfully reading. If you really want to keep the connection alive forever, you might be able to set a socket option.
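For example, a minimal sketch of the dispatch_after() variant (the 30-second value and the queue choice are arbitrary; the matching readData call would use a timeout of -1):

    // Arm a "soft" timeout; the socket itself keeps reading with an infinite timeout.
    dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, 30 * NSEC_PER_SEC);
    dispatch_after(when, dispatch_get_main_queue(), ^{
        // Timeout fired: warn the user, retry, or disconnect -- your choice.
        // Cancel or ignore this block if the read completed in the meantime.
    });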
I've been working with pthreads a fair bit recently and there's one little thing I still don't quite get. I know that condition variables are designed to wait for a specific condition to come true (or be 'signalled'). My question is, how does this differ at all from normal mutexes?
From what I understand, aren't condition variables just a mutex with additional logic to unlock another mutex (and lock it again) when the condition becomes true?
Pseudocode example:
mutex mymutex;
condvar mycond;
int somevalue = 0;

onethread()
{
    lock(mymutex);
    while (somevalue == 0)
        cond_wait(mycond, mymutex);
    if (somevalue == 0xdeadbeef)
        some_func();
    unlock(mymutex);
}

otherthread()
{
    lock(mymutex);
    somevalue = 0xdeadbeef;
    cond_signal(mycond);
    unlock(mymutex);
}
So cond_wait in this example unlocks mymutex, and then waits for mycond to be signalled.
If this is so, aren't condition variables just mutexes with extra magic? Or do I have a misunderstanding of the fundamental basics of mutexes and condition variables?
The two structures are quite different. A mutex is meant to provide serialised access to a resource of some kind. A condition variable is meant to allow one thread to notify some other thread that some event has occurred.
They aren't exactly mutexes with extra magic, although in some abstractions (the monitor, as used in Java and C#) the condition variable and the mutex are combined into a single unit. The purpose of condition variables is to avoid busy waiting/polling and to hint to the runtime which thread(s) should be scheduled "next". Consider how you would write this example without condition variables:
while (1) {
    lock(mymutex);
    if (somevalue != 0)
        break;
    unlock(mymutex);
}
if (somevalue == 0xdeadbeef)
    myfunc();
unlock(mymutex); // the break above left the mutex held
You'll be sitting in a tight loop in this thread, burning up a lot of CPU and creating a lot of lock contention. If locking/unlocking a mutex is cheap enough, you may be in a situation where otherthread never even gets a chance to obtain the lock (although real-world mutexes generally distinguish between the owning thread and holding the lock, and have notions of fairness, so this is unlikely to happen in reality).
You could reduce the busy waiting by sticking a sleep in:
while (1) {
    lock(mymutex);
    if (somevalue != 0)
        break;
    unlock(mymutex);
    sleep(1); // let some other thread do work
}
unlock(mymutex); // again, the break leaves the mutex held
but how long is a good time to sleep for? You'll basically just be guessing. The runtime also can't tell why you are sleeping or what you are waiting on. A condition variable lets the runtime be at least somewhat aware of which threads are currently interested in the same event.
The simple answer is that you might want to wake more than one thread with a condition variable (see cond_broadcast), whereas a mutex allows only one thread at a time to execute the guarded block.
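To make that concrete, here is a minimal compilable sketch using the POSIX API directly: pthread_cond_broadcast() wakes every waiter, but each waiter still reacquires the mutex one at a time before proceeding.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  mycond  = PTHREAD_COND_INITIALIZER;
    int somevalue = 0;

    void* waiter(void* arg)
    {
        pthread_mutex_lock(&mymutex);
        while (somevalue == 0)                    // re-check: wakeups can be spurious
            pthread_cond_wait(&mycond, &mymutex); // atomically unlocks, sleeps, relocks
        printf("waiter %d saw 0x%x\n", *(int*)arg, somevalue);
        pthread_mutex_unlock(&mymutex);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[3];
        int ids[3] = {0, 1, 2};
        for (int i = 0; i < 3; ++i)
            pthread_create(&t[i], NULL, waiter, &ids[i]);

        pthread_mutex_lock(&mymutex);
        somevalue = 0xdeadbeef;
        pthread_cond_broadcast(&mycond); // wake *all* waiters, not just one
        pthread_mutex_unlock(&mymutex);

        for (int i = 0; i < 3; ++i)
            pthread_join(t[i], NULL);
        return 0;
    }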