I'm evaluating the following block:
[byteStream atEnd] whileFalse: [stream nextPut: self parsePacket]
The problem is that my byteStream, which is a ReadWriteStream, is at its end (when I inspect it, the position = the read limit = the write limit = 512), yet my loop does not stop, as if the "[byteStream atEnd]" test had no effect.
I'm using VisualWorks 7.9.1 under Linux, and my byteStream is fed via a UDP socket.
Any help is welcome.
Here is the server code:
listenOnPort: aPort
    | server peerAddr |
    self initialize.
    server := SocketAccessor newUDPserverAtPort: aPort.
    peerAddr := IPSocketAddress hostName: 'localhost' port: aPort.
    process :=
        [[| buf sizeOfBuf |
        buf := String new: 2048.
        sizeOfBuf := server bufferSize.
        sizeOfBuf > 0
            ifTrue:
                [| dataStream |
                server readWait.
                server receiveFrom: peerAddr buffer: buf.
                dataStream := ReadStream on: buf from: 1 to: sizeOfBuf.
                dataStream reset.
                self receive: dataStream]] repeat]
            fork.
Here is the code that parses what is contained in the buffer:
parse
    ^Array streamContents: [:stream |
        [byteStream atEnd] whileFalse: [stream nextPut: self parsePacket]]
The loop in the parse method is the problem. I tried the code on Windows XP 32-bit and it works fine, but on Linux 32-bit it does not. Could it have something to do with the OS's UDP networking?
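For readers more comfortable outside Smalltalk, here is a minimal Go sketch of the same kind of receive-and-parse loop (Go only because it already appears further down this page; the port and the parsePackets helper are placeholders, not part of the original code). The detail worth noting is that the slice handed to the parser is bounded by n, the number of bytes the receive call actually returned, so the end-of-data check terminates naturally.

package main

import (
    "log"
    "net"
)

// parsePackets stands in for the OSC parsing done by #receive: / #parse.
func parsePackets(data []byte) {
    log.Printf("parsing %d bytes", len(data))
}

func main() {
    conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 9000}) // placeholder port
    if err != nil {
        log.Fatal(err)
    }
    buf := make([]byte, 2048) // reusable receive buffer, as in the Smalltalk code
    for {
        n, _, err := conn.ReadFromUDP(buf)
        if err != nil {
            log.Println("receive failed:", err)
            continue
        }
        parsePackets(buf[:n]) // only the bytes of this datagram, not the whole buffer
    }
}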
I found where the problem came from. I was resizing my buffer with a method that parses the size of the OSC bundle, but this method was faulty and was sending "0" as the position to the buffer each time. So on every iteration the loop found the buffer back at its initial position and kept looping, which is logical. Thank you for your help.
What do you mean by "loop continues"? Clearly it cannot keep reading packets that aren't there. Is it possible that, since you put a hard-coded limit on the buffer size, you have an unfinished packet at the end of the buffer, so it fails trying to read the rest of it?
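As a side note, the behaviour diagnosed above is easy to reproduce in any language. A tiny Go illustration (an analogy only, not the original Smalltalk code): if the stream position is rewound to the start on every iteration, the at-end test can never become true.

package main

import (
    "bytes"
    "fmt"
    "io"
)

func main() {
    r := bytes.NewReader([]byte{1, 2, 3, 4})
    for i := 0; i < 3; i++ { // bounded here; the faulty loop never stopped
        r.Seek(0, io.SeekStart) // the faulty method reset the position like this
        b := make([]byte, 4)
        n, _ := r.Read(b)
        fmt.Printf("iteration %d: read %d bytes, position rewound, never at end\n", i, n)
    }
}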
Related
I'm trying to pipe the output (logs) of a program into a Go program which aggregates/compresses the output and uploads it to S3. The command to run the programs is "/program1 | /logShipper". The logShipper is written in Go and simply reads from os.Stdin and writes to a local file. That local file is processed by another goroutine and uploaded to S3 periodically. There are some existing Docker log drivers, but we are running the container on a fully managed provider and the log-processing charge is pretty expensive, so we want to bypass the existing solution and just upload to S3.
The main logic of the logShipper is simply to read from os.Stdin and write to a file. It works correctly when running on the local machine, but when running in Docker the goroutine blocks at reader.ReadString('\n') and never returns.
go func() {
    reader := bufio.NewReader(os.Stdin)
    mu.Lock()
    output = openOrCreateOutputFile(&uploadQueue, workPath)
    mu.Unlock()
    for {
        // The error must not be discarded: on EOF (pipe closed) the
        // original version kept looping on empty reads.
        text, err := reader.ReadString('\n')
        if len(text) > 0 {
            now := time.Now().Format("2006-01-02T15:04:05.000000000Z")
            mu.Lock()
            output.file.Write([]byte(fmt.Sprintf("%s %s", now, text)))
            mu.Unlock()
        }
        if err != nil { // io.EOF or a real error: stop reading
            return
        }
    }
}()
I did some research online but could not find out why it's not working. One possibility I'm considering is that Docker redirects stdout somewhere, so the pipe doesn't behave the same way as it does on a plain Linux box (it looks like nothing can be read from program1). Any help or suggestion on why it isn't working is welcome. Thanks.
Edit:
After doing more research I realized it's bad practice to handle the logs this way. I should rely on Docker's log drivers to handle log aggregation and shipping. However, I'm still interested in finding out why nothing is read from the piped source program.
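One cheap way to narrow this down is to stop discarding the error from ReadString and to log what stdin actually is inside the container. This is a hedged diagnostic sketch, not the real logShipper: if stdin is not a pipe from program1 it reports EOF immediately, and if the pipe is open but never written to, the read simply blocks.

package main

import (
    "bufio"
    "errors"
    "fmt"
    "io"
    "os"
)

func main() {
    // Report what stdin actually is inside the container (pipe, tty, /dev/null, ...).
    if fi, err := os.Stdin.Stat(); err == nil {
        fmt.Fprintf(os.Stderr, "stdin mode: %v\n", fi.Mode())
    }
    reader := bufio.NewReader(os.Stdin)
    for {
        line, err := reader.ReadString('\n')
        if len(line) > 0 {
            fmt.Fprintf(os.Stderr, "got %d bytes\n", len(line))
        }
        if errors.Is(err, io.EOF) {
            fmt.Fprintln(os.Stderr, "stdin closed (EOF): nothing is feeding the pipe")
            return
        }
        if err != nil {
            fmt.Fprintln(os.Stderr, "read error:", err)
            return
        }
    }
}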
I'm not sure about the way Docker handles output, but I suggest that you extract the file descriptor with os.Stdin.Fd() and then resort to using the golang.org/x/sys/unix package as follows:
// Long way; for the short way, jump straight down to it.
//
// Retrieve the file descriptor. Note that Fd returns a uintptr,
// which is what unix.FcntlInt expects, while unix.SetNonblock
// takes a plain int.
fd := os.Stdin.Fd()
// Extract the file descriptor flags.
// It's safe to drop the error: if it's there and it's not nil,
// you won't be able to read from Stdin anyway, unless it's a
// notice to try again, which mostly should not be the case.
flags, _ := unix.FcntlInt(fd, unix.F_GETFL, 0)
// Check whether non-blocking reading is enabled.
nb := flags&unix.O_NONBLOCK != 0
fmt.Println("O_NONBLOCK already set:", nb)
// If it is not, just enable it with
// unix.SetNonblock, which is also the
// -- SHORT WAY HERE --
if err := unix.SetNonblock(int(fd), true); err != nil {
    fmt.Println("SetNonblock failed:", err)
}
The difference between the long way and the short way is that the long way will tell you definitively whether or not the problem is the absence of non-blocking mode.
If that is not the case, then I have no other ideas, personally.
I posted this to the Squeak Beginners list too - I'll make sure any answers from there get posted here as well :)
I'm using Squeak 4.2 and working on the Smalltalk end of a named-pipe connection, which sends a message to the named-pipe server with:
msg := 'Here''s Johnny!!!!'.
pipe nextPutAll: msg; flush.
It should then receive an acknowledgement, which will be a 32-byte MD5 hash of the received message (which the Smalltalk app can then verify). It's possible that the named-pipe server has gone away or is otherwise unable to deal with the request, so I'd like to set a timeout on reading the acknowledgement. I've tried using this:
ack := [ pipe next: 32 ] valueWithin: (Duration seconds: 3) onTimeout: [ 'timeout'. ].
and then made the pipe server pause artificially to test the code. But the Smalltalk process blocks on the read and doesn't carry on (even after the timeout), although if I then get the pipe server to send the correct response (after a 5-second delay, for example), the value of ack is 'timeout'. Obviously the timeout did what it's supposed to do, but it couldn't 'unblock' the blocking read on the pipe.
Is there a way to accomplish this even with a blocking FileStream read? I'd rather avoid a busy wait on there being 32 characters available if at all possible.
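For comparison, the same limitation exists outside Smalltalk. Here is a Go sketch of the equivalent pattern (the 32-byte acknowledgement size comes from the question; everything else is an assumption): the timeout only abandons the wait, while the goroutine doing the read stays blocked until data arrives or the pipe is closed, which is essentially what the FileStream read is doing here.

package main

import (
    "fmt"
    "io"
    "os"
    "time"
)

// readAck wraps a blocking read in a goroutine and waits at most
// `timeout` for it to finish. Note that on timeout the read itself is
// NOT interrupted; the goroutine stays blocked until the pipe delivers
// data or is closed.
func readAck(pipe io.Reader, timeout time.Duration) (string, bool) {
    done := make(chan string, 1) // buffered so the late read can still complete
    go func() {
        buf := make([]byte, 32) // the 32-byte MD5 acknowledgement
        if _, err := io.ReadFull(pipe, buf); err != nil {
            done <- ""
            return
        }
        done <- string(buf)
    }()
    select {
    case ack := <-done:
        return ack, true
    case <-time.After(timeout):
        return "", false // gave up waiting, but the read is still pending
    }
}

func main() {
    if ack, ok := readAck(os.Stdin, 3*time.Second); ok {
        fmt.Println("ack:", ack)
    } else {
        fmt.Println("timeout")
    }
}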
This one may come in handy, but not on Windows, I'm afraid:
http://www.samadhiweb.com/blog/2013.07.27.unixdomainsockets.html
I've created an application which communicates with an external device over TCP/IP as a client. I'm using the Synapse library (v40) for communication. Sometimes, however, communication freezes. I managed to get a call stack with JclDebug, showing that, despite the defined timeout, receiving packets is the problem.
Delphi 2009 is used.
Is there anything I can do to fix this issue? Is it a bug in Synapse?
[77297094] KiFastSystemCallRet
[006193FE] blcksock.TBlockSocket.InternalCanRead (Line 2741, "synapse\blcksock.pas")
[0061945C] blcksock.TBlockSocket.CanRead (Line 2764, "synapse\blcksock.pas")
[006185E5] blcksock.TBlockSocket.RecvPacket (Line 2324, "synapse\blcksock.pas")
[0061888F] blcksock.TBlockSocket.RecvTerminated (Line 2410, "synapse\blcksock.pas")
... my own code..
Edit: The blocking line is:
x := synsock.Select(FSocket + 1, @FDSet, nil, nil, TimeVal);
The Select function is from the winsock2 API.
Edit 2: TimeVal is set by the Synapse code:
var
  TimeVal: PTimeVal;
  TimeV: TTimeVal;
..
  TimeV.tv_usec := (Timeout mod 1000) * 1000;
  TimeV.tv_sec := Timeout div 1000;
  TimeVal := @TimeV;
  if Timeout = -1 then
    TimeVal := nil;
Original source code is here: http://synalist.svn.sourceforge.net/viewvc/synalist/trunk/blcksock.pas?revision=154&view=markup
The Timeout value used is 1000 (milliseconds).
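For what it's worth, the same construction can be written with golang.org/x/sys/unix on Linux. This is only a sketch of the pattern (not Synapse code) to show where the block can come from: with a *Timeval of one second, select returns after at most a second, whereas passing nil, as the Timeout = -1 branch does, makes select wait forever.

package main

import (
    "fmt"
    "time"

    "golang.org/x/sys/unix"
)

// canRead mirrors the call shown above: select() on a single descriptor
// with a millisecond timeout. Passing nil instead of &tv would block
// indefinitely.
func canRead(fd int, timeout time.Duration) (bool, error) {
    var readSet unix.FdSet
    readSet.Zero()
    readSet.Set(fd)
    tv := unix.NsecToTimeval(timeout.Nanoseconds())
    n, err := unix.Select(fd+1, &readSet, nil, nil, &tv)
    if err != nil {
        return false, err
    }
    return n > 0 && readSet.IsSet(fd), nil
}

func main() {
    // fd 0 (stdin) used only so the sketch runs; a socket fd works the same way.
    ok, err := canRead(0, 1000*time.Millisecond)
    fmt.Println("readable:", ok, "err:", err)
}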
Edit 3: I have two client threads running, communicating with two different hosts. It looks like only one of them is hanging. The application has been running since Thursday; thread #2 hung after 5 hours, but thread #1 is still running.
As I couldn't find the reason for the freeze, I changed my code a bit and now end up calling RecvTerminated with CRLF as the terminator instead of '>', and it seems to work without stopping.
I am reading a continuous data stream from an API, and at times the program freezes on the following line and eventually times out.
Private BUFFER_SIZE As Integer = 8100
...
Dim bufferread(81000) As Byte
numbytesread = responseStream.Read(bufferread, 0, BUFFER_SIZE)
I guess I could reduce the buffer size and also write to my files more frequently, but I also want to make sure I create good files where data is not snipped off and I reach a delimiter which indicates the end of a post. Any ideas on why this freezes up?
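To show the framing concern separately from the stall, here is a small Go sketch (the URL and the newline delimiter are placeholders) that reads the stream through a buffered reader and only hands over complete, delimiter-terminated posts, so nothing is written half-finished. It does not by itself cure a connection that has gone quiet; that still needs a read deadline or keep-alive check on the underlying connection.

package main

import (
    "bufio"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Placeholder URL and delimiter - adjust to the real API.
    resp, err := http.Get("https://example.com/stream")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // ReadBytes returns only once a full delimiter-terminated post has
    // arrived (or the stream ends), so a write never contains a partial post.
    r := bufio.NewReaderSize(resp.Body, 8192)
    for {
        post, err := r.ReadBytes('\n')
        if len(post) > 0 {
            fmt.Printf("complete post, %d bytes\n", len(post))
        }
        if err != nil {
            log.Println("stream ended:", err)
            return
        }
    }
}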
I was working on a simulator for testing connections and commands sent to a server. The simulator has some counters, such as total commands sent, commands sent successfully, failed sends, connection attempts, successful connections, etc.
The code that I used is the following:
procedure TALClient.SendCommand;
begin
  try
    dlgMain.IncrementIntConx; // Increments connection attempts
    FTCP.Connect(1000);
    if FTCP.Connected then
    begin
      dlgMain.IncrementConections; // Increments successful connections
      try
        dlgMain.IncrementIntSendCommand; // Increments command send attempts (A)
        FTCP.SendCmd(FCmd.FNemo + ' ' + FCmd.FParams); // (Z)
        dlgMain.IncrementSendComm; // Increments sent commands (B)
        try
          FParent.CS.Acquire;
          FParent.FStatistic[Tag, FCmd.FTag].LastCodeResult := FTCP.LastCmdResult.NumericCode;
          FParent.FStatistic[Tag, FCmd.FTag].LastMsgResult := FTCP.LastCmdResult.Text.Text;
          FParent.CS.Release;
          if FTCP.LastCmdResult.NumericCode = 497 then
            Synchronize(UpdateCorrectCounters) // Increments successful responses from the server
          else
            Synchronize(UpdateErrorCounters); // Increments failed responses from the server
        except
          Synchronize(UpdateErrorCounters);
        end;
      except
        dlgMain.IncrementFailCommand; // Increments failed commands (C)
      end;
    end
    else
      Synchronize(UpdateErrorCounters); // Increments failed responses from the server
  finally
    if FTCP.Connected then
      FTCP.Disconnect;
  end;
end;
I have changed the code in many different ways, but it never works correctly.
The big problem is that the total count of sent commands does not equal the successfully sent commands plus the failed ones (in the code: A is not equal to B plus C). There are responses that I have never "seen" at the line marked (Z), maybe "lost" responses...
So, what am I doing wrong?
I guess you are using multiple threads for your simulator. This looks like the classic lost-update problem to me. You have to synchronize the counter-incrementing code.
Incrementing a variable is NOT thread-safe:
Temp := CounterValue;
// If another thread intercepts here, we've got a lost update
Temp := Temp + 1;
CounterValue := Temp;
See this MSDN article to read more about concurrency issues.
If you only need counting, you can use the Windows functions InterlockedIncrement and InterlockedDecrement, and you won't need any locking.
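To make the lost-update effect concrete, here is a small demonstration in Go (used only because it already appears earlier on this page; the idea maps one-to-one onto InterlockedIncrement): the plain counter usually ends up below the expected total, while the atomic one is always exact.

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var plain int64     // incremented without synchronization: loses updates
    var atomicCtr int64 // incremented with the Interlocked-style primitive

    var wg sync.WaitGroup
    for i := 0; i < 8; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 100000; j++ {
                plain++                        // racy read-modify-write
                atomic.AddInt64(&atomicCtr, 1) // equivalent of InterlockedIncrement
            }
        }()
    }
    wg.Wait()
    // plain is usually less than 800000; atomicCtr is always exactly 800000.
    fmt.Println("plain:", plain, "atomic:", atomicCtr)
}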