Read a filestream (named pipe) with a timeout in Smalltalk

I posted this to the Squeak Beginners list too; I'll make sure any answers from there get here :)
I'm using Squeak 4.2 and working on the Smalltalk end of a named pipe connection, which sends a message to the named pipe server with:
msg := 'Here''s Johnny!!!!'.
pipe nextPutAll: msg; flush.
It should then receive an acknowledgement, which will be a 32-character MD5 hash of the received message (which the Smalltalk app can then verify). It's possible the named pipe server has gone away or is otherwise unable to deal with the request, so I'd like to set a timeout on reading the acknowledgement. I've tried using this:
ack := [ pipe next: 32 ] valueWithin: (Duration seconds: 3) onTimeout: [ 'timeout'. ].
and then made the pipe server pause artificially to test the code. But the Smalltalk thread blocks on the read and doesn't carry on (even after the timeout), although if I then get the pipe server to send the correct response (after a 5 second delay, for example), the value of ack is 'timeout'. Obviously the timeout did what it's supposed to do, but it couldn't 'unblock' the blocking read on the pipe.
Is there a way to accomplish this even with a blocking FileStream read? I'd rather avoid a busy wait on there being 32 characters available if at all possible.

This one may come in handy, but not on Windows, I'm afraid.
http://www.samadhiweb.com/blog/2013.07.27.unixdomainsockets.html
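Not an answer for Squeak itself, but the general shape of a timed read that avoids both the blocked read and the busy wait is select with a timeout on the pipe's file descriptor: it is the wait that times out, not the read. A minimal sketch of that idea in Python (the function name and fd handling are purely illustrative; whether Squeak's FileStream exposes the underlying descriptor is a separate question):

import os
import select

def read_ack(fd, nbytes=32, timeout=3.0):
    # Wait up to `timeout` seconds for the acknowledgement without blocking forever.
    ready, _, _ = select.select([fd], [], [], timeout)
    if not ready:
        return None              # timed out: the pipe server never answered
    return os.read(fd, nbytes)   # data is waiting, so this read returns promptly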

Related

Multiple TFileStreams, TStreamWriter writing to same file

I started using a TFileStream and TStreamWriter to write simple text logfiles (instead of the old Writeln(T,....)), and I have multiple applications writing to the same logfile.
Each application has its own TFileStream, of course, and they each open the file like this:
FFileStream:=TFileStream.Create(LogName, fmOpenReadWrite+fmShareDenyNone);
FExporter:=TStreamWriter.Create(FFileStream, TEncoding.UTF8);
FExporter.NewLine:=#$0A;
FExporter.AutoFlush:=TRUE;
and write to the file with
FExporter.BaseStream.Seek(0, soFromEnd);
FExporter.Write('['+DateToStr(Now, FDateTimeFormat)+'] ['+TimeToStr(Now, FDateTimeFormat)+'] [#'+Lead0(GetCurrentThreadId, 5)+']: '+EntryText);
FExporter.WriteLine;
The result is somewhat "unsatisfactory": lines are displaced, there are empty lines in between, and it generally does not work as expected.
How would I do this correctly?
Writing multiple lines at the same time from multiple processes can produce interleaved output, because the writes execute in parallel.
You should make sure each entry is written as one contiguous block, so the line break should be sent as part of the Write call (by appending sLineBreak) rather than as a separate WriteLine.
So the write should look like this:
FExporter.BaseStream.Seek(0, soFromEnd);
FExporter.Write('['+DateToStr(Now, FDateTimeFormat)+'] ['+TimeToStr(Now, FDateTimeFormat)+'] [#'+Lead0(GetCurrentThreadId, 5)+']: '+EntryText + System.sLineBreak);
//FExporter.WriteLine;
Update 1:
As the link Oliver posted explains, this can still fail if the message being written is bigger than the OS file sector size and, at that very moment, another process also tries to write a message; in that case the resulting content might still end up mixed.
So doing what I first proposed increases the probability of getting the desired result, but it may not be the solution in 100% of cases.
To be 100% sure of writing a contiguous log to a single file from multiple processes, you should create a dedicated log process that receives messages from the others and is the only one responsible for writing them, serializing the log across processes and threads.
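The shape of that dedicated writer process is easier to see in code than in prose. Here is a small illustrative sketch in Python, with multiprocessing.Queue standing in for whatever IPC you would actually use from Delphi (named pipe, mailslot, message queue, etc.); the point is only that a single process ever touches the file, so entries cannot interleave:

import multiprocessing as mp
from datetime import datetime

def log_writer(queue, path):
    # The only process that opens the log file; it just drains the queue.
    with open(path, "a", encoding="utf-8") as f:
        for line in iter(queue.get, None):   # None is the shutdown sentinel
            f.write(line + "\n")
            f.flush()

def worker(queue, worker_id):
    for i in range(3):
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        queue.put("[%s] [#%05d]: entry %d" % (stamp, worker_id, i))

if __name__ == "__main__":
    q = mp.Queue()
    writer = mp.Process(target=log_writer, args=(q, "app.log"))
    writer.start()
    producers = [mp.Process(target=worker, args=(q, n)) for n in range(4)]
    for p in producers: p.start()
    for p in producers: p.join()
    q.put(None)                              # tell the writer to stop
    writer.join()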

Python FTP hang

Say I want to use FTP in Python using the ftplib. I begin with this:
from ftplib import FTP
ftp = FTP('10.10.10.151')
If the FTP server is not online, however, it will hang right there indefinitely. The only thing that can kick it out is a keyboard interrupt as far as I know. I've tried this:
ftp.connect('10.10.10.151','21', 5)
With the five being a five-second timeout. But the problem here is that I don't know of any way to use that line without first assigning something to ftp, and if the server is offline, the "ftp =" line will hang. So what use is ftp.connect()'s timeout parameter?!?
Does anybody know a workaround or anything? Is there a way to time out the "ftp = FTP(xxx)" command that I haven't found? Thanks.
I'm using Python 2.7 on Linux Mint.
Your call to connect() is redundant, since the FTP() documentation states:
When host is given, the method call connect(host) is made.
Also, since Python 2.6, FTP() does have a timeout parameter:
class ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])
The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if it is not specified, the global default timeout setting will be used).
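So the hang in ftp = FTP('10.10.10.151') goes away once the timeout is handed to the constructor itself. For example:

import socket
from ftplib import FTP

try:
    # The constructor connects immediately, so give it the timeout directly;
    # if 10.10.10.151 is unreachable this raises socket.timeout after ~5 seconds
    # instead of hanging.
    ftp = FTP('10.10.10.151', timeout=5)
except socket.timeout:
    print('FTP server did not respond within 5 seconds')
except socket.error as exc:
    print('Could not connect: %s' % exc)
else:
    ftp.quit()

The two-step form also works, because FTP() with no arguments does not attempt a connection at all: ftp = FTP() followed by ftp.connect('10.10.10.151', 21, 5).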

Why is my Ruby script utilizing 90% of my CPU?

I wrote an admin script that tails a Heroku log and, every n seconds, summarizes averages and notifies me if I cross a certain threshold (yes, I know and love New Relic -- but I want to do custom stuff).
Here is the entire script.
I have never been a master of IO and threads, so I wonder if I am making a silly mistake. I have a couple of daemon threads that run while(true){} loops, which could be the culprit. For example:
# read new lines
f = File.open(file, "r")
f.seek(0, IO::SEEK_END)
while true do
  select([f])
  line = f.gets
  parse_heroku_line(line)
end
I use one daemon to watch for new lines of a log, and the other to periodically summarize.
Does someone see a way to make it less processor-intensive?
This probably runs hot because you never really block while reading from the temporary file. IO::select is a thin layer over POSIX select(2). It looks like you're trying to block until the file is ready for reading, but select(2) considers EOF to be ready ("a file descriptor is also ready on end-of-file"), so you always return right away from select then call gets which returns nil at EOF.
You can get a truer EOF reading and nice blocking behavior by avoiding the thread that writes to the temp file and instead using IO::popen to fork the %x[heroku logs --ps router --tail --app pipewave-cedar] log tailer, connected to a Ruby IO object on which you can loop over gets, exiting when gets returns nil (indicating the log tailer finished). gets on the pipe from the tailer will block when there's nothing to read, and your script will only run as hot as it takes to do your line parsing and reporting.
EDIT: I'm not set up to actually try your code, but you should be able to replace the log tailer thread and your temp file read loop with this code to get the behavior described above:
IO.popen( %w{ heroku logs --ps router --tail --app my-heroku-app } ) do |logf|
  while line = logf.gets
    parse_heroku_line(line) if line =~ /^/
  end
end
I also notice your reporting thread does not do anything to synchronize access to @total_lines, @total_errors, etc. So you have some minor race conditions where you can get inconsistent values from the instance vars that the parse_heroku_line method updates.
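For those counters, the usual fix is to take a lock around both the update and the read. A rough sketch of the idea (shown in Python; Ruby's Mutex#synchronize plays the same role, and the class and method names here are invented for illustration):

import threading

class Stats:
    def __init__(self):
        self._lock = threading.Lock()
        self.total_lines = 0
        self.total_errors = 0

    def record(self, is_error):
        # Update both counters atomically with respect to the reporter thread.
        with self._lock:
            self.total_lines += 1
            if is_error:
                self.total_errors += 1

    def snapshot(self):
        # The reporter reads a consistent pair instead of two racing values.
        with self._lock:
            return self.total_lines, self.total_errors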
select is about whether a read would block. f is just a plain old file, so when you get to the end, reads don't block; they just return nil instantly. As a result select returns instantly rather than waiting for something to be appended to the file, as I assume you're expecting. Because of this you're sitting in a tight busy loop, so high CPU is to be expected.
If you are at EOF (you could either check f.eof? or whether gets returns nil), then you could either start sleeping (perhaps with some sort of backoff) or use something like listen to be notified of filesystem changes.
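A minimal sketch of that sleep-on-EOF pattern, written in Python only to show the shape (the Ruby version built on f.eof?/gets and sleep is structurally the same; follow is an invented name):

import time

def follow(path, delay=0.5, max_delay=5.0):
    # Yield lines as they are appended, sleeping with a simple backoff at EOF
    # instead of spinning in a tight select/gets loop.
    with open(path) as f:
        f.seek(0, 2)                  # start at the end, like IO::SEEK_END
        pause = delay
        while True:
            line = f.readline()
            if line:
                pause = delay         # got real data, reset the backoff
                yield line
            else:
                time.sleep(pause)     # at EOF: back off instead of busy-looping
                pause = min(pause * 2, max_delay)

Each yielded line would then go to parse_heroku_line as before.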

Unmarshalling Error

I get an error whenever I try to make a request to a SOAP service:
Unmarshalling Error: unexpected element (uri:"http://www.domain.com/ws/servicename/", local:"dummyArg"). Expected elements are <{}dummyArg>
The method that I'm calling is defined as:
function GetTxServer(UseWSDL: Boolean; Addr: string; HTTPRIO: THTTPRIO): TxServer;
I have little experience with SOAP, and I couldn't find any useful information on this. Feel free to ask any question that might help track down the issue.
I believe the way I am calling the function is not correct!
I'm using Delphi 2010, and I've called the method like so:
Response := GetTxServer.requestIVULoto(cm);
Use SoapUI (the free version is fine) to consume the WSDL and make sure that you can properly send a request to the server and get a response that makes sense. Then make a "mock" service in SoapUI, to act as the server. Send your Delphi requests to the mockservice (typically done by setting your endpoint to http://localhost:8089 or some such) so that you can inspect the XML that you're sending out. Now you can experiment and determine whether the problem is due to sending out bad requests, the server returning bad/unexpected results, trouble interpreting good results, etc..
Aside from that, I'd guess that you're failing to allocate or populate "cm" correctly. I assume that's your request object.
Also... big tip here....
Use the RIO_BeforeExecute event to debug this. At that point, the SOAPRequest is a string that you can inspect or dump to a file, so you can see what you're sending without having to use SoapUI, Fiddler2, Wireshark, etc.

Reading from TCPSocket is slow in Ruby / Rails

I have this simple piece of code that writes to a socket, then reads the response from the server. The server is very fast (it responds within 5 ms every time). However, while writing to the socket is quick, reading the response from the socket is always MUCH slower. Any clues?
module UriTester
  module UriInfo
    class << self
      def send_receive(socket, xml)
        # socket = TCPSocket.open("service.server.com","2316")
        begin
          start = Time.now
          socket.print(xml) # Send request
          puts "just printed the xml into socket #{Time.now - start}"
        rescue Errno::ECONNRESET
          puts "looks like there is an issue!!!"
          socket = TCPSocket.open("service.server.com","2316")
          socket.print(xml) # Send request
        end
        response = ""
        while (line = socket.recv(1024))
          response += line
          break unless line.grep(/<\/bcap>/).empty?
        end
        puts "SEND_RECEIVE COMPLETED. IN #{Time.now - start}"
        # socket.close
        response
      end
    end
  end
end
Thanks!
Writing to the socket will always be way faster than reading in this case because the write is local to the machine and the read has to wait for a response to come over the network.
In more detail, when the call to write/send returns, all the system is telling you is that N bytes have been successfully copied to the socket's kernel-space buffer. It does not mean that the data has actually been sent across the network yet. In fact, the data can sit in the socket buffer for quite a long time (assuming you're using TCP). This is because of something called the Nagle algorithm, which is intended to make efficient use of network bandwidth. This unseen Nagle delay adds to the round-trip time and to how long it takes to get a response. The server might also delay its response for the same reason, adding even more to the response time.
So when you time the write and it returns quickly, that doesn't actually mean anything useful.
As mentioned earlier, the read from the socket will take much longer, because when you time the read you are actually timing the round-trip travel time plus the server's response time, which is always going to be far slower than the time it takes to copy data from a user-space program to a kernel buffer.
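If that write-side buffering turns out to matter for this protocol, Nagle can be switched off per socket with TCP_NODELAY. A quick sketch of the option in Python, using the host and port from the code in the question (Ruby's Socket#setsockopt accepts the same constants):

import socket

# Host/port taken from the code in the question.
sock = socket.create_connection(("service.server.com", 2316))
# Disable Nagle's algorithm: small writes go out immediately instead of
# being coalesced while the stack waits for outstanding ACKs.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)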
What is it that you are actually trying to measure and why?
