Suds is not reusing cached WSDLs and XSDs, although I expect it to

I'm pretty sure suds is not caching my WSDLs and XSDs like I expect it to. Here's how I know that cached objects are not being used:
It takes about 30 seconds to create a client: client = Client(url)
The logger entries show consistent digestion of the XSD and WSDL files during the entire 30 seconds
Wireshark is showing consistent TCP traffic to the server storing the XSD and WSDL files during the entire 30 seconds
I see the files in the cache being updated each time I run my program
I have a small program that creates a suds client, sends a single request, gets the response, then ends. My expectation is that each time I run the program, it should fetch the WSDL and XSD files from the file cache, not from the URLs. Here's why I think that:
client.options.cache.duration is set to ('days', 1)
client.options.cache.location is set to c:\docume~1\mlin\locals~1\temp\suds and I see the cache files being generated and re-generated each time I run the program
For a moment I thought that maybe the cache is not reused between runs of a program, but that wouldn't make sense: if the cache were per-run only, an in-memory cache would do just fine and there would be no need for a file cache at all
Am I misunderstanding how suds caching is supposed to work?
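For reference, here is a minimal sketch of how I understand the setup is supposed to be wired, assuming suds' default ObjectCache with a one-day duration (the WSDL URL is a placeholder):
from suds.client import Client
from suds.cache import ObjectCache

# File-backed cache with a one-day duration, matching the options above;
# by default it lives under the system temp directory.
cache = ObjectCache(days=1)
client = Client('http://example.com/service?wsdl', cache=cache)
print(client.options.cache.duration)   # -> ('days', 1)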

The problem is in the suds library itself. In cache.py, although ObjectCache.get() is always getting a valid file pointer, it's hitting an exception (EOFError) doing pickle.load(fp). When that happens, the file is just downloaded again.
Here's the sequence of events:
DocumentReader.open():
Trying http://172.28.50.249/wsdl/billingServices/v3.0/RequestScrubAddress.wsdl
Loading ObjectCache 51012453-document
Loading pickled object...
Exception raised:
Got None from cache
Downloading... Done
Saving FileCache 51012453-document... Done
So it doesn't really matter that the new cache file was saved, because the same thing happens the next time I run. This happens for ALL of the WSDL and XSD files.
I fixed that problem by opening the cache file in binary mode when reading and writing. Specifically, the changes I made were in cache.py:
1) In FileCache.put(), change this line:
f = self.open(fn, 'w')
to
f = self.open(fn, 'wb')
2) In FileCache.getf(), change this line:
return self.open(fn)
to
return self.open(fn, 'rb')
I don't know the codebase well enough to know if these changes are safe, but with them the objects are pulled from the file cache, the service still runs successfully, and loading the client went from 16 seconds down to 2.5 seconds. Much better if you ask me.
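If you'd rather not edit the installed library, the same change can be applied at runtime. This is only a sketch, under the assumption that FileCache.open() is the single place put() and getf() obtain their file handles, as described above:
import suds.cache

_original_open = suds.cache.FileCache.open

def _binary_open(self, fn, *args):
    # Force binary mode so the pickled documents round-trip intact.
    mode = args[0] if args else 'r'
    if 'b' not in mode:
        mode += 'b'
    return _original_open(self, fn, mode)

suds.cache.FileCache.open = _binary_open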
Hopefully this fix, or something similar, can be introduced back into the suds main line. I've already sent this to the suds mailing list (fedora-suds-list at redhat dot com).

Related

Cannot open new files on SD with NodeMCU FATFS module

I'm using the NodeMCU environment to write a Lua script for the ESP8266. It uses the FATFS module to create several files with the following pattern:
LOG_xxxxyymmdd_hhmmss.txt, where xxxx is an incremental file number and the rest is a timestamp.
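To make the pattern concrete, here is an illustrative Python snippet that reproduces the names below (note the 21-character base name, well beyond the classic FAT 8.3 limit):
from datetime import datetime

def log_name(seq, ts):
    # LOG_<file#><yymmdd>_<hhmmss>.txt, as described above
    return "LOG_{:04d}{:%y%m%d}_{:%H%M%S}.txt".format(seq, ts, ts)

print(log_name(193, datetime(2000, 1, 1, 0, 59, 9)))  # -> LOG_0193000101_005909.txt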
I was running a test: create one file, fill it with a small amount of data (~200 bytes), close it, and repeat X times. After the first hour had passed, files stopped being created. Here is the list of files created successfully:
https://pastebin.com/0xn3MBQt
And here are some filenames it couldn't create:
LOG_0193000101_005909.txt
LOG_0194000101_005908.txt
LOG_0000000000_000000.txt
LOG.txt
I'm really troubled by this. Is there some kind of filename size limit, such that searching for the file during open() fails because of a name ambiguity or something like that? If anyone has a clue about this, please tell me so I can test it. Thanks.

Jmeter doesn't save response data or headers

I'm building some simple load testing for my API, and to make sure everything is on the up and up I'd also like to review the response headers and data. But when I run my test from the command line and then re-open the GUI, add a View Results Tree listener, and load the created file, the response headers and response data are empty.
I entered the following values into user.properties (I also tried uncommenting those values in jmeter.properties and changing them there, with the same result):
jmeter.save.saveservice.output_format=csv (tried xml, omitting it, jtl)
jmeter.save.saveservice.data_type=false
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data.on_error=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=false
jmeter.save.saveservice.assertions=false
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.hostname=true
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.timestamp_format=HH:mm:ss
jmeter.save.saveservice.default_delimiter=;
jmeter.save.saveservice.print_field_names=true
But still no luck when opening the result file. I tried naming the file after the -l flag results.csv, .jtl, even .xml, but none of them show me the headers and data.
I'm running it locally on Mac OS X 10.10 with JMeter 2.12, using the following command:
java -jar ApacheJMeter.jar -n -t /Users/[username]/Documents/API_test.jmx -l results_15.jtl
I don't know if it's not even saving that data, or if the Listeners can't read it or if I've been cursed but any help is appreciated.
It works fine if I add a Listener and run it using the GUI, but if I try to run my larger tests that way, well, things don't end well for anyone.
So my question is:
How do I save the response header and data to a file when using the command line, and how do I then view said file in jmeter?
Add a Simple Data Writer (under Listeners) and output to a file (NB: a different file than your log). Under the 'Configure' button there are all sorts of options for what to save. One of the checkboxes is Save Response Header.
This file can get huge if you're saving a bunch of things for every request; one strategy is to check everything but only save for errors. But do whatever works for you.
You can also turn on "Functional Test Mode" which will produce a large file but will contain pretty much anything you might need to debug your test.
Beware, this can create a very large JTL file, so don't forget to turn it off for your large test runs! See "JMeter Maven mojo throws IllegalArgumentException with large JTL file".
Alternatively, use a View Results Tree listener in the GUI on a small sample of the requests and check the request/response there (including headers) to debug or check your test.
Add the lines below to your user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
Then restart JMeter, since the properties are only read at startup.
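The same settings can also be passed per run with -J instead of editing the properties files (paths shortened here):
java -jar ApacheJMeter.jar -n -t API_test.jmx -l results.xml -Jjmeter.save.saveservice.output_format=xml -Jjmeter.save.saveservice.response_data=true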

Delphi Text Files get NULLS (0's) written to them instead of text

Unfortunately this question may be a bit vague. I have a problem that I am finding difficult to describe: it is intermittent and I cannot reproduce it myself. I am just hoping that someone else has seen something like it before.
My application has quite a lot of text and ini files that get written when it closes down. Typically this would be in response to a Close event, but may also be triggered by a WM_ENDSESSION. Unfortunately at the moment I am not sure if both or only one of these events can result in the problem I am about to describe, because I have been unable to reproduce this problem myself.
The issue I have is that for some users, some of the text and ini files end up being written as NULLs. The file sizes look about right, but instead of text, every character is written as a x00. So instead of 500 bytes of regular ASCII text I end up with 500 x00's. My application log file can sometimes end up with nulls written to it as well, although the x00's in the log file do not necessarily appear at the exact same time as the x00's written to the config files.
For my files I am using TMemIniFile or TStringList, which means that ultimately TStrings.SaveToFile is being called for all of my config files:
sl := TStringList.Create;
try
  SourceList.GetSpecificSubset(sl);
  AppLogLogLine('Commands: Saving Always Available list. List has ' + IntToStr(sl.Count) + ' commands.');
  sl.SaveToFile(fn);
finally
  sl.Free;
end;
But I also have instances where I already have a TStringList in memory and just call SaveToFile on it. For TMemIniFile the structure looks similar to the above. In some instances I may have an outer loop writing multiple lists; some of those files are written correctly, and some are full of 00's.
EDIT: GetSpecificSubset is simply a function that will populate "sl" with a list of command names. I have "GetAllUsersCommands", "GetHiddenCommands", "GetAlwaysVisibleCommands" etc. Note that my log file also writes this kind of thing, as a check for how big those lists are:
16/10/2013 11:17:49 AM: Commands: Saving Any User list. List has 8 commands.
16/10/2013 11:17:49 AM: Commands: Saving Always Visible list. List has 17 commands.
16/10/2013 11:17:49 AM: Commands: Saving Always Hidden list. List has 2 commands.
I accidentally left the logging line out of the code above (it's there now). So this log line is the last thing written before calling TStrings.SaveToFile, and at that point it thinks it has data. Even if somehow each line of text were NULLs, I would still expect to see x13x10 (CR/LF) in the files, but that is not happening.
Here's a screen cap from a HEX editor:
EDIT 2: I just realised I left off a very important piece of information. This is only intermittent. It works 99% of the time. When saving files at shutdown it might not even be all files. Even if I have a loop saving multiple similar files, some may work fine and others may fail.

Connecting Ruby(Rails) to Nodejs through a pipe

I have a rails app that needs to make use of a Javascript library on the server. Up until now I have been running system commands from rails to nodejs whenever this is necessary. However, I have a particularly computationally intensive task that has made it necessary to cache data to speed it up. I also have to pass large inputs to the node program. As a result I've hit the buffer size of inputs to the node program. I am currently just sending it to separate node processes multiple times in chunks small enough to fit in the buffer, but this is causing performance problems because I now no longer get to take advantage of caching over as many runs. I would like to use a pipe to do this, but my pipe hits the buffer as well, and I don't know how to empty it. So far I have...
# ruby file
output = []
node_pipe = IO.popen("nodejs /home/user/node_program.js", "w+")
10_000.times do |time|
  node_pipe.write("a lot of stuff")
  # Here I would like to read the contents and push them to the output array,
  # but still be able to write to the same process in the next loop iteration
  # to take advantage of the cache.
end
// node_program.js
var input = process.stdin;
var cache = {};
input.resume();
input.on('data', function(chunk) {
  cache[chunk] = library_function(chunk);
  console.log(String(other_library_function(chunk)));
});
Any suggestions?
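One pattern that usually avoids filling the pipe buffer is to make the exchange line-delimited and read one reply per write, so neither side's buffer grows without bound. A minimal sketch of the idea (shown in Python for illustration; the command and payload come from the question, and it assumes node_program.js is changed to emit exactly one output line per input line):
import subprocess

child = subprocess.Popen(
    ["nodejs", "/home/user/node_program.js"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

output = []
for _ in range(10_000):
    child.stdin.write("a lot of stuff\n")   # one request per line
    child.stdin.flush()                     # push the request through the pipe now
    output.append(child.stdout.readline())  # drain the reply before writing again

child.stdin.close()
child.wait()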

EmbeddedReadOnlyGraphDatabase complaining about locked database

Exception in thread "main" java.lang.IllegalStateException: Database locked.
at org.neo4j.kernel.InternalAbstractGraphDatabase.create(InternalAbstractGraphDatabase.java:289)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:227)
at org.neo4j.kernel.EmbeddedReadOnlyGraphDatabase.<init>(EmbeddedReadOnlyGraphDatabase.java:81)
at org.neo4j.kernel.EmbeddedReadOnlyGraphDatabase.<init>(EmbeddedReadOnlyGraphDatabase.java:72)
at org.neo4j.kernel.EmbeddedReadOnlyGraphDatabase.<init>(EmbeddedReadOnlyGraphDatabase.java:54)
at QueryNodeReadOnly.main(QueryNodeReadOnly.java:55)
This is with Neo4j version 1.8.2. I've written a program that opens the db in read-only mode, runs a query, and sleeps for a while before exiting.
Here is the relevant code:
graphDb = new EmbeddedReadOnlyGraphDatabase(dbname); // line 55 - the exception
......
......
......
......
......
if (sleepVal > 0)
    Thread.sleep(sleepVal);
I reckon I should not be getting this error. There are only 2 processes that open the db, both in read-only mode. In fact, it should work even if I open the db while another process has it open for writing.
We disallow two databases accessing the same files on disk at the same time - even in read-only mode.
The reason is that while we do not allow you to modify the database in read-only mode, Lucene will still write to disk when servicing your read requests, and having two instances access the same index files leads to race conditions and index corruption.
Why do you want two instances accessing the same files at the same time anyway? What is your use case?
You can't make multiple connections to an embedded database. Maybe you should consider using the REST server.
