H5Screate_simple throws exception: dims rank is invalid

I just started experimenting with HDF5 to see if I can use it in a new project. I'm getting the following exception from a call to H5Screate_simple: dims rank is invalid.
I'm developing in Eclipse with Scala and Maven on OS X. I'm using this example to build my test. Here is the failing snippet:
def failTest() {
  val rank: Int = 2
  val dimSizes = Array[Long](1, 1)
  val maxDimSizes = Array[Long](1, 1)
  val dataSpaceID = H5.H5Screate_simple(rank, dimSizes, maxDimSizes)
}
Searching for the error message, I found the code that throws the exception here, see line 81. This indicates the length of the dimSizes array does not match the value of rank, but in the snippet above both are obviously 2. I wondered if this could be some problem with the Array object in Scala (though I've never had problems passing arrays to Java functions before). So I wrote a test snippet in Java ...
public static void failTest() throws Exception {
    int rank = 2;
    long[] dims = { 1, 1 };
    long[] mdims = { 1, 1 };
    long dataSpaceID = H5.H5Screate_simple(rank, dims, mdims);
}
I get the same exception. It all seems pretty straightforward, but I can't see any problem. Can anyone help with this?

The problem was due to an out-of-date package. I set up my pom.xml to fetch the package from Maven Central, but the latest version posted there is 2.6.1 from 2010. The latest version is 3.2.1. For some reason it is not being maintained on Maven Central. I downloaded the latest from here.
I manually installed the jar in my maven repository with:
mvn install:install-file -Dfile=jarhdf5-3.2.1.jar -DgroupId=org.hdfgroup -DartifactId=hdf-java -Dversion=3.2.1 -Dpackaging=jar
Then I updated my pom.xml with:
<dependency>
    <groupId>org.hdfgroup</groupId>
    <artifactId>hdf-java</artifactId>
    <version>3.2.1</version>
</dependency>
The download from the HDF Group was described as an installer, but it didn't seem to actually install anything. So I also had to install the native library manually with:
ln -sf /<path_to_package>/HDFJAVA/3.2.1/lib/libjhdf.3.2.1.dylib /usr/local/lib/libjhdf.3.2.1.dylib
ln -sf /<path_to_package>/HDFJAVA/3.2.1/lib/libjhdf5.3.2.1.dylib /usr/local/lib/libjhdf5.dylib
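For completeness, here is a minimal standalone Java check of the formerly failing call against the updated library. This is a sketch, not a verbatim excerpt: the hdf.hdf5lib package name and long-valued ids below match newer wrapper generations, while older 2.x jars used ncsa.hdf.hdf5lib with int ids, so adjust the import to whatever your jar actually ships.

import hdf.hdf5lib.H5;

public class DataspaceCheck {
    public static void main(String[] args) throws Exception {
        int rank = 2;
        long[] dims = {1, 1};
        long[] maxDims = {1, 1};

        // With the up-to-date jar this no longer throws "dims rank is invalid"
        long dataSpaceId = H5.H5Screate_simple(rank, dims, maxDims);

        // Release the dataspace once we are done with it
        H5.H5Sclose(dataSpaceId);
    }
}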

Related

Load-test Spock spec, running it concurrently multiple times

I have written integration tests in Spock that I would like to reuse for load testing. I haven't had any luck programmatically executing Spock tests. I need to run an entire spec as a single unit, executed concurrently to create load.
Previous posts on Stack Overflow on this topic are obsolete (I tried a bunch of them with no luck).
Example Spec:
class MySpec extends Specification {
    def 'test'() {
        expect: 1 + 1 == 2
    }
}
I want to be able to run this in something like the following (executed, succeeded and failed are AtomicIntegers):
executor.submit(() -> {
    try {
        executed.addAndGet(1);
        Result result = mySpecInstance.run(); // <-- what should this be?
        if (result.wasSuccessful()) {
            succeeded.addAndGet(1);
        } else {
            failed.addAndGet(1);
            log.error("Failures encountered: {}", result.getFailures());
        }
    } catch (RuntimeException e) {
        log.error("Exception when running runner!", e);
        failed.addAndGet(1);
    }
});
I've tried the answer in this post, which throws:
Invalid test class 'my.package.MySpec':
1. No runnable methods
I tried using the new EmbeddedSpecRunner().run(MySpec.class) which throws
groovy.lang.MissingMethodException: No signature of method: spock.util.EmbeddedSpecRunner.runClass() is applicable for argument types: (Class) values: [class my.package.MySpec]
Possible solutions: getClass(), metaClass(groovy.lang.Closure)
I am using JDK8 with Groovy 3.0.4 and spock-2.0-M3-groovy-3.0 (spock-junit4).
Update:
The answer from that post works with Groovy 2.4 and Spock 1.3, but not with Groovy 3.0 and Spock 2.0.
Thanks.
By the way, your error did not occur because you used the wrong Spock version. You can use the module spock-junit4 if you want to run the old JUnit 4 API. I just tried: the method works in Spock 1 and still in Spock 2, even though you maybe should upgrade to something that does not rely on an older API and a compatibility layer.
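For reference, a sketch of the spock-junit4 dependency; I am assuming the version coordinate matches the spock-core artifact you listed:

<dependency>
    <groupId>org.spockframework</groupId>
    <artifactId>spock-junit4</artifactId>
    <version>2.0-M3-groovy-3.0</version>
    <scope>test</scope>
</dependency>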
Your error message is simply caused by the fact that you copied & pasted code from the other answer without fixing it. The guy there wrote MySuperSpock.Class, which causes the error because it must be MySuperSpock.class with a lower-case "c", or under Groovy simply MySuperSpock because the .class is optional there.
The error message even proves that you had JUnit 4 on the class path and everything was fine, otherwise the code importing JUnit 4 API classes would not have compiled in the first place. And the error message also explains what is wrong and suggests a solution:
Exception in thread "main" groovy.lang.MissingPropertyException: No such property: Class for class: de.scrum_master.testing.MyTest
Possible solutions: class
See? Class MyTest does not have any property called Class. And one possible solution (in this case even the correct one) is to use .class. This gives you a hint. BTW, the syntax MyTest.Class looks like an inner class reference or maybe a property reference to the compiler (to me too).
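To make that concrete, here is a minimal sketch of the corrected JUnit 4 runner call from the other answer (the class name SpecLauncher is mine; this assumes JUnit 4 on the classpath, and see the update below about how well this actually runs the tests):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class SpecLauncher {
    public static void main(String[] args) {
        // Lower-case "class" literal, not MySpec.Class
        Result result = JUnitCore.runClasses(MySpec.class);
        System.out.println("Successful: " + result.wasSuccessful());
        System.out.println("Failures: " + result.getFailureCount());
    }
}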
Update: I just took a closer look and noticed that the solution from the other question, which you said was working for Spock 1.3, actually compiles and runs, but the JUnit Core runner does not really run the tests. I tried with tests that print something. Furthermore, the result reports all tests as failed.
For simple cases you could use Spock's EmbeddedSpecRunner, which is used internally to test Spock itself. Under Spock 1.x it should be enough to have JUnit 4 on the test class path; under Spock 2, which is based on the JUnit 5 platform, you need to add these dependencies too because the embedded runner uses them:
<properties>
    <version.junit>5.6.2</version.junit>
    <version.junit-platform>1.6.2</version.junit-platform>
    <version.groovy>3.0.4</version.groovy>
    <version.spock>2.0-M3-groovy-3.0</version.spock>
</properties>

<!-- JUnit 5 Jupiter platform launcher for Spock EmbeddedSpecRunner -->
<dependency>
    <groupId>org.junit.platform</groupId>
    <artifactId>junit-platform-launcher</artifactId>
    <version>${version.junit-platform}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.junit.platform</groupId>
    <artifactId>junit-platform-testkit</artifactId>
    <version>${version.junit-platform}</version>
    <scope>test</scope>
</dependency>
Then you can run a test like this:
def spockRunner = new EmbeddedSpecRunner()
def spockResult = spockRunner.runClass(MyTest)
println "Tests run: " + spockResult.runCount
println "Tests ignored: " + spockResult.ignoreCount
println "Tests failed: " + spockResult.failureCount
BTW, the *Count getter methods are deprecated in Spock 2, but they still work. You can replace them with newer ones easily; I just wanted to post code which runs unchanged in both Spock versions 1.x and 2.x.
Update 2: If you want to run the same test e.g. 10x concurrently, each in its own thread, in Groovy a simple way to do that is:
(1..10).collect { Thread.start { new EmbeddedSpecRunner().runClass(MyTest) } }*.join()
Or maybe a bit easier to read with a few line breaks:
(1..10)
    .collect {
        Thread.start { new EmbeddedSpecRunner().runClass(MyTest) }
    }
    *.join()
I am assuming that you are familiar with collect (similar to map for Java streams) and the star-dot operator *. (call a method on each item in an iterable).

CYTHON: Generating coverage for pyx file

I am trying to generate a code coverage report for a Cython module, and facing issues.
I have simple C++ code: apple.h and apple.cpp files.
The cpp file is as simple as:
#include "apple.h"

using namespace std;

namespace mango {
    apple::apple(int key) {
        _key = key;
    }

    int apple::execute() {
        return _key * _key;
    }
}
I have written basic Cython code over this in "cyApple.pyx":
# cython: linetrace=True
from libcpp.list cimport list as clist
from libcpp.string cimport string
from libc.stdlib cimport malloc

cdef extern from "apple.h" namespace "mango":
    cdef cppclass apple:
        apple(int)
        int execute()

cdef class pyApple:
    cdef apple* aa

    def __init__(self, number):
        self.aa = new apple(number)

    def getSquare(self):
        return self.aa.execute()
My setup.py file:
from distutils.core import setup, Extension
from Cython.Build import cythonize

compiler_directives = {}
define_macros = []
compiler_directives['profile'] = True
compiler_directives['linetrace'] = True
define_macros.append(('CYTHON_TRACE', '1'))

setup(ext_modules=cythonize(Extension(
    "cyApple",
    sources=["cyApple.pyx", "apple.cpp"],
    define_macros=define_macros,
    language="c++",
), compiler_directives=compiler_directives))
This generates a proper library cyApple.so.
I have also written a simple appletest.py file to run test cases:
import cyApple, unittest

class APPLETests(unittest.TestCase):
    def test1(self):
        temp = 5
        apple1 = cyApple.pyApple(temp)
        self.assertEqual(25, apple1.getSquare())

suite = unittest.TestLoader().loadTestsFromTestCase(APPLETests)
unittest.TextTestRunner(verbosity=3).run(suite)
The test works fine.
The problem is that I need to get code coverage for my cyApple.pyx file.
When I run "coverage report -m" I get the following error, and coverage only for my test file, not the pyx file:
cyApple.pyx   NotPython: Couldn't parse '/home/final/cyApple.pyx' as Python source: 'invalid syntax' at line 2

Name           Stmts   Miss  Cover   Missing
--------------------------------------------
appletest.py       8      1    88%   9
I tried to look online for some help, so I added a .coveragerc file with these contents:
[run]
plugins = Cython.Coverage
On running "coverage run appletest.py" i get errors :
...
...
...
ImportError: No module named Coverage
I want to generate a simple code coverage report for my pyx file. How can I do it in a simple way?
I reinstalled Cython 0.28.3.
Now on running "coverage run appletest.py" I am getting this error:
test1 (__main__.APPLETests) ... Segmentation fault (core dumped)
This is my apple.h file:
#include <iostream>

namespace mango {
    class apple {
    public:
        apple(int key);
        int execute();
    private:
        int _key;
    };
}
You must update Cython. The documentation states:
Since Cython 0.23, line tracing (see above) also enables support for
coverage reporting with the coverage.py tool.
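In other words, the usual fix is to upgrade and rerun. A sketch of that workflow (exact version pins aside), keeping your .coveragerc with the Cython.Coverage plugin as shown above:

pip install --upgrade Cython coverage
coverage run appletest.py
coverage report -m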
I created a simple helper script that:
- Runs all Cython files in the directory
- Creates a linetrace version of the Cython code, but cleans up after execution, so this won't interfere with production versions
- Produces a Cython annotated report with line coverage, and takes care of .coveragerc creation and all the technical stuff
- Works well with pyximport, no need to build setup.py and rebuild Cython modules
Caveats:
- It works only for Linux (but it's doable to adapt it for another OS)
- Built for Anaconda Python
Project:
https://github.com/alexveden/cython_coverage_script
Source code:
https://github.com/alexveden/cython_coverage_script/blob/master/cy_test/tests/run_cython_coverage_annotations.py
Hopefully this helps someone, because I spent an enormous amount of time figuring out how to deal with coverage under Cython. It turned out to be a non-trivial task, because the Cython coverage plugin has issues with mapping .pyx paths and pyximport doesn't support coverage directives.

Basic and enhanced dependencies give different results in Stanford coreNLP

I am using the dependency parsing of CoreNLP for a project of mine. The basic and enhanced dependencies give different results for a particular dependency.
I used the following code to get enhanced dependencies.
val lp = LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
lp.setOptionFlags("-maxLength", "80")
val rawWords = edu.stanford.nlp.ling.Sentence.toCoreLabelList(tokens_arr:_*)
val parse = lp.apply(rawWords)
val tlp = new PennTreebankLanguagePack()
val gsf:GrammaticalStructureFactory = tlp.grammaticalStructureFactory()
val gs:GrammaticalStructure = gsf.newGrammaticalStructure(parse)
val tdl = gs.typedDependenciesCCprocessed()
For the following example:
Account name of ramkumar.
I use the simple API to get basic dependencies. The dependency I get between (account, name) is (compound). But when I use the above code to get the enhanced dependency, I get the relation between (account, name) as (dobj).
What is the fix for this? Is this a bug or am I doing something wrong?
When I run this command:
java -Xmx8g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner,parse -file example.txt -outputFormat json
With your example text in the file example.txt, I see compound as the relationship between both of those words for both types of dependencies.
I also tried this with the simple API and got the same results.
You can see what simple produces with this code:
package edu.stanford.nlp.examples;

import edu.stanford.nlp.semgraph.SemanticGraphFactory;
import edu.stanford.nlp.simple.*;

import java.util.*;

public class SimpleDepParserExample {
    public static void main(String[] args) {
        Sentence sent = new Sentence("...example text...");
        Properties props = new Properties();
        // use sent.dependencyGraph() or sent.dependencyGraph(props, SemanticGraphFactory.Mode.ENHANCED) to see enhanced dependencies
        System.out.println(sent.dependencyGraph(props, SemanticGraphFactory.Mode.BASIC));
    }
}
I don't know anything about any Scala interfaces for Stanford CoreNLP. I should also note that my results are from the latest code on GitHub, though I presume Stanford CoreNLP 3.8.0 would produce similar results. If you are using an older version of Stanford CoreNLP, that could be a potential cause of the error.
But running this example in various ways using Java I don't see the issue you are encountering.
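If you do upgrade, the Maven coordinates for 3.8.0 look like this (the models jar is the same coordinate with a models classifier):

<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.8.0</version>
</dependency>
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.8.0</version>
    <classifier>models</classifier>
</dependency>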

Simple Dropwizard 0.7.1 App Failing over Optional QueryParam w/ Java 8

I decided to return to Dropwizard after a very long affair with Spring. I quickly got an absolute barebones REST service built, and it runs without any problems.
I'm using Dropwizard 0.7.1 and Java 1.8; the only POM entries are the dropwizard-core dependency and the Maven compiler plugin to enforce Java 1.8, as recommended by the Dropwizard user manual.
However, as soon as I try to add an Optional QueryParam to the basic controller, the application fails to start with the following error (cut for brevity):
INFO [2015-01-03 17:44:58,059] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:
GET / (edge.dw.sample.controllers.IndexController)
ERROR [2015-01-03 17:44:58,158] com.sun.jersey.spi.inject.Errors: The following errors and warnings have been detected with resource and/or provider classes:
SEVERE: Missing dependency for method public java.lang.String edge.dw.sample.controllers.IndexController.index(java.util.Optional) at parameter at index 0
Exception in thread "main" javax.servlet.ServletException: com.sun.jersey.spi.container.servlet.ServletContainer-6c2ed0cd#330103b7==com.sun.jersey.spi.container.servlet.ServletContainer,1,false
The code for the controller is as follows:
#Path("/")
public class IndexController {
#GET
#Timed
public String index(#QueryParam("name") Optional<String> name) {
String saying = "Hi";
if(name != null && name.isPresent()) {
saying += " " + name.get();
}
return saying;
}
}
If I remove Optional from the mix, the application runs just fine. If I replace the Optional-specific code with null checks, it works perfectly.
Am I missing something fundamental here? Both Google Guava's Optional and java.util.Optional fail with the same error. (And yes, I did narrow it down to the Optional object.)
A quick Google/SO search yielded nothing useful, but feel free to point me to a resource I may have missed
Thanks in advance!
Moments after posting this, I found that the issue was my use of Java 1.8. If using Java 1.8, I have to add the Java8Bundle to my app:
POM Entry:
<dependency>
    <groupId>io.dropwizard.modules</groupId>
    <artifactId>dropwizard-java8</artifactId>
    <version>0.7.0-1</version>
</dependency>
And code in the Application class:
@Override
public void initialize(Bootstrap<SampleConfiguration> bootstrap) {
    bootstrap.addBundle(new Java8Bundle());
}
See: https://github.com/dropwizard/dropwizard-java8
This enables both Google Guava Optional and java.util.Optional to work just fine.
If I revert to Java 1.7 and use the Google Guava Optional, it works just fine as well, and I don't have to include the Java8Bundle. I'll opt for the Java8Bundle for now, though, as using Java 8 features is valuable to me :)
Cheers!

java.lang.LinkageError: "javax/activation/DataHandler"

So, I recently upgraded our Grails app from version 1.3.7 to 2.3.4. I'm now getting an exception in a SOAP handler that attempts to extract the message content and log it to the DB. This worked in 1.3.7, but I'm assuming that some new dependency or something has messed with the classpath.
The code looks like this:
private String extractSOAPMessage(SOAPMessageContext smc) {
    Source source = smc.getMessage().getSOAPPart().getContent()
    TransformerFactory factory = TransformerFactory.newInstance()
    Transformer transformer = factory.newTransformer()
    transformer.setOutputProperty(OutputKeys.METHOD, "xml")
    java.io.StringWriter writer = new StringWriter()
    Result result = new StreamResult(writer)
    transformer.transform(source, result)
    return writer.toString()
}
The exception I'm seeing is:
Caused by: java.lang.LinkageError: loader constraint violation: loader (instance of <bootloader>) previously initiated loading for a different type with name "javax/activation/DataHandler"
It happens on this line:
Source source = smc.getMessage().getSOAPPart().getContent()
It looks like the culprit is the getSOAPPart() call.
Note that I am using the 1.1.1 version of the cxf plugin for Grails. Any help on this would be greatly appreciated. I've found several similar issues with solutions, but none of them have been for the "javax/activation/DataHandler", so I am not sure what's going on here.
I suspect something has a transitive dependency on the activation library, which you need to exclude; try running a dependency-report. Since Java 6 that JAR has been unnecessary, as it's built in to the core Java class library, but many things still declare dependencies on it so they can work on Java 5 (or because they date back to when Java 5 was still in widespread use).
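If the dependency-report shows, say, the cxf plugin dragging it in, the exclusion in grails-app/conf/BuildConfig.groovy would look roughly like this; a sketch, not tested against your build, so adjust it to whichever dependency the report actually names:

grails.project.dependency.resolution = {
    // ...
    plugins {
        compile(":cxf:1.1.1") {
            // activation has been part of core Java since 6; exclude the stale JAR
            excludes "activation"
        }
    }
}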
