Cython: generating coverage for a .pyx file

I am trying to generate a code coverage report for a Cython module and am running into issues.
I have simple C++ code in apple.h and apple.cpp.
The .cpp file is as simple as:
#include "apple.h"

using namespace std;

namespace mango {

apple::apple(int key) {
    _key = key;
}

int apple::execute() {
    return _key * _key;
}

}
I have written a basic Cython wrapper over this in cyApple.pyx:
# cython: linetrace=True
from libcpp.list cimport list as clist
from libcpp.string cimport string
from libc.stdlib cimport malloc

cdef extern from "apple.h" namespace "mango":
    cdef cppclass apple:
        apple(int)
        int execute()

cdef class pyApple:
    cdef apple* aa

    def __init__(self, number):
        self.aa = new apple(number)

    def getSquare(self):
        return self.aa.execute()
My setup.py file:
from distutils.core import setup, Extension
from Cython.Build import cythonize

compiler_directives = {}
define_macros = []

compiler_directives['profile'] = True
compiler_directives['linetrace'] = True
define_macros.append(('CYTHON_TRACE', '1'))

setup(ext_modules=cythonize(Extension(
    "cyApple",
    sources=["cyApple.pyx", "apple.cpp"],
    define_macros=define_macros,
    language="c++",
), compiler_directives=compiler_directives))
This generates a proper library cyApple.so.
I have also written a simple appletest.py file to run test cases:
import cyApple, unittest

class APPLETests(unittest.TestCase):
    def test1(self):
        temp = 5
        apple1 = cyApple.pyApple(temp)
        self.assertEqual(25, apple1.getSquare())

suite = unittest.TestLoader().loadTestsFromTestCase(APPLETests)
unittest.TextTestRunner(verbosity=3).run(suite)
The test works fine.
The problem is that I need code coverage for my cyApple.pyx file.
When I run "coverage report -m", I get the following error, and coverage only for my test file, not the .pyx file:
cyApple.pyx NotPython: Couldn't parse '/home/final/cyApple.pyx' as Python source: 'invalid syntax' at line 2
Name           Stmts   Miss  Cover   Missing
---------------------------------------------
appletest.py       8      1    88%   9
I looked online for some help, so I added
a .coveragerc file with the following contents:
[run]
plugins = Cython.Coverage
On running "coverage run appletest.py" i get errors :
...
...
...
ImportError: No module named Coverage
I want to generate a simple code coverage report for my .pyx file. How can I do that in a simple way?
I then reinstalled Cython 0.28.3.
Now, on running "coverage run appletest.py",
I get this error:
test1 (__main__.APPLETests) ... Segmentation fault (core dumped)
This is my apple.h file:
#include <iostream>

namespace mango {

class apple {
public:
    apple(int key);
    int execute();

private:
    int _key;
};

}

You must update Cython. The documentation states:
Since Cython 0.23, line tracing (see above) also enables support for
coverage reporting with the coverage.py tool.
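For reference, here is a minimal sketch of how the pieces fit together once a recent Cython (>= 0.23) is installed. It mirrors the setup.py and .coveragerc from the question; the CYTHON_TRACE_NOGIL macro and the binding directive are extras that the Cython documentation and most coverage examples add, and the commands at the end are just the ordinary coverage.py workflow:

# setup.py -- enable line tracing in the generated C++ and define the trace macros
from distutils.core import setup, Extension
from Cython.Build import cythonize

extensions = [Extension(
    "cyApple",
    sources=["cyApple.pyx", "apple.cpp"],
    define_macros=[("CYTHON_TRACE", "1"), ("CYTHON_TRACE_NOGIL", "1")],
    language="c++",
)]

setup(ext_modules=cythonize(
    extensions,
    compiler_directives={"linetrace": True, "binding": True},
))

# .coveragerc, in the same directory, enables the plugin shipped with Cython:
#     [run]
#     plugins = Cython.Coverage
#
# Rebuild and run:
#     python setup.py build_ext --inplace --force
#     coverage run appletest.py
#     coverage report -m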

I created a simple helper script that:
Runs all Cython files in the directory
Creates a line-traced version of the Cython code, but cleans up after execution, so it won't interfere with production versions
Produces a Cython annotated report with line coverage, and takes care of .coveragerc creation and the other technical details
Works well with pyximport; no need to write a setup.py and rebuild the Cython modules
Caveats
It works only on Linux (but it should be doable to adapt it for another OS)
Built for Anaconda Python
Project:
https://github.com/alexveden/cython_coverage_script
Source code:
https://github.com/alexveden/cython_coverage_script/blob/master/cy_test/tests/run_cython_coverage_annotations.py
Hopefully this helps someone, because I spent an enormous amount of time figuring out how to deal with coverage under Cython. It turned out not to be a trivial task, because the Cython coverage plugin has issues with mapping .pyx paths and pyximport doesn't support coverage directives.
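As a side note, once a .coveragerc with the Cython plugin is in place (and a recent Cython is installed), the same report can also be driven from Python instead of the command line. A small sketch using the coverage.py API, reusing the file names from the question above; importing appletest runs its module-level test runner:

import coverage

cov = coverage.Coverage()   # reads .coveragerc, including the Cython.Coverage plugin
cov.start()

import appletest            # the unittest suite in the question runs at import time

cov.stop()
cov.save()
cov.report(show_missing=True)           # console report, like "coverage report -m"
cov.html_report(directory="htmlcov")    # annotated HTML report, covering cyApple.pyx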


Android: linking to opencv results in SIGBUS (signal SIGBUS: illegal alignment) when exception is thrown

I have to work with OpenCV in an Android project. Everything worked fine until I recently had to use C++ exception_ptr as well.
Since then, the use of std::rethrow_exception causes a SIGBUS (signal SIGBUS: illegal alignment).
I created a minimal example to illustrate the problem. The example application only links to OpenCV 3.4.4 but does not use any OpenCV function. If you remove the linking to OpenCV in CMakeLists.txt, the app works fine and doesn't crash. If you add it, however, the app crashes as soon as the native method triggerException() is called.
In my implementation the example application calls this method when a button is pressed.
native-lib.cpp:
#include <jni.h>
#include <string>
#include <exception>

/*
 * code based on: https://en.cppreference.com/w/cpp/error/exception_ptr
 */
std::string handle_eptr2(std::exception_ptr eptr)
{
    try {
        if (eptr) {
            std::rethrow_exception(eptr);
        }
    } catch (const std::exception &e) {
        return "Caught exception \"" + std::string(e.what()) + "\"\n";
    }
    return "Something went wrong";
}

extern "C" JNIEXPORT jstring JNICALL
Java_com_example_user_exceptiontest_MainActivity_triggerException(
        JNIEnv *env,
        jobject /* this */) {
    std::exception_ptr eptr;
    try {
        std::string().at(1); // this generates an std::out_of_range
    } catch (...) {
        eptr = std::current_exception(); // capture
    }
    std::string res = handle_eptr2(eptr);
    return env->NewStringUTF(res.c_str());
}
CMakeLists.txt
cmake_minimum_required(VERSION 3.4.1)

set(OPENCV_DIR $ENV{HOME}/lib/OpenCV-android-sdk/sdk)
include_directories(${OPENCV_DIR}/native/jni/include)

add_library(native-lib
        SHARED
        src/main/cpp/native-lib.cpp)

find_library(log-lib
        log)

target_link_libraries(
        native-lib
        # Removing the following line will make everything work as expected (what() message is returned)
        ${OPENCV_DIR}/native/libs/${ANDROID_ABI}/libopencv_java3.so # <--- critical line
        ${log-lib})
build.gradle
To use exceptions and C++17 support, I added the following lines to the configuration created by Android Studio.
externalNativeBuild {
    cmake {
        arguments '-DANDROID_TOOLCHAIN=clang',
                  '-DANDROID_STL=c++_shared'
        cppFlags "-std=c++1z -frtti -fexceptions"
    }
}
Stacktrace:
<unknown> 0x004c4e47432b2b01
___lldb_unnamed_symbol15856$$libopencv_java3.so 0x0000007f811c4a58
_Unwind_Resume_or_Rethrow 0x0000007f811c4fc8
__cxa_rethrow 0x0000007f81181e50
__gnu_cxx::__verbose_terminate_handler() 0x0000007f811b1580
__cxxabiv1::__terminate(void (*)()) 0x0000007f81181c54
std::terminate() 0x0000007f81181cc0
std::rethrow_exception(std::exception_ptr) 0x0000007f802db2cc
handle_eptr2(std::exception_ptr) native-lib.cpp:35
::Java_com_example_user_exceptiontest_MainActivity_triggerException(JNIEnv *, jobject) native-lib.cpp:58
While searching for a solution I looked at the opencv sources (https://github.com/opencv/opencv/blob/master/modules/core/src/parallel.cpp) and stumbled upon this code snippet:
#ifndef CV__EXCEPTION_PTR
# if defined(__ANDROID__) && defined(ATOMIC_INT_LOCK_FREE) && ATOMIC_INT_LOCK_FREE < 2
# define CV__EXCEPTION_PTR 0 // Not supported, details: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58938
I'd understand if this changed the behavior of OpenCV, but I don't get how it might influence code that does not use OpenCV at all.
EDIT: It is also worth mentioning that linking to OpenCV has no impact if I use this code directly (without JNI) in a Linux (x86_64) desktop setting (clang, libc++, OpenCV 3.4.4). Hence my conclusion that it is an Android-specific problem.
Does anyone have an idea how to solve this issue or what to try next?
Thanks a lot in advance!
OpenCV is compiled with the GNU STL runtime while you are using the libc++ STL (c++_shared). See "One STL per app" in the NDK documentation. You will need to either use gnustl (you will need to go back to NDK r15 for that) or build OpenCV with the libc++ STL.
In order to build OpenCV with c++_static you can try to follow this comment from the OpenCV bug tracker:
cmake -GNinja -DINSTALL_ANDROID_EXAMPLES=ON \
    -DANDROID_EXAMPLES_WITH_LIBS=ON -DBUILD_EXAMPLES=ON -DBUILD_DOCS=OFF \
    -DWITH_OPENCL=OFF -DWITH_IPP=ON \
    -DCMAKE_TOOLCHAIN_FILE=${ANDROID_NDK}/build/cmake/android.toolchain.cmake \
    -DANDROID_TOOLCHAIN=clang "-DANDROID_STL=c++_static" \
    -DANDROID_ABI=x86 -DANDROID_SDK_TARGET=18 ../opencv
Followed by
make && make install

Basic and enhanced dependencies give different results in Stanford coreNLP

I am using CoreNLP's dependency parsing for a project of mine. The basic and enhanced dependencies give different results for a particular dependency.
I used the following code to get the enhanced dependencies:
val lp = LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
lp.setOptionFlags("-maxLength", "80")
val rawWords = edu.stanford.nlp.ling.Sentence.toCoreLabelList(tokens_arr:_*)
val parse = lp.apply(rawWords)
val tlp = new PennTreebankLanguagePack()
val gsf:GrammaticalStructureFactory = tlp.grammaticalStructureFactory()
val gs:GrammaticalStructure = gsf.newGrammaticalStructure(parse)
val tdl = gs.typedDependenciesCCprocessed()
For the following example,
Account name of ramkumar.
I use the simple API to get basic dependencies. The dependency I get between
(account, name) is (compound). But when I use the above code to get the enhanced dependencies, I get the relation between (account, name) as (dobj).
What is the fix for this? Is this a bug or am I doing something wrong?
When I run this command:
java -Xmx8g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner,parse -file example.txt -outputFormat json
With your example text in the file example.txt, I see compound as the relationship between both of those words for both types of dependencies.
I also tried this with the simple API and got the same results.
You can see what simple produces with this code:
package edu.stanford.nlp.examples;

import edu.stanford.nlp.semgraph.SemanticGraphFactory;
import edu.stanford.nlp.simple.*;

import java.util.*;

public class SimpleDepParserExample {

    public static void main(String[] args) {
        Sentence sent = new Sentence("...example text...");
        Properties props = new Properties();
        // use sent.dependencyGraph() or sent.dependencyGraph(props, SemanticGraphFactory.Mode.ENHANCED) to see enhanced dependencies
        System.out.println(sent.dependencyGraph(props, SemanticGraphFactory.Mode.BASIC));
    }
}
I don't know anything about any Scala interfaces for Stanford CoreNLP. I should also note that my results are from the latest code on GitHub, though I presume Stanford CoreNLP 3.8.0 would produce similar results. If you are using an older version of Stanford CoreNLP, that could be a potential cause of the error.
But running this example in various ways using Java, I don't see the issue you are encountering.

H5Screate_simple throws exception: dims rank is invalid

I just started experimenting with HDF5 to see if I can use it in a new project. I'm getting the following exception from a call to H5Screate_simple: dims rank is invalid.
I'm developing in Eclipse with Scala and Maven on OS X. I'm using this example to build my test. Here is the failing snippet:
def failTest() {
    val rank: Int = 2
    val dimSizes = Array[Long](1, 1)
    val maxDimSizes = Array[Long](1, 1)
    val dataSpaceID = H5.H5Screate_simple(rank, dimSizes, maxDimSizes)
}
Searching for the error message, I found the code that throws the exception here (see line 81). This indicates that the length of the dims array does not match the value of rank, but in the snippet above both are obviously 2. I wondered if this could be some problem with the Array object in Scala (though I've never had a problem passing arrays to Java functions before), so I wrote a test snippet in Java ...
public static void failTest() throws Exception {
    int rank = 2;
    long[] dims = { 1, 1 };
    long[] mdims = { 1, 1 };
    long dataSpaceID = H5.H5Screate_simple(rank, dims, mdims);
}
I get the same exception. It all seems pretty straightforward, but I can't see any problem. Can anyone help with this?
The problem was due to an out-of-date package. I set up my pom.xml to fetch the package from Maven Central, but the latest version posted there is 2.6.1, from 2010. The actual latest version is 3.2.1; for some reason it is not being maintained at Maven Central. I downloaded the latest from here.
I manually installed the jar in my maven repository with:
mvn install:install-file -Dfile=jarhdf5-3.2.1.jar -DgroupId=org.hdfgroup -DartifactId=hdf-java -Dversion=3.2.1 -Dpackaging=jar
Then updated my pom.xml with
<dependency>
    <groupId>org.hdfgroup</groupId>
    <artifactId>hdf-java</artifactId>
    <version>3.2.1</version>
</dependency>
The download from hdfgroup was described as an installer, but it didn't seem to actually install anything. So I also had to install the native library manually with:
ln -sf /<path_to_package>/HDFJAVA/3.2.1/lib/libjhdf.3.2.1.dylib /usr/local/lib/libjhdf.3.2.1.dylib
ln -sf /<path_to_package>/HDFJAVA/3.2.1/lib/libjhdf.3.2.1.dylib /usr/local/lib/libjhdf5.dylib

javac will not compile enum (Windows Sun 1.6 --> OpenJDK 1.6)

package com.scheduler.process;

public class Process {

    public enum state {
        NOT_SUBMITTED, SUBMITTED, BLOCKED, READY, RUNNING, COMPLETED
    }

    private state currentState;

    public state getCurrentState() {
        return currentState;
    }

    public void setCurrentState(state currentState) {
        this.currentState = currentState;
    }
}
package com.scheduler.machine;

import com.scheduler.process.Process;
import com.scheduler.process.Process.state;

public class Machine {
    com.scheduler.process.Process p = new com.scheduler.process.Process();
    state s = state.READY; //fails if I don't also explicitly import Process.state
    p.setCurrentState(s); //says I need a declarator id after 's'... this is wrong.
    p.setCurrentState(state.READY);
}
I modified the example to try to focus on the issue. I cannot change the state in this code. Eclipse suggests importing Process.state, as I had in my previous example, but this doesn't work either. It allows state s = state.READY, but the call to p.setCurrentState(s); fails, as does p.setCurrentState(state.READY);.
The problem continued. Following Oleg's suggestions I tried more permutations:
package com.scheduler.machine;

import com.scheduler.process.Process;
import com.scheduler.process.Process.*;

public class Machine {
    com.scheduler.process.Process p = new com.scheduler.process.Process();
    public state s = Process.state.READY;
    p.setCurrentState(s);
    p.setCurrentState(state.READY);
}
Okay. It's clear now that I'm a candidate for lobotomy.
package com.scheduler.machine;

import com.scheduler.process.Process;
import com.scheduler.process.Process.state;

public class Machine {

    public void doStuff() {
        com.scheduler.process.Process p = new com.scheduler.process.Process();
        state s = state.READY; //fails if I don't also explicitly import Process.state
        p.setCurrentState(s); //says I need a declarator id after 's'... this is wrong.
        p.setCurrentState(state.READY);
    }
}
I needed to have a method in the class, but we're still missing something (probably obvious) here. When I go to the command line and run javac on the Machine class AFTER compiling Process, I still get the following error:
mseil#context:/media/MULTIMEDIA/Scratch/Scratch/src/com/scheduler/machine$ javac Machine.java
Machine.java:3: package com.scheduler.process does not exist
import com.scheduler.process.Process;
^
So I guess the question now becomes: what idiot thing am I missing that is preventing me from compiling this by hand, which Eclipse is doing for me behind the scenes?
======
Problem solved here:
Java generics code compiles in eclipse but not in command line
This has just worked for me:
Download latest Eclipse
Create new project
Create two packages com.scheduler.process and com.scheduler.machine
Create class Process in package com.scheduler.process and class Machine in com.scheduler.machine, and copy their contents from your post, modifying them to conform to Java language syntax (i.e. moving the statements in Machine into a method, as in your last snippet).
Everything compiles right away.
------ to answer the previous version of the question ------
To answer the question as it is right now: you need to either
import com.scheduler.process.Process.status or import com.scheduler.process.Process.* and refer to status as just status
or
import com.scheduler.process.* or import com.scheduler.process.Process and refer to status as Process.status
------ to answer the original version of the question ------
You can't import classes that are not inside some package. You just can't. It is a compile time error to import a type from the unnamed package.
You don't need to import anything if your classes are in the same package, or if all of your classes are packageless.
If Process class was inside some package it would be possible to import just its status inner class: import a.b.c.Process.status would work just fine.
None of your Windows/Linux migration issues have anything to do with Java or the exceptions that you see. import Process.state; will fail on any OS because you can't import classes that don't belong to any package.
Eclipse doesn't use the Sun JDK by default. I would assume that you are using Eclipse's built-in compiler, as Sun's JDK and the OpenJDK are almost identical.
Java code compiles and runs exactly the same on Windows and Linux most of the time (unless you use one of the few platform-specific operations).
I suspect you are not building the code the same way, and that when you compile Machine, the Process class has not been compiled.
I suggest you use a standard build system like Maven or Ant, and it will build the same everywhere. Failing that, run Eclipse on Linux, or just use the same .class files you use on Windows, as they don't need to be recompiled in any case.
BTW: You don't need to import Process.state, as it is not used and it's in the same package (so you wouldn't need to even if you did use it).

Where to find the Glib object in Vala?

I just started learning Vala. I tried the following program from the Vala tutorial:
class Demo.Hello : Glib.Object
{
    public static int main( string[] args )
    {
        stdout.printf("Hello, Vala!\n");
        return 0;
    }
}
and got this when I compiled.
$ valac hello.vala
hello.vala:1.20-1.23: error: The symbol `Glib' could not be found
class Demo.Hello : Glib.Object
^^^^
Compilation failed: 1 error(s), 0 warning(s)
If I remove Glib. from Glib.Object, i.e. leave it as just class Demo.Hello : Object, then everything works fine. But all the programs in the tutorial use Glib.Object. What's wrong here? I searched for answers but could not find any. Here is the Vala version info:
$ valac --version
Vala 0.5.2
And I am running the latest version of CentOS.
The namespace is called GLib (with a capital L), not Glib.
The correct name is GLib. But you can just as well leave the "GLib." out and just write "Object"; the GLib namespace is implicitly used in all Vala apps.
For other namespaces you can use "using", for example using Gtk;.
