I created a simple log4j2.properties file:
status = warn
appender.console.type = Console
appender.console.name = LogToConsole
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %level: %msg%n
rootLogger.level = warn
rootLogger.appenderRef.stdout.ref = LogToConsole
I added a VM argument to the run configuration in Eclipse to point to the file.
Here is the resulting command line:
C:\Program Files\Java\jdk-17.0.1\bin\javaw.exe "-Dlog4j2.configurationFile=G:\My Drive\Dev\CrossFigureSolver\target\classes\log4j2.properties"
-Dfile.encoding=UTF-8
-classpath "[...]"
-XX:+ShowCodeDetailsInExceptionMessages com.propfinancing.crossFigure.Solver
But when my program runs, log4j2 does not seem to be using my config file.
I am getting a lot of log output, including messages at the DEBUG level, and they do not match the pattern I specified. Here is an example:
13:16:37.505 [main] DEBUG org.apache.poi.openxml4j.opc.ZipPackage - Save core properties part
I can't find anything beyond instructions on setting log4j2.configurationFile, which I have already done.
Any ideas?
I was able to use -Dlog4j2.debug and mvn dependency:tree to figure out that spring-boot-starter-web was bringing in SLF4J, which was interfering with log4j2.
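For anyone debugging something similar: you can also confirm at runtime which configuration file log4j2 actually loaded. A minimal sketch, assuming log4j-core is the active implementation (the class name is just for illustration):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;

public class Log4jConfigCheck {
    public static void main(String[] args) {
        // Prints the source of the configuration log4j2 actually loaded,
        // e.g. the path to log4j2.properties, or a default if it was not found.
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        System.out.println(ctx.getConfiguration().getConfigurationSource());
    }
}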
I added these exclusions to the spring-boot-starter-web dependency in my pom.xml to avoid loading it, and now my logging works as expected:
<exclusions>
    <exclusion>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-to-slf4j</artifactId>
    </exclusion>
    <exclusion>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
    </exclusion>
    <exclusion>
        <groupId>org.slf4j</groupId>
        <artifactId>jul-to-slf4j</artifactId>
    </exclusion>
</exclusions>
I am surprised Spring was causing the problem, but at least now it is solved.
I am running an application on Google App Engine which generates Dataflow templates and starts them. In one of those pipelines, inside a DoFn, the process generates another pipeline and waits for it to finish before continuing its work. Until a few days ago that was not a problem, but now I get a NoSuchMethodError when pipeline.run() is called.
The stack trace:
java.lang.NoSuchMethodError: org.apache.beam.sdk.common.runner.v1.RunnerApi$FunctionSpec$Builder.setPayload(Lcom/google/protobuf/ByteString;)Lorg/apache/beam/sdk/common/runner/v1/RunnerApi$FunctionSpec$Builder;
at org.apache.beam.runners.dataflow.repackaged.org.apache.beam.runners.core.construction.WindowingStrategyTranslation.toProto(WindowingStrategyTranslation.java:224)
at org.apache.beam.runners.dataflow.repackaged.org.apache.beam.runners.core.construction.WindowingStrategyTranslation.toProto(WindowingStrategyTranslation.java:299)
at org.apache.beam.runners.dataflow.repackaged.org.apache.beam.runners.core.construction.WindowingStrategyTranslation.toProto(WindowingStrategyTranslation.java:285)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator.serializeWindowingStrategy(DataflowPipelineTranslator.java:129)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator.access$1500(DataflowPipelineTranslator.java:114)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$5.groupByKeyHelper(DataflowPipelineTranslator.java:806)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$5.translate(DataflowPipelineTranslator.java:784)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$5.translate(DataflowPipelineTranslator.java:781)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator.visitPrimitiveTransform(DataflowPipelineTranslator.java:442)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:663)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:446)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator.translate(DataflowPipelineTranslator.java:386)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator.translate(DataflowPipelineTranslator.java:173)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:537)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:170)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:303)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:289)
at com.mycompany.projectsign.dataflow.dofn.downloads.something.RunDownloadPipeline.processElement(RunDownloadPipeline.java:150)
The referenced line of my code is the pipeline.run() call.
When I look at the Maven dependencies in Eclipse, the right dependencies (and versions) are present in the project, and the RunnerApi.FunctionSpec.Builder.setPayload(com.google.protobuf.ByteString) method exists, too. I went further and pinned the versions with dependencyManagement:
<dependencyManagement>
    <dependencies>
        ....
        <dependency>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
            <version>3.2.0</version>
        </dependency>
        <dependency>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-core</artifactId>
            <version>1.2.0</version>
        </dependency>
        <dependency>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-protobuf</artifactId>
            <version>1.2.0</version>
        </dependency>
        <dependency>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-stub</artifactId>
            <version>1.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.beam</groupId>
            <artifactId>beam-sdks-common-runner-api</artifactId>
            <version>2.2.0</version>
        </dependency>
        .....
    </dependencies>
</dependencyManagement>
When running in App Engine, or when creating the pipeline from localhost (running directly on Google Dataflow), the templates can be created and started without a problem; but running the created pipeline inside the DoFn throws the same error.
However, when I run the pipeline as a local pipeline directly on my computer (with the DirectRunner), it runs without a problem and creates the other pipelines on Google Dataflow.
I updated the Beam version to 2.2.0.
If some Googler is reading this, this is one of the failing JobIds: 2017-12-11_07_01_17-3122752092943950314
What might be the reason for the NoSuchMethodError? Could it be a conflicting dependency or something else?
Any help is highly appreciated :-)
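One quick way to see whether a conflicting transitive dependency is in play is to filter the Maven dependency tree for the suspect artifacts; a sketch (the -Dincludes values are just the libraries under suspicion here):

mvn dependency:tree -Dverbose -Dincludes=com.google.protobuf:protobuf-java
mvn dependency:tree -Dverbose -Dincludes=org.apache.beam

With -Dverbose, the tree also shows the conflicting versions that Maven omitted.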
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class StarterPipeline {
    public static void main(String[] args) {
        //Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());
        DataflowPipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
        options.setRunner(DataflowRunner.class);
        options.setStagingLocation("gs://bucketname/stageapache");
        options.setTempLocation("gs://bucketname/stageapachetemp");
        options.setProject("projectid");
        Pipeline p = Pipeline.create(options);
        p.apply(TextIO.read().from("gs://bucketname/filename.csv"));
        //p.apply(FileIO.match().filepattern("gs://bucketname/f.csv"));
        p.run();
    }
}
pom.xml
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-core</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
    <version>2.0.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.beam/beam-runners-google-cloud-dataflow-java -->
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
    <version>2.0.0</version>
</dependency>
Error
Dec 08, 2017 5:09:35 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 85 files. Enable logging at DEBUG level to see which files will be staged.
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.beam.sdk.values.PCollection.createPrimitiveOutputInternal(Lorg/apache/beam/sdk/Pipeline;Lorg/apache/beam/sdk/values/WindowingStrategy;Lorg/apache/beam/sdk/values/PCollection$IsBounded;)Lorg/apache/beam/sdk/values/PCollection;
at org.apache.beam.runners.dataflow.PrimitiveParDoSingleFactory$ParDoSingle.expand(PrimitiveParDoSingleFactory.java:68)
at org.apache.beam.runners.dataflow.PrimitiveParDoSingleFactory$ParDoSingle.expand(PrimitiveParDoSingleFactory.java:58)
at org.apache.beam.sdk.Pipeline.applyReplacement(Pipeline.java:550)
at org.apache.beam.sdk.Pipeline.replace(Pipeline.java:280)
at org.apache.beam.sdk.Pipeline.replaceAll(Pipeline.java:201)
at org.apache.beam.runners.dataflow.DataflowRunner.replaceTransforms(DataflowRunner.java:688)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:498)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:153)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:303)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:289)
at com.pearson.apachebeam.StarterPipeline.main(StarterPipeline.java:60)
In the above code, if I add the FileIO/TextIO line I get the above error. Without that line, running it creates a job, which then fails because there are no operations in the pipeline. I am stuck at this point in my development; I migrated to Apache Beam 2.2 to get control over the file we read from storage.
Help will be appreciated. Thanks.
The issue is that your pom.xml is depending on different components of the Beam SDK at different versions: beam-sdks-java-core at 2.2.0, but beam-sdks-java-io-google-cloud-platform and beam-runners-google-cloud-dataflow-java at 2.0.0. They need to be at the same version.
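A minimal fix, aligning everything on 2.2.0 as described above:

<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-core</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
    <version>2.2.0</version>
</dependency>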
I configured two projects to use the latest JaCoCo version (0.7.8) and the latest Arquillian JaCoCo extension (1.0.0.Alpha9), and it works like a charm (for Jenkins and Sonar 6.2)! But I have a bigger project where, when I launch only the Arquillian IT tests, my war archive is created with all its classes and the tests pass. When I run the same tests with IT code coverage, no classes are included in the Arquillian archive and I get this error:
org.jboss.shrinkwrap.api.exporter.ArchiveExportException: Failed to write asset to output: /WEB-INF/...
Caused by: java.lang.RuntimeException: Could not instrument Asset org.jboss.shrinkwrap.api.asset.ClassLoaderAsset
Same configuration as the other projects: Arquillian BOM 1.1.12.Final, Arquillian Suite 1.1.2, container 2.0.2, TestNG, ...
Any help?
Finally, it was indeed a library error: the required asm-debug-all version was omitted because another library (apache-tika-parsers) had already imported an older version (in pom.xml). Adding an exclusion in pom.xml fixed the issue; you can check the dependency hierarchy in Eclipse, for example.
The JaCoCo Arquillian extension uses ASM to instrument code:
<dependency>
    <groupId>org.apache.tika</groupId>
    <artifactId>tika-parsers</artifactId>
    <version>1.9</version>
    <scope>${defaultScope}</scope>
    <exclusions>
        <exclusion>
            <groupId>org.bouncycastle</groupId>
            <artifactId>bcprov-jdk15</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.bouncycastle</groupId>
            <artifactId>bcmail-jdk15</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.ow2.asm</groupId>
            <artifactId>asm-debug-all</artifactId>
        </exclusion>
    </exclusions>
</dependency>
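For those not working in Eclipse, a command-line equivalent of the dependency hierarchy view is to filter the tree for the artifact in question:

mvn dependency:tree -Dincludes=org.ow2.asm:asm-debug-all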
I'm trying to get a Dataflow job to run on Google Cloud. It always fails with:
java.lang.NoSuchMethodError: io.grpc.protobuf.ProtoUtils.marshaller(Lcom/google/protobuf/Message;)Lio/grpc/MethodDescriptor$Marshaller
It's a maven project, here are my dependencies:
<dependencies>
    <dependency>
        <groupId>com.google.cloud.dataflow</groupId>
        <artifactId>google-cloud-dataflow-java-sdk-all</artifactId>
        <version>1.8.0</version>
    </dependency>
    <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-pubsub</artifactId>
        <version>0.4.0</version>
    </dependency>
    <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java</artifactId>
        <version>3.0.0</version>
    </dependency>
</dependencies>
I've tried a bunch of different dependency versions. What am I missing?
It has the same result whether I run via exec:java or via a shaded jar.
Full stack trace:
(e8dbd0c1b8b8a22): java.lang.NoSuchMethodError: io.grpc.protobuf.ProtoUtils.marshaller(Lcom/google/protobuf/Message;)Lio/grpc/MethodDescriptor$Marshaller;
at com.google.iam.v1.IAMPolicyGrpc.<clinit>(IAMPolicyGrpc.java:56)
at com.google.cloud.pubsub.spi.v1.PublisherSettings$Builder.<init>(PublisherSettings.java:487)
at com.google.cloud.pubsub.spi.v1.PublisherSettings$Builder.createDefault(PublisherSettings.java:508)
at com.google.cloud.pubsub.spi.v1.PublisherSettings$Builder.access$000(PublisherSettings.java:402)
at com.google.cloud.pubsub.spi.v1.PublisherSettings.defaultBuilder(PublisherSettings.java:224)
at com.google.cloud.pubsub.spi.DefaultPubSubRpc.<init>(DefaultPubSubRpc.java:138)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubRpcFactory.create(PubSubOptions.java:60)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubRpcFactory.create(PubSubOptions.java:54)
at com.google.cloud.ServiceOptions.rpc(ServiceOptions.java:399)
at com.google.cloud.pubsub.PubSubImpl.<init>(PubSubImpl.java:115)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubFactory.create(PubSubOptions.java:43)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubFactory.create(PubSubOptions.java:38)
at com.google.cloud.ServiceOptions.service(ServiceOptions.java:391)
at com.google.lindsaysmith.titan.DataflowBulkLoadNodes$SendPubSub.sendPubsubMessage(DataflowBulkLoadNodes.java:41)
at com.google.lindsaysmith.titan.DataflowBulkLoadNodes$SendPubSub.processElement(DataflowBulkLoadNodes.java:32)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:190)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerLoggingParDoFn.processElement(DataflowWorkerLoggingParDoFn.java:47)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:55)
at ...
The usual answer in this situation is that you really must use exactly the versions of gRPC and Protocol Buffers declared in the dependencies of the Dataflow SDK. This includes all transitive dependencies, so you may have to suppress gRPC or protobuf dependencies of other libraries so they do not interfere.
You can look up those versions in the Dataflow SDK's published pom.xml (for both gRPC and protobuf). I'm leaving the exact numbers out of this answer so it does not get out of date.
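In practice that usually means adding exclusions so that only the SDK's own gRPC/protobuf end up on the classpath. A sketch against the google-cloud-pubsub dependency from the question (which exclusions you actually need depends on your dependency tree; wildcard exclusions require Maven 3.2.1+):

<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-pubsub</artifactId>
    <version>0.4.0</version>
    <exclusions>
        <!-- keep the gRPC/protobuf versions that the Dataflow SDK declares -->
        <exclusion>
            <groupId>io.grpc</groupId>
            <artifactId>*</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
        </exclusion>
    </exclusions>
</dependency>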
I tried updating from Neo4j 2.3 to 3.0.1. I can start the database as a service, no problem there.
But when I try to build a Neo4j executable and run it, I hit an error I cannot resolve. Under Neo4j 2.x I can build executables fine. Below is my main method:
import java.nio.file.Paths;

import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class StartDB {
    public static void main(String[] args) {
        new GraphDatabaseFactory()
            .newEmbeddedDatabaseBuilder(Paths.get(args[0]).toFile())
            .loadPropertiesFromFile(args[1])
            .newGraphDatabase();
    }
}
I have a simple POM with 1 dependency:
<dependencies>
    <dependency>
        <groupId>org.neo4j</groupId>
        <artifactId>neo4j</artifactId>
        <version>3.0.1</version>
    </dependency>
</dependencies>
The command line arguments are the paths to my DB and config respectively.
Below is the stack trace from this graph instantiation error.
Exception in thread "main" java.lang.RuntimeException: Error starting org.neo4j.kernel.impl.factory.CommunityFacadeFactory, /home/glemmon/UPDB/data/neo4j-3.0.1/data/databases/graph.db
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.initFacade(GraphDatabaseFacadeFactory.java:144)
at org.neo4j.kernel.impl.factory.CommunityFacadeFactory.initFacade(CommunityFacadeFactory.java:40)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:108)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newDatabase(GraphDatabaseFactory.java:99)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.lambda$createDatabaseCreator$206(GraphDatabaseFactory.java:88)
at org.neo4j.graphdb.factory.GraphDatabaseFactory$$Lambda$1/1313922862.newDatabase(Unknown Source)
at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:183)
at test.StartDB.main(StartDB.java:11)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine#5483163c' failed to initialize. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:415)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:62)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:98)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:502)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:433)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:99)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:433)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.initFacade(GraphDatabaseFacadeFactory.java:140)
... 7 more
Caused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'BlockTreeOrds' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [Lucene50]
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
at org.apache.lucene.codecs.PostingsFormat.forName(PostingsFormat.java:112)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:258)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:341)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:104)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:435)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:100)
at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:106)
at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:76)
at org.neo4j.kernel.api.impl.index.partition.IndexPartition.<init>(IndexPartition.java:54)
at org.neo4j.kernel.api.impl.index.AbstractLuceneIndex.open(AbstractLuceneIndex.java:101)
at org.neo4j.kernel.api.impl.schema.LuceneSchemaIndexProvider.indexIsOnline(LuceneSchemaIndexProvider.java:178)
at org.neo4j.kernel.api.impl.schema.LuceneSchemaIndexProvider.getInitialState(LuceneSchemaIndexProvider.java:123)
at org.neo4j.kernel.impl.api.index.IndexingService.init(IndexingService.java:200)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.init(RecordStorageEngine.java:403)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:405)
... 16 more
Since I can start the DB as a service using "bin/neo4j", I thought there must be a discrepancy between the files under /neo4j-community/3.0.1/lib and the files Maven is providing. I've tried running my executable with java -cp "/neo4j-community/3.0.1/lib/*" to no avail. Any help would be appreciated.
The most likely reason is that Maven is not including the Lucene jar file's META-INF/services into the compiled artifact:
META-INF/services/org.apache.lucene.codecs.PostingsFormat
org.apache.lucene.codecs.blocktreeords.BlockTreeOrdsPostingsFormat
org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat
org.apache.lucene.codecs.memory.DirectPostingsFormat
org.apache.lucene.codecs.memory.FSTOrdPostingsFormat
org.apache.lucene.codecs.memory.FSTPostingsFormat
org.apache.lucene.codecs.memory.MemoryPostingsFormat
org.apache.lucene.codecs.simpletext.SimpleTextPostingsFormat
org.apache.lucene.codecs.autoprefix.AutoPrefixPostingsFormat
As you can see, this is where the BlockTreeOrdsPostingsFormat is defined.
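As a quick diagnostic, you can print which PostingsFormat names the running classpath actually exposes (a sketch; the class name is just for illustration):

import org.apache.lucene.codecs.PostingsFormat;

public class PostingsFormatCheck {
    public static void main(String[] args) {
        // Lists the PostingsFormat SPI names visible to Lucene on this classpath;
        // 'BlockTreeOrds' must appear here for the index above to open.
        System.out.println(PostingsFormat.availablePostingsFormats());
    }
}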
You can work around the problem by creating a shaded jar with a ServicesResourceTransformer, which merges the various META-INF/services files from all the included jar files together.
<plugin>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- add Main-Class to manifest file -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>...</mainClass>
                    </transformer>
                    <!-- merge META-INF/services -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
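With that in place, mvn package should produce a merged jar that can be started directly (the artifact name below is just a placeholder for whatever your build produces; the two arguments are the database path and config file that StartDB expects):

mvn package
java -jar target/your-artifact.jar /path/to/graph.db /path/to/neo4j.conf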
Your Maven dependency is not sufficient; change it to:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-community</artifactId>
    <version>3.0.1</version>
    <type>pom</type>
</dependency>
Update: maybe adding this one solves it:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-codecs</artifactId>
    <version>5.5.0</version>
</dependency>
Could this be an encoding issue? My build currently shows the exact same behavior: it runs perfectly fine in Eclipse, but building a jar file produces this error. My source files are encoded in UTF-8, as are all the resources, yet I noticed that the DB itself, the compiled classes, and the jar are ANSI. Creating the database works fine, but using transactions on it utterly fails. I further noticed that in Eclipse I have no charset issues, while executing the jar from PowerShell displays faulty characters. I also hit a NullPointerException where there shouldn't have been one when looking up a node in the DB. All of these are strong indicators that this might be an encoding issue, as the build file itself looks flawless. Sadly it would be quite an effort to convert all my files to ANSI just to see if my hunch is correct, but maybe this was of help.
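If anyone wants to verify the same hunch, here is a tiny sketch that prints the JVM's effective default charset, which is the thing that differs between an Eclipse launch and a jar started from PowerShell:

import java.nio.charset.Charset;

public class CharsetCheck {
    public static void main(String[] args) {
        // The default charset resolved at JVM startup. Eclipse run configurations
        // typically pass -Dfile.encoding=UTF-8, while a jar launched from
        // PowerShell may fall back to the platform ANSI code page.
        System.out.println(Charset.defaultCharset());
        System.out.println(System.getProperty("file.encoding"));
    }
}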