Neo4j 3.0.x PostingsFormat with name 'BlockTreeOrds' does not exist - neo4j

I tried updating from Neo4j 2.3 to 3.0.1. I can start the database as a service; no problem there.
But when I try to build a Neo4j executable and run it, I hit an error I cannot resolve. Under Neo4j 2.x I could build executables fine. Below is my main method:
package test;

import java.nio.file.Paths;

import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class StartDB {
    public static void main(String[] args) {
        new GraphDatabaseFactory()
                .newEmbeddedDatabaseBuilder(Paths.get(args[0]).toFile())
                .loadPropertiesFromFile(args[1])
                .newGraphDatabase();
    }
}
I have a simple POM with 1 dependency:
<dependencies>
    <dependency>
        <groupId>org.neo4j</groupId>
        <artifactId>neo4j</artifactId>
        <version>3.0.1</version>
    </dependency>
</dependencies>
The command line arguments are the paths to my DB and config respectively.
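For reference, a typical invocation under that setup might look like this (the classpath placeholder and paths are illustrative, not from the original post):
java -cp <your-classpath-with-neo4j-jars> test.StartDB /path/to/graph.db /path/to/neo4j.conf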
Below is the stacktrace from this graph instantiation error.
Exception in thread "main" java.lang.RuntimeException: Error starting org.neo4j.kernel.impl.factory.CommunityFacadeFactory, /home/glemmon/UPDB/data/neo4j-3.0.1/data/databases/graph.db
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.initFacade(GraphDatabaseFacadeFactory.java:144)
at org.neo4j.kernel.impl.factory.CommunityFacadeFactory.initFacade(CommunityFacadeFactory.java:40)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:108)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newDatabase(GraphDatabaseFactory.java:99)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.lambda$createDatabaseCreator$206(GraphDatabaseFactory.java:88)
at org.neo4j.graphdb.factory.GraphDatabaseFactory$$Lambda$1/1313922862.newDatabase(Unknown Source)
at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:183)
at test.StartDB.main(StartDB.java:11)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine#5483163c' failed to initialize. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:415)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:62)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:98)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:502)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:433)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:99)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:433)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.initFacade(GraphDatabaseFacadeFactory.java:140)
... 7 more
Caused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'BlockTreeOrds' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [Lucene50]
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
at org.apache.lucene.codecs.PostingsFormat.forName(PostingsFormat.java:112)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:258)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:341)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:104)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:435)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:100)
at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:106)
at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:76)
at org.neo4j.kernel.api.impl.index.partition.IndexPartition.<init>(IndexPartition.java:54)
at org.neo4j.kernel.api.impl.index.AbstractLuceneIndex.open(AbstractLuceneIndex.java:101)
at org.neo4j.kernel.api.impl.schema.LuceneSchemaIndexProvider.indexIsOnline(LuceneSchemaIndexProvider.java:178)
at org.neo4j.kernel.api.impl.schema.LuceneSchemaIndexProvider.getInitialState(LuceneSchemaIndexProvider.java:123)
at org.neo4j.kernel.impl.api.index.IndexingService.init(IndexingService.java:200)
at org.neo4j.kernel.impl.storageengine.impl.recordstorage.RecordStorageEngine.init(RecordStorageEngine.java:403)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:405)
... 16 more
Since I can start the DB as a service using "bin/neo4j", I thought there must be a discrepancy between the files under /neo4j-community/3.0.1/lib and the files Maven is providing. I've tried running my executable with java -cp "/neo4j-community/3.0.1/lib/*" to no avail. Any help would be appreciated.

The most likely reason is that Maven is not including the Lucene codecs jar's META-INF/services registrations in the compiled artifact:
META-INF/services/org.apache.lucene.codecs.PostingsFormat
org.apache.lucene.codecs.blocktreeords.BlockTreeOrdsPostingsFormat
org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat
org.apache.lucene.codecs.memory.DirectPostingsFormat
org.apache.lucene.codecs.memory.FSTOrdPostingsFormat
org.apache.lucene.codecs.memory.FSTPostingsFormat
org.apache.lucene.codecs.memory.MemoryPostingsFormat
org.apache.lucene.codecs.simpletext.SimpleTextPostingsFormat
org.apache.lucene.codecs.autoprefix.AutoPrefixPostingsFormat
As you can see, this is where the BlockTreeOrdsPostingsFormat is defined.
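If you want to confirm what a given classpath actually registers, Lucene exposes the SPI names directly. A minimal diagnostic sketch (run it with the same classpath as the failing jar; the class name is mine):
import org.apache.lucene.codecs.PostingsFormat;

public class ListPostingsFormats {
    public static void main(String[] args) {
        // Prints the PostingsFormat names visible via SPI on this classpath.
        // A correctly packaged classpath should include 'BlockTreeOrds';
        // the failing jar reports only [Lucene50].
        System.out.println(PostingsFormat.availablePostingsFormats());
    }
}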
You can work around the problem by creating a shaded jar with a ServicesResourceTransformer, which merges the various META-INF/services files from all the included jar files.
<plugin>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- add Main-Class to manifest file -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>...</mainClass>
                    </transformer>
                    <!-- merge META-INF/services -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
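After rebuilding with mvn package, you can verify the merge by checking that META-INF/services/org.apache.lucene.codecs.PostingsFormat inside the shaded jar lists the formats shown above rather than just the Lucene50 default.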

Your Maven dependency is not sufficient; change it to:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-community</artifactId>
    <version>3.0.1</version>
    <type>pom</type>
</dependency>
Update: maybe adding this one solves it:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-codecs</artifactId>
    <version>5.5.0</version>
</dependency>

Could this be an encoding issue?
My build currently shows exactly the same behavior: it runs perfectly fine in Eclipse, but the built jar throws this error.
My source files are encoded in UTF-8, as are all the resources.
I noticed that the DB itself, the compiled classes, and the jar are ANSI, though.
Creating the database works fine, but using transactions on it fails utterly.
I further noticed that in Eclipse I have no charset issues, while executing the jar from PowerShell displays faulty characters.
I also found a NullPointerException where there shouldn't have been one when looking up a node in the DB.
All strong indicators that this might be an encoding issue, as the build file itself looks flawless.
Sadly it would be quite an effort to convert all my files to ANSI just to test my hunch, but maybe this was of help.
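If someone wants to rule encoding out cheaply, pinning the build encoding in the POM is far less effort than converting files. A sketch using the standard Maven properties (an assumption worth testing, not a confirmed fix for this particular error):
<properties>
    <!-- Forces a consistent encoding for compilation regardless of platform default -->
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
</properties>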

Related

Apache Beam running Pipeline from DoFn throws NoSuchMethodError

I am running an Application on Google AppEngine which generates Dataflow-Templates and starts them. In one of those pipelines, inside a DoFn, the process generates another pipeline and waits for it to finish, before it continues its work. Until a few days ago, that was not a problem. But now, I get a NoSuchMethodError when pipeline.run() is called.
The stacktrace:
java.lang.NoSuchMethodError: org.apache.beam.sdk.common.runner.v1.RunnerApi$FunctionSpec$Builder.setPayload(Lcom/google/protobuf/ByteString;)Lorg/apache/beam/sdk/common/runner/v1/RunnerApi$FunctionSpec$Builder;
at org.apache.beam.runners.dataflow.repackaged.org.apache.beam.runners.core.construction.WindowingStrategyTranslation.toProto(WindowingStrategyTranslation.java:224)
at org.apache.beam.runners.dataflow.repackaged.org.apache.beam.runners.core.construction.WindowingStrategyTranslation.toProto(WindowingStrategyTranslation.java:299)
at org.apache.beam.runners.dataflow.repackaged.org.apache.beam.runners.core.construction.WindowingStrategyTranslation.toProto(WindowingStrategyTranslation.java:285)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator.serializeWindowingStrategy(DataflowPipelineTranslator.java:129)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator.access$1500(DataflowPipelineTranslator.java:114)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$5.groupByKeyHelper(DataflowPipelineTranslator.java:806)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$5.translate(DataflowPipelineTranslator.java:784)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$5.translate(DataflowPipelineTranslator.java:781)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator.visitPrimitiveTransform(DataflowPipelineTranslator.java:442)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:663)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:655)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:446)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator.translate(DataflowPipelineTranslator.java:386)
at org.apache.beam.runners.dataflow.DataflowPipelineTranslator.translate(DataflowPipelineTranslator.java:173)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:537)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:170)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:303)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:289)
at com.mycompany.projectsign.dataflow.dofn.downloads.something.RunDownloadPipeline.processElement(RunDownloadPipeline.java:150)
The referenced line of my code is the pipeline.run() call.
When I look at the Maven dependencies in Eclipse, the right dependencies (and versions) are present in the project, and the RunnerApi.FunctionSpec.Builder.setPayload(com.google.protobuf.ByteString) method exists, too. I went on to enforce the versions with dependencyManagement:
<dependencyManagement>
    <dependencies>
        ....
        <dependency>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
            <version>3.2.0</version>
        </dependency>
        <dependency>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-core</artifactId>
            <version>1.2.0</version>
        </dependency>
        <dependency>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-protobuf</artifactId>
            <version>1.2.0</version>
        </dependency>
        <dependency>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-stub</artifactId>
            <version>1.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.beam</groupId>
            <artifactId>beam-sdks-common-runner-api</artifactId>
            <version>2.2.0</version>
        </dependency>
        .....
    </dependencies>
</dependencyManagement>
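(For what it's worth, mvn dependency:tree -Dverbose shows which versions Maven actually resolves after mediation, which can help spot a transitive conflict like this.)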
When running in AppEngine or creating the pipeline from localhost (running directly on Google Dataflow) the templates can be created and started without a problem, but running the created Pipeline inside the DoFn throws the same error.
However, when I run the Pipeline as a local pipeline directly on my computer (with DirectRunner), the pipeline runs without a problem and creates the other pipelines on GoogleDataflow.
I updated the beam-version to 2.2.0.
If some Googler is reading this, this is one of the failing JobIds: 2017-12-11_07_01_17-3122752092943950314
What might be the reason for the NoSuchMethodError? Could it be a conflicting dependency or something else?
Any help is highly appreciated :-)

Dataflow fails with java.lang.NoSuchMethodError: io.grpc.protobuf.ProtoUtils.marshaller(Lcom/google/protobuf/Message;)

I'm trying to get a Dataflow job to run on Google Cloud. It always fails with:
java.lang.NoSuchMethodError: io.grpc.protobuf.ProtoUtils.marshaller(Lcom/google/protobuf/Message;)Lio/grpc/MethodDescriptor$Marshaller
It's a Maven project; here are my dependencies:
<dependencies>
    <dependency>
        <groupId>com.google.cloud.dataflow</groupId>
        <artifactId>google-cloud-dataflow-java-sdk-all</artifactId>
        <version>1.8.0</version>
    </dependency>
    <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-pubsub</artifactId>
        <version>0.4.0</version>
    </dependency>
    <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java</artifactId>
        <version>3.0.0</version>
    </dependency>
</dependencies>
I've tried a bunch of different dependency versions. What am I missing?
It has the same result whether I run via exec:java or via a shaded jar.
Full stack trace:
(e8dbd0c1b8b8a22): java.lang.NoSuchMethodError: io.grpc.protobuf.ProtoUtils.marshaller(Lcom/google/protobuf/Message;)Lio/grpc/MethodDescriptor$Marshaller;
at com.google.iam.v1.IAMPolicyGrpc.(IAMPolicyGrpc.java:56)
at com.google.cloud.pubsub.spi.v1.PublisherSettings$Builder.(PublisherSettings.java:487)
at com.google.cloud.pubsub.spi.v1.PublisherSettings$Builder.createDefault(PublisherSettings.java:508)
at com.google.cloud.pubsub.spi.v1.PublisherSettings$Builder.access$000(PublisherSettings.java:402)
at com.google.cloud.pubsub.spi.v1.PublisherSettings.defaultBuilder(PublisherSettings.java:224)
at com.google.cloud.pubsub.spi.DefaultPubSubRpc.(DefaultPubSubRpc.java:138)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubRpcFactory.create(PubSubOptions.java:60)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubRpcFactory.create(PubSubOptions.java:54)
at com.google.cloud.ServiceOptions.rpc(ServiceOptions.java:399)
at com.google.cloud.pubsub.PubSubImpl.(PubSubImpl.java:115)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubFactory.create(PubSubOptions.java:43)
at com.google.cloud.pubsub.PubSubOptions$DefaultPubSubFactory.create(PubSubOptions.java:38)
at com.google.cloud.ServiceOptions.service(ServiceOptions.java:391)
at com.google.lindsaysmith.titan.DataflowBulkLoadNodes$SendPubSub.sendPubsubMessage(DataflowBulkLoadNodes.java:41)
at com.google.lindsaysmith.titan.DataflowBulkLoadNodes$SendPubSub.processElement(DataflowBulkLoadNodes.java:32)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:190)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerLoggingParDoFn.processElement(DataflowWorkerLoggingParDoFn.java:47)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:55)
at ...
The usual answer in this situation is that you really must use exactly the version of gRPC and Protocol Buffers declared in the dependencies of the Dataflow SDK. This includes all transitive dependencies, so you may have to suppress gRPC or protobuf dependencies of other libraries so they do not interfere.
You can see the versions here (gRPC) and here (protobuf). I'm leaving them out of this answer so it does not get out of date.
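A sketch of what that suppression can look like in the POM, using the pubsub artifact from the question (the exclusions shown are illustrative; match them to what the Dataflow SDK actually declares):
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-pubsub</artifactId>
    <version>0.4.0</version>
    <exclusions>
        <!-- Illustrative: stops this library's transitive gRPC/protobuf
             versions from overriding the ones the Dataflow SDK expects -->
        <exclusion>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
        </exclusion>
    </exclusions>
</dependency>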

maven-war-plugin overlay and m2e eclipse plugin

I'm trying to leverage the useful overlay feature of the maven-war-plugin.
In other words, I have a template (packaged as WAR file, template-0.0.1.war) containing tag files, css, js and images.
When I set template-0.0.1.war as a dependency of the myApp project, I get a final myApp.war containing all the files of template-0.0.1.war, overwritten by those with the same path in the myApp project.
This is the behavior I want.
However, I need to introduce a configuration of the maven-war-plugin in the pom.xml of myApp:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.3</version>
    <configuration>
        <webResources>
            <resource>
                <directory>../path/to/another/dir</directory>
            </resource>
        </webResources>
    </configuration>
</plugin>
As soon as I introduce such a configuration of the plugin, I obtain the final myApp.war with all the files from both template-0.0.1.war and the myApp project, but the files of template-0.0.1.war overwrite those with the same path in the myApp project.
This behavior is exactly the opposite of what I expect.
Can someone tell me where I'm wrong?
Thanks in advance.
Edit after the solution was found:
The described issue is due to the interplay of two different mechanisms: the WAR overlay (which works correctly) and the external webResources.
In fact, the external webResources tag points to the template project directory: useless to Maven, but indispensable to "fool" the m2e Eclipse plugin into seeing the custom tags contained in the template.
The solution I adopted is to introduce two different profiles in the plugin section of my pom.xml: one called "eclipse", in which I configure the maven-war-plugin with the webResources, and a second profile (called "standard", activated by default) without the maven-war-plugin.
From the maven war plugin documentation:
By default, the source of the project (a.k.a the current build) is added first (e.g. before any overlay is applied). The current build is defined as a special overlay with no groupId, artifactId. If overlays need to be applied first, simply configure the current build after those overlays.
If you have files in the template that are being overwritten by files in the child WAR, you may want to consider explicitly excluding them in the overlay configuration (see the sketch after the snippet below).
Here's what the documentation says to apply the overlay first:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-war-plugin</artifactId>
            <version>2.3</version>
            <configuration>
                <overlays>
                    <overlay>
                        <groupId>com.example.projects</groupId>
                        <artifactId>my-webapp</artifactId>
                    </overlay>
                    <overlay>
                        <!-- empty groupId/artifactId represents the current build -->
                    </overlay>
                </overlays>
            </configuration>
        </plugin>
    </plugins>
</build>
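And if you want the overlay applied but specific files kept out of it, an overlay can also declare excludes; a sketch reusing the placeholder coordinates from the snippet above (the paths are illustrative):
<overlay>
    <groupId>com.example.projects</groupId>
    <artifactId>my-webapp</artifactId>
    <excludes>
        <!-- Files from the overlay that should never reach the final WAR -->
        <exclude>WEB-INF/web.xml</exclude>
    </excludes>
</overlay>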

Generating multiple javadoc reports using maven-javadoc-plugin and Maven 3

We use a custom doclet to generate a report from custom javadoc tags, and use the Maven site plugin and javadoc plugin to generate both this report and the regular java API docs.
The section of the POM looks like this:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
    <reportSets>
        <reportSet>
            <id>html</id>
            <reports>
                <report>javadoc</report>
            </reports>
        </reportSet>
        <reportSet>
            <id>custom_report</id>
            <configuration>
                ...
            </configuration>
            <reports>
                <report>javadoc</report>
            </reports>
        </reportSet>
    </reportSets>
</plugin>
Under Maven 2, this works fine, but in Maven 3 only one report is generated, that being the last one specified in the POM (confirmed by swapping the reportSet elements).
After some experimenting I discovered that if I changed the regular report's goal from "javadoc" to "test-javadoc", then I got output from both report sets. So the problem seems to be that with Maven 3 I can't generate two reports that use the same javadoc-plugin goal.
Is this a bug, or is there some configuration I've missed? I moved the maven-javadoc-plugin setup from reporting to the configuration of the site plugin as described at http://maven.apache.org/plugins/maven-site-plugin-3.0-beta-3/maven-3.html, to no avail. I'm using Maven 3.0.4, maven-site-plugin 3.0-beta-3 and maven-javadoc-plugin 2.8.1.
Thanks!
It's a bug in the maven-reporting-exec component: report sets are kept in a map keyed by the report goal, so two report sets that use the same goal collide and only one survives.
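A hypothetical simplification of that bug (not the component's actual code) shows why only the last report set survives:
import java.util.HashMap;
import java.util.Map;

public class ReportSetCollision {
    public static void main(String[] args) {
        // Keying report sets by goal name means a second set with the
        // same goal silently replaces the first.
        Map<String, String> reportSetsByGoal = new HashMap<>();
        reportSetsByGoal.put("javadoc", "html");          // first reportSet
        reportSetsByGoal.put("javadoc", "custom_report"); // overwrites "html"
        System.out.println(reportSetsByGoal);             // {javadoc=custom_report}
    }
}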

Maven Antrun and Dependencies

(See edits below.)
The reason I can't just use the classpath is that I need to manage some non-Java libraries, and I'm compiling a non-Java project.
I'm trying to use maven dependencies in an antrun call, following the documentation on the maven site:
http://maven.apache.org/plugins/maven-antrun-plugin/examples/classpaths.html
At the bottom of the page:
<property name="mvn.dependency.jar"
          refid="maven.dependency.my.group.id:my.artifact.id:classifier:jar.path"/>
<echo message="My Dependency JAR-Path: ${mvn.dependency.jar}"/>
I can't make this work no matter what I try. I've tried ${} around the refid contents; I've tried colons, periods, etc. as separators in every way I can think of.
Can anyone tell me what that refid should really look like for some common dependency?
EDIT:
Thanks for your reply.
Using your example, SingleShot, I have the following:
<plugins>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
            <execution>
                <id>create-messages</id>
                <phase>compile</phase>
                <configuration>
                    <tasks>
                        <property name="build.compiler" value="extJavac"/>
                        <property name="compile_classpath" refid="maven.compile.classpath"/>
                        <property name="runtime_classpath" refid="maven.runtime.classpath"/>
                        <property name="test_classpath" refid="maven.test.classpath"/>
                        <property name="plugin_classpath" refid="maven.plugin.classpath"/>
                        <property name="log4j.jar" refid="log4j:log4j:jar"/>
                        <echo message="Where is the Log4J JAR?: ${log4j.jar}"/>
                    </tasks>
                </configuration>
                <goals>
                    <goal>run</goal>
                </goals>
            </execution>
        </executions>
        <dependencies>
            <dependency>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
                <version>1.2.14</version>
            </dependency>
        </dependencies>
    </plugin>
</plugins>
And here's what I get when I run mvn compile:
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Chat Component
[INFO] task-segment: [compile]
[INFO] ------------------------------------------------------------------------
Downloading: http://<redacted>/content/groups/public/log4j/log4j/1.2.14/log4j-1.2.14.pom
2K downloaded
Downloading: http://<redacted>/content/groups/public/log4j/log4j/1.2.14/log4j-1.2.14.jar
358K downloaded
[INFO] [antrun:run {execution: create-messages}]
[INFO] Executing tasks
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error executing ant tasks
Embedded error: Reference log4j:log4j:jar not found.
[INFO] ------------------------------------------------------------------------
[INFO] For more information, run Maven with the -e switch
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3 seconds
[INFO] Finished at: Fri Oct 16 14:54:19 PDT 2009
[INFO] Final Memory: 7M/80M
[INFO] ------------------------------------------------------------------------
EDIT (2):
Looking at the source code linked, I decided to run "mvn -X compile" and grep for "Storing", which turns up a bunch of log output where things are getting stored.
Of interest are the facts that the dependency I'm explicitly specifying isn't showing in the list, and that when I switch to a key based on one of the entries I do see, I still get the error.
Based on the code that SingleShot linked to, and random poking until it worked, here's how I got this problem "working" (in quotes because it feels very tenuous).
Here's the way to make it work properly:
<property name="log4j_location"
          value="${maven.dependency.log4j.log4j.jar.path}"/>
<echo message="${log4j_location}"/>
Some important things to note: you cannot use the Maven dependency as a refid when setting the Ant property; you have to use ${} to get the Maven variable's value.
It appears that the dependency must be in the top-level dependency list; making log4j a dependency of the antrun plugin does not expose it to the plugin in any way that I can see.
All of the path separators are dots (.), not colons (:), which is why I ultimately checked my own answer as correct.
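So the working arrangement declares log4j at the project level (not inside the antrun plugin) and reads it with the dotted property form shown above. For completeness, the project-level declaration (same coordinates as in the question):
<dependencies>
    <!-- Must be a project dependency for antrun to see the dotted property -->
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.14</version>
    </dependency>
</dependencies>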
Soapbox:
I would highly recommend anyone considering Maven use Ant with maven plugins or, even better, use Ant with Ivy instead.
This particular problem is a shining example of the utterly absurd level of difficulty associated with doing anything out of the norm with maven.
I say this having implemented an entire build system based on Maven2, and having also implemented several build systems in Ant. I've used both Maven2 and Ant with complex builds involving Java, Flex/AS3, C# and C++. Maven makes sense for Java projects that have no external dependencies on projects in other languages.
Maven does address some things that aren't addressed implicitly by Ant, but with some up-front planning, Ant is the more flexible, better-documented, and less buggy tool.
If you decide to go the Ant route, make sure to define a structure for your projects and figure out your dependency system (use one).
I think you will ultimately be much happier than with Maven, as you won't spend crunch time trying to fix your build system.
As an addendum to Aaron H.'s answer above, I had to set the plugin's version to 1.3 for that to actually work. I was using it without a specific version and was getting 1.1 (where nothing seems to work).
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.3</version>
    ...
</plugin>
Without an example of what you typed into your POM it's hard to say, but consider a concrete example. Let's say your POM references log4j (groupId=log4j, artifactId=log4j). I believe this is how you would reference that JAR in your Ant file:
<property name="log4j.jar" refid="maven.dependency.log4j:log4j:jar.path"/>
<echo message="Where is the Log4J JAR?: ${log4j.jar}"/>
Ideally you shouldn't have to reference specific JARs, but rather, reference the entire classpath for the appropriate scope, as the somewhat sparse documentation for the plug-in indicates.
If you still have trouble, please post the <dependency> tag for a Maven POM dependency you are using and I can try to be more specific.
I looked at the plugin's code to confirm.
This works for me.
<copy file="${javax.mail:javax.mail-api:jar}" todir="tomcat/lib" />
<copy file="${org.springframework:spring-instrument-tomcat:jar}" todir="tomcat/lib" />
<copy file="${postgresql:postgresql:jar}" todir="tomcat/lib"/>
http://maven.apache.org/plugins/maven-antrun-plugin/examples/classpaths.html has the explanation of how to reference dependencies from the Ant classpath.
There is a bug in the documentation. The path should be of the form:
<property name="mvn.dependency.jar"
          value="${maven.dependency.my.group.id.my.artifact.id.classifier.jar.path}"/>
So the correct key for your log4j dependency would be:
maven.dependency.log4j.log4j.jar.path
Also note that it should be value= rather than refid=, so the full property would be:
<property name="log4j.jar"
          value="${maven.dependency.log4j.log4j.jar.path}"/>
<echo message="My Dependency JAR-Path: ${log4j.jar}"/>
I had an existing Ant build, and we planned to call it from (new) Maven. I encountered problems that I may not remember clearly, but they were related to classpaths, maybe just like yours.
The problem is that the "ant" we use daily is a shell script that sets classpaths, on both *nix and Windows. I have not compared the classpaths it sets with those available to Maven, but my tests showed they don't match, and Ant won't run with some of the paths passed to it from Maven.
What I use instead is the exec-maven-plugin, running Ant as an external program with some arguments applied. This is sure to work, though it adds extra dependencies.
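A sketch of that fallback, assuming the ant executable is on the PATH (the build file and target names are illustrative):
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <executions>
        <execution>
            <phase>compile</phase>
            <goals>
                <goal>exec</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- Runs the existing Ant build as an external program -->
        <executable>ant</executable>
        <arguments>
            <argument>-f</argument>
            <argument>build.xml</argument>
            <argument>dist</argument>
        </arguments>
    </configuration>
</plugin>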
