I started working with a StAX parser three months ago. While debugging I used to be able to see the data/text inside the StAX events, which helped me a lot with my task. But for the past two days there has been weird behavior: when I debug the project, I can only see events like this... [Stax Event #1], [Stax Event #4], [Stax Event #1], [Stax Event #4]
This is making debugging hard. I am using Woodstox StAX with Java 1.6.
These are the dependencies I am using:
<dependency>
<groupId>javax.xml</groupId>
<artifactId>jsr173</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>org.codehaus.woodstox</groupId>
<artifactId>wstx-asl</artifactId>
<version>4.0.6</version>
</dependency>
<dependency>
<groupId>stax</groupId>
<artifactId>stax-api</artifactId>
<version>1.0.1</version>
</dependency>
<dependency>
<groupId>com.sun.xml.stream</groupId>
<artifactId>sjsxp</artifactId>
<version>1.0.2</version>
</dependency>
Do I need to change my settings to get back the normal behavior?
You have two StAX implementations, sjsxp and Woodstox, so it's somewhat random which one is actually used. Most likely you'll want to remove the dependency on sjsxp.
You also have two StAX APIs: jsr173 and stax-api. Definitely avoid the former, it's buggy! With Java 6 or later you may (and probably should) also remove the latter, since the StAX API is bundled with the JDK.
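For example (a sketch based on the coordinates above, not tested against your build), with Java 6 supplying the API the dependency list could shrink to Woodstox alone:
<!-- Woodstox as the only StAX implementation; the JDK provides javax.xml.stream -->
<dependency>
<groupId>org.codehaus.woodstox</groupId>
<artifactId>wstx-asl</artifactId>
<version>4.0.6</version>
</dependency>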
What code do you use to print the output? The StAX API always lets you access whatever data an event carries, but it may not show up by simply calling event.toString().
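(As an aside, event type 1 is START_ELEMENT and 4 is CHARACTERS in javax.xml.stream.XMLStreamConstants, so what you are seeing is element/text pairs.) Here is a minimal, generic sketch, independent of your setup, of pulling the data out of events explicitly instead of relying on toString(); the input file name is just a placeholder:
import java.io.FileInputStream;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.events.XMLEvent;
public class StaxDebug
{
    public static void main(String[] args) throws Exception
    {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        // "input.xml" is a placeholder for whatever document you are parsing
        XMLEventReader reader = factory.createXMLEventReader(new FileInputStream("input.xml"));
        while (reader.hasNext())
        {
            XMLEvent event = reader.nextEvent();
            if (event.isStartElement())
            {
                System.out.println("Start element: " + event.asStartElement().getName());
            }
            else if (event.isCharacters())
            {
                System.out.println("Text: " + event.asCharacters().getData());
            }
        }
        reader.close();
    }
}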
I need help with RHPAM Business Central. Does anybody know how to add print statements or logs in DMNs for debugging the DMN flow?
You can define your own DMNRuntimeEventListener.
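For completeness, a minimal sketch of such a listener (the package and class name match the hypothetical org.acme.MyDMNListener used in the configuration below; recent Drools releases provide default no-op implementations for the other callbacks, older ones may require you to implement them all):
package org.acme;
import org.kie.dmn.api.core.event.AfterEvaluateDecisionEvent;
import org.kie.dmn.api.core.event.DMNRuntimeEventListener;
public class MyDMNListener implements DMNRuntimeEventListener
{
    @Override
    public void afterEvaluateDecision(AfterEvaluateDecisionEvent event)
    {
        // Log which decision was just evaluated; the event also exposes the
        // DMNResult if you want to print the evaluated values as well.
        System.out.println("Evaluated decision: " + event.getDecision().getName());
    }
}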
The listener is usually wired into the Drools runtime via the org.kie.dmn.runtime.listeners.$LISTENER_NAME property: https://docs.drools.org/8.33.0.Final/drools-docs/docs-website/drools/DMN/index.html#dmn-properties-ref_dmn-models:~:text=org.kie.dmn.runtime.listeners.%24LISTENER_NAME
e.g. with a configuration such as:
-Dorg.kie.dmn.runtime.listeners.mylistener=org.acme.MyDMNListener
or alternatively with an analogous configuration in kmodule.xml:
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
<configuration>
<property key="org.kie.dmn.runtime.listeners.mylistener" value="org.acme.MyDMNListener"/>
</configuration>
</kmodule>
This latter option is the one you would likely prefer on RHPAM Business Central.
You might find this tutorial helpful: https://www.youtube.com/watch?v=WzstCC3Df0Q
I am writing a Java program that uses an embedded Neo4j graph with TinkerPop. Here's the relevant section of my pom
<dependency>
<groupId>org.apache.tinkerpop</groupId>
<artifactId>neo4j-gremlin</artifactId>
<version>3.3.1</version>
</dependency>
<dependency>
<groupId>org.neo4j</groupId>
<artifactId>neo4j-tinkerpop-api-impl</artifactId>
<version>0.7-3.2.3</version>
</dependency>
<dependency>
<groupId>org.neo4j</groupId>
<artifactId>neo4j-tinkerpop-api</artifactId>
<version>0.1</version>
</dependency>
I want to add a configuration option to set the page cache size when initializing the graph. My PS Old Gen heap space is filling up.
Currently, I'm opening the Neo4j graph by passing it an org.apache.commons.configuration.Configuration object. I'm trying to set two properties: the directory and the page cache size. When I run my program, the "gremlin.neo4j.directory" property is processed, but "dbms.memory.pagecache.size" is not, according to the graph's log file. The log file's first line is this:
2019-03-20 14:38:36.155+0000 WARN [o.n.i.p.PageCache] The
dbms.memory.pagecache.size setting has not been configured. It is
recommended that this setting is always explicitly configured, to
ensure the system has a balanced configuration. Until then, a computed
heuristic value of 8310519808 bytes will be used instead.
Using jvisualvm and jconsole, I can see that the memory in the PS Old Gen is filling up with objects related to page caching, so I'm trying to throttle how much data Neo4j caches.
Here's my code:
Configuration configuration = new BaseConfiguration();
configuration.addProperty("gremlin.neo4j.directory", "tmp/mygraph");
configuration.addProperty("dbms.memory.pagecache.size", "500m");
myGraph = Neo4jGraph.open(configuration);
Any idea what I'm doing wrong?
I think you need to prefix the Neo4j-specific configuration keys with gremlin.neo4j.conf, like this:
Configuration configuration = new BaseConfiguration();
configuration.addProperty("gremlin.neo4j.directory", "tmp/mygraph");
configuration.addProperty("gremlin.neo4j.conf.dbms.memory.pagecache.size", "500m");
myGraph = Neo4jGraph.open(configuration);
I want to inject x-b3-traceid and x-b3-spanid into my logs with a pattern as shown:
property name="PATTERN" value="%h %l %u [%date{dd/MMM/yyyy:HH:mm:ss.SSS}] "%r" %s %b "%i{Referer}" "%i{User-Agent}" [trace=%responseHeader{X-B3-TraceId},span=%i{X-B3-SpanId}] %D"
For Zipkin, there are libraries available like:
brave-context-log4j2 (https://github.com/openzipkin/brave/tree/master/context/log4j2)
Spring Cloud Sleuth (https://cloud.spring.io/spring-cloud-sleuth/)
How can I add that while using jaeger?
The best way to move forward with Jaeger is not to use the Jaeger client at all: Jaeger can collect Zipkin spans.
https://www.jaegertracing.io/docs/1.8/getting-started/#migrating-from-zipkin
You should take advantage of this: use the Sleuth + Zipkin dependency below and exclude the Jaeger agent jars from your Spring Boot app.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
The above will send Zipkin spans to http://localhost:9411 by default. You can easily point it at your Jaeger server instead by overriding the Zipkin base URL in your Spring Boot app:
spring.zipkin.base-url=http://your-jaegar-server:9411
Sleuth will do all the heavy lifting, and the default logging will include the span and trace IDs.
In the log4j2.xml file, all you have to mention is
[%X]
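For example, a minimal log4j2.xml appender sketch; the appender itself is illustrative, the relevant piece is %X, which prints the whole MDC map that Sleuth populates with the trace and span IDs:
<Console name="Console" target="SYSTEM_OUT">
<!-- %X prints the full MDC map, including the traceId/spanId put there by Sleuth -->
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level [%X] %logger{36} - %msg%n"/>
</Console>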
I'll be uploading a working example of this approach into my GitHub and sharing the link.
EDIT 1:
You can find the sample code here:
https://github.com/anoophp777/spring-webflux-jaegar-log4j2
General Problem: I'm testing a web application at a large company with a service oriented architecture. External services often fail in our test environment due to background noise. This prevents integration tests for our service from running properly since our service won't work unless calls to these external services are succeeding. For this reason we'd like the ability to mock responses from external services so that we don't have to depend on them and can test our own service in isolation.
There's a tool for this called Mockey which we are hoping to use. It's a Java program that runs through an embedded Jetty server and acts as a proxy for service calls. Our web application is re-configured to call Mockey instead of the external services. Mockey is then configured to provide dynamically mocked responses to these calls depending on the URL and header data that get passed in.
In order to utilize this tool we'd like the ability to start Mockey during the pre-integration-test phase of a Maven lifecycle so that it will be available for use during the integration-test phase.
Specific Problem: In order to start and shut down Mockey during the pre-integration-test and post-integration-test phases of the Maven lifecycle, I've written a Maven 3 plugin called mockey-maven-plugin:
The mockey-maven-plugin pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycompany.mockey</groupId>
<artifactId>mockey-maven-plugin</artifactId>
<packaging>maven-plugin</packaging>
<version>1.3</version>
<dependencies>
<!-- Maven plugin dependencies -->
<dependency>
<groupId>org.apache.maven</groupId>
<artifactId>maven-plugin-api</artifactId>
<version>3.2.5</version>
</dependency>
<dependency>
<groupId>org.apache.maven.plugin-tools</groupId>
<artifactId>maven-plugin-annotations</artifactId>
<version>3.4</version>
<scope>provided</scope>
</dependency>
<!-- Mockey dependency -->
<dependency>
<groupId>com.mycompany.mockey</groupId>
<artifactId>Mockey</artifactId>
<version>1.16.2015</version>
</dependency>
</dependencies>
<build>
<plugins>
<!-- This plugin is used to generate a plugin descriptor
xml file which will be packaged with the plugin -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-plugin-plugin</artifactId>
<version>3.4</version>
</plugin>
</plugins>
</build>
</project>
The mockey-maven-plugin StartMockey class:
@Mojo(name="start-mockey")
@Execute(phase = LifecyclePhase.PACKAGE) // Not sure about this annotation
public class StartMockey extends AbstractMojo
{
/**
* Flag which controls Mockey startup.
*/
@Parameter(property="mockey.skipStartup", defaultValue="false", required=true)
private Boolean skipStartup;
// Do I need these getters and setters or does Maven ignore them?
public Boolean getSkipStartup()
{
return skipStartup;
}
public void setSkipStartup(Boolean skipStartup)
{
this.skipStartup = skipStartup;
}
// *SNIP* Defining Mockey parameters...
// Maven will call this method to start the mockey-maven-plugin
public void execute()
{
if(skipStartup)
{
getLog().info("Skipping Mockey startup");
return;
}
getLog().info("Starting Mockey");
// Load specified parameters into array
List<String> argsList = new ArrayList<>();
// *SNIP* Adding Mockey parameters to argList...
String[] args = new String[argsList.size()];
argsList.toArray(args);
// Start Mockey with specified parameters and wait for it to return
try
{
JettyRunner.main(args);
}
catch(Exception e)
{
getLog().error("Mockey died... :(");
}
getLog().info("mockey-maven-plugin now exiting");
}
}
The mockey-maven-plugin ShutdownMockey class:
@Mojo(name="shutdown-mockey")
public class ShutdownMockey extends AbstractMojo
{
/**
* Flag which controls Mockey shutdown.
*/
@Parameter(property="mockey.skipShutdown")
private Boolean skipShutdown;
// Again, Do I need these getters and setters or does Maven ignore them?
public Boolean getSkipShutdown()
{
return skipShutdown;
}
public void setSkipShutdown(Boolean skipShutdown)
{
this.skipShutdown = skipShutdown;
}
public void execute()
{
if(skipShutdown)
{
getLog().info("Skipping Mockey shutdown");
return;
}
getLog().info("Shutting down Mockey");
JettyRunner.stopServer();
getLog().info("mockey-maven-plugin now exiting");
}
}
Plugin entry for mockey-maven-plugin in my team project's pom.xml file:
<plugin>
<groupId>com.mycompany.mockey</groupId>
<artifactId>mockey-maven-plugin</artifactId>
<version>1.3</version>
<configuration>
<skipShutdown>${keepMockeyRunning}</skipShutdown>
<skipStartup>${skipMockey}</skipStartup>
<!-- *SNIP* Other Mockey parameters... -->
</configuration>
<executions>
<execution>
<id>start-mockey</id>
<goals>
<goal>start-mockey</goal>
</goals>
<phase>pre-integration-test</phase>
</execution>
<execution>
<id>shutdown-mockey</id>
<goals>
<goal>shutdown-mockey</goal>
</goals>
<phase>post-integration-test</phase>
</execution>
</executions>
</plugin>
This plugin works fine for starting Mockey in the pre-integration-test phase, but blocks the build until Mockey has exited. I'm not sure why this is occurring since I added this annotation specifically to prevent that issue:
@Execute(phase = LifecyclePhase.PACKAGE)
I actually copied this annotation from another plugin which does exactly what I'm trying to do here (we use the tomcat7-maven-plugin to launch our web application locally in the pre-integration-test phase and shut it down in the post-integration-test phase). I thought this would work the same way, but I'm seeing different behavior.
Here's what I want to see happen:
Maven build begins on a single thread.
This thread runs through all of the lifecycle phases from validate to package (reference) and executes all of the plugins with goals that are bound to those phases.
The thread gets to the pre-integration-test phase, sees that the mockey-maven-plugin's start-mockey goal is bound to the pre-integration-test phase, and attempts to execute the start-mockey goal.
The start-mockey goal is annotated to execute on a second thread (not the first thread) without running any other goals for any other lifecycle phases beforehand or afterwards on the new thread. The second thread starts Mockey's Jetty server by calling JettyRunner.main(args) and blocks on that method for the time being (it's running Mockey).
The first thread continues on to other goals and phases (IE: run integration tests).
The first thread gets to the post-integration test phase, sees that the mockey-maven-plugin's shutdown-mockey goal is bound to the post-integration-test phase, and executes the shutdown-mockey goal.
The shutdown-mockey goal calls JettyRunner.stopServer() which hooks into a static object inside the JettyRunner class and signals the first thread to shutdown Jetty. Meanwhile the first thread waits for a signal from the second thread (or maybe it's polling, I don't really know) that Jetty has shut down.
The second thread finishes shutting down Jetty, signals to the first thread that it can continue, and kills itself.
The first thread continues on to any additional goals and Maven lifecycle phases.
Here's what I'm actually seeing happen:
Maven build begins on a single thread.
This thread runs through all of the lifecycle phases from validate to package (reference) and executes all of the plugins with goals that are bound to those phases.
The thread gets to the pre-integration-test phase, sees that the mockey-maven-plugin's start-mockey goal is bound to the pre-integration-test phase, and attempts to execute the start-mockey goal.
The start-mockey goal is annotated to execute on a second thread. The second thread starts the entire Maven lifecycle over again beginning in the validate phase.
The first thread blocks while waiting for the second thread to exit.
The second thread runs all the way through the package phase and then kills itself.
The first thread is unblocked and picks up where it left off. It executes the start-mockey goal on its own (never run by the second thread). This calls JettyRunner.main(args) and the thread then blocks while running Mockey's Jetty server.
The thread remains blocked until the Jetty server is manually killed (along with the rest of the Maven lifecycle).
This confuses me primarily because Maven seems to have a different concept of forking than what I'm familiar with. To me forking means to diverge at a particular point, not to start over, and not to affect the original process. When we fork a process in Unix it copies the stack and function pointer of the first process. It does not start over from the beginning of the program. Similarly, when we fork a code repository we start with all of the files and directories that are currently in the original repository. We don't start over with a blank slate. So why, when we "fork" a Maven lifecycle does it abandon everything, start over, and block the original thread? That doesn't seem at all like forking to me. Here's some of the documentation I've read that describes "forking" in Maven:
"Running the ZipForkMojo will fork the lifecycle"
"Any plugin that declares #execute [phase] will cause the build to fork"
"goal=goal to fork... lifecycle=lifecycle id to fork... phase=lifecycle phase to fork..."
Remaining Questions:
How can I get Maven to fork in the sense that I'm familiar with?
What does it mean to fork a Maven lifecycle to a phase that takes place before the one that you're forking from? For instance, what does it mean to fork to the package phase from the pre-integration-test phase?
Why do you think the Tomcat7 plugin is doing this (forking to the package phase from the pre-integration test phase)?
What could be different about the Tomcat7 plugin that causes the same annotation to behave differently in my plugin?
Answered Question (see below):
- Is there another phase which I should specify in the annotation for my plugin to get it to behave as desired or should I be using the execute annotation in a fundamentally different manner?
See https://books.sonatype.com/mvnref-book/reference/writing-plugins-sect-plugins-lifecycle.html
The docs seem to indicate that you should create a custom lifecycle that includes only the start-mockey goal. Then your @Execute annotation should specify that lifecycle (and the phase to run it to). That should fork off the execution but only run start-mockey. I think you can then run shutdown-mockey as normal.
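As a rough, untested sketch of that idea (the lifecycle id "mockeycycle" and its wiring are my assumptions, not something taken from an existing plugin):
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugins.annotations.Execute;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;
// Hypothetical: fork a custom "mockeycycle" lifecycle, which the plugin would
// declare in src/main/resources/META-INF/maven/lifecycle.xml with nothing but
// the start-mockey goal bound to it, so the forked run does not replay the
// default validate..package phases.
@Mojo(name = "start-mockey")
@Execute(lifecycle = "mockeycycle", phase = LifecyclePhase.PRE_INTEGRATION_TEST)
public class StartMockey extends AbstractMojo
{
    public void execute()
    {
        // ... same body as shown above ...
    }
}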
I still don't understand what Maven is doing, but I did find a way to work around it:
I removed the @Execute annotation from the StartMockey class.
I forked the process myself in Java.
A snippet of the new code inside StartMockey.execute():
// Start Mockey with the specified parameters on a new thread.
MockeyRunner mockeyRunner = new MockeyRunner(args);
mockeyRunnerThread = new Thread(mockeyRunner);
mockeyRunnerThread.start();
The new MockeyRunner class:
public class MockeyRunner implements Runnable
{
private String[] args;
private Exception exception;
/**
* We cannot throw the Exception directly in run() since we're implementing the runnable interface which does not
* allow exception throwing. Instead we must store the exception locally and check for it in whatever class is
* managing this MockeyRunner instance after the run method has returned.
* @return Exception thrown by Mockey
*/
public Exception getException()
{
return exception;
}
/**
* Constructor
* @param args The arguments to pass to Mockey on startup
*/
public MockeyRunner(String[] args)
{
this.args = args;
}
/**
* This method starts Mockey from inside a new Thread object in an external class. It is called internally by Java
* when the Thread.start() method is called on that object.
*/
@Override
public void run()
{
try
{
JettyRunner.main(args);
}
catch(Exception e)
{
exception = e;
}
}
}
I'm not going to accept this solution as an answer. Although it solves my problem I'd still like to know what Maven "forking" is all about!
My pom.xml contains the following to auto-generate a client for a working web service, using the WSDL specified below:
<plugin>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-codegen-plugin</artifactId>
<version>2.3.1</version>
<executions>
<execution>
<id>generate-sources</id>
<configuration>
<sourceRoot>${basedir}/target/generated/src/main/java</sourceRoot>
<wsdlOptions>
<wsdlOption>
<wsdl>${basedir}/src/main/wsdl/myclient.wsdl</wsdl>
<extraargs>
<extraarg>-client</extraarg>
<extraarg>-verbose</extraarg>
</extraargs>
<wsdlLocation>wsdl/myclient.wsdl</wsdlLocation>
</wsdlOption>
</wsdlOptions>
</configuration>
<goals>
<goal>wsdl2java</goal>
</goals>
</execution>
</executions>
</plugin>
The project builds fine, without any errors or warnings, and I can see the file myclient.wsdl in the JAR, right under a wsdl folder.
But when I try running that JAR:
java -Xmx1028m -jar myclient-jar-with-dependencies.jar
It complains that "Can not initialize the default wsdl from wsdl/myclient.wsdl"
Why?
What am I missing?
How can I find out what path that wsdl/myclient.wsdl in pom.xml translates into, that makes the client's JAR complain at run time?
Update: I am aware of some solutions/workarounds that involve modifying the auto-generated code:
Pass null for the WSDL URL and then call ((BindingProvider) port).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "http://example.com/....") to set the address.
Load the WSDL as a Java resource and pass its location into your service's constructor.
But I am more interested in a solution that only requires entering the right values into the pom.xml, like the classpath approach (which unfortunately didn't work for me for some reason).
Any ideas what I should be typing there instead? Apparently this is a very simple case of figuring out the correct path rules for that particular plugin, but I am missing something and I don't know what it is.
The error comes from the static initializer of your generated service class (the one annotated with @WebServiceClient). It tries to load the WSDL file as a classpath resource, using the value you provided via the wsdlLocation parameter. You should leave off the "wsdl/" prefix:
<wsdlLocation>myclient.wsdl</wsdlLocation>
because the WSDL is located directly in the root of the classpath.
BTW: If you omit the <wsdlLocation> parameter, the value of the <wsdl> parameter is used (which is not correct at runtime in your case, but would be if the provided URL were a remote address, i.e. fetched directly from the web service server).
BTW2: Your workaround 2 is in fact more or less what the generated service class does if you use the parameterless constructor.
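If it helps while experimenting, here is a rough, hypothetical snippet; MyClientService stands for your generated @WebServiceClient class and the QName values are placeholders that must match your WSDL. It resolves the WSDL from the classpath and hands it to the generated (URL, QName) constructor, which is roughly what workaround 2 amounts to (requires java.net.URL and javax.xml.namespace.QName):
// Try the root of the classpath first, then the wsdl/ folder the build currently produces
URL wsdlUrl = MyClientService.class.getClassLoader().getResource("myclient.wsdl");
if (wsdlUrl == null)
{
    wsdlUrl = MyClientService.class.getClassLoader().getResource("wsdl/myclient.wsdl");
}
// The QName values must match the targetNamespace and service name from your WSDL
MyClientService service = new MyClientService(wsdlUrl,
        new QName("http://example.com/myclient", "MyClientService"));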
I notice the CXF examples use slightly different locations for sourceRoot, wsdl and wsdlLocation.
Remember that, typically, files in src/main/resources are included in the produced artifact. For files in src/main/wsdl to be included, that directory needs to be added as a resource in the pom.xml:
<resources>
<resource>
<directory>src/main/wsdl</directory>
</resource>
</resources>
Tips:
Set the paths you suspect to known bad paths and see if you get the same error message.
Unzip the produced *.jar file(s) and check whether the WSDL is included, and what its path is (the snippet below does a similar check from inside the JVM).
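To see at runtime what path wsdl/myclient.wsdl actually translates into, a throwaway check like the one below (assuming the jar-with-dependencies is on the classpath) prints where, if anywhere, each candidate resolves; null means the resource is not visible under that name:
ClassLoader cl = Thread.currentThread().getContextClassLoader();
System.out.println(cl.getResource("wsdl/myclient.wsdl"));
System.out.println(cl.getResource("myclient.wsdl"));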