Convert MbElement to W3C Node in WebSphere Message Broker 7.0.0

I am trying to convert a com.ibm.broker.plugin.MbElement into an org.w3c.dom.Node. The environment is WebSphere Message Broker 7.0.0.
I know that IIB 9.0.0 has methods like getDOMNode() which return an org.w3c.dom.Node. I cannot upgrade my environment for various reasons. Any pointers or suggestions for doing this in 7.0.0 would be appreciated.
Thanks in advance.

One approach is not to parse the message in the message flow at all, leaving it in the BLOB domain.
Then, in your JavaCompute node, you can access the message as a byte array and parse it in Java to get an org.w3c.dom.Document, as described here:
How to convert a byte array to an org.w3c.dom.Document
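A minimal sketch of that approach in a JavaCompute node, assuming the standard BLOB-domain tree layout (the last child of the message root is the BLOB parser folder, whose first child carries the byte array); adjust the element paths to your own flow:

import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;

public class BlobToDomCompute extends MbJavaComputeNode {

    @Override
    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        MbMessage inMessage = inAssembly.getMessage();

        // BLOB domain: message root -> BLOB parser folder -> BLOB element holding a byte[]
        MbElement blobElement = inMessage.getRootElement().getLastChild().getFirstChild();
        byte[] messageBytes = (byte[]) blobElement.getValue();

        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            DocumentBuilder builder = factory.newDocumentBuilder();
            Document document = builder.parse(new ByteArrayInputStream(messageBytes));
            // ... work with the org.w3c.dom.Document here ...
        } catch (Exception e) {
            throw new RuntimeException("Failed to parse the BLOB payload as XML", e);
        }

        getOutputTerminal("out").propagate(inAssembly);
    }
}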

It's not that trivial a task: to generate the correct children of each node in the DOM tree, you will basically need to recreate the tree, navigating the message tree as you go.
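For illustration, a rough sketch of that recursion; it assumes the usual MbElement navigation methods (getFirstChild(), getNextSibling(), getName(), getValue()) and ignores attributes and namespaces, which you would need to handle according to the element types your parser produces:

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;

import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;

public class MbElementToDom {

    // Convert the subtree rooted at 'root' (e.g. the top element of the message body)
    // into a new org.w3c.dom.Document.
    public static Document toDocument(MbElement root) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element domRoot = doc.createElement(root.getName());
        doc.appendChild(domRoot);
        copyChildren(root, domRoot, doc);
        return doc;
    }

    private static void copyChildren(MbElement source, Element target, Document doc)
            throws MbException {
        MbElement child = source.getFirstChild();
        while (child != null) {
            Element domChild = doc.createElement(child.getName());
            Object value = child.getValue();
            if (value != null) {
                domChild.appendChild(doc.createTextNode(value.toString()));
            }
            target.appendChild(domChild);
            copyChildren(child, domChild, doc); // walk the message tree depth-first
            child = child.getNextSibling();
        }
    }
}

In the XMLNSC domain, attributes show up as child elements with a specific type, so a production version would need to inspect each child's type and map attributes and namespaces onto the DOM accordingly.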
The JAXB implementation in IIB 9 is pretty much exactly what you need. Given that WMB v7 goes out of support in September, I think your best bet is to take the plunge to IIB 9 or 10.
Parsing a BLOB in a JCN will work, but it is a fairly inefficient approach: other nodes in your message flow won't benefit from your JCN having parsed the tree and will therefore need to reparse it.

Machine parseable error messages

(From https://groups.google.com/d/msg/bazel-discuss/cIBIP-Oyzzw/caesbhdEAAAJ)
What is the recommended way for rules to export information about failures such that downstream tools can include them in UIs?
Example use case:
I ran bazel test //my:target, and one of the actions for //my:target failed because there is an unknown variable "usrname" in my/target.foo at line 7, column 10. The tool would also like to report that "username" is a valid variable and that this is a possible misspelling, and thus to suggest adding an "e".
One way I have thought of doing this is to have my action produce a separate file, //my:target.errors, that is in a separate output group, and have it write machine-parseable data there in addition to the human-readable data on stdout.
I can then find all of these files and parse the data in them in downstream tools.
Is there any prior work on this, or does everything just try to parse the human readable output?
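Purely as an illustration of the .errors idea above (nothing here is an existing Bazel convention): if each action wrote one tab-separated record per diagnostic, say file, line, column, message and an optional suggested fix, a downstream tool could consume the files with something like this:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical record format, one diagnostic per line:
// file<TAB>line<TAB>column<TAB>message<TAB>suggested-fix
public final class ErrorsFileReader {

    public static void main(String[] args) throws IOException {
        for (String arg : args) {
            Path errorsFile = Paths.get(arg);
            for (String record : Files.readAllLines(errorsFile, StandardCharsets.UTF_8)) {
                if (record.isEmpty()) {
                    continue;
                }
                String[] f = record.split("\t", 5);
                System.out.printf("%s:%s:%s: %s (suggestion: %s)%n",
                        f[0], f[1], f[2], f[3], f.length > 4 ? f[4] : "none");
            }
        }
    }
}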
I recommend running the error checkers as extra actions.
I don't think Bazel currently has hooks for custom error handlers like you describe. Please consider opening a feature request: https://github.com/bazelbuild/bazel/issues/new

BPEL, threads stuck on HashMap.getEntry?

I am new to SOA, and we have currently run into a problem when using BPEL to do some XML transformation.
We have three SOA projects that, between them, do something like this:
1. Read input files, which are in text format, from a folder.
2. Save the file content in the database and put a message on AQ.
3. Read the file id from AQ, load the content from the database, and transform it to our internal XML format.
4. Apply some business logic and transform the content back to text format.
SOA project 1 does steps 1 and 2, project 2 does step 3, and project 3 does step 4.
We are doing a load test with 7000 input files.
The problem we experienced is that the memory usage of the Old Generation keeps accumulating; although a major GC can reduce it, it still keeps growing until it reaches 100%. At that point no new BPEL instance can be created, and we hit transaction timeouts.
After analyzing a heap dump we got the result below: it seems that BPELFactoryImpl holds a HashMap of more than 180 MB, and it keeps growing. Has anyone experienced something similar?
We use SOA Suite version 12.1.3. This problem has blocked us for weeks; please help, thanks a lot.
Image of heap analysis
Guys, finally we got an answer on this: it was caused by a bug, as confirmed by Oracle Support, and we are waiting for the patch.
Thanks for your attention.
It's a bug. You should raise an SR referring to the threads stuck at:
at java.util.HashMap.getEntry(HashMap.java:465)
at java.util.HashMap.get(HashMap.java:417)
at oracle.xml.parser.v2.XMLNode.setUserData(XMLNode.java:2137)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doCreateElement(ExtensibleElementImpl.java:502)
at oracle.dp.entity.impl.EmFacadeObjectImpl.getElement(EmFacadeObjectImpl.java:35)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.performDOMChange(ExtensibleElementImpl.java:707)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doOnChange(ExtensibleElementImpl.java:636)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl$DOMUpdater.notifyChanged(ExtensibleElementImpl.java:535)
at oracle.dp.notify.impl.NotifierImpl.emNotify(NotifierImpl.java:39)
at oracle.dp.entity.impl.EmHolderImpl.doNotifyOnSet(EmHolderImpl.java:53)
at oracle.dp.entity.impl.EmHolderImpl.set(EmHolderImpl.java:47)
at oracle.bpel.lang.v20.model.impl.CopyImpl.setTo(CopyImpl.java:115)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP$CallArgument$1.evaluate(BPEL2xCallWMP.java:190)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.invokeMethod(BPEL2xCallWMP.java:103)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.__executeStatements(BPEL2xCallWMP.java:62)
at com.collaxa.cube.engine.ext.bpel.common.wmp.BaseBPELActivityWMP.perform(BaseBPELActivityWMP.java:188)
at com.collaxa.cube.engine.CubeEngine.performActivity(CubeEngine.java:2880)
....
Bug 20857627 (20867804) : Performance issue due to large number of threads stuck in HashMap.get

Different coders for the same class in dataflow job

I'm trying to use different coders for the same class for two different scenarios:
Reading from JSON input files - using data = TextIO.Read.from(options.getInput()).withCoder(new Coder1())
Elsewhere in the job I want the class to be persisted using SerializableCoder, using data.setCoder(SerializableCoder.of(MyClass.class))
It works locally, but fails when run in the cloud with
Caused by: java.io.StreamCorruptedException: invalid stream header: 7B227365.
Is this a supported scenario? The reason for doing this in the first place is to avoid reading and writing the JSON format internally, while making reading from the input files more efficient (UTF-8 parsing is part of the JSON reader, so it can read from an InputStream directly).
Clarifications:
Coder1 is my coder.
The other coder is a SerializableCoder.of(MyClass.class)
How does the system choose which coder to use? The two formats are binary-incompatible, and it looks like, due to some optimization, the second coder is being used on data that can only be read by the first coder (the stream header 7B227365 decodes to the ASCII characters {"se, i.e. JSON bytes being handed to the SerializableCoder).
Yes, using two different coders like that should work. (With the caveat that the coder in #2 will only be used if the system chooses to persist 'data' instead of optimizing it into surrounding computations.)
Are you using your own Coders or ones provided by the Dataflow SDK? Quick caveat on TextIO -- because it uses newlines to encode element boundaries, you'll get into trouble if you use a coder that produces encoded values containing something that can be mistaken for a newline. You really should only use textual encodings within TextIO. We're hoping to make that clearer in the future.
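For reference, the setup described in the question would look roughly like the sketch below. Package names assume the pre-Beam Dataflow Java SDK, and Coder1 and MyClass are the question's own classes: Coder1 is presumed to be a Coder<MyClass> with a newline-safe textual encoding, per the caveat above, and MyClass must implement java.io.Serializable for SerializableCoder to apply.

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.coders.SerializableCoder;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class TwoCoderPipeline {

    // Minimal options interface so the input path can be passed on the command line.
    public interface Options extends PipelineOptions {
        String getInput();
        void setInput(String value);
    }

    public static void main(String[] args) {
        Options options = PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);
        Pipeline p = Pipeline.create(options);

        // Coder #1: used only while decoding the JSON input lines into MyClass.
        PCollection<MyClass> data = p.apply(
                TextIO.Read.from(options.getInput()).withCoder(new Coder1()));

        // Coder #2: used whenever the service materializes 'data' between stages.
        data.setCoder(SerializableCoder.of(MyClass.class));

        // ... downstream transforms on 'data' ...

        p.run();
    }
}

As the answer notes, whether the SerializableCoder is ever exercised depends on whether the service actually persists 'data' rather than fusing it into the surrounding computations.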

NullPointerException in saxon IdentityTransformer.transform when extracting data from javax DOMSource

I am using the Spring Web Services (2.1.0) client to send a very simple message to a SoapUI mock web service (Hello World style, no namespaces).
Before sending the DOMSource via the Spring WebServiceTemplate, it is extracted from a jdom2.Element as a jdom2.transform.JDOMSource (JDOM 2.0.2).
The Transformer is Saxon 9.4.0.4.
While calling the Spring WebServiceTemplate method sendSourceAndReceiveToResult, the net.sf.saxon.IdentityTransformer throws a NullPointerException when executing the transform(DOMSource, responseResult) method.
Since the DOMSource is available at that point, I do not know what could have gone wrong.
The stack trace tells me the NullPointerException was thrown at:
net.sf.saxon.lib.SerializerFactory.getReceiver (line 239).
It would help me greatly if you could speculate on possible causes.
Please note that the best way of reporting a Saxon problem is to use either the Saxon forums at http://saxonica.plan.io, or the saxon-help mailing list on SourceForge. We try to monitor questions on StackOverflow, but it's often a few days before we notice them.
With this kind of problem, the cause is often that some piece of software (like Spring Webservice) is using the JAXP TransformerFactory mechanism to load whatever XSLT transformer it finds on the classpath, but hasn't actually done the testing to ensure that it works with an arbitrary XSLT transformer; people often test only with the default one provided by the JDK. It's not clear from your question whether you actually intended for it to use Saxon or not.
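If you want to see which XSLT implementation is actually being picked up, or pin one explicitly while testing, the standard JAXP switch is the javax.xml.transform.TransformerFactory system property; a small check, for example:

public class TransformerFactoryCheck {
    public static void main(String[] args) {
        // Pin a specific implementation before any factory lookup happens
        // (net.sf.saxon.TransformerFactoryImpl is Saxon's factory; comment this
        // out to see what the classpath resolves to by default).
        System.setProperty("javax.xml.transform.TransformerFactory",
                "net.sf.saxon.TransformerFactoryImpl");

        // Print which implementation JAXP actually loads.
        System.out.println(
                javax.xml.transform.TransformerFactory.newInstance().getClass().getName());
    }
}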
Line 239 of SerializerFactory is actually doing
throw new IllegalArgumentException("Unknown type of result: " + result.getClass());
(having tested whether result is one of the kinds of Result that it recognizes); so it looks to me as if result (which is probably the value passed to the transform() method) is null. Check the contents of your responseResult value.
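In other words, make sure the Result you hand to Spring WS is actually instantiated before the call. A minimal sketch of the intended usage, assuming Spring WS's WebServiceTemplate and a JDOM 2 request element (the class and variable names here are placeholders, not taken from the question):

import javax.xml.transform.dom.DOMResult;

import org.jdom2.Element;
import org.jdom2.transform.JDOMSource;
import org.springframework.ws.client.core.WebServiceTemplate;
import org.w3c.dom.Node;

public class HelloClient {

    public Node callService(WebServiceTemplate template, Element requestElement) {
        JDOMSource requestSource = new JDOMSource(requestElement);

        // The response Result must be a concrete, non-null instance; per the answer
        // above, a null Result would explain the failure seen in SerializerFactory.getReceiver.
        DOMResult responseResult = new DOMResult();

        template.sendSourceAndReceiveToResult(requestSource, responseResult);
        return responseResult.getNode();
    }
}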

How do I build a DNS Query record in Erlang?

I am building a native Bonjour / Zeroconf library and need to build DNS query records to broadcast off to the other machines. I have tried looking through the Erlang source code but as I am relatively new to Erlang it gets kind of dense down in the bowels of all the inet_XXX.erl and .hrl files. I have a listener that works for receiving and parsing the DNS record payloads, I just can't figure out how to create the query records. What I really need to know is what I need to pass into inet_dns:encode() to get a binary I can send out. Here is what I am trying to do.
{ok,P} = inet_dns:encode(#dns_query{domain="_daap._tcp.local",type=ptr,class=in})
Here is the error I am getting:
10> test:send().
** exception error: {badrecord,dns_rec}
in function inet_dns:encode/1
in call from test:send/0
11>
I finally figured it out.
send(Domain) ->
    %% open a UDP socket suitable for mDNS multicast on 224.0.0.251
    {ok,S} = gen_udp:open(5555,[{reuseaddr,true}, {ip,{224,0,0,251}}, {multicast_ttl,4}, {multicast_loop,false}, {broadcast,true}, binary]),
    %% inet_dns:encode/1 expects a full #dns_rec{}, not a bare #dns_query{}
    P = #dns_rec{header=#dns_header{},qdlist=[#dns_query{domain=Domain,type=ptr,class=in}]},
    gen_udp:send(S,{224,0,0,251},5353,inet_dns:encode(P)),
    gen_udp:close(S).
The fact that there is no documentation for the inet_dns module should make you very wary of using it from your code. I hope you are fully aware that no consideration will be given to your project if they (the OTP team) feel like changing how the module is implemented and used.
Read the code for implementation ideas, or just get down to creating the DNS protocol message using the Erlang bit syntax, based on the DNS RFCs. Creating a DNS packet is much easier than parsing one (I've been down that road myself, and the "clever tricks" to minimize packet size hardly seem worth it).
As explained by Magnus on the Erlang questions mailing list (http://groups.google.com/group/erlang-programming/browse_thread/thread/ce547dab981219df/47c3ca96b15092e0?show_docid=47c3ca96b15092e0), you were passing a dns_query record instead of a dns_rec record to the encode/1 function.
