I have exposed an MBean NotifyFailedTriggers that exposes an Integer value. I have configured the poller, collectd, and jmx-datacollection config files correctly. However, the collectd daemon seems to skip the MBean, saying it is not registered. See the log below.
2018-06-12 13:08:41,204 DEBUG [pool-10-thread-8] o.o.n.j.i.DefaultJmxCollector: Collecting MBean (objectname=com.example:name=notifyFailedTriggers, wildcard=false)
2018-06-12 13:08:41,205 DEBUG [pool-10-thread-8] o.o.n.j.i.DefaultJmxCollector: Collecting ObjectName com.example:name=notifyFailedTriggers
2018-06-12 13:08:41,328 DEBUG [pool-10-thread-8] o.o.n.j.i.DefaultJmxCollector: ObjectName com.example:name=notifyFailedTriggers is not registered.
2018-06-12 13:08:41,329 DEBUG [pool-10-thread-8] o.o.n.j.i.DefaultJmxCollector: Skip ObjectName com.example:name=notifyFailedTriggers
2018-06-12 13:08:41,510 INFO [Collectd-Thread-15-of-50] o.o.n.c.CollectableService: run: finished collection for 3/xx.xx.84.122/onms-poc/example1
2018-06-12 13:08:41,510 DEBUG [Collectd-Thread-15-of-50] o.o.n.s.LegacyScheduler: schedule: Adding ready runnable CollectableService for service 3:/xx.xx.84.122:onms-poc (ready in 300000ms) at interval 300000
This is a standalone Java application that is exposing the MXBeans.
Is there a specific reason why it considers this MXBean to be unregistered and hence skips it?
In a nutshell, it considers the MXBean to be unregistered if the MBean server says that it is. You may need to configure the application to enable certain beans, or perhaps the version of the application that you're using does not support the particular bean in question.
Behind the curtain, the JMX collector asks the MBean server whether the object is registered. If the MBean server responds that it is not, the JMX collector logs the message you pasted. Here's the JMX collector code where that happens, and here's the documentation of the "isRegistered" method that it's calling to make that determination.
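If you want to double-check out of band what the MBean server is reporting, a scratch JMX client along these lines asks the same question the collector does (a sketch; the service URL and port are assumptions, so point them at whatever your collectd config uses):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CheckRegistration {
    public static void main(String[] args) throws Exception {
        // Assumed service URL -- use the host/port your collectd config points at.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("com.example:name=notifyFailedTriggers");

            // The same isRegistered check the collector makes before collecting.
            System.out.println("isRegistered: " + conn.isRegistered(name));

            // List everything in the com.example domain to spot naming mismatches.
            Set<ObjectName> registered = conn.queryNames(new ObjectName("com.example:*"), null);
            registered.forEach(n -> System.out.println("  " + n));
        }
    }
}

A common cause of this symptom is a small mismatch between the ObjectName in jmx-datacollection-config.xml and the name the application actually registers (domain and key properties are case-sensitive), so listing the domain's contents usually pinpoints it.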
Related
I have a Spark cluster running in a Docker container, and a simple PySpark example program to test my configuration, running on my desktop outside the Docker container. The Spark console receives and executes the job and completes it. However, the PySpark client never gets the results.
[image of Spark console]
The PySpark program's console shows:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
22/03/05 11:42:23 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
22/03/05 11:42:28 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
22/03/05 11:42:43 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
22/03/05 11:42:58 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
22/03/05 11:43:13 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
22/03/05 11:43:28 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
22/03/05 11:43:43 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
I know this is false since the job executed on the server.
If I click the kill link on the server, the PySpark program immediately gets:
22/03/05 11:46:22 ERROR Utils: Uncaught exception in thread stop-spark-context
org.apache.spark.SparkException: Exception thrown in awaitResult:
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.deploy.client.StandaloneAppClient.stop(StandaloneAppClient.scala:287)
    at org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend.org$apache$spark$scheduler$cluster$StandaloneSchedulerBackend$$stop(StandaloneSchedulerBackend.scala:259)
    at org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend.stop(StandaloneSchedulerBackend.scala:131)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:927)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2567)
    at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2086)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1442)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:2086)
    at org.apache.spark.SparkContext$$anon$3.run(SparkContext.scala:2035)
Caused by: org.apache.spark.SparkException: Could not find AppClient.
Thoughts on how to fix this?
There can be multiple reasons for this. Since your Spark cluster is running in a Docker container, it is possible that the driver (your PySpark client) is not reachable from the Spark nodes even though the reverse direction works; that's why your Spark session gets created, but the results never make it back to the client.
You should make the driver reachable from the Spark nodes so the network connection is complete in both directions. If the error messages mention a DNS name (in most cases a container or host name), map it to the Docker container's host IP in the /etc/hosts file on all nodes of the Spark cluster.
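If the workers cannot reach the driver, pinning the driver's advertised address and ports usually helps, since you can then open exactly those ports between the Docker network and your desktop. A minimal sketch (in Java for consistency with the other examples here; the same spark.driver.* properties apply from PySpark, and the master URL, host, and ports are assumptions):

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class DriverNetworkConfig {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("reachability-test")
                // Assumed master URL -- use your cluster's address.
                .setMaster("spark://localhost:7077")
                // Address the executors use to connect back to the driver;
                // it must be reachable from inside the Docker network.
                .set("spark.driver.host", "192.168.1.10")
                // Fixed ports so they can be allowed through Docker/firewalls.
                .set("spark.driver.port", "35000")
                .set("spark.blockManager.port", "35001");

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // If networking is healthy, this returns a result instead of
            // looping on "Initial job has not accepted any resources".
            long n = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
            System.out.println("count = " + n);
        }
    }
}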
Hope it helps.
I have set up a test perfino server and attached the agent to a JVM.
I can see it register, but the MBean browser shows nothing! Is there something I am missing?
I'm trying to enable JMX for my WildFly Swarm component. I'm used to seeing several MBeans for a variety of WildFly subsystems; I'm specifically interested in the data source MBeans.
I've pasted a snippet below: I've got the jmx fraction, and I have statistics-enabled set to true. When Thorntail is running I can connect to the JVM via JMX, but I cannot see any datasource MBeans. Is there something else that needs to be enabled for them to show up?
The app is currently on Swarm 2018.2.0.Final.
swarm:
  jmx:
    expression-expose-model.domain-name: RemoteJMX
    jmx-remoting-connector:
      use-management-endpoint: true
    resolved-expose-model.domain-name: RemoteJMX
    show-model: true
  datasources:
    data-sources:
      MyDataSourceName:
        driver-name: com.microsoft.sqlserver
        connection-url: jdbc:xyz
        statistics-enabled: true
First of all, WildFly Swarm 2018.2.0.Final is very old. In the meantime, WildFly Swarm got renamed to Thorntail; you can automatically migrate by running mvn io.thorntail:thorntail-maven-plugin:2.5.0.Final:migrate-from-wildfly-swarm.
And then: if you connect to JMX, do you see any WildFly MBeans at all? I mean, is the problem with datasources only, or is it more general?
During boot, you should see JMX-related log messages, such as "JMX not configured for remote access" or "JMX configured for remote connector: implicitly using ... interface". Do you see any of them?
Finally, it seems you want JMX exposed on the management port. Do you have a dependency on the management fraction?
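One quick way to check what is actually exposed is to list the MBean domains from a scratch client. A sketch (the service URL is an assumption and depends on which connector you enabled; with use-management-endpoint: true it goes through the management port, and the remote+http protocol needs the WildFly client libraries on the classpath):

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListDomains {
    public static void main(String[] args) throws Exception {
        // Assumed URL -- adjust protocol/host/port to your connector setup.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remote+http://localhost:9990");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // If the model is exposed, expect the configured domain
            // (RemoteJMX in the YAML above) to show up here.
            for (String domain : conn.getDomains()) {
                System.out.println(domain);
            }
        }
    }
}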
I am migrating my Java enterprise application from the WAS 8.5 full profile to a Liberty server. My application code has a SOAP client and the required stubs generated from the WSDL. I receive a response when using WAS 8.5, but I get the exceptions below when running on the Liberty server. I have already added jaxws-2.2. I have also recreated the stubs again from Eclipse, pointing to the Liberty server (IBM-WS). I couldn't find relevant answers online.
Console logs:
[WARNING ] Could not unwrap Operation {http://services.abc.com/gb/getsomepoint/v1}getSomeInfoByParam to match method "public abstract void com.abc.services.gb.getsomepoint.v1.GBGetSomePointV1.getSomeInfoByParam(javax.xml.ws.Holder,java.lang.String,java.lang.String,javax.xml.ws.Holder,javax.xml.ws.Holder)"
javax.xml.ws.soap.SOAPFaultException: BIP3113E: Exception detected in message flow GB_GetSomePoint_V1.SOAP Input (integration node NMD4BRK)
[err] at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:156)
[err] at [internal classes]
[err] ... 51 more
[err] Caused by:
[err] org.apache.cxf.binding.soap.SoapFault: BIP3113E: Exception detected in message flow GB_GetSomePoint_V1.SOAP Input (integration node NMD9BRK)
[err] at org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.unmarshalFault(Soap11FaultInInterceptor.java:84)
[err] at [internal classes]
[err] ... 53 more
There isn't much information to go on from the console, but one thing to check is whether the request/response messages to and from the client look the same on Liberty as they do on WAS 8.5. That'd be a simple way to verify that the expected behavior is the same. Another thing to consider is whether your WSDL contract matches the request generated by the client (i.e. all the bindings are there).
The Web Services configuration (WS-Policy/WS-Security) can be substantially different on Liberty vs WAS 8.5 and this Knowledge Center document has some good info on how to properly configure your app if you need it.
Deploying JAX-WS applications to Liberty:
https://www.ibm.com/support/knowledgecenter/SSD28V_9.0.0/com.ibm.websphere.wlp.core.doc/ae/twlp_dep_jaxws.html
The last thing I’d suggest is turning on Liberty’s Web Services trace. There isn’t a lot of info about root cause from the console messages, but by turning on the trace the specific problem might make itself known. You can turn trace on by following these directions.
Enabling trace on WebSphere Liberty:
https://www.ibm.com/support/knowledgecenter/SSD28V_9.0.0/com.ibm.websphere.wlp.core.doc/ae/rwlp_logging.html
The specific trace specification you’ll want to enable for Web Services is as follows:
traceSpecification="*=audit:com.ibm.ws.jaxws.*=finest:org.apache.cxf.*=finest"
If it works after commenting out the jaxws-2.2 feature, you must be using a different JAX-WS implementation packaged with your application. You can try adding back the jaxws-2.2 feature and setting this JVM property for the Liberty server: -Dcom.ibm.xml.xlxp.jaxb.opti.level=0
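If you go that route, a JVM property on Liberty is typically set in the server's jvm.options file, one option per line; a minimal sketch (the path assumes the usual server layout):

# ${server.config.dir}/jvm.options
-Dcom.ibm.xml.xlxp.jaxb.opti.level=0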
I want to read a C3P0 JMX MBean to extract some of its attributes and expose them over a REST endpoint. How can I read a JMX bean using Spring within the same process?
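Since the pool runs in the same JVM, no remote connector is needed; you can query the platform MBeanServer directly and return the values from a controller. A minimal sketch, assuming Spring MVC is on the classpath (the ObjectName pattern and attribute name are assumptions -- C3P0 registers its pooled data sources under the com.mchange.v2.c3p0 domain, but verify the exact names in JConsole):

import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import org.springframework.jmx.support.JmxUtils;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PoolStatsController {

    // Locates the same in-process MBeanServer that C3P0 registers with.
    private final MBeanServer server = JmxUtils.locateMBeanServer();

    @GetMapping("/pool-stats")
    public String poolStats() throws Exception {
        // Match every bean C3P0 registered; narrow the pattern once you
        // know the exact ObjectName from JConsole.
        Set<ObjectName> names = server.queryNames(new ObjectName("com.mchange.v2.c3p0:*"), null);

        StringBuilder out = new StringBuilder();
        for (ObjectName name : names) {
            try {
                // "numBusyConnectionsDefaultUser" is an assumed attribute name;
                // pick whichever attributes you actually need.
                Object busy = server.getAttribute(name, "numBusyConnectionsDefaultUser");
                out.append(name).append(" busy=").append(busy).append('\n');
            } catch (Exception e) {
                // Not every bean in the domain has this attribute; skip those.
            }
        }
        return out.toString();
    }
}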