How to wrap a JMXClient in a JavaAgent - jmx

I want to load a custom JMX client into a JVM by wrapping the client in a javaagent package and query MBeans locally. My intent is that the JMX client will periodically query the host application's MBeans or receive notifications. However, since the javaagent is loaded before the main jar (via premain), the host application's MBeans are not yet available. How should I handle this chicken-and-egg problem? Are threads appropriate for this, or is there some other JMX mechanism that would be preferred?
Thank you

Start a loop with a sleep in it until you successfully get the target MBeanServer. If this is the platform MBeanServer, you should get it immediately using ManagementFactory.getPlatformMBeanServer(). Then register a notification listener with the ObjectName defined as MBeanServerDelegate.DELEGATE_NAME. Filter for notifications of the class MBeanServerNotification, with notification types of MBeanServerNotification.REGISTRATION_NOTIFICATION. Your notification listener will get a callback every time a new MBean is registered in the target MBeanServer.
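For illustration, here is a minimal sketch of what that could look like in a javaagent premain, assuming the target is the platform MBeanServer (the agent class name and the println are placeholders for your own client logic):

import java.lang.instrument.Instrumentation;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.MBeanServerDelegate;
import javax.management.MBeanServerNotification;
import javax.management.Notification;
import javax.management.NotificationListener;

public class JmxClientAgent {

    public static void premain(String agentArgs, Instrumentation inst) throws Exception {
        // The platform MBeanServer is available immediately, even in premain.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        NotificationListener listener = new NotificationListener() {
            @Override
            public void handleNotification(Notification notification, Object handback) {
                if (notification instanceof MBeanServerNotification) {
                    MBeanServerNotification msn = (MBeanServerNotification) notification;
                    if (MBeanServerNotification.REGISTRATION_NOTIFICATION.equals(msn.getType())) {
                        // Called every time the host application registers a new MBean;
                        // this is where your client would start querying it.
                        System.out.println("MBean registered: " + msn.getMBeanName());
                    }
                }
            }
        };

        // MBeanServerDelegate.DELEGATE_NAME is the well-known ObjectName of the delegate
        // that emits registration/unregistration notifications.
        server.addNotificationListener(MBeanServerDelegate.DELEGATE_NAME, listener, null, null);
    }
}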

Related

How to find/define JMX key for ActiveMQ Artemis monitoring

I'm trying to set up monitoring of ActiveMQ Artemis with Zabbix. My intention is to monitor the availability of Artemis, monitor the size and number of messages accumulating in queues, and set up alerts.
I enabled JMX on Artemis as the documentation instructs, and I built the JMX example. From what I can tell, this only involves adding the following lines to these two files in the broker:
management.xml
<connector connector-port="1099" connector-host="192.168.56.101" />
Opened the port:
sudo ufw allow 1099
broker.xml
<jmx-management-enabled>true</jmx-management-enabled>
So I think JMX is enabled, although I haven't managed to confirm this.
In Zabbix I added the "host" (a system to monitor), but the next step is creating an "item" (a thing on the system). To do this I need a JMX key, something similar to jmx["java.lang:type=Memory","HeapMemoryUsage.used"]. (I tried this one but I don't get any data back.) This defines the MBean to call.
So where can I find the keys for the available things to monitor on Artemis? Or have I screwed something up here and am not looking for the right thing?
In the example there is a JMXExample.java program. It connects to Artemis, publishes a message, uses JMX to count the messages, then removes the message -- but I don't see any keys to MBeans.
Also, in the admin console for Artemis there is a JMX tab, which lists what I think is all the available things to monitor. For example, I have a queue called "test.queue". Under the JMX tab I find:
org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue"
And there are numerous methods listed, including countMessages(). Have I answered my own question here? Is this what I'm looking for?
If so, how does it fit into this key format, jmx[object_name,attribute_name]
{EDIT}
I'm looking at the JMX tab on the console. If I understand correctly, the key should have a format like this: jmx[object_name,attribute_name]
So I see that the object name under the JMX tab for one of my test queues is: org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue"
And it has an attribute of: MessageCount
So I tried this, which doesn't work. I also tried replacing 0.0.0.0 with the IP address.
jmx[org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue",MessageCount]
The default value for <jmx-management-enabled> is true so you don't need to explicitly configure that.
You can confirm that JMX is enabled by connecting to the broker using a tool like JConsole or JVisualVM, which ship with the JDK. Ideally you would do this locally to avoid any network configuration issues.
The broker exposes lots of different MBeans for managing all parts of the broker. Here are the different "control" objects with their default MBean object naming pattern:
ActiveMQServerControl: <domain>:broker=<brokerName>
AddressControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>
QueueControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=queues,routing-type=<routingType>,queue=<queueName>
DivertControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=diverts,divert=<divertName>
ClusterConnectionControl: <domain>:broker=<brokerName>,component=cluster-connections,name=<clusterConnectionName>
AcceptorControl: <domain>:broker=<brokerName>,component=acceptors,name=<acceptorName>
BroadcastGroupControl: <domain>:broker=<brokerName>,component=broadcast-groups,name=<broadcastGroupName>
BridgeControl: <domain>:broker=<brokerName>,component=bridges,name=<bridgeName>
The "key" that you use will depend on the name of the attribute from the control that you want to inspect. That name will correspond to the "getter" of the attribute. You can see all the names of all the getters in the linked JavaDoc. For example, if you want to get the number of messages from a queue you'd use the key MessageCount since the getter is named getMessageCount().
The domain by default is org.apache.activemq.artemis and the default broker name is localhost so if you didn't explicitly configure either of these and you wanted to get the message count of the anycast queue "myQueue" on the address "myAddress" you would use something like this:
jmx["org.apache.activemq.artemis:broker=\"localhost\",component=addresses,address=\"myAddress\",subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"",MessageCount]
This formatting is based on this Zabbix blog post, which is also discussed on this Zabbix forum thread.
To be clear, the JMXExample you cited uses a handy helper method named getQueueObjectName to construct the MBean's object name.
If you need to quickly get a broker up and running which supports remote JMX clients do the following:
Open the directory examples/features/standard/jmx in a terminal.
Run the example using mvn clean verify.
This will create a full broker instance in target/server0 which you can use as a template to configure your own. It includes modifications to broker.xml, management.xml, and artemis.profile (to set the java.rmi.server.hostname system property).
If you start this broker instance manually you can connect to it with JConsole or JVisualVM using service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi.
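That same URL works programmatically as well. Below is a minimal sketch (plain javax.management API, not part of the Artemis example itself) of reading the MessageCount attribute remotely, assuming the default domain and broker name and the "myAddress"/"myQueue" names used in the key above:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueMessageCount {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Same object name pattern as the Zabbix key, minus the Zabbix escaping.
            ObjectName queue = new ObjectName(
                "org.apache.activemq.artemis:broker=\"localhost\",component=addresses," +
                "address=\"myAddress\",subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"");
            // MessageCount is exposed as a long (the getter is getMessageCount()).
            Long messageCount = (Long) connection.getAttribute(queue, "MessageCount");
            System.out.println("MessageCount = " + messageCount);
        }
    }
}

If this prints a value but the Zabbix item still returns nothing, the problem is more likely the key escaping or the Zabbix Java gateway configuration than the broker itself.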

Start and verify Dart VM service

I have a two part question.
First, what Dart command should I use to start the VM service listening for requests, possibly specifying which host and port number it should use? I'm using Windows, and I don't want the Observatory potentially interfering.
I'm currently trying this, after I cd into the project's directory:
dart --pause_isolates_on_start bicycle
The second part of the question is: is it possible to verify that the VM service is there and listening on whatever port? I want to be able to send a request to the VM service from a WebSocket client and get back a response.
After I give the above command, if I do a netstat it doesn't look like anything is listening, and any attempt to connect to the VM service gets a connection refused exception, the same as if I hadn't tried to start the VM service at all.
UPDATE:
I was looking at the IntelliJ plugin code to see how it connects, and saw that it uses "ws://localhost:8181/ws". I was trying to use "ws://localhost:8181", and now it's finally getting past the handshake; the server was returning "200 OK" instead of "101" before.
I'm assuming that I'm talking to the Observatory at this point and not the VM service. I'm not sure, but at least I'm further along.
When it worked, I was using:
dart --enable-vm-service --pause_isolates_on_start bicycle.dart
Thanks!!
dart --help -v prints
--observe[=<port>[/<bind-address>]]
The observe flag is a convenience flag used to run a program with a
set of options which are often useful for debugging under Observatory.
These options are currently:
--enable-vm-service[=<port>[/<bind-address>]]
--pause-isolates-on-exit
--pause-isolates-on-unhandled-exceptions
--warn-on-pause-with-no-debugger
This set is subject to change.
Please see these options for further documentation.
It depends on what exactly you want to do, but as far as I know the Observatory just uses this service and if you don't access any of its features, it won't add additional load to the process.
There is a Dart client API at https://pub.dartlang.org/packages/vm_service_client and documentation about the protocol at https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md
Perhaps this is what you are looking for
enum EventKind {
  // Notification that VM identifying information has changed. Currently used
  // to notify of changes to the VM debugging name via setVMName.
  VMUpdate,
  // Notification that a new isolate has started.
  IsolateStart,
  ...
}
used with Events https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md#events
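On the verification question: once the service is listening on ws://localhost:8181/ws, any WebSocket client can send it a JSON-RPC 2.0 request such as getVM (documented in the linked service.md) and should get a JSON reply back. Here is a minimal sketch using Java's built-in WebSocket client (Java 11+); the URL and port are taken from the update above and may differ on your setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class VmServiceCheck {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> reply = new CompletableFuture<>();

        WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                reply.complete(data.toString()); // the first response is enough for a liveness check
                return null;
            }
        };

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8181/ws"), listener)
                .join(); // completes exceptionally if nothing is listening on the port

        // JSON-RPC 2.0 request for basic VM information.
        ws.sendText("{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"getVM\",\"params\":{}}", true);

        System.out.println(reply.get()); // prints the VM description if the service is up
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
    }
}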

Can I have multiple service workers both intercept the same fetch request?

I'm writing a library which I will provide for 3rd parties to run a service worker on their site. It needs to intercept all network requests but I want to allow them to build their own service worker if they like.
Can I have both service workers intercept the same fetches and provide some kind of priority/ordering between them? Alternatively is there some other pattern I should be using?
Thanks
No, you cannot. Only one service worker per scope is allowed to be registered, so the latest registration kicks the previous one out, unless its scope is more specific, in which case requests are handled by the most specific worker only.
Nevertheless, you can attach multiple fetch handlers within a single service worker and they will all see the request, so you can write your functionality in a separate script and let the user's service worker include your file via importScripts().
The first handler that calls event.respondWith() synchronously (you cannot actually call this method asynchronously) wins, and any remaining handlers that try to call it will throw.
Prioritization and coordination require middleware; you can check ServiceWorkerWare or sw-toolbox.

Message broadcasting using rabbitmq, genbunny and cowboy event notifier

I have two Cowboy server instances running which are connected to RabbitMQ. I am using gen_bunny as the RabbitMQ client to connect to RabbitMQ.
I can consume messages from RabbitMQ using bunnyc:consume(), but I have to call that method explicitly. What I want is to bind an event in Cowboy so that as soon as there is a message in the queue, Cowboy is notified automatically.
Is this possible using gen_bunny or another Erlang client?
I don't know about gen_bunny, but with the official Erlang client you can subscribe to a queue (see http://www.rabbitmq.com/erlang-client-user-guide.html, the "Subscribing To Queues" section).
As far as I understand, you need to send messages from the queue to clients through WebSockets, so you need to subscribe to the queue in the process that communicates with the client and receive the messages in a "receive ... end" block or in handle_info (depending on the implementation).
ADDITION
I looked into the gen_bunny sources... mochi/gen_bunny depends on mochi/amqp_client, which provides amqp_channel:subscribe/3 (see https://github.com/mochi/amqp_client/blob/master/src/amqp_channel.erl#L177); you can use it for subscribing.
Got it working after some tweaking of the bunnyc.erl source. In the init function I added a subscription call, and in the start_link function of bunnyc.erl I pass in the process id of my Cowboy process, so as soon as there is a message in the queue I receive it in Cowboy's websocket_info function.

Logout clients from XMPP

I have an XMPP/ejabberd app that uses an external service to provide eventing features, but when this service becomes unavailable I want to disconnect/log out all of my clients. Is this possible? How?
I got it working the way I needed. In fact, I didn't find any simple way to make my own server log out all connected users in a given situation, so I dug into ejabberd's code and figured out a way to do it myself.
In the ejabberd_c2s.erl module, when a client logs out or its socket is dropped for some reason, the FSM is terminated, doing all the necessary cleanup to keep ejabberd consistent.
What I had to do was just create an exported function shutdown/1 in this module that calls gen_fsm:send_all_state_event/2, sending a signal for it to terminate.
As there is one c2s process per connection, I need to call this function for each user.
---UPDATING---
Actually there's no need to create this shutdown function, as ejabberd_c2s can already process the 'closed' signal, which does the same thing. So, instead of creating the shutdown function, simply doing gen_fsm:send_event(C2SPid, closed) might be enough.
---UPDATING---
To discover the user's c2s process PID I just use ejabberd_sm:get_session_pid/1 or ejabberd_sm:dirty_get_sessions_list/0 (for all sessions).
This worked fine for me, but if anyone has a better idea, please add here.
Thanks
I don't know the ejabberd specifics, but you could write a custom XMPP component which polls the external service (or listens for presence events, if it's another XMPP component), then logs out users when the service becomes unavailable.
