How to access and export WebSocketMessageBrokerStats internal counts - spring-websocket

I've created a Spring Boot app with WebSocket support, and I would like to expose the WebSocketMessageBrokerStats on the Prometheus scrape endpoint provided by Actuator.
I'm able to get access to the WebSocketMessageBrokerStats object via autowiring/injection. However, the object only exposes the relevant metrics (number of connected sockets, size of the thread pool, etc.) as a summarized String.
This isn't very useful, since I'd ultimately like to export those metrics to a Prometheus instance. Is there any way to get access to the stats exposed by the WebSocketMessageBrokerStats object in their raw form (e.g. as an int or long)?
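For context, the best workaround I can think of so far is to parse the summary strings myself and register the parsed values as Micrometer gauges. Here is a rough sketch (the regex, class name, and metric name are my own inventions, and the summary format may vary between Spring versions, so this is fragile by design):

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.config.WebSocketMessageBrokerStats;

@Component
public class WebSocketStatsExporter {

    // Matches the leading session count in a summary like
    // "3 current WS(2)-HttpStream(1)-HttpPoll(0), 5 total, ..."
    private static final Pattern CURRENT_SESSIONS = Pattern.compile("(\\d+) current WS");

    public WebSocketStatsExporter(WebSocketMessageBrokerStats stats, MeterRegistry registry) {
        Gauge.builder("websocket.sessions.current", stats,
                s -> parseCount(s.getWebSocketSessionStatsInfo()))
            .register(registry);
    }

    private static double parseCount(String summary) {
        Matcher m = CURRENT_SESSIONS.matcher(summary);
        return m.find() ? Double.parseDouble(m.group(1)) : Double.NaN;
    }
}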

Related

Is there a way to keep a single URL, domain, or IP for communication between Docker containers, and between localhost and containers?

I am working on a web app where a single environment variable is used for specifying a certain server (a REST API), like this:
.env:
...
URL_SERVER_API="http://localhost:8080"
...
The application is running inside a container, and it uses the server API variable for two things related to my problem:
It generates and serves dynamic HTML, where it appends URL_SERVER_API to build full API URLs, for example {{URL_SERVER_API}}/someendpoint
It calls the API directly from a (PHP) script using cURL, defining the endpoint in the same fashion as 1
So I end up with a situation where, if I set URL_SERVER_API to localhost:8080, the main application forms valid URLs to call, because the API app (which is also running in a Docker container) is exposed on the corresponding port, but the cURL calls don't work, because localhost:8080 is not a known server inside the container.
I also configured a bridge network and attached both apps to it, and I was able to ping the API from the main app successfully (e.g. ping api_docker). When I then set URL_SERVER_API=api_docker, the cURL calls to the API succeed, but the HTML files returned from the main app are constructed with URLs the browser cannot reach, like http://api_docker/someendpoint.
I hope you can see my issue.
I am able to work around the issue by having two variables, URL_SERVER_API and URL_SERVER_API_INTERNAL, using the first for HTML serving and the second for the cURL calls, but adding new variables for people to remember doesn't seem like the best solution, especially since I am not the one in charge of that.
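For illustration, the two-variable workaround just looks like this in the .env file (the internal host name comes from my bridge-network test above; the internal port is an assumption, i.e. whatever port the API container actually listens on):

.env:
...
# used when generating HTML; must be resolvable by the browser on the host
URL_SERVER_API="http://localhost:8080"
# used by the PHP cURL calls; must be resolvable from inside the container
URL_SERVER_API_INTERNAL="http://api_docker:8080"
...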
Thanks for taking the time to read this.

Intercept all REST API requests made from the local machine

I have a large Java application which connects to hundreds of cloud-based systems using their REST APIs and fetches data from those systems.
To connect to those different cloud systems we have different modules, and each one takes a different approach to calling REST APIs: some modules use the Apache REST client, some use Google's REST client.
So there is no centralized place where the REST APIs are called.
I have to track the performance of the application, e.g. fetching account info from the test system takes 1 hour, and this process needs:
4 API calls to https://test/api/v2/accounts (this returns all account IDs)
8000 API calls to https://test/api/v2/accounts/{accountId} (this returns the details of each account)
I need to track the time taken by each API to respond and, based on that, calculate the time taken by the application to process that data.
The important part here is detailed API analysis, producing graphical data if possible, e.g.:
4 API calls to https://test/api/v2/accounts -- took 3 minutes
8000 API calls to https://test/api/v2/accounts/{accountId} -- took 48 minutes
I need some pointers on how I can achieve this: something like intercepting all REST API calls made to https://test/api/v2.
As you've probably already discovered, without some extra tweaking, Wireshark just shows you the connections at the FQDN level: you can't see which individual endpoint is called (because TLS, by design, hides the content of the connection). You have a few options though:
if you control the APIs that are being connected to, you can load the TLS keys into Wireshark, and it'll let you decrypt the TLS connection;
if you can force your app to use a proxy, you can use a Man-In-The-Middle (MITM) proxy (like Burp) to intercept the traffic; or
you can instrument your app to log destination and duration for all the API requests.
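For the last option, here is a minimal sketch of what that instrumentation could look like for modules that use Apache HttpClient 4.x (the class name and log format are made up, and each of the other client libraries in the app would need its own equivalent):

import org.apache.http.HttpRequest;
import org.apache.http.HttpResponse;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.protocol.HttpContext;
import org.apache.http.protocol.HttpCoreContext;

public final class TimingClientFactory {

    private static final String START_ATTR = "timing.start.nanos";

    // Builds a client that logs destination and duration for every request.
    public static CloseableHttpClient create() {
        return HttpClients.custom()
            // Record the start time just before each request goes out.
            .addInterceptorLast((HttpRequest request, HttpContext context) ->
                context.setAttribute(START_ATTR, System.nanoTime()))
            // Log the endpoint and elapsed time when the response comes back.
            .addInterceptorLast((HttpResponse response, HttpContext context) -> {
                Long start = (Long) context.getAttribute(START_ATTR);
                HttpRequest request =
                    (HttpRequest) context.getAttribute(HttpCoreContext.HTTP_REQUEST);
                if (start != null && request != null) {
                    long millis = (System.nanoTime() - start) / 1_000_000;
                    System.out.println(request.getRequestLine().getUri()
                        + " took " + millis + " ms");
                }
            })
            .build();
    }
}

Aggregating those log lines per endpoint (e.g. grouping /accounts vs. /accounts/{accountId}) then gives you the per-API timings to graph.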

How to filter server names from application URLs in Grafana

I'm using Prometheus as a datasource and windows_exporter as the exporter on the monitoring server. I've made a variable for the server, where I want to show server names only. What should I write next to filter out the server names and leave out the application URLs?
If you have a metric that only contains server names, then yes. For example, if you have metrics like:
host_cpu_number{instance="192.168.1.1"}
host_cpu_number{instance="192.168.1.2"}
then you can use label_values(host_cpu_number, instance) to get 192.168.1.1 and 192.168.1.2.
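If the instance label in your metrics also contains application URLs (an assumption on my part, since the exact label values weren't shown), the variable's Regex field in Grafana can filter the output of label_values. For example, a pattern like
/^[^:\/]+$/
would keep bare host names or IP addresses and drop any value containing a scheme, port, or path; adjust the pattern to your naming scheme.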

How to find/define JMX key for ActiveMQ Artemis monitoring

I'm trying to set up monitoring of ActiveMQ Artemis with Zabbix. My intention is to monitor the availability of Artemis and also monitor the size and number of messages accumulating in queues, and set up alerts.
I enabled JMX on Artemis as the documents instruct, and I built the JMX example. From what I can tell, this only involves adding the following lines to these two files in the broker:
management.xml
<connector connector-port="1099" connector-host="192.168.56.101" />
Opened the port:
sudo ufw allow 1099
broker.xml
<jmx-management-enabled>true</jmx-management-enabled>
So I think JMX is enabled, although I haven't managed to confirm this.
In Zabbix I added the "host" (a system to monitor), but the next step is creating an "item" (a thing on the system). To do this I need a JMX key, something similar to jmx["java.lang:type=Memory","HeapMemoryUsage.used"]. (I tried this one, but I don't get any data back.) This defines the MBean to call.
So where can I find the keys for the available things to monitor on Artemis? Or have I screwed something up here and am not looking for the right thing?
In the example there is a JMXExample.java program. It connects to Artemis, publishes a message, uses JMX to count the messages, then removes the message -- but I don't see any keys to MBeans.
Also, in the admin console for Artemis there is a JMX tab, which lists what I think is all the available things to monitor. For example, I have a queue called "test.queue". Under the JMX tab I find:
org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue"
And there are numerous methods listed, including countMessages(). Have I answered my own question here? Is this what I'm looking for?
If so, how does it fit into this key format: jmx[object_name,attribute_name]?
EDIT:
I'm looking at the JMX tab on the console. If I understand correctly, the key should have a format like this: jmx[object_name,attribute_name]
So I see that the object name under the JMX tab for one of my test queues is:
And it has an attribute of: MessageCount
So I tried this, which doesn't work. I also tried replacing 0.0.0.0 with the IP address.
jmx[org.apache.activemq.artemis:broker="0.0.0.0",component=addresses,address="test.topic",subcomponent=queues,routing-type="multicast",queue="test.queue",MessageCount]
The default value for <jmx-management-enabled> is true so you don't need to explicitly configure that.
You can confirm that JMX is enabled by connecting to the broker using a tool like JConsole or JVisualVM which ship with the JVM. Ideally you would do this locally to avoid any network configuration issues.
The broker exposes lots of different MBeans for managing all parts of the broker. Here are the different "control" objects with their default MBean object naming pattern:
ActiveMQServerControl: <domain>:broker=<brokerName>
AddressControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>
QueueControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=queues,routing-type=<routingType>,queue=<queueName>
DivertControl: <domain>:broker=<brokerName>,component=addresses,address=<addressName>,subcomponent=diverts,divert=<divertName>
ClusterConnectionControl: <domain>:broker=<brokerName>,component=cluster-connections,name=<clusterConnectionName>
AcceptorControl: <domain>:broker=<brokerName>,component=acceptors,name=<acceptorName>
BroadcastGroupControl: <domain>:broker=<brokerName>,component=broadcast-groups,name=<broadcastGroupName>
BridgeControl: <domain>:broker=<brokerName>,component=bridges,name=<bridgeName>
The "key" that you use will depend on the name of the attribute from the control that you want to inspect. That name will correspond to the "getter" of the attribute. You can see all the names of all the getters in the linked JavaDoc. For example, if you want to get the number of messages from a queue you'd use the key MessageCount since the getter is named getMessageCount().
The domain by default is org.apache.activemq.artemis and the default broker name is localhost so if you didn't explicitly configure either of these and you wanted to get the message count of the anycast queue "myQueue" on the address "myAddress" you would use something like this:
jmx["org.apache.activemq.artemis:broker=\"localhost\",component=addresses,address=\"myAddress\",subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"",MessageCount]
This formatting is based on this Zabbix blog post, which is also discussed on this Zabbix forum thread.
To be clear, the JMXExample you cited uses a handy helper method named getQueueObjectName to construct the MBean's object name.
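If you want to sanity-check the object name outside of Zabbix, a small standalone JMX client works too. Here is a minimal sketch assuming the defaults above (broker name localhost, port 1099, and the example anycast queue "myQueue" on address "myAddress"):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueMessageCountCheck {
    public static void main(String[] args) throws Exception {
        // Same service URL you would give JConsole or JVisualVM.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Identical to the object name inside the Zabbix key above.
            ObjectName queue = new ObjectName(
                "org.apache.activemq.artemis:broker=\"localhost\","
                + "component=addresses,address=\"myAddress\","
                + "subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"");
            System.out.println("MessageCount = "
                + connection.getAttribute(queue, "MessageCount"));
        }
    }
}

If this prints the count but the Zabbix item still returns nothing, the problem is likely in the key quoting or the Zabbix Java gateway setup rather than in the broker.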
If you need to quickly get a broker up and running which supports remote JMX clients do the following:
Open the directory examples/features/standard/jmx in a terminal.
Run the example using mvn clean verify.
This will create a full broker instance in target/server0 which you can use as a template to configure your own. It includes modifications to broker.xml, management.xml, and artemis.profile (to set the java.rmi.server.hostname system property).
If you start this broker instance manually you can connect to it with JConsole or JVisualVM using service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi.

Routing to same instance of Backend container that serviced initial request

We have a multiservice architecture consisting of an HAProxy front end (we can change this to another proxy if required), a MongoDB database, and multiple instances of a backend app running under Docker Swarm.
Once an initial request is routed to an instance (container) of the backend app, we would like all future requests from mobile clients to be routed to the same instance. The backend app uses TCP sockets to communicate with a VoIP PBX.
Ideally we would like to control the number of instances of the backend app using the replicas key in the docker-compose file. However, if a container died and was recreated, we would require that mobile clients continue routing to the same container. The reason for this is that each container holds state info.
Is this possible with Docker Swarm? We are thinking that each instance of the backend app, when created, gets an identifier which is then used to do some sort of path-based routing.
HAProxy has what you need. This article explains it all.
As a conclusion of the article, you may choose from two solutions: IP source affinity to server, and application layer persistence. The latter solution is stronger/better than the first, but it requires cookies.
Here are some extracts from the article:
IP source affinity to server
An easy way to maintain affinity between a user and a server is to use the user's IP address: this is called Source IP affinity.
There are a lot of issues doing that and I'm not going to detail them right now (TODO++: another article to write).
The only thing you have to know is that source IP affinity is the last method you should use when you want to "stick" a user to a server.
Well, it's true that it will solve our issue as long as the user uses a single IP address and never changes IP address during the session.
Application layer persistence
Since a web application server has to identify each user individually, to avoid serving one user's content to another, we may use this information, or at least try to reproduce the same behavior in the load-balancer, to maintain persistence between a user and a server.
The information we’ll use is the Session Cookie, either set by the load-balancer itself or using one set up by the application server.
What is the difference between Persistence and Affinity
Affinity: this is when we use information from a layer below the application layer to maintain a client's requests to a single server
Persistence: this is when we use application layer information to stick a client to a single server
Sticky session: a sticky session is a session maintained by persistence
The main advantage of persistence over affinity is that it's much more accurate, but sometimes persistence is not doable, so we must rely on affinity.
Using persistence, we mean that we’re 100% sure that a user will get redirected to a single server.
Using affinity, we mean that the user may be redirected to the same server…
Affinity configuration in HAProxy / Aloha load-balancer
The configuration below shows how to do affinity within HAProxy, based on client IP information:
frontend ft_web
bind 0.0.0.0:80
default_backend bk_web
backend bk_web
balance source
hash-type consistent # optional
server s1 192.168.10.11:80 check
server s2 192.168.10.21:80 check
Session cookie setup by the Load-Balancer
The configuration below shows how to configure HAProxy / Aloha load balancer to inject a cookie in the client browser:
frontend ft_web
bind 0.0.0.0:80
default_backend bk_web
backend bk_web
balance roundrobin
cookie SERVERID insert indirect nocache
server s1 192.168.10.11:80 check cookie s1
server s2 192.168.10.21:80 check cookie s2
