Micrometer: publishing to JMX in a non-Spring Boot project - jmx

It seems like there is a lot of magic going on in Micrometer and Spring Boot to publish to the chosen monitoring system.
Is it possible to publish the information I gather with Micrometer to JMX in a non-Spring Boot application?
I added the dependency
implementation 'io.micrometer:micrometer-registry-jmx:latest.release'
and I added a Timer like this:
Timer timer =
    Timer.builder("name")
        .publishPercentiles(0, 0.5, 0.75, 0.95)
        .register(Metrics.globalRegistry);
but now I need to publish that data to JMX so I can see it in JConsole. I have searched the internet, but as I am fairly new to Micrometer and JMX, I have not yet found anything that helps me solve the problem.

I'm not sure what magic you are referring to; Spring Boot simply auto-configures the MeterRegistry instances for you.
If you want to use the JmxMeterRegistry, you need to create an instance of it yourself; see the docs and samples.
Then you can use it:
MeterRegistry registry = new JmxMeterRegistry(...);
Timer timer = Timer.builder("test").register(registry);
If you don't want to inject your MeterRegistry everywhere, you can use the global registry (see the docs):
MeterRegistry registry = new JmxMeterRegistry(...);
Metrics.addRegistry(registry);
Though I recommend injecting your MeterRegistry, especially if you are using any dependency injection solution.
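Putting it together, here is a minimal, self-contained sketch (the class name and meter name are placeholders); JmxConfig.DEFAULT and Clock.SYSTEM are the stock arguments the JmxMeterRegistry constructor accepts, and the meters show up in JConsole under the "metrics" JMX domain by default:
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.Timer;
import io.micrometer.jmx.JmxConfig;
import io.micrometer.jmx.JmxMeterRegistry;

import java.time.Duration;

public class JmxDemo {
    public static void main(String[] args) throws InterruptedException {
        // Create the JMX registry and add it to the global registry so that
        // meters registered against Metrics.globalRegistry are published to JMX.
        JmxMeterRegistry jmxRegistry = new JmxMeterRegistry(JmxConfig.DEFAULT, Clock.SYSTEM);
        Metrics.addRegistry(jmxRegistry);

        Timer timer = Timer.builder("name")
                .publishPercentiles(0.5, 0.75, 0.95)
                .register(Metrics.globalRegistry);

        // Record a couple of samples so there is something to look at in JConsole.
        timer.record(Duration.ofMillis(100));
        timer.record(Duration.ofMillis(250));

        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive while you inspect it
    }
}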

Related

Logging in vernemq plugin

I am trying to implement logging of connected users in my VerneMQ plugin using Erlang. From the documentation, I learned that this could be a bad idea because of scalability, on the assumption that many clients might be connecting and disconnecting. That is not my case: I will only have a handful of clients, but a lot of messages. Anyway, to my question: is it possible to change the log file when using error_logger, or should I use a different module for logging? The log file can be in any location if it has to be, but I need it separated from VerneMQ's console.log. A follow-up question: can I somehow get a rolling window of logs? I don't need to keep logs from the previous year, and I don't want to clean them up manually every day or week.
Thanks for any responses.
From OTP 21 on, you should use logger instead of error_logger, although the error_logger API is kept for compatibility (it just uses logger under the hood).
With logger, which you can configure through the system configuration, you can use file handlers such as logger_std_h (check the example configurations).
In logger_std_h you can also set up file rotation.
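For illustration, a sys.config along these lines (the handler id and file path are made up, and how you hook it into VerneMQ's own configuration may differ) adds a separate file handler with size-based rotation; max_no_bytes and max_no_files are the logger_std_h rotation options in recent OTP releases:
%% sys.config (illustrative): a dedicated rotating log file for the plugin
[
  {kernel, [
    {logger, [
      {handler, my_plugin_log, logger_std_h,
        #{config => #{file => "/var/log/my_plugin/plugin.log",
                      max_no_bytes => 10485760,   %% rotate at ~10 MB
                      max_no_files => 5}}}        %% keep at most 5 rotated files
    ]}
  ]}
].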

Rationale behind appending versions to Service/Deployment names on k8s with Spring Cloud Skipper

I am fairly new to the Spring Cloud Data Flow world. While playing around with the framework, I have a stream 'test-stream' with one application called 'app'. When I deploy it to Kubernetes using Skipper, I see that it creates the pod/deployment and service on Kubernetes with the name
test-stream-app-v1.
My question is: why do we need to have v1 in the service/deployment names on k8s? What role does it play in the overall workflow when using Spring Cloud Data Flow?
------ Follow-up ------
Just wanted to confirm a few points to make sure I am on the right track in understanding the flow.
My understanding is that with a traditional stream (bound through Kafka topics), the service (the Kubernetes object) does not play a significant role.
The rolling update (red/black) pattern is implemented in Skipper in the following way, and the versioning of the deployment/service plays its part in it:
Let's assume that the app-v1 deployment already exists and an upgrade is requested. Skipper creates the app-v2 deployment and waits for it to be ready. Once it is ready, Skipper destroys app-v1.
If my understanding above is right, I have the following follow-up questions:
I see that Skipper can deploy any package (and it does not have to be a traditional stream). Is that the longer-term plan, or is Skipper only intended to work with spring-cloud-dataflow streams?
In the case of a non-traditional stream package, where a package has multiple apps (REST microservices) in a group, how will this versioning model work? I mean, when I want to call one microservice from another, I cannot possibly know (or it would be less than ideal to have to know) the release version of the app.
@Anand. Congrats on the 1st post!
The naming convention follows the idea that each stream application is "versioned" when Skipper is used with SCDF. The version gets bumped when, as a user, you rolling-upgrade or rolling-downgrade the streaming-application versions or the application-specific properties, either on demand or via CI/CD automation.
It is very relevant for continuous-delivery and continuous-deployment workflows, and we provide native options in SCDF through commands such as stream update .. and stream rollback .. respectively. For any of these operations, the applications will be rolling-updated in K8s, and each action will bump the number in the application name. In your example, you'd see them as test-stream-app-v1, test-stream-app-v2, etc.
With all the historical versions in a central place (i.e., Skipper's database), you'd be able to interact with them via the stream history .. and stream manifest .. commands in SCDF.
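To make that concrete, the interaction from the SCDF shell looks roughly like this (the stream name, app name, and version are placeholders):
dataflow:>stream update --name test-stream --properties "version.app=1.1.0.RELEASE"
dataflow:>stream history --name test-stream
dataflow:>stream manifest --name test-stream
dataflow:>stream rollback --name test-stream --releaseVersion 1
Each update or rollback rolls the affected application in Kubernetes and bumps the -vN suffix you observed.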
To learn more about all this, watch this demo-webinar (starts at ~41:25), and also have a look at the samples in the reference guide.
I hope this helps.

Which metrics are exported by DropwizardMetrics (Prometheus client)?

I have implemented a web service using Dropwizard.
It runs on Kubernetes and we use Prometheus for logging there.
The web service exports Dropwizard Metrics using the Java client implementation in DropwizardExports.java.
This is the relevant code:
private void registerMetrics(Environment environment) {
    CollectorRegistry collectorRegistry = CollectorRegistry.defaultRegistry;
    new DropwizardExports(environment.metrics()).register(collectorRegistry);
    environment.admin().addServlet("metrics", new MetricsServlet(collectorRegistry))
            .addMapping("/metrics");
}
I cannot find a reference documenting exactly which metrics are exported and what their specific purpose is. Even though I can look at the output and figure out most of it, there does not seem to be comprehensive documentation. The code is not simple enough (for me) to look it up there either.
Am I missing something?
I should mention that I am quite new to the Prometheus ecosystem. A pointer to a standard/default that is implemented by DropwizardExports might also help.
DropwizardExports exposes whatever metrics are registered in that Dropwizard metrics environment, so you should look at the Dropwizard documentation to see what they mean.
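As a rough illustration (the metric name below is made up), everything registered on the MetricRegistry you wrap is exported with a sanitized name; a Dropwizard Timer, for example, shows up as a Prometheus summary with quantile samples plus a _count series:
// Any meter registered on environment.metrics() is picked up by DropwizardExports.
com.codahale.metrics.Timer requests = environment.metrics().timer("api.requests");
try (com.codahale.metrics.Timer.Context ignored = requests.time()) {
    // ... handle a request ...
}
// On /metrics this appears (dots sanitized to underscores) roughly as:
//   api_requests{quantile="0.5"} ...
//   api_requests_count ...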

Measuring total number of active (HTTP) connections with Metrics in Dropwizard application

I have a Dropwizard application I am working with where I need to be able to monitor active HTTP connections. I know Metrics provides classes for instrumenting Jetty, and of interest to me is measuring the total number of active connections; however, the javadoc doesn't help me much, and I can't find any examples of how this specific functionality is implemented. Does anyone have any examples they can share?
I'm not sure what your exact use case is, but if you just need to be able to see active connections, I think the simplest solution is to use a monitoring tool like JMX, Datadog, New Relic, or AppDynamics. If you need it in the code, I think you'd need to implement something manually. I'd recommend StatsD or Redis if that's the path you go down.
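If you do want the number inside the application itself, one possible sketch (not the official metrics-jetty instrumentation; the metric name and wiring are assumptions on my part) is to drive a plain Dropwizard Counter from a Jetty Connection.Listener added to each connector:
import com.codahale.metrics.Counter;
import io.dropwizard.setup.Environment;
import org.eclipse.jetty.io.Connection;
import org.eclipse.jetty.server.Connector;

public class ConnectionCounting {
    // Call this from your Application#run(Configuration, Environment).
    public static void track(Environment environment) {
        Counter active = environment.metrics().counter("http.connections.active");
        environment.lifecycle().addServerLifecycleListener(server -> {
            for (Connector connector : server.getConnectors()) {
                // Jetty notifies Connection.Listener beans on the connector
                // whenever a connection is opened or closed.
                connector.addBean(new Connection.Listener() {
                    @Override
                    public void onOpened(Connection connection) { active.inc(); }

                    @Override
                    public void onClosed(Connection connection) { active.dec(); }
                });
            }
        });
    }
}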

Flume automatic scalability and failover

My company is considering using flume for some fairly high volume log processing. We believe that the log processing needs to be distributed, both for volume (scalability) and failover (reliability) reasons, and Flume seems the obvious choice.
However, we think we must be missing something obvious, because we don't see how Flume provides automatic scalability and failover.
I want to define a flow that says: for each log line, do thing A, then pass it along and do thing B, then pass it along and do thing C, and so on, which seems to match well with Flume. However, I want to be able to define this flow in purely logical terms and then basically say, "Hey Flume, here are the servers, here is the flow definition, go to work!" Servers will die (and ops will restart them), we will add servers to the cluster and retire others, and Flume should just direct the work to whatever nodes have available capacity.
This description is how Hadoop map-reduce implements scalability and failover, and I assumed that Flume would be the same. However, the documentation seems to imply that I need to manually configure which physical servers each logical node runs on, and configure specific failover scenarios for each node.
Am I right, and Flume does not serve our purpose, or did I miss something?
Thanks for your help.
Depending on whether you are using multiple masters, you can code your configuration to follow a failover pattern.
This is fairly detailed in the guide: http://archive.cloudera.com/cdh/3/flume/UserGuide/index.html#_automatic_failover_chains
To answer your question bluntly: Flume does not yet have the ability to figure out a failover scheme automatically.
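(That guide covers the old Cloudera Flume 0.9 master-driven failover chains. For comparison, in the newer Apache Flume NG the same failover-by-configuration idea is spelled out explicitly with a failover sink processor; the agent and sink names below are placeholders.)
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = k1 k2
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 5
agent1.sinkgroups.g1.processor.maxpenalty = 10000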
