I wonder whether reactor-core-micrometer is publicly available? I could not find it in Maven Central.
I ask because I noticed that Schedulers.enableMetrics() is deprecated, but I could not find its suggested replacement, Micrometer.enableSchedulersMetricsDecorator(), to be publicly available.
Reactor provides a built-in integration with Micrometer. Check Exposing Reactor metrics for details.
You could also check the end-to-end demo based on Prometheus & Grafana that exposes both Reactor's scheduler metrics and business metrics: Reactor monitoring demo.
Update
Starting from v3.5.0, a new module, reactor-core-micrometer, is introduced as a more explicit way of bringing
metrics into reactor-core. The entry point is the Micrometer class.
As of now, v3.5.0 is not yet released and not available on Maven Central, but milestone dependencies are published to https://repo.spring.io/milestone (e.g. reactor-core-micrometer:1.0.0-M3)
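Until the release reaches Maven Central, the milestone repository has to be added explicitly. A minimal Maven sketch using the milestone coordinates mentioned above (the io.projectreactor group id is assumed from Reactor's usual coordinates):

```xml
<repositories>
  <repository>
    <id>spring-milestone</id>
    <url>https://repo.spring.io/milestone</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core-micrometer</artifactId>
    <version>1.0.0-M3</version>
  </dependency>
</dependencies>
```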
Related
I have used micrometer.io for most of my career to collect metrics. One of the coolest Micrometer features is its set of binders that collect information about the host system and JVM: https://micrometer.io/docs/ref/jvm, on the basis of which it was possible to run a Grafana dashboard without much effort: https://grafana.com/grafana/dashboards/4701
Currently, I am starting to learn about OpenTelemetry, but I cannot find a description of the above functionality. I do not want to use auto-instrumentation; I want to rely on a manual definition of what is to be measured. Can you show me a way to do this? How can I easily provide system/JVM metrics manually?
I don't think such a component exists in OTel; see:
Metrics API spec: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md
Metrics SDK spec: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md
See Is there an equivalent of Prometheus simpleclient_hotspot with Opentelemetry?.
You can use opentelemetry-java-instrumentation and manually register JVM metrics observers.
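For the fully manual route, note that the values such observers report come from the JDK's standard platform MXBeans, so you can read the same sources yourself and feed them into whatever asynchronous gauge callbacks your OTel SDK setup exposes. A JDK-only sketch of the underlying reads (no OTel types involved; the class and method names here are illustrative, not from any library):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmMetricsProbe {

    // Heap memory currently used, in bytes (same source as JVM memory observers).
    public static long heapUsedBytes() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
    }

    // Number of live threads.
    public static int liveThreadCount() {
        return ManagementFactory.getThreadMXBean().getThreadCount();
    }

    // Total GC collections across all collectors.
    public static long totalGcCount() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount(); // -1 if undefined for a collector
            if (count > 0) {
                total += count;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("jvm.memory.heap.used=" + heapUsedBytes());
        System.out.println("jvm.threads.live=" + liveThreadCount());
        System.out.println("jvm.gc.count=" + totalGcCount());
    }
}
```

Each of these reads would go into an asynchronous gauge/counter callback, so the SDK pulls a fresh value on every metric collection cycle.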
I have a Kafka Streams Java application up and running. I was trying to use KSQL to create simple queries and Kafka Streams for the complex solutions, and I wanted to run both KSQL and Kafka Streams as a
Java application.
I was going through https://github.com/confluentinc/ksql/blob/master/ksqldb-examples/src/main/java/io/confluent/ksql/embedded/EmbeddedKsql.java. Is there any documentation for EmbeddedKsql, or any working prototype?
ksqlDB 0.10 has been launched, and one of the newest features in it is the Java client.
Please go through https://www.confluent.io/blog/ksqldb-0-10-0-latest-features-updates/
The KsqlDB server does not have a supported Java API at this time. The project doesn't offer any guarantees of maintaining compatibility between releases.
If you were to run ksqlDB embedded in your Java application, then KsqlContext would be the class to play around with. But I'm not sure how up-to-date it is, nor can I guarantee it won't be removed in a future release. I'm afraid there isn't any documentation or examples to look at, as it's not a supported use.
The only supported way to communicate with ksqlDB is really through its HTTP endpoints. You could still embed the server in your own Java app and talk to it locally over HTTP, though running them in separate JVMs has many benefits.
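Since the HTTP endpoints are the supported surface, here is a minimal sketch using only the JDK's java.net.http client. The /ksql statement endpoint, the default port 8088, and the {"ksql": ..., "streamsProperties": {}} body shape follow the standard ksqlDB REST setup; the class name is a placeholder:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KsqlHttpExample {

    // Builds a POST against ksqlDB's /ksql statement endpoint.
    public static HttpRequest buildStatementRequest(String baseUrl, String ksql) {
        String body = "{\"ksql\": \"" + ksql + "\", \"streamsProperties\": {}}";
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/ksql"))
                .header("Accept", "application/vnd.ksql.v1+json")
                .header("Content-Type", "application/vnd.ksql.v1+json; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) throws InterruptedException {
        // Assumes a ksqlDB server listening on the default port 8088.
        HttpRequest request = buildStatementRequest("http://localhost:8088", "SHOW STREAMS;");
        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        } catch (IOException e) {
            System.out.println("ksqlDB server not reachable: " + e.getMessage());
        }
    }
}
```

The same pattern applies whether the server runs in a separate JVM or embedded in your app, which is part of why the HTTP surface is the one that stays supported.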
I have implemented a web service using Dropwizard.
It runs on Kubernetes and we use Prometheus for monitoring there.
The web service exports Dropwizard Metrics using the Java client implementation in DropwizardExports.java.
This is the relevant code:
private void registerMetrics(Environment environment) {
    CollectorRegistry collectorRegistry = CollectorRegistry.defaultRegistry;
    new DropwizardExports(environment.metrics()).register(collectorRegistry);
    environment.admin()
            .addServlet("metrics", new MetricsServlet(collectorRegistry))
            .addMapping("/metrics");
}
I cannot find a reference documenting which metrics are exported exactly and their specific purpose. Even though I can look at the output and figure out most of it, there does not seem to be a comprehensive documentation. The code is not simple enough (for me) to look it up either.
Am I missing something?
I should mention that I am quite new to the Prometheus ecosystem. A pointer to a standard/default that is implemented by DropwizardExports might also help.
DropwizardExports exposes whatever metrics are registered in that Dropwizard metrics environment, so you should look at the Dropwizard documentation to see what they mean.
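One concrete thing worth knowing when reading the output: the exporter keeps Dropwizard's dotted metric names but rewrites them into Prometheus-legal form, replacing every character outside [a-zA-Z0-9:_] with an underscore. The helper below is a hypothetical re-implementation of that transformation for illustration, not the library's API:

```java
public class PrometheusNameSanitizer {

    // Replaces any character not legal in a Prometheus metric name
    // ([a-zA-Z0-9:_]) with an underscore.
    public static String sanitize(String dropwizardName) {
        return dropwizardName.replaceAll("[^a-zA-Z0-9:_]", "_");
    }

    public static void main(String[] args) {
        // A typical Dropwizard-instrumented Jetty metric name...
        String name = "io.dropwizard.jetty.MutableServletContextHandler.active-requests";
        // ...becomes a flat, underscore-separated Prometheus name.
        System.out.println(sanitize(name));
    }
}
```

Knowing this mapping makes it much easier to trace a line in /metrics back to the Dropwizard metric (and its documentation) that produced it.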
I'm using Spring Cloud Dataflow local server and deploying 60+ streams with a Kafka topic and custom sink. The memory/cpu usage cost is not currently scalable. I've set the Xmx to 64m for most streams.
Currently exploring my options.
Disable embedded Tomcat server. I tried this once and SCDF couldn't tell the deployment status of the stream.
Group multiple Kafka "source" topics to a single sink app. This is allowed by Kafka but unclear if SCDF will permit subscribing to multiple topics.
Switch to using Kubernetes deployer. Won't exactly reduce the memory/cpu usage but distribute it across multiple machines. Haven't pursued this option because Kubernetes isn't used in my org yet. Maybe this will force the issue.
Open to other ideas. Might also be able to tweak Kafka configs such as max.poll.records and reduce memory usage.
Thanks!
First, I'd like to clarify the differences between SCDF and Stream/Task apps in the data pipeline.
SCDF is a lightweight Spring Boot app that includes the DSL, REST-APIs, and the Dashboard. Simply put, it serves as the orchestrator to define and deploy stream and task/batch data pipelines made of stream and task applications respectively.
The actual business logic, its performance, and the underlying resource consumption are at the individual Stream/Task application level. SCDF doesn't interfere with the apps' operation, nor does it contribute to the resource load. Everything, in the end, is a standalone Boot app, i.e. a standalone Java process.
Now, to your exploratory steps.
Disable embedded Tomcat server. I tried this once and SCDF couldn't tell the deployment status of the stream.
SCDF is a REST server, and it requires the application container (in this case Tomcat); you cannot disable it.
Group multiple Kafka "source" topics to a single sink app. This is allowed by Kafka but unclear if SCDF will permit subscribing to multiple topics.
Again, there is no relation between SCDF and the apps. SCDF orchestrates full-blown Stream/Task applications (aka Boot apps) into a coherent data pipeline. If you have to produce to or consume from multiple Kafka topics, it is done at the application level. Check out the multi-io sample for more details.
There's also the facility to consume from multiple topics directly via a named destination. SCDF provides DSL/UI capabilities to build fan-in and fan-out pipelines. Refer to the docs for more details. This video could be useful, too.
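For example, with named destinations a sink can be wired straight to existing Kafka topics from the SCDF shell. A sketch, where the stream, topic, and app names are placeholders and the Kafka binder is assumed to map a named destination to the topic of the same name:

```
stream create --name topic1-ingest --definition ":topic1 > my-sink" --deploy
stream create --name topic2-ingest --definition ":topic2 > my-sink" --deploy
```

This keeps one sink app per topic while avoiding a dedicated source app in front of each, which can shave a JVM off every pipeline.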
Switch to using Kubernetes deployer.
SCDF's Local-server is generally recommended for development. Primarily because there's no resiliency baked into the Local-server implementation. For example, if the streaming apps crash for any reason, there's no mechanism to restart them automatically. This is exactly why we recommend either SCDF's Kubernetes or Cloud Foundry server implementations in production. The platform provides the resiliency and fault-tolerance by automatically restarting the apps under fault scenarios.
From a resourcing standpoint, once again, it depends on each application. They are standalone microservice applications, each doing a specific operation at runtime, and it comes down to how much resource the business logic requires.
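On the resourcing point, per-app JVM limits can be passed as deployment properties at deploy time rather than baked into each app. A sketch for the Local deployer; the property key and app name are assumptions to verify against your deployer's documentation:

```
stream deploy --name my-stream --properties "deployer.my-sink.local.javaOpts=-Xmx64m -Xss256k"
```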
I'm using FUSE ESB and I wonder: is there any possibility to connect some JMX monitor?
I have connected a JMX monitor to a normal Tomcat, and I think it would be a good idea to have control over the server load where I have my FUSE ESB instance.
Do you have any experience with it?
I will be grateful for any help.
You may want to read this Q&A as well, where monitoring of SMX / Fuse ESB is discussed:
Administration and Monitoring of Apache-Camel routes in ServiceMix
But the rule of thumb is that SMX / Fuse ESB runs on a JVM and offers JMX management capabilities, and any standard JMX-compliant tooling can access this information.
For example, with Camel we have an extensive number of JMX MBeans, through which you can gain details about your Camel applications, such as performance statistics, control the lifecycle of Camel routes and consumers, see thread pool utilization, and so forth.
FuseSource offers documentation about Fuse ESB. For example, there are some details about configuring JMX here: http://fusesource.com/docs/esb/4.4.1/esb_runtime/ESBRuntimeJMXConfig.html
Yep, you can use JMX (JConsole, VisualVM, etc.). It's enabled by default (see the /bin/servicemix shell script and /etc/system.properties for config).
See these links for more details (though they are a bit dated):
https://cwiki.apache.org/confluence/display/SMX4/Remote+JMX+connection
http://servicemix.apache.org/docs/4.4.x/users-guide/jmx.html
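Programmatic access works with the plain JDK JMX APIs as well. A sketch; the service URL (port 1099 with the jmxrmi path) and the Camel object-name domain are assumptions to check against your instance's etc/system.properties:

```java
import java.io.IOException;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ServiceMixJmxExample {

    // Builds the remote RMI-based JMX service URL for a given host/port.
    public static JMXServiceURL serviceUrl(String host, int port) throws IOException {
        return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
    }

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = serviceUrl("localhost", 1099);
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // List the Camel-related MBeans (routes, consumers, thread pools, ...).
            Set<ObjectName> names =
                    connection.queryNames(new ObjectName("org.apache.camel:*"), null);
            names.forEach(System.out::println);
        } catch (IOException e) {
            System.out.println("JMX endpoint not reachable: " + e.getMessage());
        }
    }
}
```

The same connection works from JConsole or VisualVM by pasting the service URL into their remote-connection dialog.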