Is it possible to disable Tomcat's local JMX server? - jmx

Tomcat provides various monitoring operations via JMX. The documentation describes how to enable and disable remote monitoring. But is it possible to disable local monitoring (i.e. from within a web app deployed to Tomcat itself)?
In other words, can I count on my webapp always being able to get the local MBeanServer using (for example) JmxUtils#locateMBeanServer?
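In practice, yes: any code running inside the Tomcat JVM can reach the platform MBeanServer directly through `java.lang.management.ManagementFactory`, which is what helpers like `JmxUtils#locateMBeanServer` fall back to. A minimal in-process sketch (class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class LocateMBeanServer {
    // Returns true when the local MBeanServer is reachable and the JVM's own
    // Runtime MBean is registered -- which is always the case in-process,
    // regardless of whether remote JMX is enabled.
    static boolean localJmxAvailable() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.isRegistered(new ObjectName("java.lang:type=Runtime"));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(localJmxAvailable()); // prints "true" in any JVM
    }
}
```

The remote-monitoring switches in Tomcat's documentation only control the RMI connector for external clients; they do not remove the in-process MBeanServer.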

Related

How can I access the real-time driver log (without the 5-minute lag) other than via the Azure Databricks Spark UI?

I am trying to integrate the driver logs with the Control-M scheduler.
How can I access the real-time driver log (without the 5-minute lag) other than via the Azure Databricks Spark UI? For example, using some API, or by reading the location where the logs are written in real time.
I am also planning to do Elastic analysis on top of it.
Such things (real-time collection of metrics or logs) are usually done by installing an agent (for example, Filebeat) via init scripts (global or cluster-level init scripts).
The actual script content depends heavily on the type of agent used, but Databricks' documentation contains some examples:
Blog post on setting up the Datadog integration
Notebook that shows how to set up an init script for Datadog

Spring Cloud Dataflow reducing cost of streams

I'm using the Spring Cloud Dataflow local server and deploying 60+ streams with a Kafka topic and a custom sink. The memory/CPU cost does not currently scale. I've set Xmx to 64m for most streams.
Currently exploring my options.
Disable embedded Tomcat server. I tried this once and SCDF couldn't tell the deployment status of the stream.
Group multiple Kafka "source" topics to a single sink app. This is allowed by Kafka but unclear if SCDF will permit subscribing to multiple topics.
Switch to using the Kubernetes deployer. This won't exactly reduce the memory/CPU usage, but it would distribute it across multiple machines. I haven't pursued this option because Kubernetes isn't used in my org yet; maybe this will force the issue.
Open to other ideas. Might also be able to tweak Kafka configs such as max.poll.records and reduce memory usage.
Thanks!
First, I'd like to clarify the differences between SCDF and Stream/Task apps in the data pipeline.
SCDF is a lightweight Spring Boot app that includes the DSL, REST-APIs, and the Dashboard. Simply put, it serves as the orchestrator to define and deploy stream and task/batch data pipelines made of stream and task applications respectively.
The actual business logic, its performance, and the underlying resource consumption live at the individual Stream/Task application level. SCDF doesn't interfere with the apps' operation, nor does it contribute to their resource load. Everything, in the end, is a standalone Boot app: a standalone Java process.
Now, to your exploratory steps.
Disable embedded Tomcat server. I tried this once and SCDF couldn't tell the deployment status of the stream.
SCDF is a REST server and requires a servlet container (in this case Tomcat); you cannot disable it.
Group multiple Kafka "source" topics to a single sink app. This is allowed by Kafka but unclear if SCDF will permit subscribing to multiple topics.
Again, there is no relation between SCDF and the apps. SCDF orchestrates full-blown Stream/Task applications (i.e., Boot apps) into a coherent data pipeline. If you have to produce to or consume from multiple Kafka topics, that is done at the application level. Check out the multi-io sample for more details.
There's also a facility to consume from multiple topics directly via named destinations. SCDF provides DSL/UI capabilities to build fan-in and fan-out pipelines. Refer to the docs for more details; this video could be useful, too.
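As a hedged sketch of the application-level approach: Spring Cloud Stream's Kafka binder lets a consumer binding subscribe to several topics by listing them as a comma-separated destination. The binding name and topic names below are placeholders:

```properties
# Illustrative: one sink binding consuming from two Kafka topics.
# "input", "orders", and "payments" are placeholder names.
spring.cloud.stream.bindings.input.destination=orders,payments
spring.cloud.stream.bindings.input.group=my-sink-group
```

Whether fan-in at the topic level or via SCDF named destinations fits better depends on how independent the source streams need to remain.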
Switch to using Kubernetes deployer.
SCDF's Local server is generally recommended for development, primarily because there's no resiliency baked into the Local-server implementation. For example, if the streaming apps crash for any reason, there's no mechanism to restart them automatically. This is exactly why we recommend either SCDF's Kubernetes or Cloud Foundry server implementations in production: the platform provides resiliency and fault tolerance by automatically restarting the apps under fault scenarios.
From a resourcing standpoint, once again, it depends on each application. They are standalone microservice applications, each doing a specific operation at runtime, and it comes down to how many resources the business logic requires.
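On trimming per-app memory: resource-related settings can be passed as deployment properties at stream-deploy time rather than baked into each app. A sketch for the local deployer, where the app name and values are illustrative:

```properties
# Illustrative stream deployment properties for the local deployer;
# "mysink" and the JVM options are placeholders.
deployer.mysink.local.javaOpts=-Xmx64m -Xss512k
deployer.mysink.count=1
```

This keeps the tuning per pipeline instead of per application build, which helps when 60+ streams share a handful of app artifacts.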

Stackdriver - monitor multiple JMX

I'm trying to configure Google Stackdriver to monitor multiple JVM instances on the same machine using this plugin: https://cloud.google.com/monitoring/agent/plugins/jvm
I created two blocks with different ServiceURL ports (for the different JMX endpoints) and different InstancePrefix values ("jvm" and "jvm2"), as described here: https://collectd.org/wiki/index.php/Plugin:GenericJMX
But I still don't see the second JVM in the Metrics Explorer.
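For reference, the setup described corresponds roughly to a collectd GenericJMX configuration like the following, with two Connection blocks; the ports, prefixes, and the "memory" Collect name (which refers to an MBean block defined elsewhere in the config) are illustrative:

```
<Plugin "java">
  LoadPlugin "org.collectd.java.GenericJMX"
  <Plugin "GenericJMX">
    <Connection>
      ServiceURL "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi"
      InstancePrefix "jvm"
      Collect "memory"
    </Connection>
    <Connection>
      ServiceURL "service:jmx:rmi:///jndi/rmi://localhost:9011/jmxrmi"
      InstancePrefix "jvm2"
      Collect "memory"
    </Connection>
  </Plugin>
</Plugin>
```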
Unfortunately, this kind of configuration is not currently supported by the Stackdriver monitoring agent. The suggested workaround is to use Custom Metrics.

How I can monitor status of my custom application in Cloudera Manager?

I have my own custom application. It works with Apache Kafka and has two main parts: a Producer and a Consumer.
Is there a way to monitor all running producers and consumers in Cloudera Manager (like the DataNodes of HDFS)? The first and main feature I need is showing the status of every instance (started or stopped).
Or is there perhaps other software (besides Cloudera Manager) that can do this?
Thanks
You can/should be using an APM tool. I work for AppDynamics; we provide deep transaction tracing and monitoring, among other things, for these types of apps. There are other leading APM tools as well, for example Dynatrace and New Relic.
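If adopting a full APM suite is too heavy, a lightweight alternative is to have the application expose its own status over JMX, which generic monitoring agents (jconsole, collectd's GenericJMX, etc.) can then scrape. A minimal sketch; the MBean interface, object name, and status values are all hypothetical, not part of any product's API:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class StatusMonitor {
    // Hypothetical standard-MBean interface; the XxxMBean naming
    // convention is required by JMX for standard MBeans.
    public interface AppStatusMBean {
        String getStatus();
    }

    public static class AppStatus implements AppStatusMBean {
        private volatile String status = "STARTED";
        public String getStatus() { return status; }
        public void setStatus(String s) { status = s; }
    }

    // Registers the status MBean (if not already present) and reads it back
    // the same way an external JMX client would.
    static String registerAndRead() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("myapp:type=AppStatus,instance=producer-1");
        if (!server.isRegistered(name)) {
            server.registerMBean(new AppStatus(), name);
        }
        return (String) server.getAttribute(name, "Status");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(registerAndRead());
    }
}
```

With remote JMX enabled on each producer/consumer process, one dashboard can then poll every instance's `Status` attribute.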

Load Balancing in ASP.NET MVC Web Application. What can/can't be done?

I'm in the middle of developing a web application and have been asked whether it will work behind a load balancer. My initial reaction is yes, since no state is tracked between requests anywhere in the system. However, there is some application-specific state loaded on app start (mainly configuration settings from the database).
This data is all read-only. Is it sufficient to rely on the normal cache-dependency mechanisms to manage this and invalidate these objects across all the applications in the cluster, or would I have to move to a shared cache system like AppFabric to ensure reliability/consistency?
With diagnostics enabled, I've got numerous logging calls using EventSource.Write and an out of process logger picking these up. I assume in this case, I'd need one logger installed on each of the servers in the cluster to pick up the events each one triggers. I'm not too fussed about that, but what is a good way to identify which server in the cluster serviced the request?
If you initialize the data on each server separately and it is read-only, there's no problem. Each application will have its own copy.
Yes, you'd need a logger on each instance. To identify the server, you could include the server's IP address in the log entry; that way you can track which server handled the request (provided you have static IPs, which I assume you do).
