I have implemented a web service using Dropwizard.
It runs on Kubernetes, and we use Prometheus for monitoring there.
The web service exports Dropwizard Metrics using the Java client implementation in DropwizardExports.java.
This is the relevant code:
private void registerMetrics(Environment environment) {
    CollectorRegistry collectorRegistry = CollectorRegistry.defaultRegistry;
    new DropwizardExports(environment.metrics()).register(collectorRegistry);
    environment.admin().addServlet("metrics", new MetricsServlet(collectorRegistry))
            .addMapping("/metrics");
}
I cannot find a reference documenting exactly which metrics are exported and what each one is for. Even though I can look at the output and figure out most of it, there does not seem to be comprehensive documentation, and the code is not simple enough (for me) to work it out from the source either.
Am I missing something?
I should mention that I am quite new to the Prometheus ecosystem. A pointer to a standard/default that is implemented by DropwizardExports might also help.
DropwizardExports exposes whatever metrics are registered in that Dropwizard metrics environment (its MetricRegistry), so you should look at the Dropwizard Metrics documentation to see what they mean.
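For orientation (this is not an official reference, just the behaviour as far as I can tell): DropwizardExports walks the MetricRegistry and converts each Dropwizard metric type to the nearest Prometheus type, roughly Gauges and Counters to Prometheus gauges, Meters to counters, and Histograms and Timers to summaries with quantiles, with metric names sanitized (dots become underscores). A small self-contained sketch that dumps the resulting exposition text, using made-up metric names:

import com.codahale.metrics.MetricRegistry;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.dropwizard.DropwizardExports;
import io.prometheus.client.exporter.common.TextFormat;

import java.io.StringWriter;

public class DropwizardExportsDemo {
    public static void main(String[] args) throws Exception {
        MetricRegistry dropwizard = new MetricRegistry();

        // Whatever is registered here (or whatever Dropwizard's Environment
        // registers for you: JVM, Jetty, Jersey resources, ...) is exactly
        // what DropwizardExports will expose.
        dropwizard.counter("jobs.submitted").inc();
        dropwizard.timer("requests").time().close();

        CollectorRegistry prometheus = new CollectorRegistry();
        new DropwizardExports(dropwizard).register(prometheus);

        // Dump the exposition format to see which samples come out,
        // e.g. jobs_submitted as a gauge and requests as a summary with quantiles.
        StringWriter out = new StringWriter();
        TextFormat.write004(out, prometheus.metricFamilySamples());
        System.out.println(out);
    }
}

Running something like this against your own registry is probably the quickest way to see exactly which samples your service exports.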
My requirement is to monitor the company's helpdesk system, for example https://xyz.zendesk.com, from the Kubernetes cluster in which everything runs.
Zendesk provides an API set to monitor this efficiently.
We can easily check the status using curl
$ curl -s "https://status.zendesk.com/api/components/support?domain=xyz.zendesk.com" | jq '.active_incidents'
[]
The above output (an empty array) means a success status, according to the Zendesk documentation.
Now the main part: the company uses Prometheus to monitor everything.
How can I have Prometheus check the success status from the output of this curl command?
I have already done some research and found somewhat related threads here about using the Pushgateway.
Are they applicable to my requirement, or am I going down the wrong route?
What you probably (!?) want is something that:
1. Provides an HTTP(s) (e.g. /metrics) endpoint
2. Producing metrics in Prometheus' exposition format
3. From Zendesk's API
NOTE curl only gives you #3
There are some examples of solutions that appear to meet the requirements but none is from Zendesk:
https://www.google.com/search?q=%22zendesk%22+prometheus+exporter
There are two other lists of Prometheus exporters (neither contains Zendesk):
https://prometheus.io/docs/instrumenting/exporters/
https://github.com/prometheus/prometheus/wiki/Default-port-allocations
I recommend you contact Zendesk and ask whether there's a Prometheus Exporter already. It's surprising to not find one.
It is straightforward to write a Prometheus exporter. Prometheus client libraries and Zendesk API clients are available and preferred; while bash is possible, it is probably sub-optimal.
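To make that concrete, here is a minimal sketch of such an exporter using the Prometheus Java client (simpleclient plus simpleclient_httpserver). The status URL is the one from the question; the metric name zendesk_support_healthy, the port 9400, and the naive string check standing in for real JSON parsing are all illustrative assumptions of mine, not anything Zendesk or Prometheus prescribe:

import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.HTTPServer;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ZendeskStatusExporter {
    // 1 = no active incidents, 0 = incidents present or the status API unreachable.
    private static final Gauge HEALTHY = Gauge.build()
            .name("zendesk_support_healthy")
            .help("1 if the Zendesk status API reports no active incidents")
            .register();

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(
                "https://status.zendesk.com/api/components/support?domain=xyz.zendesk.com"))
                .build();

        // Expose /metrics on :9400 for Prometheus to scrape (uses the default registry).
        HTTPServer server = new HTTPServer(9400);

        while (true) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // Deliberately naive: treat an empty active_incidents array as healthy.
                // A real exporter should parse the JSON properly.
                boolean healthy = response.statusCode() == 200
                        && response.body().contains("\"active_incidents\":[]");
                HEALTHY.set(healthy ? 1 : 0);
            } catch (Exception e) {
                HEALTHY.set(0);
            }
            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}

Prometheus would then scrape this exporter like any other target, and you could alert on zendesk_support_healthy == 0.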
If all you want to do is GET that static endpoint, get a 200 response code, and confirm that the body is [], you may be able to use the Prometheus Blackbox exporter.
NOTE Logging and monitoring tools often provide a higher-level tool that acts as something analogous to a "universal translator", translating 3rd-party systems' native logging/monitoring formats into some canonical form using config rather than code; fluentd is an example from the logging space. To my knowledge there is no such tool for Prometheus, but I sense there's an opportunity for someone to create one.
We need to monitor Neo4j hosted on a GCP VM instance, for which we are using Prometheus. Neo4j natively supports sending metrics to Prometheus.
Now we need to create a dashboard in Stackdriver Monitoring using the exposed Prometheus metrics.
Any suggestions/help would be appreciated.
Thanks in advance!
Well, I tried to find something detailed about this particular scenario, but all I found relates to GKE with Prometheus as a sidecar. However, the configuration should be similar, and you will probably find this documentation useful. That guide will also lead you to this official documentation.
I also found another Neo4j user trying to do something similar, and there is an answer there that you may want to check. For logging Neo4j to Stackdriver Logging, see this.
I have a Dropwizard application I am working with where I need to be able to monitor active HTTP connections. I know Metrics provides classes for instrumenting Jetty, and of interest to me is measuring the total number of active connections; however, the javadoc doesn't help me much, and I can't find any examples of how this specific functionality is implemented. Does anyone have any examples they can share?
I'm not sure what your exact use-case is, but if you just need to be able to see active connections, I think the simplest solutions are monitoring tools like JMX, Datadog, New Relic, or AppDynamics. If you need it in the code, I think you'd have to implement something manually. I'd recommend StatsD or Redis if that's the path you go down.
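That said, the metrics-jetty9 module does ship connection instrumentation. Below is a rough sketch with an embedded Jetty server; the port and the http.connections metric name are my own illustrative choices, and in a stock Dropwizard application the framework already wires an instrumented connection factory into its connectors, so similar timers should appear on the admin /metrics page without extra code. If I remember correctly, newer versions of InstrumentedConnectionFactory also accept a Counter that tracks currently open connections, which is the closest thing to an "active connections" gauge:

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.jetty9.InstrumentedConnectionFactory;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

import java.util.concurrent.TimeUnit;

public class InstrumentedJettyExample {
    public static void main(String[] args) throws Exception {
        MetricRegistry metrics = new MetricRegistry();
        Server server = new Server();

        // Wrap the plain HTTP connection factory; every connection's lifetime is
        // recorded in the "http.connections" timer, so its rates and counts
        // describe connection churn on this connector.
        HttpConnectionFactory http = new HttpConnectionFactory();
        ServerConnector connector = new ServerConnector(server,
                new InstrumentedConnectionFactory(http, metrics.timer("http.connections")));
        connector.setPort(8080);
        server.addConnector(connector);

        // Print the metrics periodically just to see them; in a real setup you would
        // expose them via the Dropwizard admin servlet or DropwizardExports instead.
        ConsoleReporter.forRegistry(metrics).build().start(10, TimeUnit.SECONDS);

        server.start();
        server.join();
    }
}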
Where I work, we are migrating our entire infrastructure, which until now was based on monolithic services running directly on Windows/Linux VMs, to a Docker-based architecture orchestrated by Kubernetes.
One of the things that came to my mind is how we would handle logs in this new infrastructure.
Up until now, each app had its own way of handling logs: some used log4net/log4j to write to the file system, some wrote to GrayLog via a dedicated library.
The main problem I have with that is that one of the core ideas of programming microservices in a Docker environment is that every service should assume as little as possible about the rest of the services or the platform.
So basically I was looking into how I can abstract the logging process from the application and make it independent of the rest of the infrastructure.
One interesting thing I found is that you can write the logs to standard output (stdout) and then configure Kubernetes to collect these logs and forward them to centralised storage or a centralised logging server (like GrayLog): https://kubernetes.io/docs/concepts/cluster-administration/logging/
I have several concerns with this approach. For one, I haven't seen many companies doing it; the most popular logging solutions use a dedicated library to log to the filesystem.
I am also concerned about how it might impact performance: some languages block when you write to stdout, whereas with a standard logging library the logs are queued.
So what about services that output a massive amount of user-related logs?
I am interested in what you think. I haven't seen this approach used widely, so maybe there is a reason for that.
Logging to any stream (file, stdout, GrayLog...) can be either synchronous (blocking) or asynchronous (non-blocking); inherently, that has nothing to do with the medium you log to per se. It is true that using System.out.println in Java will result in heavy thread contention.
All the major logging frameworks (like log4j) provide you with the means to log asynchronously to whatever medium you like.
Your perception that not many companies do this is, I think, wrong. Logging to stdout and configuring the underlying infrastructure to forward the logs somewhere is the de facto standard for all PaaS/containerized applications.
So my tip is: log to stdout using a good logging framework that ensures asynchronous usage of the stream, as sketched below. For the rest you'll probably be fine.
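To make that concrete, here is a rough sketch of an asynchronous appender in front of stdout using Logback, configured programmatically purely for illustration (normally you would express the same thing in logback.xml); the pattern, queue size, and class name are my own choices, not something your stack requires:

import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;
import org.slf4j.LoggerFactory;

public class StdoutLoggingSetup {
    public static void main(String[] args) {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Plain console appender: one line of text per event on stdout,
        // which Kubernetes picks up via the container runtime.
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(ctx);
        encoder.setPattern("%d{ISO8601} %-5level [%thread] %logger - %msg%n");
        encoder.start();

        ConsoleAppender<ILoggingEvent> console = new ConsoleAppender<>();
        console.setContext(ctx);
        console.setEncoder(encoder);
        console.start();

        // Async wrapper: logging calls enqueue the event and return immediately;
        // a background thread drains the queue into the console appender.
        AsyncAppender async = new AsyncAppender();
        async.setContext(ctx);
        async.setQueueSize(8192);
        async.setNeverBlock(true); // drop events rather than block if the queue fills up
        async.addAppender(console);
        async.start();

        Logger root = ctx.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.detachAndStopAllAppenders();
        root.addAppender(async);

        LoggerFactory.getLogger(StdoutLoggingSetup.class).info("hello from stdout");
    }
}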
I am trying to write a small script that will help me automate some of my IT tasks related to VLAN management.
I do not want to log in to my switch via the command line; I want to send commands to it and get responses over the network.
Are there any alternatives? I have started to search the web, but so far I have not found anything.
I know SNMP is an option for gathering information, but I want to check other alternatives.
Thanks.
You can try the NETCONF Configuration Protocol; it is an RPC-like management protocol supported by Cisco and many other vendors.
SNMP is the only widely and commonly used option here.
You can use WMI to manage Windows-based infrastructure.
There is also the legacy SYSLOG protocol (RFC 3164), which is UDP-based.
For traffic monitoring and billing purposes there are the NetFlow, sFlow, jFlow, IPFIX and RADIUS protocols.
There are some other protocols, but they are mostly proprietary.
So I'd suggest using SNMP, which is nowadays the de facto standard in the network monitoring domain; a small example of a programmatic SNMP query follows below.
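For illustration, here is a minimal read of a switch's sysName using the SNMP4J library; the address, community string, and timeout values are placeholder assumptions, and VLAN changes would be SET requests against the vendor's VLAN MIB in the same style:

import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class SnmpGetExample {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        // Placeholder target: a v2c read community on an example switch address.
        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));
        target.setAddress(GenericAddress.parse("udp:192.0.2.1/161"));
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(2);
        target.setTimeout(1500);

        // GET sysName.0 (1.3.6.1.2.1.1.5.0) as a trivial identity/reachability check.
        PDU pdu = new PDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.5.0")));

        ResponseEvent response = snmp.send(pdu, target);
        if (response != null && response.getResponse() != null) {
            System.out.println(response.getResponse().getVariableBindings());
        } else {
            System.out.println("No response (timeout?)");
        }
        snmp.close();
    }
}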
You might look at Expect as a scripting-language solution. It is commonly used to do exactly what you need:
log into device (with result cases)
execute commands
save config
logout
As you build out a script library, tasks become simpler, since you can do things like run scripts with parameters and have Expect do all the detail work.
See the Wikipedia article for an overview.
I have also used SNMP for this kind of thing, but the functionality is different: you use SNMP read-write access to upload partial or complete configs, save the running config to flash, and/or save the config off-device.
Try the NETCONF+YANG protocol; it is currently the best option for network device configuration. More about SNMP alternatives:
https://bestmonitoringtools.com/top-snmp-alternatives-because-snmp-is-dying/