Solace monitoring: looking for a Prometheus exporter

Is there a way to have a Solace Prometheus exporter?
For example, could the Solace monitoring server expose its metrics as a Prometheus-compatible web service?
Or could something like the Pushgateway be used for metrics published over a topic?

In the meantime we created an exporter:
https://github.com/dabgmx/solace_exporter
https://hub.docker.com/r/dabgmx/solace-exporter
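A minimal sketch of how such an exporter could be scraped from Prometheus; the hostname and the port are placeholders and should be taken from the exporter's README:

# prometheus.yml (target address is a placeholder)
scrape_configs:
  - job_name: 'solace'
    static_configs:
      - targets:
          - 'solace-exporter.example.com:9628'   # port depends on how the exporter is configured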

Related

Is there any possibility of monitoring Apache Flume using the REST API & JMX?

I have just started working with Apache Flume. I have installed Apache Hadoop, Java 8 (64 bit) and Apache Flume in the same directory.
Now I need to monitor Apache Flume.
Is there any possibility of monitoring Apache Flume using the REST API and JMX?
The Prometheus JMX Exporter or Jolokia can be added to Flume's JVM arguments, for example as shown below.
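A minimal sketch of attaching the Prometheus JMX Exporter to Flume as a Java agent via JAVA_OPTS; the jar path, the port 9404 and the config file location are assumptions:

# conf/flume-env.sh (paths and port are placeholders)
export JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent.jar=9404:/opt/jmx_exporter/flume.yaml"

# /opt/jmx_exporter/flume.yaml - start with a catch-all rule that exports all MBeans
rules:
  - pattern: ".*"

Prometheus can then scrape http://<flume-host>:9404/metrics.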

PushProx setup in Kubernetes for metrics from the /federate endpoint of a Prometheus deployed with the kube-prometheus-stack Helm chart

I need insight on how to set up a PushProx client in Kubernetes that will communicate with kube-prometheus and fetch all the metrics from /federate.
For reference I used the Helm chart https://artifacthub.io/packages/helm/devopstales/pushprox. I created a pod template using this Helm chart as a reference, because it adds a DaemonSet, which is not required in my case. I only want my client to interact with the internal Prometheus so it can get all the metrics and send them to my external Prometheus. Currently, the proxy is able to connect, but I see a 404 response on the proxy side when it interacts with the client.
Any help is appreciated.
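A minimal sketch of what the external Prometheus scrape job could look like when it pulls /federate through the PushProx proxy; the proxy address, the internal Prometheus service name and the match[] selector are assumptions to be adapted to the actual cluster:

# external Prometheus scrape config (service names and ports are placeholders)
scrape_configs:
  - job_name: 'federate-via-pushprox'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job!=""}'
    proxy_url: 'http://pushprox-proxy.monitoring.svc:8080'
    static_configs:
      - targets:
          - 'prometheus-operated.monitoring.svc:9090'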

Collect metrics from a Prometheus server into Telegraf

I have a Prometheus server running on a K8s instance and Telegraf on a different cluster. Is there some way to pull metrics from the Prometheus server using Telegraf? I know Telegraf supports scraping metrics from Prometheus clients, but I am looking to get these metrics from the Prometheus server itself.
Thanks
There is a feature inside data sources called scrapers (it's a tab); you just need to put in the URL of the server.
I am trying to configure this using the CLI, but I can only do it with the GUI.
There is a Prometheus remote write parser (https://github.com/influxdata/telegraf/tree/master/plugins/parsers/prometheusremotewrite); I think it will be included in the 1.19.0 release of Telegraf. If you want to try it out now you can use a nightly build (https://github.com/influxdata/telegraf#nightly-builds).
Configure your Prometheus remote_write to point at Telegraf and configure the input plugin to listen for traffic on the port that you configured. For convenience's sake, have the output plugin configured to file so you can see the metrics in a file almost immediately. A sketch of such a configuration is shown below.
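A minimal sketch of that setup, assuming the http_listener_v2 input is used as the receiver; the port, the /receive path and the Telegraf hostname are placeholders:

# telegraf.conf (port and path are placeholders)
[[inputs.http_listener_v2]]
  service_address = ":1234"
  path = "/receive"
  data_format = "prometheusremotewrite"

[[outputs.file]]
  files = ["stdout"]

# prometheus.yml on the Prometheus server
remote_write:
  - url: "http://<telegraf-host>:1234/receive"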

Telegraf - send Error/0/false if input plugin (service) is down

I need to calculate my service uptime (e.g. Redis, Memcached)
= successful metric-fetch attempts / *total* metric-fetch attempts (every 10 sec over some period).
Can I somehow configure Telegraf to send 0/false if my input (service) is down?
Because right now, if the input service is down, InfluxDB doesn't receive any new metric points from Telegraf (only error logs on the Telegraf daemon side).
Daniel Nelson answered here https://github.com/influxdata/telegraf/issues/4563#issuecomment-413653844
that each plugin can add custom metrics to the internal plugin (example: the http_listener input plugin), as sketched below.
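A minimal sketch of enabling Telegraf's internal input plugin, whose agent/plugin gather statistics (e.g. the gather_errors counter in the internal_agent measurement) could be used to derive an uptime ratio; the exact measurement and field names should be checked against your Telegraf version:

# telegraf.conf
[[inputs.internal]]
  # skip Go runtime memstats, keep only agent/plugin statistics
  collect_memstats = false

Uptime could then be approximated in the query layer as successful gathers divided by total gathers over the chosen window.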

How to connect to Flink remotely via JMX?

For my upcoming bachelor's thesis I want to develop a tool that collects system and application data from Apache Flink and sends this data as some kind of "events" to another system. This tool will be installed on Flink JobManager and TaskManager nodes. Besides data from Linux system utilities like dstat, I would like to collect JMX data.
My problem is that I couldn't figure out how to connect to Flink's JobManager via a remote JMX connection on a known port. Although the collector will be on the same machine, I really want to avoid using a --javaagent to access JMX data of Flink's JVM.
Another problem is that I have a local Docker setup based on https://github.com/apache/flink/tree/master/flink-contrib/docker-flink (updated to flink-1.0.2) which I cannot connect to via JConsole, because I don't know how to "open" a JMX remote port for the JobManager and TaskManager.
Is there any way to achieve this?
Thanks in advance; any ideas are very much appreciated.
Solved!
I needed to add the following to flink-conf.yaml:

env.java.opts: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

Now it's possible to connect to the JobManager via JConsole.
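For the Docker part of the question, a minimal sketch under the assumption that the JMX port chosen above (9999) also needs to be pinned for RMI and published by the container; the hostname and image name are placeholders:

# flink-conf.yaml (RMI port and hostname pinned so the connection works from outside the container)
env.java.opts: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999 -Djava.rmi.server.hostname=<docker-host> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

# publish the port when starting the container
docker run -p 9999:9999 <flink-jobmanager-image>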
