Collect metrics from a Prometheus server with Telegraf / InfluxDB

I have a Prometheus server running on a K8s cluster and Telegraf on a different cluster. Is there some way to pull metrics from the Prometheus server using Telegraf? I know Telegraf supports scraping metrics from Prometheus clients, but I am looking to get these metrics from the Prometheus server itself.
Thanks

There is a tab called Scrapers under Data Sources; you just need to enter the URL of the server.
I am trying to configure this using the CLI, but I can only do it with the GUI.
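For what it's worth, on InfluxDB 2.x the scraper targets that the GUI tab creates can, as far as I know, also be managed with the influx CLI. A hedged sketch (name, URL, bucket ID, and org are placeholders; double-check the flags against `influx scraper-target create -h` for your version):

```shell
# Create a scraper target equivalent to the GUI "Scrapers" tab entry
# (all values below are placeholders)
influx scraper-target create \
  --name prom-server \
  --type prometheus \
  --url http://prometheus.example:9090/metrics \
  --bucket-id 000000000000000a \
  --org my-org
```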

There is a Prometheus remote write parser (https://github.com/influxdata/telegraf/tree/master/plugins/parsers/prometheusremotewrite); I think it will be included in the 1.19.0 release of Telegraf. If you want to try it out now, you can use a nightly build (https://github.com/influxdata/telegraf#nightly-builds).
Configure your Prometheus remote write towards Telegraf and configure the input plugin to listen for traffic on the port that you configured. For convenience's sake, configure the file output plugin so you can see the metrics in a file almost immediately.
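A minimal sketch of that Telegraf side, assuming the remote write parser is available in your build (the port, path, and output file are placeholders):

```toml
# Listen for Prometheus remote write traffic and parse it
# (service_address and path below are assumptions)
[[inputs.http_listener_v2]]
  service_address = ":1234"
  path = "/receive"
  data_format = "prometheusremotewrite"

# Write parsed metrics to a file so you can inspect them immediately
[[outputs.file]]
  files = ["/tmp/metrics.out"]
  data_format = "influx"
```

On the Prometheus side, you would then add a `remote_write` entry in prometheus.yml pointing at `http://<telegraf-host>:1234/receive`.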

Related

PushProx setup in Kubernetes for metrics from the federate endpoint of Prometheus deployed using the kube-prometheus-stack Helm chart

I need insight on how to set up a PushProx client in Kubernetes that will communicate with kube-prometheus and fetch all the metrics from /federate.
For reference I used the Helm chart https://artifacthub.io/packages/helm/devopstales/pushprox. I created a pod template using this Helm chart as a reference, since it adds a DaemonSet, which is not required in my case. I only want my client to interact with the internal Prometheus so it can get all the metrics and send them to my external Prometheus. Currently, the proxy is able to connect, but I see a 404 response on the proxy side when it interacts with the client.
Any help is appreciated.
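For context, the usual shape of this setup is that the external Prometheus scrapes /federate through the PushProx proxy. A hedged sketch of the external side's scrape config (all hostnames, ports, and the match[] selector are assumptions, not taken from the question):

```yaml
# External Prometheus: scrape the internal /federate endpoint via a PushProx proxy
# (all targets and URLs below are placeholders)
scrape_configs:
  - job_name: "federate"
    honor_labels: true
    metrics_path: "/federate"
    params:
      "match[]":
        - '{job!=""}'
    proxy_url: "http://pushprox-proxy.example:8080/"
    static_configs:
      - targets: ["internal-prometheus.monitoring.svc:9090"]
```

Note that the target name must match the FQDN the PushProx client registered with; a mismatch there is one plausible cause of 404s on the proxy side.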

Apache Artemis queue monitoring with Zabbix

I'd like to keep track of data that might be stuck in Apache Artemis queues and I'd like to leverage its JMX management abilities together with our Zabbix instance.
What steps do I need to take in order to successfully connect Zabbix to Artemis via JMX? The ones mentioned in https://activemq.apache.org/artemis/docs/latest/management.html are not quite clear to me.
I had to disable the internal connector and go the other way around by adding this to the artemis.profile file:
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.authenticate=false"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.ssl=false"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.port=1099"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.rmi.port=1098"
JAVA_ARGS="$JAVA_ARGS -Djava.rmi.server.hostname=edimq-broker-master-az1.dc01.clouedi.local"
However, I know this way it's anything but secure.
As the documentation states, you need to add this to your management.xml:
<connector connector-port="1099"/>
This will expose a JMX connector on localhost, so if you want to be able to access it remotely from another machine on your network (i.e. your Zabbix instance), then you should do something like:
<connector connector-port="1099" connector-host="myhost" />
Also, if you have multiple IP addresses on the machine hosting the broker you'll want to set this system property in the JAVA_ARGS variable in artemis.profile:
-Djava.rmi.server.hostname=myhost
Then point your Zabbix instance at the broker using a URL like:
service:jmx:rmi:///jndi/rmi://myhost:1099/jmxrmi
You can see this in action by running the jmx example shipped with Artemis in the examples/features/standard/ directory. Just navigate into that directory and run mvn verify. Running the example will create a broker instance, start the broker instance, and run the client all automatically. After the example runs you can go into the target/server0 directory and look at all the configuration files to compare them to your own. You can also start the broker independently of the example if you wish (by running ./artemis run from the target/server0/bin directory). Once the broker is running you should be able to connect to it with JConsole no problem using a JMX URL like this:
service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
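Once the broker is reachable over JMX, a Zabbix JMX item for queue depth can be sketched roughly like this (broker, address, and queue names are placeholders, and the exact ObjectName layout varies by Artemis version, so verify it in JConsole's MBean tree first):

```
jmx["org.apache.activemq.artemis:broker=\"mybroker\",component=addresses,address=\"myQueue\",subcomponent=queues,routing-type=\"anycast\",queue=\"myQueue\"","MessageCount"]
```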

Flume agent: how does a Flume agent get data from a web server on a different physical server?

I am trying to understand Flume, referring to the official page at flume.apache.org.
In particular, referring to this section, I am a bit confused.
Do we need to run the Flume agent on the actual web server, or can we run Flume agents on a different physical server and acquire data from the web server?
If the latter is possible, how does the Flume agent get the data from the web server logs? How can the web server make its data available to the Flume agent?
Can anyone help understand this?
A Flume agent pulls data from a source and publishes it to a channel; a sink then reads from the channel and writes to the destination.
You can install the Flume agent in either a local or remote configuration. But keep in mind that having it remote will add some network latency to your event processing, if you are concerned about that. You can also "multiplex" Flume agents to have one remote aggregation agent and individual local agents on each web server.
Assuming a Flume agent is locally installed using a Spooldir or Exec source, it'll essentially tail a file or run a command locally. This is how it would get data from logs.
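A hedged sketch of that local pattern as a Flume agent properties file (agent, component, and log-file names are all placeholders):

```properties
# Locally installed agent tailing a web server log via an Exec source
# (names and paths below are assumptions)
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/httpd/access_log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
```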
If the Flume agent is set up as a Syslog or TCP source (see the Data ingestion section on network sources), then it can be on a remote machine, and you must establish a network socket in your logging application to publish messages to the other server. This is a similar pattern to Apache Kafka.
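A hedged sketch of the source stanza for that remote pattern, using a syslog TCP source (agent/component names and the port are placeholders; the web server's logging would then be pointed at this host and port):

```properties
# Remote agent listening for syslog traffic over TCP
# (names and port below are assumptions)
a1.sources.r1.type = syslogtcp
a1.sources.r1.host = 0.0.0.0
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1
```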

How to connect Flink remotely via JMX?

For my upcoming bachelor's thesis I want to develop a tool that collects system and application data from Apache Flink and sends this data in some kind of "events" to another system. This tool will be installed on Flink jobmanager and taskmanager nodes. Besides data from Linux system utilities like dstat, I would like to collect JMX data.
My problem is that I couldn't figure out how to connect to Flink's jobmanager via a remote JMX connection on a port. Although the collector will be on the same machine, I really want to avoid using a -javaagent to access JMX data of Flink's JVM.
Another problem: I have a local Docker setup based on https://github.com/apache/flink/tree/master/flink-contrib/docker-flink, updated to flink-1.0.2, which I cannot connect to via jconsole because I don't know how to "open" a JMX remote port for the job- and taskmanager.
Is there any way to achieve this?
Thanks in advance, any ideas very appreciated.
Solved!
I needed to add the following to flink-conf.yaml:
env.java.opts: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
Now it's possible to connect to the jobmanager via jconsole.
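With those options in flink-conf.yaml, attaching from the collector machine is just a matter of pointing JConsole at the configured port (the hostname below is a placeholder):

```shell
# Attach JConsole to the jobmanager's JMX port (host is an assumption, port 9999 as configured)
jconsole jobmanager-host:9999
```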

TITAN server monitoring via JMX

Is it possible to monitor a Titan Cassandra server with Rexster remotely via JMX, using something like VisualVM?
I have Titan installed in the cloud and want to monitor it from my dev box. Is this possible?
I have read this
https://github.com/tinkerpop/rexster/wiki/Monitoring
but it seems that the JMX MBeans are only available locally; however, I could be wrong.
You can monitor Rexster JMX remotely with VisualVM, but it takes a bit of configuration and changes to rexster.sh, as you need to include these system properties:
-Dcom.sun.management.jmxremote.port=3333
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
You can read some more about how to do remote setup on the VisualVM site.
You mentioned that you are trying to monitor an instance in the cloud. You didn't mention the cloud provider, but I've had trouble doing this with EC2 in the past. Perhaps this post will help you out. While I've had issues with VisualVM remoting to EC2, I have successfully connected to Rexster via VisualVM from another EC2 instance without trouble, so if all else fails that could be your workaround.
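Once those system properties are in place, one hedged way to attach from your dev box is VisualVM's command-line JMX option (the hostname is a placeholder, and the --openjmx launcher option may vary by VisualVM version):

```shell
# Open a JMX connection to the remote Rexster JVM (host is a placeholder, port as configured)
jvisualvm --openjmx cloud-host.example.com:3333
```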
