Query on custom metrics exposed via prometheus node exporter textfile collector fails - docker

I am new to prometheus/alertmanager.
I have created a cron job which executes a shell script every minute. This shell script generates a "test.prom" file (with a gauge metric in it) in the directory assigned to the --textfile.collector.directory argument of node-exporter. I verified (using curl http://localhost:9100/metrics) that node-exporter exposes that custom metric correctly.
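For context, the script boils down to something like this (a simplified sketch; the collector directory and script name are just example paths):
#!/bin/sh
# write the gauge atomically so node-exporter never reads a half-written file
DIR=/var/lib/node_exporter/textfile_collector   # the directory passed to --textfile.collector.directory
echo "test_metric 42" > "$DIR/test.prom.$$"
mv "$DIR/test.prom.$$" "$DIR/test.prom"
The crontab entry is simply: * * * * * /usr/local/bin/write_test_metric.sh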
When I try to run a query against that custom metric in the Prometheus dashboard, it does not return any results (it says no data found).
I could not figure out why the query against the metric exposed via the node-exporter textfile collector fails. Any clues as to what I missed? Also, please let me know how to check and ensure that Prometheus scraped my custom metric `test_metric`.
My query in the Prometheus dashboard is test_metric != 0, which did not give any results, even though I exposed test_metric via the node-exporter textfile collector.
Any help is appreciated!
BTW, node-exporter is running as a Docker container in a Kubernetes environment.

I had a similar situation, but it was not a configuration problem.
Instead, my data included timestamps:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 53.87 1541426242
network_connectivity_rtt{host="hop_1"} 58.8 1541426242
network_connectivity_rtt{host="hop_2"} 21.93 1541426242
network_connectivity_rtt{host="hop_3"} 71.69 1541426242
The Prometheus node exporter (PNE) was picking them up without any problem once I reloaded it. As Prometheus is running under systemd, I had to check the logs like this:
journalctl --system -u prometheus.service --follow
There I read this line:
msg="Error on ingesting samples that are too old or are too far into the future"
Once I removed the timestamps, values started appearing. This led me to read about the timestamps in more detail, and I found out they have to be in milliseconds. So this format is now OK:
# HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 50.47 1541429581376
network_connectivity_rtt{host="hop_1"} 3.38 1541429581376
network_connectivity_rtt{host="hop_2"} 11.2 1541429581376
network_connectivity_rtt{host="hop_3"} 20.72 1541429581376
I hope it helps someone else.

It's my bad. I had not included scrape instructions for node-exporter in the prometheus.yaml file. It worked after including them.
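For anyone hitting the same thing, the missing piece was roughly this (a minimal sketch; the job name and target are placeholders, and in Kubernetes you would normally use kubernetes_sd_configs instead of a static target):
scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['localhost:9100']
Once the job is in place, the node-exporter target shows up on the Prometheus /targets page, which is also the quickest way to verify that the metric is actually being scraped.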

This issue happens because of stale metrics.
Let's say you wrote your metric to the file at 13:00.
By default, Prometheus considers a metric stale after 5 minutes, so it may no longer show up by the time you run your query.
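A quick sketch of how to check whether staleness is the problem (test_metric is the metric name from the question):
test_metric != 0    # instant query: empty once the newest sample is older than the ~5 minute lookback
test_metric[15m]    # range query: still returns whatever raw samples Prometheus actually ingested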

Related

How to get a program's std-out to fluentd (without docker)

Scenario:
You write a program in R or Python which needs to run on Linux or Windows, and you want to log (JSON structured and unstructured) std-out and (mostly unstructured) std-error from this program to a Fluentd instance. Adding a new program or starting another instance should not require updating the Fluentd configuration, and the applications will not (yet) be running in a Docker environment.
Question:
How do you send "logs" from a bunch of programs to a fluentd instance, without the need to perform curl calls for every log entry that your application was originally writing to std-out?
When a UDP or TCP connection is necessary for the application to run, it seems to become less easy to debug, and any dependency of your program that writes to std-out would have to be parsed just to get its logging passed through.
Thoughts:
Alternatively, the question could be: how do you accept a 'connection' object that can point either to a file or to a TCP connection, so that switching between std-out and a TCP destination is a matter of changing a single value?
I like the 'tail' input plugin, which could be what I am looking for, but then:
the original log file never appears to stop growing (will the tail position value reset when the file is simply removed? I couldn't find this behaviour), and
it seems to require reconfiguring fluentd for every new program that you start on that server (if it logs to another file); I would much prefer to keep that configuration on the program side...
I built an EFK stack with a Docker log driver set to fluentd, which does not seem to have a solid, optimal solution either, but without Docker I already get kind of stuck setting up a basic configuration (not referring to fluent.conf here).
TL;DR
std-out -> fluentd: redirect the program output to a file when launching your program. On Linux, use logrotate; you will love it.
Windows: use fluent-bit.
App side config: use single (or predictable) log locations, and the fluentd/fluent-bit 'in_tail' plugin.
Logging in general:
It's recommended to always write application output to a file; if the std-out must be captured, redirect the output at program startup. For more flexibility in the fluentd configuration, pipe std-out and std-error to separate files (just like 'Apache' does):
My_program.exe Do some crazy stuff > my_out_file.txt 2> my_error_file.txt
This opens the option for fluentd to read from this/these file(s).
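A minimal fluentd source for picking those files up could look roughly like this (a sketch; the paths and tag are placeholders):
<source>
  @type tail
  # plain lines; switch the parser to json if the program emits structured output
  path /var/log/myapp/my_out_file.txt,/var/log/myapp/my_error_file.txt
  pos_file /var/log/td-agent/myapp.pos
  tag myapp.stdout
  <parse>
    @type none
  </parse>
</source>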
Windows:
For Windows systems, use fluent-bit; it likely solves the issue of aggregating the Windows OS program logs. Support for Windows was implemented only recently.
fluent-bit supports:
the 'tail' plugin, which records the 'inode' value (a unique, rename-insensitive file pointer) and the 'index' (called 'pos' in the full-blown 'fluentd' application) in a sqlite3 database, and deals with un-processable data by allocating it to a certain key ('log' by default);
running on Windows machines, but note that it cannot buffer to disk, so make sure a lost connection, or another issue with the output, is re-established or fixed in time so that you will not run into OOM issues.
Appl. side config:
The tail plugin can monitor a folder; this makes it practically possible to keep the configuration on the side of your program. Just make sure you write the logs of your different applications to a predictable directory.
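For example, a fluent-bit input watching such a directory could look roughly like this (a sketch; the paths, tag and forward target are placeholders):
[INPUT]
    Name   tail
    # watch a whole directory, so a new application only needs to drop a file here
    Path   C:\app-logs\*.log
    DB     C:\fluent-bit\tail.db
    Tag    app.*

[OUTPUT]
    Name   forward
    Match  app.*
    Host   central-fluentd.example.com
    Port   24224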
Fluent-bit setup/config:
For Linux, just use fluentd (unless you need more than 100,000 messages per second, which is where fluent-bit becomes your only choice).
For Windows, install fluent-bit and make it run as a daemon (an almost funny solution).
There are 2 execution methods:
Providing configuration directly via the commandline
Using a config file (an example is included in the zip) and referring to it with the -c flag; a minimal sketch follows.
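A config-file equivalent of the command-line example below could look roughly like this, started with fluent-bit.exe -c fluent-bit.conf (a sketch; only the winlog channels from that example are used):
[SERVICE]
    # flush buffered records every 5 seconds
    Flush     5
    Log_Level info

[INPUT]
    Name      winlog
    Channels  Setup,Windows PowerShell
    DB        ./winlog.db

[OUTPUT]
    Name      stdout
    Match     *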
Directly from commandline
Some example executions (without making use of the option to work with a configuration file) can be found here:
PS .\bin\fluent-bit.exe -i winlog -p "channels=Setup,Windows PowerShell" -p "db=./test.db" -o stdout -m '*'
-i declares the input method. Currently, only a few plugins have been implemented; see the help output below.
PS fluent-bit.exe --help
Available Options
-b --storage_path=PATH specify a storage buffering path
-c --config=FILE specify an optional configuration file
-f, --flush=SECONDS flush timeout in seconds (default: 5)
-F --filter=FILTER set a filter
-i, --input=INPUT set an input
-m, --match=MATCH set plugin match, same as '-p match=abc'
-o, --output=OUTPUT set an output
-p, --prop="A=B" set plugin configuration property
-R, --parser=FILE specify a parser configuration file
-e, --plugin=FILE load an external plugin (shared lib)
-l, --log_file=FILE write log info to a file
-t, --tag=TAG set plugin tag, same as '-p tag=abc'
-T, --sp-task=SQL define a stream processor task
-v, --verbose increase logging verbosity (default: info)
-s, --coro_stack_size Set coroutines stack size in bytes (default: 98302)
-q, --quiet quiet mode
-S, --sosreport support report for Enterprise customers
-V, --version show version number
-h, --help print this help
Inputs
tail Tail files
dummy Generate dummy data
statsd StatsD input plugin
winlog Windows Event Log
tcp TCP
forward Fluentd in-forward
random Random
Outputs
counter Records counter
datadog Send events to DataDog HTTP Event Collector
es Elasticsearch
file Generate log file
forward Forward (Fluentd protocol)
http HTTP Output
influxdb InfluxDB Time Series
null Throws away events
slack Send events to a Slack channel
splunk Send events to Splunk HTTP Event Collector
stackdriver Send events to Google Stackdriver Logging
stdout Prints events to STDOUT
tcp TCP Output
flowcounter FlowCounter
Filters
aws Add AWS Metadata
expect Validate expected keys and values
record_modifier modify record
rewrite_tag Rewrite records tags
throttle Throttle messages using sliding window algorithm
grep grep events by specified field values
kubernetes Filter to append Kubernetes metadata
parser Parse events
nest nest events by specified field values
modify modify records by applying rules
lua Lua Scripting Filter
stdout Filter events to STDOUT

Error 403: "Flux query service disabled." But flux-enabled=true has been set in influxdb.conf

I have been using InfluxDB (server version 1.7.5) with the InfluxQL language for some time now. Unfortunately, InfluxQL does not allow me to perform any form of joins, so I need to use InfluxDB's new scripting language Flux instead.
The manual states that I have to enable Flux in /etc/influxdb/influxdb.conf by setting flux-enabled=true which I have done. I restarted the server to make sure I got the new settings and started the Influx Command Line tool with "-type=flux".
I then do get a different user interface than when I use InfluxQL. So far so good. I can also set and read variables etc. So I can set:
> dummy = 1
> dummy
1
However, when I try any form of query on the tables, such as: from(bucket:"db_OxyFlux-test/autogen")
I always get
Error: Flux query service disabled. Verify flux-enabled=true in the [http] section of the InfluxDB config.
: 403 Forbidden
I found the manual for Flux rather lacking in basic details of schema exploration, so I am not sure whether this is just an issue with my query raising this error or whether something else is going wrong. I tested this both on my own home machine and on our remote work server, and I get the same results.
Re: Vilix
Thank you. This led me in the right direction.
I realised that InfluxDB does not automatically read the config file (which is not very intuitive). But your solution also forces me to start the daemon by hand each time. After some more googling I used:
"sudo influxd config -config /etc/influxdb/influxdb.conf"
So hopefully now the daemon will start automatically each time on startup rather than me having to do this by hand.
I had the same issue, and the solution is to start influxd with the -config option:
influxd -config /etc/influxdb/influxdb.conf
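For reference, the relevant part of /etc/influxdb/influxdb.conf ends up looking like this (only the flux-enabled line matters for the 403; the rest is shown for context):
[http]
  # the Flux setting must live in the [http] section, as the error message says
  enabled = true
  flux-enabled = true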

Integrate graphite metrics with bosun

I am running a Docker container for Bosun, and I want to integrate Graphite metrics with it.
What configuration changes need to be made for this?
@kyle-brandt's answer is okay and I gave it an upvote, but it and the Bosun docs don't really explain enough of how to use a Graphite that you don't host, e.g. hostedgraphite.com. Using the docs and some trial and error I figured things out, so here it goes:
Make a Graphite API key: http://docs.hostedgraphite.com/advanced/access-keys.html (you should whitelist IP addresses). Let's say you got https://www.hostedgraphite.com/deadbeef/431-831/graphite/.
Create bosun.conf (mounted into the container below) with:
tsdbHost = localhost:4242
stateFile = /data/bosun.state
graphiteHost = https://www.hostedgraphite.com/deadbeef/431-831/graphite/render
Start the Docker container:
docker run -d \
-p 80:8070 \
--name=bosun \
-v `pwd`/bosun.conf:/data/bosun.conf \
stackexchange/bosun
Note that I didn't do the 4242 port mapping because I'm getting my data only from hostedgraphite.com, and I mapped 8070 to 80 so that I don't have to specify the port when going to Bosun in the browser.
Adding expressions: the docs say to use GraphiteQuery, but that didn't work for me; graphite worked instead. For example: graphite("my.long.metric.name.for.some.method", "10m", "", ""). There is also an example graphite alert in the examples part of the documentation (thanks @kyle-brandt).
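For completeness, a minimal alert built on such an expression could look roughly like this (a sketch; the alert name, metric name and thresholds are placeholders):
alert graphite.rtt.example {
    # average the Graphite series over the last 10 minutes
    $q = avg(graphite("my.long.metric.name.for.some.method", "10m", "", ""))
    warn = $q > 250
    crit = $q > 500
}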
As per the documentation you linked, you must set the graphiteHost in the config:
graphiteHost: an ip, hostname, ip:port, hostname:port or a URL; defaults to standard http/https ports and to the "/render" path. Any non-zero path (even "/") overrides the path.
The graphing page and items page in Bosun only work with OpenTSDB as the backend. However, you can still use the expression page, dashboard, and config editor. When you use expressions that return a seriesSet, as the graphite query functions do, you will see a graph tab on the expression page. You can also use the .Graph and .GraphAll template functions with graphite. So it is largely functional.
There is also an example graphite alert in the examples part of the documentation.

how to swap solr core from shell

I have a Solr setup with two cores. I want to schedule a core (core1, backend) for full import frequently (e.g. every 5 minutes), and then swap it with the live core (core0, serving) from a shell command through a scheduler.
For the full-import command, I am using the following shell command:
wget -o - -q -t 1 http://localhost:8080/solr/core1/dataimport?command=full-import
This works fine. If I do a core swap from the browser by hitting
http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0, I get the latest update instantly on search. But if I schedule this URL as a shell command, similar to the dataimport one, it doesn't do the swap.
Did you try
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"
from the shell? Note the quotes around the URL: without them the shell treats each & as "run the command so far in the background", so the core and other parameters never reach Solr.
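Putting it together, the scheduled step can be a small script along these lines (a sketch; the sleep is a crude stand-in for polling the dataimport status):
#!/bin/sh
# trigger the full import on the backend core (-O - sends the response to stdout)
wget -O - -q -t 1 "http://localhost:8080/solr/core1/dataimport?command=full-import"
# wait for the import to finish (better: poll dataimport?command=status)
sleep 120
# swap the freshly imported core with the live one; the quotes keep the shell
# from misinterpreting the & characters in the URL
curl -s "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"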
There is a catch with SWAPs:
Apache Solr allows you to swap two cores around for non-Cloud configurations. They take each other's name, so it is a good way to push an updated core into production without downtime.
But an interesting question is how this is achieved. Normally, the core name is its directory name too. So, does Solr rename the directory on the filesystem as well?
Not really! Instead, the name property in the core.properties file is updated to use the name of the other core. Usually that property is used to give an alternative name to the core for when the directory naming conventions are not suitable.
The gotcha is, of course, that you still have two directories whose names look right for the cores you see in the Admin UI. So it is very easy to forget that extra redirection/rename step when troubleshooting somebody else's setup, or even your own old one.
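Concretely, after a swap the two directories can end up looking like this (a sketch):
# core0/core.properties
name=core1

# core1/core.properties
name=core0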

How to change the timestamp to UTC for the logs that a fluent-bit docker container receives via stdin?

My Fluent Bit Docker container is adding a timestamp with the local time to the logs it receives via STDIN; all the logs received via rsyslog or journald, on the other hand, seem to have a UTC time format.
I have a basic EFK stack where I am running Fluent Bit containers as remote collectors which are forwarding all the logs to a FluentD central collector, which is pushing everything into Elasticsearch.
I've added a filter to the Fluent Bit config file where I have experimented with many ways to modify the timestamp, to no avail. It seems like I am overthinking it; it should be much easier to modify the timestamp.
These are all the ways I've tried to modify the timestamp with the fluent-bit.conf filter:
[FILTER]
Name record_modifier
Match_Regex ^(?!log.*).*$ ## only match the input received via stdin
Tag log.stdout ## tag to mark input received via stdin
Add sourcetype timestamp ## tried to add timestamp from lua script
Parser docker ## tried to use docker parser for timestamp
Time_key utc ## tried to add timestamp as a key
script test.lua ## sample lua script from fluentbit docs
call cb_print ## call a function from within lua script
What is the de facto method to make all the timestamps uniform to UTC? Any help or suggestion is appreciated.
The way it works is that the docker parser extracts the content of 'log' and respects the timestamp defined by Docker.
One quick workaround would be to modify your parsers.conf and make sure the docker parser does not resolve the timestamp; that way Fluent Bit will assign the current time in UTC for you.
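For example, a docker parser in parsers.conf with time resolution disabled could look roughly like this (a sketch; the stock parser ships with the Time_Key/Time_Format lines enabled):
[PARSER]
    Name    docker
    Format  json
    # Time_Key    time
    # Time_Format %Y-%m-%dT%H:%M:%S.%L
    # with the time keys commented out, Fluent Bit assigns its own (UTC) ingestion time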
