Zabbix notifying after 2 bad values - monitoring

I have a trigger that should fire when the external check output contains "Servers 1/3 UP" or "Servers 2/3 UP", with the two conditions combined with OR. This is one of them:
({Zabbix server:server-check.regexp(Servers 1/3 UP,#2)})<>0
But it doesn't work. How do I make the trigger fire only when the last two outputs contain these strings? Thanks

The function regexp(Servers 1/3 UP,#2) checks whether any of the last two values matches the regular expression Servers 1/3 UP.
You can instead use the function count(#2,Servers 1/3 UP) and compare its result to 2.
Please see the function documentation for Zabbix 3.0 for more information. Note that since Zabbix 3.2 regular expression matching is also supported in count() (see the function documentation for Zabbix 3.2).
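For example (host and item key taken from the question; treat the exact parameters as a sketch), on Zabbix 3.0 count() matches strings with the like operator, so you would need one condition per message:
{Zabbix server:server-check.count(#2,Servers 1/3 UP,like)}=2 or {Zabbix server:server-check.count(#2,Servers 2/3 UP,like)}=2
On Zabbix 3.2 and later a single regular expression also covers the case where the two latest values mix 1/3 and 2/3:
{Zabbix server:server-check.count(#2,Servers [12]/3 UP,regexp)}=2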

Related

Graphite has null values between data points

I have an API that fetches data packets from different servers and formats the data into small JSON units. I wrote an algorithm that sends them to Graphite with the json2graphite command.
The sending works very well and the incoming data doesn't look bad either.
Now the problem:
The data displayed in graphite shows that each entry is followed by a null.
[Screenshot: the data points that should be connected]
I am aware that the data points can also be connected using a function provided by the Graphite interface, but this doesn't help because the Grafana dashboards keep jumping back and forth between a value and null.
Is there a way to tell Grafana to only show null if there has been no data for more than a minute or so?
I already tried to fix the problem via "storage-schemas.conf" and "storage-aggregation.conf", unfortunately without success.
storage-schemas.conf:
[default_1min_for_1day]
pattern = .*
retentions = 10s:6h,30s:8d,1m:31d,10m:1y,1h:5y
storage-aggregation.conf:
[default_average]
pattern = .*
xFilesFactor = 0
aggregationMethod = average
If you want to know any more, ask me. : )
Grafana has an option to connect data points that are separated by nulls. You can see how to enable it in the screenshot shown under the Display Styles settings in Grafana's documentation.
In the Graphite composer you can also do this by selecting the connected line mode under Graph Options.
Additionally, you could use Graphite's keepLastValue function to carry the last received value over gaps where there are nulls.
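For example (the metric path is made up), the Graphite target could be wrapped like this:
keepLastValue(my.api.servers.response_time, 6)
which repeats the last known value across up to 6 consecutive null data points instead of drawing a gap.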
I haven't found a direct solution, but I will now try to minimize the interval between the entries. I noticed that the requests take much too long: 2-5 minutes.
There are probably too many servers, so the requests block the port for too long.
The problem is not solved yet, but I think I will mark it as solved if nobody can tell me what the problem is within 5 days.

In InfluxDB/Telegraf, how to compute the difference between 2 fields based on a 3rd field

I have the following use case:
We have a system that computes different response-time metrics for messages, which we want to insert into InfluxDB. This system writes JSON entries to a file.
We use Telegraf with the JSON plugin to extract the fields we want and insert them into InfluxDB.
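For reference, that ingestion step might look roughly like this in telegraf.conf (a sketch only; the file path, measurement name, and tag choice are assumptions, and the field names match the example messages shown further below):
[[inputs.tail]]
  ## file written by the response-time system (path is an assumption)
  files = ["/var/log/bus-metrics.json"]
  data_format = "json"
  ## measurement name (assumption)
  name_override = "bus_messages"
  ## low-cardinality metadata as a tag
  tag_keys = ["meta1"]
  ## keep the unbounded uuid as a string field, not a tag
  json_string_fields = ["mId"]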
So far so good.
But we have an issue with one particular piece of information.
The system emits messages where mId is the unique identifier; in the examples below we have two, uuidXXXX and uuidYYYY:
{"meta1":"value", "mId":"uuidXXXX", "resTime1":1232332233, "timeWeEnterBus":startTimestamp}
{"meta1":"value2", "mId":"uuidYYYY", "resTime1":1232331111, "timeWeEnterBus":startTimestamp}
{"meta1":"value", "mId":"uuidXXXX", "resTime1":1232332233, "timeWeExitBus":endTimestamp}
{"meta1":"value2", "mId":"uuidYYYY", "resTime1":1232331111, "timeWeExitBus":endTimestamp}
What we want here is to graph timeInBus, which equals timeWeExitBus - timeWeEnterBus, for each unique mId.
So my questions are:
As I understand it, mId would be a field, not a tag, since its values are unbounded; the same goes for timeWeExitBus and timeWeEnterBus, which would be numeric fields since we want to apply functions to them. And timeInBus would be the measurement. Am I right?
Is this a good use case for InfluxDB/Telegraf, or are we misusing them for this? As I understand it, computing this on the Telegraf side doesn't look like a good fit, but I don't see how to do it in InfluxDB either; I initially thought the ELAPSED function could help, but I ended up concluding that it doesn't work here.
If it is a good use case, could you point me to documentation that would help implement this?

Prometheus increase not handling process restarts

I am trying to figure out the behavior of Prometheus' increase() query function across process restarts.
When there is a process restart within a 2m interval and I query:
sum(increase(my_metric_total[2m]))
I get a value less than expected.
For example, in a simple experiment I mock:
3 lcm_restarts
1 process restart
2 lcm_restarts
All within a 2 minute interval.
Upon querying:
sum(increase(lcm_restarts[2m]))
I receive a value of ~4.5 when I am expecting 5.
[Graphs: lcm_restarts and the sum(increase(lcm_restarts[2m])) result]
Could someone please explain?
Pretty concise and well-prepared first question here. Please keep this spirit!
When working with counters, functions such as rate(), irate() and increase() adjust for resets caused by restarts. Contrary to what the name suggests, increase() does not calculate the absolute increase over the given time frame; it is simply another way of writing rate(metric[interval]) * number_of_seconds_in_interval. The rate() function takes the first and the last measurement inside the window and calculates the per-second increase over that time. This is why you may observe non-integer increases even though the counter is only ever incremented by whole numbers: the measurements almost never fall exactly at the start and end of the interval.
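To make that concrete (metric name and window taken from the question; the numbers are made up):
increase(lcm_restarts[2m])
is effectively shorthand for
rate(lcm_restarts[2m]) * 120
so if the first and last samples inside the 2m window are 110 seconds apart and differ by 4, the result comes out around 4 / 110 * 120 ≈ 4.4 rather than a whole number.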
For more details, please have a look at the Prometheus docs for the increase() function. There are also some good hints on what to do and what not to do when working with counters on the Robust Perception blog.
Having a look at your label dimensions, I also think that counter resets don't even apply to your constructed example: there is a label called reason that changed between the restarts, which created a second time series rather than continuing the existing one. So you are effectively summing the increases of two different time series, each with its own extrapolation applied.
So basically there isn't anything wrong with what you are doing; you just shouldn't rely on getting highly precise numbers out of Prometheus for this use case.
Prometheus may return unexpected results from the increase() function for the following reasons:
Prometheus may return fractional results from increase() over an integer counter because of extrapolation. See this issue for details.
Prometheus may return lower than expected results from increase(m[d]) because it doesn't take into account possible counter increase between the last raw sample just before the specified lookbehind window [d] and the first raw sample inside the lookbehind window [d]. See this article and this comment for details.
Prometheus skips the increase for the first sample in a time series. For example, increase() over the following series of samples would return 1 instead of 11: 10 11 11. See these docs for details.
These issues are going to be addressed according to this design doc. In the meantime it is possible to use other Prometheus-like systems such as VictoriaMetrics, which are free of these issues.

Zabbix trigger expression - number of occurrences of the current value in the last X minutes

I want to write a Zabbix trigger that can detect when my server gets too many requests from the same remote client - like a simple DoS attack.
Specifically, I want to detect when the number of occurrences of the current value of the remote-client item exceeds a defined threshold within the last minute.
I have tried something like this
{Server:nginx.access.remote.count(1m,{Server:nginx.access.remote.last()},eq)}>10000
But I get errors saying the expression is incorrect. I guess it is not accepting the expression {Server:nginx.access.remote.last()} as an argument for the function count.
So ultimately, my logic is this:
{Server:nginx.access.remote.count(1m,X,eq)}>10000 where X should be the current value, but I don't know how to put that value into the expression.
I would appreciate any help and suggestion. Thanks.

Non-deterministic behavior with priorities

I have developed an application using priorities, which I have verified on http://esper-epl-tryout.appspot.com/epltryout/mainform.html with a set of input events.
Using a Java environment with Esper 5.3, and always using the same input data, I find two scenarios:
If I configure priorities using config.getEngineDefaults().getExecution().setPrioritized(true) and I subscribe to statement xxx, every time I run the application it produces a different number of subscriber activations for statement xxx.
If I don't configure priorities, every run of the application produces the same number of subscriber activations.
In the first case (with priorities enabled), what could be the reason for this non-deterministic behavior?
Any kind of help will be very much appreciated.
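For reference, a minimal sketch of the setup being described (Esper 5.3 client API; the event type, EPL text, priority value and subscriber are placeholders, not taken from the actual application):

import com.espertech.esper.client.*;

public class PrioritySketch {

    // Placeholder event type.
    public static class MyEvent {
        private final int value;
        public MyEvent(int value) { this.value = value; }
        public int getValue() { return value; }
    }

    // Placeholder subscriber: Esper invokes update(...) once per statement match.
    public static class MySubscriber {
        public void update(int value) {
            System.out.println("matched value=" + value);
        }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        // The setting mentioned above: enable prioritized execution so @Priority is honored.
        config.getEngineDefaults().getExecution().setPrioritized(true);
        config.addEventType("MyEvent", MyEvent.class);

        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);
        EPStatement stmt = engine.getEPAdministrator().createEPL(
                "@Priority(10) select value from MyEvent");
        stmt.setSubscriber(new MySubscriber());

        engine.getEPRuntime().sendEvent(new MyEvent(42));
    }
}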
