Telegraf cannot correctly read JSON data over the MQTT protocol - influxdb

When I use Telegraf's inputs.mqtt_consumer to subscribe to MQTT, it cannot parse the data correctly.
This is the raw data I read from the client using EMQX:
{
  "air": "air",
  "humidity": 35,
  "region": "shanghai",
  "temperature": 34,
  "time": 1655371268
}
This is my Telegraf configuration:
[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = [
    "test/#"
  ]
  data_format = "json"

  [[inputs.mqtt_consumer.topic_parsing]]
    topic = "air/humidity/region/temperature/time"
    measurement = "measurement/_/_/_/_"
    tags = "_/_/region/_/_"
    fields = "_/_/_/temperature/_"
But the data I got in InfluxDB looks like this:
> select * from mqtt_consumer
name: mqtt_consumer
time humidity temperature topic
---- -------- ----------- -----
2022-06-16T09:28:45.663600586Z 41 41 test
2022-06-16T09:28:45.764128509Z 38 48 test
2022-06-16T09:28:46.664330569Z 43 47 test
2022-06-16T09:28:46.764848563Z 41 34 test
2022-06-16T09:28:47.665094746Z 49 50 test
2022-06-16T09:28:47.765611758Z 5 50 test
2022-06-16T09:28:48.66629661Z 42 32 test
I expect the data to look like this:
name: mqtt_consumer
time region temperature topic
---- ---- ----------- -----
2022-06-16T09:28:45.663600586Z shanghai 41 test
2022-06-16T09:28:45.764128509Z shanghai 48 test
2022-06-16T09:28:46.664330569Z shanghai 47 test
2022-06-16T09:28:46.764848563Z shanghai 34 test
2022-06-16T09:28:47.665094746Z shanghai 50 test
2022-06-16T09:28:47.765611758Z shanghai 50 test
2022-06-16T09:28:48.66629661Z shanghai 32 test
Couldn't the Telegraf configuration be changed to something like this?
[[inputs.mqtt_consumer.topic_parsing]]
  topic = "air/humidity/region/temperature/time"
  measurement = "measurement/_/_/_/_"
  tags = "_/_/tag/_/_"
  fields = "_/_/_/field/_"

[[inputs.mqtt_consumer]]
  data_format = "json"
  tag_keys = [
    "region"
  ]
See the docs: https://docs.influxdata.com/telegraf/v1.22/data_formats/input/json/
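For reference, here is how those two fragments might combine into a single consumer block. This is only a sketch: the commented json_time_key / json_time_format lines are an assumption (they would take the point timestamp from the payload's unix "time" field instead of the arrival time), so check them against the JSON parser docs linked above.
[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = ["test/#"]
  data_format = "json"
  ## Promote the "region" value from the JSON payload to a tag
  tag_keys = ["region"]
  ## Assumed optional extras: use the payload's own timestamp
  # json_time_key = "time"
  # json_time_format = "unix"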

Related

Scikit-learn Mutual Information Classification - DataFrame and function issues

I've currently got the below set of smoothed data:
print(df_smooth.dropna())
mean std skew kurtosis peak2peak rms crestFactor \
4 0.247555 2.100961 0.001668 3.024679 20.628402 2.115862 5.066747
5 0.237015 2.062690 -0.000792 3.029156 20.314159 2.076466 5.043114
6 0.230783 2.044657 -0.001680 3.028746 20.219575 2.057846 5.030472
7 0.235838 1.986232 -0.001031 3.025417 19.497090 2.000425 4.960363
8 0.235062 1.984086 -0.001014 3.031342 19.817176 1.998209 4.989612
9 0.238660 1.968814 -0.001608 3.023882 19.340179 1.983427 4.998115
10 0.223305 1.975597 -0.000197 3.045224 19.701747 1.988305 5.135947
11 0.219480 2.007902 -0.002460 3.060428 20.252087 2.020074 5.117502
12 0.214518 2.071287 -0.002944 3.092217 21.489908 2.082439 5.302407
13 0.244281 2.122538 -0.003717 3.094335 21.792449 2.137164 5.271366
14 0.235806 2.161333 -0.003364 3.123866 23.128965 2.174895 5.472129
15 0.233630 2.175946 -0.002682 3.152740 24.045300 2.189226 5.610038
16 0.236764 2.188906 -0.000032 3.203623 24.745386 2.202420 5.772337
17 0.262289 2.205111 0.000350 3.192511 24.708587 2.221785 5.681394
18 0.229795 2.139946 0.001239 3.183109 23.745617 2.152940 5.564731
19 0.243538 2.150018 0.001071 3.170558 23.385026 2.164355 5.427326
20 0.266458 2.097468 -0.000830 3.144338 22.084817 2.115172 5.236667
21 0.280729 2.106302 -0.000618 3.101014 21.434129 2.125517 5.147621
22 0.252042 2.078190 0.000259 3.100911 20.991519 2.093988 5.231684
23 0.252297 2.097652 0.000383 3.126250 21.790854 2.113380 5.378267
24 0.250502 2.078781 0.000042 3.129014 21.559732 2.094428 5.340024
25 0.220506 2.070573 0.001974 3.110477 21.473643 2.082461 5.364519
26 0.204412 2.049979 -0.000306 3.227532 22.975315 2.060236 5.706146
27 0.215429 2.103150 -0.001421 3.275257 23.719901 2.114265 5.660891
28 0.216689 2.137870 -0.001783 3.298750 24.040561 2.148948 5.614089
29 0.208962 2.160487 0.000547 3.349068 24.546959 2.170628 5.732873
30 0.227231 2.267705 0.000101 3.413948 25.958169 2.279131 5.745555
31 0.221097 2.258519 0.001567 3.379193 25.424651 2.269446 5.662354
32 0.204962 2.224569 0.000951 3.458483 25.984242 2.234101 5.862379
33 0.224707 2.283631 0.000046 3.516125 27.410217 2.294934 6.024091
34 0.248792 2.354713 -0.001143 3.630634 29.159253 2.368248 6.197140
35 0.229501 2.339020 -0.000673 3.743356 30.695670 2.350898 6.613011
36 0.255474 2.454993 -0.001164 3.780962 32.480614 2.468843 6.627903
37 0.257979 2.530495 0.000630 3.962767 33.656646 2.544310 6.661273
38 0.232977 2.498537 0.001111 3.931879 32.754947 2.510044 6.557506
39 0.237025 2.392735 -0.000920 3.919665 31.277647 2.405969 6.494115
40 0.243630 2.368295 -0.001569 3.812383 29.306347 2.382131 6.077379
41 0.221252 2.305374 -0.000861 4.032235 29.548822 2.317355 6.292428
42 0.215262 2.254417 -0.002057 3.977328 28.970507 2.266098 6.353168
43 0.208581 2.240020 -0.001403 4.154288 30.121039 2.251270 6.630079
44 0.170230 2.302794 -0.001867 4.307822 31.556097 2.309174 6.838202
45 0.168889 2.353960 -0.001309 4.433633 32.825109 2.360053 6.977719
46 0.163156 2.337222 -0.001097 4.238485 31.344888 2.342934 6.658564
47 0.165685 2.369817 -0.002246 4.151915 31.154929 2.375626 6.438286
48 0.190677 2.552397 -0.003645 4.311166 33.473407 2.559565 6.428513
49 0.210200 2.667889 0.004168 4.495159 35.625185 2.676223 6.500683
I want to use the scikit-learn Mutual Information Classification to test for monotonicity in this dataset, but I'm having trouble with the syntax (more specifically around the X value) and with splitting the full dataset into test and train sets.
I only want 40% of the dataset to be used as the "test" data.
Currently this is the command I have:
X_train, X_test, y_train, y_test = train_test_split(df_smooth.dropna(),
                                                     test_size=0.4,
                                                     random_state=0)
print(X_train)
This is the error I get:
ValueError: not enough values to unpack (expected 4, got 2)
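For context, train_test_split returns two splits per array passed in, so calling it with only the dataframe gives back 2 values rather than the 4 being unpacked. A minimal sketch of the four-value form, assuming a hypothetical label column named "target" to serve as y:
from sklearn.model_selection import train_test_split

# "target" is a stand-in for whatever column holds the class labels
X = df_smooth.dropna().drop(columns=["target"])
y = df_smooth.dropna()["target"]

# With both X and y, four arrays come back: X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)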
from sklearn.feature_selection import mutual_info_classif
mutual_info = mutual_info_classif(X_train, y_train)
The output I want is something like this:
[Image: monotonicity bar chart, descending]
where the MIC array is ranked from highest to lowest.
Using the following command:
from sklearn.feature_selection import mutual_info_classif
mutual_info = mutual_info_classif(X_train, y_train)
mutual_info
I tried extracting the ordered numbers 1-49 from the dataframe (which I believe is what gets used as the "X" input to the MIC function), but they don't seem to be part of the dataframe when accessed with iloc[:,0] (which displays the values of the "mean" column). I also don't know how this accounts for the dropped "n/a" rows.
If you're testing for something like "the degree of monotonicity between two variables," you're probably looking for Spearman's rank correlation coefficient, which is implemented in scipy.stats.spearmanr:
MRE:
from io import StringIO
import pandas as pd
from scipy import stats
data = StringIO("""mean,std,skew,kurtosis,peak2peak,rms,crestFactor
0.247555,2.100961,0.001668,3.024679,20.628402,2.115862,5.066747
0.237015,2.062690,-0.000792,3.029156,20.314159,2.076466,5.043114
0.230783,2.044657,-0.001680,3.028746,20.219575,2.057846,5.030472
0.235838,1.986232,-0.001031,3.025417,19.497090,2.000425,4.960363
0.235062,1.984086,-0.001014,3.031342,19.817176,1.998209,4.989612
""")
df = pd.read_csv(data)
for var in df.columns:
    print(f"{var} {stats.spearmanr(df[var], range(len(df))).correlation:.2f}")
Comparing the first five values of each column to the strictly monotonic sequence range(len(df)) yields the following table, suggesting the first few samples are antimonotone:
mean -0.70
std -1.00
skew -0.60
kurtosis 0.60
peak2peak -0.90
rms -1.00
crestFactor -0.90
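If you do end up using mutual_info_classif for the descending bar chart mentioned in the question, here is a sketch for ranking and plotting the scores. It assumes X_train and y_train come from a working split, that y_train holds discrete class labels, and that matplotlib is installed for pandas plotting:
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Score each feature against the target, then rank from highest to lowest
mutual_info = mutual_info_classif(X_train, y_train)
scores = pd.Series(mutual_info, index=X_train.columns).sort_values(ascending=False)

# Descending bar chart of the mutual information scores
scores.plot.bar(title="Mutual information per feature")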

RecursionError: maximum recursion depth exceeded when merging DeepLabV3 config

I want to train a semantic segmentation model to recognize small objects. For this I use DeepLabV3. But when loading the config, I get a recursion error that causes the environment to reload. Am I setting up the config correctly, or is there a problem in my YAML file?
cfg = get_cfg()
cfg.load_yaml_with_base('/content/Base3-DeepLabV3-OS16-Semantic.yaml')
cfg.MODEL.WEIGHTS = '/content/model_final_0dff1b.pkl'
Error logs
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-6-83854a40b4a3> in <module>()
1 cfg = get_cfg()
----> 2 cfg.load_yaml_with_base('/content/Base3-DeepLabV3-OS16-Semantic.yaml')
3 cfg.MODEL.WEIGHTS = '/content/model_final_0dff1b.pkl'
2 frames
/usr/local/lib/python3.7/dist-packages/fvcore/common/config.py in load_yaml_with_base(cls, filename, allow_unsafe)
92 # the path to base cfg is relative to the config file itself.
93 base_cfg_file = os.path.join(os.path.dirname(filename), base_cfg_file)
---> 94 base_cfg = cls.load_yaml_with_base(base_cfg_file, allow_unsafe=allow_unsafe)
95 del cfg[BASE_KEY]
96
/usr/local/lib/python3.7/dist-packages/fvcore/common/config.py in load_yaml_with_base(cls, filename, allow_unsafe)
92 # the path to base cfg is relative to the config file itself.
93 base_cfg_file = os.path.join(os.path.dirname(filename), base_cfg_file)
---> 94 base_cfg = cls.load_yaml_with_base(base_cfg_file, allow_unsafe=allow_unsafe)
95 del cfg[BASE_KEY]
96
... last 2 frames repeated, from the frame below ...
/usr/local/lib/python3.7/dist-packages/fvcore/common/config.py in load_yaml_with_base(cls, filename, allow_unsafe)
92 # the path to base cfg is relative to the config file itself.
93 base_cfg_file = os.path.join(os.path.dirname(filename), base_cfg_file)
---> 94 base_cfg = cls.load_yaml_with_base(base_cfg_file, allow_unsafe=allow_unsafe)
95 del cfg[BASE_KEY]
96
RecursionError: maximum recursion depth exceeded
My .yaml config
_BASE_: Base3-DeepLabV3-OS16-Semantic.yaml
MODEL:
  WEIGHTS: "/content/deeplab_v3_R_103_os16_mg124_poly_90k_bs16.yaml"
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]
  BACKBONE:
    NAME: "build_resnet_deeplab_backbone"
  RESNETS:
    DEPTH: 101
    NORM: "SyncBN"
    RES5_MULTI_GRID: [1, 2, 4]
    STEM_TYPE: "deeplab"
    STEM_OUT_CHANNELS: 128
    STRIDE_IN_1X1: False
  SEM_SEG_HEAD:
    NAME: "DeepLabV3Head"
    NORM: "SyncBN"
INPUT:
  FORMAT: "RGB"
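For what it's worth, the repeated frames in the traceback come from load_yaml_with_base resolving _BASE_ relative to the config file and loading it again; a config whose _BASE_ names the file itself (as this one appears to, since the loaded file and its _BASE_ share the same name and directory) will recurse until Python hits its recursion limit. A stripped-down sketch of that loading pattern, not fvcore's actual code, just an illustration:
import os
import yaml

def load_yaml_with_base(filename):
    # Simplified: load a config, then recursively load and merge its _BASE_
    with open(filename) as f:
        cfg = yaml.safe_load(f)
    base = cfg.pop("_BASE_", None)
    if base is not None:
        base = os.path.join(os.path.dirname(filename), base)
        base_cfg = load_yaml_with_base(base)  # recurses forever if base resolves to filename
        base_cfg.update(cfg)
        return base_cfg
    return cfg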

Extract text from wrk output

I'm running a load test with wrk2 as a job on Jenkins. I'd like to send the results of the load test to Graylog but I only want to store the Requests/Sec and average latency.
Here's what the output looks like:
Running 30s test @ https://example.com
1 threads and 100 connections
Thread calibration: mean lat.: 8338.285ms, rate sampling interval: 19202ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 16.20s 6.17s 29.64s 65.74%
Req/Sec 5.00 0.00 5.00 100.00%
Latency Distribution (HdrHistogram - Recorded Latency)
50.000% 15.72s
75.000% 20.81s
90.000% 24.58s
99.000% 29.13s
99.900% 29.66s
99.990% 29.66s
99.999% 29.66s
100.000% 29.66s
Detailed Percentile spectrum:
Value Percentile TotalCount 1/(1-Percentile)
4497.407 0.000000 1 1.00
7561.215 0.100000 11 1.11
11100.159 0.200000 22 1.25
12582.911 0.300000 33 1.43
14565.375 0.400000 44 1.67
15720.447 0.500000 54 2.00
16416.767 0.550000 60 2.22
17301.503 0.600000 65 2.50
18464.767 0.650000 71 2.86
19185.663 0.700000 76 3.33
20807.679 0.750000 81 4.00
21479.423 0.775000 84 4.44
22347.775 0.800000 87 5.00
22527.999 0.825000 90 5.71
23216.127 0.850000 93 6.67
23478.271 0.875000 95 8.00
23805.951 0.887500 96 8.89
24723.455 0.900000 98 10.00
25067.519 0.912500 99 11.43
25395.199 0.925000 101 13.33
26525.695 0.937500 102 16.00
26525.695 0.943750 102 17.78
26705.919 0.950000 103 20.00
28065.791 0.956250 104 22.86
28065.791 0.962500 104 26.67
28377.087 0.968750 105 32.00
28377.087 0.971875 105 35.56
28475.391 0.975000 106 40.00
28475.391 0.978125 106 45.71
28475.391 0.981250 106 53.33
29130.751 0.984375 107 64.00
29130.751 0.985938 107 71.11
29130.751 0.987500 107 80.00
29130.751 0.989062 107 91.43
29130.751 0.990625 107 106.67
29655.039 0.992188 108 128.00
29655.039 1.000000 108 inf
#[Mean = 16199.756, StdDeviation = 6170.105]
#[Max = 29638.656, Total count = 108]
#[Buckets = 27, SubBuckets = 2048]
----------------------------------------------------------
130 requests in 30.02s, 13.44MB read
Socket errors: connect 0, read 0, write 0, timeout 1192
Requests/sec: 4.33
Transfer/sec: 458.47KB
Does anyone know how I could go about extracting Requests/sec (at the bottom) and the latency average to send as JSON parameters?
The expected output would be: "latency": 16.2, "requests_per_second": 4.33
You didn't provide the expected output, so your question isn't clear, but is this what you want?
$ awk 'BEGIN{a["Latency"]; a["Requests/sec:"]} ($1 in a) && ($2 ~ /[0-9]/){print $1, $2}' file
Latency 16.20s
Requests/sec: 4.33
Updated now that you've added the expected output to your question:
$ awk '
BEGIN { map["Latency"]="latency"; map["Requests/sec:"]="requests_per_second" }
($1 in map) && ($2 ~ /[0-9]/) { printf "%s\"%s\": %s", ofs, map[$1], $2+0; ofs=", " }
END { print "" }
' file
"latency": 16.2, "requests_per_second": 4.33

Decode UDP message with LUA

I'm relatively new to lua and programming in general (self taught), so please be gentle!
Anyway, I wrote a lua script to read a UDP message from a game. The structure of the message is:
DATAxXXXXaaaaBBBBccccDDDDeeeeFFFFggggHHHH
DATAx = 4-letter ID, where x is a control character
XXXX = integer indicating the group of the data (the groups are known)
aaaa...HHHH = 8 single-precision floating-point numbers
Those last eight numbers are the ones I need to decode.
If I print the message as received, it's something like:
DATA*{V???A?A?...etc.
Using string.byte(), I'm getting a stream of bytes like this (I have "formatted" the bytes to reflect the structure above):
68 65 84 65/42/20 0 0 0/237 222 28 66/189 59 182 65/107 42 41 65/33 173 79 63/0 0 128 63/146 41 41 65/0 0 30 66/0 0 184 65
The first 5 bytes are of course the DATA*. The next 4 are the 20th group of data. The next bytes are the ones I need to decode, and they are equal to these values:
237 222 28 66 = 39.218
189 59 182 65 = 22.779
107 42 41 65 = 10.573
33 173 79 63 = 0.8114
0 0 128 63 = 1.0000
146 41 41 65 = 10.573
0 0 30 66 = 39.500
0 0 184 65 = 23.000
I've found C# code that does the decoding with BitConverter.ToSingle(), but I haven't found anything like that for Lua.
Any ideas?
What Lua version do you have?
This code works in Lua 5.3
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
-- Read two float values starting from position 10 in the string
print(string.unpack("<ff", str, 10)) --> 39.217700958252 22.779169082642 18
-- 18 (third returned value) is the next position in the string
For Lua 5.1 you have to write a special function (or steal it from François Perrad's git repo)
local function binary_to_float(str, pos)
  -- Read four bytes (little-endian) starting at pos
  local b1, b2, b3, b4 = str:byte(pos, pos+3)
  -- Sign is the high bit of the last byte
  local sign = b4 > 0x7F and -1 or 1
  -- 8-bit exponent: low 7 bits of b4 plus the high bit of b3
  local expo = (b4 % 0x80) * 2 + math.floor(b3 / 0x80)
  -- 23-bit mantissa from the remaining bits of b3, b2, b1
  local mant = ((b3 % 0x80) * 0x100 + b2) * 0x100 + b1
  local n
  if mant + expo == 0 then
    n = sign * 0.0
  elseif expo == 0xFF then
    -- Infinity (mantissa 0) or NaN
    n = (mant == 0 and sign or 0) / 0
  else
    n = sign * (1 + mant / 0x800000) * 2.0^(expo - 0x7F)
  end
  return n
end
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
print(binary_to_float(str, 10)) --> 39.217700958252
print(binary_to_float(str, 14)) --> 22.779169082642
It’s little-endian byte-order of IEEE-754 single-precision binary:
E.g., 0 0 128 63 is:
00111111 10000000 00000000 00000000
(63) (128) (0) (0)
Why that equals 1 requires that you understand the very basics of IEEE-754 representation, namely its use of an exponent and mantissa. See here to start.
See @Egor's answer above for how to use string.unpack() in Lua 5.3, and one possible implementation you could use in earlier versions.
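If you want to sanity-check the byte groups outside Lua, Python's struct module decodes the same little-endian singles; this is just a cross-check, not something for the game script:
import struct

# Each group of four bytes from the message is a little-endian IEEE-754 single
for group in ([237, 222, 28, 66], [189, 59, 182, 65], [0, 0, 128, 63]):
    value = struct.unpack("<f", bytes(group))[0]
    print(group, "->", round(value, 4))
# [237, 222, 28, 66] -> 39.2177
# [189, 59, 182, 65] -> 22.7792
# [0, 0, 128, 63] -> 1.0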

icinga2 - where to change client monitoring commands?

System: Ubuntu 16.04
On the master node, where icinga2 is installed:
#ls /etc/icinga2/repository.d/hosts/WIN-U52321E0BAK/
disk C%3A.conf disk.conf icinga.conf load.conf ping4.conf
ping6.conf procs.conf swap.conf users.conf
All the conf files have the same "dummy" check_command in them, for example:
#cat load.conf
object Service "load" {
import "satellite-service"
check_command = "dummy"
host_name = "WIN-U52321E0BAK"
zone = "WIN-U52321E0BAK"
}
I can't understand where the dummy command is called from or how to customize the warning and critical thresholds for the checks.
The dummy command is defined in /usr/share/icinga2/include/command-plugins.conf, like so:
144 object CheckCommand "dummy" {
145 import "plugin-check-command"
146
147 command = [
148 PluginDir + "/check_dummy",
149 "$dummy_state$",
150 "$dummy_text$"
151 ]
152
153 vars.dummy_state = 0
154 vars.dummy_text = "Check was successful."
155 }
In order to modify the warn and crit levels, you set the custom variable at the host or service level. Using the example of ping, we see the default configuration in that same file:
36 template CheckCommand "ping-common" {
37 import "plugin-check-command"
38
39 command = [ PluginDir + "/check_ping" ]
40
41 arguments = {
42 "-H" = "$ping_address$"
43 "-w" = "$ping_wrta$,$ping_wpl$%"
44 "-c" = "$ping_crta$,$ping_cpl$%"
45 "-p" = "$ping_packets$"
46 "-t" = "$ping_timeout$"
47 }
48
49 vars.ping_wrta = 100
50 vars.ping_wpl = 5
51 vars.ping_crta = 200
52 vars.ping_cpl = 15
53 }
Here's the important bit:
49 vars.ping_wrta = 100
50 vars.ping_wpl = 5
51 vars.ping_crta = 200
52 vars.ping_cpl = 15
So: we go to our host or service definition, thusly (using /etc/icinga2/conf.d/host.conf and the NodeName/localhost definition which everybody has; comments removed):
18 object Host NodeName {
20 import "generic-host"
21
23 address = "127.0.0.1"
24 address6 = "::1"
25
27 vars.os = "Linux"
30 vars.http_vhosts["http"] = {
31 http_uri = "/"
32 }
37
39 vars.disks["disk"] = {
41 }
42 vars.disks["disk /"] = {
43 disk_partitions = "/"
44 }
45 }
And we insert before line 45 above to produce:
18 object Host NodeName {
20 import "generic-host"
21
23 address = "127.0.0.1"
24 address6 = "::1"
25
27 vars.os = "Linux"
30 vars.http_vhosts["http"] = {
31 http_uri = "/"
32 }
37
39 vars.disks["disk"] = {
41 }
42 vars.disks["disk /"] = {
43 disk_partitions = "/"
44 }
45 vars.ping_wrta = 50
46 vars.ping_wpl = 3
47 vars.ping_crta = 10
48 vars.ping_cpl = 2
49 }
...and you have successfully customized the check threshold. You can add those variables to a template or even a hostgroup (I think; better test that, I may be wrong).
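For completeness, the same overrides can sit directly on a service object rather than the host; a sketch in the same DSL (the host and service names here are just placeholders taken from the question):
object Service "ping4" {
  import "generic-service"
  host_name = "WIN-U52321E0BAK"
  check_command = "ping4"

  // Override the defaults from command-plugins.conf for this service only
  vars.ping_wrta = 50
  vars.ping_wpl = 3
  vars.ping_crta = 100
  vars.ping_cpl = 10
}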
