Sending data based on a JSON key-value pair in Fluent Bit output - fluentd

I want to send JSON log data that contains key-value pairs to different files, routed by the value of one key.
For example, I have the following records:
1. {"ABC": true, "BDC": "some random data"}
2. {"ABC": false, "BDC": "some random data"}
3. {"BDC": "some random data"}
So I want to create OUTPUT sections that filter the data and send it to two different files:
if a record contains "ABC": true, send it to one output; otherwise send it to the other.
[OUTPUT]
    Name   file
    Path   /tmp
    Match  "ABC": true

[OUTPUT]
    Name   file
    Path   /tmp
    Match  *
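
Match in Fluent Bit selects records by tag, not by record content, so the records have to be re-tagged before they can be routed this way. Below is a minimal sketch using the rewrite_tag filter (assuming a Fluent Bit version that ships it; the tags and paths are placeholders):

[FILTER]
    Name   rewrite_tag
    Match  original.tag
    # If $ABC stringifies to "true", re-emit the record under the tag
    # abc.true and drop the original copy (the trailing "false").
    Rule   $ABC ^true$ abc.true false

[OUTPUT]
    Name   file
    Match  abc.true
    Path   /tmp/abc_true

[OUTPUT]
    Name   file
    Match  original.tag
    Path   /tmp/other

Records with "ABC": true land in the first output; everything else keeps the original tag and lands in the second.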


Flattening Telegraf MQTT input data

How can I use Telegraf to extract timestamp and sensor value from an MQTT message and insert it into a PostgreSQL database with separate timestamp and sensor value columns?
I am receiving this JSON object from MQTT:
{"sensor": "current", "data": [[1614945972418042880, 1614945972418042880], [1614945972418294528, 0.010058338362502514], [1614945972418545920, 0.010058338362502514]]}
It contains two fields: "sensor" and "data". The "sensor" field is a string that identifies the type of sensor, and the "data" field is an array of arrays, where each sub-array contains a timestamp and a sensor value. I am using Telegraf to output this data to a PostgreSQL database. I would like to flatten the list into separate timestamp and sensor-value columns, using the sensor name as the column name. How can I configure Telegraf to do this?
So my table would look like this:
timestamp            | current
1614945972418042880  | 1614945972418042880
1614945972418294528  | 0.010058338362502514
[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]
  topics = ["your_topic"]
  data_format = "json"
  json_query = "data.*"
  tag_keys = ["sensor", "timestamp"]
  measurement = "sensors"
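
The stock json parser cannot split that array of arrays into rows (json_query and tag_keys operate on objects, not positional arrays), and "measurement" is likely to be rejected (the standard option is name_override). One way to do the reshaping, sketched and untested, is to read the payload as a raw string and rebuild the metrics in a Starlark processor (the topic and measurement names are placeholders):

[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]
  topics = ["your_topic"]
  data_format = "value"
  data_type = "string"

[[processors.starlark]]
  source = '''
load("json.star", "json")

def apply(metric):
    # Assumes only these MQTT payloads flow through this processor.
    payload = json.decode(metric.fields["value"])
    sensor = payload["sensor"]                # e.g. "current"
    metrics = []
    for pair in payload["data"]:
        m = Metric("sensors")                 # one metric per [timestamp, value] pair
        m.time = int(pair[0])                 # nanosecond timestamp from the payload
        m.fields[sensor] = float(pair[1])     # field (column) named after the sensor
        metrics.append(m)
    return metrics
'''

Each [timestamp, value] pair becomes its own metric, so the PostgreSQL output writes one row per pair with the sensor name as the value column.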

Postman: read from a CSV data file with a new URL for each iteration

I have read the excellent documentation "Working with data files",
which handles this case:
path, value
post, 1
post, 2
post, 3
post, 4
so the path is valid for one iteration, and within the iteration many posts can be processed.
My scenario is a little different, since the customer numbers I process are passed in the URL:
https://tapy-uat.flyrit.de/clm/unbprocessor/sales/pt/identifiers/no=876498712
so I would have:
https://tapy-uat.flyrit.de/clm/unbprocessor/sales/pt/identifiers/no={{path}}
and the rest of the API call is unchanged, i.e.
path, value1
path, value2
path, value3
path, value4
path, value5
Is there a way to iterate so that each API call is made to a new URL?
I envision executing from the Collection Runner and looping through all customer numbers from the CSV file, passing the customer number to the {{path}} variable in the URL on each iteration.
OK, I have solved it. Actually I need only one column for the URL variable:
https://tapy-uat.flyrit.de/clm/unbprocessor/sales/pt/identifiers/no={{path}}
path
4567459687
4357349584
2396504398
4572839495
and the iteration runs once per customer number, giving a new URL each time.
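
As a side note, the same CSV can drive the run from the command line as well: Newman, Postman's CLI runner, takes the data file via --iteration-data (the collection and CSV file names here are placeholders):

newman run my_collection.json --iteration-data customers.csv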

File format that is easy to parse

I have to parse several files that share a common format and dump the data into one file. I want to know what the format should be so that parsing is easy.
What should the file format and its parsing mechanism look like?
I have designed the file format and its parsing mechanism as follows:
user1.txt
24-07-2014
tag_1
some_data
tag_2
some_data
tag_3
some_data
end
31-07-2014
tag_1
some_data
tag_2
some_data
tag_3
some_data
end
Every week these files are updated with new data.
Parsing mechanism:
func(date)
    find the index of the date (passed as an argument) and of the first "end" after it
    make a list of the lines in between
    from this list, take the indexes of tag_1, tag_2 and tag_3, and append the data
    between tag_1 and tag_2, tag_2 and tag_3, and tag_3 and "end" to tag_1_data,
    tag_2_data and tag_3_data respectively
main()
    call func() for each file passed as an argument
    then dump the lists tag_1_data, tag_2_data and tag_3_data into a file
The file generated this way will contain the data from tag_1, tag_2 and tag_3 across all files, grouped separately.
generated_file.txt produced by python script.py 24-07-2014:
24-07-2014
tag_1
data from user1
data from user2
data from user3
tag_2
data from user1
data from user2
data from user3
tag_3
data from user1
data from user2
data from user3
Note: the files user1.txt, user2.txt, etc. are updated in that format every week (before the user enters their data) by another script, so that the user only has to write the data under each tag, not the format.
If you know a better file format that makes parsing easier, shoot your comments.
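
For reference, here is a minimal Python sketch of the mechanism described above, assuming the exact layout shown (one value per line under each tag) and taking the user files as extra command-line arguments:

import sys

def parse(path, date):
    """Collect the data lines under each tag inside the block for one date."""
    lines = open(path).read().splitlines()
    start = lines.index(date)            # the block starts at the date line
    end = lines.index("end", start)      # ...and runs to the next "end"
    data, tag = {}, None
    for line in lines[start + 1:end]:
        if line.startswith("tag_"):
            tag = line                   # switch to a new tag section
            data.setdefault(tag, [])
        elif tag:
            data[tag].append(line)
    return data

def main():
    date = sys.argv[1]                   # e.g. 24-07-2014
    merged = {}
    for path in sys.argv[2:]:            # user1.txt user2.txt ...
        for tag, values in parse(path, date).items():
            merged.setdefault(tag, []).extend(values)
    with open("generated_file.txt", "w") as out:
        out.write(date + "\n")
        for tag in sorted(merged):
            out.write(tag + "\n" + "\n".join(merged[tag]) + "\n")

if __name__ == "__main__":
    main()

That said, a self-describing format such as JSON would let the standard library do the parsing for you and is worth considering if you control the format.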

How to compare and verify user input against a CSV file's column data?

I have been trying to get an answer for the last few weeks. I want to give users a text field in the app and ask them to enter an ID number, which will then be checked against the columns of an uploaded CSV file. If it matches, display an alert saying a match was found; otherwise, don't.
A csv file is basically a normal file in which items are separated by commas.
For example:
id,name,email
1,Joe,joe@d.com
2,Pat,pat@d.com
To get the list of IDs stored in this CSV file, simply loop through the lines and grab the first item after splitting each line on the comma:
id = line.split(",")[0]  // first element; the exact syntax depends on your language
Now add this id to a collection of IDs you are storing, such as a list:
ids.add(id)
When you take the user input, all you need to do is check whether the ID is in your list of IDs:
if (ids.contains(userId)) { print "Found"; }
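
The same check as a minimal Python sketch, assuming the CSV has a header row as in the example (the file name is a placeholder):

import csv

def load_ids(path):
    # Collect the "id" column of every row into a set for fast lookups.
    with open(path, newline="") as f:
        return {row["id"] for row in csv.DictReader(f)}

ids = load_ids("users.csv")
user_id = input("Enter an ID number: ").strip()
print("Found" if user_id in ids else "No match")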

IMAP: batch fetch the text parts of messages

I'd like to download the text parts (that is, MIME types text/plain, text/html and text/richtext) of the messages from UID x to UID y.
I have the UIDs (not message sequence numbers).
How can I do something like
FETCH 412444:412500 (BODY.PEEK[TEXT/PLAIN OR TEXT/HTML OR TEXT/RICHTEXT])
Thanks!
After checking RFC3501, the UID command (section 6.4.8) seems to be able to do part of this:
The UID command has two forms. In the first form, it takes as its
arguments a COPY, FETCH, or STORE command with arguments
appropriate for the associated command. However, the numbers in
the sequence set argument are unique identifiers instead of
message sequence numbers. Sequence set ranges are permitted, but
there is no guarantee that unique identifiers will be contiguous.
Thus, you can address the range by UID. Note, however, that body section specifiers take part numbers rather than MIME types, so the OR syntax above is not valid. Fetch the body structure for the range first to locate the text parts:
UID FETCH 412444:412500 (BODYSTRUCTURE)
and then fetch the specific parts you found, e.g.:
UID FETCH 412444 (BODY.PEEK[1])
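
A minimal sketch of the two-step fetch with Python's imaplib (host, credentials and mailbox are placeholders):

import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user", "password")
conn.select("INBOX", readonly=True)

# One round trip for the whole UID range: the MIME structure of each message.
typ, data = conn.uid("FETCH", "412444:412500", "(BODYSTRUCTURE)")

# After locating the text/plain or text/html part numbers in the structures,
# fetch those specific parts without marking the messages as seen:
typ, data = conn.uid("FETCH", "412444", "(BODY.PEEK[1])")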
