I'm trying to build a Fluentd pipeline that pulls data from an API and writes it to PostgreSQL:
<source>
  @type http_pull
  tag energydata
  url http://10.0.0.30:8080/data
  interval 10s
  format json
</source>
# Record from source looks like this (I've sent it to stdout to verify):
# {
#   "url": "http://10.0.0.30:8080/data",
#   "status": 200,
#   "message": {
#     "timestamp": "2022-12-01T09:28:43Z",
#     "currentPowerConsumption": 0.429
#   }
# }
<match energydata>
  @type sql
  host 10.0.0.10
  port 5432
  database energy
  adapter postgresql
  username fluent
  password somepasswd
  <table>
    table energymeterdata
    column_mapping '$.message.timestamp:timestamp,$.message.currentPowerConsumption:currentPowerConsumption'
  </table>
</match>
The resulting SQL rows contain only NULL values. What is the right record_accessor syntax for column_mapping?
I've tried different syntaxes and quoting styles for the record_accessor in the column mapping, but I can't find a format that works.
Before learning that it should be possible with the record accessor (as claimed by one of the maintainers in a GitHub issue), I tried flattening the JSON structure, but I could not get that to work either. I prefer the record-accessor approach mentioned in this post because it is cleaner. This seems like a fairly basic scenario, so I may be overlooking something.
It turns out there is nothing beyond the comment from user 'repeatedly' in the GitHub issue to suggest that record_accessor syntax is supported in the sql plugin's column_mapping parameter. That means we have to rely on filters to flatten the JSON structure instead. The following configuration works for me:
<filter energydata>
  @type record_transformer
  renew_record true
  keep_keys message
  enable_ruby true
  <record>
    message ${record["message"].to_json.to_s}
  </record>
</filter>
<filter energydata>
  @type parser
  key_name message
  reserve_data true
  remove_key_name_field true
  <parse>
    @type json
  </parse>
</filter>
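With these two filters in place the record is flattened to (assuming the record shape shown at the top of this post):

{"timestamp":"2022-12-01T09:28:43Z","currentPowerConsumption":0.429}

so the <table> section no longer needs record_accessor paths; a plain key-to-column mapping like this sketch should do:

<table>
  table energymeterdata
  column_mapping 'timestamp:timestamp,currentPowerConsumption:currentPowerConsumption'
</table>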
Source: here
I'm facing a problem with the shared secret in the "clients.conf" file of FreeRADIUS server 3.0.25.
I tried to follow the documentation, but with no luck. In particular, I'm trying to use the exact example from the documentation of the octal representation of the secret "AB":
clients.conf:
secret = "\101\102"
Then I run radtest:
./radtest -x testing password localhost 0 "AB"
In the server debug log I find:
"Dropping packet without response because of error: Received packet from 127.0.0.1 with invalid Message-Authenticator! (Shared secret is incorrect.)"
I tried every combination that came to mind: with or without quotes, with the "-t eap-md5" parameter in radtest, and so on.
Of course, if I write 'secret = "AB"' in clients.conf everything works, but I need the octal representation because a client of ours uses special non-printable characters in the secret.
Any help is appreciated
Thanks
I was able to make it work by changing the default value of the correct_escapes parameter in radiusd.conf:
correct_escapes = false   # it was 'true' by default
It's still not clear to me why it doesn't work with correct_escapes set to 'true'; maybe it's a bug?
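As a sanity check independent of FreeRADIUS, the shell's printf confirms that the octal escapes \101\102 do decode to "AB":

$ printf '\101\102\n'
AB

So the escape sequence itself is right; what changes with correct_escapes is apparently how FreeRADIUS parses backslash escapes in its own config files.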
I am using the inputs.http plugin of Telegraf to import data from an API into InfluxDB. The API requires a time filter in the body of a POST request and responds with data between those bounds. I want to call this API periodically and retrieve data for roughly the past 10 seconds, so I need to include the current timestamp in the body of the POST request. Can I pass the current server timestamp to telegraf.conf as an environment variable or a command-line argument? What I have attempted so far is using an environment variable in telegraf.conf, as shown below. It did not work.
[[inputs.http]]
  ## URL
  urls = ["url"]
  ## HTTP method
  method = "POST"
  ## Optional HTTP headers
  headers = {"cache-control" = "no-cache", "content-type" = "application/json"}
  ## HTTP entity-body to send with POST/PUT requests
  # body = "{\"measurement\":\"measurement_name\", \"time_filter\":[1593068400, 1593068800]}"
  body = "{\"measurement\":\"measurement_name\", \"time_filter\":[1593562547, ${date +%s}]}"
  ## Data from HTTP in JSON format
  data_format = "json"
I then run the command below:

$ telegraf -config telegraf.conf

and receive a 400 error. If I replace the body line (the one with the variable) with the commented-out line above it (no variable), everything works fine.
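For what it's worth, Telegraf does substitute environment variables written as ${VAR} into the config file, but ${date +%s} is shell command-substitution syntax, not a variable name, which is likely what triggers the 400. A sketch of the environment-variable route, where CURRENT_TS is a name I made up:

body = "{\"measurement\":\"measurement_name\", \"time_filter\":[1593562547, ${CURRENT_TS}]}"

$ export CURRENT_TS=$(date +%s)
$ telegraf -config telegraf.conf

Note that the substitution happens once, when the config is loaded, so the timestamp will not advance between collection intervals; a rolling window would need something like a wrapper script that restarts Telegraf, or an inputs.exec-based approach.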
I am trying to create a proof of concept using the TICK stack for monitoring. I have the helloworld stack running and showing CPU/Docker metrics.
I am trying to use the Telegraf HTTP input plugin to pull from an HTTP endpoint.
From the docs I have simply configured the URL, the GET method, and the data format (set to json):
[[inputs.http]]
  ## One or more URLs from which to read formatted metrics
  urls = [
    "http://localhost:500/Queues"
  ]
  method = "GET"
  data_format = "json"
However, nothing appears in InfluxDB/Chronograf.
I can modify the endpoint to suit any changes there, but what am I doing wrong in the Telegraf config?
I think I had the same struggle. For me, the following config worked:
[[inputs.http]]
  name_override = "restservice_health"
  urls = [
    "https://localhost:5001/health"
  ]
  method = "GET"
  data_format = "value"
  data_type = "string"
This way, it appeared in InfluxDB under the name "restservice_health" (although the name_override option is not essential for the example, so you could leave it out).
First, you would have to look at the result of the http://localhost:500/Queues request to make sure that it is a valid JSON object.
Then, depending on what the endpoint returns, you may have to configure the JSON parser, for example by setting json_query to a GJSON query that navigates the response down to the data you need.
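For example, if the /Queues endpoint wrapped its payload in an envelope such as {"result": {"queues": [...]}} (a made-up shape for illustration), a sketch of the parser configuration could look like:

[[inputs.http]]
  urls = [
    "http://localhost:500/Queues"
  ]
  method = "GET"
  data_format = "json"
  ## GJSON query; "result.queues" is hypothetical - adjust to the actual response
  json_query = "result.queues"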
I have a website in Brazilian Portuguese, and I'm using Elasticsearch to run our site search.
When visitors search from our site everything works, but codebasehq reports some exceptions (errors) like this: Tire::Search::SearchRequestFailed
nested: JsonParseException[Invalid UTF-8 middle byte 0x72\n at [Source: [B#42dcdefd; line: 1, column: 46]]; }]","status":500}
These errors come only from URLs, and I don't know where these links originate, for example:
?q=Acess%F3rios (error)
?q=Acessórios (ok)
?q=Acess%C3%B3rios (ok)
I don't know how to fix this error; I'm trying to stop these errors from being generated in codebasehq.
The error seems to be coming from Elasticsearch, which trips on the invalid JSON it receives: %F3 is the Latin-1 percent-encoding of "ó", and the raw byte 0xF3 followed by "r" (0x72) is not valid UTF-8, which is exactly what the "Invalid UTF-8 middle byte 0x72" message says. %C3%B3 is the proper UTF-8 encoding, which is why that variant works.
In general, Tire handles accented characters in searches just fine:
# encoding: UTF-8
require 'tire'

s = Tire.search do
  query { string 'Žluťoučký' }
end

p s.results
You should enable the Tire logging with:
Tire.configure { logger STDERR, level: "debug" }
or with the Rails logger, to find the offending JSON, debug it, and possibly post more information here.
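To check a query string for this problem directly, here is a minimal sketch using Ruby's standard CGI module (my addition, not part of Tire):

require 'cgi'

# %C3%B3 is "ó" in UTF-8 => the decoded string is valid
CGI.unescape('Acess%C3%B3rios').valid_encoding?  # => true
# %F3 is "ó" in Latin-1 => the lone byte 0xF3 is not valid UTF-8
CGI.unescape('Acess%F3rios').valid_encoding?     # => false

Whichever client builds the failing links is percent-encoding the query in Latin-1 instead of UTF-8; re-encoding or rejecting such input before it reaches Elasticsearch should stop the exceptions in codebasehq.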
I've got problems with encoding in (I think) an Entity. In more detail: I've got a composite component responsible for in-place editing; the user clicks on text, edits it, clicks save, and the data is saved in the database. The problem is that when the user enters non-English (diacritic) characters, the encoding breaks. For example, if the user enters the Polish character ą, the Entity receives something like ºÄ. The data is stored in a MySQL database whose encoding is set to UTF-8, and the page on which the data is shown is also encoded in UTF-8. I've checked that the problem appears while sending data from the client (browser) to the server, but I don't know what is wrong.
I've finally found the solution. All I had to do was add a character-encoding filter to web.xml; without it, the servlet container decodes POST parameters with its default encoding (typically ISO-8859-1), which mangles multi-byte UTF-8 characters:
<filter>
  <filter-name>SetCharacterEncoding</filter-name>
  <filter-class>org.apache.catalina.filters.SetCharacterEncodingFilter</filter-class>
  <init-param>
    <param-name>encoding</param-name>
    <param-value>UTF-8</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>SetCharacterEncoding</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>