I want to write each input to its corresponding InfluxDB database (e.g. input1 --> DB1, input2 --> DB2).
This is my telegraf.conf:
# OUTPUT PLUGINS #
[[outputs.influxdb]]
urls = ["http://172.18.0.2:8086"]
database = "shellyem"
namepass = ["db1"]
# OUTPUT PLUGINS #
[[outputs.influxdb]]
urls = ["http://172.18.0.2:8086"]
database = "shell"
namepass = ["db2"]
# INPUT PLUGINS #
[[inputs.db1]]
urls = [
"http://192.168.1.191/emeter/0",
]
timeout = "1s"
data_format = "json"
# INPUT PLUGINS #
[[inputs.db2]]
urls = [
"http://192.168.1.192/emeter/0",
]
timeout = "1s"
data_format = "json"
It doesn't work because I don't understand how namepass works. Can you help me? Thank you.
But it's so simple, for goodness' sake, just read on.
OK, copy and paste the code below:
[[outputs.influxdb]]
urls = ["http://172.18.0.2:8086"]
database = "Mirko"
[outputs.influxdb.tagpass]
influxdb_tag = ["Mirko"]
[[outputs.influxdb]]
urls = ["http://172.18.0.2:8086"]
database = "Simone"
[outputs.influxdb.tagpass]
influxdb_tag = ["Simone"]
[[inputs.http]]
urls = [
"http://192.168.1.191/emeter/0",
"http://192.168.1.191/emeter/1"
]
data_format = "json"
[inputs.http.tags]
influxdb_tag = "Mirko"
[[inputs.http]]
urls = [
"http://192.168.1.201/emeter/0",
"http://192.168.1.201/emeter/1"
]
data_format = "json"
[inputs.http.tags]
influxdb_tag = "Simone"
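For completeness: namepass filters on the measurement name, not the plugin name (there is no inputs.db1 plugin; the HTTP input's measurement is called http by default). A rough, untested sketch of the namepass route, assuming name_override is used to rename each input's measurement (shown for one input/output pair only):

```toml
# Untested sketch: route by measurement name instead of by tag.
# namepass matches measurement names; name_override sets one per input.
[[outputs.influxdb]]
  urls = ["http://172.18.0.2:8086"]
  database = "shellyem"
  namepass = ["db1"]

[[inputs.http]]
  urls = ["http://192.168.1.191/emeter/0"]
  timeout = "1s"
  data_format = "json"
  name_override = "db1"   # the measurement name that namepass matches
```

The tagpass approach above scales better when one input polls several URLs, since the tag rides along with every metric regardless of measurement name.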
I'm trying to send this very simple JSON string to Telegraf to be saved into InfluxDB:
{ "id": "id_123", "value": 10 }
So the request would be this: curl -i -XPOST 'http://localhost:8080/telegraf' --data-binary '{"id": "id_123","value": 10}'
When I make that request, I get the following answer: HTTP/1.1 204 No Content Date: Tue, 20 Apr 2021 13:02:49 GMT, but when I check what was written to the database, there is only the value field:
select * from http_listener_v2

time                host         influxdb_database  value
----                ----         -----------------  -----
1618923747863479914 my.host.com  my_db              10
What am I doing wrong?
Here's my Telegraf config:
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = ""
omit_hostname = false
# OUTPUTS
[[outputs.influxdb]]
urls = ["http://127.0.0.1:8086"]
database = "telegraf"
username = "xxx"
password = "xxx"
[outputs.influxdb.tagdrop]
influxdb_database = ["*"]
[[outputs.influxdb]]
urls = ["http://127.0.0.1:8086"]
database = "httplistener"
username = "xxx"
password = "xxx"
[outputs.influxdb.tagpass]
influxdb_database = ["httplistener"]
# INPUTS
## system
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.mem]]
[[inputs.swap]]
[[inputs.system]]
## http listener
[[inputs.http_listener_v2]]
service_address = ":8080"
path = "/telegraf"
methods = ["POST", "PUT"]
data_source = "body"
data_format = "json"
[inputs.http_listener_v2.tags]
influxdb_database = "httplistener"
Use json_string_fields = ["id"]
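Some context on why this works: Telegraf's JSON parser only turns numeric values into fields, so a string value like "id": "id_123" is silently dropped unless it is listed in json_string_fields (or promoted to a tag via tag_keys). A sketch of the adjusted listener section:

```toml
[[inputs.http_listener_v2]]
  service_address = ":8080"
  path = "/telegraf"
  methods = ["POST", "PUT"]
  data_source = "body"
  data_format = "json"
  ## keep string values as fields instead of silently dropping them
  json_string_fields = ["id"]
  [inputs.http_listener_v2.tags]
    influxdb_database = "httplistener"
```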
I am trying to see if this is possible
Terraform 0.12.28
bucket_names = {
"bucket1" = "test_temp"
"bucket2" = "test_jump"
"bucket3" = "test_run"
}
module "s3" {
source = "./modules/s3"
name = var.bucket_names
region = var.region
tags = var.tags
}
The module:
resource "aws_s3_bucket" "s3" {
for_each = var.name
bucket = "${each.value}"
region = var.region
request_payer = "BucketOwner"
tags = var.tags
versioning {
enabled = false
mfa_delete = false
}
}
This works just fine; each bucket is created. BUT now my question is: how can I cleanly apply a specific policy to each bucket in the list?
policy 1 => test_temp
policy 2 => test_jump
policy 3 => test_run
Each policy will be slightly different
My thought: use a regex to find _temp and apply policy 1, and so on.
I'm with #ydaetkocR on this. It's complexity for no gain in a real system, but it could be interesting for learning.
terraform.tfvars
bucket_names = {
"bucket1" = "test_temp"
"bucket2" = "test_jump"
"bucket3" = "test_run"
}
bucket_policy = {
"bucket1" = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test_temp/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
"bucket2" = "..."
"bucket3" = "..."
}
root module
module "s3" {
source = "./modules/s3"
name = var.bucket_names
policy = var.bucket_policy
region = var.region
tags = var.tags
}
modules/s3
resource "aws_s3_bucket" "s3" {
for_each = var.name
bucket = each.value
region = var.region
request_payer = "BucketOwner"
tags = var.tags
versioning {
enabled = false
mfa_delete = false
}
}
resource "aws_s3_bucket_policy" "s3" {
for_each = var.policy
bucket = aws_s3_bucket.s3[each.key].id
policy = each.value
}
The trick here is having the same keys for names, policies, and resource instances. You don't have to use two maps, but it's likely the simplest example.
As you can see above, it would be a nuisance to do this, because you'd have to manually sync the bucket names in the policies or write some fairly elaborate code to substitute the values in your module.
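One way around the manual syncing, as a rough and untested sketch: generate each policy from the bucket name inside the module with jsonencode, so the Resource ARN always matches the bucket it is attached to. This assumes the policies really can be derived from the name; if each one is genuinely hand-written, the two-map approach above is still the simplest.

```hcl
# modules/s3 -- untested sketch: derive the policy from the bucket name
# so the Resource ARN never drifts from the bucket it is attached to.
resource "aws_s3_bucket_policy" "s3" {
  for_each = var.name

  bucket = aws_s3_bucket.s3[each.key].id
  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "MYBUCKETPOLICY"
    Statement = [{
      Sid       = "IPAllow"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource  = "arn:aws:s3:::${each.value}/*"
      Condition = {
        IpAddress = { "aws:SourceIp" = "8.8.8.8/32" }
      }
    }]
  })
}
```

If the per-bucket differences are small (say, only the IP range changes), a third map keyed the same way can hold just the varying piece and be interpolated into the jsonencode call.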
I am following an SE thread to get a response to an HTTP POST on an Express node, but I am unable to get any response from Kapacitor.
Environment
I am using Windows 10 via PowerShell.
I am connected to an internal InfluxDB server, which is referenced in kapacitor.conf, and I have a TICKscript to stream data via it.
kapacitor.conf
hostname = "134.102.97.81"
data_dir = "C:\\Users\\des\\.kapacitor"
skip-config-overrides = true
default-retention-policy = ""
[alert]
persist-topics = true
[http]
bind-address = ":9092"
auth-enabled = false
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/kapacitor.pem"
https-private-key = ""
shutdown-timeout = "10s"
shared-secret = ""
[replay]
dir = "C:\\Users\\des\\.kapacitor\\replay"
[storage]
boltdb = "C:\\Users\\des\\.kapacitor\\kapacitor.db"
[task]
dir = "C:\\Users\\des\\.kapacitor\\tasks"
snapshot-interval = "1m0s"
[load]
enabled = false
dir = "C:\\Users\\des\\.kapacitor\\load"
[[influxdb]]
enabled = true
name = "DB5Server"
default = true
urls = ["https://influxdb.internal.server.address:8086"]
username = "user"
password = "password"
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
insecure-skip-verify = true
timeout = "0s"
disable-subscriptions = true
subscription-protocol = "https"
subscription-mode = "cluster"
kapacitor-hostname = ""
http-port = 0
udp-bind = ""
udp-buffer = 1000
udp-read-buffer = 0
startup-timeout = "5m0s"
subscriptions-sync-interval = "1m0s"
[influxdb.excluded-subscriptions]
_kapacitor = ["autogen"]
[logging]
file = "STDERR"
level = "DEBUG"
[config-override]
enabled = true
[[httppost]]
endpoint = "kapacitor"
url = "http://localhost:1440"
headers = { Content-Type = "application/json;charset=UTF-8"}
alert-template = "{\"id\": {{.ID}}}"
The daemon runs without any problems.
test2.tick
dbrp "DBTEST"."autogen"
stream
|from()
.measurement('humid')
|alert()
.info(lambda: TRUE)
.post()
.endpoint('kapacitor')
Already defined the task .\kapacitor.exe define bc_1 -tick test2.tick
Enabled it .\kapacitor.exe enable bc_1
The status shows no activity:
.\kapacitor.exe show bc_1
ID: bc_1
Error:
Template:
Type: stream
Status: enabled
Executing: true
Created: 13 Mar 19 15:33 CET
Modified: 13 Mar 19 16:23 CET
LastEnabled: 13 Mar 19 16:23 CET
Databases Retention Policies: ["NIMBLE"."autogen"]
TICKscript:
dbrp "TESTDB"."autogen"
stream
|from()
.measurement('humid')
|alert()
.info(lambda: TRUE)
.post()
.endpoint('kapacitor')
DOT:
digraph bc_1 {
graph [throughput="0.00 points/s"];
stream0 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
stream0 -> from1 [processed="0"];
from1 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
from1 -> alert2 [processed="0"];
alert2 [alerts_inhibited="0" alerts_triggered="0" avg_exec_time_ns="0s" crits_triggered="0" errors="0" infos_triggered="0" oks_triggered="0" warns_triggered="0" working_cardinality="0" ];
}
The daemon logs show this for the task:
ts=2019-03-13T16:25:23.640+01:00 lvl=debug msg="starting enabled task on startup" service=task_store task=bc_1
ts=2019-03-13T16:25:23.677+01:00 lvl=debug msg="starting task" service=kapacitor task_master=main task=bc_1
ts=2019-03-13T16:25:23.678+01:00 lvl=info msg="started task" service=kapacitor task_master=main task=bc_1
ts=2019-03-13T16:25:23.679+01:00 lvl=debug msg="listing dot" service=kapacitor task_master=main dot="digraph bc_1 {\nstream0 -> from1;\nfrom1 -> alert2;\n}"
ts=2019-03-13T16:25:23.679+01:00 lvl=debug msg="started task during startup" service=task_store task=bc_1
ts=2019-03-13T16:25:23.680+01:00 lvl=debug msg="opened service" source=srv service=*task_store.Service
ts=2019-03-13T16:25:23.680+01:00 lvl=debug msg="opening service" source=srv service=*replay.Service
ts=2019-03-13T16:25:23.681+01:00 lvl=debug msg="skipping recording, metadata is already correct" service=replay recording_id=353d8417-285d-4fd9-b32f-15a82600f804
ts=2019-03-13T16:25:23.682+01:00 lvl=debug msg="skipping recording, metadata is already correct" service=replay recording_id=a8bb5c69-9f20-4f4d-8f84-109170b6f583
But I get nothing on the Express node side. The code is exactly the same as in the above-mentioned SE thread.
Any help on how to capture the stream from Kapacitor via HTTP POST? I already have a live system pushing information into the dedicated database.
I was able to solve this by shifting from stream to batch in the above query. I have documented the complete process on medium.com.
Some Files:
kapacitor.gen.conf
hostname = "my-windows-10"
data_dir = "C:\\Users\\<user>\\.kapacitor"
skip-config-overrides = true
default-retention-policy = ""
[alert]
persist-topics = true
[http]
bind-address = ":9092"
auth-enabled = false
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/kapacitor.pem"
https-private-key = ""
shutdown-timeout = "10s"
shared-secret = ""
[replay]
dir = "C:\\Users\\des\\.kapacitor\\replay"
[storage]
boltdb = "C:\\Users\\des\\.kapacitor\\kapacitor.db"
[task]
dir = "C:\\Users\\des\\.kapacitor\\tasks"
snapshot-interval = "1m0s"
[load]
enabled = false
dir = "C:\\Users\\des\\.kapacitor\\load"
[[influxdb]]
enabled = true
name = "default"
default = true
urls = ["http://127.0.0.1:8086"]
username = ""
password = ""
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
insecure-skip-verify = true
timeout = "0s"
disable-subscriptions = true
subscription-protocol = "http"
subscription-mode = "cluster"
kapacitor-hostname = ""
http-port = 0
udp-bind = ""
udp-buffer = 1000
udp-read-buffer = 0
startup-timeout = "5m0s"
subscriptions-sync-interval = "1m0s"
[influxdb.excluded-subscriptions]
_kapacitor = ["autogen"]
[logging]
file = "STDERR"
level = "DEBUG"
[config-override]
enabled = true
# Subsequent Section describes what this conf does
[[httppost]]
endpoint = "kap"
url = "http://127.0.0.1:30001/kapacitor"
headers = { "Content-Type" = "application/json"}
TICKscript
var data = batch
| query('SELECT "v" FROM "telegraf_test"."autogen"."humid"')
.period(5s)
.every(10s)
data
|httpPost()
.endpoint('kap')
Define the Task
.\kapacitor.exe define batch_test -tick .\batch_test.tick -dbrp DBTEST.autogen
I suspect the hostname was the mischievous part: it was previously set to localhost, but I set it to my machine's hostname and used the IP address 127.0.0.1 wherever localhost was mentioned.
I'm trying to create a new UPS.conf file for Telegraf to collect data from a batch of UPS units via SNMP. For inputs such as hostname and upsType, the OIDs return a string when queried via snmpget, but when run through Telegraf I get only integer results.
My UPS.conf File
[[inputs.snmp]]
agents = [ "192.168.15.60", "192.168.15.64" , "192.168.15.65","192.168.15.66","192.168.15.67" ]
## Timeout for each SNMP query.
timeout = "10s"
## Number of retries to attempt within timeout.
retries = 3
## SNMP version, values can be 1, 2, or 3
version = 3
## SNMP community string.
community = "heabc"
#
# ## The GETBULK max-repetitions parameter
# max_repetitions = 10
#
# ## SNMPv3 auth parameters
sec_name = "grafana"
auth_protocol = "SHA" # Values: "MD5", "SHA", ""
auth_password = "redacted"
sec_level = "authPriv" # Values: "noAuthNoPriv", "authNoPriv", "authPriv"
# #context_name = ""
priv_protocol = "AES" # Values: "DES", "AES", ""
priv_password = "redacted"
#
# ## measurement name
[[inputs.snmp.field]]
name = "hostname"
oid = "iso.1.3.6.1.2.1.1.6.0"
conversion = ""
is_tag = true
[[inputs.snmp.field]]
name = "upsType"
oid = "iso.1.3.6.1.4.1.318.1.1.1.1.1.1.0"
is_tag = true
conversion = ""
[[inputs.snmp.field]]
name = "batteryCapacityPercent"
oid = "iso.1.3.6.1.4.1.318.1.1.1.2.2.1.0"
[[inputs.snmp.field]]
name = "batteryTemp"
oid = "iso.1.3.6.1.4.1.318.1.1.1.2.2.2.0"
[[inputs.snmp.field]]
name = "batteryRuntimeRemain"
oid = "iso.1.3.6.1.4.1.318.1.1.1.2.2.3.0"
[[inputs.snmp.field]]
name = "batteryReplace"
oid = "iso.1.3.6.1.4.1.318.1.1.1.2.2.4.0"
[[inputs.snmp.field]]
name = "inputVoltage"
oid = "iso.1.3.6.1.4.1.318.1.1.1.3.2.1.0"
[[inputs.snmp.field]]
name = "inputFreq"
oid = "iso.1.3.6.1.4.1.318.1.1.1.3.2.4.0"
[[inputs.snmp.field]]
name = "lastTransferReason"
oid = "iso.1.3.6.1.4.1.318.1.1.1.3.2.5.0"
[[inputs.snmp.field]]
name = "outputVoltage"
oid = "iso.1.3.6.1.4.1.318.1.1.1.4.2.1.0"
[[inputs.snmp.field]]
name = "outputFreq"
oid = "iso.1.3.6.1.4.1.318.1.1.1.4.2.2.0"
[[inputs.snmp.field]]
name = "outputLoad"
oid = "iso.1.3.6.1.4.1.318.1.1.1.4.2.3.0"
[[inputs.snmp.field]]
name = "ouputCurrent"
oid = "iso.1.3.6.1.4.1.318.1.1.1.4.2.4.0"
[[inputs.snmp.field]]
name = "lastSelfTestResult"
oid = "iso.1.3.6.1.4.1.318.1.1.1.7.2.3.0"
[[inputs.snmp.field]]
name = "lastSelfTestDate"
oid = "iso.1.3.6.1.4.1.318.1.1.1.7.2.4.0"
Output of telegraf --test --config UPS.conf - notice the hostname on each: one is 121, one is 91, one is 82, etc. The upsType field should also come through as a string, but it is being converted to a number.
* Plugin: inputs.snmp, Collection 1
> snmp,hostname=121,upsType=122,agent_host=192.168.15.60,host=HEAGrafana batteryTemp=124i,inputVoltage=127i,outputFreq=131i,outputLoad=132i,lastSelfTestDate=135i,outputVoltage=130i,ouputCurrent=133i,lastSelfTestResult=134i,batteryCapacityPercent=123i,batteryRuntimeRemain=125i,batteryReplace=126i,inputFreq=128i,lastTransferReason=129i 1527721763000000000
> snmp,host=HEAGrafana,hostname=103,upsType=104,agent_host=192.168.15.64 batteryCapacityPercent=105i,batteryReplace=108i,inputFreq=110i,lastTransferReason=111i,lastSelfTestResult=116i,ouputCurrent=115i,lastSelfTestDate=117i,batteryTemp=106i,batteryRuntimeRemain=107i,inputVoltage=109i,outputVoltage=112i,outputFreq=113i,outputLoad=114i 1527721764000000000
> snmp,hostname=91,upsType=92,agent_host=192.168.15.65,host=HEAGrafana lastSelfTestDate=105i,batteryTemp=94i,inputVoltage=97i,inputFreq=98i,outputFreq=101i,outputLoad=102i,ouputCurrent=103i,lastSelfTestResult=104i,batteryCapacityPercent=93i,batteryRuntimeRemain=95i,batteryReplace=96i,lastTransferReason=99i,outputVoltage=100i 1527721766000000000
> snmp,hostname=82,upsType=83,agent_host=192.168.15.66,host=HEAGrafana batteryReplace=87i,inputVoltage=88i,inputFreq=89i,lastTransferReason=90i,outputLoad=93i,batteryCapacityPercent=84i,batteryTemp=85i,batteryRuntimeRemain=86i,lastSelfTestResult=95i,lastSelfTestDate=96i,outputVoltage=91i,outputFreq=92i,ouputCurrent=94i 1527721768000000000
> snmp,hostname=61,upsType=62,agent_host=192.168.15.67,host=HEAGrafana lastTransferReason=69i,outputVoltage=70i,outputFreq=71i,outputLoad=72i,batteryTemp=64i,batteryReplace=66i,inputVoltage=67i,inputFreq=68i,lastSelfTestDate=75i,batteryCapacityPercent=63i,batteryRuntimeRemain=65i,ouputCurrent=73i,lastSelfTestResult=74i 1527721769000000000
Output of snmpget -v2c -c heabc 192.168.15.60 .1.3.6.1.4.1.318.1.1.1.1.1.1.0 - It returns a string.
iso.3.6.1.4.1.318.1.1.1.1.1.1.0 = STRING: "Smart-UPS X 3000"