I followed this article: https://www.linkedin.com/pulse/integrating-gatling-influx-graphite-grafana-live-tests-phani-bushan/
It works well with the gatling-demo project provided there, but it is not working with my own Gatling project.
Gatling versions used in my pom:
<gatling.version>3.7.4</gatling.version>
<gatling-plugin.version>4.1.1</gatling-plugin.version>
<maven-jar-plugin.version>3.2.0</maven-jar-plugin.version>
<scala-maven-plugin.version>4.5.6</scala-maven-plugin.version>
Gatling Config setup:
data {
writers = [console, file, graphite] # The list of DataWriters to which Gatling writes simulation data (currently supported: console, file, graphite)
console {
#light = false # When set to true, displays a light version without detailed request stats
#writePeriod = 5 # Write interval, in seconds
}
file {
#bufferSize = 8192 # FileDataWriter's internal data buffer size, in bytes
}
leak {
#noActivityTimeout = 30 # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
}
graphite {
light = false # only send the all* stats
host = "localhost" # The host where the Carbon server is located
port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
bufferSize = 8192 # Internal data buffer size, in bytes
writePeriod = 1 # Write period, in seconds
}
}
}
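One quick sanity check for a setup like this is to listen on the Carbon port yourself and see whether Gatling emits anything at all (a debugging sketch; it assumes a unix-like shell with netcat available and that nothing else, such as InfluxDB, is already bound to port 2003):
nc -l 2003
Run the simulation while netcat is listening: if this gatling.conf is being picked up, plaintext lines prefixed with gatling. should scroll by; if nothing arrives, the project is not loading this configuration.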
I conducted a test sending 100K persistent MQTT messages (QoS 2) to ActiveMQ Artemis. The topic has two Telegraf listeners, one on VM 85 and the other on VM 86. Each listener writes data to the InfluxDB instance on its own server.
The main goal of the test is to ensure that all messages delivered to VM 85 are also delivered to VM 86, even if VM 86 is down. Before executing the test, both listeners connect to the broker, each with a unique client ID and clean-session = false, and subscribe to the topic using QoS 2. This ensures each subscription is present when the messages are sent, whether or not the listeners are actually active. Neither listener is connected when the test starts. The order of operations is:
Start listener on VM 85.
Send data.
Ensure messages are delivered to listener on VM 85.
Start listener on VM 86.
Ensure messages are delivered to listener on VM 86.
The good news is that all messages are delivered to InfluxDB on both VMs. However, the relevant queue for VM 86 still shows about 4.3K messages remaining, as shown below:
If I then restart the listener on VM 86, it shows it's writing more data, as shown below:
However, the total message count in InfluxDB correctly remains at 100K. If InfluxDB receives a duplicate point (same measurement, tag set, and timestamp), it overwrites the existing one. But the client increments the field value by one and takes a fresh timestamp for each message, so duplicates shouldn't occur, at least not from the client side.
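For illustration, a duplicate in line protocol would look like this (hypothetical values; the second point silently replaces the first because measurement, tag set, and timestamp all match):
temperature,machine=unit43 external=17i 1650000000000000000
temperature,machine=unit43 external=99i 1650000000000000000
Since the client stamps every point with Instant.now() at nanosecond precision, such collisions are effectively impossible from a single publisher.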
I'm not clear on why this would be. Why does the listener on VM 86 need to be restarted to completely empty the queue?
There is one parameter I haven't tried in the Telegraf plugin:
## Maximum messages to read from the broker that have not been written by an
## output. For best throughput set based on the number of metrics within
## each message and the size of the output's metric_batch_size.
##
## For example, if each message from the queue contains 10 metrics and the
## output metric_batch_size is 1000, setting this to 100 will ensure that a
## full batch is collected and the write is triggered immediately without
## waiting until the next flush_interval.
# max_undelivered_messages = 1000
It seems the output batch size defaults to 1000, based on the Telegraf output messages. But the maximum number of messages to read before output seems to be something greater, since about 4.3K are output when the listener is restarted. Except that they have already been output once. That's the confusing part.
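For reference, the output batch size that the plugin comment refers to is configured in Telegraf's [agent] section, not in the mqtt_consumer plugin itself. A sketch with the documented defaults:
[agent]
  ## Maximum number of metrics sent to outputs in one write.
  metric_batch_size = 1000
  ## Maximum number of unwritten metrics buffered per output.
  metric_buffer_limit = 10000
  ## Outputs are flushed at this interval even if a batch is not full.
  flush_interval = "10s"
With max_undelivered_messages = 1000 on the input, the consumer can keep exactly one full output batch in flight before it stops reading from the broker.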
Client Code:
package abc;
import java.time.Instant;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.MqttSecurityException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import com.influxdb.client.domain.WritePrecision;
import com.influxdb.client.write.Point;
public class MqttPublishSample {
public static void main(String[] args) throws MqttSecurityException, MqttException, InterruptedException {
String broker = "tcp://localhost:1883";
String clientId = "JavaSample";
MemoryPersistence persistence = new MemoryPersistence();
int qos = 2;
int start = Integer.parseInt(args[0]);
int end = Integer.parseInt(args[1]);
// default the topic if fewer than three arguments are given (args[2] would otherwise throw)
String topic = (args.length > 2) ? args[2] : "testtopic/999";
System.out.println("start: " + start + ", end: " + end + ", topic: " + topic + " qos: " + qos);
MqttClient sampleClient = new MqttClient(broker, clientId, persistence);
MqttConnectOptions connOpts = new MqttConnectOptions();
connOpts.setCleanSession(false);
connOpts.setUserName("admin");
connOpts.setPassword("xxxxxxx".toCharArray());
System.out.println("Connecting to broker: " + broker);
sampleClient.connect(connOpts);
System.out.println("Connected");
for (int i = start; i <= end; i++) {
// print progress every 100 messages
if (i % 100 == 0) {
System.out.println("i: " + i);
}
try {
Point point = Point.measurement("temperature").addTag("machine", "unit43").addField("external", i)
.time(Instant.now(), WritePrecision.NS);
String content = point.toLineProtocol();
MqttMessage message = new MqttMessage(content.getBytes());
message.setQos(qos);
sampleClient.publish(topic, message);
Thread.sleep(10);
} catch (MqttException me) {
System.out.println("reason " + me.getReasonCode());
System.out.println("msg " + me.getMessage());
System.out.println("loc " + me.getLocalizedMessage());
System.out.println("cause " + me.getCause());
System.out.println("excep " + me);
me.printStackTrace();
}
}
sampleClient.disconnect();
System.out.println("Disconnected");
}
}
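The client takes the start index, end index, and topic as arguments. A hypothetical invocation (the classpath entries are placeholders for the Paho and InfluxDB client jars your build produces):
java -cp target/classes:paho-mqtt-client.jar:influxdb-client-java.jar abc.MqttPublishSample 1 100000 testtopic/999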
Telegraf plugin config on 85:
###############################################################################
# INPUT PLUGINS #
###############################################################################
[[inputs.mqtt_consumer]]
servers = ["tcp://127.0.0.1:1883"]
## Topics that will be subscribed to.
topics = [
"testtopic/#",
]
## The message topic will be stored in a tag specified by this value. If set
## to the empty string no topic tag will be created.
# topic_tag = "topic"
## When using a QoS of 1 or 2, you should enable persistent_session to allow
## resuming unacknowledged messages.
qos = 2
persistent_session = true
## If unset, a random client ID will be generated.
client_id = "InfluxData_on_86_listen_local"
## Username and password to connect MQTT server.
username = "admin"
password = "xxxxxx"
data_format = "influx"
[[inputs.mqtt_consumer]]
servers = ["tcp://10.102.11.86:1883"]
## Topics that will be subscribed to.
topics = [
"testtopic/#",
]
## The message topic will be stored in a tag specified by this value. If set
## to the empty string no topic tag will be created.
# topic_tag = "topic"
## When using a QoS of 1 or 2, you should enable persistent_session to allow
## resuming unacknowledged messages.
qos = 2
persistent_session = true
## If unset, a random client ID will be generated.
client_id = "InfluxData_on_86_listen_85"
## Username and password to connect MQTT server.
username = "admin"
password = "xxxx"
data_format = "influx"
###############################################################################
# OUTPUT PLUGINS #
###############################################################################
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
urls = ["http://127.0.0.1:8086"]
## Token for authentication.
token = "xxxx"
## Organization is the name of the organization you wish to write to.
organization = "xxxx"
## Destination bucket to write into.
bucket = "events"
I wasn't able to replicate this issue at lower volumes, although I hit it twice at 100K messages.
When I added the following parameter to the Telegraf listener:
max_undelivered_messages = 100
it seemed to slow things down, as batches were limited to 100 according to the Telegraf output.
However, when I removed it, batches still seemed to be limited to 100.
Finally, I changed the same parameter to 1000:
max_undelivered_messages = 1000
After this, message batch sizes improved to well beyond 100, as they were initially.
Furthermore, at least on the third try of 100K messages, there are no longer any messages remaining in the queue after the sequence described in the question is completed.
I'm not really sure if this change did anything, but in any case the correct number of messages was always received.
So, I'm marking this as answered.
I followed the BlazeMeter article on monitoring Gatling tests with Grafana and InfluxDB, but no data is sent to InfluxDB and no database named "graphite" is created.
InfluxDB is up and listening on port 2003. This is the log from InfluxDB:
2018-06-24T09:48:17Z Listening on TCP: [::]:2003 service=graphite addr=:2003
And I set gatling.conf fields to these:
data {
#writers = [console, file] # The list of DataWriters to which Gatling writes simulation data (currently supported: console, file, graphite, jdbc)
console {
#light = false # When set to true, displays a light version without detailed request stats
}
file {
#bufferSize = 8192 # FileDataWriter's internal data buffer size, in bytes
}
leak {
#noActivityTimeout = 30 # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
}
graphite {
light = false # only send the all* stats
host = "localhost" # The host where the Carbon server is located
port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
bufferSize = 8192 # GraphiteDataWriter's internal data buffer size, in bytes
writeInterval = 1 # GraphiteDataWriter's write interval, in seconds
}
}
gatling.conf is in the src/test/resources folder, and I confirmed by debugging that Gatling loads this config file.
What have I missed?
Your data writers configuration is invalid: the writers line is commented out, so the graphite writer is never enabled. Set it to:
writers = [console, file, graphite]
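After enabling the graphite writer and re-running the simulation, you can verify that data arrives (a quick check, assuming InfluxDB runs on the default host and port):
influx -execute 'SHOW DATABASES'
The database backing the graphite listener (named "graphite" in the BlazeMeter setup) should now appear in the list and start filling with gatling.* series.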
My question has already been asked, but I didn't manage to solve my issue.
I can't get Gatling to send its data in real time to InfluxDB.
I'm on Windows 10.
Gatling version: 2.3.0 (the latest).
InfluxDB version: 1.3.5 (the latest is 1.3.6).
My gatling.conf:
data {
writers = [console, file, graphite] # The list of DataWriters to which Gatling writes simulation data (currently supported: console, file, graphite, jdbc)
console {
#light = false # When set to true, displays a light version without detailed request stats
}
file {
#bufferSize = 8192 # FileDataWriter's internal data buffer size, in bytes
}
leak {
#noActivityTimeout = 30 # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
}
graphite {
#light = false # only send the all* stats
host = "127.0.0.1" # The host where the Carbon server is located
port = "2003" # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
#bufferSize = 8192 # GraphiteDataWriter's internal data buffer size, in bytes
#writeInterval = 1 # GraphiteDataWriter's write interval, in seconds
}
}
My influxdb.conf:
[http]
# Determines whether HTTP endpoint is enabled.
enabled = true
# The bind address used by the HTTP service.
bind-address = "127.0.0.1:8086"
###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###
[[graphite]]
# Determines whether the graphite endpoint is enabled.
enabled = true
database = "gatlingdb"
# retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
# consistency-level = "one"
templates = [
"gatling.*.*.*.*.measurement.simulation.request.status.field"
]
My gatlingdb database is created in InfluxDB, but it stays empty.
When I try:
C:\InfluxDB-1.3.5-1>influx -host 127.0.0.1
I'm connected to InfluxDB
>USE gatlingdb
I'm connected to my database. Then:
>SHOW SERIES
and
>SELECT * FROM gatling
Neither returns anything. The database is empty.
Note: I query "FROM gatling" because my gatling.conf sets rootPathPrefix = "gatling".
I didn't install Graphite, but I saw that InfluxDB accepts the graphite protocol, so I assume I can send data from Gatling straight to InfluxDB. I've certainly missed something.
I succeeded in connecting InfluxDB to Grafana and I can display data from other databases. I'm just missing the link between Gatling and InfluxDB.
Thanks in advance for your help, I definitely need it!
Anthony
I've almost finished an article showing all the steps required to create the whole monitoring infrastructure using Gatling, Grafana and InfluxDB (by the way, without installing Graphite separately), which worked very well for me.
I think I'll publish it on the BlazeMeter blog in just a few days, so stay tuned there!
http://blazemeter.com/blog
There you will even find a ready-made solution to spin everything up inside Docker.
But until then (if it's urgent for you), I can share my InfluxDB config section:
[[graphite]]
enabled = true
bind-address = ":2003"
database = "graphite"
retention-policy = ""
protocol = "tcp"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
consistency-level = "one"
separator = "."
udp-read-buffer = 0
gatling.conf:
graphite {
light = false # only send the all* stats
host = "localhost" # The host where the Carbon server is located
port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
bufferSize = 8192 # GraphiteDataWriter's internal data buffer size, in bytes
writeInterval = 1 # GraphiteDataWriter's write interval, in seconds
}
The first thing you need to check is that InfluxDB actually accepts incoming metrics via the graphite protocol. For example, in the InfluxDB startup logs you should find this line:
influxdb_1 | [I] 2018-01-26T13:40:37Z Listening on TCP: [::]:2003 service=graphite addr=:2003
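You can also test the listener independently of Gatling by pushing a single metric over TCP (a sketch assuming a unix-like shell with netcat; the metric name is made up):
echo "gatling.users.active.max 1 $(date +%s)" | nc localhost 2003
If the point then shows up in the graphite database (for example via SHOW MEASUREMENTS in the influx CLI), the InfluxDB side works and the problem is on the Gatling side.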
Is it possible to add a monitoring package through the SoftLayer API? On the portal, I can go into the Monitoring section and order a "Monitoring Package - Basic", which will associate it with that Virtual Guest.
Is it possible to do this either during the placeOrder call or after the initial placeOrder call (i.e., if the customer wants to add Basic Monitoring after the server is provisioned)?
I tried to look into examples, but they all assumed that a monitoring agent was available, which wasn't the case for me. I also looked into Going Further with SoftLayer part 3, but I'm not sure how to extract the Basic Monitoring package from the Product_Package service.
I'm using Python for this, so any pointers on associating a monitoring service during or after creation would be very helpful.
Thanks in advance!
Try this:
"""
Order a Monitoring Package
Build a SoftLayer_Container_Product_Order_Monitoring_Package object for a new
monitoring order and pass it to the SoftLayer_Product_Order API service to order it
In this case we'll order a Basic (Hardware and OS) package with the Basic Monitoring Package - Linux
configuration; for more details see below
Important manual pages:
https://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order_Monitoring_Package
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Product_Item_Price
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/verifyOrder
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/placeOrder
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Monitoring_Agent_Configuration_Template_Group
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
"""
import SoftLayer
USERNAME = 'set me'
API_KEY = 'set me'
"""
Build a skeleton SoftLayer_Container_Product_Order_Monitoring_Package object
containing the order you wish to place.
"""
orderTemplate = {
'complexType': 'SoftLayer_Container_Product_Order_Monitoring_Package',
'packageId': 0, # the packageId for ordering monitoring packages is 0
'prices': [
{'id': 2302} # this is the price for Monitoring Package - Basic (Hardware and OS)
],
'quantity': 0, # the quantity for ordering a service (in this case a monitoring package) must be 0
'sendQuoteEmailFlag': True,
'useHourlyPricing': True,
'virtualGuests': [
{'id': 4906034} # the virtual guest ID where you want add the monitoring package
],
'configurationTemplateGroups': [
{'id': 3} # the template ID for the monitoring group (in this case the Basic Monitoring package for the Unix/Linux operating system)
]
}
# Declare the API client to use the SoftLayer_Product_Order API service
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
productOrderService = client['SoftLayer_Product_Order']
"""
verifyOrder() will check your order for errors. Replace this with a call to
placeOrder() when you're ready to order. Both calls return a receipt object
that you can use for your records.
Once your order is placed it'll go through SoftLayer's provisioning process.
"""
try:
order = productOrderService.verifyOrder(orderTemplate)
print(order)
except SoftLayer.SoftLayerAPIError as e:
print("Unable to verify the order! faultCode=%s, faultString=%s"
% (e.faultCode, e.faultString))
exit(1)
And here is an example that creates network monitoring:
"""
Create network monitoring
This script creates network monitoring with a service ping
on a specific IP address
Important manual pages
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Monitor_Version1_Query_Host
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Network_Monitor_Version1_Query_Host
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
"""
import SoftLayer.API
from pprint import pprint as pp
# Your SoftLayer API username and key.
USERNAME = 'set me'
API_KEY = 'set me'
# The ID of the server you wish to monitor
serverId = 7698842
"""
ID of the query type which can be found with SoftLayer_Network_Monitor_Version1_Query_Host_Stratum/getAllQueryTypes.
This example uses SERVICE PING: Test ping to address, will not fail on slow server response due to high latency or
high server load
"""
queryTypeId = 1
# IP address on the previously defined server to monitor
ipAddress = '10.104.50.118'
# Declare the API client
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
networkMonitorVersion = client['SoftLayer_Network_Monitor_Version1_Query_Host']
# Define the SoftLayer_Network_Monitor_Version1_Query_Host templateObject.
newMonitor = {
'guestId': serverId,
'queryTypeId': queryTypeId,
'ipAddress': ipAddress
}
# Send the request for object creation and display the return value
try:
result = networkMonitorVersion.createObject(newMonitor)
pp(result)
except SoftLayer.SoftLayerAPIError as e:
print("Unable to create new network monitoring "
% (e.faultCode, e.faultString))
exit(1)
Regards
When I use localhost in the url setting of the [[influxdb]] section of kapacitor.conf, I get the alerts properly.
But when I point the url to a remote InfluxDB in that same section, I don't get any alerts at all.
Can anybody help me here?
Please find the kapacitor.conf file below:
# The hostname of this node.
# Must be resolvable by any configured InfluxDB hosts.
hostname = "localhost"
# Directory for storing a small amount of metadata about the server.
data_dir = "/var/lib/kapacitor"
[http]
# HTTP API Server for Kapacitor
# This server is always on,
# it serves both as a write endpoint
# and as the API endpoint for all other
# Kapacitor calls.
bind-address = ":9092"
auth-enabled = false
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/kapacitor.pem"
[logging]
# Destination for logs
# Can be a path to a file or 'STDOUT', 'STDERR'.
file = "/var/log/kapacitor/kapacitor.log"
# Logging level can be one of:
# DEBUG, INFO, WARN, ERROR, or OFF
level = "INFO"
[replay]
# Where to store replay files, aka recordings.
dir = "/var/lib/kapacitor/replay"
[task]
# Where to store the tasks database
dir = "/var/lib/kapacitor/tasks"
# How often to snapshot running task state.
snapshot-interval = "60s"
[deadman]
# Configure a deadman's switch
# Globally configure deadman's switches on all stream tasks.
# NOTE: for this to be of use you must also globally configure at least one alerting method.
global = false
# Threshold, if globally configured the alert will be triggered if the throughput in points/interval is <= threshold.
threshold = 0.0
# Interval, if globally configured the frequency at which to check the throughput.
interval = "10s"
# Id -- the alert Id, NODE_NAME will be replaced with the name of the node being monitored.
id = "node 'NODE_NAME' in task '{{ .TaskName }}'"
# The message of the alert. INTERVAL will be replaced by the interval.
message = "{{ .ID }} is {{ if eq .Level \"OK\" }}alive{{ else }}dead{{ end }}: {{ index .Fields \"collected\" | printf \"%0.3f\" }} points/INTERVAL."
# Multiple InfluxDB configurations can be defined.
# Exactly one must be marked as the default.
# Each one will be given a name and can be referenced in batch queries and InfluxDBOut nodes.
[[influxdb]]
# Connect to an InfluxDB cluster
# Kapacitor can subscribe, query and write to this cluster.
# Using InfluxDB is not required and can be disabled.
enabled = true
default = true
name = "localhost"
urls = ["http://remote_ip:8086"]
#urls = ["http://localhost:8086"]
username = ""
password = ""
timeout = 0
# Absolute path to pem encoded CA file.
# A CA can be provided without a key/cert pair
# ssl-ca = "/etc/kapacitor/ca.pem"
# Absolutes paths to pem encoded key and cert files.
# ssl-cert = "/etc/kapacitor/cert.pem"
# ssl-key = "/etc/kapacitor/key.pem"
# Do not verify the TLS/SSL certificate.
# This is insecure.
insecure-skip-verify = false
# Subscriptions use the UDP network protocol.
# The following options are for the created UDP listeners for each subscription.
# Number of packets to buffer when reading packets off the socket.
udp-buffer = 1000
# The size in bytes of the OS read buffer for the UDP socket.
# A value of 0 indicates use the OS default.
udp-read-buffer = 0
[influxdb.subscriptions]
# Set of databases and retention policies to subscribe to.
# If empty will subscribe to all, minus the list in
# influxdb.excluded-subscriptions
#
# Format
# db_name = <list of retention policies>
#
# Example:
# my_database = [ "default", "longterm" ]
[influxdb.excluded-subscriptions]
# Set of databases and retention policies to exclude from the subscriptions.
# If influxdb.subscriptions is empty it will subscribe to all
# except databases listed here.
#
# Format
# db_name = <list of retention policies>
#
# Example:
# my_database = [ "default", "longterm" ]
[smtp]
# Configure an SMTP email server
# Will use TLS and authentication if possible
# Only necessary for sending emails from alerts.
enabled = false
host = "localhost"
port = 25
username = ""
password = ""
# From address for outgoing mail
from = ""
# List of default To addresses.
# to = ["oncall#example.com"]
# Skip TLS certificate verify when connecting to SMTP server
no-verify = false
# Close idle connections after timeout
idle-timeout = "30s"
# If true then all alerts will be sent via email
# without explicitly marking them in the TICKscript.
global = false
# Only applies if global is true.
# Sets all alerts in state-changes-only mode,
# meaning alerts will only be sent if the alert state changes.
state-changes-only = false
[opsgenie]
# Configure OpsGenie with your API key and default routing key.
enabled = false
# Your OpsGenie API Key.
api-key = ""
# Default OpsGenie teams, can be overridden per alert.
# teams = ["team1", "team2"]
# Default OpsGenie recipients, can be overridden per alert.
# recipients = ["recipient1", "recipient2"]
# The OpsGenie API URL should not need to be changed.
url = "https://api.opsgenie.com/v1/json/alert"
# The OpsGenie Recovery URL, you can change this
# based on which behavior you want a recovery to
# trigger (Add Notes, Close Alert, etc.)
recovery_url = "https://api.opsgenie.com/v1/json/alert/note"
# If true then all alerts will be sent to OpsGenie
# without explicitly marking them in the TICKscript.
# The team and recipients can still be overridden.
global = false
[victorops]
# Configure VictorOps with your API key and default routing key.
enabled = false
# Your VictorOps API Key.
api-key = ""
# Default VictorOps routing key, can be overridden per alert.
routing-key = ""
# The VictorOps API URL should not need to be changed.
url = "https://alert.victorops.com/integrations/generic/20131114/alert"
# If true then all alerts will be sent to VictorOps
# without explicitly marking them in the TICKscript.
# The routing key can still be overridden.
global = false
[pagerduty]
# Configure PagerDuty.
enabled = false
# Your PagerDuty Service Key.
service-key = ""
# The PagerDuty API URL should not need to be changed.
url = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
# If true then all alerts will be sent to PagerDuty
# without explicitly marking them in the TICKscript.
global = false
[slack]
# Configure Slack.
enabled = false
# The Slack webhook URL, can be obtained by adding
# an Incoming Webhook integration.
# Visit https://slack.com/services/new/incoming-webhook
# to add new webhook for Kapacitor.
#url = "test_hook"
# Default channel for messages
channel = "test-alerts"
# If true then all alerts will be sent to Slack
# without explicitly marking them in the TICKscript.
global = false
# Only applies if global is true.
# Sets all alerts in state-changes-only mode,
# meaning alerts will only be sent if the alert state changes.
state-changes-only = false
[hipchat]
# Configure HipChat.
enabled = false
# The HipChat API URL. Replace subdomain with your
# HipChat subdomain.
url = "https://subdomain.hipchat.com/v2/room"
# Visit https://www.hipchat.com/docs/apiv2
# for information on obtaining your room id and
# authentication token.
# Default room for messages
room = ""
# Default authentication token
token = ""
# If true then all alerts will be sent to HipChat
# without explicitly marking them in the TICKscript.
global = false
# Only applies if global is true.
# Sets all alerts in state-changes-only mode,
# meaning alerts will only be sent if the alert state changes.
state-changes-only = false
[alerta]
# Configure Alerta.
enabled = false
# The Alerta URL.
url = ""
# Default authentication token.
token = ""
# Default environment.
environment = ""
# Default origin.
origin = "kapacitor"
[sensu]
# Configure Sensu.
enabled = false
# The Sensu Client host:port address.
addr = "sensu-client:3030"
# Default JIT source.
source = "Kapacitor"
[reporting]
# Send anonymous usage statistics
# every 12 hours to Enterprise.
enabled = true
url = "https://usage.influxdata.com"
[stats]
# Emit internal statistics about Kapacitor.
# To consume these stats create a stream task
# that selects data from the configured database
# and retention policy.
#
# Example:
# stream.from().database('_kapacitor').retentionPolicy('default')...
#
enabled = true
stats-interval = "10s"
database = "_kapacitor"
retention-policy= "default"
[udf]
# Configuration for UDFs (User Defined Functions)
[udf.functions]
# Example go UDF.
# First compile example:
# go build -o avg_udf ./udf/agent/examples/moving_avg.go
#
# Use in TICKscript like:
# stream.goavg()
# .field('value')
# .size(10)
# .as('m_average')
#
# uncomment to enable
#[udf.functions.goavg]
# prog = "./avg_udf"
# args = []
# timeout = "10s"
# Example python UDF.
# Use in TICKscript like:
# stream.pyavg()
# .field('value')
# .size(10)
# .as('m_average')
#
# uncomment to enable
#[udf.functions.pyavg]
# prog = "/usr/bin/python2"
# args = ["-u", "./udf/agent/examples/moving_avg.py"]
# timeout = "10s"
# [udf.functions.pyavg.env]
# PYTHONPATH = "./udf/agent/py"
[talk]
# Configure Talk.
enabled = false
# The Talk webhook URL.
url = "https://jianliao.com/v2/services/webhook/uuid"
# The default authorName.
author_name = "Kapacitor"
##################################
# Input Methods, same as InfluxDB
#
[collectd]
enabled = false
bind-address = ":25826"
database = "collectd"
retention-policy = ""
batch-size = 1000
batch-pending = 5
batch-timeout = "10s"
typesdb = "/usr/share/collectd/types.db"
[opentsdb]
enabled = false
bind-address = ":4242"
database = "opentsdb"
retention-policy = ""
consistency-level = "one"
tls-enabled = false
certificate = "/etc/ssl/influxdb.pem"
batch-size = 1000
batch-pending = 5
batch-timeout = "1s"
My guess: the hostname must be resolvable by the InfluxDB instance. InfluxDB uses that hostname to create a subscription and push data back to Kapacitor, so with hostname = "localhost" a remote InfluxDB ends up writing to itself. Change it from localhost to the hostname/IP address of your Kapacitor machine as seen from the InfluxDB host.
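A minimal sketch of the change (192.0.2.10 is a placeholder for your Kapacitor machine's address as reachable from the InfluxDB host):
# The hostname of this node.
# Must be resolvable by any configured InfluxDB hosts.
hostname = "192.0.2.10"
After restarting Kapacitor, SHOW SUBSCRIPTIONS in the influx CLI on the remote InfluxDB should list a subscription pointing at that address.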