Grafana version: 4.0
Datasource: InfluxDB
Please consider me a beginner. How do I set up alerts in a Grafana dashboard and have them sent by email?
In /etc/grafana/grafana.ini I wrote the SMTP config like this:
[smtp]
enabled = true
host = localhost:25
user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex: """#password;"""
[emails]
welcome_email_on_sign_up = true
When I set an alert in the Grafana dashboard, it shows this error:
template variables are not supported.
Configure the /usr/share/grafana/conf/defaults.ini file as follows:
[smtp]
enabled = true
host = smtp.gmail.com:587
user = Your_Email_Address@gmail.com
password = """Your_Password"""
cert_file =
key_file =
skip_verify = true
from_address = Your_Email_Address@gmail.com
from_name = Your_Name
ehlo_identity =
In this example, I used my own Gmail account with its SMTP server, smtp.gmail.com, on port 587 (TLS). You should substitute your own SMTP server address and port.
Note: don't forget to put your password in the password field.
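After editing the config, restart Grafana so the SMTP settings take effect. A minimal example, assuming a systemd-based Linux install (adjust to your init system):
sudo systemctl restart grafana-server
You can then verify delivery by creating an email notification channel under Alerting and using its send-test option, if your version provides one.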
Grafana mail alert configuration for Windows (\grafana-6.4.4.windows-amd64\grafana-6.4.4\conf\defaults.ini):
[smtp]
enabled = true
host = smtp.gmail.com:587
;user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
;cert_file =
;key_file =
skip_verify = true
from_address = your_mail_id
from_name = Grafana
;ehlo_identity = dashboard.example.com
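Restart Grafana after this change too. If it runs as a Windows service (the service name Grafana is an assumption; check your install), from an elevated PowerShell:
Restart-Service Grafana
If you start it manually instead, stop and re-run grafana-server.exe from the bin directory.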
I have some tests written using Artillery + Playwright and I am using the publish-metrics plugin with type influxdb-statsd. I then have the following telegraf.conf:
[[outputs.influxdb_v2]]
urls = ["http://${INFLUX_DB2_HOST_ADDRESS}:8086"]
token = "${INFLUX_DB2_TOKEN}"
organization = "${INFLUX_DB2_ORGANIZATION}"
bucket = "${INFLUX_DB2_BUCKET}"
[[inputs.statsd]]
protocol = "udp"
max_tcp_connections = 250
tcp_keep_alive = false
service_address = ":8125"
delete_gauges = true
delete_counters = true
delete_sets = true
delete_timings = true
metric_separator = "_"
parse_data_dog_tags = true
datadog_extensions = true
datadog_distributions = false
Data from Artillery is sent to statsd in this format:
artillery.browser.page.FID.compliance-hub_dashboard.min:3.2|g
artillery.browser.page.FID.compliance-hub_dashboard.max:3.2|g
artillery.browser.page.FID.compliance-hub_dashboard.count:2|g
artillery.browser.page.FID.compliance-hub_dashboard.p50:3.2|g
I would like to set up a Telegraf template so that in InfluxDB,
artillery.browser.page.FID.compliance-hub_dashboard is a measurement and min, max, count, and p50 are fields.
How do I do that?
I tried:
templates = [
"measurement.measurement.measurement.measurement.measurement.field",
]
but it's not working. :(
What I see in InfluxDB is a measurement of artillery_browser_page_FID_compliance-hub_dashboard_min with a field of value = 3.2.
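For reference, Telegraf parses templates per input, so the option has to sit inside the [[inputs.statsd]] block. A minimal placement sketch, using the same template string tried above:
[[inputs.statsd]]
  protocol = "udp"
  service_address = ":8125"
  metric_separator = "_"
  # Graphite-style template: the first five dot-separated segments are joined
  # (with metric_separator, as observed above) into the measurement name,
  # and the last segment becomes the field key.
  templates = [
    "measurement.measurement.measurement.measurement.measurement.field",
  ]
Conceptually, the intended line-protocol result would be artillery_browser_page_FID_compliance-hub_dashboard min=3.2 (and likewise for max, count, and p50) rather than one measurement per statistic.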
I would like to change the context path for the Traefik dashboard from e.g. https://apps.example.com/ to https://apps.example.com/traefik, as I have Heimdall routed to https://apps.example.com/. All the examples I could find are for Traefik 1.x. What would be the easiest way to do this? My current config (which doesn't work):
traefik.toml:
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web.http.redirections.entryPoint]
to = "websecure"
scheme = "https"
[entryPoints.websecure]
address = ":443"
[api]
dashboard = true
insecure = true
[log]
level = "DEBUG"
[certificatesResolvers.cloudflare.acme]
email = "email@email.com"
storage = "acme.json"
[certificatesResolvers.cloudflare.acme.dnsChallenge]
provider = "cloudflare"
resolvers = ["1.1.1.1:53", "8.8.8.8:53"]
[providers.docker]
watch = true
network = "web"
exposedByDefault = false
endpoint = "unix:///var/run/docker.sock"
[providers.file]
filename = "traefik_dynamic.toml"
traefik_dynamic.toml:
[http.routers.api]
rule = "Host(`apps.example.com`) && Path(`/traefik`)"
entrypoints = ["websecure"]
service = "api@internal"
[http.routers.api.tls]
certResolver = "cloudflare"
I had a similar requirement as well. I followed a suggestion on a feature request to be able to set the context path. I also saw others suggesting it is possible using the configuration file.
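For what it's worth, here is a sketch of what the configuration-file approach can look like in Traefik 2.x (untested here; the middleware name traefik-strip is made up, and the dashboard fetches its data under /api, hence the second PathPrefix):
[http.routers.api]
rule = "Host(`apps.example.com`) && (PathPrefix(`/traefik`) || PathPrefix(`/api`))"
entrypoints = ["websecure"]
service = "api@internal"
middlewares = ["traefik-strip"]
[http.routers.api.tls]
certResolver = "cloudflare"
[http.middlewares.traefik-strip.stripPrefix]
prefixes = ["/traefik"]
The dashboard would then be opened at https://apps.example.com/traefik/dashboard/ (the trailing slash matters), with the stripPrefix middleware rewriting /traefik away before the request reaches api@internal.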
I am taking the WSO2 course below.
Course Link
This is the video:
In Cloud Native API Management with WSO2 API Manager - an Overview
Lab 4 - Using a microgateway (10min)
When I use this command, the Docker image is not created locally:
micro-gw build Petstore --deployment-config E:\wso2-CertificatonPreparation\micorgateway-projects\Petstore\deployment.toml
I am getting the error below. Please help me resolve this issue.
Generating docker artifacts...
error [docker plugin]: module [wso2/Petstore:3.1.0] unable to connect to server:Host name may not be null
Also, what should I configure in the target field of deployment.toml?
source = E:/wso2-CertificatonPreparation/wso2-softwares/wso2am-micro-gw-toolkit-windows-3.1.0/resources/conf/micro-gw.conf
target = /home/ballerina/conf/micro-gw.conf
I am using version 3.1.0
This is the deployment.toml
[docker]
[docker.dockerConfig]
enable = true
name = " petstore "
registry = ' docker.wso2.com '
tag = ' v1 '
#buildImage = ''
#dockerHost = ''
#dockerCertPath = ''
baseImage = 'wso2/wso2micro-gw:3.0.2'
#enableDebug = ''
#debugPort = ''
#push = ''
[docker.dockerCopyFiles]
enable = true
[[docker.dockerCopyFiles.files]]
source ='E:/wso2-CertificatonPreparation/wso2-softwares/wso2am-micro-gw-toolkit-windows-3.1.0/resources/conf/micro-gw.conf'
target = '/home/ballerina/conf/micro-gw.conf'
isBallerinaConf = true
Can you check with this config? It works for me without any issue.
[docker]
[docker.dockerConfig]
enable = true
name = "petstore"
registry = 'docker.wso2.com'
tag = 'v1'
#buildImage = ''
#dockerHost = ''
#dockerCertPath = ''
baseImage = 'wso2/wso2micro-gw:3.0.2'
#enableDebug = ''
#debugPort = ''
#push = ''
username = '####'
password = '####'
[docker.dockerCopyFiles]
enable = true
[[docker.dockerCopyFiles.files]]
source = '/Users/hasunie/RD/UI/wso2am-micro-gw-toolkit-macos-3.1.0/resources/conf/micro-gw.conf'
target = '/home/ballerina/conf/micro-gw.conf'
isBallerinaConf = true
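Note that the quoted values above no longer carry the stray leading/trailing spaces from the original config (e.g. " petstore "), which can confuse the docker plugin when it builds the image name and registry host. After updating deployment.toml, rebuild with the same command as before and check the local images, e.g.:
micro-gw build Petstore --deployment-config E:\wso2-CertificatonPreparation\micorgateway-projects\Petstore\deployment.toml
docker images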
I have set up a new Rails application using devise and devise_saml_authenticatable to authenticate against Office 365.
The login unfortunately shows the following error message:
Sign in
Sorry, but we’re having trouble signing you in.
AADSTS7500522: XML element 'AuthnContextClassRef' in XML namespace 'urn:oasis:names:tc:SAML:2.0:assertion' in the SAML message must be a URI.
My config/devise.rb file looks as follows:
config.saml_create_user = true
config.saml_update_user = true
config.saml_default_user_key = :email
config.saml_session_index_key = :session_index
config.saml_use_subject = true
config.idp_settings_adapter = nil
config.saml_configure do |settings|
settings.assertion_consumer_service_url = "https://localhost:3000/users/saml/auth"
settings.assertion_consumer_service_binding = "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
settings.name_identifier_format = "urn:oasis:names:tc:SAML:2.0:nameid-format:transient"
settings.issuer = "https://localhost:3000/saml/metadata"
settings.authn_context = ""
settings.idp_slo_target_url = "https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0"
settings.idp_sso_target_url = "https://login.microsoftonline.com/xxx/saml2"
settings.idp_cert_fingerprint = "E4:....."
settings.idp_cert_fingerprint_algorithm = "http://www.w3.org/2000/09/xmldsig#sha1"
end
and the "Reply URL (Assertion Consumer Service URL)" in the Azure configuration is set to
https://localhost:3000/users/saml/auth
Any ideas how to fix this?
Finally figured it out: all the Devise examples have
settings.authn_context = ""
set. If I set it to
settings.authn_context = "urn:oasis:names:tc:SAML:2.0:ac:classes:Password"
the error disappears. That matches the error text: an empty authn_context produces an empty AuthnContextClassRef element, and an empty string is not a valid URI.
I am following an SE thread to get a response to an HTTP POST on an Express node, but I am unable to get any response from Kapacitor.
Environment
I am using Windows 10 via PowerShell.
I am connected to an internal InfluxDB server, which is referenced in kapacitor.conf, and I have a TICKscript to stream data through it.
kapacitor.conf
hostname = "134.102.97.81"
data_dir = "C:\\Users\\des\\.kapacitor"
skip-config-overrides = true
default-retention-policy = ""
[alert]
persist-topics = true
[http]
bind-address = ":9092"
auth-enabled = false
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/kapacitor.pem"
https-private-key = ""
shutdown-timeout = "10s"
shared-secret = ""
[replay]
dir = "C:\\Users\\des\\.kapacitor\\replay"
[storage]
boltdb = "C:\\Users\\des\\.kapacitor\\kapacitor.db"
[task]
dir = "C:\\Users\\des\\.kapacitor\\tasks"
snapshot-interval = "1m0s"
[load]
enabled = false
dir = "C:\\Users\\des\\.kapacitor\\load"
[[influxdb]]
enabled = true
name = "DB5Server"
default = true
urls = ["https://influxdb.internal.server.address:8086"]
username = "user"
password = "password"
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
insecure-skip-verify = true
timeout = "0s"
disable-subscriptions = true
subscription-protocol = "https"
subscription-mode = "cluster"
kapacitor-hostname = ""
http-port = 0
udp-bind = ""
udp-buffer = 1000
udp-read-buffer = 0
startup-timeout = "5m0s"
subscriptions-sync-interval = "1m0s"
[influxdb.excluded-subscriptions]
_kapacitor = ["autogen"]
[logging]
file = "STDERR"
level = "DEBUG"
[config-override]
enabled = true
[[httppost]]
endpoint = "kapacitor"
url = "http://localhost:1440"
headers = { Content-Type = "application/json;charset=UTF-8"}
alert-template = "{\"id\": {{.ID}}}"
The daemon runs without any problems.
test2.tick
dbrp "DBTEST"."autogen"
stream
|from()
.measurement('humid')
|alert()
.info(lambda: TRUE)
.post()
.endpoint('kapacitor')
I already defined the task: .\kapacitor.exe define bc_1 -tick test2.tick
And enabled it: .\kapacitor.exe enable bc_1
The task status shows no errors:
.\kapacitor.exe show bc_1
ID: bc_1
Error:
Template:
Type: stream
Status: enabled
Executing: true
Created: 13 Mar 19 15:33 CET
Modified: 13 Mar 19 16:23 CET
LastEnabled: 13 Mar 19 16:23 CET
Databases Retention Policies: ["NIMBLE"."autogen"]
TICKscript:
dbrp "TESTDB"."autogen"
stream
|from()
.measurement('humid')
|alert()
.info(lambda: TRUE)
.post()
.endpoint('kapacitor')
DOT:
digraph bc_1 {
graph [throughput="0.00 points/s"];
stream0 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
stream0 -> from1 [processed="0"];
from1 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
from1 -> alert2 [processed="0"];
alert2 [alerts_inhibited="0" alerts_triggered="0" avg_exec_time_ns="0s" crits_triggered="0" errors="0" infos_triggered="0" oks_triggered="0" warns_triggered="0" working_cardinality="0" ];
}
The daemon logs show this for the task:
ts=2019-03-13T16:25:23.640+01:00 lvl=debug msg="starting enabled task on startup" service=task_store task=bc_1
ts=2019-03-13T16:25:23.677+01:00 lvl=debug msg="starting task" service=kapacitor task_master=main task=bc_1
ts=2019-03-13T16:25:23.678+01:00 lvl=info msg="started task" service=kapacitor task_master=main task=bc_1
ts=2019-03-13T16:25:23.679+01:00 lvl=debug msg="listing dot" service=kapacitor task_master=main dot="digraph bc_1 {\nstream0 -> from1;\nfrom1 -> alert2;\n}"
ts=2019-03-13T16:25:23.679+01:00 lvl=debug msg="started task during startup" service=task_store task=bc_1
ts=2019-03-13T16:25:23.680+01:00 lvl=debug msg="opened service" source=srv service=*task_store.Service
ts=2019-03-13T16:25:23.680+01:00 lvl=debug msg="opening service" source=srv service=*replay.Service
ts=2019-03-13T16:25:23.681+01:00 lvl=debug msg="skipping recording, metadata is already correct" service=replay recording_id=353d8417-285d-4fd9-b32f-15a82600f804
ts=2019-03-13T16:25:23.682+01:00 lvl=debug msg="skipping recording, metadata is already correct" service=replay recording_id=a8bb5c69-9f20-4f4d-8f84-109170b6f583
But I get nothing on the Express node side. The code is exactly the same as in the SE thread mentioned above.
Any help on how to capture a stream from Kapacitor via HTTP POST? I already have a live system pushing data into the dedicated database.
I was able to solve this by shifting from stream to batch in the above query. I have documented the complete process on medium.com.
Some Files:
kapacitor.gen.conf
hostname = "my-windows-10"
data_dir = "C:\\Users\\<user>\\.kapacitor"
skip-config-overrides = true
default-retention-policy = ""
[alert]
persist-topics = true
[http]
bind-address = ":9092"
auth-enabled = false
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/kapacitor.pem"
https-private-key = ""
shutdown-timeout = "10s"
shared-secret = ""
[replay]
dir = "C:\\Users\\des\\.kapacitor\\replay"
[storage]
boltdb = "C:\\Users\\des\\.kapacitor\\kapacitor.db"
[task]
dir = "C:\\Users\\des\\.kapacitor\\tasks"
snapshot-interval = "1m0s"
[load]
enabled = false
dir = "C:\\Users\\des\\.kapacitor\\load"
[[influxdb]]
enabled = true
name = "default"
default = true
urls = ["http://127.0.0.1:8086"]
username = ""
password = ""
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
insecure-skip-verify = true
timeout = "0s"
disable-subscriptions = true
subscription-protocol = "http"
subscription-mode = "cluster"
kapacitor-hostname = ""
http-port = 0
udp-bind = ""
udp-buffer = 1000
udp-read-buffer = 0
startup-timeout = "5m0s"
subscriptions-sync-interval = "1m0s"
[influxdb.excluded-subscriptions]
_kapacitor = ["autogen"]
[logging]
file = "STDERR"
level = "DEBUG"
[config-override]
enabled = true
# The following section is the relevant httppost configuration
[[httppost]]
endpoint = "kap"
url = "http://127.0.0.1:30001/kapacitor"
headers = { "Content-Type" = "application/json"}
TICKscript
var data = batch
| query('SELECT "v" FROM "telegraf_test"."autogen"."humid"')
.period(5s)
.every(10s)
data
|httpPost()
.endpoint('kap')
Define the task:
.\kapacitor.exe define batch_test -tick .\batch_test.tick -dbrp DBTEST.autogen
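Then enable it as before:
.\kapacitor.exe enable batch_test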
I suspect the hostname was the mischievous part: it was previously set to localhost, so I set it to my machine's hostname and used the IP address 127.0.0.1 wherever localhost was mentioned.
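For completeness, the receiving side can be as small as the sketch below (a hypothetical minimal Express handler, not the exact code from the linked thread; the port and path match the url in the [[httppost]] section above):
import express from "express";

const app = express();
app.use(express.json()); // Kapacitor posts the batch as a JSON body

app.post("/kapacitor", (req, res) => {
  console.log("batch from Kapacitor:", JSON.stringify(req.body));
  res.sendStatus(200); // acknowledge so Kapacitor does not log a POST error
});

app.listen(30001, () => console.log("listening on :30001"));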