Grafana Provisioning Notification Channels not working - docker

I am trying to build a Docker container with existing datasources, dashboards and notification channels. The provisioning of datasources and dashboards is working, but the provisioning of notification channels is not. I am using Grafana v6.3.5 (commit: 67bad72).
I am using the example config from the Grafana provisioning documentation. I have added it to the /etc/grafana/provisioning/notifiers directory in a file called AlertNotificationChannel.yaml.
I can see it is processing the file because the logs show "Deleting alert notification logger=provisioning.notifiers name=notification-channel-1 uid=notifier1". However, there are no messages about inserting or updating the alert notification, and nothing shows up in the UI.
Contents of the YAML file:
notifiers:
  - name: notification-channel-1
    type: slack
    uid: notifier1
    # either
    org_id: 2
    # or
    org_name: Main Org.
    is_default: true
    send_reminder: true
    frequency: 1h
    disable_resolve_message: false
    # See `Supported Settings` section for settings supported for each
    # alert notification type.
    settings:
      recipient: "XXX"
      token: "xoxb"
      uploadImage: true
      url: https://slack.com

delete_notifiers:
  - name: notification-channel-1
    uid: notifier1
    # either
    org_id: 2
    # or
    org_name: Main Org.
I believe this functionality was added after Grafana v5. I am trying to follow the documentation, but it is not working.

I was having the same issue for a little bit today and I was able to make it work. I would guess that you ended up finding a solution, but I find it useful to post an example of something that works for future people running into this issue. The reason nothing was appearing in the UI is probably because there was a mistake somewhere.
This is an example of my docker-compose:
grafana:
  image: grafana/grafana
  container_name: grafana
  restart: always
  user: "0"
  ports:
    - "3000:3000"
  volumes:
    - type: bind
      source: "/root/Docker/grafana/grafana"
      target: "/var/lib/grafana"
    - type: bind
      source: "/root/Docker/grafana/provisioning"
      target: "/etc/grafana/provisioning"
This is an example of my "/grafana/provisioning/notifiers/slack.yml"
notifiers:
  - name: slack-alarming
    type: slack
    username: Grafa_Alert
    is_default: true
    send_reminder: true
    org_name: LML
    settings:
      uploadImage: true
      url: POSTHOOKURL from slack
Note that the org_name is the name of my company and the username is arbitrary.
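For reference, with the bind mounts above the notifier file sits in the provisioning tree roughly like this (a sketch of my layout; the dashboards and datasources folders hold the other provisioning files):
/root/Docker/grafana/provisioning/
├── dashboards/
├── datasources/
└── notifiers/
    └── slack.yml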
Thanks,
Wassim

Related

Cypher queries fail with Neo4jError: Unknown function 'apoc.convert.fromJsonMap' but APOC should be installed

I deployed Neo4j in my AKS cluster using the standalone Helm chart.
Everything gets deployed and my Node.js server connects to Neo4j correctly.
However, queries throw the Neo4jError: Unknown function 'apoc.convert.fromJsonMap' error, so APOC is clearly missing.
I followed the procedure described here https://neo4j.com/docs/operations-manual/current/kubernetes/configuration/#operations-installing-plugins and my values are below.
The only difference I can find is that in the guide APOC core is enabled afterwards by upgrading the Helm chart, while I'm installing the chart with the option already enabled.
Looking at https://neo4j.com/docs/apoc/current/config/ I saw
As of Neo4j v.5.0, APOC config settings are no longer supported in the neo4j.conf file. Please move all apoc.* settings to apoc.conf. It is also possible to set the config settings using environment variables.
so, as neo4j-standalone is using version 4.4.16, I moved the APOC configuration from apoc.conf to neo4j.conf, but the APOC procedures are still not found by the queries.
Is there something I'm missing in order to enable APOC?
Thank you very much.
neo4j-db:
# neo4j-standalone:
  nameOverride: "neo4j"
  fullnameOverride: 'neo4j'
  neo4j:
    # Name of your cluster
    name: "fixit-neo4j" # this will be the label: app: value for the service selector
    password: "password"
    ##
    passwordFromSecret: ""
    passwordFromSecretLookup: false
    edition: "community"
    acceptLicenseAgreement: "yes"
    offlineMaintenanceModeEnabled: false
    resources:
      cpu: "1000m"
      memory: "2Gi"
  volumes:
    data:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-data
        resources:
          requests:
            storage: 4Gi
    backups:
      mode: 'share' # share an existing volume (e.g. the data volume)
      share:
        name: 'logs'
    logs:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-logs
        resources:
          requests:
            storage: 4Gi
  services:
    # A ClusterIP service with the same name as the Helm Release name should be used for Neo4j Driver connections originating inside the
    # Kubernetes cluster.
    default:
      # Annotations for the K8s Service object
      annotations: { }
    # A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser
    neo4j:
      ### this would create cluster-neo4j svc
      enabled: false
  # env:
  #   NEO4J_PLUGINS: '["graph-data-science"]'
  config:
    server.bolt.enabled: "true"
    server.bolt.tls_level: "REQUIRED"
    server.bolt.listen_address: "0.0.0.0:7687"
    dbms.ssl.policy.bolt.client_auth: "NONE"
    dbms.ssl.policy.bolt.enabled: "true"
    server.directories.plugins: "/var/lib/neo4j/labs"
    dbms.security.procedures.unrestricted: "apoc.*"
    server.config.strict_validation.enabled: "false"
    dbms.security.procedures.allowlist: "gds.*,apoc.*"
  apoc_config:
    apoc.trigger.enabled: "true"
    apoc.jdbc.neo4j.url: "jdbc:foo:bar"
    apoc.import.file.enabled: "true"
  startupProbe:
    failureThreshold: 1000
    periodSeconds: 50
  ssl:
    # setting per "connector" matching neo4j config
    bolt:
      privateKey:
        secretName: tls-secret
        subPath: tls.key
      publicCertificate:
        secretName: tls-secret
        subPath: tls.crt
      trustedCerts:
        sources: [ ]
      revokedCerts:
        sources: [ ]
OK, after looking at quite a few issues on the same subject, I found that some solutions to this problem add dbms.directories.plugins: "/var/lib/neo4j/labs" and dbms.config.strict_validation: "false" to the config section, which, as I understand it, mirrors these settings for both the server and the dbms. It did indeed work, but it's odd that the official guide doesn't mention it. These mirrored settings make sense (tell both the server and the dbms where to look for plugins), but they should still be documented. I see so many posts about this, which suggests the documentation is not clear enough.
Because the need to mirror the plugin location for both the server AND the dbms is not stated anywhere in the docs, I, like many others, assumed the dbms was already configured with the same location as server.directories.plugins: "/var/lib/neo4j/labs" (which the docs do say to configure) and didn't add it. But hey, nobody's perfect, I guess. I hope the docs get updated for future devs' sake; meanwhile, this answer may be helpful.
So the correct configuration is:
env:
  NEO4J_PLUGINS: '["graph-data-science"]'
config:
  server.bolt.enabled: 'true'
  server.bolt.tls_level: 'REQUIRED'
  server.bolt.listen_address: '0.0.0.0:7687'
  dbms.ssl.policy.bolt.client_auth: 'NONE'
  dbms.ssl.policy.bolt.enabled: 'true'
  ## apoc
  server.directories.plugins: '/var/lib/neo4j/labs'
  server.config.strict_validation.enabled: 'false'
  dbms.security.procedures.unrestricted: 'apoc.*'
  dbms.security.procedures.allowlist: 'gds.*,apoc.*'
  ### additional needed dbms config mirroring server config
  dbms.directories.plugins: "/var/lib/neo4j/labs"
  dbms.config.strict_validation: "false"
apoc_config:
  apoc.trigger.enabled: "true"
  apoc.jdbc.neo4j.url: "jdbc:foo:bar"
  apoc.import.file.enabled: "true"
It seems the docs are missing the step of installing the APOC plugin itself. Change the following line to include APOC as well:
NEO4J_PLUGINS: '["graph-data-science", "apoc"]'
and you should be good
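For context, that line lives in the env block of the Helm values shown above, so the change would look roughly like this (a sketch, assuming the same chart layout):
env:
  # load both plugins at startup; APOC was the one missing before
  NEO4J_PLUGINS: '["graph-data-science", "apoc"]'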

How to disable apikey for local serverless development?

I created a simple API (using Serverless) which is protected by an API key when deployed via $ serverless deploy. However, for local development ($ serverless offline) I do not want to use an API key. How can I disable this for local development only?
This is my serverless.yml:
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}

functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
Note: I am aware that I could simply set private: false for local development, but this is quite tedious when there is a long list of functions.
The solution was to use the --noAuth option:
serverless offline --noAuth
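If you prefer not to pass the flag every time, serverless-offline can also read its options from serverless.yml; a minimal sketch, assuming the plugin picks noAuth up from its custom block the same way it accepts its other CLI options there:
custom:
  serverless-offline:
    # skip API key / authorizer checks when running locally
    noAuth: true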

IoT-Agent OPC-UA Docker-compose setting for NGSI ld or NGSI v2

In the docker-compose files of the OPC-UA IoT Agent there are some comments that are unclear to me: one line is marked to be commented out if you want to use NGSI-LD, and another to be commented out if you want to use NGSI-v2.
Reading the strings that would be commented out, however, it seems that you need to uncomment both lines to use NGSI-LD, and comment both of them out to use NGSI-v2.
Is my interpretation correct? Thanks for clearing it up.
PS: the same issue is present in the file docker-compose-external-server.yml
Setting up NGSI-v2 vs NGSI-LD is common to all IoT Agents. The Installation Guide describes the required configuration - default operation is NGSI-v2.
If you want to operate NGSI-LD, the ngsiVersion and jsonLdContext must be defined.
{
  host: '192.168.56.101',
  port: '1026',
  ngsiVersion: 'ld',
  jsonLdContext: 'http://context.json-ld'
}
ngsiVersion can be v2, ld or mixed.
Both settings can also be set up using environment variables, which is more convenient when using Docker.
Therefore, for NGSI-LD the following minimal set-up is required:
iotage:
  hostname: iotage
  image: iotagent4fiware/iotagent-opcua:latest
  environment:
    - IOTA_CB_NGSI_VERSION=ld
    - IOTA_JSON_LD_CONTEXT=https://path-to-context-file
    - IOTA_FALLBACK_TENANT=opcua_car
    - IOTA_RELAX_TEMPLATE_VALIDATION=true
For NGSI-v2 the following is required:
iotage:
  hostname: iotage
  image: iotagent4fiware/iotagent-opcua:latest
  environment:
    - IOTA_CB_NGSI_VERSION=v2
    - IOTA_RELAX_TEMPLATE_VALIDATION=true
IOTA_RELAX_TEMPLATE_VALIDATION is required for OPC-UA to allow the provisioning of OPC-UA topics containing =, which would normally be disallowed.

Error "Property Listeners cannot be empty" occurs when deploy ruby-on-rails project

I am a newbie with AWS CloudFormation. My Elastic Beanstalk worker uses Ruby on Rails. The EB environment is a stack based on a CloudFormation template.
I don't know why, but when I deploy (eb deploy) recently, the Events tab gives the error message "Property Listeners cannot be empty".
The AWSEBLoadBalancer is not in the Resources: section of the template, but I do find it in the .ebextensions of the source code:
Resources:
  AWSEBLoadBalancer:
    Properties:
      AccessLoggingPolicy:
        EmitInterval: 5
        Enabled: true
        S3BucketName:
          Ref: LogsBucket
    Type: "AWS::ElasticLoadBalancing::LoadBalancer"
    DependsOn: "LogsBucketPolicy"
  LogsBucket:
    DeletionPolicy: Retain
    Type: "AWS::S3::Bucket"
  LogsBucketPolicy:
    Properties:
      Bucket:
        Ref: LogsBucket
      PolicyDocument:
        Statement:
          -
            Action:
              - "s3:PutObject"
            Effect: Allow
            Principal:
              AWS:
                ? "Fn::FindInMap"
                :
                  - Region2ELBAccountId
                  -
                    Ref: "AWS::Region"
                  - AccountId
            Resource:
              ? "Fn::Join"
              :
                - ""
                -
                  - "arn:aws:s3:::"
                  -
                    Ref: LogsBucket
                  - /AWSLogs/
                  -
                    Ref: "AWS::AccountId"
Can you please give me some hints to solve this problem?
The error message says that you are missing Listeners. With Listeners added, your load balancer definition would look something like this (adjust to your own settings):
AWSEBLoadBalancer:
  Properties:
    Listeners:
      - InstancePort: 80
        InstanceProtocol: HTTP
        LoadBalancerPort: 80
        #PolicyNames:
        #  - String
        Protocol: HTTP
        #SSLCertificateId: String
    AccessLoggingPolicy:
      EmitInterval: 5
      Enabled: true
      S3BucketName:
        Ref: LogsBucket
  Type: "AWS::ElasticLoadBalancing::LoadBalancer"
  DependsOn: "LogsBucketPolicy"

Prometheus AlertManager - Send Alerts to different clients based on routes

I have two services, A and B, which I want to monitor, and two different notification channels, X and Y, in the form of receivers in the Alertmanager config file.
I want to notify X if service A goes down and notify Y if service B goes down. How can I achieve this with my configuration?
My AlertManager YAML file is:
route:
  receiver: X
receivers:
  - name: X
    email_configs:
  - name: Y
    email_configs:
And my alert.rules file is:
groups:
  - name: A
    rules:
      - alert: A_down
        expr: expression
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "A is down"
  - name: B
    rules:
      - alert: B_down
        expr: expression
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "B is down"
The config should roughly look like this (not tested):
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 2h
  receiver: 'default-receiver'
  routes:
    - match:
        alertname: A_down
      receiver: X
    - match:
        alertname: B_down
      receiver: Y
The idea is that each route can have a routes field where you can put a different config that takes effect when the alert's labels match the match condition.
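Since your alert rules already attach severity labels (critical for A_down, warning for B_down), a roughly equivalent sketch (also not tested) matches on those labels instead of the alert names:
route:
  receiver: X
  routes:
    - match:
        severity: critical
      receiver: X
    - match:
        severity: warning
      receiver: Y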
To clarify, the general flow for handling an alert in Prometheus (the Prometheus and Alertmanager integration) is:
an error triggers one of your configured rules (Rule) -> it is routed to a destination (Route) -> an event is fired (Receiver) -> you get a message in Slack/PagerDuty/Mail/etc.
For example: if my AWS machine cluster production-a1 is down, I want to trigger an event that notifies my team via PagerDuty and Slack with the relevant error.
There are three files that matter when configuring alerts on your Prometheus system:
alertmanager.yml - configuration of your routes (receiving the triggered errors) and receivers (how to handle these errors)
rules.yml - contains all the thresholds and rules you define for your system
prometheus.yml - global configuration that ties your rules, routes and receivers (the two files above) together
I'm attaching a dummy example to demonstrate the idea; in it I watch for overload on my machine (using the node exporter installed on it):
In /var/data/prometheus-stack/alertmanager/alertmanager.yml:
global:
  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'localhost:25'
  smtp_from: 'JohnDoe@gmail.com'
route:
  receiver: defaultTrigger
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 6h
  routes:
    - match_re:
        service: service_overload
        owner: ATeam
      receiver: pagerDutyTrigger
receivers:
  - name: 'pagerDutyTrigger'
    pagerduty_configs:
      - send_resolved: true
        routing_key: <myPagerDutyToken>
Add a rule in /var/data/prometheus-stack/prometheus/yourRuleFile.yml:
groups:
  - name: alerts
    rules:
      - alert: service_overload_more_than_5000
        expr: (node_network_receive_bytes_total{job="someJobOrService"} / 1000) >= 5000
        for: 10m
        labels:
          service: service_overload
          severity: pager
          dev_team: myteam
        annotations:
          dev_team: myteam
          priority: Blocker
          identifier: '{{ $labels.name }}'
          description: 'service overflow'
          value: '{{ humanize $value }}%'
In /var/data/prometheus-stack/prometheus/prometheus.yml, add this snippet to integrate Alertmanager:
global:
  ...
alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets:
            - "alertmanager:9093"
rule_files:
  - "yourRuleFile.yml"
...
Note that the key point of this example is the service_overload label, which binds the rule to the right receiver.
Reload the config (restart the service, or stop and start your Docker containers) and test it; if everything is configured correctly you can watch the alerts at http://your-prometheus-url:9090/alerts
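Before reloading, you can also sanity-check the files, for example (paths as in the example above, assuming amtool and promtool are installed):
amtool check-config /var/data/prometheus-stack/alertmanager/alertmanager.yml
promtool check rules /var/data/prometheus-stack/prometheus/yourRuleFile.yml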
