Can I run Grafana outside of the firewall - InfluxDB

I'll have InfluxDB storing Arduino sensor data, and I need to visualize it.
I want admins (around 100 people) to go to their browser, type www.example.com, fill in their username and password, and see the 10 Grafana visualizations that belong to them, out of 1000 visualizations in total. Is this possible with Grafana, or should I use something else?

In short: yes.
To put it simply, Grafana only knows how to plot data; what you need to do is provide that data.
Grafana is commonly used with Graphite, but you are by no means forced to use it, and in fact have various alternatives for your data source; InfluxDB is one of them. If you haven't done so yet, I suggest reading http://docs.grafana.org/datasources/influxdb/
From your Grafana instance, click on Dashboards and select Data Sources; there you'll be able to select InfluxDB 0.9.x or InfluxDB 0.8.x and fill in the required details.
What you'll need to do is configure your architecture so that InfluxDB is reachable both from your Arduino application(s) (to push/store data) and from Grafana (to pull data).
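To make the push side concrete, here is a minimal sketch (in Python, rather than Arduino code) of writing one sensor reading into InfluxDB 0.9 over its HTTP line-protocol endpoint; Grafana then queries the same database through the data source configured above. The host, database name ("sensors") and measurement ("temperature") are hypothetical placeholders for your own setup.

    # Minimal sketch: push one reading into InfluxDB 0.9 over HTTP.
    # Host, database ("sensors") and measurement ("temperature") are
    # placeholders -- adjust them to your own setup.
    import requests

    INFLUX_WRITE_URL = "http://influxdb.example.com:8086/write"

    def push_reading(sensor_id, celsius):
        # Line protocol: measurement,tag_key=tag_value field_key=field_value
        line = "temperature,sensor=%s value=%f" % (sensor_id, celsius)
        resp = requests.post(INFLUX_WRITE_URL, params={"db": "sensors"}, data=line)
        resp.raise_for_status()  # InfluxDB answers 204 No Content on success

    push_reading("arduino-01", 21.5)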

Related

Difference between database connector/reader nodes in KNIME

While creating a basic workflow using KNIME and PostgreSQL, I ran into problems selecting the proper node for fetching data from the DB.
In the node repository we can find at least:
1) PostgreSQL Connector
2) Database Reader
3) Database Connector
Actually, we can do the same thing using 2) alone, or by connecting either 1) or 3) to the input of 2).
I assumed there were some hidden advantages, like improved performance with complex queries or better overall stability, but on the other hand we are using exactly the same database driver anyway.
There is a big difference between the Connector nodes and the Reader node.
The Database Reader reads data into KNIME; the data then lives on the machine running the workflow. This can be a bad idea for big tables.
The Connector nodes do not. The data remains where it is (usually on a remote machine in your cluster). You can then connect database nodes to the connector nodes; all data manipulation then happens within the database, and no data is loaded onto your machine (unless you use the output port preview).
As for the difference between the other two:
The PostgreSQL Connector is just a special case of the Database Connector with a pre-set configuration. You can make the same configuration with the Database Connector, which also exposes more detailed options for non-standard databases.
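To make the Reader-vs-Connector trade-off concrete outside of KNIME, here is a minimal sketch in plain Python with psycopg2; the table, column and credential names are hypothetical. The first query mimics the Database Reader (all rows travel to the local machine), the second mimics a connector plus in-database node (only the aggregated result travels).

    # Illustration of the Reader-vs-Connector trade-off; hypothetical names.
    import psycopg2

    conn = psycopg2.connect(host="dbhost", dbname="mydb",
                            user="knime", password="secret")
    cur = conn.cursor()

    # "Database Reader" style: pull every row onto the local machine,
    # then aggregate locally -- expensive for big tables.
    cur.execute("SELECT amount FROM orders")
    local_total = sum(row[0] for row in cur.fetchall())

    # "Connector + in-database node" style: push the aggregation into
    # the database and transfer only a single result row.
    cur.execute("SELECT SUM(amount) FROM orders")
    (db_total,) = cur.fetchone()

    conn.close()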
One advantage of using 1) or 3) is that you only need to enter the connection details for a database once per workflow, and can then attach multiple reader or writer nodes. I'm not sure if there is a performance benefit.
1) offers simpler connection setup than 3), thanks to the bundled PostgreSQL JDBC driver.

Find a free, cluster-capable InfluxDB

I'm going to use InfluxDB to store a lot of IoT data from sensors.
As the last clustered version of InfluxDB, v0.11, is not ready for production use, and the Relay HA is still too young, is there another way to scale out InfluxDB?
E.g.:
How mature is the last clustered version of InfluxDB, v0.11? Should I customize v0.11 or try another cost-saving approach?
How about using Kafka in front of InfluxDB to buffer data when InfluxDB goes down? (A sketch of this pattern follows the answer below.)
How about sharding? Is there any detailed documentation about sharding in InfluxDB (https://influxdata.com/high-availability/)?
Anyway, I just want to find a free, clustered InfluxDB.
Other than InfluxDB Relay there isn't a free way to scale out InfluxDB.
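As an aside on the Kafka idea raised in the question: buffering does not scale InfluxDB out, but it can bridge downtime. Below is a minimal sketch, assuming sensors write line-protocol strings into a Kafka topic and using the kafka-python client; topic, group and host names are hypothetical. Offsets are only committed after a successful write, so data waits in Kafka while InfluxDB is down.

    # Sketch: drain a Kafka topic into InfluxDB; names are hypothetical.
    import requests
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "sensor-lines",
        bootstrap_servers="kafka:9092",
        group_id="influx-writer",
        enable_auto_commit=False,   # commit only after a successful write
    )

    for msg in consumer:
        resp = requests.post("http://influxdb:8086/write",
                             params={"db": "iot"}, data=msg.value)
        resp.raise_for_status()     # fail (and later resume from Kafka) on error
        consumer.commit()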

Zabbix & external monitoring systems

I need to make Zabbix and other monitoring systems talk to each other.
My company uses Zabbix for monitoring. Our partner plans to use a different system.
We need to exchange monitoring data.
I'm interested in cooperation with the following systems: BMC Patrol, MS SCOM, NetCool, Portal.
What is the best way to integrate them?
Maybe via SNMP?
Replicate hosts and metrics into your Zabbix (use the Zabbix trapper item type and also set the Allowed hosts value), and then just use some suitable zabbix-sender implementation to push data into Zabbix.
IMO it's a terrible idea, because of latency, syncing, ... Do you really need the data (item values), or do you only need to visualize data from different data sources in one graph?
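For illustration, here is a minimal zabbix-sender implementation in pure Python, pushing one value to a trapper item as described above; the server address, host name and item key are hypothetical placeholders.

    # Minimal Zabbix sender-protocol client; names are hypothetical.
    import json
    import socket
    import struct

    def zabbix_send(server, host, key, value, port=10051):
        body = json.dumps({
            "request": "sender data",
            "data": [{"host": host, "key": key, "value": str(value)}],
        }).encode("utf-8")
        # Zabbix protocol: "ZBXD\x01" + 8-byte little-endian payload length
        packet = b"ZBXD\x01" + struct.pack("<Q", len(body)) + body
        with socket.create_connection((server, port), timeout=10) as s:
            s.sendall(packet)
            reply = s.recv(4096)
        return reply  # JSON after the 13-byte header reports processed/failed counts

    zabbix_send("zabbix.example.com", "partner-host-01", "partner.cpu.load", 0.42)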
Regarding BMC Patrol you can use History Loader/Propagator KM to export the monitoring data:
https://docs.bmc.com/docs/display/public/unixlinux912/PATROL+KM+for+History+Loader
or you can use the 'dump_hist' command to dump the history data from the agents:
https://docs.bmc.com/docs/display/pia9600/dump_hist+uility
Regarding Netcool events, you could get the information using different approaches, for example, depending on the version, you could get the events from the HTTP interface, as described below:
https://www.ibm.com/support/knowledgecenter/en/SSNFET_9.2.0/com.ibm.netcool_OMNIbus.doc_7.4.0/omnibus/wip/api/reference/omn_api_http_httpinterface.html
Or perhaps you could create a flat file gateway to read the events and write them on a file:
https://www.ibm.com/support/knowledgecenter/en/SSSHTQ/omnibus/gateways/flatfilegw/wip/concept/flatfilegw_intro.html

How can we collect performance metrics from the cAdvisor Docker container?

Sorry, I've just started to learn Docker, so my question may seem stupid to some of you.
In fact, I would like to know if there is a way to collect performance metrics from the "cAdvisor" container (not from cgroups) at runtime. I mean, extract the performance values behind the curves cAdvisor draws, like memory usage or network traffic.
I need to record these values and save them in a database so that I can perform statistical analyses on them (like comparing the memory consumption of two Docker containers at t=50s).
Thanks in advance.
As other answers mention, cAdvisor doesn't provide its own performance-data API; instead, it exposes metrics, which are typically handled in a separate database if one wants to derive performance data beyond "real time". For example, cAdvisor exports Prometheus metrics natively:
http://prometheus.io/docs/instrumenting/exporters/
The Prometheus metric types:
http://prometheus.io/docs/concepts/metric_types/
Prometheus supports a fairly rich functional expression language that can be used for querying and visualization:
http://prometheus.io/docs/querying/basics/
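Once Prometheus scrapes cAdvisor, recorded samples can be pulled back over the Prometheus HTTP query API for offline analysis. A minimal sketch, assuming a Prometheus server at the given (hypothetical) address; container_memory_usage_bytes is one of the metrics cAdvisor exports, and the "name" label carries the container name.

    # Sketch: query a cAdvisor metric back out of Prometheus.
    import requests

    resp = requests.get(
        "http://prometheus:9090/api/v1/query",
        params={"query": 'container_memory_usage_bytes{name="my-container"}'},
    )
    for sample in resp.json()["data"]["result"]:
        timestamp, value = sample["value"]
        print(sample["metric"].get("name"), timestamp, value)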
cAdvisor does provide a REST endpoint to get any stats in real time. By default it keeps the latest two minutes of data, and you can configure it to keep more or less. It also supports a storage backend to keep dumping stats to an InfluxDB database.
REST API:
e.g. /api/v1.3/containers
doc: https://github.com/google/cadvisor/blob/master/docs/api.md
Doc on setting up InfluxDB:
https://github.com/google/cadvisor/blob/master/docs/influxdb.md
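A minimal sketch of polling that REST endpoint and printing the memory samples cAdvisor currently holds (about two minutes by default); the cAdvisor address is a placeholder, and the "stats"/"memory"/"usage" fields follow the v1.3 API documented at the link above.

    # Sketch: read recent memory samples from the cAdvisor v1.3 REST API.
    import requests

    info = requests.get("http://cadvisor-host:8080/api/v1.3/containers").json()
    for stat in info.get("stats", []):
        print(stat["timestamp"], stat["memory"]["usage"])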
I think you could use https://github.com/tutumcloud/container-metrics for this. Basically, what that would be doing is using InfluxDB (http://influxdb.com/) as a time-series data store.
There is some more information available here: http://blog.tutum.co/2014/08/25/panamax-docker-application-template-with-cadvisor-elasticsearch-grafana-and-influxdb/
A couple of people seem to be looking into the ELK stack (Elasticsearch, Logstash, Kibana) for visualising some of this data here: https://github.com/google/cadvisor/issues/634

InfluxDB on Raspberry Pi: send data periodically to a logging host

I would like to use InfluxDB with my Raspberry Pi/openHAB home automation.
I am just worried about DB size/performance.
So my plan would be: log only one month of data on the Pi and have it cleaned up automatically.
Cleaning, I understand, is easy with retention policies (old data is cleared automatically).
For long-term analysis I want to collect all the data on a server.
Now the question: how can I export the data on the Pi to a flat file before retention removes it, and afterwards import that data into a separate InfluxDB on a different server?
(Or even better: is there a way to do this in a sort of cluster mode?)
thanks a lot,
Chris
I use InfluxDB on a Pi for sensor logs. I currently log 4 records every 5 seconds and have done so for more than 3 months, and performance on my Pi is really good. I don't have the file size at hand, but it was no more than 10 MB.
You can use InfluxDB in cluster mode, but I'm not sure it will answer your question about data cleaning.
To export data, you can use the InfluxDB API to get all series in the database, then all the data, and flush that into a JSON file. You can then use the API to load that file into another DB.
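A minimal sketch of that export/import path over the InfluxDB HTTP API (v0.9+); database, measurement and host names are hypothetical, and it assumes numeric fields (strings would need quoting in line protocol, and tags come back as ordinary columns here and are re-written as fields).

    # Sketch: dump a measurement to JSON, then replay it into another DB.
    import json
    import requests

    def export_measurement(host, db, measurement, path):
        # Ask for epoch-second timestamps so the dump can be written back as-is
        resp = requests.get("http://%s:8086/query" % host,
                            params={"db": db, "epoch": "s",
                                    "q": 'SELECT * FROM "%s"' % measurement})
        with open(path, "w") as f:
            json.dump(resp.json(), f)

    def import_measurement(host, db, path):
        with open(path) as f:
            dump = json.load(f)
        for series in dump["results"][0].get("series", []):
            cols = series["columns"]            # e.g. ["time", "value"]
            for row in series["values"]:
                fields = dict(zip(cols, row))
                ts = fields.pop("time")
                line = "%s %s %d" % (
                    series["name"],
                    ",".join("%s=%s" % kv for kv in fields.items()),
                    ts,
                )
                requests.post("http://%s:8086/write" % host,
                              params={"db": db, "precision": "s"}, data=line)

    export_measurement("raspberry.local", "openhab", "temperature", "dump.json")
    import_measurement("server.local", "openhab_archive", "dump.json")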
