How to increase plotting interval in ThingsBoard Dashboard

I have successfully managed to install ThingsBoard on Ubuntu 18.04. In my application I want to send large data packages from a few (<20) devices via MQTT. I also want to display the incoming data packets in real time.
For testing purposes I now play with the chart dashboard of ThingsBoard. Unfortunately I am not able to set the plotting interval on the dashboard to less than 1 second.
This is the current situation; I am trying to increase the speed of the plot:
Dashboard GIF
Are there any other settings that would meet my needs?
Thank you very much.

Change the "data aggregation function" to "none" and it will start displaying real-time, raw data.
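For completeness, here is a minimal sketch of publishing telemetry to the ThingsBoard MQTT device API at a sub-second rate; the broker host and access token are placeholders. With aggregation set to none, points like these show up on the chart as they arrive:

# Sketch: publish telemetry to ThingsBoard over MQTT at ~10 Hz.
# "thingsboard.example.com" and ACCESS_TOKEN are placeholders for your host/device token.
import json
import time

import paho.mqtt.client as mqtt

ACCESS_TOKEN = "YOUR_DEVICE_ACCESS_TOKEN"

client = mqtt.Client()                        # on paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION2 here
client.username_pw_set(ACCESS_TOKEN)          # ThingsBoard uses the device token as the MQTT username
client.connect("thingsboard.example.com", 1883)
client.loop_start()

for i in range(100):
    payload = {"ts": int(time.time() * 1000),             # explicit millisecond timestamp
               "values": {"temperature": 20.0 + i * 0.1}}
    client.publish("v1/devices/me/telemetry", json.dumps(payload), qos=1)
    time.sleep(0.1)                                        # 100 ms between points (sub-second plotting)

client.loop_stop()
client.disconnect()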

Related

Writing data to a time series database over an unstable network

I'm trying to find a time series database for the following scenario:
1. A sensor on a Raspberry Pi provides real-time data.
2. An application takes the data and pushes it to the time series database.
3. If the network is down (the GSM modem ran out of credit, rain, or something else), the data is stored locally.
4. Once the network is available again, the data is synchronised to the time series database in the cloud, so there is no missing data and no duplicates.
5. (Optionally) query the database from Grafana.
I'm looking for a time series database that can handle 3. and 4. for me. Is there any?
I can start Prometheus in federated mode (can I?) and keep one node on the Raspberry Pi for initial ingestion and another node in the cloud for collecting the data. But that setup would instantly consume 64 MB+ of memory for the Prometheus node.
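To make points 3 and 4 concrete, here is a rough sketch of the store-and-forward behaviour I mean: buffer readings in a local SQLite file while the network is down and flush them once the remote endpoint is reachable again, using the timestamp as a de-duplication key. The endpoint URL and line format below are placeholders, not a specific database's API.

# Rough sketch of store-and-forward: buffer readings locally, flush when the network is back.
# ENDPOINT and the line format are placeholders, not a specific database's API.
import sqlite3
import time

import requests

ENDPOINT = "https://tsdb.example.com/write"   # hypothetical HTTP write endpoint

db = sqlite3.connect("buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS points (ts INTEGER PRIMARY KEY, value REAL)")

def record(value: float) -> None:
    """Always write locally first; the timestamp doubles as a de-duplication key."""
    db.execute("INSERT OR REPLACE INTO points VALUES (?, ?)", (int(time.time() * 1000), value))
    db.commit()

def flush() -> None:
    """Push buffered points and delete them only after the server confirms receipt."""
    rows = db.execute("SELECT ts, value FROM points ORDER BY ts").fetchall()
    if not rows:
        return
    body = "\n".join(f"sensor value={v} {ts}" for ts, v in rows)
    try:
        resp = requests.post(ENDPOINT, data=body, timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        return  # network still down; keep the buffer and retry later
    db.executemany("DELETE FROM points WHERE ts = ?", [(ts,) for ts, _ in rows])
    db.commit()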
Take a look at vmagent. It can be installed on every device where metrics from local sensors must be collected (e.g. at the edge), and it can collect all these metrics via various popular data ingestion protocols. It can then push the collected metrics to a centralized time series database such as VictoriaMetrics. vmagent buffers the collected metrics in local storage when the connection to the centralized database is unavailable, and pushes the buffered data to the database as soon as the connection is restored. vmagent works on the Raspberry Pi and on any device with an ARM, ARM64 or AMD64 architecture.
See use cases for vmagent for more details.
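As a rough illustration of that setup, the application on the Raspberry Pi can simply POST InfluxDB line protocol to the local vmagent, which buffers it and forwards it to the remote write target. The port and path below are my understanding of vmagent's defaults, so treat them as assumptions:

# Sketch: push a reading to a locally running vmagent via InfluxDB line protocol.
# Port 8429 and the /write path are assumptions based on vmagent's default HTTP listener;
# vmagent takes care of buffering and forwarding to the -remoteWrite.url target.
import time

import requests

def push_reading(value: float) -> None:
    line = f"sensor,location=greenhouse force={value} {int(time.time() * 1e9)}"  # nanosecond timestamp
    requests.post("http://localhost:8429/write", data=line, timeout=5)

push_reading(12.34)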

Scale capability of VOLTTRON

I am trying VOLTTRON for a project and want to know its capability in the long term. The project is to control/monitor ~100k devices, and possibly millions if things go well.
What is the biggest scale of VOLTTRON usage in a real scenario? How many devices can one node accommodate if, say, the host machine has high specs?
What are the constraints of VOLTTRON later on in its use (constraints such as database / server resources / network)?
I am not hoping for an exact value; I just want to find the capability range.
Thanks,
There are several drivers for how well VOLTTRON scales for a single VOLTTRON instance.
In no particular order:
Network and device communication speed. (Are your devices on a serial connection? BACnet devices behind an MSTP router?)
Frequency of data collection. (10 seconds? 1 minute? 5 minutes? 15 minutes?)
How close together (time-wise) data from different devices needs to be.
Frequency of commands issued / number of commands issued.
Machine specs
Often we see the bottleneck being the network for device communication. This will drive the rate at which you can communicate with devices. For collection, a mid-level PC is overkill in most situations.
In the field our users have been able to scrape 1.5K+ BACnet devices in less than 15 minutes with a single node. Many of these devices were on an MSTP trunk, which would be the major limiting factor. If these were TCP BACnet devices the rate of data acquisition would be much higher.
There are parameters to tune the rate of data collection for a specific node. It is common to tweak these values to find the optimal rate of collection after initial platform configuration.
The kind of scaling you are looking for will require using multiple VOLTTRON instances. It is common to have multiple collection boxes for an installation. Usually these instances will gather data for some number of devices (based on your scenario) and either send those values directly to a database or forward them to another central instance of the platform that will submit the data on the remote nodes' behalf. Numbers for some real deployments can be found here: https://volttron.org/sites/default/files/publications/VOLTTRON%20Scalability-update-final.pdf
There are several database options from MySQL to Mongo to SQLite. You will want to pick a central database based on your data collection needs (so not SQLite).

Axibase Time-Series Database data sampling maximum rate

I am using Axibase Time Series Database Community Edition, version 10379. I am trying to store data that comes from a force sensor, saving a reading every 2 milliseconds. How can I configure the portal to accept this time resolution?
I made an attempt to send the data at that rate using an Arduino board with a WiFi shield, but the TCP connection dropped after sending a small amount of data.
Time resolution in Axibase Time-Series Database is 1 millisecond by default, so the problem is probably occurring for other reasons such as:
Invalid timestamp
Missing end-of-line character at the end of the series command
Same timestamp for multiple commands with the same entity/metric/tags. For example, these commands are duplicates and one of them will be discarded:
series ms:1445762625574 e:e-1 m:m-1=100
series ms:1445762625574 e:e-1 m:m-1=125
Overflow of the receiving queue in ATSD. This can occur if the ingestion rate is higher than the disk write speed for a long period of time. Open the ATSD portal in the GUI and check the top right chart to see whether the rejected_count metric is greater than zero. This can be addressed by changing the default configuration settings.
Other reasons are listed at https://axibase.com/docs/atsd/api/data/#errors
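For illustration, here is a minimal sketch of sending correctly formed series commands from Python over a raw TCP socket, with a unique millisecond timestamp and a terminating newline for every command. The host, entity and metric names are placeholders; port 8081 is the ingestion port used with netcat below:

# Sketch: send ATSD "series" commands over TCP with unique millisecond timestamps
# and a newline terminating every command (two of the issues listed above).
# Host, entity and metric names are placeholders.
import socket
import time

commands = []
last_ts = 0
for reading in (100, 125, 130):
    ts = int(time.time() * 1000)
    if ts <= last_ts:          # guarantee strictly increasing timestamps to avoid duplicates
        ts = last_ts + 1
    last_ts = ts
    commands.append(f"series ms:{ts} e:arduino-1 m:force={reading}\n")
    time.sleep(0.002)          # ~2 ms sampling interval from the question

with socket.create_connection(("atsd.example.com", 8081), timeout=5) as sock:
    sock.sendall("".join(commands).encode("ascii"))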
I would recommend starting netcat in server mode and recording the data from the Arduino board to a file to see exactly what commands are sent to ATSD.
1. Stop ATSD with ./atsd-tsd.sh stop
2. Launch netcat in server mode and record the received data to a command.log file: netcat -lk 8081 > command.log
3. Restart the Arduino and send some data to ATSD (now netcat). Review the command.log file.
4. Start ATSD with ./atsd-tsd.sh start
Disclosure: I work for Axibase.

Fetch data subset from gmond

This is in the context of a small data-center setup where the number of servers to be monitored is only in the double digits and may grow slowly to a few hundred (if at all). I am a Ganglia newbie and have just completed setting up a small Ganglia test bed (and have been reading and playing with it). A couple of things I have realised:
gmetad supports interactive queries on port 8652, using which I can get metric data subsets - say, data for a particular metric family in a specific cluster
gmond seems to always return the whole dump of data for all metrics from all nodes in a cluster (on doing 'netcat host 8649'); a rough sketch of fetching and filtering that dump client-side is shown below
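For illustration, this is roughly what pulling the full dump from gmond and filtering it on the client looks like. The host and metric names are placeholders, and the assumed XML layout is gmond's standard GANGLIA_XML/CLUSTER/HOST/METRIC structure; the point is that the filtering has to happen on my side, not in gmond:

# Sketch: fetch the full XML dump from gmond on port 8649 and filter it client-side.
# Host and metric names are placeholders; gmond itself always returns everything.
import socket
import xml.etree.ElementTree as ET

def fetch_metric(host: str, metric_name: str) -> dict:
    chunks = []
    with socket.create_connection((host, 8649), timeout=10) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    root = ET.fromstring(b"".join(chunks))          # <GANGLIA_XML> document
    results = {}
    for node in root.iter("HOST"):
        for metric in node.iter("METRIC"):
            if metric.get("NAME") == metric_name:
                results[node.get("NAME")] = metric.get("VAL")
    return results

print(fetch_metric("gmond-node.example.com", "load_one"))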
In my setup, I don't want to use gmetad or RRD. I want to directly fetch data from the multiple gmond clusters and store it in a single data store. There are a couple of reasons not to use gmetad and RRD:
I don't want multiple data stores in the whole setup. I can have one dedicated machine fetch data from the few clusters and store it.
I don't plan to use gweb as the data front end. The data from Ganglia will be fed into a different monitoring tool altogether. With this setup, I want to eliminate the latency that another layer of gmetad would add. That is, if gmetad polls, say, every minute and my management tool polls gmetad every minute, that adds up to 2 minutes of delay, which I feel is unnecessary for a relatively small/medium sized setup.
There are a couple of problems with this approach for which I need help:
I cannot get filtered data from gmond. Is there some plugin that can help me fetch individual metric/metric-group information from gmond (since different metrics are collected at different intervals)?
gmond output is very verbose text. Is there some other (hopefully binary) format that I can configure for export?
Is my idea of eliminating gmetad/RRD completely a very bad idea? Has anyone tried this approach before? What should I be careful of in doing so, from a data-collection standpoint?
Thanks in advance.

Capture / Monitor system data of application server in Graphite

I am using a Graphite server to capture my metrics data and render them as graphs. I have 4 application servers in a load-balanced setup. My aim is to capture system data such as CPU usage, memory usage, disk load, etc., for all 4 application servers. I set up a Graphite environment on a separate server, and I want to push the system data from all the application servers to Graphite and have it displayed as graphs. I don't know what needs to be done to feed system data into Graphite. My thinking was to install statsd on all the application servers and feed the system data to Graphite, but it looks like statsd is intended for application data rather than system data.
Can anyone help me get on the right track? Thanks in advance.
Running collectd with a Graphite agent would be an excellent start to gather the information you're after.
There are almost unlimited ways to get your data into Graphite.
You can find a list of tools that are known to work very well with Graphite on the readthedocs.org page: http://graphite.readthedocs.org/en/0.9.10/tools.html
There is also an example script in the carbon project that gathers the system load average: example-client.py
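In the same spirit as example-client.py, a minimal sketch of pushing the load average to Carbon's plaintext listener might look like this (the Graphite host and metric prefix are placeholders; 2003 is Carbon's default plaintext port):

# Sketch: send the 1-minute load average to Graphite's Carbon plaintext listener.
# Format is "metric.path value unix_timestamp\n"; host and prefix are placeholders.
import os
import socket
import time

GRAPHITE_HOST = "graphite.example.com"
GRAPHITE_PORT = 2003                      # default Carbon plaintext port
PREFIX = "servers.app01.system"           # one prefix per application server

load1, _, _ = os.getloadavg()
line = f"{PREFIX}.load.1min {load1} {int(time.time())}\n"

with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT), timeout=5) as sock:
    sock.sendall(line.encode("ascii"))

Run something like this from cron or a small loop on each application server, giving each one its own prefix, and the series will appear under that path in the Graphite metric tree.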
