Zenoss monitoring: not able to understand the use of data values

I am working on a project in which we monitor data from virtual devices running on a Zenoss server. I get the actual data by calling several APIs. For example, calling a URL that ends with /getRRDInfoTabDevice returns the data store and data point; I then make another call using these values and get data in XML like the following:
<row><t>2147483700</t><v0>1.8917126005e-305</v0></row>
where t is the timestamp and v0 is the actual data. So my question is: what is the use of this value v0? As one can see, it is on the order of e-305, which is almost zero.

This value is in octets/second; it is used to monitor CPU utilization, network utilization, etc. It basically depends on what you are monitoring through your RRDtool.
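As a minimal illustration (a Java sketch with hypothetical values, assuming t is a Unix timestamp in seconds and v0 is a rate in octets per second), the row shown above could be interpreted like this:

import java.time.Instant;

public class RrdRowExample {
    public static void main(String[] args) {
        long t = 2147483700L;            // value of <t> in the XML row (seconds since epoch)
        double v0 = 1.8917126005e-305;   // value of <v0> in the XML row (octets per second)

        Instant when = Instant.ofEpochSecond(t);
        double bitsPerSecond = v0 * 8;   // octets/s -> bits/s

        // A value around 1e-305 is effectively zero traffic for that interval.
        System.out.printf("%s : %.3e octets/s (%.3e bit/s)%n", when, v0, bitsPerSecond);
    }
}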

What does CBATTError Code insufficientResources really mean?

I'm trying to send data over BLE from my iPhone to an ESP32 board. I'm developing on the Flutter platform and using the flutter_reactive_ble library.
My iPhone can connect to the other device and it can also send 1 byte using the writeCharacteristicWithResponse function. But when I try to send my real data, which is large (>7000 bytes), I get this error:
flutter: Error occured when writing 9f714672-888c-4450-845f-602c1331cdeb :
Exception: GenericFailure<WriteCharacteristicFailure>(
code: WriteCharacteristicFailure.unknown,
message: "Error Domain=CBATTErrorDomain Code=17
"Resources are insufficient."
UserInfo={NSLocalizedDescription=Resources are insufficient.}")
I tried searching for this error but didn't find additional info, even on the Apple Developer website. It just says:
Resources are insufficient to complete the ATT request.
What does this error really mean? Which resources are insufficient, and how can I work around this problem?
This is almost certainly larger than this characteristic's maximum value length (which is probably on the order of tens of bytes, not thousands). Before writing, you need to call maximumWriteValueLength(for:) to see how much data can be written. If you're trying to send serial data over a characteristic (which is common, but not really what characteristics were designed for), you'll need to break your data up into chunks and reassemble them on the peripheral. You will likely either need an "end" indicator of some kind, or you will need to send the length of the payload first so that the receiver knows how much to expect.
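As a rough sketch of that length-prefix idea (plain Java with hypothetical names, not part of flutter_reactive_ble or CoreBluetooth), the sender could prepend a 4-byte length header and split the result into chunks that fit the characteristic:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class BleFraming {
    // maxChunkSize would come from maximumWriteValueLength(for:) on the iOS side.
    public static List<byte[]> frame(byte[] payload, int maxChunkSize) {
        byte[] framed = ByteBuffer.allocate(4 + payload.length)
                .putInt(payload.length)   // 4-byte length header so the receiver knows the total size
                .put(payload)
                .array();

        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < framed.length; offset += maxChunkSize) {
            int end = Math.min(offset + maxChunkSize, framed.length);
            byte[] chunk = new byte[end - offset];
            System.arraycopy(framed, offset, chunk, 0, chunk.length);
            chunks.add(chunk);            // each chunk becomes one characteristic write
        }
        return chunks;
    }
}

Each chunk is then written to the characteristic in order, and the peripheral appends them until it has received the number of bytes announced in the header.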
First of all, a characteristic value cannot be larger than 512 bytes. This is set by the ATT standard (Bluetooth Core Specification v5.3, Vol 3, Part F (ATT), section 3.2.9). This number has been set arbitrarily by the protocol designers and does not map to any technical limitation of the protocol.
So don't send 7000 bytes in a single write. You need to keep it to at most 512 bytes to be standard compliant.
If you say that it works with another Bluetooth stack running on the GATT server, then I guess CoreBluetooth does not enforce/check the maximum length of 512 bytes on the client side (I haven't tested). Therefore I also guess the error code you see was sent by the remote device rather than by CoreBluetooth locally as a pre-check.
There are three different common ways of writing a characteristic on the protocol level (Bluetooth Core Specification v5.3, Vol 3, Part G (GATT), section 4.9 Characteristic Value Write):
Write Without Response (4.9.1)
Write Characteristic Value (4.9.3)
Write Long Characteristic Values (4.9.4)
Number one is unidirectional and does not result in a response packet. It uses a single ATT_WRITE_CMD packet where the value must be at most ATT_MTU-3 bytes in length. This length can be retrieved using maximumWriteValueLength(for:) with .withoutResponse. The requestMtu method in flutter_reactive_ble uses this method internally. If you execute many writes of this type rapidly, be sure to add flow control to avoid CoreBluetooth dropping outgoing packets before they are sent. This can be done through peripheralIsReadyToSendWriteWithoutResponse by simply always waiting for this callback after each write, before you write the next packet. Unfortunately, it seems flutter_reactive_ble does not implement this flow control mechanism.
Number two uses a single ATT_WRITE_REQ where the value must be at most ATT_MTU-3 bytes in length, just as above. Use the same approach as above to retrieve that maximum length (note that maximumWriteValueLength with .withResponse always returns 512 and is not what you want). Here however, either an ATT_WRITE_RSP will be returned on success or an error packet will be received with an error code. Only one ATT transaction can be outstanding at a time, which significantly lowers throughput compared to Write Without Response.
Number three uses a sequence of multiple ATT_PREPARE_WRITE_REQ packets (containing offset and value) followed by an ATT_EXECUTE_WRITE_REQ. The maximum length of the value in each chunk is ATT_MTU-5. Each _REQ packet also requires a corresponding _RSP packet before the sequence can continue (alternatively, an error code could be sent by the remote device). This approach is used when the characteristic value to be written is too long to be sent in a single ATT_WRITE_REQ.
For any of the above write methods, you are always also limited by the maximum attribute size of 512 bytes as per the specification.
Any Bluetooth stack I know of transparently chooses between "Write Characteristic Value" and "Write Long Characteristic Values" when you tell it to write with response, depending on the value length and MTU. On the server side it's a bit different. Some stacks put the burden on the user to combine all packets, but it seems NimBLE handles that on its own. From what I can see in the source code (https://github.com/apache/mynewt-nimble/blob/26ccb8af1f3ea6ad81d5d7cbb762747c6e06a24b/nimble/host/src/ble_att_svr.c#L2099), it can return the "Insufficient Resources" error code when it tries to allocate memory but fails (most likely due to too much buffered data). This is what might be happening in your case. To answer your first actual question, the standard itself says nothing more about this error code than "Insufficient Resources to complete the request".
The error has nothing to do with LE Data Length extension, which is simply an optimization for a lower layer (BLE Link Layer) that does not affect the functionality of the host stack. The L2CAP layer will take care of the reassembling of smaller link layer packets if necessary, and must always support up to the negotiated MTU without overflowing any buffers.
Now, to answer your second question: if you send very large amounts of data (7000 bytes), you must divide the data into multiple chunks and come up with a way to correctly reassemble them. Each chunk is written as a full characteristic value. When you do this, be sure to send values of at most ATT_MTU-3 bytes (but never larger than 512 bytes) to avoid the inefficient overhead of "Write Long Characteristic Values". It's then up to your application code to make sure you don't run out of memory in case too much data is sent.
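To complement the sender-side sketch above, here is a rough sketch of the receiving side (again plain Java with hypothetical names, just to show the bookkeeping): buffer incoming chunks and stop once the length announced in the header has arrived.

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class BleReassembler {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private int expectedLength = -1;

    // Call this for every chunk written to the characteristic; returns the
    // complete payload once everything has arrived, otherwise null.
    public byte[] onChunk(byte[] chunk) {
        buffer.write(chunk, 0, chunk.length);
        byte[] soFar = buffer.toByteArray();

        if (expectedLength < 0 && soFar.length >= 4) {
            expectedLength = ByteBuffer.wrap(soFar).getInt();   // read the 4-byte length header
        }
        if (expectedLength >= 0 && soFar.length >= 4 + expectedLength) {
            byte[] payload = new byte[expectedLength];
            System.arraycopy(soFar, 4, payload, 0, expectedLength);
            return payload;
        }
        return null;
    }
}

On a real peripheral this logic would live in the GATT server's write callback, with the chunk size on the central side chosen as min(ATT_MTU-3, 512).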

Graphite has null values between data points

I have an API that fetches data packets from different servers and formats this data into small JSON units. I wrote an algorithm that sends them to Graphite with the json2graphite command.
The sending works very well, and the incoming data doesn't look bad either.
Now the problem:
The data displayed in Graphite shows that each entry is followed by a null.
(Screenshot: the data points that should be connected.)
I am aware that these points can be connected using a function provided by the Graphite interface, but this doesn't help, because the Grafana boards keep jumping back and forth between a value and null.
Is there a way to tell Grafana to only show null if there was no data for more than 1 minute or so?
I already tried to fix the problem with the settings in storage-schemas.conf and storage-aggregation.conf, unfortunately without success.
storage-schemas.conf:
[default_1min_for_1day]
pattern = .*
retentions = 10s:6h,30s:8d,1m:31d,10m:1y,1h:5y
storage-aggregation.conf:
[default_average]
pattern = .*
xFilesFactor = 0
aggregationMethod = average
If you want to know anything more, just ask. :)
Grafana has an option to connect data points that are separated by nulls. You can see how to enable it in the screenshot under the Display Styles settings in Grafana's documentation.
In the Graphite composer you can also do it by specifying the connected line mode under Graph options.
Additionally, you could use Graphite's keepLastValue function to carry the last received value over gaps where there are nulls.
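For example, a render target like the following (the metric path is hypothetical) bridges up to 6 consecutive missing points with the last known value:

keepLastValue(collectd.server1.cpu.idle, 6)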
I haven't found a direct solution, but I will now try to minimize the interval between the entries. I noticed that the requests take much too long: 2-5 minutes.
There are probably too many servers, so the requests block the port for too long.
The problem is not solved yet, but I think I will mark it as solved if nobody posts a solution within 5 days.

Roscpp data time value

I am working in roscpp and publishing odometry data to a ROS node. Two classes are responsible for publishing this data. For safety reasons, it is sometimes necessary to call the destructor on the primary class and publish still data (an odometry pose that uses the same values, to let ROS know the robot is standing still) from the other class. I have already confirmed that the secondary class publishes the correct pose on the correct anchor.
Most of the time, after the switch to the secondary class, I run into a failed check stemming from the data time not being greater than the previous data's time.
The error message I get is this:
F0402 08:12:25.187301 9044 map_by_time.h:43] Check failed: data.time > std::prev(trajectory.end())->first (636898039451158060 vs. 636898039451158060)
As you can see, it thinks the data time is equal to the previous data time. I have confirmed that the timestamp being published from my code is around 1554227809034068 (microseconds since epoch), which matches the time I collected the data. After the switch, the timestamp is still correct: a value close to and slightly higher than the previous one.
I am trying to figure out why this error message has such a large number for the time, and why it does not match the published timestamp.

Lowest value from 2 payloads in Node-Red

I have an IoT system at home and two temperature sensors.
One of the sensors can be in direct sun for some hours of the day.
The real temperature is always the lower value, so sometimes temp1 and sometimes temp2.
What I want to achieve is:
read the temperature from sensor 1 (via MQTT)
read the temperature from sensor 2 (via MQTT)
compare the values
find the lower one and send it via MQTT
go back to reading in a loop
For this example I can simulate the readings with inject nodes.
How can I do that? I am new to Node-RED and have tried but without success.
Here is my flow:
[{"id":"fa6372cc.47f92","type":"tab","label":"Flow 8","disabled":false,"info":""},{"id":"5ac90e03.22da3","type":"join","z":"fa6372cc.47f92","name":"","mode":"custom","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"","joinerType":"str","accumulate":true,"timeout":"","count":"2","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":990,"y":340,"wires":[["f09774bf.3c8428","a197b84d.6a7338"]]},{"id":"f09774bf.3c8428","type":"debug","z":"fa6372cc.47f92","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","x":1130,"y":340,"wires":[]},{"id":"43900e79.98cd8","type":"change","z":"fa6372cc.47f92","name":"set payload value","rules":[{"t":"set","p":"payload","pt":"msg","to":"req.params.value","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":790,"y":340,"wires":[["5ac90e03.22da3"]]},{"id":"b71d9143.c03bd","type":"change","z":"fa6372cc.47f92","name":"set topic temp1","rules":[{"t":"set","p":"topic","pt":"msg","to":"temp1","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":560,"y":320,"wires":[["43900e79.98cd8"]]},{"id":"e87114aa.6cd1","type":"change","z":"fa6372cc.47f92","name":"set topic temp2","rules":[{"t":"set","p":"topic","pt":"msg","to":"temp2","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":560,"y":360,"wires":[["43900e79.98cd8"]]},{"id":"783c47fd.8dd58","type":"inject","z":"fa6372cc.47f92","name":"temp source 2","topic":"","payload":"12","payloadType":"num","repeat":"3","crontab":"","once":false,"onceDelay":"1.5","x":380,"y":360,"wires":[["e87114aa.6cd1"]]},{"id":"271dedab.aaa7b2","type":"inject","z":"fa6372cc.47f92","name":"temp source 1","topic":"","payload":"10","payloadType":"num","repeat":"2","crontab":"","once":false,"onceDelay":"1","x":380,"y":320,"wires":[["b71d9143.c03bd"]]},{"id":"a197b84d.6a7338","type":"mqtt out","z":"fa6372cc.47f92","name":"temperature","topic":"domoticz/in","qos":"","retain":"","broker":"7e3561ec.acad","x":1150,"y":280,"wires":[]},{"id":"7e3561ec.acad","type":"mqtt-broker","z":"","name":"Domoticz","broker":"192.168.6.11","port":"8084","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthRetain":"false","birthPayload":"","closeTopic":"","closeRetain":"false","closePayload":"","willTopic":"","willQos":"0","willRetain":"false","willPayload":""}]
One way to do it would be like this:
This stores the two temps in flow variables - the first flow initially sets them to a high number so that the "min" in "choose lower value" will work later. In this case I've used a change node that sets the payload to the JSONata expression
$min([$flowContext("temp1"), $flowContext("temp2")])
but there are a few ways you could choose to do it.
Here is the code to try:
[{"id":"6bc2755e.9feb9c","type":"debug","z":"f454a93f.0e89d8","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","x":990,"y":340,"wires":[]},{"id":"38bd03eb.f7d06c","type":"change","z":"f454a93f.0e89d8","name":"choose lower value","rules":[{"t":"set","p":"payload","pt":"msg","to":"$min([$flowContext(\"temp1\"), $flowContext(\"temp2\")])\t","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":790,"y":340,"wires":[["6bc2755e.9feb9c"]]},{"id":"9066677f.eb0358","type":"change","z":"f454a93f.0e89d8","name":"store temp1","rules":[{"t":"set","p":"temp1","pt":"flow","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":320,"wires":[["38bd03eb.f7d06c"]]},{"id":"a70c9b2a.e7db58","type":"change","z":"f454a93f.0e89d8","name":"store temp2","rules":[{"t":"set","p":"temp2","pt":"flow","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":360,"wires":[["38bd03eb.f7d06c"]]},{"id":"4bd27616.d022c8","type":"inject","z":"f454a93f.0e89d8","name":"temp source 2","topic":"","payload":"12","payloadType":"num","repeat":"","crontab":"","once":false,"onceDelay":"1.5","x":370,"y":360,"wires":[["a70c9b2a.e7db58"]]},{"id":"7378dd4f.3825b4","type":"inject","z":"f454a93f.0e89d8","name":"temp source 1","topic":"","payload":"10","payloadType":"num","repeat":"","crontab":"","once":false,"onceDelay":"1","x":370,"y":320,"wires":[["9066677f.eb0358"]]},{"id":"314eb0ec.85211","type":"inject","z":"f454a93f.0e89d8","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":true,"onceDelay":0.1,"x":370,"y":260,"wires":[["688646b.138a6b8"]]},{"id":"688646b.138a6b8","type":"change","z":"f454a93f.0e89d8","name":"set to high","rules":[{"t":"set","p":"temp1","pt":"flow","to":"999","tot":"num"},{"t":"set","p":"temp2","pt":"flow","to":"999","tot":"num"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":260,"wires":[[]]}]

How to paginate using Orient 3.0 streams

According to the streaming example at http://orientdb.com/docs/3.0.x/java/Java-Query-API.html, we can use the Orient result set streaming API as follows:
ODatabaseDocument db;
...
String statement = "SELECT FROM V WHERE name = ? and surname = ?";
OResultSet rs = db.query(statement, "John", "Smith");
rs.stream().forEach(x -> System.out.println(x.getProperty("age")));
rs.close();
This is fine but too trivial: what if we need to keep the rs/stream around? We can't very well close the result set, because we want to reuse the stream on a subsequent user request in a web application, say, in scenarios such as paging.
But regarding keeping the streams "alive", the Orient user guide says:
OResultSet is implemented as a paginated structure, that holds some iterators open during the iteration. This is true both in remote and in embedded usage.
You should always invoke OResultSet.close() at the end of the execution, to free resources.
OResultSet instances are automatically closed when you close the ODatabase that returned them.
It is important to always close result sets, even when they are converted to streams (after the stream is consumed).
Are there any best practices around this? As far as I can tell, we would need to:
1) Keep the Orient database connection open until the user's "paging" session is done (which could be, say, 5-10 minutes). Only when the user says "done" can we close the result set and close the database connection. The Orient database connection (and whatever stream it generated) thus becomes "private" to a single application user. Moreover, since every user request can be handled on a different thread, that database connection would need to be made active on the current thread before using it.
2) Use the Java Stream API to navigate through arbitrary subsets of the "arbitrarily" large result set. How would memory usage be handled by the underlying Orient db stream implementation? What determines the memory usage of a single rs/stream that is kept around for a while? What happens when we have thousands of open rs/streams, especially if each user has their own "private" rs/stream they're looking at?
3) If a given Orient database connection can only be used on a single thread at a time (an Orient requirement), how do we handle multiple users with their own custom long-lived rs/streams/connections? Does this mean that if we have 1000 clients using their own private rs/stream (that they hang on to for, say, 5 minutes), then we have to keep 1000 database connections open (i.e. one per user/rs)? What are the limits around this? This style is obviously quite different from the more typical execute-query/close-rs pattern for quick, stateless user interactions (naive paging that re-executes the query for each requested range, which can get expensive).
P.S. I realize that once we get a Java stream, we pretty much start just using the Java API itself - so I suppose that JOOQ streaming usage (for example) would be pretty similar to Orient streaming usage once you start getting into the Stream interfaces. I'm not familiar with the Java Streams API, but I suppose "How to paginate a list of objects in Java 8?" is a good place to start?
My conclusion is that streaming works well when scrolling through a large result set without consuming a large amount of memory or having to keep re-executing offset/limit queries (similar to forward-only scrolling over JDBC result sets). A typical use case is an export scenario.
For forward and backward paging, in Orient at least, you likely need one or more indexed properties and range queries - make sure the index is an SB-tree index so that it supports range queries.
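As a rough sketch of that range-query (keyset) style of paging - reusing the query API from the example above, with a hypothetical indexed name property and page size - each page is fetched relative to the last value seen on the previous page:

ODatabaseDocument db;
...
String lastSeenName = "Smith";   // hypothetical boundary value remembered from the previous page
String statement = "SELECT FROM V WHERE name > ? ORDER BY name LIMIT 20";
OResultSet rs = db.query(statement, lastSeenName);
rs.stream().forEach(x -> System.out.println(x.getProperty("name")));
rs.close();   // each page is a short-lived query, so nothing stays open between requests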
FYI, Solr has a cursor mechanism which works pretty well for forward pagination over sorted results - and if you keep some simple state markers on the client, you can also go back to results already encountered. "Go to" random pages is not supported by Solr cursors, but you can always re-sort/filter on some other criteria in order to move "useful" results to the top of the result set instead of deep paging (https://lucene.apache.org/solr/guide/6_6/pagination-of-results.html).
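For reference, a Solr cursor request looks roughly like this (the collection and fields are hypothetical); the sort must include the uniqueKey field as a tie-breaker, and the first request passes cursorMark=*:

/solr/mycollection/select?q=*:*&sort=timestamp desc,id asc&rows=20&cursorMark=*

Each response includes a nextCursorMark value, which you pass as cursorMark on the next request; when it stops changing, you have reached the end of the results.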
