Is MQTT supported in Cumulocity? - mqtt

Is it possible to receive MQTT messages from the Cumulocity API?
How can I retrieve the values of the following measurements with the Java client:
Analog Measurement
Motion Measurement
Thanks

Querying measurements is described here: http://cumulocity.com/guides/java/developing/, Section "Accessing events and measurements". There are currently no predefined Java classes for analog and motion measurements; however, you can still retrieve them as generic properties. Check the example on the web page and instead of
measurementFilter.byFragmentType(SignalStrength.class);
try
measurementFilter.byFragmentType("c8y_MotionMeasurement");
and instead of
measurement.get(SignalStrength.class);
try
measurement.getProperty("c8y_MotionMeasurement");
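Putting the pieces together, a rough sketch of such a query with the Java client could look like the following. This is only an illustration: the connected Platform instance is assumed to be set up as in the guide, and the exact method names may differ slightly between client versions.

import com.cumulocity.rest.representation.measurement.MeasurementRepresentation;
import com.cumulocity.sdk.client.Platform;
import com.cumulocity.sdk.client.measurement.MeasurementFilter;

public class MotionMeasurementQuery {

    // 'platform' is assumed to be a connected Platform instance, set up as shown in the guide
    public static void printMotionMeasurements(Platform platform) {
        // Filter by the fragment name instead of a predefined class
        MeasurementFilter filter = new MeasurementFilter()
                .byFragmentType("c8y_MotionMeasurement");

        for (MeasurementRepresentation measurement : platform.getMeasurementApi()
                .getMeasurementsByFilter(filter)
                .get()                 // first page of results
                .getMeasurements()) {
            // Read the fragment as a generic property, since no predefined class exists
            Object motion = measurement.getProperty("c8y_MotionMeasurement");
            System.out.println(motion);
        }
    }
}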
You can also create the Java classes representing the measurements on your own by "stealing" and modifying one of the existing classes:
https://bitbucket.org/m2m/cumulocity-clients-java/src/53216dc587e24476e0578b788672416e8566f92b/device-capability-model/src/main/java/c8y/?at=default

Related

'GetPositionLowerLimits' and 'GetPositionUpperLimits' do not support querying by ModelInstance like other functions

What I find in the Drake API is that 'GetPositionLowerLimits' and 'GetPositionUpperLimits' do not support querying by ModelInstance like other functions do.
Does anybody know how to query this when I have multiple robots and I am interested in the lower and upper limits for every robot?
I believe you can take the result of GetPositionLowerLimits (or GetPositionUpperLimits) and feed it through GetPositionsFromArray to select just one model instance at a time.
The model instances documentation has some more information.

How to define Alerts with exception in InfluxDB/Kapacitor

I'm trying to figure out the best, or at least a reasonable, approach to defining alerts in InfluxDB. For example, I might use the CPU batch tickscript that comes with Telegraf. This could be set up as a global monitor/alert for all hosts being monitored by Telegraf.
What is the approach when you want to deviate from the above setup for a host, i.e. instead of X% for a specific server we want to alert on Y%?
I'm happy that a distinct tickscript could be created for the custom values, but how do I go about excluding the host from the original 'global' one?
This is a simple scenario, but it needs to meet the needs of 10,000 hosts, of which there will be hundreds of exceptions, and it will also encompass tens or hundreds of global alert definitions.
I'm struggling to see how you could use the platform as the primary source of monitoring/alerting.
As said in the comments, you can use the sideload node to achieve that.
Say you want to ensure that your InfluxDB servers are not overloaded. You may want to allow 100 measurements by default. Only on one server, which happens to receive a massive number of data points, do you want to limit it to 10 (a value which the _internal database easily exceeds, but which is good enough for our example).
Given the following excerpt from a TICKscript
var data = stream
    |from()
        .database(db)
        .retentionPolicy(rp)
        .measurement(measurement)
        .groupBy(groupBy)
        .where(whereFilter)
    |eval(lambda: "numMeasurements")
        .as('value')

var customized = data
    |sideload()
        .source('file:///etc/kapacitor/customizations/demo/')
        .order('hosts/host-{{.hostname}}.yaml')
        .field('maxNumMeasurements', 100)
    |log()

var trigger = customized
    |alert()
        .crit(lambda: "value" > "maxNumMeasurements")
and the name of the server with the exception being influxdb, with the file /etc/kapacitor/customizations/demo/hosts/host-influxdb.yaml looking as follows:
maxNumMeasurements: 10
A critical alert will be triggered if value (and hence numMeasurements) exceeds 10 AND the hostname tag equals influxdb, OR if value exceeds 100.
There is an example in the documentation that handles scheduled downtimes using sideload.
Furthermore, I have created an example available on GitHub using docker-compose.
Note that there is a caveat with the example: the alert flaps because of a second, dynamically generated database. But it should be sufficient to show how to approach the problem.
What is the cost of using sideload nodes in terms of performance and computation if you have over 10,000 servers?
Managing alerts manually in Chronograf/Kapacitor is not feasible for a large number of custom alerts.
At AMMP Technologies we need to manage alerts per database, customer and customer_objects. The number can go into the thousands. We've opted for a custom solution in which we keep a standard set of template tickscripts (not to be confused with Kapacitor templates) and provide an interface to the user that exposes only the relevant variables. A service (written in Python) then combines the values for those variables with a tickscript and, using the Kapacitor API, deploys (updates or deletes) the task on the Kapacitor server. This is automated so that data for new customers/objects is combined with the templates and automatically deployed to Kapacitor.
You obviously need to design your tasks to be specific enough that they don't overlap, and generic enough that it isn't too much work to create tasks for every little thing.
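The service described above is written in Python, but the same idea can be sketched in any language that can reach the Kapacitor HTTP task API. Below is a rough, hypothetical illustration (not the AMMP implementation): a template TICKscript is rendered with per-host values and POSTed to /kapacitor/v1/tasks, using the endpoint and JSON fields documented for Kapacitor 1.x; the host name, threshold and database names are made up.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KapacitorTaskDeployer {

    // Base template TICKscript; the placeholders are filled in per host/customer
    private static final String TEMPLATE =
            "stream\n" +
            "    |from()\n" +
            "        .measurement('cpu')\n" +
            "        .where(lambda: \"host\" == '%HOST%')\n" +
            "    |alert()\n" +
            "        .crit(lambda: \"usage_idle\" < %THRESHOLD%)\n";

    public static void main(String[] args) throws Exception {
        // Render the template for one specific host with its custom threshold
        String script = TEMPLATE.replace("%HOST%", "server-042")
                                .replace("%THRESHOLD%", "10");

        // JSON body for the Kapacitor task API (field names per the Kapacitor 1.x docs)
        String body = "{"
                + "\"id\": \"cpu_alert_server-042\","
                + "\"type\": \"stream\","
                + "\"dbrps\": [{\"db\": \"telegraf\", \"rp\": \"autogen\"}],"
                + "\"status\": \"enabled\","
                + "\"script\": \"" + script.replace("\"", "\\\"").replace("\n", "\\n") + "\""
                + "}";

        // Create the task; updates and deletions go through the same API
        // (PATCH/DELETE on /kapacitor/v1/tasks/<id>)
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9092/kapacitor/v1/tasks"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}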

Lowest value from 2 payloads in Node-Red

I have an IoT system at home with two temperature sensors.
One of the sensors can be in direct sun for some hours of the day.
The real temperature is always the lower value, so sometimes it comes from temp1 and sometimes from temp2.
What I want to achieve is:
read the temperature from sensor 1 (via MQTT)
read the temperature from sensor 2 (via MQTT)
compare the values
find the lowest one and send it via MQTT
go back to reading in a loop
For this example I can simulate the readings with inject nodes.
How can I do that? I am new to Node-RED and have tried but without success.
Here is my flow:
[{"id":"fa6372cc.47f92","type":"tab","label":"Flow 8","disabled":false,"info":""},{"id":"5ac90e03.22da3","type":"join","z":"fa6372cc.47f92","name":"","mode":"custom","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"","joinerType":"str","accumulate":true,"timeout":"","count":"2","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":990,"y":340,"wires":[["f09774bf.3c8428","a197b84d.6a7338"]]},{"id":"f09774bf.3c8428","type":"debug","z":"fa6372cc.47f92","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","x":1130,"y":340,"wires":[]},{"id":"43900e79.98cd8","type":"change","z":"fa6372cc.47f92","name":"set payload value","rules":[{"t":"set","p":"payload","pt":"msg","to":"req.params.value","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":790,"y":340,"wires":[["5ac90e03.22da3"]]},{"id":"b71d9143.c03bd","type":"change","z":"fa6372cc.47f92","name":"set topic temp1","rules":[{"t":"set","p":"topic","pt":"msg","to":"temp1","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":560,"y":320,"wires":[["43900e79.98cd8"]]},{"id":"e87114aa.6cd1","type":"change","z":"fa6372cc.47f92","name":"set topic temp2","rules":[{"t":"set","p":"topic","pt":"msg","to":"temp2","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":560,"y":360,"wires":[["43900e79.98cd8"]]},{"id":"783c47fd.8dd58","type":"inject","z":"fa6372cc.47f92","name":"temp source 2","topic":"","payload":"12","payloadType":"num","repeat":"3","crontab":"","once":false,"onceDelay":"1.5","x":380,"y":360,"wires":[["e87114aa.6cd1"]]},{"id":"271dedab.aaa7b2","type":"inject","z":"fa6372cc.47f92","name":"temp source 1","topic":"","payload":"10","payloadType":"num","repeat":"2","crontab":"","once":false,"onceDelay":"1","x":380,"y":320,"wires":[["b71d9143.c03bd"]]},{"id":"a197b84d.6a7338","type":"mqtt out","z":"fa6372cc.47f92","name":"temperature","topic":"domoticz/in","qos":"","retain":"","broker":"7e3561ec.acad","x":1150,"y":280,"wires":[]},{"id":"7e3561ec.acad","type":"mqtt-broker","z":"","name":"Domoticz","broker":"192.168.6.11","port":"8084","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthRetain":"false","birthPayload":"","closeTopic":"","closeRetain":"false","closePayload":"","willTopic":"","willQos":"0","willRetain":"false","willPayload":""}]
One way to do it would be like this:
This stores the two temps in flow context variables; the first flow initially sets them to a high number so that the "min" in "choose lower value" will work later on. In this case I've used a change node setting the payload to the JSONata expression
$min([$flowContext("temp1"), $flowContext("temp2")])
but there are a few ways you could choose to do it.
Here is the code to try:
[{"id":"6bc2755e.9feb9c","type":"debug","z":"f454a93f.0e89d8","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","x":990,"y":340,"wires":[]},{"id":"38bd03eb.f7d06c","type":"change","z":"f454a93f.0e89d8","name":"choose lower value","rules":[{"t":"set","p":"payload","pt":"msg","to":"$min([$flowContext(\"temp1\"), $flowContext(\"temp2\")])\t","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":790,"y":340,"wires":[["6bc2755e.9feb9c"]]},{"id":"9066677f.eb0358","type":"change","z":"f454a93f.0e89d8","name":"store temp1","rules":[{"t":"set","p":"temp1","pt":"flow","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":320,"wires":[["38bd03eb.f7d06c"]]},{"id":"a70c9b2a.e7db58","type":"change","z":"f454a93f.0e89d8","name":"store temp2","rules":[{"t":"set","p":"temp2","pt":"flow","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":360,"wires":[["38bd03eb.f7d06c"]]},{"id":"4bd27616.d022c8","type":"inject","z":"f454a93f.0e89d8","name":"temp source 2","topic":"","payload":"12","payloadType":"num","repeat":"","crontab":"","once":false,"onceDelay":"1.5","x":370,"y":360,"wires":[["a70c9b2a.e7db58"]]},{"id":"7378dd4f.3825b4","type":"inject","z":"f454a93f.0e89d8","name":"temp source 1","topic":"","payload":"10","payloadType":"num","repeat":"","crontab":"","once":false,"onceDelay":"1","x":370,"y":320,"wires":[["9066677f.eb0358"]]},{"id":"314eb0ec.85211","type":"inject","z":"f454a93f.0e89d8","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":true,"onceDelay":0.1,"x":370,"y":260,"wires":[["688646b.138a6b8"]]},{"id":"688646b.138a6b8","type":"change","z":"f454a93f.0e89d8","name":"set to high","rules":[{"t":"set","p":"temp1","pt":"flow","to":"999","tot":"num"},{"t":"set","p":"temp2","pt":"flow","to":"999","tot":"num"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":260,"wires":[[]]}]

General principle to implement node-based workflow as seen in Unreal, Blender, Alteryx and the like?

This topic is difficult to Google, because of "node" (not node.js), and "graph" (no, I'm not trying to make charts).
Despite being a pretty well rounded and experienced developer, I can't piece together a mental model of how these sorts of editors get data in a sensible way, in a sensible order, from node to node. Especially in the Alteryx example, because a Sort module, for example, needs its entire upstream dataset before proceeding. And some nodes can send a single output to multiple downstream consumers.
I was able to understand trees and what not in my old data structures course back in the day, and successfully understand and adapt the basic graph concepts from https://www.python.org/doc/essays/graphs/ in a real project. But that was a static structure and data weren't being passed from node to node.
Where should I be starting, and/or what concept am I missing that I could use to implement something like this? Something to let users chain together some boxes to slice and dice text files or data records with some basic operations like sort and join? I'm using C#, but the answer ought to be language independent.
This paradigm is called dataflow programming; it works with streams of data that are passed from instruction to instruction to be processed.
Dataflow programs can be written in textual or visual form, and besides the software you have mentioned there are a lot of programs that include some sort of dataflow language.
To create your own dataflow language you have to do the following (a minimal code sketch follows this list):
Create program modules or objects that represent your processing nodes, each realizing a different sort of data processing. Processing nodes usually have one or more data inputs and one or more data outputs, and implement some data-processing algorithm inside them. Nodes may also have control inputs that determine how a given node processes data. A typical dataflow algorithm calculates an output data sample from one or many input data stream values, as FIR filters do, for example. However, a processing algorithm can also feed back data values (output values are mixed in some way with input values), as in IIR filters, or accumulate values in some way to calculate an output value.
Create a standard API for passing data between processing nodes. It can differ for different kinds of data and control signals, but it must be standard because processing nodes should 'understand' each other. Data is usually passed as plain values. Control signals can be plain values, events, or a more advanced control language, depending on your needs.
Create an arrangement to link your nodes and to pass data between them. You can create your own program machinery or use standard mechanisms like pipes, message queues, etc. For example, this functionality can be implemented as a tree-like structure whose nodes are your processing nodes, where each node holds references to the next nodes and to the appropriate inputs that receive data coming from the output of the current node.
Create some kind of node iterator that starts from the beginning of the dataflow graph and visits each processing node, where it:
provides the next data input values
invokes the node's data-processing methods
updates the data output values
passes the updated output values to the inputs of downstream processing nodes
Create a tool for configuring node parameters and the links between them. It can be just a simple text file edited with a text editor, or a sophisticated visual editor with a GUI for drawing the dataflow graph.
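To make those steps concrete, here is a deliberately tiny, hypothetical sketch (not taken from any of the tools mentioned): every node implements one small interface (step 2), nodes are wired output-to-input (step 3), and values are simply pushed through the graph one sample at a time (step 4). A real system would add typed ports, buffering and smarter scheduling, but the skeleton is the same.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Step 2: a uniform API that every processing node implements.
interface Node {
    void accept(double value);          // receive a value on the (single) input
    void addDownstream(Node consumer);  // step 3: wire this node's output to another node's input
}

// Step 1: a generic processing node wrapping some algorithm (here a 1-in/1-out function).
class ProcessingNode implements Node {
    private final Function<Double, Double> algorithm;
    private final List<Node> downstream = new ArrayList<>();

    ProcessingNode(Function<Double, Double> algorithm) { this.algorithm = algorithm; }

    public void addDownstream(Node consumer) { downstream.add(consumer); }

    public void accept(double value) {
        double out = algorithm.apply(value);      // invoke the node's processing method
        for (Node n : downstream) n.accept(out);  // fan the output out to all consumers
    }
}

// A sink node that just prints whatever arrives.
class PrintNode implements Node {
    public void accept(double value) { System.out.println("result: " + value); }
    public void addDownstream(Node consumer) { /* a sink has no output */ }
}

public class TinyDataflow {
    public static void main(String[] args) {
        // Build the graph: source samples -> scale -> offset -> print
        ProcessingNode scale = new ProcessingNode(x -> x * 2.0);
        ProcessingNode offset = new ProcessingNode(x -> x + 1.0);
        PrintNode print = new PrintNode();
        scale.addDownstream(offset);
        offset.addDownstream(print);

        // Step 4: the "iterator"/scheduler, here just pushing a stream of samples
        // into the head of the graph one at a time.
        double[] samples = {1.0, 2.0, 3.0};
        for (double s : samples) scale.accept(s);
    }
}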
Regarding your note about the Sort module in Alteryx: perhaps data values are just accumulated inside this module and then sorted once the upstream input is complete.
Here you can find an even more detailed description of dataflow programming languages.

How to access "key" in combine.perKey in beam

In How to create custom Combine.PerKey in beam sdk 2.0, I asked and got a correct answer on how to create a custom Combine.PerKey in the new Beam SDK 2.0. However, I now need to create a custom Combine.PerKey such that, within my custom combine logic, I can access the contents of the key. This was easily possible in Dataflow 1.x, but in the new Beam SDK 2.0 I'm unsure how to do so. Any little code snippet/example would be extremely useful.
EDIT #1 (per Ben Chambers's request)
The real use case is hard to explain, but I'm going to try:
We have a 3d space composed of millions of little hills. We try to determine the apex of these millions of hills as follows: we create billions of "rectangular probes" for the whole 3d space, and then we ask each of these billions of probes to "move" in a greedy way to the apex. Once it hits the apex, it stops. The probe then returns the apex and itself. The apex is the KEY for which we'll do a custom combine by key.
Now, the custom combine function is ultimately going to return a final object (called a feature) which is derived from all the probes that reach the same apex (i.e. the same key). When generating this "feature" object, we need to know information about the final apex/key (i.e. the top of the hill). Hence, I need this key info.
One way to solve this is using a GroupByKey, but that was slow (at least in Dataflow 1.x); we got it to be fast (in Dataflow 1.x) using a custom combine fn. So, we'd like the key. That said, GroupByKey works in Beam SDK 2.0.
Alternatively, we could stick the "apex" information into the "probe" objects itself, but this means that each of our billions of probe objects now needs to be tripled in size just to hold this apex information (and this apex information repeats itself, since there are only say 1 million apexes but 1 billion probes, so this intuitively feels highly inefficient.)
Rather than relying on the CombineFn to compute the entire result, could you instead have the CombineFn compute some partial result based only on information about the probes? Then your Combine.perKey(...) returns a PCollection<KV<Apex, InfoAboutProbes>> and you can use a ParDo to combine the information about the apex with the summary information about the probes. This allows you to use the CombineFn to efficiently combine information about many probes, while using a ParDo to access the key.
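For illustration, here is a minimal, runnable sketch of that shape in the Beam Java SDK. String stands in for the apex key, Double for a probe score, and BestScoreFn stands in for the real probe-summarising CombineFn; all of these are placeholders rather than your actual types.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Combine;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class CombineThenKeyAwareParDo {

    // Stand-in for the real probe-summarising CombineFn: it keeps the best probe score
    // per key. Note that it never needs to see the key itself.
    static class BestScoreFn extends Combine.CombineFn<Double, Double, Double> {
        @Override public Double createAccumulator() { return Double.NEGATIVE_INFINITY; }
        @Override public Double addInput(Double acc, Double input) { return Math.max(acc, input); }
        @Override public Double mergeAccumulators(Iterable<Double> accs) {
            Double merged = createAccumulator();
            for (Double a : accs) merged = Math.max(merged, a);
            return merged;
        }
        @Override public Double extractOutput(Double acc) { return acc; }
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Key = apex (here just a String id), value = a probe score.
        PCollection<KV<String, Double>> probesByApex = p.apply(Create.of(
                KV.of("apex-1", 0.7), KV.of("apex-1", 0.9), KV.of("apex-2", 0.4)));

        // Combine.perKey reduces the (potentially huge) set of probes per apex to a small summary.
        PCollection<KV<String, Double>> summaryPerApex =
                probesByApex.apply(Combine.<String, Double, Double>perKey(new BestScoreFn()));

        // A cheap ParDo then sees both the key (the apex) and the combined summary,
        // and can build the final "feature" from them.
        PCollection<String> features = summaryPerApex.apply(ParDo.of(
                new DoFn<KV<String, Double>, String>() {
                    @ProcessElement
                    public void processElement(ProcessContext c) {
                        String apex = c.element().getKey();   // the key is accessible here
                        Double summary = c.element().getValue();
                        c.output("feature(" + apex + ", " + summary + ")");
                    }
                }));

        p.run().waitUntilFinish();
    }
}

The key never has to be carried inside each probe object: the CombineFn does the heavy per-key reduction without seeing the key, and the follow-up ParDo gets the key for free from the KV element.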

Resources