How to use a custom timestamp property in IoT Central charts X axis - azure-iot-sdk

On an IoT Central app, is there a way to use a custom timestamp property on the X axis of the charts displayed on the dashboards?
Here's a sample of the raw data:
{
  "_eventcreationtime": "2022-09-05T18:05:18.316Z",
  "Controller": {
    "Temperature4K_1": 71.468,
    "Temperature4K_2": 61.428,
    "Temperature40K_1": 39.135,
    "Temperature40K_2": 32.69,
    "TelemetryMagnetID": "Ex accusamus non commodi id ab excepturi facere et.",
    "Timestamp": "2023-01-07T19:50:11.189Z"   <- I want to use this Timestamp as the X axis
  },
  "_eventtype": "Telemetry",
  "_timestamp": "2022-09-05T18:05:18.369Z"   <- Looks like the charts always use this timestamp on the X axis
}
I would like to use Controller.Timestamp as the value on the X axis of the charts, but it looks like _timestamp is always used.
If this is the case, is there a way to set the Controller.Timestamp property as another system property before the message is sent by the device (using the C# IoT Device SDK), so that I can use the value of the source Timestamp on the X axis of the charts?

Posting an answer just in case others need this information.
After reaching out to Microsoft support, I got the following answer:
You can use the "iothub-creation-time-utc" property in message properties to set the source timestamp. If this property is set, IoT Central will use the timestamp value of this field for dashboards/charts. If the property is not set, IoT Central will use the enqueued time.
So adding the code below addressed my question:
// timestamp is an ISO 8601 UTC string, e.g. "2023-01-07T19:50:11.189Z"
message.Properties.Add("iothub-creation-time-utc", timestamp);
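For comparison, here is a minimal sketch of the same idea with the Python azure-iot-device SDK (a sketch only: the connection string and payload are placeholders, and custom_properties is how that SDK exposes message properties; verify against the SDK docs for your version):
from datetime import datetime, timezone
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string for the device
client = IoTHubDeviceClient.create_from_connection_string("<device connection string>")

msg = Message('{"Controller": {"Temperature4K_1": 71.468}}')
# Ask IoT Central to treat this value as the message creation time (and hence the chart X axis)
msg.custom_properties["iothub-creation-time-utc"] = datetime.now(timezone.utc).isoformat()

client.send_message(msg)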
There's also a related question here: Provide timestamp in message to IoT central

Related

First timestamp of max value in Prometheus graph

I am trying to find the timestamp of the first occurrence of the max value (the cursor point in the image below).
I wrote this query:
min(timestamp(max(jmeter_count_total{label="GET - Company Updates - ua_users_company-updates"})))
But it's returning the latest (max) timestamp of the max value.
I am not able to grab the value highlighted by the cursor in the image below (the minimum value); instead, I get the highest value when I use the above query.
I've played with this for a bit and I think this may work (take it with a grain of salt, due to limited testing).
TLDR - the query (using only foo for brevity):
min_over_time((timestamp(foo) and (foo == scalar(max_over_time(foo[2h]))))[1h:])
This portion of the query:
foo == scalar(max_over_time(foo[2h]))
returns only the samples where foo equals the max value of foo over the last 2h interval. To retrieve the timestamps of those samples, we apply the timestamp function and use the previous clause as a filter:
timestamp(foo) and (foo == scalar(max_over_time(foo[2h])))
Finally, we only want the first (lowest) timestamp value over the time window, which is what the outer min_over_time over the nested subquery should do.
I fiddled with the online Prometheus demo using one of the metrics present there. You can check the queries here.

Promql query not working "cpu_usage_value or memory_usage_value"

According to the Prometheus docs:
vector1 or vector2 results in a vector that contains all original elements (label sets + values) of vector1 and additionally all elements of vector2 which do not have matching label sets in vector1
But the above query only returns cpu_usage_value.
I'm a PromQL beginner; pardon me if I've understood the docs wrong.
The or operator ignores metric names when it compares label sets between the left-hand and right-hand sides of or, so series that differ only by metric name are treated as duplicates and only the left-hand side is returned. See these docs.
There are the following solutions:
Explicitly mention the __name__ label (aka the metric name) in the list of labels to take into account when matching series by their label sets: foo or on(__name__) bar returns series with both foo and bar names.
Enumerate the needed metric names in the series selector regexp: {__name__=~"foo|bar"} returns series with both foo and bar names (in your case, {__name__=~"cpu_usage_value|memory_usage_value"}).
Use the union function from MetricsQL: union(foo, bar) returns series with both foo and bar names. Note that this solution only works in VictoriaMetrics (a Prometheus-like system I work on); it doesn't work in Prometheus :(

how can I refactor this py2neo v4 code to use neo4j 3.4 temporal data types?

I'm stuck trying to add a date_accepted property to an upload node that represents a scientific paper. Previously, I would have just added a timetree node. However, py2neo v4 no longer supports GregorianCalendar (shame). How would I convert this code snippet to use one of the new temporal data types? I've looked at the docs & online but I'm not yet savvy enough I'm afraid.
from datetime import date, datetime  # ??? how to use this...

def getAccepted(year_accepted, month, day):
    with open('/home/pmy/pdf/id.txt') as f:
        id = f.read()
    matcher = NodeMatcher(graph)
    upload = matcher.match("Upload", id=id).first()
    a = year_accepted + month + day
    d = datetime.strptime(a, '%Y%m%d').strftime('%Y-%m-%d')
    # >>> HOW TO CONVERT d TO A TEMPORAL DATA TYPE HERE? <<<
    try:
        graph.merge(upload)
        upload['accepted_date'] = d
        graph.push(upload)
    except IndexError as e:
        print("type error: " + str(e))
        pass
    return 0
This works but it pushes the datetime string whereas I want to push a neotime temporal date...
It is possible to insert the datetime variable d above into something like this query below, which also works, but I'm winging this & suspect there's a better way...
query='''UNWIND [date({param})] AS date RETURN date'''
result=graph.run(query, param=d).data()
print(result)
which returns
[{'date': neotime.Date(2010, 10, 23)}]
So I could maybe extract the value & push that to the graph? Is this what the developers intended? The Docs are terse and aimed at proper programmers so IDK
Maybe
accepted=result[0].get('date') # <class 'neotime.Date'>
& push that to the graph perhaps?
The py2neo v4 neotime temporal types are very recent and there isn't much documentation or many basic tutorials to adapt, as far as I know. Hence this long-winded post. Anyone out there with experience care to comment?
Another user posted a similar question here: https://stackoverflow.com/a/61989193/13593049
Essentially, you need to use the neotime package for your dates and times if you want to use Neo4j data types in your graph. (Documentation)
neotime also has functionality to convert neotime objects into datetime objects.
import neotime

date_accepted = neotime.Date(2020, 5, 25)
print(date_accepted.to_native())
# datetime.date(2020, 5, 25)
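To tie that back to the snippet in the question, a minimal sketch (assuming graph, NodeMatcher, and the matched upload node are set up as in the question; py2neo v4 should store a neotime.Date as a native Neo4j Date when the node is pushed):
import neotime

def to_accepted_date(year_accepted, month, day):
    # Build a neotime.Date directly from the integer parts instead of formatting a string
    return neotime.Date(int(year_accepted), int(month), int(day))

# Hypothetical usage inside getAccepted(), after matching the upload node:
# upload['accepted_date'] = to_accepted_date(year_accepted, month, day)
# graph.push(upload)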

InfluxDB design issue

I am using InfluxDB and the line protocol to insert a large set of data into the database. The data I am getting is in the form of key-value pairs, where the key is a long string containing hierarchical data and the value is a simple integer.
Sample key-value data:
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/allocations
value = 500
/path/units/unit/subunits/subunit[name\='NAME2']/memory/chip/application/filter/allocations
value = 100
(Note: NAME2 instead of NAME1)
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/free
value = 700
(Note: the leaf is free instead of allocations)
/path/units/unit/subunits/subunit[name\='NAME2']/memory/graphics/application/filter/swap
value = 600
(Note: graphics instead of chip in the path)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/size
value = 400
(Note: a different path, but identical up to the subunit)
/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free
value=100
(Note: the same path, but the last element differs)
Below is the line protocol I am using to insert the data:
interface,Key=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free valueData=500
I am using one measurement, namely interface, with one tag and one field set. But this DB design is making it hard to query the data.
How can I design the database so that I can run queries like "get all records for the subunit named NAME1" or "get all size data for every hard disk"?
Thanks in advance.
The schema I'd recommend is the following:
interface,filename=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free value=500
Where filename is a tag and value is the field.
Given that the cardinality of filename is in the thousands, this schema should work well.
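For instance, a query sketch (assuming InfluxDB 1.x and the influxdb Python client; the connection details and the regular expression are illustrative only):
from influxdb import InfluxDBClient

# Hypothetical connection details
client = InfluxDBClient(host='localhost', port=8086, database='mydb')

# All records whose path belongs to subunit NAME1 (regex match on the filename tag)
result = client.query(r"SELECT value FROM interface WHERE filename =~ /subunit\[name='NAME1'\]/")
for point in result.get_points():
    print(point)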

D2L Valence: Retrieve Final Grades

Is there a way to get both the Final Calculated Grade and the Final Adjusted Grade? I would like to be able to compare them.
I'm wondering:
GET /d2l/api/le/(version)/(orgUnitId)/grades/values/(userId)/
Retrieve all the grade objects for a particular user assigned in an org unit.
Return. This action returns a JSON array of GradeValue blocks.
Grade.GradeValue{
    "DisplayedGrade": <string>,
    "GradeObjectIdentifier": <string:D2LID>,
    "GradeObjectName": <string>,
    "GradeObjectType": <number:GRADEOBJ_T>,
    "GradeObjectTypeName": <string>|null,
    "PointsNumerator": <number>|null,
    "PointsDenominator": <number>|null,
    "WeightedDenominator": <number>|null,
    "WeightedNumerator": <number>|null
}
and then look at the "GradeObjectType" for "7" or "8"?
Grade object type / Value
FinalCalculated / 7 ^
FinalAdjusted / 8 ^
(I wonder what is meant by "^ Direct creation of these types through these APIs is not supported.")
It seems the best solution (or workaround) is to retrieve the final grade, determine what column it's coming from, and then subtract or add (+1 / -1) to the objectID to get the corresponding Calculated or Adjusted column.
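To make that concrete, a rough sketch in Python of pulling the grade values and picking out the type 7 / type 8 entries (assuming an already-authenticated Valence session; the host, API version, org unit and user IDs below are placeholders):
import requests

# Placeholders: supply your own host, LE API version, org unit id and user id
route = "https://example.brightspace.com/d2l/api/le/1.30/1234/grades/values/5678/"

session = requests.Session()  # assumed to already carry valid Valence authentication
grade_values = session.get(route).json()

# GradeObjectType 7 = FinalCalculated, 8 = FinalAdjusted
finals = {g["GradeObjectType"]: g for g in grade_values if g.get("GradeObjectType") in (7, 8)}
print("Calculated:", finals.get(7))
print("Adjusted:", finals.get(8))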
I believe that there currently is no way to retrieve the final adjusted grade value through the Valence Learning Framework API, only the final calculated grade value. Additionally, end-user type callers can only see the final grade when the grade gets released: up until that point, only users capable of setting a final grade value (or perhaps releasing it?) can see the final grade value for a user.
