How to retrieve telemetry for all of a customer's devices from ThingsBoard

Suppose I have these ThingsBoard entities: devices D1 and D2, and a customer C1.
First, I assigned device D1 to customer C1:
--> D1-C1
Later, I unassigned D1 from C1 and assigned device D2 to customer C1:
--> D2-C1
Is it possible to retrieve all the telemetry that customer C1 has used?
In this case I want to retrieve customer C1's data: all telemetry from devices D1 and D2 (please see attached image).
I don't know how to retrieve all that data from many devices. Should I use a rule chain or the REST API?

A rule chain would help if each device had a unique set of telemetry names. In that case, every time new data arrives at the server you could redirect it from the device to the customer using the Change Originator node.
In your case, try a two-step REST approach:
1. Get the list of the customer's devices.
2. Get the telemetry for each device.
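The two steps above can be sketched with the ThingsBoard REST API roughly as follows (a sketch, not a complete client: the endpoint paths are from ThingsBoard 3.x and may differ in your version, so check your installation's Swagger UI; the host and key names here are assumptions):

```python
BASE_URL = "http://localhost:8080"  # assumption: your ThingsBoard host

def customer_devices_url(base, customer_id, page_size=100, page=0):
    """Step 1: endpoint listing a customer's devices."""
    return f"{base}/api/customer/{customer_id}/devices?pageSize={page_size}&page={page}"

def device_timeseries_url(base, device_id, keys, start_ts, end_ts):
    """Step 2: endpoint fetching timeseries telemetry for one device."""
    return (f"{base}/api/plugins/telemetry/DEVICE/{device_id}"
            f"/values/timeseries?keys={','.join(keys)}"
            f"&startTs={start_ts}&endTs={end_ts}")

def fetch_customer_telemetry(token, customer_id, keys, start_ts, end_ts):
    """Run both steps; `token` is a JWT obtained from /api/auth/login."""
    import requests  # imported lazily so the URL helpers work without it
    headers = {"X-Authorization": f"Bearer {token}"}
    devices = requests.get(customer_devices_url(BASE_URL, customer_id),
                           headers=headers).json().get("data", [])
    telemetry = {}
    for device in devices:
        device_id = device["id"]["id"]
        telemetry[device_id] = requests.get(
            device_timeseries_url(BASE_URL, device_id, keys, start_ts, end_ts),
            headers=headers).json()
    return telemetry
```

Note that once D1 is unassigned, it no longer appears in customer C1's device list, although its telemetry stays stored on the device. To cover historical assignments like D1 in the question, you would need to keep your own record of past device-to-customer assignments and query those device IDs directly in step 2.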

Related

Duplicate developer metadata when using cut + paste

I create developer metadata for each of the columns in the sheet.
If a new column is created, I track it and create another developer metadata entry for it.
The process above works great until the user starts to move columns using Cut (cmd+x) and Paste (cmd+v).
When you cut and paste, the developer metadata is transferred to the destination column, and as a result you end up with two metadata entries on the same column.
It gets more complicated when the user does this multiple times.
Eventually, when I collect the changes, I see more than one metadata entry on a given column and I don't know which of them to choose.
Do you have an idea how I can deal with this scenario?
The flow explained:
The user connects his Google Sheets document.
I go over the sheet and create metadata on the columns.
Name [444] | id [689] | Country [997]
-----------+----------+--------------
Du         | 10       | US
Re         | 30       | US
The user makes multiple changes to the sheet. One of the changes is cutting the Country column and pasting it over id. As a result, the id column is removed, but the metadata we created for it stays on (this is how the Google Sheets API implements it).
Here is the new state:
Name [444] | Country [689, 997]
-----------+-------------------
Du         | US
Re         | US
As you can see, we now have two metadata IDs on the same column (Country). Why is that a problem for me? When I periodically collect the changes, I re-collect the metadata from the column, and when I encounter two metadata IDs on the same column I don't know which of them to choose.
So why can't I just select one at random? Because I already have an existing mapping on my end, and I don't know which of the two it corresponds to now. Take into account that the user may also have changed the column name, so I can't rely on the column label.
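For reference, the per-column metadata described in the flow would be created with the Sheets API `batchUpdate` method and a `createDeveloperMetadata` request per column. This is a sketch of the request builder; the `metadataKey` name and the idea of storing the column label in `metadataValue` are assumptions, not taken from the question:

```python
def make_column_metadata_request(sheet_id, column_index, metadata_id, column_name):
    """Build one createDeveloperMetadata request for a single column."""
    return {
        "createDeveloperMetadata": {
            "developerMetadata": {
                "metadataId": metadata_id,
                "metadataKey": "columnTracker",  # assumed key name
                "metadataValue": column_name,    # storing the label at creation time
                "location": {
                    "dimensionRange": {
                        "sheetId": sheet_id,
                        "dimension": "COLUMNS",
                        "startIndex": column_index,
                        "endIndex": column_index + 1,
                    }
                },
                "visibility": "DOCUMENT",
            }
        }
    }

# The columns from the example above (Name, id, Country):
metadata_requests = [
    make_column_metadata_request(0, i, mid, name)
    for i, (name, mid) in enumerate([("Name", 444), ("id", 689), ("Country", 997)])
]
# body = {"requests": metadata_requests}  # passed to spreadsheets().batchUpdate(...)
```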

Upload Google Sheets data to a Google Cloud service and then analyse with Cloud SQL

This question has been edited to make the requirements clearer:
I am involved with a charity with multiple locations. It provides low-cost medicines. Currently the transaction data (sales and purchases) is stored in Google Sheets at each location. As the charity grows, it is becoming a pain to analyse the sheets one by one.
The data is structured. Each table/tab has defined fields. Each location has an identical structure, and I can easily generate a unique ID.
I have also experimented with a local PostgreSQL database. The data is in standard table format, and each location uses an identical format.
As each shop sells every day, there are new transactions every day.
I need a simple way of collecting all this information in one place in the cloud so that I can analyse the data with SQL-type queries, without having to fetch it manually from each location each day. In total there are some 2000 new transactions a day that need to be added to the database.
One logical solution would be for each local PostgreSQL database to send new data to a master PostgreSQL database in the cloud, perhaps using WAL shipping? The local databases don't have 24x7 internet access; connectivity is patchy. The charity is in India; I am in the UK.
Ideally I need a solution where, when the local computer finds a connection, it transmits the new transactions to the cloud database.
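The "transmit when online" idea can be sketched as an application-level sync with a high-water mark: each location keeps a monotonically increasing transaction ID, remembers the last ID successfully pushed to the cloud, and retries whenever the network is up. This tends to tolerate patchy connectivity better than WAL-level replication. The record layout and function names below are assumptions for illustration; in practice `upload` would do INSERTs into the cloud PostgreSQL (e.g. with psycopg2):

```python
def unsynced_transactions(transactions, last_synced_id):
    """Return transactions newer than the last one pushed to the cloud."""
    return [t for t in transactions if t["id"] > last_synced_id]

def push_to_cloud(transactions, last_synced_id, upload):
    """Try to upload pending rows; return the new high-water mark.

    `upload` is a callable that raises on network failure, e.g. a function
    doing batched INSERTs into the cloud database.
    """
    pending = unsynced_transactions(transactions, last_synced_id)
    if not pending:
        return last_synced_id
    upload(pending)                       # may raise if the net is down
    return max(t["id"] for t in pending)  # only advance after success

# Example: rows 1 and 2 already synced, row 3 pending
rows = [{"id": 1, "item": "paracetamol"}, {"id": 2, "item": "ors"},
        {"id": 3, "item": "ibuprofen"}]
sent = []
new_mark = push_to_cloud(rows, 2, sent.extend)  # new_mark == 3
```

If the upload raises, the high-water mark is not advanced, so the same rows are simply retried on the next run; with a unique transaction ID, the cloud side can make the INSERT idempotent (e.g. `ON CONFLICT DO NOTHING`).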
"""Import these modules"""
import gspread
from oauth2client.service_account import ServiceAccountCredentials
"""Import the sheet"""
scope = [
'https://www.googleapis.com/auth/spreadsheets',
'https://www.googleapis.com/auth/drive'
]
creds = ServiceAccountCredentials.from_json_keyfile_name("Path to your
credentials.json", scope)
client = gspread.authorize(creds)
sheet = client.open("Your sheet").sheet1
"""The cell on Top of a column is for the title of the column """
sheet.updatecell(1, 1, "Sales")
"""Reading the file where the data is stored and writing it in the sheet"""
with open("Path to the file where the sales are stored", "r") as l:
data = l.readlines()
for e, i in enumerate(data):
i = i[:-1]
sheet.updatecell(len(e+2), 1, i)
I hope this helps, I can't do more since I don't know how the data is stored and organized.

How to query data that's present in one tab, but not in another?

In Google Sheets, I have a sheet with three tabs: one tab for subscriptions (containing a unique username + date + some personal information), one tab for unsubscriptions (with the same usernames as in subscriptions), and one tab where I want to show all active subscriptions.
A subscription is active when the username is present in subscriptions but not in unsubscriptions. It is possible that the same username subscribes again, in which case there are multiple entries in the two tabs. In that case, it's active when the subscription date is more recent than the last unsubscription.
The structure of the subs and unsubs tabs is the same, except for the header text (Sub.date vs Unsub.date, for example).
I now have a query which returns all the subscriptions + the information needed for the "Active subs" tab. I don't know where to start with the "filtering" against the Unsubs tab.
I expect a Query-like output of all active subscriptions with the information I need from the subscriptions tab. The information is in different columns, so the subscription date, address and so on are located in, for example, columns E & F, while the unique username is in column B.
Might need a little adjustment but seems worth a try, in a spare column:
=if(and(maxifs(E:E,B:B,B1)=E1,maxifs(E:E,B:B,B1)>maxifs(UnSub!E:E,UnSub!B:B,B1)),"Y",)
then select on the basis of the presence of 'Y'.
The logic is supposed to be:
Is this the row that contains the most recent subscription date for this individual AND
is this individual's most recent subscription date more recent than the most recent unsubscription date for this individual?
So could be shortened a little to:
=if(and(E1=maxifs(E:E,B:B,B1),E1>maxifs(UnSub!E:E,UnSub!B:B,B1)),"Y",)
Or:
Is this the most recent date for this individual across both sheets?
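The spreadsheet logic above can be sketched in Python to make it concrete: a row is an "active" subscription when it carries the user's most recent subscription date AND that date is later than the user's most recent unsubscription date. The data layout here (tuples of username and ISO date string) is an assumption for illustration:

```python
def active_subscriptions(subs, unsubs):
    """subs/unsubs: lists of (username, date) tuples; ISO date strings compare correctly."""
    last_sub, last_unsub = {}, {}
    for user, date in subs:
        last_sub[user] = max(last_sub.get(user, date), date)
    for user, date in unsubs:
        last_unsub[user] = max(last_unsub.get(user, date), date)
    # active: the row holds the user's latest sub date AND it is later
    # than the user's latest unsub date (or the user never unsubscribed)
    return [(u, d) for u, d in subs
            if d == last_sub[u] and d > last_unsub.get(u, "")]

subs = [("alice", "2019-01-01"), ("alice", "2019-06-01"), ("bob", "2019-02-01")]
unsubs = [("alice", "2019-03-01"), ("bob", "2019-05-01")]
# alice re-subscribed after unsubscribing, so she is active; bob unsubscribed last
```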

influxdb query group by value

I am new to InfluxDB and the TICK stack, so maybe this is a basic question, but I have not found how to do this. I have installed InfluxDB 1.7.2, with Telegraf listening to an MQTT server that receives JSON data generated by different devices, and Chronograf to visualize the data being received.
The JSON data is a very simple message indicating the generating device as a string, plus some numeric values. I have created some graphs showing the number of messages received in 5-minute intervals by one of the devices:
SELECT count("devid") AS "Device" FROM "telegraf"."autogen"."mqtt_consumer" WHERE time > :dashboardTime: AND "devid"='D9BB' GROUP BY time(5m) FILL(null)
As you can see, in this query I am setting the device ID by hand. I can put this query alone in a graph, or combine multiple similar queries for different devices, but I am limited to identifying the devices in advance.
Is it possible to obtain the results grouped by the values contained in devid? In SQL this would mean something like GROUP BY "devid", but I have not been able to make it work.
Any ideas?
You can use "GROUP BY devid" if devid is a tag in measurement scheme. In case of devid being the only tag the number of unique values of devid tag is the number of time series in "telegraf"."autogen"."mqtt_consumer" measurement. Typically it is not necessary to use some value both as tag and field. You can think of a set of tags in a measurement as a compound unique index(key) in conventional SQL database.

Azure Application Insights for web: display unique users in Power BI

We set up web analytics using the Application Insights -> Stream Analytics -> Power BI path.
We would like to see a chart of daily unique visitors on a Power BI dashboard.
Users are considered unique if their [context].[user].anonId values differ. The time is stored in [context].[data].eventTime in the Insights JSON.
The export query should look something like this (we know how to work around the missing UNIQUE keyword, so I'll use it for simplicity):
SELECT
    COUNT(UNIQUE A.[context].[user].anonId),
    System.Timestamp
FROM
    [export-input] A
TIMESTAMP BY A.[context].[data].eventTime
GROUP BY
    TumblingWindow(day, 1)
The problem is that TIMESTAMP BY does not support qualified fields. Without it, we are actually timestamping users not by the actual page-visit time, but by the time the data entered Stream Analytics. This means we might lose a bunch of unique users, or count some of them twice.
Is there a workaround for that?
TIMESTAMP BY now supports qualified fields, so that should not be a problem anymore. However, please note that Stream Analytics does not have a UNIQUE/DISTINCT keyword. You will need to rewrite your query like this to compute a unique count:
WITH step1 AS
(
    SELECT
        COUNT(*) countPerAnonId
    FROM
        [export-input] A
    TIMESTAMP BY A.[context].[data].eventTime
    GROUP BY
        A.[context].[user].anonId,
        TumblingWindow(day, 1)
)
SELECT COUNT(*)
FROM step1
GROUP BY System.Timestamp
Alternatively, could you just submit the 'view time' as a property of your event (from the client) and then select on that in ASA? I'm not that familiar with ASA's limits, but I can find someone to help if the above doesn't work. In Power BI you can make Q&A queries like "show distinct anonId in the last 24 hours" or "... in the last day", which, if there is a date field, should match your expected behavior.
