I'm studying the ThingsBoard IoT platform, and one thing isn't clear to me:
does ThingsBoard store its telemetry data in the configured database (PostgreSQL or Cassandra) by default?
To put the question another way: when I view telemetry data on a device's dashboard, where does that data come from?
What I understood is that the default data flow is:
device > transport layer (MQTT, HTTP) > Kafka
so I think you must create an appropriate rule in the rule engine if you want to further save your telemetry data to your database, but I'm not sure about this; please correct me if I'm wrong.
Thank you all
Found the answer:
Telemetry data is not stored in the database by default unless you configure a rule chain with the specific action to do so.
That being said, during ThingsBoard installation the Root rule chain is created for you, and it contains the actions that save time series and attributes to the configured database. The target tables are ts_kv_latest_cf for the latest telemetry values and ts_kv_cf for the historical time series data.
If you want a quick and simple check, temporarily remove the 'save timeseries' rule node from the Root rule chain and then send data to the platform: the new values will not be persisted.
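Another way to verify what actually got persisted is to read telemetry back through ThingsBoard's REST API. This is a minimal sketch, assuming a default local installation at http://localhost:8080; the device ID and telemetry keys are placeholders, and the JWT token comes from a prior POST to /api/auth/login:

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # assumed default local ThingsBoard instance


def telemetry_url(base, device_id, keys):
    """Build the time-series query URL for a device (telemetry controller route)."""
    return (
        f"{base}/api/plugins/telemetry/DEVICE/{device_id}"
        f"/values/timeseries?keys={','.join(keys)}"
    )


def fetch_timeseries(jwt_token, device_id, keys):
    """Fetch stored telemetry for a device; requires a valid tenant/customer JWT."""
    req = urllib.request.Request(
        telemetry_url(BASE, device_id, keys),
        headers={"X-Authorization": f"Bearer {jwt_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If the 'save timeseries' node has been removed from the Root rule chain, newly sent values will be missing from this API's response, confirming that persistence happens in the rule engine.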
I have a task to get some data from an external supplier.
They have a REST OData API. I have to connect using a subscription key (API key).
When creating the OData linked service, I add an auth header named "subscription-key" and enter my key in the Value field. After saving, I create a new dataset, and the OData linked service lists the remote tables. I choose the table I want and then create a pipeline to copy data from that table to my Azure SQL Server.
This works fantastic :-)
However, after closing my browser and reopening it, the subscription key I entered on the linked service is shown as stars, since it is a SecureString. When I now run my pipeline, it treats my key as the ten stars that replaced my real key.
What am I doing wrong here ?
Also, I would prefer to get my value from Key Vault, but it seems that this is not possible for OData connections...
Hope someone is able to provide some insight here :-)
BR Tom
From my testing, I did not get any error on re-running. However, coming to dynamic keys: I was not able to achieve this using the OData linked service.
Alternatively, if you can hit the OData endpoint with the REST/HTTP connector:
You could have a Web activity get the key from Key Vault and set it in a variable.
Web activity URL: https://<your-keyvault-name>.vault.azure.net/secrets/<your-secret-name>?api-version=7.0
You can access the output of the Web activity using @activity('Web1').output.value and store it in a variable.
You can then reference this variable as the subscription key in the subsequent steps of the REST/HTTP dataset.
You could pass it along in the additional headers.
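Sketched as pipeline JSON, the Set Variable step could look like this (activity and variable names such as Web1 and subKey are illustrative; note that Data Factory expressions start with @):

```json
{
  "name": "SetSubscriptionKey",
  "type": "SetVariable",
  "dependsOn": [
    { "activity": "Web1", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "variableName": "subKey",
    "value": {
      "value": "@activity('Web1').output.value",
      "type": "Expression"
    }
  }
}
```

Downstream activities can then reference @variables('subKey') wherever the subscription key is needed, such as in the additional headers of the REST copy source.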
I've recently been considering various security issues with my Firebase services and ran into an interesting question related to Firebase pricing. The question is simple, as below:
If Realtime Database security rules read some data from the database itself (RTDB) as part of their validation, do those server-side reads count toward any part of RTDB billing? For example, if a rule needs a "role" value from the matching RTDB JSON tree, is that validation exempt from the RTDB download fee ($1/GB) and the connection quota (200,000 simultaneous connections)? Billing might apply, since the validation must read the data anyway to decide whether the request complies with the rules.
If Cloud Firestore security rules read some data from Firestore itself during validation, is that read subject to Firestore's read-operation fee ($0.036 per 100,000 documents in the LA location)? For example, to validate a line like allow read: if resource.data.visibility == 'public', the rule has to retrieve the data just as a mobile client would read it without any security rule.
Hope this question reaches gurus in the community! Thank you in advance [:
firebaser here
There is no charge for reads done inside Realtime Database security rules. They are also not counted as persistent connections.
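For example, a write rule like the following has to read /users/<uid>/role during evaluation, and per the above that read is free (paths and field names here are illustrative):

```json
{
  "rules": {
    "posts": {
      "$postId": {
        ".write": "root.child('users').child(auth.uid).child('role').val() === 'admin'"
      }
    }
  }
}
```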
Accessing the current resource or the future request.resource data in your Firestore security rules is not charged. Additional document reads that you perform (with get(), getAfter(), exists(), and existsAfter()) are charged.
For more on the latter, see the Firebase documentation on access calls and pricing in Firestore security rules.
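To make the distinction concrete, here is a minimal rules sketch (collection and field names are illustrative): the read clause only inspects resource.data and is free, while the write clause performs an extra get(), which is billed as one document read per evaluation.

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /posts/{postId} {
      // Free: inspecting the document being accessed via resource.data
      allow read: if resource.data.visibility == 'public';

      // Billed: get() fetches a different document, counted as one read
      allow write: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.role == 'admin';
    }
  }
}
```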
I am currently creating an iOS application with Swift. For the database I use Firebase Realtime Database, where I store, among other things, information about the user and requests that the user sends me.
It is very important for my application that the data in the database is not corrupted.
For this I have disabled data persistence, so that requests are not stored locally on the device. But I was wondering whether the user can directly modify the values of variables while my application is running and still send erroneous requests.
For example, the user has a number of coins: can he access the application's memory, change the number of coins, return to the application, and thereby send an erroneous request without having to modify the request itself?
If so, is it really more secure to disable data persistence, or is this a misconception?
Also, does blocking jailbroken devices solve my problems? I've heard that a normal user can still modify the queued requests before they are sent.
To summarize, I would like to know whether my understanding is correct: is it really useful to prevent requests from being saved locally, or can a malicious user modify the values of variables directly at runtime anyway, without a jailbreak?
I would also like to find a solution so that the data in my database is reliable.
Thank you for your attention :)
PS: I have also set the database's security rules so that only a logged-in user can read and write, and only within his own area.
You should treat the server-side data as the only source of truth, and consider all data coming from the client to be suspect.
To protect your server-side data, you should implement Firebase's server-side security rules. With these you can validate data structures and ensure all reads and writes are authorized.
Disabling client-side persistence, or write queues as in your previous question, is not all that useful and not necessary once you follow the two rules above.
As an added layer of security you can enable Firebase's new App Check, which works with a so-called attestation provider on your device (DeviceCheck on iOS) to detect tampering, and allows you to then only allow requests from uncorrupted devices.
By combining App Check and Security Rules you get both broad protection from abuse, and fine-grained control over the data structure and who can access what data.
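For the "only in his own area" setup mentioned in the question's PS, the standard RTDB pattern looks like this (the users path and coins field are illustrative); the .validate rule shows how the server, not the client, can enforce that a value like the coin count stays well-formed:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid",
        "coins": {
          ".validate": "newData.isNumber() && newData.val() >= 0"
        }
      }
    }
  }
}
```

Even if a user tampers with in-memory values on the device, a write that violates these rules is rejected server-side.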
I am using Kaa client to pump data to Kaa server.
I want to fetch this data in order to show it in a client application. With the use of log appenders, I am able to do so.
However, is it possible to do the same without adding any external DB? I read in the Kaa documentation that, by default, Kaa stores data in MySQL (MariaDB) or PostgreSQL.
However, when I tried to access MySQL (which is part of the Kaa Sandbox), I was unable to do so.
Can anyone tell how can we do this?
Yes, Kaa should be configured to append the gathered telemetry to some log appender (one can also create a custom log appender with specific functionality if required) or even a set of log appenders, depending on the use case.
The easiest way is to configure one of the existing Log Appenders to log the data to e.g. Cassandra and then retrieve the data from there.
Should you need real-time triggering of some actions depending on the data received from the client(s), you would probably need to develop a custom log appender for that.
You need an external log appender in order to log the telemetry data. The internal database only takes care of storing the schema, event class families, client/server profile info, and notification info.
Let's say I have some data that I obtained through a non-GraphQL endpoint, for example from a third-party server (Firebase).
How do I put that data into the local Relay store?
Is there an easy way to add, edit, or overwrite data in the Relay store directly, without going through a query or mutation?
A non-public RelayStoreData field is accessible from the Relay.Store instance, and it gives you direct access to the records contained in the store. I haven't done anything with this myself, but you could try modifying the cache directly like this:
RelayStore._storeData._cachedStore._records[recordId][fieldName] = newValue;
Alternatively, you could use Relay without a server: define your GraphQL schema locally and resolve your API requests inside that schema, the same way a server-side schema would query a database.
https://github.com/relay-tools/relay-local-schema