I am using Kaa client to pump data to Kaa server.
I want to fetch this data in order to display it in a client application. With the use of Log Appenders, I am able to do so.
However, is it possible to do the same without adding any external database? I read in the Kaa documentation that by default, Kaa stores data in MySQL (MariaDB / PostgreSQL).
However, when I tried to access MySQL (which is part of the Kaa Sandbox), I was unable to do so.
Can anyone tell me how we can do this?
Yes, Kaa should be configured to append the gathered telemetry to some Log Appender (one can also create a Custom Log Appender with specific functionality if required) or even a set of Log Appenders, depending on the use case.
The easiest way is to configure one of the existing Log Appenders to log the data to e.g. Cassandra and then retrieve the data from there.
Should you need real-time triggering of some actions depending on the data received from the client(s), you would probably need to develop a Custom Log Appender for that.
We need to have an external Log Appender in order to log the data. The internal database only takes care of schemas, event class families, client/server profile info, and notification info.
I'm studying the ThingsBoard IoT platform and what's not clear to me is:
does ThingsBoard store its telemetry data in the configured database (Postgres or Cassandra) by default?
I can also put the question another way: when I view telemetry data from a device's dashboard, where does that data come from?
What I understood is that the default data flow is:
device > transport layer (mqtt, http) > Kafka
so I think you must create an appropriate rule in the rule engine if you want to also save your telemetry data into your database, but I'm not sure about this; please correct me if I'm wrong.
Thank you all
Found the answer:
Telemetry data is not stored in the database unless you configure a rule chain with the specific action to do so.
That being said, during ThingsBoard installation the Root rule chain is created for you, and it contains the actions to save timeseries data and attributes into the configured database. The target tables where telemetry data is stored are ts_kv_latest_cf for the latest telemetry values and ts_kv_cf for the timeseries history.
If you want to do a quick and simple check, try temporarily removing the 'save timeseries' rule node from the Root rule chain and then sending data into the platform.
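For the check itself, one option is to push a telemetry value through ThingsBoard's device HTTP API and then see whether it still shows up after changing the rule chain. A minimal sketch in C# follows; the host, port and device access token are placeholders for your own setup:

// Minimal sketch: push a single telemetry value to ThingsBoard over HTTP.
// "localhost:8080" and "YOUR_DEVICE_ACCESS_TOKEN" are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TelemetryCheck
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // ThingsBoard's device HTTP transport accepts POST /api/v1/{accessToken}/telemetry
        var url = "http://localhost:8080/api/v1/YOUR_DEVICE_ACCESS_TOKEN/telemetry";
        var body = new StringContent("{\"temperature\": 25}", Encoding.UTF8, "application/json");

        var response = await http.PostAsync(url, body);
        Console.WriteLine($"ThingsBoard responded with status {(int)response.StatusCode}");
    }
}

If the 'save timeseries' node has been removed, the value should no longer end up in the ts_kv tables.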
I am upgrading our old application to Serilog... One piece of the existing functionality is: when the log level is ERROR, it logs to a local file and sends a WCF request to the remote server, and the remote server updates a database...
Basically it logs to multiple sources (local file, remote database via a WCF request) if the level is 'ERROR'.
I understand how to use the 'rollingfile' sink to log to a local file.
However, I do not know how to configure a 'WCF service' for Serilog... is there any 'WCF sink' that can help me achieve this?
As of this writing there's no generic sink that makes "WCF" calls... You'd have to build your own sink, implementing the calls you need.
You can see a list of documented sinks in the "Provided Sinks" page on Serilog's wiki, and you can also see available sinks in NuGet.org.
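If it helps, a custom sink is just a class implementing ILogEventSink. Below is a minimal sketch; SendToRemoteServer is a placeholder for your own WCF client call, not an existing Serilog API:

// Minimal custom sink sketch: forwards ERROR-and-above events to a remote service.
using System;
using Serilog.Core;
using Serilog.Events;

public class WcfSink : ILogEventSink
{
    public void Emit(LogEvent logEvent)
    {
        // Only forward errors; everything else is ignored by this sink.
        if (logEvent.Level < LogEventLevel.Error) return;

        SendToRemoteServer(logEvent.Timestamp, logEvent.Level.ToString(), logEvent.RenderMessage());
    }

    private void SendToRemoteServer(DateTimeOffset timestamp, string level, string message)
    {
        // Placeholder: call your existing WCF service client here, e.g. proxy.LogError(...).
    }
}

// Wiring it up next to a local file sink (Serilog.Sinks.File):
// Log.Logger = new LoggerConfiguration()
//     .WriteTo.File("log.txt", rollingInterval: RollingInterval.Day)
//     .WriteTo.Sink(new WcfSink(), restrictedToMinimumLevel: LogEventLevel.Error)
//     .CreateLogger();

The restrictedToMinimumLevel argument on WriteTo.Sink gives you the same ERROR-only behaviour at configuration time, so you can keep the sink itself level-agnostic if you prefer.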
The (Indy) log file of an email in SMTP format includes the content of the attached files, which is not necessary for my needs.
Including the attached files greatly increases the size of the log file. I keep this file in a "blob" field of the database, and reading this field is causing me problems.
Do you have an example of code that retains this information but leaves out the attached files?
The default TIdLog... components are meant to log whatever raw data is being transmitted/received over the socket connection, for purposes of debugging and replaying sessions. There are no real filtering capabilities.
If you don't want portions of the emails being logged, you will have to use TIdLogEvent or TIdConnectionIntercept, or derive a custom TIdLog... or TIdConnectionIntercept... based component, to parse the raw data yourself, essentially re-implementing the SMTP and RFC822 protocols so you can choose to log only what you want.
I have Neo4j instances in several remote geographical locations, working with local data. I also have a master Neo4j instance which will store data from all locations. In case of network failure I want the remote instances to save queries which can be replayed later at the master instance.
What is the best way to achieve this?
I'm using Neo4j .Net client
Are you using Neo4j Enterprise with High Availability mode? In that case the slaves automatically write to the master first.
Otherwise you can log the full requests (meaning the Cypher query + parameters).
Make sure to activate HTTP logging in the neo4j-server.properties configuration file:
org.neo4j.server.http.log.enabled=true
org.neo4j.server.http.log.config=conf/neo4j-http-logging.xml
And in your neo4j-http-logging.xml you can add this pattern to your appenders:
<pattern>%fullRequest\n\n%</pattern>
There are some undocumented logging options (experimental):
dbms.querylog.enabled=true
# in ms
dbms.querylog.threshold=500
dbms.querylog.path=data/log/queries.log
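If High Availability is not an option, one client-side approach from .NET is to persist the Cypher text and parameters locally whenever the master is unreachable and replay them later. A rough sketch follows; the file format and the executeAgainstMaster delegate are my own placeholders, and you would plug in your actual Neo4jClient call there:

// Sketch: queue failed writes locally (as JSON lines) and replay them later.
// How you actually execute the query against the master (Neo4jClient, REST, etc.) is up to you.
using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public class PendingQuery
{
    public string Cypher { get; set; }
    public Dictionary<string, object> Parameters { get; set; }
}

public class QueryQueue
{
    private readonly string _queueFile;

    public QueryQueue(string queueFile) { _queueFile = queueFile; }

    // Try to run the query against the master; if the call fails, save it for later.
    public void ExecuteOrQueue(PendingQuery query, Action<PendingQuery> executeAgainstMaster)
    {
        try
        {
            executeAgainstMaster(query);
        }
        catch (Exception) // e.g. a network error or timeout when the master is unreachable
        {
            File.AppendAllText(_queueFile, JsonConvert.SerializeObject(query) + Environment.NewLine);
        }
    }

    // Replay everything that was queued while the network was down.
    public void Replay(Action<PendingQuery> executeAgainstMaster)
    {
        if (!File.Exists(_queueFile)) return;
        foreach (var line in File.ReadAllLines(_queueFile))
        {
            executeAgainstMaster(JsonConvert.DeserializeObject<PendingQuery>(line));
        }
        File.Delete(_queueFile);
    }
}

Since replaying gives you at-least-once delivery, it helps if the queued statements are idempotent (e.g. MERGE rather than CREATE).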
We're implementing Elmah for an internal application. For development and testing we use a single server instance, but in production the app is delivered in a load-balanced environment.
Everything works like a charm using Elmah, except for the fact that the logs are kept independently on each server. What I mean by this is that if an error happens on Server1, the XML file is stored physically on that server, and the same goes for Server2, since I'm storing those files in App_Data.
When I access the .axd location to see the error list, I only see the errors from the server that happened to serve my request.
Is there any way to consolidate the XML files other than putting them in a shared folder? A shared folder would force us to give the user that runs the application on each server access to that folder, and the folder would live on only one of the servers instead of both.
I cannot use In-Memory or Database logging since FileLog is the only one allowed.
You might consider using ElmahR for this case, since you are not able to implement In-Memory or Database logging. ElmahR will provide you with a central location that the two load-balanced servers can send errors to (in addition to logging them locally) via an HTTP post. Then you can access the ElmahR site to view an aggregated list. Also, ElmahR stores the error messages in a SQL Server CE database, so it can persist the error messages it receives.
Keep in mind that if the ElmahR Dashboard app design does not meet your initial needs/desires, it could be modified as needed given that it is an open source project.
Hope this is a viable option for you.