How to download InfluxDB 2.0 data in CSV format?

I am not familiar with the InfluxDB command line, especially InfluxDB 2.0, so I chose to use the InfluxDB frontend on port 8086. But I found that when downloading a .csv through the frontend, too much data causes the browser to crash, which ultimately makes the download fail.
I have read the InfluxDB 2.0 documentation and found no answer. Must I use the command line, and if so, which commands should I use? Thanks a lot in advance.

I have the same issue using Flux in a browser session.
If you need a lot of data, use the Influx API and catch the result in a file. See the 'example query request' using curl on the referenced page. I find this to be very quick, and I haven't had it overload when returning large data sets.
(If the amount of data is enormous, you can also ask Influx to gzip it before download, but of course this may load up the machine it's running on.)
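As a rough sketch of such a request against the v2 API (the org, token, and bucket names below are placeholders), writing the annotated-CSV result straight into a file:
curl -XPOST "http://localhost:8086/api/v2/query?org=my-org" \
  -H "Authorization: Token MY_API_TOKEN" \
  -H "Accept: application/csv" \
  -H "Content-Type: application/vnd.flux" \
  -d 'from(bucket: "my-bucket") |> range(start: -30d)' \
  -o data.csv
To have the server gzip the response as mentioned above, add -H "Accept-Encoding: gzip" and decompress the file afterwards.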

Related

Real-time computation

I have an algorithm written in Python and MySQL that takes a CSV file and some properties as input and then runs for 20-25 minutes to produce its output.
I want to make it real-time, so that if the input CSV is uploaded again or a property changes, the output is updated without rerunning the whole algorithm.
Note: the data the algorithm runs on can be very large.
I need help making this a real-time computation.
I am trying to change MySQL to a NoSQL DB, but it still takes time to run and is not real-time.
You should try using one of the streaming services for event-driven updates. Instead of a CSV, write the data to a stream like Kafka or Kinesis. Write a consumer that reads incoming events and updates the output without running the full algorithm. You might be able to use Apache Flink for aggregation against Kafka as well.
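As a minimal sketch using the console tools that ship with a reasonably recent Kafka (the broker address and the topic name input-rows are assumptions), the CSV hand-off could become a stream:
# create a topic for the incoming rows
kafka-topics.sh --bootstrap-server localhost:9092 --create --topic input-rows
# publish the CSV into the topic line by line instead of batch-loading it
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic input-rows < input.csv
# watch each row arrive as an event
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic input-rows --from-beginning
In practice the console consumer would be replaced by your own consumer (or a Flink job) that updates the computed output incrementally for each incoming row.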

Encrypt and compress a FirebirdSQL 2.5 backup on-the-fly from Delphi 7 securely

We need to protect customer data and are using FirebirdSQL 2.5(.8) with Delphi 7.
It is also essential to make regular backups to a "secondary" PC, or to pen drives, in case the "master" fails.
For that we used this method, calling gbak.exe and 7z.exe with stdin/stdout.
We realized that was a bad idea because it is very easy to see the parameters (passwords) added to the command line while the process runs, even with a simple Task Manager.
Is there a more secure way to do it?
(Using standard InterBase components OR UIB.)
Upgrade to Firebird 3, which added a Database Encryption capability. If you don't want to or cannot, I believe you could run the GBAK tool from your application with the STDOUT option, but instead of using 7-Zip for compression you would read that output in your application and encrypt it on the fly with some encryption library.
I believe you can find many examples of how to run an application and read its standard output (here is something related to start with), so the rest is about finding a way to encrypt a stream on the fly. Or just capture STDOUT in one stream and encrypt it in another.
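If a command-line pipeline is acceptable, here is a rough sketch of the same idea (assuming Firebird 2.5's -fetch_password switch, which reads the database password from a file, and an OpenSSL build that supports -pbkdf2; all file names and paths are placeholders). Neither secret appears in the visible command line:
# back up to stdout and encrypt the stream; passwords come from files, not arguments
gbak -b -user SYSDBA -fetch_password /secure/db.pass employee.fdb stdout | openssl enc -aes-256-cbc -pbkdf2 -pass file:/secure/backup.key -out employee.fbk.enc
Restoring reverses the pipe: decrypt with openssl enc -d and feed the result to gbak -c ... stdin. The same pipeline can be driven from Delphi by launching the processes with redirected standard handles, as described above.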
Firebird developers on the SQL.ru forum say that it is actually possible to use the Services API to get the backup stream remotely.
That does not mean that IBX or UIB or any other library readily supports it, though. Maybe it does, maybe not.
They suggested reading the Release Notes for Firebird 2.5.2, or Part 4 of the doc\README.services_extension.txt file of a Firebird 2.5.2+ installation.
Below is a small excerpt from the latter:
The simplest way to use this feature is fbsvcmgr. To back up a database, run approximately the following:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_backup -dbname some.fdb -bkp_file stdout >some.fbk
and to restore it:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_restore -dbname some.fdb -bkp_file stdin <some.fbk
Please notice: you can't use the "verbose" switch when performing a backup, because the data channel from server to client is used to deliver the blocks of the fbk file. You will get an appropriate error message if you try to do it. When restoring a database, verbose mode may be used without limitations.
If you want to perform a backup/restore from your own program, you should use the Services API for it. Backup is very simple: just pass "stdout" as the backup file name to the server and use isc_info_svc_to_eof in the isc_service_query() call. The data returned by repeated calls to isc_service_query() (certainly with the isc_info_svc_to_eof tag) is a stream representing the image of the backup file.
Restore is a bit more tricky. The client sends the new spb parameter isc_info_svc_stdin to the server in isc_service_query(). If the service needs some data on stdin, it returns isc_info_svc_stdin in the query results, followed by a 4-byte value: the number of bytes the server is ready to accept from the client. (A value of 0 means no more data is needed right now.) The main trick is that the client should NOT send more data than requested by the server; this causes the error "Size of data is more than requested". The data is sent in the next isc_service_query() call in the send_items block, using the isc_info_svc_line tag in the traditional form: isc_info_svc_line, 2 bytes of length, data. When the server needs the next portion, it once more returns a non-zero isc_info_svc_stdin value from isc_service_query().
A sample of how the Services API should be used for remote backup and restore can be found in the source code of fbsvcmgr.

Getting a large amount of data from a standalone Neo4j db into an embedded Neo4j db?

We have a significant, though not huge, amount of data loaded into a standalone Neo4j instance via the browser, and we need to get that data into the embedded instance in our app. I tried dumping the data to Cypher (a 600 KB file) and uploading it to our app to execute, but that gets stack overflow errors.
I'm hoping to find an efficient way of doing this with the Java API so that we can do it again on other developers' machines. This is test data for development, but it was entered with significant effort and we'd rather not redo all that.
Here's a silly question: can we just copy the data files from the standalone db to the embedded one?
Just like @Dave_Bennett mentions in his comment, you can copy over the graph.db folder. Embedded and standalone use exactly the same binary format.
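For example (the paths below are placeholders, and both instances must be shut down first so the store files are in a consistent state):
# copy the whole store directory from the standalone server into the app's data directory
rsync -a /opt/neo4j-standalone/data/graph.db/ /path/to/app/data/graph.db/
The embedded instance can then be started against the copied directory.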
If you want to copy over programmatically, I suggest going with the batch inserter API; see http://neo4j.com/docs/stable/batchinsert.html.
There is a great tool for copying over datastores at https://github.com/jexp/store-utils, which might give some hints as well.

Getting a VLC SAP broadcast dump

I am receiving SAP broadcasts, which I can normally use and play using the standalone VLC application.
I have been asked to provide a dump of the same. I have two questions:
I don't clearly understand what exactly a dump is.
How can I obtain one?
There are multiple types of dumps, so you might first find out what kind of dump is meant. It could be a database dump, which is similar to a backup, but usually it's a memory dump.
A memory dump or crash dump is a copy of the application including its memory at a specific point in time. Usually you want to create a dump exactly at the time an application is crashing or hanging. The dump will then be helpful to find the cause of the problem.
There are many ways to obtain a dump. First, Windows might do that for you, when it asks "Send information to Microsoft". Second, you can create it using Task Manager. Right click a process and choose "Create dump file". Third, there are many tools out there, e.g. Process Explorer or ProcDump, which all have pros and cons and serve different purposes.
To suggest a tool for your specific case, we would need more information. Exact wording might matter in this situation.
Update
In your particular case it looks like SAP means the Session Announcement Protocol, which VLC uses to announce multicast streams on the network. A broadcast is a message which is sent to everybody.
You could capture those announcements with Wireshark, but you would need a lot of network knowledge to get the filters set up. In this case the term "dump" probably refers to a capture of the announcements themselves, since the purpose of SAP is to advertise the list of available streams.
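For example, with tshark, Wireshark's command-line companion (SAP announcements conventionally go to multicast group 224.2.127.254 on UDP port 9875, so the capture filter below assumes those defaults):
# capture SAP announcements into a pcap file
tshark -f "udp port 9875" -w sap_announcements.pcap
The resulting .pcap file is the kind of "dump" you can hand over or inspect later in Wireshark.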

Which is the best way to get real-time data from an Avaya CMS server?

I am sorry if this question goes off topic, but I am forced to ask here as there are very limited resources to be found on the net about this.
I am looking to implement a system to get real-time data from an Avaya CMS server. I did a lot of R&D on JTAPI, but it has some limitations: it does not give all events or all the data stored in the CMS database. I also tried connecting to the CMS database using Java, but had no success, because it also gives historical data with a delay of 30 minutes.
Is it possible to get the same data using JTAPI, TAPI, or anything else? Or has anyone used a paid tool from Avaya that is cheaper and can solve this purpose?
I saw clint but don't intend to use it. Please let me know the ways if anyone has done this.
Your CMS may provide a feature known to me as the realtime socket. It is a service that pushes data about skills/splits, VDNs and vectors over a network socket.
It is virtually the same as what you'll find in hsplit and so on, but in real time.
The pushed data can be configured by your CMS admin.
If you are looking for call data, you may take a look at the call_rec table in CMS.
You can use clintSVR, which is a high-level tool based on CMS CLINT. By using clintSVR, you can use the CGI, OCX and C++ interfaces to get real-time data from CMS.
As others have said, you can get this from realtime reports. You'll need to scrape them.
RT socket is just a set of wrappers around clint for running reports. It takes the realtime report data and sends it to a socket.
You can roll your own realtime reports with clint and feed that to whatever needs to ship the data. A sample realtime report can be run from the command line like this:
/cms/toolsbin/clint -u your_user <<EXECUTE_DONE
do menu 0 "cu:rea:Meas"
do "Run"
do "Exit"
EXECUTE_DONE
Here is an example of running a custom report directly:
/cms/toolsbin/clint -u ini <<EXECUTE_DONE
clear
run gem "r_custom/cr_r_3"
do "Run"
do "Exit"
EXECUTE_DONE
