Can one use CSV as an in-memory database with alasql?

I currently have data stored in a CSV and want to query this using alasql in node.js. Would it make more sense to store this in an alternative format? My requirements are that it needs to be in-memory, as speed will be paramount.
Would SQLite or IndexedDB be the way to go?
My CSV contains around 1.5 million rows (and 20 columns).
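
For concreteness, here is a minimal sketch of the alasql route, assuming alasql's documented CSV() source and promise API; the file and column names are placeholders:

const alasql = require('alasql');

// Load the CSV once into an in-memory table, then query that table
// repeatedly instead of re-reading the file.
alasql.promise('SELECT * INTO mem FROM CSV("data.csv", {headers: true})')
  .then(() => {
    // Hypothetical query: the column names depend on your CSV header row.
    const rows = alasql('SELECT city, COUNT(*) AS n FROM mem GROUP BY city');
    console.log(rows);
  })
  .catch(console.error);

With 1.5 million rows the one-off load will dominate startup time, but subsequent queries then run against plain JavaScript arrays held in memory.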

Related

How can I set records "modified" in a Delphi TClientDataSet

I need to mark all the records loaded in a TClientDataSet as modified, to force saving. How can I do that?
Delphi XE8
Your actual problem seems to be: "How do I efficiently transfer data from a table on MS SQL to SQLite? And secondly, can I use Delphi's TClientDataSet to do this efficiently, without too much fuss and bother?"
My inclination would be to say that TClientDataSet isn't really going to buy you anything here, and won't be any quicker than directly doing inserts, so you might as well go that route, e.g. write a bit of code to generate some inserts for the data you want to export.
For larger quantities of data you may want some kind of bulk load. SQLite can import CSV files (see: Import CSV to SQLite), so in the same vein as before: write a bit of code to dump your SQL data to CSV (or export it directly from SQL Server) and bulk-load that.
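For reference, the bulk load on the SQLite side can be done straight from the sqlite3 shell (the database, CSV, and table names below are placeholders):

sqlite3 mydb.db
.mode csv
.import data.csv mytable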

How to load relational database tables into neo4j database

I have tables which have millions of records. I need to load these records as nodes in neo4j.
Please help me out on how to do it as I'm new to neo4j.
It is quite easy: just map the entities that should become nodes into one set of CSV files, and the connections that should become relationships into another set of files.
Then run them with my batch-importer: https://github.com/jexp/batch-import/tree/20#binary-download
import.sh nodes1.csv,nodes2.csv rels1.csv,rels2.csv
Add types and index information to the headers and the batch.properties config file as needed.
You can use the batch-importer not only for the initial insert but also for subsequent updates (though the database has to be shut down for that).
It is also pretty easy to connect to your existing database using its driver, extract the information in the right shape and kind, and insert it into your graph model,
either using Cypher statements with parameters or the embedded, transactional Java API for ongoing updates.
See: http://jexp.de/blog/2013/05/on-importing-data-in-neo4j-blog-series/
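As a rough illustration of the driver-plus-parameterized-Cypher route, here is a minimal sketch using the official neo4j-driver package for Node; the connection details, label, and property names are placeholders:

const neo4j = require('neo4j-driver');

// Placeholder connection details.
const driver = neo4j.driver('bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password'));

// Create or update one node per relational row using parameterized Cypher.
async function importRows(rows) {
  const session = driver.session();
  try {
    for (const row of rows) {
      await session.run(
        'MERGE (c:Customer {id: $id}) SET c.name = $name',
        { id: row.id, name: row.name }
      );
    }
  } finally {
    await session.close();
  }
}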
You can export to CSV and import it into Neo4j (though that probably won't work well since you have millions of records).
You can write a program to do it (this is what I am currently working on).
This also depends on what programming languages you know... but the bottom line is, because no two databases are created equal (unless on purpose), it's very difficult to create a catch-all solution for migrating data from SQL to Neo.
The best way that I've discovered so far is to create a program that queries the tables in the database, finds all related tables (i.e. foreign keys), and imports all those table rows into Neo, labeling the nodes using the Table name, then process the foreign keys as relationships.
It's not easy. I've been working on something for my database here for a week or so now... but I'm close!
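The foreign-key step of that approach might look roughly like this, reusing a neo4j-driver session as in the sketch above; the Order/Customer labels, relationship type, and column names are invented for illustration:

// For each child row, connect its node to the parent node that its
// foreign key references.
async function linkForeignKey(session, row) {
  await session.run(
    'MATCH (o:Order {id: $orderId}) ' +
    'MATCH (c:Customer {id: $customerId}) ' +
    'MERGE (o)-[:PLACED_BY]->(c)',
    { orderId: row.id, customerId: row.customer_id }
  );
}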

Database in iOS application: Sqlite vs XML + XPath?

My app needs to read some data exported from a SQL database. In addition to reading, the app should also query the database in order to find relations between different pieces of data.
Potentially, the amount of data can be very large.
Of course, everything should be responsive and not impact negatively of the user experience.
Here is my doubt: I was going to export the database in XML format and run queries using XPath, but I'm not 100% sure that this approach is going to be efficient enough, especially if the number of records in the database is in the thousands.
What can you guys tell me about the efficiency and performance of XPath in iOS? Is it a good solution for a large XML database?
Would SQLite be a better and more efficient approach? By using SQLite, do I risk making my app heavy?
Thanks!
You definitely should use SQLite over XML in iOS apps. I would recommend the best wrapper for such a database: https://github.com/ccgus/fmdb
With FMDB, plus running queries on background threads, the app's performance shouldn't be affected at all, even with a big DB.
Many of my iOS applications use data stored in SQLite databases. I have a lot of trust in SQLite databases.
Some of my databases have more than 30,000 rows in their tables, and the applications run complex queries such as filtering/grouping by criteria. It is quite fast.
The best way to create your SQLite data is to export as CSV from your original databases and to import data in a new SQLite database.
You can find an example here.

Core Data index insert directly in the SQLite file

I need to improve the time of one fetch request in Core Data, so I was thinking of indexing some attributes.
My doubt is: I will insert the rows manually into the SQLite file with SQL, so will the indexing of the attributes have any effect? Or do I need to insert the data in code?
This is a Bad Idea. The database isn't yours to insert data into. The Core Data framework does not expect its data to change underneath it, and you will almost certainly get unusual/unexpected results.
If Core Data doesn't perform well enough for you (and it may not, as described here), then you should consider just using SQLite directly, and managing your own database.

How to create a custom ADO Multi Dimensional Catalog with no database

Does anyone know of an example of how to dynamically define and build ADO MD (ActiveX Data Objects Multidimensional) catalogs and cube definitions with a set of data other than a database?
Background: we have a huge amount of data in our application that we export to a database and then query using the usual SQL joins, groups, sums etc to produce reports. The data in the application is originally in objects and arrays. The problem is the amount of data is so large the export can take > 2 hours. So I am trying to figure out a good way of querying the objects in memory, either by a custom OLAP algorithm or library, or ADO MD. But I haven't been able to find an example of using ADO MD without a database behind it.
We are using Delphi 2010 so would use ADO ActiveX but I imagine the ADO.NET MD is similar. I realize that if the application data was already stored in a database the problem would solve itself. Also if Delphi had LINQ capability I could query the objects and arrays that way.
The export takes 2 hours? Some companies have dealt with worse! On a nightly basis!
We used to schedule the transfer of data into the warehouse at 3am, then trigger the cube processing jobs around 6am... then struggle to make 9am availability.
One thing that improved efficiency was making sure that only new data was dealt with, not old values that remained unchanged. For example, our warehouse held restaurant sales for the last 5 years, so there was no need to reload any rows over a month old as they would be the same!
Is it possible to export your entire application's data into a SQL database just once, and then each day after that just add a little bit of new data, or re-export just a subsection of the application? That will help load times.
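
The usual shape of that incremental load is a watermark query; a minimal sketch, with invented table and column names:

-- Pull only the rows changed since the last successful load,
-- instead of re-exporting all of the history every night.
INSERT INTO warehouse_sales (sale_id, store_id, amount, modified_at)
SELECT sale_id, store_id, amount, modified_at
FROM app_sales
WHERE modified_at > @last_load_time;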
