FoxPro OleDbException: File Too Large - oledb

I am running a FoxPro OLE DB query with several joins over a fairly large dataset. However, despite only asking for "MAX" or "TOP 100" rows of data, I get the following error:
System.Data.OleDb.OleDbException (0x80004005): File
c:\users\appX\appdata\local\temp\4\00004y7t002o.tmp is too large.
[LOCAL]
OR
System.Data.OleDb.OleDbException (0x80004005): Error writing to file
c:\users\appX\appdata\local\temp\00002nuh0025.tmp. [REMOTE]
(I have tried the query both locally and remotely).
Seemingly the OLE DB query creates and deletes a huge number of temp files while it runs.
This would suggest my query is simply too large and will require several smaller queries/workarounds.
The question is: is this a known issue? Is there an official workaround? Would the FoxPro ODBC adapter have the same problem?

Basically 2GB is the upper limit for any file that Visual FoxPro has to deal with. None of those temp files are anywhere near that. Does the location they are being created in have enough disk space? Are there user disk quotas in effect?
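For the disk-space and quota questions, a quick check is to ask how much usable space the temp location from the error message actually has. A small illustrative sketch follows (written in Java purely for the check itself, since the original application is .NET; the path is the one shown in the error and may need adjusting):

import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TempSpaceCheck {
    public static void main(String[] args) throws Exception {
        // Directory taken from the error message; adjust if the provider
        // is using a different temp path on your machine.
        Path temp = Paths.get("C:\\Users\\appX\\AppData\\Local\\Temp");

        FileStore store = Files.getFileStore(temp);
        long usableGb = store.getUsableSpace() / (1024L * 1024 * 1024);

        // getUsableSpace() reports the space available to the calling user,
        // so a per-user disk quota shows up here even on a half-empty drive.
        System.out.println("Usable space for temp files: " + usableGb + " GB");
    }
}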

Related

Cosmos DB Gremlin Query timeout

I am currently creating a PoC using Cosmos DB Graph. The data itself is around 100k nodes and 630k edges.
In one subset of this data (1.7k nodes and 3.8k edges) I am trying to find the shortest path from A to B with Gremlin.
Somehow this is not possible.
I either get a query timeout (30 seconds) or a loop error (cannot exceed 32 loops).
There must be something wrong (on my side or Cosmos side) - can you please help or give a hint?
I tried a lot of query variants already, but the errors are still there...
One of the basic queries I tried
The limits of the Gremlin API service are documented here: https://learn.microsoft.com/en-us/azure/cosmos-db/gremlin-limits
You may need an OLAP engine to process such a large shortest-path query. You could consider Spark and its GraphFrames support. Here is a sample: https://github.com/Azure/azure-cosmosdb-spark/blob/2.4/samples/graphframes/main.scala
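Before reaching for OLAP, it also helps to see why the 32-loop cap bites: an unbounded repeat()/until() walk keeps expanding until it hits the limit. Here is a sketch of a depth-bounded shortest-path traversal using the TinkerPop Java API (the "id" property, the values "A"/"B" and the depth bound are assumptions; against Cosmos DB you would typically submit the equivalent Gremlin string through the Gremlin client):

import org.apache.tinkerpop.gremlin.process.traversal.P;
import org.apache.tinkerpop.gremlin.process.traversal.Path;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;

import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.*;

public class BoundedShortestPath {

    // Keep the traversal well under the documented 32-loop limit.
    static final int MAX_DEPTH = 10;

    public static Path shortestPath(GraphTraversalSource g) {
        return g.V().has("id", "A")
                // expand outwards, never revisiting a vertex on the same path
                .repeat(both().simplePath())
                // stop a traverser when it reaches B or has looped MAX_DEPTH times
                .until(or(has("id", "B"), loops().is(P.gte(MAX_DEPTH))))
                // discard traversers that only stopped because of the depth bound
                .has("id", "B")
                .path()
                // take the first path found (the shortest in the unweighted sense)
                .limit(1)
                .next();
    }
}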

Getting a large amount of data from standalone Neo4j db into an embedded Neo4j db?

We have a significant, though not huge, amount of data loaded into a stand-alone Neo4j instance via the browser, and we need to get that data into the embedded instance in our app. I tried dumping the data to Cypher (a 600 KB file) and uploading it to our app to execute, but that gets stack overflow errors.
I'm hoping to find an efficient way of doing this with the Java API so that we can do it again on other developers' machines. This is test data for development but it was entered, with significant effort, and we'd rather not redo all that.
Here's a silly question. Can we just copy the data files from the stand-alone db to the embedded?
Just like @Dave_Bennett mentions in his comment, you can copy over the graph.db folder. Embedded and standalone use exactly the same binary format.
If you want to copy over programmatically, I suggest going with the batch inserter API, see http://neo4j.com/docs/stable/batchinsert.html.
There is a great tool for copying over datastores at https://github.com/jexp/store-utils which might give some hints as well.
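If you go the batch inserter route, the shape of the code is roughly the following; a minimal sketch against the Neo4j 2.x batch inserter API from the linked docs (the store path, label, relationship type and property names are just examples):

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class CopyIntoEmbeddedStore {
    public static void main(String[] args) throws Exception {
        // Point this at the store directory your embedded instance will open.
        BatchInserter inserter = BatchInserters.inserter("data/embedded-graph.db");
        try {
            Label person = DynamicLabel.label("Person");
            RelationshipType knows = DynamicRelationshipType.withName("KNOWS");

            // createNode returns the node id, which is used to wire up relationships.
            long alice = inserter.createNode(MapUtil.map("name", "Alice"), person);
            long bob = inserter.createNode(MapUtil.map("name", "Bob"), person);

            inserter.createRelationship(alice, bob, knows, MapUtil.map());
        } finally {
            // shutdown() flushes everything to disk; skipping it corrupts the store.
            inserter.shutdown();
        }
    }
}

Note that the batch inserter is non-transactional and needs exclusive access, so it should only run against a store no other process has open.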

Cypher queries work on the local machine but do not work on the server. Also, queries work fine in embedded mode but not in REST mode

The Cypher queries that work fine on the local machine (Windows) do not work on the Linux instance. The queries also run great in embedded mode, both on the server and locally, but the same query does not work using REST mode (0 rows returned). The database size between local and server is hugely different, so are there any parameters we need to change in order to accommodate this difference in db size?
I get a
com.sun.jersey.api.client.ClientHandlerException:
java.net.SocketTimeoutException: Read timed out
Example queries are simple ones like: match n where n:LABEL_BRANDS return n.
The properties in neo4j.properties file are:
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=50M
neostore.propertystore.db.mapped_memory=90M
neostore.propertystore.db.strings.mapped_memory=130M
neostore.propertystore.db.arrays.mapped_memory=130M
The Neo4j version I use is 2.0.0-RC1.
Also, I very frequently get a "Disconnected from Neo4j. Please check if the cord is unplugged" error when opening the browser interface.
Is there a mistake in how some properties are set in the config files? Could you identify the mistake here? Thanks.
Upgrade to Neo4j 2.0
How big is the machine you run your Neo4j server on?
Try to configure a sensible amount of heap (8-16 GB) and the rest of your RAM as memory-mapping according to the store-file sizes on disk.
The query you've shown is a global scan, which will return a lot of data over the wire on a large database. What are the actual graph queries/use-cases you want to run?
The error messages from the browser also indicate that either your network setup is flaky or your server has issues. Please upload your messages.log to an accessible place, as Stefan indicated, and add the link to your question.
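Since the stack trace shows the Jersey client timing out on the read, two quick things to try on the client side are raising the read timeout and sending a bounded query instead of the global scan. A rough sketch against the Neo4j 2.0 REST Cypher endpoint using the Jersey 1.x client (host name, timeout values and the LIMIT are assumptions):

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;

import javax.ws.rs.core.MediaType;

public class CypherRestCheck {
    public static void main(String[] args) {
        Client client = Client.create();
        // Allow slow responses from the larger server database before giving up.
        client.setConnectTimeout(10000);   // milliseconds
        client.setReadTimeout(120000);     // milliseconds

        // Neo4j 2.0 legacy Cypher endpoint; host name is an example.
        WebResource cypher = client.resource("http://myserver:7474/db/data/cypher");

        // Bounded query instead of a global scan over all :LABEL_BRANDS nodes.
        String payload = "{ \"query\": \"MATCH (n:LABEL_BRANDS) RETURN n LIMIT 25\", \"params\": {} }";

        ClientResponse response = cypher
                .accept(MediaType.APPLICATION_JSON)
                .type(MediaType.APPLICATION_JSON)
                .post(ClientResponse.class, payload);

        System.out.println("HTTP " + response.getStatus());
        System.out.println(response.getEntity(String.class));
    }
}

If the bounded query comes back quickly, the problem is the size of the global scan rather than the server itself.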

Performance Issues Using TUniTable

I'm in the process of converting a Paradox database application written in Delphi to use SQL Server 2008 R2. We are using the UniDAC components from Devart to access the database/tables. However, I am finding the performance rather slow. For example, in the Paradox version it is more or less instant when it opens a table (using TTable) with 100,000 records, but the SQL Server version (using TUniTable) takes approximately 2 seconds. Now I know this doesn't seem like a lot, but there are 10 TUniTable datasets which open on form creation, all of which contain around the same number of records, so at present it is taking just under 20 seconds to open them all. Does anyone have any performance tips?
I'm using Delphi 2007
IMHO, UniDAC's TUniTable is just a wrapper around TUniQuery. Opening a TUniTable may lead to fetching all records from SQL Server. I am not sure how, but try changing the SQL Server cursor type and/or location.
If it is not too late, consider using AnyDAC and TADTable. It uses "Live Data Window" technology, which allows you to open and browse through large tables without significant delays, e.g. Open and Last calls will always be fast. We migrated a few BDE applications to AnyDAC and Firebird, and TADTable works really great.

Delphi application and ODBC driver - best practices: loading a large amount of data and killing the application

I have an application which uses the ODBC driver. It is more or less built by following this small tutorial: http://edn.embarcadero.com/article/25639.
My question is: let's say I ask the database to bring almost 100k records into the application through a query, but before I get the results the application is closed (a software design decision that is not mine). What happens to the ODBC driver? Will it freeze? Will the memory used by the driver increase? How should I manage this 'potential' problem, if it exists?
I am not sure whether the problem you are asking about is one you have actually experienced (freezing, memory increase) or just hypothetical; in any case the answer probably depends on the ODBC driver and database. If you are halfway through retrieving a large result set and simply call SQLFreeStmt(SQL_CLOSE), what happens next depends on the ODBC driver and database. Some ODBC drivers talk to the database using a synchronous protocol, and it may not be possible for the driver to tell the database it no longer needs any more rows - they just get sent, and since the ODBC driver does not know what you are going to do next, it has to read all that data and throw it away (which may be seen as a pause, especially when the result set is large). Other drivers may be able to send some sort of signal to the database saying stop sending rows.
Either way, since the application is being closed, memory use is neither here nor there. It is good practice to try to tidy up when your app is closed, but it shouldn't really matter unless the driver creates some system resources which are not cleaned up properly.
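To make the "tidy up" point concrete, here is the same idea expressed with JDBC rather than the raw ODBC API, purely as an illustration (the connection string and table name are placeholders): closing the statement/result set early is the analogue of SQLFreeStmt(SQL_CLOSE), and whether the unread rows are silently drained or never sent is still up to the driver and database.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EarlyCloseSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; any JDBC data source will do.
        try (Connection conn = DriverManager.getConnection("jdbc:example://host/db");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {

            int fetched = 0;
            // Pretend the application is told to shut down after 1,000 rows.
            while (fetched < 1000 && rs.next()) {
                fetched++;
            }

            // Leaving the try block closes rs and stmt early, the analogue of
            // SQLFreeStmt(SQL_CLOSE). Whether the driver discards the remaining
            // rows locally or stops the server sending them depends on the
            // driver and the database, exactly as described above.
        }
    }
}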
