Using Foxpro tables and Advantage Data Architect - advantage-database-server

I mainly want to use Advantage to be able to access FoxPro tables larger than 2 GB. My programs are simple and are run from the command window.
I have Advantage Data Architect installed and have the ODBC driver installed.
I'm not very knowledgeable about connections, etc.
Can someone explain, provide a link, or provide the code that I would need to be able to create and use tables larger than 2 GB.
Thanks

I cannot tell from the OP what you have actually done, but it sounds like you are expecting to be able to use an ODBC driver with an existing Visual FoxPro application without changing the application's direct table access. That is not possible.
Here is a link to a screencast showing an example of using ODBC to get to a table that is over the 2GB limit. If I recall correctly, it shows how to use views to access the data; doing it that way can minimize the number of changes you need to make. More information about remote views can be found here.
You can also use ODBC "directly" with SQL pass-through statements. It is also possible to use OLE DB with cursor adapters if you prefer that over ODBC.

How to store "large" amounts of data in a desktop app?

Update
On a completely unrelated search, I found this: Lightweight SQL database which doesn't require installation, which lists a few possible options with roughly the same goals.
Original post
We have a desktop .NET/WPF app that stores large (for a desktop app) amounts of data: layouts, templates, product lists, technical specs, and much more.
Today it's stored in an Access DB, but we have hit the Access limitations pretty hard: it's not very fast, the DB weighs 44 MB (which results in a large setup package), and more importantly, it's a pain to use with version control, because we can't merge the data from one branch to another. For instance, we might create a branch to add a few products, but then we have to add them manually in the trunk when we merge. We could use SQL scripts, but writing advanced SQL scripts for Access is a pain.
Basically, I want to replace the MS Access DB with another storage format, because Access is not well suited.
I had thought of using JSON files that would be unzipped during or after install, but I'm a bit afraid of performance problems.
I'm also thinking of splitting the data into multiple files with multiple formats, depending on how each part is used, but using different formats might get complicated or annoying to develop.
Performance
Some parts of the DB are accessed pretty often and should be performance-optimized, whereas others are accessed maybe once or twice per work session, and for those a low-performance but high-compression format would be OK.
Size
We want the installer to be as small as possible, so the library should be small and the format should use small files. Using a library that adds 5 MB to the installer is out of the question.
Compatibility
The software must be able to run on .NET 4 (not 4.5), and it would be great if it ran on Windows XP (even though we're thinking more and more of just abandoning it going forward, it's still more than 7% of our market share).
Moreover, it should not require installing a server (it should be embedded, like MS Access or SQLite), because it will be installed on end users' computers and we don't want to bloat them.
Versioning
It should be easy to version the data and the DB structure. The file should either be a text file (like JSON), or scripts should be easy to run on the continuous integration platform (like with SQL Server).
So, which technology would you use that meets all these constraints?
Thanks!
As for your version-control pains: from your description, if I were you I'd keep the raw data in text files that are version-controlled, and only have the build process produce the database from them (a rough sketch of such a build step is below). This way, you should be able to use SQLite.
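To make the build-step idea concrete, here is a minimal sketch in Ruby using the sqlite3 gem (any scripting language your CI already runs would do just as well); the file names, table and columns are invented for illustration.

require "json"
require "sqlite3"

# Rebuild the shipped database from the version-controlled JSON sources.
db = SQLite3::Database.new("products.sqlite")
db.execute("DROP TABLE IF EXISTS products")
db.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, spec TEXT)")

products = JSON.parse(File.read("data/products.json"))
db.transaction do
  products.each do |p|
    db.execute("INSERT INTO products (id, name, spec) VALUES (?, ?, ?)",
               [p["id"], p["name"], p["spec"]])
  end
end
db.close

Only the JSON sources live in version control; the generated .sqlite file is what the installer bundles.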
I would go for SQLite in your case, since the files are self-contained and easy to locate (hence easy to keep in a version control system), the installer is small, and performance is good.
http://www.sqlite.org/

How to visualise Neo4j graph database created from an embedded Neo4j java application

I created an application which embeds Neo4j. In that application I created and stored some nodes with some relationships. My application saved this database to a file. I would like to visualise that data. I know I can see graphs if I fire up the Neo4j server, but I do not know how to import my neo4j.db file into the Neo4j server so that I can visualise it. Any suggestions would be greatly appreciated.
Depending on your use case you might have different solutions:
Use a web-based visualization
Use a desktop application to visualize your data
Use web-based visualization
In this case you have to build the web app that visualizes the data yourself.
You basically have two kinds of solutions out there: JavaScript libraries or Java applets.
On the JavaScript side you have many choices: D3js, VivaGraph, SigmaJS, KeyLines.
The first three are open source and free, while the last one has a commercial licence and is not free.
There are already a million questions about these libraries on SO, so I'll link you to some of those to help you understand the various differences.
Desktop Application
The main solutions I would recommend in this case, depending on the kind of data, are either Gephi or Cytoscape.
In both cases I believe you have to write your own adapter to communicate with your application.
Architecture Reference
The architecture in both cases will be the following:
The controller renders a webpage with the JS visualisation framework you want to use
The controller offers a couple of JSON endpoints the client can use to query the data from the embedded Neo4j (a minimal sketch of such an endpoint follows this list)
Each query fetches the data, puts it in a model and renders the JSON to send to the client
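Since the application embeds Neo4j from Java, the snippet below only illustrates the JSON-endpoint pattern itself: it is a minimal Ruby/Sinatra sketch with a stubbed data source standing in for whatever code you use to query the embedded database.

require "sinatra"
require "json"

# GET /graph returns nodes and relationships in a shape the JS framework can draw.
get "/graph" do
  content_type :json
  # Stub data; in the real controller this would come from the embedded Neo4j instance.
  nodes = [{ id: 1, label: "Person", name: "Alice" },
           { id: 2, label: "Person", name: "Bob" }]
  edges = [{ source: 1, target: 2, type: "KNOWS" }]
  { nodes: nodes, edges: edges }.to_json
end

The visualisation framework then fetches /graph and draws the returned nodes and edges.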
If you're NOT using Neo4j 2.0+, then a really good way to visualize your graph is by using neoclipse: https://github.com/neo4j-contrib/neoclipse/downloads
It's really handy and it has Cypher support too.
Or, another quick hack:
copy your db folder (which you created with the embedded database) into $NEO4J_HOME/data/
change the $NEO4J_HOME/conf/neo4j-server.properties file so that org.neo4j.server.database.location points to the copied folder
start your server (bin/neo4j start). You'll be able to visualize your database at localhost:7474
I hope it helps!

Dynamic database connection in a Rails App

I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that in my app I would like to create, connect to and destroy databases automatically on user demand, and as far as I understand it is quite hard to accomplish this with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in details:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In the programs' databases I would like to store the state and basic info of the particular program and a huge amount of program-related data (which is used to calculate the state and is necessary to keep for audit reasons).
My problem is that, for example, I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main db, and after that I have to connect to all the required program databases and get the state data.
My question is what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far. I would like to add a couple of details to make my problem clearer for you:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with pre-filled and empty tables for this tool. The tool then processes the logs and inserts the calculated data into different tables in this database. The catch is that the new system should support running programs in parallel, which means I have to create different databases for different programs. (This was not an issue so far, since the tool was configured by hand before each run, but now the configuration must be automated by my tool.)
There is no way of changing the legacy tool, as that would be too complicated in the given time frame, and it's also a validated tool. So this is the reason I cannot use different tables for different programs: my solution has to be built around another tool.
Summing my task up:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use another framework, because the future owner of my tool won't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now, and I really hope some of you guys can give me a little guidance.
OK, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll also have to take extra care if, for instance, you're using user input to determine the name of the new database, and do your own escaping. (Or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about.) The basic idea, though, is something like:
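# Any raw DDL can go in the heredoc, e.g. CREATE DATABASE for the per-program databases.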
create_sql = <<SQL
CREATE TABLE foo ...
SQL
ActiveRecord::Base.connection.execute(create_sql)
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create_database method that might help you.
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do what you need for that particular db (see the sketch below). If the dbs weren't being created dynamically, I'd put that call at the top of a standard ActiveRecord model so that that model would always connect to that db instead of the main one. If you want to use the same class and connect it to different dbs (one at a time, of course), you would probably call remove_connection before calling establish_connection for the next one.
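Here is a rough sketch of that pattern, assuming MySQL and entirely made-up connection details, class and table names:

# An abstract base class that owns the connection to whichever program db is active.
class ProgramRecord < ActiveRecord::Base
  self.abstract_class = true
end

def connect_to_program_db(db_name)
  # Drop the previous per-program connection (if any), then attach to the new database.
  ProgramRecord.remove_connection
  ProgramRecord.establish_connection(
    adapter:  "mysql2",
    host:     "localhost",
    username: "app_user",   # hypothetical credentials
    password: "secret",
    database: db_name
  )
end

connect_to_program_db("program_42")
states = ProgramRecord.connection.select_all("SELECT * FROM program_states") # hypothetical table

Each call switches ProgramRecord to the given database, so the dashboard code can loop over the program list from the main db and collect state data one program database at a time.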
I hope this points you in the right direction. Good luck!

How to detect any modification that happens in SQL Server database?

I have an application using TADODataSet and TADOConnection to connect to a SQL Server database.
I would like to detect any modification that happens in the database.
modifications = Insert, Update, Delete
I want to know which TADODataSet or which table has been modified.
I'm doing this because I have a multi-user application that works over a local network. The users may add, delete or edit records in tables, so I want to refresh the datasets to show the new modifications.
I also want this in order to build a log.
I don't want to use a TTimer to keep polling for modifications.
I don't want to use triggers.
I would prefer a message from TADOConnection.
I'm using SQL Server 2005 and Delphi 2007 with ADO components.
Edit: I also need this to work on SQL Server 2000.
Regards.
Maybe not the answer you expect, but I think you should evaluate Bold for Delphi. My employer, Attracs, has successfully used Bold for over ten years in a big multi-user application. Bold has many features that simplify development when the application grows and things get really complicated. Currently Bold does not support Unicode, so it can only be used with D2007 or older, but we have plans to fix this in the future.
Bold solves your problem by automatically updating GUI components when another user makes changes to the database.
For more information about Bold see my blog at boldfordelphi.

How to copy (and rename) a database?

I am coding in Delphi, using a TADOConnection to access ODBC compliant databases.
How do I copy a database leaving the new copy on the same database server?
And how do I rename one? (I suppose I could copy and then delete the original, if I knew how to copy.)
ODBC does not provide for copying or creating databases. That is a technology-specific (RDBMS-specific) facility. The closest you can get is creating and populating (copying) tables.
The only way you could do it would be to issue a db-specific command via an ODBC connection, but for that we would have to know exactly what type of database you're using.
Are you using ODBC drivers, or ADO providers? If the latter, you can look into the ADOX library, which provides vendor-neutral support for working with the structure of databases. I don't know myself whether it supports operations on the entire database.
