Can a PostgreSQL function talk to SQL Server 2000 as a linked server (OLE DB)?

Situation: gradually migrating a boatload of legacy SQL Server apps to PostgreSQL.
Question: can a PostgreSQL function execute a query against a SQL Server 2000 database? Is there anything comparable to OpenQuery in PostgreSQL, or to Oracle's Heterogeneous Services?

You can use dblink between PostgreSQL instances. There is a project out there for doing this against MSSQL called dblink-tds. I have never used it, so I can't say how well it works, and it looks pretty old, so I suspect it may no longer be maintained.
But remember that with PostgreSQL you get your choice of languages for writing functions. If you need to access data in non-PostgreSQL stores, you can use one of the "untrusted" languages, for example plpythonu, or you can always write your functions in C and access any information you need.
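For instance, here is a minimal sketch of a plpythonu function that reaches into SQL Server through the pymssql library (the host, credentials, and dbo.Customers table are placeholders, and pymssql must be installed in the Python environment of the PostgreSQL server):

    -- The body between the $$ markers is Python, executed inside the
    -- PostgreSQL backend process.
    CREATE OR REPLACE FUNCTION mssql_customer_count()
    RETURNS integer AS $$
        import pymssql
        # Placeholder connection details for the legacy SQL Server box.
        conn = pymssql.connect(server='mssql-host', user='app_user',
                               password='secret', database='LegacyDB')
        try:
            cur = conn.cursor()
            cur.execute('SELECT COUNT(*) FROM dbo.Customers')
            return cur.fetchone()[0]
        finally:
            conn.close()
    $$ LANGUAGE plpythonu;

    -- Callable like any ordinary function:
    SELECT mssql_customer_count();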

Related

Does Firebird 3 embedded server have major disadvantages?

Are there major disadvantages to using embedded Firebird 3 in a multi-user application server (Delphi WebBroker) instead of the full-blown server install?
The application usually has very short transactions with low data volume.
As far as I am informed, accessing one database file with multiple threads through the embedded server is not problematic, but user security is not available. As the application server handles the access rights itself, I do not need Firebird security.
But will I lose performance or things like garbage collection?
Firebird Embedded provides all the features (except network access and authentication) that a normal Firebird server provides. However, because it runs in-process, any problem that causes your application to crash will take Firebird down with it, and vice versa.
Other possible downsides:
Garbage collection will, as far as I know, always use the 'cooperative' model (where the connection that finds old record versions is the one that cleans them up),
You can't use other tools to access your database remotely which may make administration harder,
You can't put your database on a separate server from your web application (think of security requirements).
Personally, I would only choose Firebird Embedded if the situation calls for it. In all other situations, I will use Firebird Server.

Does the DBMS save the compiled queries from prepared statements, in the form of stored procedures on the DBMS server?

Does the DBMS save the compiled queries from prepared statements in JDBC in the form of stored procedures on the DBMS server? I thought that the prepared statement is a JDBC concept rather than a DBMS one, so I was wondering how it is implemented on the DBMS server side.
My question comes from Why do Parameterized queries allow for moving user data out of string to be interpreted?
I read DIfference Between Stored Procedures and Prepared Statements..?, but don't find my answer.
Thanks.
I am interested in PostgreSQL, MySQL, and SQL Server, in that order.
No, prepared statements are not implemented as stored procedures in any RDBMS.
Prepared statements are parsed and saved on the server side so they can be executed multiple times with different parameter values, but they are not saved in the form of a stored procedure. They are saved in some implementation-dependent manner, typically as an in-memory object internal to the database server process. Such objects are not callable like a stored procedure, and they disappear when the session ends.
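You can watch this happen at the SQL level in PostgreSQL, the first DBMS on your list; a short sketch, assuming a hypothetical users table:

    -- Parsed once; the statement lives only in this session's memory.
    PREPARE get_user (integer) AS
        SELECT * FROM users WHERE id = $1;

    -- Executed repeatedly with different parameter values.
    EXECUTE get_user(1);
    EXECUTE get_user(42);

    -- The statement shows up in the session-local view
    -- pg_prepared_statements, not in the stored-procedure catalogs.
    SELECT name, statement FROM pg_prepared_statements;

    -- Gone after DEALLOCATE, or automatically when the session closes.
    DEALLOCATE get_user;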
Re your comment:
Consider MySQL, for example.
MySQL in the very early days did not support prepared statements, so the MySQL JDBC driver has an option to "emulate" them. In emulation mode, the SQL query string is saved in the JDBC client when you create a PreparedStatement; nothing is sent to the database server yet. Then, when you bind parameters and call execute(), the driver copies the parameter values into the SQL string and sends the resulting query as ordinary SQL.
I don't know whether a similar feature exists in other brands of JDBC driver.
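For contrast, modern MySQL (4.1 and later) does have true server-side prepared statements, which you can exercise directly from SQL; a quick sketch against a made-up orders table:

    -- Parsed once on the server and scoped to the current session:
    PREPARE get_order FROM 'SELECT * FROM orders WHERE id = ?';
    SET @order_id = 7;
    EXECUTE get_order USING @order_id;
    DEALLOCATE PREPARE get_order;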

gpfdist vs gpload in Greenplum

I am setting up Greenplum for the first time and am following the documentation. I want to set up a connection from SQL to a Greenplum database, and I am currently figuring out the best way to achieve this. I came across gpfdist and gpload.
How are the two different? Both use external tables, both work on the segment nodes, and both are used for parallel loading. So is there any advantage to using one over the other?
Answering your question about "I want to set up a connection from SQL to a Greenplum database": it's ambiguous which SQL database you are referring to.
Also, there are no direct connectivity drivers available to connect a non-Greenplum database to a Greenplum database.
However, if you want to migrate data from Oracle to Greenplum, you can use Informatica's Fast Clone tool.
To answer the second part of your question, regarding gpfdist and gpload: gpfdist is a file-distribution process which runs on a host system and serves files in parallel to many segments. When defining an external table to read from or write to a file, you need to specify which protocol will serve the file; in your case that is gpfdist. There are other protocols too, such as file, gphdfs, and http.
gpload is a wrapper utility which makes your work easier by automatically creating the gpfdist processes and external tables.
Also be aware that gpload only creates readable external tables.
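To make the difference concrete, here is a minimal sketch of the manual gpfdist route (the host, port, file name, and table definitions are all placeholders):

    -- First start the file server on the ETL host, e.g.:
    --   gpfdist -d /data/loads -p 8081

    -- Then point a readable external table at it:
    CREATE EXTERNAL TABLE ext_sales (
        sale_id integer,
        amount  numeric
    )
    LOCATION ('gpfdist://etl-host:8081/sales.csv')
    FORMAT 'CSV' (HEADER);

    -- The segments pull the file in parallel during the load:
    INSERT INTO sales SELECT * FROM ext_sales;

gpload generates essentially these steps for you from its YAML control file.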
gpfdist and gpload amount to the same thing. With gpfdist you do everything manually, while with gpload you can automate the activities by making entries in a config (YAML) file.
gpload is a wrapper around gpfdist, so when you load data via gpload it uses gpfdist internally.
If you want to load or migrate data from any other RDBMS to Greenplum using an ETL or migration tool, the tool will use a normal COPY command by default. If you enable gpload (nowadays the latest versions of most ETL and migration tools support a gpload feature when loading data into Greenplum), it will load the data in parallel by using gpfdist internally.

Delphi SQLite3 using ZeosLib, how to share a database?

I am using Delphi 7 and ZeosLib 6.6.6 to access SQLite3 database.
What is the best practice for using a shared database?
I plan to put the database file (data.db3) in a shared location, and the Delphi application is on the local desktop computer of every user.
I want to know how to manage database locking, for example detecting whether the database is being locked by a certain user, things like that.
Thanks.
SQLite3 handles database sharing by default, locally on the same computer. You have nothing special to do; just open the database several times from your hard drive. Of course, this does have an overhead, and locking will make it slower than access from one unique process.
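For that local case, a sketch of the usual lock-handling idiom at the SQL level (the timeout value and the accounts table are made-up examples; PRAGMA busy_timeout needs a reasonably recent SQLite build, otherwise the equivalent sqlite3_busy_timeout() C API call does the same):

    -- Wait up to 5 seconds for a competing writer instead of failing
    -- immediately with SQLITE_BUSY:
    PRAGMA busy_timeout = 5000;

    -- Take the write lock up front; if another connection is already
    -- writing, this fails fast, which is how you detect the lock:
    BEGIN IMMEDIATE;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    COMMIT;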
But if by "a shared location" you mean a network drive, as your question suggests, it probably won't work as expected.
Locking files over a network is not safe (at least in the Windows world). See http://www.sqlite.org/cvstrac/wiki?p=SqliteNetwork
You should instead rely on a true client/server approach, still possible with SQLite3 on the server and clients accessing it over the network. See e.g. our RESTful server using JSON and several protocols.
You can put a SQLite database on a shared network resource, but according to the SQLite documentation that is not recommended. The main reason is that SQLite cannot reliably manage locking on a shared network resource.
If you need multi-user access to a SQLite database, then you may consider using middleware, like Data Abstract. As a driver for Data Abstract you can use our library AnyDAC. Some articles: Using SQLite with AnyDAC and Using Data Abstract with AnyDAC. In the first article, check "Connecting to SQLite database from Delphi application" for usage cases, including how to set up concurrent access.

offline web application design recommendation

I want to know which is the best architecture to adopt for this case:
I have many shops that connect to a web application developed using Ruby on Rails.
The internet is not reachable all the time, so the solution was to develop an offline system which requires installing a local copy of the distant database.
All of this was already developed.
Now what I want to do :
Always work on the local copy of the database.
Any change to the local database should be synchronized with the distant database.
All the local copies should have the same data as the other local copies.
To resolve this problem I thought about using JMS-like software, possibly RabbitMQ.
This consists of pushing every SQL request into a JMS queue to be executed on the distant instance of the application, which applies it to the distant DB and pushes the statement into another queue that is read by all the local instances. This seems complicated and would probably slow down the application.
Is there a design or recommendation that I should apply to resolve this kind of problem?
You can do that, but essentially you are developing your own replication engine. Those things can be a bit tricky to get right (what happens if m1 and m3 are executed on replica r1, but m2 isn't?). I wouldn't want to develop something like that unless you are sure you have the resources to make it work.
I would look into an existing off-the-shelf replication solution. If you are already using a SQL database, it probably has some support for it; check the replication section of the MySQL documentation for more details if you are using MySQL.
Alternatively, if you are willing to explore other backends, I heard that CouchDB has great support for replication. I also heard of people using git libraries to do that sort of thing.
Update: After your comment, I realize you already use MySQL replication and are looking for a way to re-sync the databases after being offline.
Even in that case RabbitMQ doesn't help you at all, since it requires a constant connection to work, so you are back to square one. The easiest solution would be to write all the changes (SQL commands) into a text file at the remote location; when you get the connection back, copy that file (scp, FTP, email, or whatever) to the master server, run all the commands there, and then resync all the replicas.
Depending on your specific project you may also need to make sure there are no conflicts when running commands from different remote locations, but there is no general technical solution to this. Again, depending on the project, you may want to cancel one of the conflicting transactions, notify the users that it happened, and so on.
I would recommend taking a look at CouchDB. It's a non-SQL database that does exactly what you are describing, automatically. It's used especially in phone applications that often don't have internet or data connectivity. The idea is that you have a local copy of a CouchDB database and one or more remote CouchDB databases. The CouchDB server then takes care of the replication between the distributed systems, and you always work off your local database. This approach is nice because you don't have to build your own distributed replication engine. For more details, take a look at the 'Distributed Updates and Replication' section of their documentation.
