I'm trying to implement a driver that connects a webserver to Oracle using the OCI API.
Is it possible to send multiple queries at the same time (in the same string) and receive the responses as multiple result sets?
Thanks :)
Connection pooling can be used for different database connections. Multiple connection pools can also be used when different priorities are assigned to users; this way, different service-level guarantees can be implemented.
Try OCISessionPoolCreate.
I am planning to use neo4j for an event management system. The involved entities are events, places, persons, organizations, and so forth. But in order to keep each org's data separate, I plan to create a separate DB instance for each org. Because of this separation of database instances, the 'Place' nodes are likely to be repeated across these multiple db instances.
So, now, is it possible to aggregate events based on Place nodes from all db instances? Or do I have to build my own custom aggregation, like map-reduce?
Thanks in advance for helping me out on this..
In Neo4j 4.0, if you have a license for enterprise edition, you can leverage Neo4j Fabric, which should allow you to do exactly this: connect to a proxy instance that is configured to see the other running db instances (which may run on the same Neo4j DBMS as the proxy instance, or on separate servers/clusters).
Then you can query across the graphs, aggregating and working with the result set across them as needed.
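For illustration, here is a minimal sketch of such a cross-graph aggregation from Java, using the Neo4j Java driver (4.x) against the Fabric proxy. The URI, the credentials, the Fabric database name ("fabric"), and the Event/Place data model are all assumptions made up for the example:

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.SessionConfig;

public class FabricAggregationSketch {
    public static void main(String[] args) {
        // Connect to the Fabric proxy instance (placeholder URI and credentials)
        try (Driver driver = GraphDatabase.driver("neo4j://fabric-proxy:7687",
                AuthTokens.basic("neo4j", "secret"));
             // Sessions must target the Fabric database, assumed here to be named "fabric"
             Session session = driver.session(SessionConfig.forDatabase("fabric"))) {
            // Run the same pattern against every configured graph, then
            // aggregate over the combined result set
            String query =
                  "UNWIND fabric.graphIds() AS gid "
                + "CALL { "
                + "  USE fabric.graph(gid) "
                + "  MATCH (e:Event)-[:HELD_AT]->(p:Place) "
                + "  RETURN p.name AS place, count(e) AS events "
                + "} "
                + "RETURN place, sum(events) AS totalEvents "
                + "ORDER BY totalEvents DESC";
            session.run(query).forEachRemaining(record ->
                System.out.println(record.get("place").asString()
                        + ": " + record.get("totalEvents").asLong()));
        }
    }
}
```

The `USE fabric.graph(gid)` clause routes each execution of the subquery to one of the configured graphs, and the outer `RETURN` aggregates across all of them, so Place nodes repeated in different db instances get rolled up by name.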
While creating a basic workflow using KNIME and PostgreSQL, I have encountered problems with selecting the proper node for fetching data from the db.
In the node repository we can find at least:
1) PostgreSQL Connector
2) Database Connector
3) Database Reader
Actually, we can do the same using 3) alone, or by connecting either 1) or 2) to the input of node 3).
I assumed there are some hidden advantages, like improved performance with complex queries or better overall stability, but on the other hand we are using exactly the same database driver anyway.
There is a big difference between the Connector Nodes and the Reader Node.
The Database Reader reads data into KNIME; the data is then on the machine running the workflow. This can be a bad idea for big tables.
The Connector nodes do not. The data remains where it is (usually on a remote machine in your cluster). You can then connect Database nodes to the connector nodes. All data manipulation then happens within the database, and no data is loaded onto your machine (unless you use the output port preview).
As for the difference between the other two:
The PostgreSQL Connector is just a special case of the Database Connector that comes with a pre-set configuration. You can make the same configuration with the Database Connector, which also lets you choose more detailed options for non-standard databases.
One advantage of using 1) or 2) is that you only need to enter the connection details for a database once in a workflow, and can then use multiple reader or writer nodes. I'm not sure if there is a performance benefit.
1) offers simpler connection details than 2), thanks to the bundled Postgres JDBC driver.
I am building a Neo4j OGM service in Java. I need to connect to 2 Neo4j servers from my service to handle failover and replication. Is it possible to create multiple sessions, each towards a different Neo4j server, from one OGM service?
You can, in theory, create multiple SessionFactory instances pointing to different database instances and perform each operation on both. Just use Java configuration instead of a property file (this is true for OGM only; with SDN it would not be that simple).
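A minimal sketch of that setup, assuming OGM 3.x, placeholder bolt URIs and credentials, and a hypothetical `com.example.domain` package containing the mapped entities:

```java
import org.neo4j.ogm.config.Configuration;
import org.neo4j.ogm.session.Session;
import org.neo4j.ogm.session.SessionFactory;

public class DualServerOgm {
    // One SessionFactory per Neo4j server, built from Java configuration
    // rather than ogm.properties (URIs and credentials are placeholders)
    private final SessionFactory primary = new SessionFactory(
            new Configuration.Builder()
                    .uri("bolt://neo4j-primary:7687")
                    .credentials("neo4j", "secret")
                    .build(),
            "com.example.domain");

    private final SessionFactory secondary = new SessionFactory(
            new Configuration.Builder()
                    .uri("bolt://neo4j-secondary:7687")
                    .credentials("neo4j", "secret")
                    .build(),
            "com.example.domain");

    public void saveToBoth(Object entity) {
        Session s1 = primary.openSession();
        Session s2 = secondary.openSession();
        s1.save(entity);
        // The second write can fail independently of the first;
        // the caveats below are exactly about this kind of divergence.
        s2.save(entity);
    }
}
```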
There are multiple things to look out for:
you can't rely on the auto-generated ids, as they could be different on each database instance
when writing to 2 instances, the write to the first instance may succeed while the write to the second fails (for various reasons: network issues, latencies, concurrency, etc.), or vice versa; your code would need to handle that somehow
concurrency in general: queries dependent on the state of the database may behave differently on the two instances, because one of them has received more updates than the other (the second is "behind")
Because of all these reasons, I wouldn't recommend such a solution at all.
You would be better off with either Neo4j's HA or a causal cluster. See the website regarding licensing.
We have a problem with our BizTalk applications, whose receive locations and send ports connect to an Oracle database: we run out of connections.
I don't know why, but the original developers used both WCF-Custom and WCF-OracleDB; I think both use ODP.NET as the ADO.NET provider.
Since, in ADO.NET and certainly in ODP.NET, connection pools are keyed on the connection string (an exact string match, I think), a connection pool could logically be shared among send ports and receive locations. Since we don't have control over the connection string itself, we have to assume that the adapters generate connection strings consistently from one port to the other.
My questions are:
1- Am I right to assume that receive locations and send ports can share connection pools, as long as they run on the same host instance, and
2- Would it be a good idea to group similar ports and locations (the ones using the same connection strings) into one host instance?
Thank you,
Michel
Based on the website below, the connection pool is indeed determined by the uniqueness of the connection string:
http://www.connectionstrings.com/oracle-data-provider-for-net-odp-net/
(See "Specifying Pooling parameters")
One way to tackle this problem, or at least get better insight into it, would be to enable ODP.NET tracing and performance counters. This will give you a clear view of how many connections are being used in the pool(s).
For more information on how to enable those, see: http://blog.ilab8.com/2011/09/02/odp-net-pooling-and-connection-request-timed-out/
Set Pooling=false in the connection string.
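For reference, pooling in ODP.NET is controlled entirely through connection-string keywords, which is also why pools are keyed on the exact string. A sketch, with placeholder data source and credentials:

```
Pooled (the default), with explicit pool bounds:
Data Source=ORCL;User Id=scott;Password=tiger;Pooling=true;Min Pool Size=5;Max Pool Size=50;

Pooling disabled, as suggested above:
Data Source=ORCL;User Id=scott;Password=tiger;Pooling=false;
```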
We are trying out an option of having multiple parallel threads, meaning consuming multiple streams for different sets of keywords over a single established connection. Is this possible, or should we establish a new connection if we want to consume another set of data? Can anyone give guidance on this?
Regards,
Balaji D
This is documented on DataSift's Multiple Streaming docs page. Some of the official API Client Libraries also support multi-streaming over WebSockets.
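If you want to experiment without one of the official client libraries, below is a rough Java (11+) sketch of multi-streaming over a single WebSocket connection. The endpoint URL and the subscribe-message format are assumptions recalled from DataSift's docs; verify both against the Multiple Streaming page:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class MultiStreamSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials and endpoint; check DataSift's docs for the real values
        URI endpoint = URI.create(
                "wss://websocket.datasift.com/multi?username=USER&api_key=KEY");

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(endpoint, new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket,
                                                     CharSequence data, boolean last) {
                        // Interactions from all subscribed streams arrive on this
                        // single connection, tagged with their stream hash
                        System.out.println(data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                })
                .join();

        // Subscribe to several streams (one per keyword set) over the one
        // connection; the hashes are placeholders for compiled CSDL stream hashes
        ws.sendText("{\"action\":\"subscribe\",\"hash\":\"STREAM_HASH_1\"}", true).join();
        ws.sendText("{\"action\":\"subscribe\",\"hash\":\"STREAM_HASH_2\"}", true).join();

        Thread.sleep(60_000); // keep the demo alive long enough to receive data
    }
}
```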