Groovy SQL DataSource Connection handling, withTransaction - grails

I've been quite confused at groovy SQL documentation as far as the connection handling is concerned. I need to understand the best practices or how to actually handle the underlying connection properly.
Let's say I have an object like this.
Sql sql = new Sql(dataSource);
sql.withTransaction {
.....
}
Now, should I close the connection, e.g. call sql.close() in a finally block? Or can I just leave it alone?
Now consider this:
Sql sql = Sql.newInstance(connection parameters);
sql.withTransaction {...}
Is sql.close() required now?
Now here is another variant.
Sql sql // (created with any of the constructors)
sql.[do something] // without a withTransaction ...
Do I have to manage the connection myself this time, or can I still leave it alone without calling sql.close()?
In all of the above cases I'm writing my code in a Grails service, which may or may not be transactional.
Thanks.

AFAIK, all methods do the connection handling themselves.
You only need to deal with it yourself if you used the constructor that receives a Connection.
If you don't need transactions and you want to reuse the connection, you could use Sql#cacheConnection(Closure).
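To make that concrete, here is a minimal sketch of the three cases (assuming a Grails-injected dataSource bean; the queries, table names and the url/user/password/driver placeholders are purely illustrative):

import groovy.sql.Sql

// Case 1: Sql wrapping a DataSource. Every statement borrows a connection
// from the pool and releases it again, so close() is optional.
def sql = new Sql(dataSource)
sql.withTransaction {
    sql.executeInsert("insert into logs(message) values(?)", ["hello"])
}
// sql.close() here would be harmless, but is not required.

// Case 2: Sql.newInstance(...) opens its own connection. You own it, so
// close it when you are done.
def sql2 = Sql.newInstance(url, user, password, driverClassName)
try {
    sql2.withTransaction {
        sql2.executeUpdate("update logs set message = ? where id = ?", ["hi", 1])
    }
} finally {
    sql2.close()
}

// Case 3: no transaction, but reuse a single connection for several statements.
sql.cacheConnection {
    def count = sql.firstRow("select count(*) as c from logs").c
}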

Related

How to call function in sql server with entity framework

No idea how to do this.
The client I work for has a database with functions (not stored procedures!) that have a lot of complex logic that we must use since there is no time to reproduce them in C# code. We use database first.
I honestly don't know where to start! I have added the function with "Update the Model" via the edmx, and I see the function appear in the storage section of the xml configuration.
But now? With stored procedures, EF creates methods in the context class. But apparently not so with functions. I do not know what to do from here. Do I manually add a method for the function in this context class?
Please help!

Does groovy.sql.Sql.firstRow Close the Connection After Execution?

In MyService I have the following:
import groovy.sql.Sql
class MyService {
Sql groovySql
def serviceMethod(){
groovySql.firstRow("some query.....")
}
}
In resources.groovy, groovySql is injected as follows:
groovySql(groovy.sql.Sql, ref('dataSource'))
This is a Grails 2.4.5 application. Now, the question is: when serviceMethod is called, is the connection closed automatically?
Every method in Sql creates and releases resources if necessary.
Under the covers the facade hides away details associated with getting
connections, constructing and configuring statements, interacting with
the connection, closing resources and logging errors.
If you create a Sql with a DataSource, it will get a new connection every time, and close it at the end of the operation.
If we are using a DataSource and we haven't enabled statement caching,
then strictly speaking the final close() method isn't required - as
all connection handling is performed transparently on our behalf;
however, it doesn't hurt to have it there as it will return silently
in that case.
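Applied to the service in the question, that means the code below is fine as it stands; a close() call would simply be a silent no-op for a dataSource-backed Sql (a sketch, the query and column names are made up):

import groovy.sql.Sql

class MyService {
    Sql groovySql   // wraps the Grails dataSource, as wired up in resources.groovy

    def serviceMethod() {
        // firstRow borrows a connection from the dataSource and releases it
        // before returning, so nothing is left open here.
        def row = groovySql.firstRow("select id, name from products where id = ?", [1])
        // groovySql.close() would return silently in this setup.
        return row?.name
    }
}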

Connection Pooling with Apache DBCP

I want to use Apache Commons DBCP to enable connection pooling in a Java application (no container-provided DataSource in this case). On many sites across the web, including the Apache site, usage of the library is based on this snippet:
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
ds.setUsername("scott");
ds.setPassword("tiger");
ds.setUrl(connectURI);
Then you get your DB connections through the getConnection() method. But on other sites (the Apache site as well) the DataSource instance is created like this:
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(connectURI,null);
PoolableConnectionFactory poolableConnectionFactory = new PoolableConnectionFactory(connectionFactory);
ObjectPool objectPool = new GenericObjectPool(poolableConnectionFactory);
PoolingDataSource dataSource = new PoolingDataSource(objectPool);
What's the difference between them? Am I getting connection pooling just by using BasicDataSource, or do I need an instance of PoolingDataSource to get connection pooling? Is BasicDataSource thread-safe (can I use it as a class attribute), or do I need to synchronize access to it?
BasicDataSource is everything you need for basic needs.
Internally it creates a PoolingDataSource and an ObjectPool.
PoolingDataSource implements the DataSource interface using a provided ObjectPool: the PoolingDataSource takes care of the connections, and the ObjectPool takes care of holding and counting those objects.
I would recommend using BasicDataSource.
Only if you really need something special should you reach for PoolingDataSource with another ObjectPool implementation, but that will be a very rare and specific case.
BasicDataSource is thread-safe, but you should take care to use appropriate accessors rather than accessing protected fields directly to ensure thread-safety.
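For completeness, a minimal BasicDataSource setup with explicit pool sizing might look like this (a sketch assuming commons-dbcp2, where the limit is called maxTotal; in DBCP 1.x the equivalent property is maxActive; connectURI is the same placeholder as in the question):

import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
ds.setUrl(connectURI);
ds.setUsername("scott");
ds.setPassword("tiger");
ds.setInitialSize(2);   // connections opened eagerly at startup
ds.setMaxTotal(10);     // upper bound on pooled connections

// The pool hands out a proxy; closing it returns the underlying
// connection to the pool instead of physically closing it.
Connection con = ds.getConnection();
try {
    // ... use con ...
} finally {
    con.close();
}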
This is more of a (big) supporting comment on ivi's answer above, but I am posting it as an answer because of the need to add snapshots.
BasicDataSource is everything you need for basic needs. Internally it creates a
PoolingDataSource and an ObjectPool.
I wanted to look at the code in BasicDataSource to substantiate that statement (which turns out to be true). I hope the following snapshots help future readers.
The following happens the first time one calls basicDataSource.getConnection(); on that first call the underlying DataSource is created as follows.
First, the raw connectionFactory is created.
Next comes the GenericObjectPool ('connectionPool') that is used in the remaining steps.
The above two (connectionFactory + an object pool) are then combined to create a PoolableConnectionFactory. Significantly, during the creation of the PoolableConnectionFactory, the connectionPool is linked with the connectionFactory.
Finally, a PoolingDataSource is created from the connectionPool.
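In code, the wiring those snapshots walk through corresponds roughly to the following (a sketch using commons-dbcp2 / commons-pool2 class names; connectURI is a placeholder, and the setPool call is the connectionPool-to-connectionFactory link highlighted above):

import org.apache.commons.dbcp2.DriverManagerConnectionFactory;
import org.apache.commons.dbcp2.PoolableConnection;
import org.apache.commons.dbcp2.PoolableConnectionFactory;
import org.apache.commons.dbcp2.PoolingDataSource;
import org.apache.commons.pool2.impl.GenericObjectPool;

// 1. the raw connection factory
DriverManagerConnectionFactory connectionFactory =
        new DriverManagerConnectionFactory(connectURI, null);

// 2. a factory that wraps raw connections as poolable connections
PoolableConnectionFactory poolableConnectionFactory =
        new PoolableConnectionFactory(connectionFactory, null);

// 3. the object pool ('connectionPool') that holds and counts the connections
GenericObjectPool<PoolableConnection> connectionPool =
        new GenericObjectPool<>(poolableConnectionFactory);

// 4. link the pool back to the factory
poolableConnectionFactory.setPool(connectionPool);

// 5. the DataSource facade over the pool
PoolingDataSource<PoolableConnection> dataSource =
        new PoolingDataSource<>(connectionPool);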

Injecting connection strings vs IDbConnection

What are the trade-offs for injecting connection strings vs. an instance of IDbConnection?
I use StructureMap to inject various services into my ASP.NET MVC application, most of which require database access for LINQ-to-SQL queries. Injecting an IDbConnection seems more testable and easier to configure for IoC than a generic connection string parameter, but I'm worried about open connections hanging around if I don't explicitly wrap the connection in a using block.
Are there any connection pooling advantages or disadvantages I should be aware of?
Injected Connection String
using (var con = new SqlConnection(InjectedConnectionString))
{
con.Execute("INSERT INTO Logs (...) VALUES (...)");
using (var db = new MyDataContext(con))
{
var records = from p in db.Products
select p;
}
}
Injected IDbConnection
con.Execute("INSERT INTO Logs (...) VALUES (...)");
using (var db = new MyDataContext(InjectedConnection))
{
var records = from p in db.Products
select p;
}
A feature of any moderately sophisticated IoC container (structuremap) is being able to control the lifetimes of objects. By default, structuremap uses a Transient lifetime. This means it creates a new instance per object graph. In practice, this often is the same as per-web-request (unless you sprinkle your code with usages of container.GetInstance<T>()).
By using structuremap to inject precious resources like database connections you gain control over how long they live. A single resource can (if you choose) be reused throughout an entire web request, or created fresh for every usage.
Furthermore, these choices (as well as configuration) are now externalized into the registry instead of sprinkling them through your code. If you have to change how the connection is created, you only have to look one place. Classes with a single responsibility are always preferred.
As for your connection pooling concerns, no IoC container will involve itself in details like connection pooling. They do, however, help with lifetimes. StructureMap will call Dispose() on any IDisposable object (well, it's actually the interpreter that calls it).
Edit: Again on connection pooling, each lifetime carries its own rules for how and when objects are disposed. Transient relies on the CLR to dispose of objects, whereas HttpRequestScoped deterministically disposes of them at the end of each request. Using HttpRequestScoped would prevent you from maxing out the number of connections.

Registering Transaction Event Handlers in neo4j

I'm currently using Spring Data with Neo4j and have subclassed the SpringRestGraphDatabase to allow the registration of specific transaction event handlers.
I call the registerTransactionEventHandler method to do so. Unfortunately I always get the following exception:
Caused by: java.lang.UnsupportedOperationException: null
at org.neo4j.rest.graphdb.AbstractRemoteDatabase.registerTransactionEventHandler(AbstractRemoteDatabase.java:52) ~[neo4j-rest-graphdb-1.6.jar:1.6]
at org.neo4j.rest.graphdb.RestGraphDatabase.registerTransactionEventHandler(RestGraphDatabase.java:28) ~[neo4j-rest-graphdb-1.6.jar:1.6]
By looking closely at the AbstractRemote I see that it always throws an exception:
public <T> TransactionEventHandler<T> registerTransactionEventHandler( TransactionEventHandler<T> tTransactionEventHandler ) {
throw new UnsupportedOperationException();
}
The RestGraphDatabase doesn't provide an implementation for the register method, hence the exception. I'm not sure what alternative to use, especially as I'm extending SpringRestGraphDatabase.
Is there a cleaner alternative?
(I'm using v2.1.0.M1)
Yeah, exposing these handlers over the network would be very costly. Depending on what you want to do, I would suggest writing a custom plugin to expose your operations and registering what you need via a REST endpoint; see http://docs.neo4j.org/chunked/snapshot/server-plugins.html
