I have a web service built with Grails that connects to a MySQL database. Since I upgraded to Grails 2.4.3 I've had problems with the connection pool not releasing connections, resulting in an exception:
org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-8080-exec-216] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:50; busy:50; idle:0; lastwait:30000]
This is my DataSource.groovy:
dataSource {
    url = "jdbc:mysql://..."
    username = "xxx"
    password = "xxx"
    pooled = true
    properties {
        maxActive = 50
        maxAge = 10 * 60000
        timeBetweenEvictionRunsMillis = 5000
        minEvictableIdleTimeMillis = 60000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        validationQuery = "SELECT 1"
    }
}

dataSource_survey {
    url = "jdbc:mysql://..."
    username = "xxx"
    password = "xxx"
    pooled = true
    properties {
        maxActive = 50
        maxAge = 10 * 60000
        timeBetweenEvictionRunsMillis = 5000
        minEvictableIdleTimeMillis = 60000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        validationQuery = "SELECT 1"
    }
}
I've read the Grails JIRA and some people seem to have similar problems, but I haven't been able to fix it with the information provided there.
Being able to see the status of the connection pool would help debugging a great deal. How can I check the status of the connection pool to see how many connections are idle/busy at runtime?
The connection pool is registered as a javax.sql.DataSource but that interface only has methods for getting a Connection (one with username/password and one without), accessing a log writer, and getting/setting the login timeout. Everything else is left to the vendor to decide, and there's very little commonality between vendors in their methods for configuring pools initially, and working with and monitoring them throughout the app run.
So you really need to find out which library is used for the pool and use its API. Ideally that would be as simple as accessing the dataSource bean (that's easy: dependency-inject it into a service/controller/etc. like any other bean, as a class-scope field, in this case def dataSource) and printing its class name. But we wrap the dataSource in a few proxies to add some important behaviors, so the real pool class isn't easy to get at that way.
You're in luck though - for cases like this we leave the original unproxied instance alone and register it as the dataSourceUnproxied bean, which you can also dependency-inject (just don't get connections from it; use it only for information).
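For example, assuming the pool really is the Tomcat JDBC pool described below (print dataSourceUnproxied.getClass().name first to check), a rough sketch of a status check in a service might look like this; the service name is arbitrary:

    import org.apache.tomcat.jdbc.pool.DataSource as TomcatDataSource

    class PoolStatusService {

        // Grails injects the unproxied pool by bean name
        def dataSourceUnproxied

        Map poolStatus() {
            // the cast is an assumption; verify the class name before relying on it
            TomcatDataSource ds = (TomcatDataSource) dataSourceUnproxied
            def pool = ds.pool   // null until the first connection has been borrowed
            [busy: pool?.active, idle: pool?.idle, size: pool?.size, waitCount: pool?.waitCount]
        }
    }

For the second datasource the equivalent bean should presumably be dataSourceUnproxied_survey, but verify the bean name in your Grails version.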
For a long time we used commons-pool to manage datasources, but a while back we switched to the Tomcat JDBC Pool because benchmarks showed it to be faster than the alternatives they compared (including C3P0), and its configuration properties are based on commons-pool's, so it was basically a drop-in replacement with a significant performance boost and more configurability.
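A lower-effort alternative, since the Tomcat pool can expose itself over JMX: turn on the JMX flags (jmxExport at the dataSource level, jmxEnabled in the pool properties - both appear in other configs quoted on this page) and watch the busy/idle counts from JConsole or VisualVM. A sketch:

    dataSource {
        // export the pool's MBean so it is visible in JConsole/VisualVM
        jmxExport = true
        properties {
            jmxEnabled = true
            // ...existing pool settings unchanged...
        }
    }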
I need to send emails from my Play 2.6.x server. I found that I could use play-mailer (https://github.com/playframework/play-mailer#usage).
Question 1 - Do I need a separate SMTP server, or is play-mailer an SMTP server itself?
Question 2 - At the moment, I am running the application on localhost but I'll eventually deploy it. Would my application work if I just use localhost in the configuration below?
play.mailer {
  host = localhost // (mandatory)
  port = 25 // (defaults to 25)
  ssl = no // (defaults to no)
  tls = no // (defaults to no)
  tlsRequired = no // (defaults to no)
  user = null // (optional)
  password = null // (optional)
  debug = no // (defaults to no; to take effect you also need to set the log level to "DEBUG" for the application logger)
  timeout = null // (defaults to 60s, in milliseconds)
  connectiontimeout = null // (defaults to 60s, in milliseconds)
  mock = true // (defaults to no; will only log all the email properties instead of sending an email)
}
Question 3 - Once I deploy the application in the cloud (say AWS), do I just need to change host in the above configuration to make it work?
Question 4 - I am supposed to pass a username and password in the play.mailer config. Considering that I version-control my application.conf, is it safe to enter the username and password in the file?
Answer 1:
You will need an SMTP server for play-mailer to connect to. Its address is generally what you'll put in host in production.
Answer 2:
Yes, it should work just like that; I think you'll have to set mock = yes, though.
Answer 3:
If you decide to use AWS SES (https://aws.amazon.com/ses/), your conf will look something like this:
play.mailer {
  host = "email-smtp.us-east-1.amazonaws.com" // (mandatory) - url from amazon
  port = 465 // (defaults to 25)
  ssl = yes // (defaults to no)
  tls = no // (defaults to no)
  tlsRequired = no // (defaults to no)
  user = "id_from_amazon"
  password = "password_from_amazon"
  debug = no // (defaults to no)
  timeout = null // (defaults to 60s, in milliseconds)
  connectiontimeout = null // (defaults to 60s, in milliseconds)
  mock = no // for actually sending emails; set it to yes if you want to mock
}
Answer 4:
The security aspect depends on the environment you're running your Play server in. If application.conf is likely to be seen by somebody, then you can use an environment variable instead of writing the password in the file:
password = ${APP_MAILER_PASSWORD}
and then set APP_MAILER_PASSWORD as an environment variable. Again, this isn't secure if someone can access the console of your server - but not much is at that point.
I need to set up a configuration for many similar environments. Each will have a different hostname that follows a pattern, e.g. env1, env2, etc.
I can use a pool per environment and a single virtual server with an iRule that selects a pool based on hostname.
What I'd prefer to do is dynamically generate and select the pool name from the requested hostname, rather than listing every pool in a switch statement. That's easier to maintain and automatically handles new environments.
The code might look like:
when HTTP_REQUEST {
    pool [string tolower [HTTP::host]]
}
and each pool name matches the hostname.
Is this possible? Or is there a better method?
EDIT
I've expanded my hostname pool selection. I'm now trying to include the port number. The new rule looks like:
when HTTP_REQUEST {
    set lb_port "[LB::server port]"
    set hostname "[string tolower [getfield [HTTP::host] : 1]]"
    log local0.info "Pool name $hostname-$lb_port-pool"
    pool "$hostname-$lb_port-pool"
}
This is working, but I'm seeing no-such-pool errors in the logs because somehow a request with port 0 is hitting the rule. It seems to be the first request, followed by a request with the legitimate port.
Wed Feb 17 20:39:14 EST 2016 info tmm tmm[6519] Rule /Common/one-auto-pool-select-by-hostname-port <HTTP_REQUEST>: Pool name my.example.com-80-pool
Wed Feb 17 20:39:14 EST 2016 err tmm1 tmm[6519] 01220001 TCL error: /Common/one-auto-pool-select-by-hostname-port <HTTP_REQUEST> - no such pool: my.example.com-0-pool (line 1) invoked from within "pool "$hostname-$lb_port-pool""
Wed Feb 17 20:39:14 EST 2016 info tmm1 tmm[6519] Rule /Common/one-auto-pool-select-by-hostname-port <HTTP_REQUEST>: Pool name my.example.com-0-pool
What is causing the port 0 request? And is there any workaround? e.g. could I test for port 0 and select a default port or ignore it?
ONE MORE EDIT
I rebuilt the virtual server, and now the error is gone. The rebuild was only to rename it, though; I'm fairly sure I recreated the settings exactly the same.
Yes, you can specify the pool name in a string. What you have there would work as long as you have a pool with that same name. Though it doesn't show an example of doing it this way, you can also check out the pool wiki page on DevCentral for more information.
As an aside, in my environment I generally create pools with the suffix _pool to distinguish them from other objects when looking at config files. So in my iRules, I would do something like this (essentially the same thing):
when HTTP_REQUEST {
    pool "[string tolower [HTTP::host]]_pool"
}
The simple case mentioned by Michael works. I'd recommend removing the port value if present:
when HTTP_REQUEST {
    pool "pool_[string tolower [getfield [HTTP::host] : 1]]_[LB::server port]"
}
Keep in mind that clients might send a partial hostname. If the DNS search path is set to example.org then the client might hit shared/ which maps to shared.example.org, but the HTTP::host header will just have shared. Some API libraries may append the port number even if it's on the default port. Simple code might not send a Host header. Malicious code might send completely bogus Host headers. You could trap these cases with catch.
You can also use a datagroup to map hostnames to pools. This allows multiple hosts to use the same pool. Sample code:
when HTTP_REQUEST {
    set host [string tolower [getfield [HTTP::host] ":" 1]]
    if { $host == "" } {
        # if there's no Host header, pull from virtual server name
        # we use: pool_<virtualserver>_PROTOCOL
        set host [getfield [virtual name] _ 2]
    } elseif { not ($host contains ".") } {
        # if Host header does not contain a dot, assume example.org
        set host $host.example.org
    }
    set pool [class match -value $host[HTTP::uri] starts_with dg_shared.example.org]
    if { $pool ne "" } {
        set matched [class match -name $host[HTTP::uri] starts_with dg_shared.example.org]
        set log(matched) $matched
        set log(pool) $pool
        if { [catch { pool $pool }] } {
            set log(reason) "Failed to Connect to Pool"
            call hsllog log
            call errorpage 404 $log(reason) "https://[HTTP::host][HTTP::uri]" log
        }
    } else {
        call errorpage 404 "No Pool Found" "https://[HTTP::host][HTTP::uri]" log
    }
}

when SERVER_CONNECTED {
    if { !($pool ends_with "_HTTPS") } {
        SSL::disable serverside
    }
}
This allows host.example.org/path1 to be on a different pool than host.example.org or host.example.org/path2 by including separate entries in the datagroup. I didn't include the hsllog and errorpage procs here. They dump the log array as well as the other passed parameters.
We then disable server-side SSL for pools whose names don't end in _HTTPS.
Note: As with dynamically generated pool names, the BIG-IP UI does not look inside datagroups for pool references, so the interface will let you delete one of these pools thinking it's not in use.
We use BigIPReport to identify orphan pools:
https://devcentral.f5.com/s/articles/bigip-report
My Grails app uses an H2 database in dev mode (the default behaviour for Grails apps). The DB connection settings in DataSource.groovy are:
dataSource {
    pooled = true
    jmxExport = true
    driverClassName = "org.h2.Driver"
    username = "sa"
    password = ""
    dbCreate = "create-drop" // one of 'create', 'create-drop', 'update', 'validate', ''
    url = "jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE"
}
I'm trying to set up a connection to this database using IntelliJ IDEA's database client tools. I start off creating the connection like so:
Then in the following dialog, I enter the JDBC URL
And choose all available databases on the "Schemas & Tables" tab.
The "Test Connection" button indicates success, but as you can see from the red circle, no tables are found. It seems like I've correctly setup a connection to the h2 server, but not the schema itself.
By the way, I try to set up this connection once the app is running, so I'm sure that the schema/tables do actually exist.
Your configuration is for an h2:mem (in-memory) database. In-memory databases have no tables when you first connect to them, and all tables are lost when all connections are closed. Furthermore, a (named) in-memory database is unique to the JVM process that opens it. From the H2 documentation:
Sometimes multiple connections to the same in-memory database are required. In this case, the database URL must include a name. Example: jdbc:h2:mem:db1. Accessing the same database using this URL only works within the same virtual machine and class loader environment. (Emphasis added)
This means IDEA will create its own unique devDb in its JVM (and class loader) space, and your application will create its own unique devDb in its JVM (and class loader) space. You cannot connect to an in-memory database from an external JVM process.
If you want to connect both your application and IntelliJ IDEA (or any other DB tool) to an H2 database at the same time, you will need to either
use an embedded database (one that writes to a file) in your application and use Mixed Mode to allow IntelliJ IDEA (and/or other database tools) to connect to it (see the sketch below)
use a server mode database
See http://www.h2database.com/html/features.html#connection_modes for more information.
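For the first option, a minimal sketch of the DataSource.groovy change: swap the mem: URL for a file-backed one with H2's mixed-mode flag (the devDb file name here is just an example):

    dataSource {
        // a file-backed URL with AUTO_SERVER (mixed mode);
        // keep whatever other URL options you need from the original mem: URL
        url = "jdbc:h2:~/devDb;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE"
    }

While the app is running, IDEA can then connect with the same jdbc:h2:~/devDb;AUTO_SERVER=TRUE URL; H2 transparently routes the second process to the one that opened the file.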
This article has a great write up on how to set up the IntelliJ database client to connect to an H2 in-memory database if you happen to be using Spring Boot: http://web.archive.org/web/20160513065923/http://blog.techdev.de/querying-the-embedded-h2-database-of-a-spring-boot-application/
Basically, you wrap the in-memory database with a TCP server; that gives you an access point that an external SQL client can connect to remotely.
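In a Grails app, a minimal sketch of that approach (the port number and the dev-only guard are my assumptions; the H2 jar is already on the dev classpath) is to start an H2 TCP server from BootStrap.groovy:

    import grails.util.Environment
    import org.h2.tools.Server

    class BootStrap {

        Server h2TcpServer

        def init = { servletContext ->
            if (Environment.current == Environment.DEVELOPMENT) {
                // expose this JVM's H2 databases (including mem:devDb) over TCP for external clients
                h2TcpServer = Server.createTcpServer("-tcpPort", "9092", "-tcpAllowOthers").start()
            }
        }

        def destroy = {
            h2TcpServer?.stop()
        }
    }

An external client such as IDEA can then connect to jdbc:h2:tcp://localhost:9092/mem:devDb (user sa, empty password) for as long as the application's JVM is running.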
Try opening http://localhost:8080/dbconsole and filling in your JDBC URL.
During development you can use the Grails H2 dbconsole.
Let's imagine you've already created entities (Users, Addresses)
Step 1. In the application.yml file, add the H2 properties.
server:
  port: 8080

spring:
  datasource:
    url: jdbc:h2:~/data/parserpalce # (for Mac OS)
    username: sa
    password: password
    driver-class-name: org.h2.Driver
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    hibernate:
      ddl-auto: update
  h2:
    console:
      enabled: true
Step 2. Add an H2 database client.
Step 3. Configure the H2 database client properties based on your application.yml properties.
Step 4. Run the application.
Step 5. Check that the tables (Users, Addresses) are created.
Or you can use the H2 console for this in the browser:
http://localhost:8080/h2-console
P.S. Don't forget to enter the values that match your application.yml in the console's connection fields!
I just deployed a very simple Grails app to Heroku using the ClearDB addon. Everything is fine for the first few minutes, but after that, when I try to open a view that accesses the database, I get an error.
Here is a snippet from the Heroku logs:
2014-06-04T20:12:17.511251+00:00 app[web.1]: 2014-06-04 20:12:17,511 [http-nio-38536-exec-1] ERROR util.JDBCExceptionReporter - No operations allowed after
connection closed.
2014-06-04T20:12:17.515181+00:00 app[web.1]: 2014-06-04 20:12:17,514 [http-nio-38536-exec-1] ERROR errors.GrailsExceptionResolver - EOFException occurred when processing request: [GET] /user
2014-06-04T20:12:17.515187+00:00 app[web.1]: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.. Stacktrace follows:
2014-06-04T20:12:17.515190+00:00 app[web.1]: at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3166)
2014-06-04T20:12:17.515188+00:00 app[web.1]: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
And here is the production section of my DataSource.groovy:
production {
    dataSource {
        dbCreate = "update"
        driverClassName = "com.mysql.jdbc.Driver"
        dialect = org.hibernate.dialect.MySQL5InnoDBDialect
        uri = new URI(System.env.CLEARDB_DATABASE_URL ?: "//bb2c98a68a13fe:3a6fd398#us-cdbr-east-06.cleardb.net/heroku_b28cc03a245469f?reconnect=true")
        url = "jdbc:mysql://" + uri.host + uri.path
        username = uri.userInfo.split(":")[0]
        password = uri.userInfo.split(":")[1]
    }
}
The log tells me the connection was closed. I don't know what to do from here; I hope you can help me. Thanks for your time.
I finally got a solution to this problem, thanks to help from the Heroku and ClearDB support teams. The solution was to add pool properties to the dataSource in the production environment. The final code in DataSource.groovy:
production {
    dataSource {
        dbCreate = "update"
        driverClassName = "com.mysql.jdbc.Driver"
        dialect = org.hibernate.dialect.MySQL5InnoDBDialect
        uri = new URI(System.env.CLEARDB_DATABASE_URL ?: "//bb2c98a68a13fe:3a6fd398#us-cdbr-east-06.cleardb.net/heroku_b28cc03a245469f?reconnect=true")
        url = "jdbc:mysql://" + uri.host + uri.path
        username = uri.userInfo.split(":")[0]
        password = uri.userInfo.split(":")[1]
        properties {
            // See http://grails.org/doc/latest/guide/conf.html#dataSource for documentation
            jmxEnabled = true
            initialSize = 5
            maxActive = 50
            minIdle = 5
            maxIdle = 25
            maxWait = 10000
            maxAge = 10 * 60000
            timeBetweenEvictionRunsMillis = 5000
            minEvictableIdleTimeMillis = 60000
            validationQuery = "SELECT 1"
            validationQueryTimeout = 3
            validationInterval = 15000
            testOnBorrow = true
            testWhileIdle = true
            testOnReturn = false
            jdbcInterceptors = "ConnectionState"
            defaultTransactionIsolation = java.sql.Connection.TRANSACTION_READ_COMMITTED
        }
    }
}
These are the default values; just don't delete them as I had done.
The reason this is necessary is explained in the ClearDB support team's answer, which I paste below:
6/05/2014 03:07PM wrote
Hello,
My suspicion - without troubleshooting further, just a first pass - is
that your app is not managing connections properly. Your current
database service tier allows a maximum of 4 simultaneous connections
to the db. It is likely that your application is trying to open more
connections than are allowed.
If you are using connection pooling, you must make sure that your pool
is set to no larger than 4 total (and that's across all dynos if you
have more than 1).
Heroku's networking times out idle connections at 60 seconds, so your
database connector must be set either to have an idle timeout of no
more than 60 seconds, or you must have a keep-alive interval of less
than 60 (which sends a trivial query such as "SELECT 1" to keep the
connection active).
We do not support grails directly, so I unfortunately can't give you
specific directions of how to do that within this framework.
I hope this is helpful.
Mike ClearDB Support Team
I hope this helps someone.
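For reference, a pool sized to the limits quoted in that reply (at most 4 connections across all dynos, and idle connections dropped by Heroku after 60 seconds) might look like the sketch below; these values come from the support answer above rather than from the configuration I posted, so treat them as a starting point:

    properties {
        // the ClearDB tier in the reply allows at most 4 simultaneous connections (across all dynos)
        maxActive = 4
        initialSize = 1
        minIdle = 1
        maxIdle = 2
        // evict idle connections well before Heroku's 60-second network timeout
        timeBetweenEvictionRunsMillis = 30000
        minEvictableIdleTimeMillis = 30000
        // validate connections so dead ones are replaced instead of handed to Hibernate
        validationQuery = "SELECT 1"
        testOnBorrow = true
        testWhileIdle = true
    }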
We have an app that invokes various remote methods on MBeans using MBeanServerConnection.invoke.
Occasionally one of these methods hangs.
Is there any way to put a timeout on the call, so that it returns with an exception if the call takes too long?
Or do I have to move all those calls into separate threads so they don't lock up the UI and require killing the app?
See http://weblogs.java.net/blog/emcmanus/archive/2007/05/making_a_jmx_co.html
===== Update =====
I was thinking about this when I first responded, but I was on my mobile and I can't type worth a damn on it.
This is really an RMI problem, and unless you use a different protocol, there's not much you can do, except, as you say, move all those calls into separate threads so they don't lock up the UI.
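If you do go the separate-thread route, one common pattern (a Groovy sketch with a hypothetical helper name, not something from the JMX API) is to wrap each invoke in a Future and bound the wait:

    import java.util.concurrent.*
    import javax.management.MBeanServerConnection
    import javax.management.ObjectName

    // hypothetical helper: invoke an MBean operation, giving up after timeoutMs
    Object invokeWithTimeout(MBeanServerConnection conn, ObjectName name, String op,
                             Object[] params, String[] signature, long timeoutMs) {
        ExecutorService executor = Executors.newSingleThreadExecutor()
        try {
            Future future = executor.submit({ conn.invoke(name, op, params, signature) } as Callable)
            return future.get(timeoutMs, TimeUnit.MILLISECONDS)   // throws TimeoutException if it hangs
        } finally {
            executor.shutdownNow()   // best-effort interrupt; the RMI call may still be stuck underneath
        }
    }

Note that this only frees the calling thread: the underlying RMI call can stay wedged until its socket gives up, which is where the server-side socket factory below comes in.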
But... if you have the option of fiddling with the target server and can customize the connecting client, you have at least one option, which is to customize the JMXConnectorServer on your target servers.
The standard JMXConnectorServer implementation is the RMIConnectorServer. Part of its specification is that when you create a new instance using any of the constructors (like RMIConnectorServer(JMXServiceURL url, Map environment)), the environment map can contain a key/value pair where the key is RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE and the value is an RMIClientSocketFactory. Therefore, you can specify a socket factory like this:
RMIClientSocketFactory clientSocketFactory = new RMIClientSocketFactory() {
    public Socket createSocket(String host, int port) throws IOException {
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000);  // blocking reads on this socket time out after 3000 ms
        return s;
    }
};
This factory creates a Socket and then sets its SO_TIMEOUT using setSoTimeout, so once a client connects using this socket, blocking reads (i.e. waiting for a response) will time out after 3000 ms instead of hanging forever. (Note that SO_TIMEOUT does not bound the connect itself.)
You could also check out the JMXMP connector and server in the jmx-optional package of the OpenDMK (the links are to my mavenized GitHub repos). There's no built-in timeout, mind you, but they're super easy to extend, and JMXMP is plain-TCP-socket based rather than RMI, so this type of customization would be trivial.
Cheers.
@Nicholas: The above code is not working. I mean the request is not timing out after 3000 ms.
map.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, new RMIClientSocketFactory() {
    @Override
    public Socket createSocket(String host, int port) throws IOException {
        if (logger.isInfoEnabled()) {
            logger.info("JMXManager inside createSocket..." + host + ": port :" + port);
        }
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000);
        return s;
    }
});

cs = JMXConnectorServerFactory.newJMXConnectorServer(url, map, mbeanServer);
As I answered on "How to set request timeout for JMX Connector", the RMI properties can help you. All the properties are listed on the Oracle documentation site:
http://docs.oracle.com/javase/7/docs/technotes/guides/rmi/sunrmiproperties.html
For example, -Dsun.rmi.transport.tcp.responseTimeout=60000 is a client-side TCP response timeout. There are also properties for the connect timeout and for server-side connections.
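On the client side, that property can be passed as a JVM flag or set programmatically before the first connection is made; a sketch (somehost and the timeout value are placeholders):

    import javax.management.remote.JMXConnectorFactory
    import javax.management.remote.JMXServiceURL

    // must be set before the RMI transport is initialized, i.e. before the first connect
    System.setProperty("sun.rmi.transport.tcp.responseTimeout", "60000")

    def url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://somehost:1099/jmxrmi")
    def connector = JMXConnectorFactory.connect(url)
    def connection = connector.getMBeanServerConnection()
    // remote calls whose responses take longer than 60 s should now fail instead of hanging forever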
I'm also not happy with how the JMX/RMI/TCP stack hides important settings of the lower-level protocols and doesn't make them available per connection.