TinyTds Error: Adaptive Server connection timed out - ruby-on-rails

We are running a Ruby on Rails application on Rails 3.2.12 (Ruby 1.9.3) with the current TinyTDS gem 0.6.2.
We use MS SQL 2012 or 2014 and are seeing the following error message more often than usual:
TinyTds::Error: Adaptive Server connection timed out: EXEC sp_executesql [...]
Database AUTOCLOSE is off.
TCP socket timeouts are at the Windows system defaults.
Application server is on machine #1 (windows server), SQL server is on machine #2 (windows server).
When I check the connections (netstat) I see around 250 open connections for only 20-30 users.
I ran perfmon.exe to check idle time on the SQL Server data and log disks.
database.yml has pool: 32 and reconnect: true.
To me it looks like TinyTDS loses the connection and an exception then prevents it from reconnecting.
The question is, how can I debug into the problem to find out what the problem is?
UPDATE
My mistake, the original error message belongs to TinyTDS 0.5.x. Since updating to the latest version I get the following error in addition, or instead:
ActiveRecord::LostConnection (TinyTds::Error: DBPROCESS is dead or not enabled: BEGIN TRANSACTION):

First, that pool size seems excessive. Are you using a lot of threads? If not, only one connection will be used per app request/response. That value just seems way too high.
Second, what SQL timed out? Have you found that certain SQL is slower than the rest? If so, you have two options. The first would be to tune the DB using standard practices like indexes, etc. The second would be to increase the "timeout" option in your database.yml. The default timeout is 5000, which is 5 seconds. Have you tried setting it to 10000? I guess what I am asking is: how are you sure this is a "connect" timeout vs. a "wait" timeout?
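For reference, a database.yml sketch with the timeout raised to 10 seconds (the host, database name, and pool size here are placeholders, not values from the question):

```yaml
production:
  adapter: sqlserver
  host: sqlhost          # placeholder
  database: myapp        # placeholder
  pool: 5                # one connection per thread is usually enough
  reconnect: true
  timeout: 10000         # milliseconds; the default is 5000 (5 seconds)
```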

Related

How to properly organize work with database connections?

I have a Rails application that I've been developing for quite some time. All this time I tested it locally and on a DEV server. On the DEV server, alongside the deployed application, there is also a PG database, and there were no problems with connections. I think there is simply no connection limit, or it is very high - not that important.
Today I started deploying to the PROD server. It is similar in power to the DEV one, but the DB is now a managed DO Database. The servers themselves are also located in DigitalOcean.
The problem is that the DO Database has a limit of 20 connections, and as far as I understand, exceeding this limit makes the Rails application raise an error:
ActiveRecord::ConnectionNotEstablished (FATAL: remaining connection slots are reserved for non-replication superuser connections)
The most obvious option is to reduce the number of requests per page load. But that still would not solve the problem if, for example, the number of users grows.
Can you please tell me which way to look? Are there any solutions to the problem other than updating the DO Database power?
You might want to try PgBouncer (I've never tried it though, so I can't really tell how it will impact the app).
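If you do try PgBouncer, the usual way to serve more clients than the 20-slot limit allows is transaction pooling, which multiplexes many client connections onto a small server-side pool. A hypothetical pgbouncer.ini sketch (host, database name, and sizes are illustrative placeholders):

```ini
[databases]
myapp = host=your-do-db.example port=5432 dbname=myapp

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; server connection is reused between transactions
default_pool_size = 15    ; stays under the 20-connection cap
max_client_conn = 200     ; many app connections share the 15 server slots
```

Transaction pooling does restrict some features (session-level settings, prepared statements), so it is worth checking the PgBouncer docs against what the app uses.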

PG::UnableToSend: no connection to the server in Rails 5

I have 2 servers (A and B).
I am running the Rails app on A and the DB on B.
In server B, I have pgbouncer and postgresql running.
When I run 200 threads on A, I get this error even though I increased PgBouncer's max client connections to 500. And the PgBouncer pool_mode is session.
The PostgreSQL pool is 100.
I also increased the db pool to 500 on server A.
How can I avoid this issue and run 200 threads without any issue?
Later, I updated the code: I dropped PgBouncer and used PostgreSQL directly.
I created 2 new threads that do the db operations; the other threads no longer touch the db.
While the threads were running, I monitored active connections; it stayed at 3 active.
But when the threads finished, I got this issue.
I printed the connection pool status using ActiveRecord::Base.connection_pool.stat:
{:size=>500, :connections=>4, :busy=>3, :dead=>0, :idle=>1, :waiting=>0, :checkout_timeout=>5}
rake aborted!
ActiveRecord::StatementInvalid: PG::UnableToSend: no connection to the server
Is there anyone who can help me with this issue?
I merged the db instance and the app instance. That works.
I am still not sure if it's a db version issue or a PostgreSQL remote access issue. In my opinion, it's a remote access issue.
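As an aside, when many short-lived threads touch the database, the usual ActiveRecord discipline is to wrap each thread's work in ActiveRecord::Base.connection_pool.with_connection so the connection is checked back in when the block ends, rather than raising the pool to 500. Here is a standalone sketch of that checkout/check-in pattern using a plain SizedQueue in place of the real pool (TinyPool and the string "connections" are illustrative stand-ins, not ActiveRecord API):

```ruby
require "thread"

# Minimal stand-in for a connection pool: a fixed set of "connections"
# handed out and returned via a SizedQueue. TinyPool is hypothetical.
class TinyPool
  def initialize(size)
    @queue = SizedQueue.new(size)
    size.times { |i| @queue << "conn-#{i}" }
  end

  # Check a connection out for the duration of the block, then always
  # return it, mirroring ActiveRecord's connection_pool.with_connection.
  def with_connection
    conn = @queue.pop
    yield conn
  ensure
    @queue << conn if conn
  end
end

pool = TinyPool.new(3)
done = Queue.new

threads = Array.new(200) do |n|
  Thread.new do
    pool.with_connection do |conn|
      done << [n, conn] # stand-in for the actual DB work
    end
  end
end
threads.each(&:join)

puts done.size # all 200 units of work went through only 3 connections
```

With this shape, 200 threads never hold more than 3 connections at once, which matches the "3 active" observation above while keeping the pool small.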

Issue with non-responding website. How to debug?

We have a website built with ASP.NET MVC 4, running on IIS on Windows Server 2012 and using MSSQL 2012 as data storage. Connections are made through Entity Framework 6... Very standard stuff.
We are not a high-volume website (max 3000 users around the world, so it is hit across different timezones).
The issue is that sometimes, without warning, the site becomes unresponsive (browsers show nothing and time out). Nothing special so far, but here are the strange parts:
The server itself is working fine if you terminal server into it
Restarting IIS does not help, and there are no error logs
SQL Server has around 100 connections from the website, all sleeping (but killing these processes does not make the site recover)
SQL Server at the time shows half of them as waiting tasks, but it is still responsive when executing SQL from SSMS or even remotely from Excel (remote reporting)
Looking at SQL Profiler, the website is still sending SQL requests despite being down, but they are all requests like this: if db_id('dbname') is not null else select... (not something specifically written in the website)
The really strange one: if we restart SQL Server, the website becomes responsive again
I know this is not a lot to go on, but we are very puzzled and don't really know how to proceed. Nothing indicates an error in any kind of log (website, IIS, SQL Server, or Windows). I can deduce that the website must think SQL cannot give it what it needs because the connection pool or something is used up, but why it is only freed by a complete SQL Server restart, and not by killing the processes, really puzzles me - as does why the connection pool buildup happens in the first place, since all SQL is handled by Entity Framework.
Any advice on how to debug further is most welcome
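When the site hangs again, it may help to look at what SQL Server thinks those sleeping sessions are doing from SSMS (which, per the question, still responds). These DMV queries are standard SQL Server 2012 diagnostics, not anything specific to this app:

```sql
-- What is each active request doing, and is it blocked by another session?
SELECT session_id, status, blocking_session_id, wait_type, wait_time, start_time
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip system sessions

-- Sessions holding open transactions, which can pin pooled connections
SELECT session_id, host_name, program_name, open_transaction_count
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0;
```

A pile of sessions stuck with open transactions or a common blocking_session_id would point at the app leaking transactions rather than at SQL Server itself.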

Rails 4: When is a database connection established?

I'm deploying a Rails 4 app on Heroku. As I'm looking over the database plans available, I don't understand what the 'connection limit' means. The 'hobby tier' plans have a connection limit of 20. The next tier has a limit of 60. Now I'm curious when a database connection is established, so that I can work out which plan is best for me. Is there a connection for every query? Because if so, it would mean that only 20 users can use the app at a time. I guess some of these are cached, but anyway, I'm not clear on this. Thanks for your help in advance! :)
When the Rails process starts up it will grab a database connection and hold on to that connection until the process stops.
For most MRI Ruby apps you need 1 connection per process. You will most likely run Unicorn on Heroku with 3 workers per dyno; each worker will need 1 database connection.
When you connect to the console (heroku run console), that will use a new database connection until you log out of the console.
https://devcenter.heroku.com/articles/rails-unicorn
If you are running a threaded Ruby like JRuby, then each thread will need its own database connection.
Check out "Concurrency and Database Connections in Ruby with ActiveRecord" in the Heroku docs; it has a very detailed explanation:
https://devcenter.heroku.com/articles/concurrency-and-database-connections
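The Heroku Unicorn article linked above recommends disconnecting before forking and reconnecting in each worker, since every forked worker needs its own database connection. A typical config/unicorn.rb fragment along those lines (the worker count is a placeholder):

```ruby
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true

before_fork do |server, worker|
  # Close the master's connection so forked workers don't share one socket
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker establishes its own database connection
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
```

So on the 20-connection hobby plan, the budget is roughly workers per dyno times dynos, plus one for each console or rake process, not one per user.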

MySQL connections timing out/being abandoned under JRuby on Rails app on Jetty

We're running a JRuby on Rails application on Jetty, and having reached the staging server prior to launch we have suddenly hit a problem with our JDBC connections being abandoned. Here's a lovely stacktrace to illustrate:
Last packet sent to the server was 12 ms ago.
STACKTRACE:
com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.io.EOFException
STACKTRACE:
java.io.EOFException
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1913)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2304)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2803)
From reading around, my understanding is that MySQL is killing our pooled connections over time due to inactivity (which makes sense, as staging is under very light load right now). It's running under JRuby 1.3.1 with the following gems:
activerecord-jdbc-adapter (0.9.1)
activerecord-jdbcmysql-adapter (0.9.1)
jdbc-mysql (5.0.4)
I'm assuming that I probably need to set some JDBC configuration to ensure the connections are kept alive or recycled properly, but I need some help finding out where to look. Can anyone furnish me with the details?
Thanks,
Steve
This is probably due to the wait_timeout setting. You could try increasing it to something very large, but that assumes you have administrative access on the database server.
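For reference, you can inspect and raise the timeout from a MySQL session with admin rights (28800 seconds, 8 hours, is just MySQL's usual default, shown here as an example value):

```sql
SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 28800;  -- seconds; applies to new connections only
```

Note that SET GLOBAL requires the SUPER privilege and does not change already-open sessions, so it papers over the symptom rather than fixing the pool's handling of culled connections.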
I've never used JRuby or Rails. But in "regular" Java, the way I would solve this is to use a connection pool which automatically recycles idle connections. For example, c3p0 has a maxIdleTime setting that controls this.
EDIT: Just for fun, I did a Google search on "activerecord idle connection" and I got a few hits. Here's one: http://groups.google.com/group/sinatrarb/browse_thread/thread/54138bfedac59849
Apparently there is a method called ActiveRecord::Base.verify_active_connections! that you can use. I make no guarantees whatsoever about this solution :-). IANARP (I am not a Ruby programmer).
Either Rails or our ActiveRecord-JDBC code should probably provide for a periodic connection ping or idle-time teardown. Connections being culled by the server is a standard case any connection pooling implementation should be able to handle.
I'd say file a bug with ActiveRecord-JDBC on kenai.com, but first ask on JRuby ML if anyone else has found a solid solution for this.
