Caching forcedly stopped, cache adapter is obsolete error in Xodus Database - xodus

I am using the Xodus database for my application, and I have 3 simple entities in it. When the application receives a bulk load, Xodus throws a "Caching forcedly stopped, cache adapter is obsolete" error. Because of that, the GC operation does not happen and the application crashes.
Error message: [EntityStoreSharedAsyncProcessor0] INFO jetbrains.exodus.entitystore.EntityIterableCache - Caching forcedly stopped, cache adapter is obsolete: 7-0-0
When the load is minimal, this issue does not occur.
Any ideas for this issue?

Related

PG::UnableToSend: no connection to the server in Rails 5

I have 2 servers (A and B).
I am running a Rails app on A and the DB on B.
On server B, I have PgBouncer and PostgreSQL running.
When I run 200 threads on A, I get this issue even though I increased PgBouncer's max_client_conn to 500. PgBouncer's pool_mode is session.
PostgreSQL's pool size is 100.
I also increased the DB pool to 500 on server A.
How can I avoid this and run 200 threads without any issues?
Later, I updated the code: I dropped PgBouncer and use PostgreSQL directly.
I created 2 new threads that perform the DB operations, and the other threads no longer touch the DB.
While the threads run, I monitor active connections; the count stays at 3.
But when the threads finish, I still get this issue.
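For illustration, a minimal sketch of that dedicated-worker pattern, assuming ActiveRecord is already configured (the queue and the work items are hypothetical):
require "active_record"
# Hypothetical work queue: the other threads push work items (lambdas)
# here instead of touching the database themselves.
queue = Queue.new
# Two dedicated DB workers; with_connection checks a connection out of
# the pool for the block and checks it back in when the block exits.
db_workers = 2.times.map do
  Thread.new do
    while (job = queue.pop)
      ActiveRecord::Base.connection_pool.with_connection { job.call }
    end
  end
end
# Push one nil per worker when done so queue.pop returns and the loops exit.
2.times { queue.push(nil) }
db_workers.each(&:join)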
I checked the connection pool status using ActiveRecord::Base.connection_pool.stat:
{:size=>500, :connections=>4, :busy=>3, :dead=>0, :idle=>1, :waiting=>0, :checkout_timeout=>5}
rake aborted!
ActiveRecord::StatementInvalid: PG::UnableToSend: no connection to the server
Is there anyone who can help me with this issue?
I merged the DB instance and the app instance, and that works.
I am still not sure whether it was a DB version issue or a PostgreSQL remote access issue. In my opinion, it was a remote access issue.

Couldn't rollback ADO.NET connection, transaction not connected or was disconnected error - quartz.net

I am getting the transaction error below, and my Quartz scheduler seems to get stuck; I have to restart it for it to run again. Just wondering if there is a retry setting I can use.
Quartz version: 3.0.7
2019-09-23 07:20:20.0654||ERROR|Quartz.Impl.AdoJobStore.ConnectionAndTransactionHolder|Couldn't rollback ADO.NET connection. Transaction not connected, or was disconnected System.InvalidOperationException: Transaction not connected, or was disconnected
at Quartz.Impl.AdoJobStore.ConnectionAndTransactionHolder.CheckNotZombied()
at Quartz.Impl.AdoJobStore.ConnectionAndTransactionHolder.Rollback(Boolean transientError)
There is already retry logic in place, but apparently it has a bug. I was going to report the issue, but it had already been reported; I added some clarification on how the issue occurs:
https://github.com/quartznet/quartznet/issues/794
Update
This has been fixed in Quartz 3.1.0
https://github.com/quartznet/quartznet/releases/tag/v3.1.0
Fix potential scheduler deadlock caused by changed lock request id inside ExecuteInNonManagedTXLock (#794)

Service worker file and offline mode

Do I understand correctly that a service worker file in a PWA should not be cached by the PWA itself? As I understand it, once registered, installed, and activated, the service worker exists as an entity separate from the page in the browser environment and gets reloaded by the browser once a new version is found (I am omitting details that are not important here). So I see no reason to cache the service worker file. The browser caches it anyway by storing it in memory (or somewhere). I think caching the service worker file would complicate discovery of its code updates and bring no benefits.
However, if the service worker file is not cached, refreshing a page that registers it while in offline mode produces an error, because the file is not available when the network is down.
What's the best way to eliminate this error? Or should I cache the service worker file after all? What's the most efficient strategy here?
I did some reading on PWAs but found no clear explanation of the matter. Please help me with your advice if possible.
You're correct: never cache service-worker.js.
To avoid the error that comes from trying to register without connectivity, check the connection state via window.navigator.onLine and skip calling register() if offline.
You can also listen for network state changes and perform the registration at a later point in time if you want.

How to work around a CachedConnectionManager error on JBoss with JRuby?

I am having an issue deploying a JRuby Rails application into JBoss, using JNDI to manage database connections.
After the first request I have this error:
[CachedConnectionManager] Closing a connection for you. Please close them yourself
I think this is because JBoss uses a connection pool and expects Rails (JRuby) to release the connection after each use, which doesn't happen, because Rails (ActiveRecord) has its own connection pool.
I've tried calling
ActiveRecord::Base.clear_active_connections!
after each request, in an after_filter, but this hasn't worked.
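For reference, a minimal sketch of how that attempt would be wired up, assuming a standard Rails controller of that era (the filter method name is hypothetical):
class ApplicationController < ActionController::Base
  # Release the connection ActiveRecord checked out for this request
  # back to ActiveRecord's own pool once the request is done.
  after_filter :release_db_connection

  private

  def release_db_connection
    ActiveRecord::Base.clear_active_connections!
  end
end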
Does anybody have any ideas?
Thanks in advance.
I am also having connection pool problems trying to run a Sinatra/ActiveRecord app in multithreaded mode in a GlassFish v3 container. The app includes a "before" block that calls ActiveRecord::Base.connection to force acquisition of a thread-owned connection, and an "after" block that calls ActiveRecord::Base.clear_active_connections! to release that connection.
Multithreaded mode
I have tried a great many variations with ActiveRecord 3.0.12 and 3.2.3: JNDI with a GlassFish connection pool, simply using ActiveRecord's connection pool, and even monkey-patching the ActiveRecord connection pool to bypass it completely and use the GlassFish pool directly.
When testing with a simple multi-threaded HTTP fetcher, all variations I have tried have resulted in errors, with a greater percentage of failing requests as I increase the number of worker threads in the HTTP fetcher.
With AR 3.0.12, the typical error is the GlassFish pool throwing a timeout exception (for what it's worth, my AR connection pool is larger than my GlassFish pool; I understand AR will pool the connection adapter object, and arjdbc will acquire and release actual connections behind the scenes).
With AR 3.2.3, I get a more ominous error. The error below came from a test not using JNDI but simply using the ActiveRecord connection pool. This is one of the better configurations, in that about 95% of requests complete OK. The failing requests raise this exception:
org.jruby.exceptions.RaiseException: (ConcurrencyError) Detected invalid hash contents due to unsynchronized modifications with concurrent users
at org.jruby.RubyHash.keys(org/jruby/RubyHash.java:1356)
at ActiveRecord::ConnectionAdapters::ConnectionPool.release(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:294)
at ActiveRecord::ConnectionAdapters::ConnectionPool.checkin(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:282)
at MonitorMixin.mon_synchronize(classpath:/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:201)
at MonitorMixin.mon_synchronize(classpath:/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:200)
at ActiveRecord::ConnectionAdapters::ConnectionPool.checkin(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:276)
at ActiveRecord::ConnectionAdapters::ConnectionPool.release_connection(/Users/pat/apps/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:110)
at ActiveRecord::ConnectionAdapters::ConnectionHandler.clear_active_connections!(/Users/pat/apps/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:375)
...
Single-threaded mode
Losing confidence in the general thread safety of ActiveRecord (or arjdbc?), I gave up on using ActiveRecord multithreaded and configured Warbler and JRuby-Rack to do JRuby runtime pooling, emulating multiple single-threaded Ruby processes much like Unicorn, Thin, and other typical Ruby servers.
In config/warble.rb:
config.webxml.jruby.min.runtimes = 10
config.webxml.jruby.max.runtimes = 10
I reran my test, using JNDI this time, and all requests completed with no errors!
My GlassFish connection pool has size 5. Note that the number of JRuby runtimes is greater than the connection pool size. There is probably little point in making the JRuby runtime pool larger than the database connection pool, since in this application each request consumes a database connection, but I wanted to make sure that even with contention for database connections I did not get the timeout errors I had seen in multithreaded mode.
These concurrency problems are disappointing to say the least. Is anyone successfully using ActiveRecord under moderate concurrency? I know that in a Rails app, one must call config.threadsafe! to avoid the global lock. Looking at the code, it does not seem to modify any ActiveRecord setting; is there some configuration of ActiveRecord I'm not doing?
The Rails connection pool does hold connections open, even after they are returned to the pool, so the underlying JDBC connection does not get closed. Are you using a JNDI connection with activerecord-jdbc-adapter (adapter: jndi or jndi: location in database.yml)?
If so, closing the connection when it is checked back into the pool should happen automatically, and what you are seeing would be a bug.
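For reference, a minimal database.yml sketch of that JNDI setup (the data source name is hypothetical):
production:
  adapter: jndi
  jndi: java:comp/env/jdbc/MyAppDS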
If not, you can use the code in ar-jdbc and apply the appropriate connection pool callbacks yourself.
https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/jdbc/callbacks.rb
class ActiveRecord::ConnectionAdapters::JdbcAdapter
  # This should force every connection to be closed when it gets checked back
  # into the connection pool
  include ActiveRecord::ConnectionAdapters::JndiConnectionPoolCallbacks
end

How to find connection leaks?

We have a Struts web application deployed on a Sun application server. Despite doing load tests and system integration tests in development environments, we found no scenario of a connection leak.
However, in the production environment there seems to be a connection leak, as the number of connections in use keeps increasing.
Besides the application code, what other scenarios could cause a connection leak?
How do you measure this? Are you looking at the number of connections in the database? The size of the pool in the app server? Which database are you using?
I don't understand "we found no scenario of a connection leak." If you aren't properly closing all your ResultSets, Statements, and Connections (each closed in its own try/catch inside a finally block at method scope, in reverse order of instantiation), you might experience leaks.
Unless there's another application that is using the same database, it's got to be your code or the app server. If you're in deep denial about your code, try switching app servers and see if that helps.
I'd suggest that your test scenarios aren't realistic. If you only observe this behavior in production, either your tests aren't triggering it or the test and production deployments are not identical.
Connection leaks are one of the most common issues encountered.
The major reason is not closing the ResultSet, Statement, and Connection after use:
Connection conn = null;
Statement stmt = null;
ResultSet rs = null;
try {
    conn = dataSource.getConnection(); // dataSource: a javax.sql.DataSource from the app server
    stmt = conn.createStatement();
    rs = stmt.executeQuery("SELECT 1"); // perform JDBC operations
} catch (SQLException e) {
    // perform error handling
} finally { // close in reverse order of creation, each close guarded
    if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
    if (stmt != null) try { stmt.close(); } catch (SQLException ignored) {}
    if (conn != null) try { conn.close(); } catch (SQLException ignored) {}
}
There are also ways to enable logging to detect connection leaks (for example, connection leak tracing in the app server's JDBC pool settings), but this will impact performance.
