I need a way to increase the Sunspot query timeout in local development. While checking the Sunspot code I found this configuration:
solr:
  hostname: localhost
  port: 8983
  log_level: WARNING
  path: /solr/production
  read_timeout: 20
  open_timeout: 1
auto_index_callback: after_commit
auto_remove_callback: after_commit
Here there are two settings, read_timeout and open_timeout, which look relevant to my problem, but I can't find any documentation for them. Can anybody tell me the purpose of these two options?
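As far as I can tell, these two options are forwarded to the underlying HTTP client (RSolr, which uses Net::HTTP): open_timeout caps how long to wait while opening the TCP connection to Solr, and read_timeout caps how long to wait for Solr to answer a request, both in seconds. A sketch of raising the query timeout for local development (the concrete values here are arbitrary choices of mine, not recommendations):

```yaml
development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO
    path: /solr/development
    read_timeout: 60   # allow slow queries up to 60s before the client raises a timeout
    open_timeout: 2    # still fail fast if Solr isn't reachable at all
```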
I am setting up redis-gorm for my Grails project.
I have followed their documentation, but things aren't going well.
implementation 'org.grails.plugins:redis-gorm:5.0.13'
application.yml
grails:
  redis-gorm:
    host: "localhost"
    port: 6379
    pooled: true
    resources: 15
    timeout: 25000
Whether I set static mapWith = "redis" in the domain class or not, I get a ClassNotFoundException and the application stops:
Caused by: java.lang.ClassNotFoundException: org.grails.datastore.gorm.bean.factory.AbstractMappingContextFactoryBean
Any idea as to what I might have missed?
Working on an Elixir app. There's a Scraper function that copies data from a Google Spreadsheet into a postgres database via the Postgrex driver. The connection through the Google API works fine, but the function always times out after 15 seconds.
01:48:36.654 [info] Running MyApp.Endpoint with Cowboy using http://localhost:80
Interactive Elixir (1.6.4) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Scraper.update
542
iex(2)> 01:48:55.889 [error] Postgrex.Protocol (#PID<0.324.0>) disconnected: ** (DBConnection.ConnectionError) owner #PID<0.445.0> timed out because it owned the connection for longer than 15000ms
I have tried changing the 15_000 ms timeout setting everywhere in the source, but it seems the setting has been compiled into the binary. I am not an Erlang/Elixir developer, just helping a client install the app for a demo. My questions are:
How can I recompile the Postgrex driver with the modified timeout setting?
Is there another way to override this setting, or disable the timeout altogether? I have tried a find-and-replace of basically every instance of "15" in the source.
When issuing a query with postgrex, the last argument can be a keyword list of options.
Postgrex.query!(pid, "AN SQL STATEMENT;", [], timeout: 50_000, pool_timeout: 40_000)
https://hexdocs.pm/postgrex/Postgrex.html#query/4
config :my_app, MyApp.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "my_app_dev",
  hostname: "localhost",
  timeout: 600_000,
  ownership_timeout: 600_000,
  pool_timeout: 600_000
Look at timeout and ownership_timeout. These values are set to 600 seconds, and probably not all of them are necessary.
Also, I want to mention that once I had to remove everything from _build and recompile the application for these values to actually be applied.
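A sketch of that clean-and-recompile step, assuming a standard Mix project layout:

```shell
# Remove compiled artifacts so the new timeout values in config/ take effect
rm -rf _build
mix deps.get
mix compile
```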
I run a web app on localhost using Rails 5, the Puma web server, and ForwardHQ for my localhost testing, so that fizzbuzz.dev is the public address. My webpack+React app renders and the Chrome dev console is relatively clean if I use Webpack 3 to compile things on the fly, with no dev server. If I switch to using a Procfile.dev and foreman with the line:
webpacker: ./bin/webpack-dev-server
the app continues to run, but the console fills with XHR calls to some sockjs-node/info endpoint that fail. Maybe 4 or 5 of those. What are they? Is that something I should care about? My app seems to run nonetheless, but a messy console is not something I like.
Any tips on how to fix it so these failing XHR calls turn into successes? What would change if they did work?
My webpacker yaml setup is:
development:
  <<: *default
  compile: true
  dev_server:
    https: true
    host: 0.0.0.0
    port: 3035
    public: 0.0.0.0:3035
    hmr: false
    # Inline should be set to true if using HMR
    inline: true
    overlay: true
    compress: true
    disable_host_check: true
    use_local_ip: false
    quiet: false
    headers:
      'Access-Control-Allow-Origin': '*'
    watch_options:
      ignored: /node_modules/
I tried using localhost instead of 0.0.0.0, same result. Not sure how to resolve this. I tried without HTTPS too, but that also failed.
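For context: those requests come from the webpack-dev-server client, which polls /sockjs-node/info to open its live-reload/HMR channel, and the browser builds that URL from the dev_server public setting. When the page is served through a proxy domain, the client can end up polling an address the browser can't reach. A sketch of the tweak I would try first, assuming the ForwardHQ hostname from the question; whether the proxy actually forwards port 3035 is a separate question, so treat this as a guess rather than a known fix:

```yaml
development:
  <<: *default
  compile: true
  dev_server:
    https: true
    host: 0.0.0.0
    port: 3035
    # point the in-browser client at an address the browser can actually reach
    public: fizzbuzz.dev:3035
```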
I have updated the mongoid gem from 4 to 5 on a Rails 4 application. I am seeing the following warnings on app restart:
W, [2017-02-15T13:59:49.356541 #14483] WARN -- : MONGODB Unsupported client option 'max_retries'. It will be ignored.
W, [2017-02-15T13:59:49.356739 #14483] WARN -- : MONGODB Unsupported client option 'retry_interval'. It will be ignored.
W, [2017-02-15T13:59:49.356877 #14483] WARN -- : MONGODB Unsupported client option 'username'. It will be ignored.
How do I update the Mongoid YML to remove the warnings?
This is the current YML file:
staging:
  clients:
    default:
      database: chillr_api
      hosts:
        - localhost:27017
      options:
        read:
          mode: :nearest
        # In the test environment we lower the retries and retry interval to
        # low amounts for fast failures.
        max_retries: 1
        retry_interval: 0
        username: 'username'
        password: 'username'
OK, it seems that max_retries and retry_interval are Moped configuration options. Since Moped was removed in Mongoid 5, these can be dropped from the configuration.
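A sketch of the cleaned-up mongoid.yml, based on the staging block from the question: the Moped-only options are dropped, and username is renamed to user, which is the option name the underlying Mongo Ruby driver expects (the "Unsupported client option 'username'" warning above points the same way):

```yaml
staging:
  clients:
    default:
      database: chillr_api
      hosts:
        - localhost:27017
      options:
        read:
          mode: :nearest
        user: 'username'
        password: 'username'
```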
I've been trying to figure out a workflow where my Minitest suite will launch a second Solr instance for feature tests even if the development instance is running. However, I'm running into issues just getting the servers to start (i.e. when I start them outside of testing).
To start my servers I'm using:
RAILS_ENV=development bin/rake sunspot:solr:start
RAILS_ENV=test bin/rake sunspot:solr:start
However, whichever server starts second becomes locked. Any attempt to access the server in tests or just in development yields this error:
RSolr::Error::Http - 500 Internal Server Error
Error: {msg=SolrCore 'test' is not available due to init failure: Index locked for write for core 'test'. Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please verify locks manually!,trace=org.apache.solr.common.SolrException: SolrCore 'test' is not available due to init failure: Index locked for write for core 'test'. Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please verify locks manually!
at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:974)
at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:250)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:417)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
URI: http://localhost:8981/solr/test/update?wt=ruby
Request Headers: {"Content-Type"=>"text/xml"}
Request Data: "<?xml version=\"1.0\" encoding=\"UTF-8\"?><add/>"
I've searched around for issues related to locking but I can't find any where the problem is having two servers running. My setup is:
Rails (5.0.0.1)
sunspot (2.2.5)
sunspot_rails (2.2.5)
sunspot_solr (2.2.5)
ruby 2.3.1p112
My sunspot.yml is:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    path: /solr/production
development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO
    path: /solr/development
test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING
    path: /solr/test
And finally, solr.xml
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
    <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
    <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
  </solrcloud>
  <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:600000}</int>
    <int name="connTimeout">${connTimeout:60000}</int>
  </shardHandlerFactory>
</solr>
Thank you so much in advance!
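One avenue worth checking, and this is an assumption on my part since the rake task's defaults aren't shown here: a write lock like this usually means both instances are pointed at the same Solr home/data directory, so the second server trips over the first one's lock file. A sketch of starting each instance with its own data directory via the sunspot-solr executable (the directory paths are hypothetical placeholders, and you should confirm the flag names against sunspot-solr --help for your gem version):

```shell
# Hypothetical sketch: give each environment its own data directory
# so the two instances never contend for the same index write lock.
bundle exec sunspot-solr start -p 8982 -d tmp/solr/development/data
bundle exec sunspot-solr start -p 8981 -d tmp/solr/test/data
```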