Multiple Solr Instances - Second Solr Locked on Startup

I've been trying to figure out a workflow where my Minitest suite will launch a second Solr instance for feature tests even if the development instance is running. However, I'm running into issues just getting the servers to start (i.e. when I start them outside of testing).
To start my servers I'm using:
RAILS_ENV=development bin/rake sunspot:solr:start
RAILS_ENV=test bin/rake sunspot:solr:start
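(For reference, the eventual test-suite hook would look roughly like the sketch below in test/test_helper.rb. This is an outline only; it assumes Minitest is already loaded via rails/test_help and that the sunspot:solr rake tasks behave as invoked above.)

# test/test_helper.rb (sketch)
# Boot a test-environment Solr before the suite and stop it when the run ends.
unless system({ "RAILS_ENV" => "test" }, "bin/rake", "sunspot:solr:start")
  abort "could not start the test Solr instance"
end

Minitest.after_run do
  system({ "RAILS_ENV" => "test" }, "bin/rake", "sunspot:solr:stop")
end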
However, whichever server starts second becomes locked. Any attempt to access the server in tests or just in development yields this error:
RSolr::Error::Http - 500 Internal Server Error
Error: {msg=SolrCore 'test' is not available due to init failure: Index locked for write for core 'test'. Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please verify locks manually!,trace=org.apache.solr.common.SolrException: SolrCore 'test' is not available due to init failure: Index locked for write for core 'test'. Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please verify locks manually!
at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:974)
at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:250)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:417)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
URI: http://localhost:8981/solr/test/update?wt=ruby
Request Headers: {"Content-Type"=>"text/xml"}
Request Data: "<?xml version=\"1.0\" encoding=\"UTF-8\"?><add/>"
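Since the message says to verify locks manually, one way to look for leftover Lucene lock files is a small check like this (a sketch; it assumes sunspot_solr's default layout of solr/<core>/data/index under the Rails root):

# Sketch: list any Lucene write locks under the app's Solr home.
Dir.glob(File.join(Dir.pwd, "solr", "*", "data", "index", "write.lock")) do |lock|
  puts "write lock present: #{lock}"
end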
I've searched around for issues related to locking but I can't find any where the problem is having two servers running. My setup is:
Rails (5.0.0.1)
sunspot (2.2.5)
sunspot_rails (2.2.5)
sunspot_solr (2.2.5)
ruby 2.3.1p112
My sunspot.yml is:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    path: /solr/production

development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO
    path: /solr/development

test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING
    path: /solr/test
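To confirm that each environment really resolves to a different instance, the values sunspot_rails derives from this file can be printed from the console (a quick sketch; hostname, port and path are the Configuration accessors as far as I can tell):

# Run in `RAILS_ENV=test bin/rails console`, then again for development:
config = Sunspot::Rails.configuration
puts [config.hostname, config.port, config.path].inspect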
And finally, solr.xml
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
    <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
    <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
  </solrcloud>
  <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:600000}</int>
    <int name="connTimeout">${connTimeout:60000}</int>
  </shardHandlerFactory>
</solr>
Thank you so much in advance!

Related

Rails CQL cannot connect to AWS Keyspaces (AWS Cassandra)

I am trying to connect from a Ruby on Rails application to AWS Keyspaces (AWS Cassandra), but I cannot manage to do it. I use the cequel gem and generated config/cequel.yml, which contains something similar to the following:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  max_retries: 3
  retry_delay: 0.5
  newrelic: true
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  replication:
    class: NetworkTopologyStrategy
    datacenter1: 3
    datacenter2: 2
  durable_writes: false
(The credentials were used in another app, where they work as expected.)
When I try to run:
rake cequel:keyspace:create
I get the following errors:
Cassandra::Errors::NoHostsAvailable: All attempted hosts failed: x.xxx.xxx.xxx (Cassandra::Errors::ServerError: Internal Server Error)
Set the datacenter to us-east-1 and drop the replication definition.
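To rule out cequel itself, it can also help to open a bare connection with the underlying cassandra-driver gem (a sketch only; the placeholder values and the ssl/server_cert option names are assumptions to verify against your driver version and the AWS Keyspaces documentation):

require "cassandra"

# Sketch: connect straight through the DataStax Ruby driver to Keyspaces and
# list the visible keyspaces, bypassing cequel entirely.
cluster = Cassandra.cluster(
  hosts:       ["CONTACT_POINT"],
  port:        9142,
  username:    "USER",
  password:    "PASS",
  ssl:         true,
  server_cert: "config/certs/AmazonRootCA1.pem"
)
session = cluster.connect
session.execute("SELECT keyspace_name FROM system_schema.keyspaces").each do |row|
  puts row["keyspace_name"]
end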

What is the difference between open_timeout and read_timeout in sunspot

I need a way to increase the Sunspot query timeout in local development. While checking the Sunspot code, I came across this configuration:
solr:
  hostname: localhost
  port: 8983
  log_level: WARNING
  path: /solr/production
  read_timeout: 20
  open_timeout: 1
  auto_index_callback: after_commit
  auto_remove_callback: after_commit
Here we have two settings, read_timeout and open_timeout, which look relevant to my problem, but there is no documentation for them. Can anybody tell me the purpose of each?
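As far as I understand, Sunspot hands these values to its HTTP connection, so the semantics are those of Ruby's standard Net::HTTP timeouts, roughly:

require "net/http"

# open_timeout caps how long to wait while opening the TCP connection to Solr;
# read_timeout caps how long to wait for response data once connected.
http = Net::HTTP.new("localhost", 8983)
http.open_timeout = 1   # seconds allowed to establish the connection
http.read_timeout = 20  # seconds allowed to wait for the response

So read_timeout is the one to raise for slow queries; open_timeout only matters when Solr is unreachable or slow to accept connections.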

How to set environment variables in Phoenix with Distillery library?

I followed this guide to use Distillery to release an Elixir/Phoenix project:
https://blog.polyscribe.io/a-complete-guide-to-deploying-elixir-phoenix-applications-on-kubernetes-part-1-setting-up-d88b35b64dcd
At the step for setting up config/prod.exs, the author wrote:
config :myapp, Myapp.Repo,
  adapter: Ecto.Adapters.Postgres,
  hostname: "${DB_HOSTNAME}",
  username: "${DB_USERNAME}",
  password: "${DB_PASSWORD}",
  database: "${DB_NAME}",
to configure the database. Here the "${DB_HOSTNAME}" style is used to read environment variables, rather than System.get_env("DB_HOSTNAME").
However, when I run MIX_ENV=prod mix release --env=prod and set the environment variables locally:
REPLACE_OS_VARS=true PORT=4000 HOST=0.0.0.0 SECRET_KEY_BASE=highlysecretkey DB_USERNAME=postgres DB_PASSWORD=postgres DB_NAME=myapp_dev DB_HOSTNAME=localhost ./_build/prod/rel/myapp/bin/myapp foreground
the output loops with:
12:05:13.671 [error] Postgrex.Protocol (#PID<0.1348.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (${DB_HOSTNAME}:5432): non-existing domain - :nxdomain
12:05:13.671 [error] Postgrex.Protocol (#PID<0.1347.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (${DB_HOSTNAME}:5432): non-existing domain - :nxdomain
12:05:13.672 [error] Postgrex.Protocol (#PID<0.1344.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (${DB_HOSTNAME}:5432): non-existing domain - :nxdomain
12:05:13.672 [error] Postgrex.Protocol (#PID<0.1346.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (${DB_HOSTNAME}:5432): non-existing domain - :nxdomain
...
It seems ${DB_HOSTNAME} isn't being resolved by Elixir/Phoenix.
I am using Elixir 1.5.2 and Phoenix 1.3. Could this be a version problem?
I have never seen this approach and I am unsure how it is supposed to work. The common way to get environment variables at runtime would be:
{:system, "VAR"}
For your case it’d be:
config :myapp, Myapp.Repo,
  adapter: Ecto.Adapters.Postgres,
  hostname: {:system, "DB_HOSTNAME"},
  username: {:system, "DB_USERNAME"},
  password: {:system, "DB_PASSWORD"},
  database: {:system, "DB_NAME"}

Redis keeps calling out to localhost:6379 even though deployed to Heroku

I have a Rails app deployed to Heroku and I can't for the life of me figure out why it keeps trying to connect to a local Redis. I do not even have localhost:6379 anywhere in my code, neither in the front end (React Native) nor in the back end, which is my Rails API.
This is the error I get any time I have a new broadcast:
Completed 500 Internal Server Error in 111ms (ActiveRecord: 47.1ms)
Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)):
Application.yaml:
gmail_username: "<email_address>"
gmail_password: "<password>"
AWS_ACCESS_KEY: "<access_key>"
AWS_SECRET_KEY: "<secret_key>"
AWS_BUCKET: "<my_s3_app_bucket>"
REDIS_URL: "<redis_url>"
Cable.yaml
production:
  adapter: redis
  url: <long_url_address>
  host: <host_from_url>
  port: <port_from_url>
  password: <password_from_url>
Production.rb:
config.action_cable.allowed_request_origins = ["https://lynx-v1.herokuapp.com/"]
config.action_cable.url = "wss://lynx-v1.herokuapp.com/cable"
config.web_socket_server_url = "wss://lynx-v1.herokuapp.com/cable"
(I set both the Action Cable URL and the web socket URL just to test which worked; no matter which I go with, I still get the error.)
/config/initializers/redis.rb
require "redis"
require "uri"

# REDIS_URL is expected to come from the Heroku config vars; uri is parsed here
# but not used further down.
uri = URI.parse(ENV["REDIS_URL"])

$redis = Redis.new(url: ENV["REDIS_URL"])
I don't know what is going on. Is there some kind of default that makes Redis look for localhost:6379? I followed the steps one by one and I keep getting this error.
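One quick sanity check (a sketch, e.g. in heroku run rails console) is to confirm the dyno actually sees REDIS_URL instead of letting the redis gem fall back to its default of localhost:6379:

require "redis"

# If REDIS_URL prints as nil, the config var is not set on this app/dyno.
puts ENV["REDIS_URL"].inspect
puts Redis.new(url: ENV["REDIS_URL"]).ping   # => "PONG" when the URL is reachable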
It started working again: there was an add-on gem in the Rails app that was not fully configured. After finishing the setup for that gem, the error went away.

Unable to perform queries on MongoDB started with --auth switch in Rails 3 with Mongoid

Simplified case:
I create a new Rails 3.2 project without Active Record, add mongoid 3.0.0.rc to the Gemfile, and run rails g mongoid:config. I edit my mongoid.yml to look like the one posted below (except that hosts is now set to localhost:27068).
I have added an admin user to mongodb:
$ mongo localhost/admin
> db.addUser("myadmin", "adminpass")
Also I have added a regular user to my database:
> use mydb
> db.addUser("myuser", "mypassword")
I confirm that I can connect to my database:
$ mongo localhost/mydb -u myuser -pmypassword
MongoDB shell version: 2.0.4
connecting to: localhost/mydb
> _
After that, I start mongod with --auth switch to force authentication:
$ mongod --auth --dbpath /my/db/path
Now that everything seems to be OK, I create some random scaffold like:
$ rails g scaffold User name email
and try to run the project in the browser: localhost:3000/users. BOOM! I'm hit with the error message posted below.
Is this a bug in mongoid? Or am I missing something?
Original Question
I'm unable to do anything on my MongoHQ-hosted database in a Rails 3.2 project with Mongoid 3 RC. A simple query for the login action gives me an error message like this:
The operation: #<Moped::Protocol::Query
#length=83
#request_id=3
#response_to=0
#op_code=2004
#flags=[]
#full_collection_name="mydb.users"
#skip=0
#limit=-1
#selector={"name"=>"Abbas"}
#fields=nil>
failed with error 10057: "unauthorized db:mydb lock type:-1 client [some ip]"
Here's what my mongoid.yml looks like:
development:
sessions:
default:
database: mydb
user: myuser
password: mypassword
hosts:
- flame.mongohq.com:27068
options:
consistency: :strong
options:
include_type_for_serialization: true
So apparently I'm doing something the wrong way. The db user is not marked as "Read-only" in the MongoHQ panel, and I'm NOT deploying to Heroku; I'm just testing on localhost.
Any help is appreciated.
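For what it's worth, the scaffold's query can be reproduced by hand from the Rails console to see whether authentication is what fails (a sketch; Mongoid.default_session is the Mongoid 3 / Moped API as I understand it):

# Run the same query the controller would run.
session = Mongoid.default_session
puts session[:users].find(name: "Abbas").first.inspect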
