I'm using the amazing sunspot gem (github.com/outoftime/sunspot) on a Rails application, but I'm having a huge problem. I confess I still don't know how to configure it correctly for my environment, but everything is set up and running well on my local and staging servers.
Well, to sum up: in production I have a model that is constantly updated; every listing that involves this model increments one of its attributes. The main problem is that when I try to perform a complex (non-contextual) search on this model, I get a Connection Refused error, even though Solr is running and serving every other search.
My solrconfig.xml is exactly as the Sunspot installation generated it; I didn't change anything. Is the autoCommit section the solution for this, or does it have nothing to do with it?
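For reference, the autoCommit section of solrconfig.xml lives inside the update handler and looks roughly like this (a sketch; the thresholds are illustrative, and maxTime is in milliseconds):

```xml
<!-- solrconfig.xml: commit pending updates automatically, so writes reach
     the index without the client issuing an explicit commit -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs> <!-- commit after this many queued documents -->
    <maxTime>60000</maxTime> <!-- ...or after this many milliseconds -->
  </autoCommit>
</updateHandler>
```

Whether it is commented in or out only affects when updates become searchable, not whether the server accepts connections.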
Sorry about the last update; it turns out I hadn't made a newbie mistake. Here is the result of "ps aux | grep java" on the server:
ubuntu 4039 0.0 1.8 2278060 144084 ? Ssl Jan21 8:10 java -Djetty.port=8983 -Dsolr.data.dir=/home/ubuntu/mallguide/mallguide-rails/solr/data/production -Dsolr.solr.home=/home/ubuntu/mallguide/mallguide-rails/solr -Djava.util.logging.config.file=/tmp/logging.properties20120121-4039-co662r-0 -jar start.jar
ubuntu 23125 0.0 0.0 7628 1004 pts/1 S+ 10:47 0:00 grep --color=auto java
And my sunspot.yml file:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO
test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING
  auto_commit_after_request: false
Sorry for the poor English; I hope someone can help me.
I still don't know how to fix this. The point is that I have only one model that is updated all the time (though not the indexed fields in its searchable block), and Solr fails only for this model, not for the others.
Any help?
I fixed a similar error before; it may be relevant if your error symptoms match.
The kind of error message I encountered looks like this:
Connection refused - connect(2)
with the backtrace:
/home/john/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/net/http.rb:762:in `initialize'
...
rest-client (1.6.7) lib/restclient/net_http_ext.rb:51:in `request'
rsolr (1.0.8) lib/rsolr/connection.rb:15:in `execute'
Once I restarted the Sunspot Solr server, the errors stopped. While the errors were occurring I could read the indexes but not write to them.
Solution
The cause of my error: I manually ran RAILS_ENV=production rake sunspot:solr:start on the production server, and I also deploy with Capistrano, so there is a current directory and a shared directory.
When I ran that start command from the current directory, for some reason the index files were still referenced through the release path (e.g. releases/2012xxxxxxxx/...). Capistrano deletes old releases, so every now and then Solr could no longer reach that folder once it was deleted.
The solution is to explicitly specify the index file path using the symlinked current directory:
RAILS_ENV=production rake sunspot:solr:start --port=8983 --data-directory=#{current_path}/solr/data/#{rails_env} --pid-dir=#{current_path}/solr/pids/#{rails_env}
Related
I have a Rails app which uses Sunspot/Solr for search. It works well in development, but now that I'm trying to deploy it to a server I'm having trouble.
I'm using a DigitalOcean droplet with the Dokku 0.5.4 on Ubuntu 14.04 one-click image.
The first time I tried to get Solr up and running, I followed the directions on the official Sunspot GitHub page. Despite following them to the letter, I was unable to get my app to talk to Solr.
Since the tutorials in the linked guide were a little old, I decided to spin up an entirely new droplet and try again, this time with the slightly more recent directions found here. After following those directions verbatim, I'm still having connection issues with my Rails app.
Here is the error I'm getting
RSolr::Error::ConnectionRefused: Connection refused - {:data=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?><add><doc><field name=\"id\">Person 3</field><field name=\"type\">Person</field><field name=\"type\">ActiveRecord::Base</field><field name=\"class_name\">Person</field><field boost=\"2\" name=\"first_name_text\">Joe</field><field name=\"last_name_text\">Schmoe</field><field name=\"email_text\">joe@example.com</field></doc></add>", :headers=>{"Content-Type"=>"text/xml"}, :method=>:post, :params=>{:wt=>:ruby}, :query=>"wt=ruby", :path=>"update", :uri=>#<URI::HTTP http://localhost:8983/solr/solr_sunspot_example/update?wt=ruby>, :open_timeout=>nil, :read_timeout=>nil, :retry_503=>nil, :retry_after_limit=>nil}
I should point out that when I navigate to mydomain.com:8983 I can see that the Solr server is working. Also, when I run service solr status I get the following:
Found 1 Solr nodes:
Solr process 1164 running on port 8983
{
"solr_home":"/var/solr/data/",
"version":"5.2.1 1684708 - shalin - 2015-06-10 23:20:13",
"startTime":"2016-04-15T19:38:29.651Z",
"uptime":"0 days, 0 hours, 24 minutes, 17 seconds",
"memory":"93.7 MB (%19.1) of 490.7 MB"}
Here is my sunspot.yml file:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    path: /solr/solr_sunspot_example
This is the first time I've tried to use Sunspot/Solr, and I can't see where I'm messing up. Have I set up Solr the right way?
Most probably, your Solr is not set up to listen on the localhost interface and only listens on the mydomain.com interface.
To verify this assumption, run curl localhost:8983/solr/solr_sunspot_example and curl mydomain.com:8983/solr/solr_sunspot_example from your instance. If the second command succeeds while the first one does not, that is the case.
You have two options: either point your configuration at the mydomain.com interface, or change the Solr configuration to listen on the localhost interface. The former is controlled by the hostname setting in sunspot.yml (see the GitHub wiki page you're referencing above); the latter is controlled by the interface your Solr (Jetty) process binds to.
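The same probe can also be scripted. A minimal Ruby sketch (the host names and port in the comment are illustrative):

```ruby
require "socket"

# Returns true if something accepts TCP connections on host:port,
# false on refusal, timeout, or DNS failure -- the same check the
# two curl commands perform.
def listening?(host, port)
  Socket.tcp(host, port, connect_timeout: 1) { true }
rescue SystemCallError, SocketError
  false
end

# e.g. compare listening?("localhost", 8983) with listening?("mydomain.com", 8983)
```

If the two calls disagree, you know which interface Solr is bound to.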
Maybe the user and password are missing from your production config?
production:
  solr:
    hostname: point_to_solr_host
    port: 443
    log_level: WARNING
    path: /solr/solr_sunspot_example
    user: <%= ENV["SOLR_USER"] %>
    password: <%= ENV["SOLR_PASSWORD"] %>
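The ERB tags in a config like the one above only work because the YAML file is run through ERB before being parsed, as Rails does for its config files. A minimal sketch of that mechanism (the SOLR_USER value is illustrative):

```ruby
require "erb"
require "yaml"

# Illustrative credential; in production this would come from the server's
# environment rather than being set in code.
ENV["SOLR_USER"] = "admin"

raw = <<~YML
  production:
    solr:
      hostname: point_to_solr_host
      port: 443
      user: <%= ENV["SOLR_USER"] %>
YML

# Expand ERB first, then parse the resulting plain YAML.
config = YAML.safe_load(ERB.new(raw).result)
puts config["production"]["solr"]["user"]  # prints "admin"
```

If the environment variable is unset, the key parses as nil, which is an easy way to end up authenticating with blank credentials.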
My Rails project runs on nginx + Passenger. I noticed that Thinking Sphinx cannot handle concurrent connections: I ran the same search query in two browser tabs, and one of the responses returned this error:
Error connecting to Sphinx via the MySQL protocol. Error connecting to Sphinx via the MySQL protocol. Can't connect to MySQL server on '127.0.0.1' (111) ...
thinking_sphinx.yml:
development:
  quiet_deltas: true
  mysql41: 9311
  bin_path: "/usr/bin"
  searchd_binary_name: searchd
  indexer_binary_name: indexer
  min_infix_len: 3
  min_word_len: 2
  html_strip: 1
  index_exact_words: 1
  min_stemming_len: 4
  charset_type: "utf-8"
test:
  mysql41: 9311
production:
  mysql41: 9311
No such problems on localhost, where the server is WEBrick.
What can I do to avoid this? There is only one Thinking Sphinx process; maybe I can increase the number of processes.
Thanks in advance!
Update
I rebuilt Thinking Sphinx (I hadn't done it for a long time), and now it isn't failing anymore, so maybe that was the cause. But I am still interested in how to run several TS processes, or whether that is unnecessary.
Via the help files:
You can run as many Sphinx instances as you wish on one machine - but each must be bound to a different port. You can do this via the config/thinking_sphinx.yml file - just add a setting for the port for the specific environment using the mysql41 setting (or port for pre-v3):
staging:
  mysql41: 9313
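Applied to the thinking_sphinx.yml shown earlier, giving each environment its own daemon port would look like this (a sketch; the port numbers are illustrative):

```yaml
development:
  mysql41: 9311
test:
  mysql41: 9312
production:
  mysql41: 9313
```

Note that in the original file all three environments share 9311, which only works if they never run searchd on the same machine at the same time.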
I'm running Sunspot on a Rails app and can (I assume) get the search server to run with "rake sunspot:solr:run". Unfortunately, I get the ECONNREFUSED error in my search controller when I try to reach the search index.
I have tried turning my firewall completely off, with no luck. I changed all the ports in sunspot.yml to 8983, with no luck either. I found out about the command "netstat -anb" but don't know which process to look for. What could be causing this error?
Edit
And the steps below did not help either, since rake sunspot:solr:start does not work.
Common initial troubleshooting:
If you see Errno::ECONNREFUSED (Connection refused - connect(2)), then perhaps:
You have not started the solr server:
$ rake sunspot:solr:start
An error occurred in starting the Solr server (such as not having the Java Runtime Environment installed). Run
$ rake sunspot:solr:run
to run the server in the foreground and check for errors.
If you come across this error in testing but not development, then perhaps you have not invoked the task with the correct environment:
$ RAILS_ENV=test rake sunspot:solr:run
Edit 2
sunspot:solr:run also shows the following in the terminal, but then seems to continue loading other details:
WARN: failed SocketConnector # 0.0.0.0:8983
java.net.BindException: Address already in use: JVM_Bind
:WARN: EXCEPTION
java.net.BindException: Address already in use: JVM_Bind
More details
INFO: Started SocketConnector # 0.0.0.0:8983
How can I change the port, since java.exe is also listening on it? I changed both sunspot.yml and scripts.conf, but solr:run still binds to port 8983.
Sounds like you have another instance of Solr running. Try killing all your instances and running it again.
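If the other Java process has a legitimate claim on 8983, the alternative is to move Sunspot's embedded Solr to a free port in sunspot.yml and restart it (a sketch; the port value is illustrative):

```yaml
development:
  solr:
    hostname: localhost
    port: 8984
```

The port in sunspot.yml must match the port the Solr process is actually started on, or you will be back to ECONNREFUSED.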
I'm trying to deploy a new Rails app to a Bitnami/Ubuntu virtual server. I am remote and using the SSH terminal.
I have successfully used Capistrano's cap deploy:update. My source goes to GitHub, and Capistrano then puts it on the server.
So, I have this directory on the server:
/opt/bitnami/projects/ndeavor/releases/20130306180756
The server also has a PostgreSQL stack running. I have created my Postgresql user and empty database. I believe my next step is to run this command using the SSH console:
bitnami@linux:/opt/bitnami/projects/ndeavor/releases/20130306180756$ rake RAILS_ENV=production db:schema:load
Question 1 = Is that the correct next step?
When I run that command, I get this:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Question 2 = How can I get Rake to find the PostgreSQL socket?
I could put something like this in the database.yml file:
socket: /var/pgsql_socket
But I don't know what the correct entry should be.
Thanks for your help!!
UPDATE1
I also tried having the database.yml file like this:
production:
  adapter: postgresql
  encoding: unicode
  database: ndeavor_production
  pool: 5
  username: (user name)
  password: (password)
  socket: /var/run/postgresql
But, I get the same error:
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Why isn't it at least complaining about the Unix domain socket "/var/run/postgresql"?
UPDATE2
I found this:
"I have solved my problem by declaring the unix_socket_directory in postgresql.conf file to be /var/run/postgresql. It does seem for a standard build they should have a common location?
If you build from unmodified PG sources, the default socket location is
indeed /tmp. However, Debian and related distros feel that this
violates some distro standard or other, so they modify the source code
to make the default location /var/run/postgresql. So it depends on
whose build you're using."
But, I'm not sure if I should be changing the postgresql.conf file or the Rails database.yml file
UPDATE3
I looked in the /var/run/postgresql directory and it's empty.
I can't find where .s.PGSQL.5432 is located.
As Bob noted, specifying host and port can fix the problem. Since he hasn't explained this in more detail, I want to spell it out.
The default port is 5432, and the default "host" is the path where libpq expects the socket. The port is always numeric, but for any libpq connection the host can be set either to a network host or to the directory containing the socket. For example, connect using psql with:
psql -h /tmp -p 5432 -U chris mydb
This connects over the /tmp/.s.PGSQL.5432 socket. Ruby is somewhat different, but the pg gem uses libpq, so it should behave the same.
If you don't know where the socket is, the obvious next thing to try is the network address, and particularly localhost.
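Translated to database.yml, that means replacing the socket entry with a TCP host. A sketch based on the config quoted above (credentials remain placeholders):

```yaml
production:
  adapter: postgresql
  encoding: unicode
  database: ndeavor_production
  pool: 5
  username: (user name)
  password: (password)
  host: localhost   # connect over TCP instead of hunting for the socket
  port: 5432
```

This only works if Postgres is configured to listen on TCP (listen_addresses in postgresql.conf) and pg_hba.conf allows host connections from 127.0.0.1.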
I've got an app that seems to be running just fine, for the most part, on Postgres (for Heroku), but now that I'm trying to do fancier stuff like starting a delayed_job worker with
RAILS_ENV=production script/delayed_job
I get this error:
FATAL: password authentication failed for user "<myusername>" (PG::Error)
This is troublesome because:
Installing Postgres was a huge, confusing mess for me as a Rails newbie, and I never remember setting a password. (I've got a password for PGAdminII, but I know that one, and this isn't it.) When I go into my database.yml file and try changing the password to everything I can think of, nothing works.
Fishing around on the internet, it looks like I should do something to a pg_hba.conf file, but apparently I don't have one anywhere.
I've been working on this app for weeks and really don't want to erase what I've got, so I'm wary of running initdb in another directory.
Database stuff makes no sense to me. I've tried to figure it out, but I think I'm just too new to this, and I never know where to start when fixing things.
This question is sort of vague because I don't know enough to ask a specific one, but can anybody help me? For example: How do I figure out my password? What do I do about pg_hba.conf? Will I have to start a new database?
EDIT -- Per the suggestions below (thanks!), I ran both "ps -A | grep postgres" and "ps -A | grep pg_ctl". The output of each, respectively, is
85 ?? 0:06.94 postgres: logger process
101 ?? 0:32.04 postgres: writer process
102 ?? 0:23.98 postgres: wal writer process
103 ?? 0:06.70 postgres: autovacuum launcher process
104 ?? 0:07.60 postgres: stats collector process
6337 ttys002 0:00.01 grep postgres
and
6340 ttys002 0:00.00 grep pg_ctl
neither of which, unfortunately, appears to have anything preceded by -D.
I can find my pg_hba.conf file in the path:
/etc/postgresql/9.1/main/
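For reference, pg_hba.conf is one rule per line, matched top to bottom. A sketch of typical local-development rules (the auth methods here are illustrative; choose ones matching your security needs):

```
# TYPE   DATABASE   USER   ADDRESS        METHOD
local    all        all                   peer
host     all        all    127.0.0.1/32   md5
```

After editing the file, reload Postgres (e.g. sudo service postgresql reload) for the changes to take effect; changing a rule to trust temporarily is a common way to get in and reset a forgotten password with ALTER USER.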