My Rails project runs on a server with nginx + Passenger. I noticed that Thinking Sphinx cannot handle concurrent connections: I ran the same search query in two browser tabs, and one of the responses returned this error:
Error connecting to Sphinx via the MySQL protocol. Error connecting to Sphinx via the MySQL protocol. Can't connect to MySQL server on '127.0.0.1' (111) ...
thinking_sphinx.yml:
development:
  quiet_deltas: true
  mysql41: 9311
  bin_path: "/usr/bin"
  searchd_binary_name: searchd
  indexer_binary_name: indexer
  min_infix_len: 3
  min_word_len: 2
  html_strip: 1
  index_exact_words: 1
  min_stemming_len: 4
  charset_type: "utf-8"
test:
  mysql41: 9311
production:
  mysql41: 9311
There are no such problems on localhost, where the server is WEBrick.
What can I do to avoid this? There is only one Thinking Sphinx process; maybe I can increase its number?
Thanks in advance!
Update
I rebuilt Thinking Sphinx (I hadn't done it for a long time), and now it doesn't fall over, so maybe that was the cause. But I am still interested in how to run several TS processes, or whether that is unnecessary.
Via the help files:
You can run as many Sphinx instances as you wish on one machine - but each must be bound to a different port. You can do this via the config/thinking_sphinx.yml file - just add a setting for the port for the specific environment using the mysql41 setting (or port for pre-v3):
staging:
  mysql41: 9313
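So for two apps on one machine, each app's config/thinking_sphinx.yml simply gets its own port; a minimal sketch (the port numbers are arbitrary examples):
# app 1, config/thinking_sphinx.yml
production:
  mysql41: 9311
# app 2, config/thinking_sphinx.yml
production:
  mysql41: 9312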
Related
I have a Rails app which uses Sunspot/Solr for search. The app works well in development, but now that I'm trying to deploy it to a server, I'm having trouble.
I'm using a Digital Ocean droplet with the Dokku 0.5.4 on Ubuntu 14.04 one-click image.
The first time I tried to get Solr up and running, I followed the directions on the official Sunspot GitHub page. Despite following the directions to the letter, I was unable to get my app to talk to Solr.
Since the tutorials found in the linked guide were a little old, I decided to spin up an entirely new droplet and try again, this time with the slightly more recent directions found here. After following these directions verbatim, I'm still having connection issues with my Rails app.
Here is the error I'm getting:
RSolr::Error::ConnectionRefused: Connection refused - {:data=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?><add><doc><field name=\"id\">Person 3</field><field name=\"type\">Person</field><field name=\"type\">ActiveRecord::Base</field><field name=\"class_name\">Person</field><field boost=\"2\" name=\"first_name_text\">Joe</field><field name=\"last_name_text\">Schmoe</field><field name=\"email_text\">joe#example.com</field></doc></add>", :headers=>{"Content-Type"=>"text/xml"}, :method=>:post, :params=>{:wt=>:ruby}, :query=>"wt=ruby", :path=>"update", :uri=>#<URI::HTTP http://localhost:8983/solr/solr_sunspot_example/update?wt=ruby>, :open_timeout=>nil, :read_timeout=>nil, :retry_503=>nil, :retry_after_limit=>nil}
I should point out that when I navigate to mydomain.com:8983 I can see that the Solr server is working. Also, when I run service solr status I get the following:
Found 1 Solr nodes:
Solr process 1164 running on port 8983
{
"solr_home":"/var/solr/data/",
"version":"5.2.1 1684708 - shalin - 2015-06-10 23:20:13",
"startTime":"2016-04-15T19:38:29.651Z",
"uptime":"0 days, 0 hours, 24 minutes, 17 seconds",
"memory":"93.7 MB (%19.1) of 490.7 MB"}
Here is my sunspot.yml file:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    path: /solr/solr_sunspot_example
This is the first time I've tried to use Sunspot/Solr and I'm not able to see where I'm messing up. Have I set up Solr the right way?
Most probably, your Solr is not set up to listen on the localhost interface and only listens on the mydomain.com interface.
To verify this assumption, run curl localhost:8983/solr/solr_sunspot_example and curl mydomain.com:8983/solr/solr_sunspot_example from your instance. If the second command succeeds while the first one does not, that is the case.
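Spelled out as a shell session (the netstat check is an extra suggestion, not from the original answer; it shows which interface the port is actually bound to):
curl localhost:8983/solr/solr_sunspot_example
curl mydomain.com:8983/solr/solr_sunspot_example
sudo netstat -tlnp | grep 8983    # 127.0.0.1:8983 vs 0.0.0.0:8983 tells the story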
You have two options to solve it: either point your configuration at the mydomain.com interface, or change the Solr configuration to listen on the localhost interface. The latter is controlled by the hostname setting in sunspot.yml (see the GitHub wiki page you're referencing above).
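For the first option, the change would be confined to sunspot.yml; a minimal sketch (mydomain.com stands in for your actual host):
production:
  solr:
    hostname: mydomain.com
    port: 8983
    log_level: WARNING
    path: /solr/solr_sunspot_example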
Maybe you are missing user and password in your production config?
production:
  solr:
    hostname: point_to_solr_host
    port: 443
    log_level: WARNING
    path: /solr/solr_sunspot_example
    user: <%= ENV["SOLR_USER"] %>
    password: <%= ENV["SOLR_PASSWORD"] %>
I'm trying to deploy a new Rails app to a Bitnami/Ubuntu virtual server. I am remote and using an SSH terminal.
I have successfully used Capistrano to cap deploy:update. My source goes to GitHub, and Capistrano then puts it on the server.
So, I have this directory on the server:
/opt/bitnami/projects/ndeavor/releases/20130306180756
The server also has a PostgreSQL stack running. I have created my Postgresql user and empty database. I believe my next step is to run this command using the SSH console:
bitnami#linux:/opt/bitnami/projects/ndeavor/releases/20130306180756$ rake RAILS_ENV=production db:schema:load
Question 1: Is that the correct next step?
When I run that command, I get this:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Question 2: How can I get Rake to find the PostgreSQL socket?
I could put something like this in the database.yml file:
socket: /var/pgsql_socket
But I don't know what the correct entry should be.
Thanks for your help!!
UPDATE1
I also tried having the database.yml file like this:
production:
  adapter: postgresql
  encoding: unicode
  database: ndeavor_production
  pool: 5
  username: (user name)
  password: (password)
  socket: /var/run/postgresql
But I get the same error:
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Why isn't it at least asking me for the Unix domain socket "/var/run/postgresql"?
UPDATE2
I found this:
"I have solved my problem by declaring the unix_socket_directory in postgresql.conf file to be /var/run/postgresql. It does seem for a standard build they should have a common location?
If you build from unmodified PG sources, the default socket location is
indeed /tmp. However, Debian and related distros feel that this
violates some distro standard or other, so they modify the source code
to make the default location /var/run/postgresql. So it depends on
whose build you're using."
But I'm not sure whether I should be changing the postgresql.conf file or the Rails database.yml file.
UPDATE3
I looked in the /var/run/postgresql directory and it's empty.
I can't find where .s.PGSQL.5432 is located.
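One way to hunt for the socket (a generic sketch, not from the thread):
sudo find / -name ".s.PGSQL.5432" 2>/dev/null
If that turns up nothing, the PostgreSQL server is probably not running at all, which would equally explain the connection error.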
As Bob noted, specifying host and port can fix the problem. Since he hasn't explained this in more detail, I want to spell it out.
The default port is 5432, and the default "host" is a path to where it expects the socket. Port is always numeric, but host can be set for any libpq connection either to the network host or to the directory containing the socket. For example, connect using psql with
psql -h /tmp -p 5432 -U chris mydb
This will connect over the /tmp/.s.PGSQL.5432 socket. Now Ruby is somewhat different, but the pg gem uses libpq, so it should behave the same.
If you don't know where the socket is, the obvious next thing to try is the network address, and particularly localhost.
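Applied to the database.yml from the question, the advice translates to replacing the socket: entry with host: (a sketch; libpq treats a host value starting with / as a socket directory):
production:
  adapter: postgresql
  encoding: unicode
  database: ndeavor_production
  pool: 5
  username: (user name)
  password: (password)
  host: localhost    # TCP; or host: /var/run/postgresql for that socket directory
  port: 5432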
I have two small apps, each with search in it. They are completely different apps, with different databases and so on.
Now the issue is the pid file. Only one application can search at a time, because when I run
rake ts:start
on one, it says another instance is already running.
How can I change this so that Sphinx keeps running for both applications? I am using Capistrano for deployment.
Structure is something like this:
/home/me/my_app_1/production/current
/home/me/my_app_2/production/current
In both apps you have to create a config/sphinx.yml, which can contain various configuration variables; one of them allows you to specify the port of the Sphinx server. When you define the port manually in one app like this:
development:
  port: 9313
test:
  port: 9314
production:
  port: 9316
and in the other:
development:
  port: 9317
test:
  port: 9318
production:
  port: 9319
Then call rake ts:rebuild in both applications. Thinking Sphinx will generate new config files for Sphinx that set up separate Sphinx instances for each app and each environment within the app.
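With the directory layout from the question, that would be roughly:
cd /home/me/my_app_1/production/current && RAILS_ENV=production rake ts:rebuild
cd /home/me/my_app_2/production/current && RAILS_ENV=production rake ts:rebuild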
I'm using the amazing Sunspot gem (github.com/outoftime/sunspot) on a Rails application, but I'm having a huge problem. I confess that I still don't know how to configure it correctly for my environment, but everything is set up and running well on my local and staging servers.
To sum up, my problem is that in production I have a model that is constantly updated - every time a list involves this model, an attribute is incremented. The main problem is that when I try to perform a complex search on this model (not contextual), a Connection Refused error appears, yet Solr is up and serving all other searches.
My solrconfig.xml is exactly as the Sunspot installation provides; I didn't change anything. Is the autoCommit section the solution for this, or does it have nothing to do with it?
Sorry for the late update; it turns out I hadn't made a newbie mistake. Here is the result of ps aux | grep java on the server:
ubuntu 4039 0.0 1.8 2278060 144084 ? Ssl Jan21 8:10 java -Djetty.port=8983 -Dsolr.data.dir=/home/ubuntu/mallguide/mallguide-rails/solr/data/production -Dsolr.solr.home=/home/ubuntu/mallguide/mallguide-rails/solr -Djava.util.logging.config.file=/tmp/logging.properties20120121-4039-co662r-0 -jar start.jar
ubuntu 23125 0.0 0.0 7628 1004 pts/1 S+ 10:47 0:00 grep --color=auto java
And my sunspot.yml file:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO
test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING
  auto_commit_after_request: false
Sorry for the poor English; I hope someone can help me.
I still don't know what to do to correct this problem. The point is that I have only one model that is updated all the time (though not the indexed fields in searchable), and Solr fails only for this model, not for the others.
Any help?
I fixed a similar error before. It may be related if your error symptoms match:
The kind of error message I encountered looks like this:
Connection refused - connect(2)
with the backtrace:
/home/john/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/net/http.rb:762:in `initialize'
...
rest-client (1.6.7) lib/restclient/net_http_ext.rb:51:in `request'
rsolr (1.0.8) lib/rsolr/connection.rb:15:in `execute'
Once I restarted the Sunspot Solr server, the errors stopped. While they were occurring I could read the indexes but not write to them.
Solution
The reason behind my error: I had manually called RAILS_ENV=production rake sunspot:solr:start on the production server, and I also use Capistrano to deploy, so the app has a current directory and a shared directory.
When I ran that start command from the current directory, for some reason the index files were still referenced via the release path (e.g. releases/2012xxxxxxxx/...). Capistrano deletes old releases, so every now and then Solr could no longer reference the folder once it was deleted.
The solution is to explicitly specify the index file paths under the symlinked current directory:
RAILS_ENV=production rake sunspot:solr:start --port=8983 --data-directory=#{current_path}/solr/data/#{rails_env} --pid-dir=#{current_path}/solr/pids/#{rails_env}
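If you deploy with Capistrano anyway, you can avoid running this by hand with a task along these lines (a sketch for Capistrano 2; the task name and hook placement are assumptions, and the command itself is just the one above):
namespace :solr do
  desc "Start Sunspot's Solr from the symlinked current directory"
  task :start, roles: :app do
    run "cd #{current_path} && RAILS_ENV=production rake sunspot:solr:start --port=8983 --data-directory=#{current_path}/solr/data/production --pid-dir=#{current_path}/solr/pids/production"
  end
end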
I have two different Rails applications running thinking_sphinx, and one seems to not be using the correct index.
I tried setting the port in sphinx.yml, but it seems to have no effect.
Matthew, had you set port in sphinx.yml on a per-environment basis?
i.e.:
development:
  port: 3312
test:
  port: 3313
production:
  port: 3312
I needed to kill searchd and clear the pid files in log/.
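For reference, the cleanup amounted to something like this (a sketch; the exact pid file name varies by Thinking Sphinx version and environment):
ps aux | grep searchd    # find the running daemon's pid
kill <pid>               # or, more gracefully: rake ts:stop
rm log/*.pid             # clear the stale pid files
rake ts:rebuild          # regenerate the config and restart on the new ports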