I have a Rails app which uses Sunspot/Solr for search. The app works well in development, but now that I'm trying to deploy it to a server I'm having trouble.
I'm using a DigitalOcean droplet with the Dokku 0.5.4 on Ubuntu 14.04 one-click image.
The first time I tried to get Solr up and running, I followed the directions on the official Sunspot GitHub page. Despite following the directions to the letter, I was unable to get my app to talk to Solr.
Since the tutorials in the linked guide were a little old, I decided to spin up an entirely new droplet and try again, this time with the slightly more recent directions found here. After following these directions verbatim, I'm still having connection issues with my Rails app.
Here is the error I'm getting:
RSolr::Error::ConnectionRefused: Connection refused - {:data=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?><add><doc><field name=\"id\">Person 3</field><field name=\"type\">Person</field><field name=\"type\">ActiveRecord::Base</field><field name=\"class_name\">Person</field><field boost=\"2\" name=\"first_name_text\">Joe</field><field name=\"last_name_text\">Schmoe</field><field name=\"email_text\">joe#example.com</field></doc></add>", :headers=>{"Content-Type"=>"text/xml"}, :method=>:post, :params=>{:wt=>:ruby}, :query=>"wt=ruby", :path=>"update", :uri=>#<URI::HTTP http://localhost:8983/solr/solr_sunspot_example/update?wt=ruby>, :open_timeout=>nil, :read_timeout=>nil, :retry_503=>nil, :retry_after_limit=>nil}
I should point out that when I navigate to mydomain.com:8983 I can see that the Solr server is working. Also, when I run service solr status I get the following:
Found 1 Solr nodes:
Solr process 1164 running on port 8983
{
"solr_home":"/var/solr/data/",
"version":"5.2.1 1684708 - shalin - 2015-06-10 23:20:13",
"startTime":"2016-04-15T19:38:29.651Z",
"uptime":"0 days, 0 hours, 24 minutes, 17 seconds",
"memory":"93.7 MB (%19.1) of 490.7 MB"}
Here is my sunspot.yml file:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    path: /solr/solr_sunspot_example
This is the first time I've tried to use Sunspot/Solr, and I'm not able to see where I'm messing up. Have I set up Solr the right way?
Most probably, your Solr is not listening on the localhost interface and only listens on the mydomain.com interface.
To verify this assumption, run curl localhost:8983/solr/solr_sunspot_example and curl mydomain.com:8983/solr/solr_sunspot_example from your instance. If the second command succeeds while the first one does not, that is the case.
You have two options to solve it: either point your configuration at the mydomain.com interface, or change the Solr configuration to listen on the localhost interface. The former is controlled by the hostname setting in sunspot.yml (see the GitHub wiki page you're referencing above).
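For example, if you go with pointing your configuration at the public interface, the production block of sunspot.yml would look roughly like this (the hostname here is illustrative; use whatever interface your Solr actually listens on):

```yaml
production:
  solr:
    hostname: mydomain.com   # the interface Solr is actually bound to
    port: 8983
    log_level: WARNING
    path: /solr/solr_sunspot_example
```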
You may be missing the user and password in your production config:
production:
  solr:
    hostname: point_to_solr_host
    port: 443
    log_level: WARNING
    path: /solr/solr_sunspot_example
    user: <%= ENV["SOLR_USER"] %>
    password: <%= ENV["SOLR_PASSWORD"] %>
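As far as I know, sunspot_rails runs sunspot.yml through ERB before parsing the YAML, which is what makes the <%= ENV[...] %> tags work. A quick sketch of that expansion (the variable names and values here are made up for illustration):

```ruby
require 'erb'

# Made-up values standing in for whatever you export on the server
ENV['SOLR_USER']     = 'demo_user'
ENV['SOLR_PASSWORD'] = 'demo_secret'

# ERB runs first, then the result is parsed as YAML
template = 'user: <%= ENV["SOLR_USER"] %>'
expanded = ERB.new(template).result
puts expanded  # user: demo_user
```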
Related
I'm new to web development. I'm trying to follow the instructions here to set up a local instance of the sharetribe.com website: https://github.com/sharetribe/sharetribe
I've followed all the steps under "Setting up the development environment" without any issues, but when I get to the end, I'm supposed to "Open a browser and go to the server URL (e.g. http://lvh.me:3000)"
At that URL, I get "This site can’t be reached." Is my URL supposed to be at a different port on localhost? How do I figure out where my site is at?
Foreman starts the server on 'http://0.0.0.0:5000' by default.
Instead of Foreman, you can also start the Rails server with the following command:
rails s -b 0.0.0.0 -p 5000
Figured it out! The server URL prints in the terminal where you ran the "foreman start" command, after the line "Rails 5.1.1 application starting in development on".
Mine was http://0.0.0.0:5000 (note that it's 5000, not 3000). Then, after filling in the information, I was redirected to http://[my marketplace name].lvh.me:3000/ and had to change the port back to 5000 to see the marketplace. Huzzah!
I have been struggling with this issue for days. I am able to authenticate with the mongo shell, but when I access my application from the browser, I get the error mentioned above.
Ruby on rails logs:
2016-07-05T04:29:34.415943099Z app[web.1]: MONGODB | xx.xx.xx.xx:4121 | [db].count | STARTED | {"count"=>"listings", "query"=>{}}
2016-07-05T04:29:34.418337913Z app[web.1]: MONGODB | xx.xx.xx.xx:4121 | [db].count | FAILED | not authorized on [db] to execute command { count: "listings", query: {} } (13) | 0.0021065790000000004s
Background
Hosting on Digital Ocean, with the One-Click Deployment of Dokku.
Dokku Version : 0.6.4
MongoDB : 3.2.6
Ruby: 2.2.4
Rails: 4.2.6
I have added a user (with the dbOwner role) to MongoDB and put the same credentials in mongoid.yml.
Here is my mongoid.yml:
production:
  clients:
    default:
      database: sample
      hosts:
        - ip:4121
      user: "user"
      password: "password"
      options:
        read:
          mode: :primary
        max_pool_size: 5
OK, I found the solution for this, and I thought I'd close the loop and save someone a couple of hours or days trying to figure it out.
Basically, the one-click image from DigitalOcean DOES work!
But you need the following in your mongoid.yml: add the URI and remove the database, user, and password entries.
To get the URI, run the following in your SSH console:
dokku mongo:info <<db name>>
Update the uri value with what was displayed on screen.
production:
  clients:
    default:
      uri: <<URI>>
      options:
        read:
          mode: :primary
        max_pool_size: 1
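If the URI dokku prints follows the standard mongodb://user:password@host:port/database shape (an assumption; check your own output), you can sanity-check its pieces in a console before pasting it into mongoid.yml. The URI below is made up:

```ruby
require 'uri'

# Made-up URI in the shape the dokku Mongo plugin typically prints
uri = URI.parse('mongodb://user:secret@172.17.0.5:4121/sample')

# Mongoid derives all of these from the single uri: entry,
# which is why database, user, and password are removed from the yml
puts uri.host      # 172.17.0.5
puts uri.port      # 4121
puts uri.path      # /sample (the database name)
puts uri.userinfo  # user:secret
```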
I don't know if this solves your problem, but a general piece of advice: don't go for the one-click deployment of Dokku. Spin up a droplet and install Dokku manually; it doesn't take much effort either.
One-click deployment creates unwanted config errors. That approach is only suggested if you want to do some R&D and play around with it.
Dokku one-click images normally work fine; there are no problems with them.
Have you installed the MongoDB plugin for Dokku?
https://github.com/jeffutter/dokku-mongodb-plugin
If you already installed the plugin, you don't upload the yml file; you just need to run the link command on your server:
mongodb:create <app>
mongodb:link <app> <database>
or you can create and link in the same command
mongodb:create <app> <database>
My Rails project runs on a server with nginx + Passenger. I noticed that Thinking Sphinx cannot handle concurrent connections: I ran a search query in two browser tabs, and one of the responses returned this error:
Error connecting to Sphinx via the MySQL protocol. Error connecting to Sphinx via the MySQL protocol. Can't connect to MySQL server on '127.0.0.1' (111) ...
thinking_sphinx.yml:
development:
  quiet_deltas: true
  mysql41: 9311
  bin_path: "/usr/bin"
  searchd_binary_name: searchd
  indexer_binary_name: indexer
  min_infix_len: 3
  min_word_len: 2
  html_strip: 1
  index_exact_words: 1
  min_stemming_len: 4
  charset_type: "utf-8"
test:
  mysql41: 9311
production:
  mysql41: 9311
There are no such problems on localhost, with the server on WEBrick.
What can I do to avoid this? There is only one Thinking Sphinx process; maybe I can increase the number of processes.
Thanks in advance!
Update
I rebuilt the Thinking Sphinx index (I hadn't done it for a long time), and now it doesn't fail, so maybe that was the issue. But I am still interested in how to run several TS processes, or whether that is unnecessary.
Via the help files:
You can run as many Sphinx instances as you wish on one machine - but each must be bound to a different port. You can do this via the config/thinking_sphinx.yml file - just add a setting for the port for the specific environment using the mysql41 setting (or port for pre-v3):
staging:
  mysql41: 9313
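So if, for example, you wanted a production and a staging daemon side by side on one machine, a sketch of the config might look like this (the port numbers are arbitrary; they just have to differ):

```yaml
production:
  mysql41: 9312
staging:
  mysql41: 9313
```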
I'm trying to deploy a new Rails app to a Bitnami/Ubuntu/Virtual Server. I am remote and using the SSH terminal.
I have successfully used Capistrano to cap deploy:update. My source is going to github and Capistrano is then putting it on the server.
So, I have this directory on the server:
/opt/bitnami/projects/ndeavor/releases/20130306180756
The server also has a PostgreSQL stack running. I have created my PostgreSQL user and an empty database. I believe my next step is to run this command in the SSH console:
bitnami#linux:/opt/bitnami/projects/ndeavor/releases/20130306180756$ rake RAILS_ENV=production db:schema:load
Question 1 = Is that the correct next step?
When I run that command, I get this:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Question 2 = How can I get Rake to find the PostgreSQL socket?
I could put something like this in the database.yml file:
socket: /var/pgsql_socket
But, I don't know what the correct entry should be
Thanks for your help!!
UPDATE1
I also tried having the database.yml file like this:
production:
  adapter: postgresql
  encoding: unicode
  database: ndeavor_production
  pool: 5
  username: (user name)
  password: (password)
  socket: /var/run/postgresql
But, I get the same error:
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Why isn't it at least asking me for Unix domain socket "/var/run/postgresql"?
UPDATE2
I found this:
"I have solved my problem by declaring the unix_socket_directory in postgresql.conf file to be /var/run/postgresql. It does seem for a standard build they should have a common location?
If you build from unmodified PG sources, the default socket location is
indeed /tmp. However, Debian and related distros feel that this
violates some distro standard or other, so they modify the source code
to make the default location /var/run/postgresql. So it depends on
whose build you're using."
But, I'm not sure if I should be changing the postgresql.conf file or the Rails database.yml file
UPDATE3
I looked in the /var/run/postgresql directory and it's empty.
I can't find where .s.PGSQL.5432 is located.
As Bob noted, specifying the host and port can fix the problem. Since he hasn't explained it in more detail, I want to elaborate.
The default port is 5432, and the default "host" is a path to where libpq expects the socket. The port is always numeric, but the host can be set, for any libpq connection, either to a network host or to the directory containing the socket. For example, connect using psql with:
psql -h /tmp -p 5432 -U chris mydb
This will connect over the /tmp/.s.PGSQL.5432 socket. Ruby is somewhat different, but the pg gem uses libpq, so it should behave the same way.
If you don't know where the socket is, the obvious next thing to try is the network address, and particularly localhost.
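To make the file-naming convention concrete: libpq appends .s.PGSQL.<port> to the directory given as the host, so you can predict which socket file it will try. This little sketch (my own helper, not part of the pg gem) just reproduces that naming rule:

```ruby
# Reproduces libpq's socket-file naming rule: when "host" is a
# directory path, the socket it opens is <dir>/.s.PGSQL.<port>.
def pg_socket_path(host_dir, port = 5432)
  File.join(host_dir, ".s.PGSQL.#{port}")
end

puts pg_socket_path('/tmp')                 # /tmp/.s.PGSQL.5432
puts pg_socket_path('/var/run/postgresql')  # /var/run/postgresql/.s.PGSQL.5432
```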
I'm using the amazing Sunspot gem (github.com/outoftime/sunspot) on a Rails application, but I'm having a huge problem. I confess that I still don't know how to configure it correctly for my environment, but everything is set up and running well on my local and staging servers.
Well, to sum up, my problem is that in production I have a model that is constantly updated: every time a listing involves this model, an attribute is incremented. The main problem is that when I try to perform a complex (not contextual) search on this model, I get the Connection Refused error, even though Solr is running and serving all other searches.
My solrconfig.xml is just as the Sunspot installation left it; I didn't change anything. Is the autoCommit section the solution for this, or does it have nothing to do with it?
Sorry for the last update; I hadn't made a newbie mistake after all. Here is the result of "ps aux | grep java" on the server:
ubuntu 4039 0.0 1.8 2278060 144084 ? Ssl Jan21 8:10 java -Djetty.port=8983 -Dsolr.data.dir=/home/ubuntu/mallguide/mallguide-rails/solr/data/production -Dsolr.solr.home=/home/ubuntu/mallguide/mallguide-rails/solr -Djava.util.logging.config.file=/tmp/logging.properties20120121-4039-co662r-0 -jar start.jar
ubuntu 23125 0.0 0.0 7628 1004 pts/1 S+ 10:47 0:00 grep --color=auto java
And my sunspot.yml file:
production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO
test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING
    auto_commit_after_request: false
Sorry for the poor English; I hope someone can help me.
I still don't know what to do to correct this problem. The point is that I have only one model that is updated all the time (though not the indexed fields in searchable), and Solr fails only for this model, not for the others.
Any help?
I fixed a similar error before. It may be related if your error symptoms match:
The kind of error messages I have encountered is like this:
Connection refused - connect(2)
with the backtrace:
/home/john/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/net/http.rb:762:in `initialize'
...
rest-client (1.6.7) lib/restclient/net_http_ext.rb:51:in `request'
rsolr (1.0.8) lib/rsolr/connection.rb:15:in `execute'
Once I restarted the Sunspot Solr server, the errors stopped. While the errors were occurring, I could read the indexes but not write to them.
Solution
The reason behind my error is this: I manually ran RAILS_ENV=production rake sunspot:solr:start on the production server, and I also use Capistrano to deploy, so there is a current directory and a shared directory.
When I ran that start command in the current directory, for some reason the index file was still referenced via the release path (e.g. release/2012xxxxxxxx/...). Capistrano deletes old releases, so every now and then Solr could no longer reference the folder once it was deleted.
The solution is to explicitly specify the index file path using the symlinked current directory:
RAILS_ENV=production rake sunspot:solr:start --port=8983 --data-directory=#{current_path}/solr/data/#{rails_env} --pid-dir=#{current_path}/solr/pids/#{rails_env}