Thinking Sphinx posix spawn pointing at wrong directory - homebrew

A Rails application server where Homebrew is used is getting the following console messages when invoking org.thinking_sphinx.sphinx (redacted):
posix_spawn("usr/local/sphinx/bin/searchd", ...): no such file or directory
Existed with exit code : 1
Throttling respawn: Will start in 10 seconds
So this thing is looping every 10 seconds and, presumably pointlessly, generating these errors.
searchd does exist; having been installed via Homebrew, it lives in
/usr/local/Cellar/sphinx/[sphinx_version_number]/bin/
Any idea where the problem is generated and how to fix it?

As made clear in the comments, it looks like the file mentioned (org.thinking_sphinx.sphinx) was from an old approach to managing Sphinx for a specific Rails app and/or a Sphinx installation that perhaps no longer exists.
Sphinx certainly runs as a daemon, and Thinking Sphinx manages this via rake tasks (ts:start, ts:stop, ts:rebuild, etc). Of course, something to start the daemon automatically when the OS boots is also useful, but that's up to whoever's managing the servers. And it's worth noting that the Sphinx daemon runs on a per-app basis, not a per-system basis, which is why stopping this rogue process will not have affected Sphinx searches in other apps.
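If the looping job is indeed a leftover launchd plist from that older setup, removing it should stop the respawn cycle. The following is only a sketch: the plist location is an assumption (it may be in /Library/LaunchDaemons, /Library/LaunchAgents, or ~/Library/LaunchAgents), and the label is taken from the error above.
# Find any launchd job still referencing Thinking Sphinx
launchctl list | grep thinking_sphinx
# Unload and disable the stale job (adjust the path to wherever the plist actually lives)
sudo launchctl unload -w /Library/LaunchDaemons/org.thinking_sphinx.sphinx.plist
# From then on, manage the daemon per app with the rake tasks mentioned above
cd /path/to/your/app && bundle exec rake ts:start RAILS_ENV=production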

Related

Rails server claims "No such file - ["config/database.yml"]" even though it is there

After developing in Rails on a virtual Linux machine, I just recently installed Windows Subsystem for Linux. Before the Christmas weekend, it was working just fine, but as of this morning, when I try and start up my rails server, it gives me the following complaint:
Could not load database configuration. No such file - ["config/database.yml"]
Of course, when I go into my config folder, database.yml is there - just like it was last week. So I'm confused why it can't find the file now when it absolutely could find it before. Is this some sort of WSL quirk that makes the file hard to find for some reason?
This is mostly guesswork, but I assume that there is a permission issue. If config/database.yml has restrictive ownership or read permissions, your Rails application may not be able to read it.
For further diagnosis, I recommend posting the output of:
# Get permission details for your config
ls -laZ config/database.yml
And maybe some details of which user is starting the Rails application (effectively which user owns the Ruby process).
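For example, something along these lines shows the owning user (a sketch; the process name to match depends on your app server, e.g. puma, unicorn, or plain ruby):
# Show the user each Ruby process is running as
ps aux | grep [r]uby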

Rails 6 application fails due to cache directory changing ownership to root

I have a Rails 6 application running on Debian buster. In one place I am using "low-level" caching. Here is the relevant code:
# Get the value.
def self.ae_enabled?()
  Rails.cache.fetch("ae_enabled", expires_in: 1.hour)
end

# Change the value.
def self.ae_toggle()
  ac = AdminConfiguration.find_by(name: "ae-enabled")
  ac.value = !ac.value
  ac.save()

  # Invalidate the cache.
  Rails.cache.delete("ae_enabled")

  return ac
end
This works fine ... for a while. At some point, and for reasons I cannot figure out, the cache directory tmp/cache/3F1/ where the above value is cached changes ownership from www-data:www-data (the user Apache runs under) to root:root. Once this happens Apache can no longer read this cached value and the application throws an error.
The odd thing is that none of the other directories under tmp/cache/ have their ownership changed; it is only the one associated with this low-level cache.
Why is that particular cache directory changing ownership?
Technical details: Rails version 6.0.3.3.
Apache usually has nothing to do with the Rails cache, unless you're using Passenger, in which case it may be a Passenger bug or misconfiguration; check whether user sandboxing is enabled and configured correctly.
A typical rails deployment usually has multiple processes:
a web server handling static files and proxying requests to rails (usually nginx, you've mentioned apache)
rails web server (in case of passenger - "inside" the previous, but in fact there's still a child process)
some background workers or processes run from cron
File ownership confusion most probably originates from one of the above writing to disk while running under a different OS user.
Look into how your processes are started. The first suspect is a cron job configured system-wide; those run as root.
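A few commands along these lines can help narrow it down (a sketch; the www-data user and the relative tmp/cache path follow the question's setup):
# Look for system-wide or root-owned cron jobs that touch the Rails app
sudo crontab -l
ls /etc/cron.d/
# Find cache entries that ended up owned by root
find tmp/cache -user root
# Restore ownership to the user Apache/Passenger runs under
sudo chown -R www-data:www-data tmp/cache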

Why does Rails console create so many Ruby processes?

I experimented with a Rake task run from Cron. I started with no Ruby processes; then the Cron job started and spawned one process, which is expected.
I wanted to check whether any records were being written to the database. I ran rails c to enter the Rails console, and noticed that four other Ruby processes suddenly showed up in my process list. Why would this happen? I think that running the console should create one other process, not four.
After quitting the console, I am left with three Ruby processes including the Rake task that is running.
I am using Rails 4.2.
It's not that I find this to be problematic, but I am curious why there would need to be more than one process for a REPL and then two leftover processes after the REPL is closed.
This is because of Spring, which has shipped with Rails by default for a little while now.
You might notice that the second time you run rails c it is a lot faster than the first time. This is because the first time you run a springified script your app is loaded as normal and then forked to run what you requested. The second time around, the loader script can just fork again, so you get a much faster startup. This is possible because of the extra processes you have noticed.
You can see them by running
spring status
And you can kill them by running
spring stop
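If you want a console without the extra processes at all, Spring can also be bypassed for a single invocation (assuming a standard Spring setup):
# Run the console without going through Spring
DISABLE_SPRING=1 bundle exec rails c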

Rails.root points to the wrong directory in production during a Resque job

I have two jobs that are queued simultaneously, and one worker runs them in succession. Both jobs copy some files from the builds/ directory in the root of my Rails project and place them into a temporary folder.
The first job always succeeds, never have a problem - it doesn't matter which job runs first either. The first one will work.
The second one receives this error when trying to copy the files:
No such file or directory - /Users/apps/Sites/my-site/releases/20130829065128/builds/foo
That releases folder is two weeks old and should not still be on the server. It is empty, housing only a public/uploads directory and nothing else. I have killed all of my workers and restarted them multiple times, and have redeployed the Rails app multiple times. When I delete that releases directory, it gets created again.
I don't know what to do at this point. Why would this worker always create/look in this old releases directory? Why would only the second worker do this? I am getting the path by using:
Rails.root.join('builds') - Rails.root is apparently a two-week-old Capistrano release? I should also mention this only happens in the production environment. What can I do?
Resque is not being restarted (stopped and started) on deployments, which is causing old versions of the code to be run. Each worker continues to service the queue, resulting in strange errors or behaviors.
Based on the path name it looks like you are using Capistrano for deploying.
Are you using the capistrano-resque gem? If not, you should give that a look.
I had exactly the same problem and here is how I solved it:
In my case the problem was how Capistrano handles the PID files, which specify which workers currently exist. These files are normally stored in tmp/pids/. You need to tell Capistrano NOT to store them in each release folder, but in shared/tmp/pids/. Otherwise Resque does not know which workers are currently running after you make a new deployment: it looks into the new release's pids folder, finds no file, and therefore assumes that no workers exist that need to be shut down, so it just creates new workers. All the old workers still exist, but you cannot see them in the Resque dashboard; you can only see them if you check the processes on the server.
Here is what you need to do:
Add the following lines to your deploy.rb (by the way, I am using Capistrano 3.5):
append :linked_dirs, ".bundle", "tmp/pids"
set :resque_pid_path, -> { File.join(shared_path, 'tmp', 'pids') }
On the server, run htop in the terminal and then press T to see all the processes which are currently running. It is easy to spot all the resque worker processes. You can also see the release folder's name attached to them.
You need to kill all the worker processes by hand. Get out of htop and type the following command to kill all resque processes (I like to have it completely clean):
sudo kill -9 `ps aux | grep [r]esque | grep -v grep | cut -c 10-16`
Now you can make a new deploy. You also need to start the resque-scheduler again.
I hope that helps.

Thinking Sphinx delta indexing fails in production

Here's what I've determined:
Delta indexing works fine in development
Delta indexing does not work when I push to the production server, and no action is logged in searchd.log
I'm running Phusion Passenger, and, as recommended in the basic troubleshooting guide, have confirmed that:
www-data has permission to run indexing rake tasks (ran them from command line manually)
the paths to indexer and searchd are correct (/usr/local/bin)
there are no errors in production.log
What on earth could I possibly be missing? I'm running Ruby Enterprise 1.8.6, Rails 2.3.4, Sphinx 0.9.8.1, and Thinking Sphinx 1.2.11.
Thanks!
Last night as I slept it hit me. Unsurprisingly, it was a stupid issue involving bad configuration, though I am rather surprised that it produced the results it did. I guess I don't know much about Thinking Sphinx internals.
Recently I migrated servers. sphinx.yml looked like this:
production:
  bin_path: '/usr/local/bin'
  host: mysql.mysite.com
On the new server, MySQL was just a local service, but I had forgotten to remove that line. Interestingly, manual rake reindexing still worked just fine. I'm intrigued that Thinking Sphinx didn't throw an error when trying to reload the deltas, since mysql.mysite.com no longer exists, even though that was clearly the source of the issue.
Thanks for all your help, and sorry to have brought up such a silly problem.
Are there any clues in Apache/Nginx's error log?
Here's the next troubleshooting step I would take. Open up the file for the delta indexing strategy you are using (presumably lib/thinking_sphinx/deltas/default_delta.rb). Find the line where it actually generates the indexing command. In mine (v1.1.6) it's line 20:
output = `#{config.bin_path}indexer --config #{config.config_file} #{rotate} #{delta_index_name model}`
Change that so you can log the command itself, and maybe log the output as well:
command = "#{config.bin_path}indexer --config #{config.config_file} #{rotate} #{delta_index_name model}"
RAILS_DEFAULT_LOGGER.info(command)
output = `#{command}`
RAILS_DEFAULT_LOGGER.info(output)
Deploy that to production and tail the log while modifying a delta-indexed model. Hopefully that will actually show you the problem. Of course maybe the problem is elsewhere in the code and you won't even get to this point, but this is where I would start.
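For example, while saving a delta-indexed record in production, something as simple as this (assuming the default log location) should surface the logged command and its output:
# Watch the production log while triggering a delta index
tail -f log/production.log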
I was having this problem and found the "bin_path" solution mentioned above. When it didn't seem to work, it took me a while to realize that I'd pasted in the example code for the "production" environment while I was testing in the "staging" environment. Problem solved!
This was after making sure that the rake tasks that configure, index, and start Sphinx all run as the same user as your Passenger instance. If you log into the server as root to run these tasks, they will work in the console but not via Passenger.
I had the same problem. It works on the command line, but not in the app.
Turns out that we still had a slave database that we were using for the indexing, but the slave wasn't getting updated.
As above, we faced the same issues on two machines. On the first we had an issue with MySQL, which showed up in the apache2 log; it only seemed to affect the local OS X machine.
The second time, when we deployed to an Ubuntu server, we had the same issue. rails c in production was fine, no errors, and so on.
It ended up being a permissions problem. We couldn't figure it out because there were no problems starting Sphinx, although I guess I was doing so as root.
Using Capistrano and Passenger, we did this (a rough shell sketch follows the list):
Created a passenger user and added it to the www-data group
Changed the user in deploy.rb to passenger
Manually changed all the /current files to be owned by the above
Logged in as the passenger user
Ran rake ts:rebuild RAILS_ENV="production"
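Roughly, in shell terms (a sketch; the passenger user, www-data group, and deploy path are assumptions based on the steps above):
# Create the user and add it to the web server's group
sudo adduser passenger
sudo usermod -aG www-data passenger
# Hand the current release over to that user
sudo chown -R passenger:www-data /var/www/my_app/current
# Rebuild the Sphinx index as that user
sudo -u passenger -H bash -c 'cd /var/www/my_app/current && bundle exec rake ts:rebuild RAILS_ENV=production'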
Worked a treat for us...
Good luck
