Rails 4 - how is the "/tmp" cleaned? - ruby-on-rails

I am generating a PDF document and storing it temporarily in the /tmp directory. Once the document is generated and stored there (I do this in a background job with Sidekiq), I upload it to Amazon S3 and delete it from the /tmp directory.
What I noticed is that when a user generates a document while I am deploying new code to the server (using Capistrano), the process of generating/uploading the document is interrupted.
I was wondering if this might be related to Sidekiq? It's running as an Upstart service on Ubuntu, so I don't think so.
Then I thought the problem might be that I am storing the document in the /tmp directory. How does that directory work? Is its whole content deleted when I do a new deployment with Capistrano?
EDIT:
The document generation usually takes 5-10 seconds, but the job runs on the default queue, so might the process fail because there are too many jobs in the default queue?

The /tmp directory should be cleaned only during server boot (as @Зелёный already commented). But your PDF generation / upload might just take too long and the process might get killed. This is documented here and I quote from the docs:
sidekiqctl stop [pidfile] 60
This sends TERM, waits up to 60 seconds and then will kill -9 the Sidekiq process if it has not exited by then. Keep in mind the deadline timeout is the amount of time sidekiqctl will wait before running kill -9 on the Sidekiq process.
The details should be shown in the console output during the Capistrano deployment, so if it is not a case of the process getting killed, please add the output to the question.
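For what it's worth, here is a minimal sketch of how the worker itself could be structured so that the temporary file lives under /tmp via Tempfile and is removed as soon as the upload finishes or fails. The generate_pdf helper and the S3_BUCKET constant (an Aws::S3::Bucket from the aws-sdk-s3 gem) are placeholders, not from the question:
# app/workers/pdf_generator_worker.rb - sketch only
require "tempfile"

class PdfGeneratorWorker
  include Sidekiq::Worker
  sidekiq_options queue: :default, retry: 3   # failed runs are retried later

  def perform(document_id)
    pdf = Tempfile.new(["document-#{document_id}", ".pdf"])  # created under /tmp
    pdf.binmode
    pdf.write(generate_pdf(document_id))                     # placeholder for your PDF generation
    pdf.flush
    S3_BUCKET.object("documents/#{document_id}.pdf").upload_file(pdf.path)
  ensure
    pdf.close! if pdf   # delete the temp file even if the upload raised
  end
end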

Related

Running filewatcher as separate process

I am still very new to Ruby, so I hope you can help. I have a Ruby on Rails app that needs to watch a specific directory, "Dir A", to which I keep adding txt files. Once a new file appears, it needs to be processed into a CSV file, which then appears in a tmp directory before being attached to a user and disappears from tmp after the file goes into ActiveStorage, while the original txt file is kept in "Dir A" for a limited amount of time.
Now, I am using the filewatcher gem to watch "Dir A"; however, I need it to run on server startup and continue to run in the background. I understand I need to daemonize the process, but how can I do it from *.rb files rather than from the terminal?
At the moment I am using Threads, but I am not sure that's the best solution...
I also have the following issues:
- how do I process files which already appeared in the folder before the server started up?
- filewatcher does not seem to pick up another new file while it's still processing the previous one, and threads don't seem to help with that
- what would you recommend as the best way to keep track of processed files - a database, renaming/copying files into a different folder, some global variables, or maybe something else? I have to know which files have been processed so I don't repeat the process, especially in cases where I need to schedule filewatcher restarts due to its declining performance (the filewatcher documentation states it is best to restart the process if it has been running for a long time)
I'm sorry to bombard you with questions, but I need some guidance. Maybe there's a better gem I've missed; I looked at the guard gem, but I am not entirely sure how it works and filewatcher seemed simpler.
This question should probably be split into two, one about running filewatcher as a background process, and another about managing processed files, but as far as filewatcher goes, a simple solution would be to use the foreman gem with a Procfile.
You can start your Rails app in one process and filewatcher in another with a Procfile in the root of your app, like this:
# Procfile
web: bundle exec puma -t 5:5 -p ${PORT:-3000} -e ${RACK_ENV:-development}
filewatcher: filewatcher "**/*.txt" "bundle exec rake process_txt_files"
and move whatever processing needs to be done into a rake task. With this you could just run foreman start locally to start both processes, and if your production server supports Procfiles (like Heroku, for example), this makes it easy to do the same in production.
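The task itself could look something like the sketch below. ProcessedFile (a tiny ActiveRecord model recording handled filenames), TxtToCsvConverter and the dir_a path are illustrative assumptions, not part of the question; the point is that scanning the whole directory on each run also picks up files that appeared before startup, and recording processed names keeps restarts from redoing work:
# lib/tasks/process_txt_files.rake - rough sketch
task process_txt_files: :environment do
  incoming = Rails.root.join("dir_a")                      # "Dir A" from the question
  Dir.glob(incoming.join("*.txt").to_s).each do |path|
    name = File.basename(path)
    next if ProcessedFile.exists?(name: name)              # handled on an earlier run

    csv_path = Rails.root.join("tmp", name.sub(/\.txt\z/, ".csv"))
    TxtToCsvConverter.call(path, csv_path)                 # placeholder for your conversion/attachment logic
    ProcessedFile.create!(name: name)                      # remember it so restarts don't repeat work
  end
end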

Thinking Sphinx posix spawn pointing at wrong directory

A Rails application server where Homebrew is used is getting the following console messages when invoking org.thinking_sphinx.sphinx (redacted):
posix_spawn("usr/local/sphinx/bin/searchd", ...): no such file or directory
Exited with exit code: 1
Throttling respawn: Will start in 10 seconds
So this thing is looping every 10 seconds and (?) pointlessly generating these errors.
searchd does exist and, having been installed via Homebrew, lies in
usr/local/Cellar/sphinx/[sphinx_version_number]/bin/
Any idea where the problem comes from and how to fix it?
As made clear in the comments, it looks like the file mentioned (org.thinking_sphinx.sphinx) was from an old approach to managing Sphinx for a specific Rails app and/or Sphinx installation that perhaps no longer exists?
Sphinx certainly runs as a daemon, and Thinking Sphinx manages this via rake tasks (ts:start, ts:stop, ts:rebuild, etc). Of course, something to start the daemon automatically when the OS boots is also useful, but that's up to whoever's managing the servers. And it's worth noting that the Sphinx daemon runs on a per-app basis, not a per-system basis, which is why stopping this rogue process will not have affected Sphinx searches in other apps.
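If, on the other hand, the daemon is supposed to be managed by Thinking Sphinx and only the binary location is wrong, Thinking Sphinx can be pointed at the Homebrew install via its bin_path setting (a sketch; substitute your actual version in the path):
# config/thinking_sphinx.yml
production:
  bin_path: /usr/local/Cellar/sphinx/[sphinx_version_number]/bin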

Rails.root points to the wrong directory in production during a Resque job

I have two jobs that are queued simultaneously, and one worker runs them in succession. Both jobs copy some files from the builds/ directory in the root of my Rails project and place them into a temporary folder.
The first job always succeeds - I never have a problem, and it doesn't matter which job runs first; the first one to run will work.
The second one receives this error when trying to copy the files:
No such file or directory - /Users/apps/Sites/my-site/releases/20130829065128/builds/foo
That releases folder is two weeks old and should not still be on the server. It is empty, housing only a public/uploads directory and nothing else. I have killed all of my workers and restarted them multiple times, and have redeployed the Rails app multiple times. When I delete that releases directory, it gets created again.
I don't know what to do at this point. Why would this worker always create/look in this old releases directory? Why would only the second worker do this? I am getting the path by using:
Rails.root.join('builds') - Rails.root is apparently a two-week-old Capistrano release? I should also mention this only happens in the production environment. What can I do?
Resque is not being restarted (stopped and started) on deployments, which is causing old versions of the code to be run. Each worker continues to service the queue, resulting in strange errors or behaviors.
Based on the path name it looks like you are using Capistrano for deploying.
Are you using the capistrano-resque gem? If not, you should give that a look.
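If the gem is not an option, a hand-rolled restart hook along these lines can do the minimum. This is a rough sketch in Capistrano 3 syntax; the :worker role and the "resque-worker" service name are assumptions, so adapt it to however your workers are supervised (Upstart, systemd, bluepill, ...):
# config/deploy.rb - sketch only
namespace :resque do
  desc "Restart Resque workers so they pick up the newly deployed code"
  task :restart do
    on roles(:worker) do
      execute :sudo, "service", "resque-worker", "restart"
    end
  end
end
after "deploy:published", "resque:restart"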
I had exactly the same problem and here is how I solved it:
In my case the problem was how Capistrano handles the PID files, which record which workers currently exist. These files are normally stored in tmp/pids/. You need to tell Capistrano NOT to store them in each release folder, but in shared/tmp/pids/. Otherwise, after you make a new deployment, Resque does not know which workers are currently running: it looks into the new release's pids folder, finds no file, and assumes there are no workers that need to be shut down, so it just creates new ones. All the old workers still exist, but you cannot see them in the Resque dashboard; you can only see them if you check the processes on the server.
Here is what you need to do:
Add the following lines to your deploy.rb (by the way, I am using Capistrano 3.5):
append :linked_dirs, ".bundle", "tmp/pids"
set :resque_pid_path, -> { File.join(shared_path, 'tmp', 'pids') }
On the server, run htop in the terminal and press T to see all the processes which are currently running. It is easy to spot all the Resque worker processes; you can also see the release folder's name attached to them.
You need to kill all the worker processes by hand. Exit htop and type the following command to kill all Resque processes (I like to have it completely clean):
sudo kill -9 `ps aux | grep [r]esque | grep -v grep | cut -c 10-16`
Now you can make a new deploy. You also need to start the resque-scheduler again.
I hope that helps.
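To bring the scheduler (and any workers you start by hand) back up, the standard rake tasks can be used, assuming the resque and resque-scheduler tasks are loaded in your Rakefile:
RAILS_ENV=production QUEUE=* bundle exec rake resque:work
RAILS_ENV=production bundle exec rake resque:scheduler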

Heroku Rails system command fails

I've deployed my Rails application to Heroku following https://devcenter.heroku.com/articles/rails3 and can open a website like http://severe-mountain-793.herokuapp.com
In my controller, I have a system command that downloads a file into the public directory with system('wget', ..., Rails.root.join('public/...')). From checking the exit status of the command, I can tell that it fails, but I don't know why, and I don't know how to see the command's output. What can I do?
I think Heroku's filesystem is read-only, so you can't really save files using wget, except inside the '/tmp' folder, where they could be deleted at any time. Moreover, dynos have a 30-second request timeout, so this would fail for every download that takes longer than that.
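As for seeing why it fails: instead of system, the command can be run through Open3 (part of Ruby's standard library) so that stdout/stderr are captured and can be logged. A rough sketch, with a placeholder URL and tmp as the target directory:
require "open3"

url  = "https://example.com/file.pdf"                # placeholder
dest = Rails.root.join("tmp", "file.pdf")            # tmp rather than public on Heroku
stdout, stderr, status = Open3.capture3("wget", "-O", dest.to_s, url)
Rails.logger.error("wget failed (#{status.exitstatus}): #{stderr}") unless status.success?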

Resque: worker status is not right

Resque is currently showing me that I have a worker doing work on a queue. That worker was shut down by me in the middle of the queue (it's just for testing), yet it is still showing as running. I've confirmed the process ID has been killed and bluepill is no longer monitoring it. I can't find any way in the UI to force-clear that it is working.
What's the best way to update the status for the number of workers that are currently up (I have 2, the web UI reports 3)?
You may have a lingering pid file. This file is independent of the process running; in other words, when you killed the process, it didn't delete the pid file.
If you're using a typical Rails and Resque setup, Resque will store the pid in the Rails ./tmp directory.
Some Resque start scripts specify the pid file in a different location, something like this:
PIDFILE=foo/bar/resque/pid bundle exec rake resque:work
Wherever the script puts the pid file, look there, then delete it, then restart.
Also on the command line, you can ask redis for the running workers:
redis-cli keys '*worker:*'
If there are workers that you don't expect, you can delete them with:
redis-cli del <keyname>
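Alternatively, the stale registrations can be cleaned up from a Rails console using Resque's own Ruby API rather than raw Redis keys. Note that this sketch unregisters every worker record, so restart your real workers afterwards:
Resque.workers.each(&:unregister_worker)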
Try to restart the applications.
For future reference: also have a look at https://github.com/resque/resque/issues/299
