This question has been asked before but no answer seems to work for me. I will break the problem down into its 3 components:
1) I receive a Heroku R14 (memory quota exceeded) error occasionally (i.e. the site has been up 2 days on Heroku and I got this error twice, for a period of about 10-15 minutes [I was too emotional to count the time precisely]).
2) I installed the oink gem as advised by Heroku.
3) Oink definitely logs, as I see messages to that effect in the Heroku logs and in WEBrick when I work locally. However, I am unable to access the logging summary that shows which functions exceed a memory threshold.
The only line that returns a result (but a wrong one) is:
oink --threshold=0 logfile_for_oink
But it returns empty lines as follows:
---- MEMORY THRESHOLD ----
THRESHOLD: 0 MB
-- SUMMARY --
Worst Requests:
Worst Actions:
Aggregated Totals:
Every other attempt - often copying advice already on StackOverflow - returns errors.
I will list the different attempts I have made (so no-one posts a suggestion I may have already tried) after this.
heroku run bundle exec oink --threshold=75 log/*
This line returns the following error:
/app/vendor/bundle/ruby/1.9.1/gems/oink-0.10.1/lib/oink/cli.rb:88:in `block in get_file_listing': Could not find "log/development.log" (RuntimeError)
Every variation on this, such as log/production.log or /log/* or what have you, has failed.
I also tried the advice on the following links to no avail:
Using oink gem with heroku
oink logs command not working on heroku
How can I run oink in heroku?
Can anyone help me?
Heroku prepends the log file with an additional timestamp so oink can't read it. You can use a regex though to fix it.
http://arches.io/2013/07/understand-memory-usage-on-heroku-rails-app-using-oink/
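For example, a small Ruby script along these lines strips that prefix before you feed the file to Oink (the file names here are placeholders, not something from the linked post):
# strip_heroku_prefix.rb - drop Heroku's leading "2013-05-23T19:01:35+00:00 app[web.2]: "
# prefix so Oink can parse the Hodel3000-formatted lines underneath.
prefix = /\A\S+ \S+\[[^\]]+\]: /

File.open('logfile_for_oink.clean', 'w') do |out|
  File.foreach('logfile_for_oink') do |line|
    out.puts line.sub(prefix, '')
  end
end
Then run oink against the cleaned file, e.g. oink --threshold=75 logfile_for_oink.clean.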
Having experienced a few periods of downtime, we've recently upgraded to a production environment on Heroku (Crane database plus 2 x web dynos); however, we've seen no improvement. In fact, reliability seems to have decreased since upgrading.
The root cause seems to be the following exception:
PG::Error (SSL SYSCALL error: EOF detected
which causes the dyno to fail and - eventually - restart, but not before causing some downtime.
I've no idea what's causing it. Common culprits appear to be Resque and Unicorn, neither of which I'm using. We're on Rails 3.2.11, on the Heroku Cedar stack, using the pg gem 1.14.1.
Logs report the following at crash time:
2013-05-23T19:01:33+00:00 app[heroku-postgres]: source=HEROKU_POSTGRESQL_PINK measure.current_transaction=34490 measure.db_size=38311032bytes measure.tables=19 measure.active-connections=7 measure.waiting-connections=0 measure.index-cache-hit-rate=0.99438 measure.table-cache-hit-rate=0.8824
2013-05-23T19:01:35.123633+00:00 app[web.2]:
2013-05-23T19:01:35.123633+00:00 app[web.2]: PG::Error (SSL SYSCALL error: EOF detected
2013-05-23T19:01:35.123633+00:00 app[web.2]: ):
I have read the following: https://groups.google.com/forum/?fromgroups#!topic/heroku/a6iviwAFgdY but can't find anything that might help.
https://gist.github.com/ktopping/5657474
The above fixes the exception, which is useful (it should declutter my logs and even help speed up reconnecting to the database), but it doesn't actually stop my main issue, which is Heroku web dynos crashing more often than I would like.
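For anyone landing here, the reconnect-on-EOF approach is roughly the sketch below. This is not the gist's exact code, just an illustration against the Rails 3.2 Postgres adapter; the initializer name and method suffix are made up.
# config/initializers/pg_eof_retry.rb (hypothetical sketch)
require 'active_record/connection_adapters/postgresql_adapter'

class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  def execute_with_eof_retry(sql, name = nil)
    execute_without_eof_retry(sql, name)
  rescue ActiveRecord::StatementInvalid, PG::Error => e
    raise unless e.message =~ /SSL SYSCALL error: EOF detected/
    reconnect!                             # re-establish the dropped connection
    execute_without_eof_retry(sql, name)   # retry the statement once
  end
  alias_method_chain :execute, :eof_retry
end
Retrying once after a reconnect keeps a single dropped connection from turning into a 500, but as noted it doesn't address whatever is killing the connections in the first place.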
Am investigating some other routes (Unicorn, rack-timeout).
I have a Rails app on Heroku that works fine when I access it through a browser, and it's displaying data from the database correctly. However, when I try to update the database through the console, I get an internal server error. The model's name is Total (table name "totals"). I'm doing this to get the first entry:
t = Total.first
! Internal server error
Since the app's working through the browser, I'm not sure if this is a problem I'm causing or if it's Heroku's fault. It's been a while since I updated through the console, so I'm not sure if I'm doing it right, but it seems pretty straightforward.
I had always accessed the console with 'heroku console'; now it's telling me to use 'heroku run console', but when I did that, it told me that the heroku gem has been deprecated and that I need to install the Toolbelt. I installed the Toolbelt, authenticated, and tried to run a console session, but I got the same result.
Here's my Heroku info
Addons: heroku-postgresql:dev
pgbackups:plus
zerigo_dns:basic
Dynos: 1
Git URL: git@heroku.com:blahblah
Owner Email: blahblah@gmail.com
Repo Size: 19M
Slug Size: 4M
Stack: bamboo-mri-1.9.2
Web URL: http://blahblah.com
Workers: 0
Update
If I try to run the console after installing the Heroku toolbelt, I get
heroku run console
Running `console` attached to terminal... up, run.3213
bash: console: command not found
I was having the same problem; there is some internal issue going on with the Heroku CLI client that I don't understand.
The solution posted in this Stack Overflow thread solved it for me:
Here is the link
Could Total be a reserved word? You could try renaming the table or creating a temp one, then testing the same query with that. If nothing else, it will eliminate one possible problem area.
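A quick, hypothetical way to test that from the console (the copy-table and class names below are made up for illustration):
# Copy the data into a differently named table and point a throwaway model at it.
ActiveRecord::Base.connection.execute("CREATE TABLE grand_totals AS SELECT * FROM totals")

class GrandTotal < ActiveRecord::Base
  self.table_name = 'grand_totals'   # on Rails < 3.2 use: set_table_name 'grand_totals'
end

GrandTotal.first   # if this works where Total.first errors, the name is the culprit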
I have my app running in production mode on my Linode account, and on one page I get a 500 Internal Server Error with the message:
We're sorry, but something went wrong.
However, this page works fine in my development environment.
How can I debug this error?
How can I see the origin of the error in production mode?
I want Rails to show errors in production mode.
How can I do it?
Thank you!
If you have SSH access, log in to your server and go to the log directory inside your Rails application directory.
Once you are there, run the command tail production.log. If this doesn't give you enough information, you can also do tail -n100 production.log (gives you the last hundred lines of the production log).
If you have deployed via Heroku, you can access the logs by running heroku logs in your local console (more information here: https://devcenter.heroku.com/articles/logging).
I also find it helpful to use the exception_notification gem (https://github.com/rails/exception_notification) when running in production, as it emails you a stack trace when an error occurs. Plenty of others use Hoptoad (http://hoptoadapp.com/) or Exceptional (http://www.exceptional.io/); however, I prefer the simple exception_notification gem.
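For reference, the Rails 3-era configuration for exception_notification looked roughly like this (the middleware name and options have varied between gem versions, and the addresses below are placeholders):
# config/environments/production.rb
config.middleware.use ExceptionNotifier,
  :email_prefix         => "[MyApp Error] ",
  :sender_address       => %{"notifier" <notifier@example.com>},
  :exception_recipients => %w{you@example.com}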
Also, on rare occasions when I can't trace the error, as a final measure I sometimes open up port 3000 temporarily on the remote server firewall, cd to the Rails project, and run rails server -e production with the log level set to debug in config/environments/production.rb, so I can see the error in the console, then close off the port when I have finished.
Hope that helps.
tail -n100 production.log
will only show the last 100 lines of the log file. If you want to see the log running in real time, use this:
tail -1000f log/production.log
I recently started using the oink gem on my heroku app because I noticed a small memory leak with some controller actions. The oink logs command works fine locally but I can't figure out the command to get it to work on my production site.
Here's the command I'm trying:
heroku run oink /log/*
And here's the line from my production.rb file:
config.middleware.use( Oink::Middleware, :logger => Rails.logger )
On my local installation, oink is storing the logs in the /log/oink.log file.
Here's the answer: https://stackoverflow.com/a/14145299/1684322
It is important to use Hodel3000CompliantLogger instead of Rails.logger; otherwise Oink will fail to parse the log files. It can also be configured in config/initializers/oink.rb, for example, rather than in config/environments/production.rb:
YourApplication::Application.middleware.use( Oink::Middleware, :logger => Hodel3000CompliantLogger.new(STDOUT), :instruments => :memory)
This will make Oink write to stdout, which ends up in the Heroku log stream and can later be captured with
heroku logs -n500 --app app_name > logfile_for_oink
Or use another log management tool such as Papertrail, or set up a syslog drain (e.g. rsyslog on another *nix box).
Run oink with --threshold=0 to show all entries.
Here is a response from Heroku support:
"In most cases using Oink locally is good enough to understand memory usage issues. Heroku's filesystem is ephemeral and each dyno has its own isolated filesystem, so it's not very practical to write and fetch files. If you can configure Oink to write to stdout or your rails logger its messages should show up in your Heroku logs and you could use a log drain or a log archiving add-on like Papertrail to get a local copy of them."
So it sounds like they are suggesting using it in development, or, if you can write to stdout, setting up a log drain yourself to get the logs into the correct format.
I couldn't figure it out on short notice, so I ended up using the heroku-api gem to automatically restart the app servers every few hours from a cron job. This worked as a temporary fix.
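The kind of cron-driven restart I mean looks roughly like this (a sketch, not my exact script; the app name and API key variable are placeholders, and the method name comes from the legacy heroku-api gem):
# restart_dynos.rb - run from cron every few hours
require 'heroku-api'

heroku = Heroku::API.new(:api_key => ENV['HEROKU_API_KEY'])
heroku.post_ps_restart('your-app-name')   # restart all dynos for the app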
Try
heroku run bundle exec oink log/*
To analyse your production logs you'll need to run
heroku run bundle exec oink log/production.log
Here's what I've determined:
Delta indexing works fine in development
Delta indexing does not work when I push to the production server, and no action is logged in searchd.log
I'm running Phusion Passenger, and, as recommended in the basic troubleshooting guide, have confirmed that:
www-data has permission to run indexing rake tasks (ran them from command line manually)
the path to indexer and searchd are correct (/usr/local/bin)
there are no errors in production.log
What on earth could I possibly be missing? I'm running Ruby Enterprise 1.8.6, Rails 2.3.4, Sphinx 0.9.8.1, and Thinking Sphinx 1.2.11.
Thanks!
Last night as I slept it hit me. Unsurprisingly, it was a stupid issue involving bad configuration, though I am rather surprised that it produced the results it did. I guess I don't know much about Thinking Sphinx internals.
Recently I migrated servers. sphinx.yml looked like this:
production:
  bin_path: '/usr/local/bin'
  host: mysql.mysite.com
On the new server, MySQL was just a local service, but I had forgotten to remove that line. Interestingly, manual rake reindexing still worked just fine. I'm intrigued that Thinking Sphinx didn't throw an error when trying to reload the deltas, since mysql.mysite.com no longer exists, even though that was clearly the source of the issue.
Thanks for all your help, and sorry to have brought up such a silly problem.
Are there any clues in Apache/Nginx's error log?
Here's the next troubleshooting step I would take. Open up the file for the delta indexing strategy you are using (presumably lib/thinking_sphinx/deltas/default_delta.rb). Find the line where it actually generates the indexing command. In mine (v1.1.6) it's line 20:
output = `#{config.bin_path}indexer --config #{config.config_file} #{rotate} #{delta_index_name model}`
Change that so you can log the command itself, and maybe log the output as well:
command = "#{config.bin_path}indexer --config #{config.config_file} #{rotate} #{delta_index_name model}"
RAILS_DEFAULT_LOGGER.info(command)
output = `#{command}`
RAILS_DEFAULT_LOGGER.info(output)
Deploy that to production and tail the log while modifying a delta-indexed model. Hopefully that will actually show you the problem. Of course maybe the problem is elsewhere in the code and you won't even get to this point, but this is where I would start.
I was having this problem and found the "bin_path" solution mentioned above. When it didn't seem to work, it took me a while to realize that I'd pasted in the example code for "production" when I was testing in the "staging" environment. Problem solved!
This was after making sure that the rake tasks that configure, index, and start Sphinx are all running as the same user as your Passenger instance. If you log into the server as root to run these tasks, they will work in the console but not via Passenger.
I had the same problem: it works on the command line, but not in the app.
Turns out that we still had a slave database that we were using for the indexing, but the slave wasn't getting updated.
As above, we faced the same issues on two machines. On the first we had an issue with MySQL, which showed up in the apache2 log; it only seemed to affect the local OS X machine.
The second time, when we deployed to an Ubuntu server, we had the same issue. rails c production was fine, no errors, and so on.
It ended up being a permissions problem. I couldn't figure this out at first, as there were no problems starting Sphinx, although I guess I was doing so as root.
Using Capistrano and Passenger, we did this:
Created a passenger user and added it to the www-data group
Changed the user in deploy.rb to passenger
Manually changed all the files under /current to be owned by the above
Logged in as the passenger user
Ran rake ts:rebuild RAILS_ENV="production"
Worked a treat for us...
Good luck