Log file not updated after manually writing into it - ruby-on-rails

I have a Rails application running under Puma, which is configured to log to a file called puma.stdout.log. I also have an Amazon CloudWatch Logs agent monitoring this file so that I can access these logs for all my EC2 instances in the AWS CloudWatch console.
When I write manually to this log file, it stops getting updated. When I open the log file, the last line is the line I added even though more logs should appear. Timestamps and file size don't change either.
The weird thing is: CloudWatch is still getting updated with new logs from the agent monitoring this particular log file, and the application runs just fine, so Puma keeps logging to this file but I can't see the changes.
When I restart Puma, I can see new logs in this log file and everything is back to normal.
Why can't I see this log file updated after writing manually to it?
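For context, a Puma setup like the one described usually redirects output with stdout_redirect in config/puma.rb. This is only a sketch, and the paths are assumptions rather than my actual configuration:

# config/puma.rb (sketch; paths are illustrative)
stdout_redirect '/var/www/app/log/puma.stdout.log',  # the file the CloudWatch Logs agent watches
                '/var/www/app/log/puma.stderr.log',
                true                                  # append on restart instead of truncating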

Related

Is it possible to write delayed_jobs.log to log in Openshift pod?

The application that my team and I are working on uses Ruby on Rails and is hosted in an OpenShift pod. We use delayed_job to handle background processing, and for delayed_job logging the application writes to the log/delayed_job.log file. However, if a new pod for the application is created, the pod creates a new file. Also, the company uses Kibana to collect every log from the OpenShift pods.
What we have tried so far is putting this code in delayed_job_config.rb:
Delayed::Worker.logger = Logger.new(STDERR)
For processes other than delayed_job, we write to the OpenShift pod log with code like the following, e.g.:
Rails.logger.info "Result: #{ldap.get_operation_result.code}"
However, the delayed_job log still does not appear in the Logs tab of the pod (which is what we need for the log to show up in Kibana).
The goal is to get the log into the Logs tab of the OpenShift pod.
Any help is appreciated.
UPDATE:
We tried putting Delayed::Worker.logger = Rails.logger in delayed_job_config.rb, but it still did not work.
In case it might help anyone:
We ended up muting the DelayedJob ActiveRecord-related logs,
working from a post in the delayed_job repo to also ensure that if an error occurs, the previous log level is restored.
And within our container, we tail the actual delayed job log file to stdout to get the actual job logs:
tail -f /app/log/production.log &
It's not optimal, but at least, we get to see the logs!
In general, everything written to STDOUT or STDERR from the main process of the container/pod appears in the Logs tab. I guess your problem is that your background work is not executed in the main process and is therefore not shown in the Logs tab.
As an example, we had an application where the start script executed the actual work in another process. Even though this new process logged to STDOUT, it did not appear in the Logs tab, as OpenShift only shows the logs of the main process, which in our case was the process executing the script.

Is it possible to save log of Ruby on Rails delayed job log in terminal output, instead of in delayed_job.log file?

Currently the application that I'm working on uses Ruby on Rails. The application uses a delayed_job.log file to log delayed_job activity. However, the application is hosted in an OpenShift environment, using pods, and if a pod is deleted and a new one is created, the application makes a new delayed_job.log file. One of the solutions proposed by an engineer at the company is to write the log to terminal output. How can I do that? The goal is to send the information that is written to delayed_job.log to terminal output, so that in the future I can check the log, even from deleted pods, using Kibana instead of the delayed_job.log file. Any help is appreciated.
Update:
We tried putting Delayed::Worker.logger = Rails.logger in delayed_job_config.rb, but it still did not work.
You can change the logger in config/initializers/delayed_job_config.rb to something like:
Delayed::Worker.logger = Logger.new(STDOUT)
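A slightly fuller sketch of that initializer, using the file name from the question; the STDOUT.sync line is an extra suggestion (not from the original posts) so each line is flushed to the container's output immediately:

# config/initializers/delayed_job_config.rb (sketch)
STDOUT.sync = true                           # flush each line right away so the pod's Logs tab picks it up
Delayed::Worker.logger = Logger.new(STDOUT)  # send delayed_job output to the container's stdout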

How can I track the issue in production (Rails)?

I have a PDF-creating function, but PDF generation gives a different response: in local development it works fine, but the server shows a 500 error. How can I track down the issue in production?
You can track issues in production in various ways, and there are several answers available, but I will share a very common practice for a Rails application. If you are using Apache/nginx as your HTTP server, you can tail its log while performing the operation in the browser, so you get the real-time log from the HTTP server.
Next, based on your configuration (as per convention over configuration), there should be a production.log in your app's log directory. You can run the following command in your terminal and check the log in real time:
tail -f log/production.log
I assume that in your case production.log is the log file where your production application server (Passenger/Unicorn/Puma/etc.) is currently logging everything.
For asset and client-side issues, try using your browser's console to inspect them.
I hope my answer gives you some ideas!
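If production.log turns out to be too quiet to show what is behind the 500, the verbosity can be raised temporarily; a minimal sketch, assuming the standard Rails environment file:

# config/environments/production.rb (sketch)
config.log_level = :debug   # temporarily log full details while reproducing the error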

Heroku Rails system command fails

I've deployed my Rails application to Heroku following https://devcenter.heroku.com/articles/rails3 and can open a website like http://severe-mountain-793.herokuapp.com
In my controller, I have a system command that downloads a file into the public directory with system('wget', ..., Rails.root.join('public/...')). Apparently, from checking the exit status of the command, I realize that the command fails. But I don't know why it fails, and I don't know how to show the output message of the command. What can I do?
I think Heroku's filesystem is read-only, so you can't really save files using wget, except inside the '/tmp' folder, where they could be deleted at any time. Moreover, dynos have a 30-second request timeout, so this would fail for every download that takes longer than that.
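To actually see why the command fails (which the question asks about), one option is to capture its output instead of using system; a sketch using Ruby's standard Open3 module, where the URL and the target path are hypothetical:

require 'open3'

# Capture wget's stdout/stderr so the failure reason shows up in the Heroku logs.
out, err, status = Open3.capture3('wget', '-O', Rails.root.join('tmp', 'download').to_s, url)
Rails.logger.error("wget failed (#{status.exitstatus}): #{err}") unless status.success?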

Rails stopped logging

My Rails app running in the development environment stopped logging all of a sudden, and I am not able to figure out why.
I tried logging to a new file by doing
config.logger = Logger.new('log/temp.log')
config.log_level = :debug
But still no luck. The new file temp.log was created, but nothing is logged in it. The thing is, this happens on my development server running nginx (I run my Rails app using "rails s -d" on this server). With the exact same files on my local machine (my own computer), logging works fine.
So I feel the reason logging is not working is something specific to the server, but I didn't do much on the server (e.g. I didn't install new gems). Logging had been working fine until a few days ago.
When I go to rails console
rails c
> Rails.logger.debug "hello"
=> true
I do get "hello" logged into 'log/temp.log' specified above in config file.
I think permission on log directory or file is ok. What else could be wrong?
I believe it's a locking issue, which you might be able to solve by removing the call to the logger that causes it.
I ran into this issue with Redmine 1.x.
I found a newrelic_rpm entry in production.log saying it didn't run, plus one line from a Redmine plugin's init.
After removing both, newrelic_rpm from environment.rb (the config.gem line) and the plugin's logger init message, logging appears to be restored and log entries are appearing again.
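For reference, the removed declaration was the Rails 2-style gem line in config/environment.rb; roughly (a sketch, not the exact Redmine config):

# config/environment.rb (Redmine 1.x era)
config.gem 'newrelic_rpm'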
