Stackdriver logging in a Rails app in GKE is not working

Ruby 2.3.6, Rails 4.2.6, Stackdriver 0.15.0
Following the directions at https://cloud.google.com/logging/docs/setup/ruby, I've added the stackdriver gem to Gemfile. From what I can tell, I shouldn't need to do anything else.
However, I'm not seeing any log messages from within Rails showing up in Stackdriver. I've even tried execing a shell in the container, running bundle exec rails console, and explicitly logging with Rails.logger.error "this is a log message".
config.google_cloud.use_logging = true and config.log_level = :debug, FWIW.
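For reference, here's roughly what the relevant setup looks like, trimmed to the logging-related lines (nothing exotic):

# Gemfile
gem "stackdriver"

# config/environments/production.rb
Rails.application.configure do
  config.google_cloud.use_logging = true
  config.log_level = :debug
end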
UPDATE: I took a closer look at the logger instance with pp Rails.logger, and it contains this:
@resource=
 #<Google::Cloud::Logging::Resource:0x0000558b075d1f50
  @labels={:cluster_name=>"onehq-cluster", :namespace_id=>"default"},
  @type="container">,
My container is running in the staging namespace, which would explain why I'm not seeing log messages. When I look in the Stackdriver log viewer in the default namespace, lo, there are logs. Unfortunately, sending all messages to default won't do me any good, because I have apps running in two namespaces, and I need to be able to tell them apart.
I tried adding additional configuration:
config.google_cloud.logging.monitored_resource.type = "container"
config.google_cloud.logging.monitored_resource.labels = { cluster_name: "my-cluster", namespace_id: "staging" }
And now Rails.logger.error "this is a log message" doesn't show up anywhere.
UPDATE 2: After much web searching, I managed to turn up https://github.com/GoogleCloudPlatform/google-cloud-ruby/issues/2025 which suggests that the namespace to which log messages are sent is determined by the value of an environment variable, and that said env var is not being set automatically in GKE containers. (It has supposedly been resolved by https://github.com/GoogleCloudPlatform/google-cloud-ruby/pull/2071 but I don't think that's live yet, at least not in my cluster.) I'll try forcing the env var in my deployment file, and if that works I'll convert the last part of this to an answer.

After much web searching, I managed to turn up https://github.com/GoogleCloudPlatform/google-cloud-ruby/issues/2025 which suggests that the namespace to which log messages are sent is determined by the value of an environment variable, and that said env var is not being set automatically in GKE containers. (It has supposedly been resolved by https://github.com/GoogleCloudPlatform/google-cloud-ruby/pull/2071 but I don't think that's live yet, at least not in my cluster.) I forced the env var in my deployment file, and IIRC that worked. (I no longer have access to that cluster, so I can't verify that.)

The google-cloud-env gem released version 1.0.2 this morning; you may want to update your bundle.
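If Bundler doesn't pick the new version up on its own (stackdriver pulls google-cloud-env in indirectly, if I remember the dependency chain right), one option is to add an explicit floor in your Gemfile and run bundle update google-cloud-env; the constraint below just mirrors the release mentioned above:

# Gemfile -- make sure the patched google-cloud-env is used
gem "google-cloud-env", ">= 1.0.2"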

Related

Rails not logging requests

I'm utterly stumped as to why I'm not able to see the Rails controller outputs in my development log. I've spent days beating my head against a wall trying to figure this out and I'm not sure what else to try.
Setup: Rails 5.2.3 app running ruby 2.6.3 via docker-compose.
It started with me not being able to see my app logs when running docker logs <container-name>. However, since I could see the output from Puma starting and from a shell script that ran rake tasks, I soon realized that the issue might be with Rails itself.
To help narrow down the issue, I:
Tore down and rebuilt the docker environment, several times
Stopped writing via STDOUT in favor of logs/development.log
Disabled lograge and elastic-apm, just in case
Reverted my development.rb config back to what's generated with a rails new
Followed the suggestions here
However, when running the rails console via docker exec -it <container-name>:
Running Rails.logger.level returns 2, which is warn, despite the default log level in development being debug
I'm able to see log output when running Rails.logger.warn 'foo'
After setting Rails.logger.level = 0 I'm able to see output when running Rails.logger.debug 'foo'
I tried setting the value explicitly with config.log_level = :debug in development.rb, yet it still comes up at the warn level.
However, I'm still not able to see any logs when navigating the application. Any thoughts?
Ugh. I feel like the biggest schmuck but I've figured out the issue.
I went back through source control to see what had changed recently. In addition to the elastic-apm gem, I had also added the Unleash gem.
I went to check out its configuration, and it looks like following their recommended configuration causes logging to break. The line specifically causing offense was in the Unleash initializer: setting config.logger = Rails.logger
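For anyone else who hits this, the initializer looked roughly like the sketch below (the URL and app name are placeholders, not my real values). Giving Unleash its own Logger instance instead of handing it Rails.logger brought Rails logging back:

# config/initializers/unleash.rb
Unleash.configure do |config|
  config.url      = ENV["UNLEASH_URL"]   # placeholder; whatever your Unleash server URL is
  config.app_name = "my-rails-app"       # placeholder
  # config.logger = Rails.logger         # the recommended line that broke Rails logging for me
  config.logger   = Logger.new($stdout)  # give Unleash its own logger instead
end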

controller can not read env variables on the server

I used Capistrano 3 to deploy my Rails 4 app to Amazon EC2.
Unfortunately, my app on EC2 cannot read environment variables.
I've added my variables to ~/.bashrc and /etc/environment, and tried
# config/deploy/staging.rb
set :default_env, {
...
}
I can get my variables via echo $my_var on the command line and ENV['my_var'] in the rails console on my EC2 instance. However, whenever I go to a page that needs to read an environment variable, I get an HTTP 500 error. According to the log, it seems that my controller cannot read the env variable.
I've also rebooted my server and re-deployed many times, but no luck.
How can I make my server read the env variables properly?
By the way, I am using unicorn (4.8.3) and capistrano-unicorn-nginx (~> 3.1.0).
Hopefully you've found a solution for this by now. I'll post a brief answer so it may benefit others who google for this.
Because the server environment varies so much from OS to OS, Capistrano tries to ignore it and isolate itself from it as much as possible. Here's a link with more info. Also, Capistrano probably does not load ~/.bashrc; more info here. These are likely the reasons why your app could not read the environment.
I'm not sure why updating the :default_env option was failing, but that's also not the best way to solve this, because you'd have to place credentials in a version-controlled file.
A good way to solve the secret credentials problem is by using either dotenv or rails_config gems. You'll likely prefer the dotenv gem because it works with env vars.
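A minimal dotenv setup, assuming the dotenv-rails flavor of the gem and a .env file on the server that stays out of version control, would look something like this:

# Gemfile
gem "dotenv-rails"

# .env on the server (kept out of version control) holds lines like my_var=some-secret-value,
# after which the app reads it like any other environment variable:
ENV["my_var"]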
Rails 4.1+ has the built-in secrets.yml "feature", which is also fine and will probably be the standard moving forward. I wrote the capistrano-secrets-yml plugin that works with this.

Custom rails environment log getting written to nginx error.log

I have a custom environment for my app called staging. For some reason, no staging.log file ever gets created, and all of the stuff that I would assume to be written there is instead showing up in the nginx error.log file. Is there a configuration option that I'm missing?
This is an old question, but I just ran into the same issue and wanted to share what fixed it for me in case it helps anyone else who runs into this.
In my case, I was using the rails_12factor gem (because I had my app deployed on Heroku at one point), which causes all logging to go to stdout. This caused it all to get dumped into the nginx error.log. I was able to remove the gem entirely to fix it, since I don't use Heroku for the app any more, but if you need to support Heroku deployment you could add some configuration so that rails_12factor only gets required in that context.
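One way to scope it, as a sketch (the group name here is just my choice), is to park the gem in its own Bundler group, since Heroku only excludes the development and test groups by default:

# Gemfile
group :heroku do
  gem "rails_12factor"
end

On the non-Heroku server you'd then run bundle install --without heroku so the gem never gets loaded there.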
Try adding something like
c = ActiveSupport::BufferedLogger.new("log/staging.log")
c.auto_flushing = true
config.logger = c
to your config/environments/staging.rb
You may also want to play with the auto_flushing option; the Rails configuration guide says it's turned off in the production environment, though I don't know why. See the BufferedLogger docs here.
It doesn't look like a perfect solution to me, but it seems to work.
For logging in nginx you have:
the error_log directive, see http://nginx.org/en/docs/ngx_core_module.html#error_log
the access_log directive, see http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log
There's one gotcha with error_log:
if you set it to debug while your nginx is not compiled with the --with-debug flag, it will fall through to the default error.log location.
To check which flags your nginx was compiled with, use the nginx -V command.

Rails stopped logging

My Rails app running in the development environment stopped logging all of a sudden, and I am not able to figure out why.
I tried logging to a new file by doing
config.logger = Logger.new('log/temp.log')
config.log_level = :debug
But still no luck. The new file temp.log was created, but nothing gets logged to it. The thing is, this happens on my development server running nginx (I run my Rails app using "rails s -d" on this server). When I run the exact same files on my local machine (my own computer), logging works fine.
So I suspect the reason logging is not working is something specific to the server, but I didn't do much of anything on the server (e.g. I didn't install new gems). Logging had been working fine until a few days ago.
When I go to rails console
rails c
> Rails.logger.debug "hello"
=> true
I do get "hello" logged into the log/temp.log file specified above in the config file.
I think permission on log directory or file is ok. What else could be wrong?
I believe it's a locking issue, which you might be able to solve by removing the logger call that causes it.
I ran into this issue with Redmine 1.x.
I found a newrelic_rpm entry in production.log saying it didn't run, and one line from a Redmine plugin's init.
After removing both (newrelic_rpm from environment.rb, i.e. the config.gem line, and the plugin's logger init message), logging was restored and log entries started appearing again.

No log messages in production.log

I wrote a demo HelloWorld Rails app and tested it with WEBrick (it doesn't even use a DB; it's just a controller which prints "hello world"). Then I tried to deploy it to a local Apache powered by Passenger. In fact this test is just to get Passenger working (it's my first deploy on Apache). Now I'm not even sure that Passenger works, but I don't get any error on the Apache side.
When I hit http://rails.test/ the browser shows the Rails 500 error page, so I assume that Passenger works. I want to investigate the logs, but it turns out that production.log is empty! I don't think it's a permission problem, because if I delete the file, it is recreated when I reload the page. I tried to change the log level in config/environments/production.rb, and tried to manually write to the log file from rails console production with
Rails.logger.error('asdf')
it returns true, but nothing gets written to production.log. The path (obtained via Rails.logger.inspect) is correct, and again, the file is recreated if I manually remove it. How can I find out what's going on?
(I already checked the Apache logs, and I set the highest debug level for Passenger, but it seems to be a Rails problem, so it is not logged by the server.)
Assuming you're running Rails 3.2.1, this is a bug. It was patched in 3.2.2.
If you can't upgrade to 3.2.2 for any reason, this comment on GitHub has a workaround:
# config/initializers/patch_rails_production_logging.rb
Rails.logger.instance_variable_get(:@logger).instance_variable_get(:@log_dest).sync = true if Rails.logger
Setting this works on Rails 3.2.11:
Rails.logger = ActiveSupport::BufferedLogger.new(Rails.root.join("log","production.log"))
