How can I debug e-mail sending on Gitlab? - ruby-on-rails

My GitLab (version 5) is not sending any e-mails and I am lost trying to figure out what is happening. The logs give no useful information. I configured it to use sendmail.
I wrote a small script that sends e-mail through ActionMailer (I guess that is what GitLab uses to send e-mail, right?), and it sends the e-mail correctly.
But, on my Gitlab, I can guarantee that sendmail is not even being called.
Do I need to enable something to get e-mail notifications? How can I debug my issue?
Update
The problem is that I can not find any information anywhere. The thing just fails silently. Where can I find some kind of log? The logs in the log dir provide no useful information.
My question is, how can I make Gitlab be more verbose? How can I make it tell me what is going on?
Update 2
I just found a lot of mails scheduled on the Background jobs section. A lot of unprocessed Sidekiq::Extensions::DelayedMailer. What does it mean? Why were these jobs not processed?

Stumbled upon this issue today, here's my research:
Debugging SMTP connections in the GitLab GUI is not supported yet. However there is a pending feature request and a command line solution.
Set the desired SMTP settings in /etc/gitlab/gitlab.rb and run gitlab-ctl reconfigure (see https://docs.gitlab.com/omnibus/settings/smtp.html); an example block is sketched at the end of this answer.
Start the console running gitlab-rails console -e production.
Show the configured delivery method (should be :smtp) running the command ActionMailer::Base.delivery_method. Show all configured SMTP settings running ActionMailer::Base.smtp_settings.
To send a test mail run
Notify.test_email('youremail@example.com', 'Hello World', 'This is a test message').deliver_now
On the admin page in GitLab, the section »Background jobs« shows information about all jobs. Failing SMTP connections are listed there as well.
Please note, you may need to restart the GitLab instance in order to use the newly configured SMTP settings (on my instance the console was able to send mails, the GUI required a restart). Run gitlab-ctl restart to restart your instance.
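For reference, the SMTP block in /etc/gitlab/gitlab.rb mentioned in the first step looks roughly like this (a sketch with placeholder values; check the Omnibus docs linked above for the options your provider needs):
# /etc/gitlab/gitlab.rb -- placeholder values, adjust for your mail provider
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.example.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "gitlab@example.com"
gitlab_rails['smtp_password'] = "secret"
gitlab_rails['smtp_domain'] = "example.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
After editing, run gitlab-ctl reconfigure so the settings are applied.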

First, I will tell you what my problem was: Sidekiq is responsible for sending the e-mails. For some reason my Sidekiq was stuck; restarting it solved the problem.
Where I found information about the problems on my GitLab:
The log dir. It has some information.
On the admin page, the section "Background jobs" gives information about Sidekiq.
The JavaScript console (if your browser supports it) also has useful information, but only if your problem is related to JavaScript.
And if you reach this point, you may modify GitLab's code so you can "trace" it by writing to a file:
File.open('/tmp/logfile','a') { |file| file.write("Hello World!\n") }

Maybe try enabling delivery errors in production mode and see what happens
config.action_mailer.raise_delivery_errors = true
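In case it is not obvious where that line goes, here is a minimal sketch, assuming the stock Rails layout GitLab uses (put it in the production environment file and restart):
# config/environments/production.rb (sketch)
Gitlab::Application.configure do
  # Raise exceptions on delivery failures instead of failing silently,
  # so the error shows up in the logs.
  config.action_mailer.raise_delivery_errors = true
end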

I had the same problem and found that I needed to mod application.rb:
diff --git a/config/application.rb b/config/application.rb
index d85bcab..274976f 100644
--- a/config/application.rb
+++ b/config/application.rb
@@ -11,6 +11,8 @@ end
 module Gitlab
   class Application < Rails::Application
+    config.action_mailer.sendmail_settings = { :arguments => "-i" }
+
     # Settings in config/environments/* take precedence over those specified here.
     # Application configuration should go into files in config/initializers
     # -- all .rb files in that directory are automatically loaded.
Note: I'm running Debian 7, which uses exim for mail. (The -i flag tells the sendmail binary not to treat a line containing only a dot as the end of the message.)

In the admin section under Background Jobs, if you have lots of items in the Scheduled tab, try restarting Sidekiq:
cd /home/git/gitlab
exec rake sidekiq:start RAILS_ENV=production
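Before (or instead of) restarting, you can also peek at the queues from a Rails console on the GitLab server using Sidekiq's API; a sketch, assuming a reasonably recent Sidekiq version:
require 'sidekiq/api'

stats = Sidekiq::Stats.new
puts "enqueued: #{stats.enqueued}, failed: #{stats.failed}"

# List each queue and how many jobs are sitting in it
Sidekiq::Queue.all.each { |q| puts "#{q.name}: #{q.size}" }

# Jobs waiting in the "Scheduled" tab (this is where unprocessed DelayedMailer jobs pile up)
puts "scheduled: #{Sidekiq::ScheduledSet.new.size}"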

Related

Is it possible to write delayed_jobs.log to log in Openshift pod?

The application that my team and I are working on uses Ruby on Rails, hosted in an OpenShift pod. We use delayed_job to handle background processing, and for delayed_job logging the application writes the log into the log/delayed_job.log file. However, if a new pod for the application is created, the pod will create a new file. Also, the company is using Kibana to collect every log from the OpenShift pods.
What we tried so far is putting this code in delayed_job_config.rb:
Delayed::Worker.logger = Logger.new(STDERR)
To write the log for other processes besides delayed_job into the OpenShift pod log, we use the following code, e.g.:
Rails.logger.info "Result: #{ldap.get_operation_result.code}"
However, the delayed_job log still does not appear in the Logs tab of the pod (which it needs to do in order for the log to appear in Kibana).
The goal is to write the log to the Logs tab of the OpenShift pod.
Any help is appreciated.
UPDATE :
We tried putting Delayed::Worker.logger = Rails.logger in delayed_job_config.rb, but it still did not work.
In case it might help anyone:
We ended up muting the DelayedJob ActiveRecord-related logs,
playing around with a post in the delayed_job repo to also ensure that if an error occurs, we get the previous log level back.
And within our container, we tail the actual delayed_job log file and send it to the output to get the actual job logs:
tail -f /app/log/production.log &
It's not optimal, but at least we get to see the logs!
In general, everything written to STDOUT or STDERR from the main process of the container/pod appears in the Logs tab. I guess your problem is that your background work is not executed in the main process and is therefore not shown in the Logs tab.
As an example, we had an application where the start script executed the actual work in another process. Even though this new process logged to STDOUT, it did not appear in the Logs tab, as OpenShift only shows the logs of the main process, which in our case was the process executing the script.
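One way around that, sketched below under the assumption that you use delayed_job's standard rake task, is to make the worker the pod's main process and point its logger at STDOUT so OpenShift picks it up:
# config/initializers/delayed_job_config.rb (sketch)
# Log to STDOUT so that, when the worker runs as the container's main
# process (e.g. `bundle exec rake jobs:work` as the pod command),
# its output lands in the Logs tab and therefore in Kibana.
Delayed::Worker.logger = Logger.new(STDOUT)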

Rails not logging requests

I'm utterly stumped as to why I'm not able to see the Rails controller outputs in my development log. I've spent days beating my head against a wall trying to figure this out and I'm not sure what else to try.
Setup: Rails 5.2.3 app running ruby 2.6.3 via docker-compose.
It started with me not being able to see my app logs when running docker logs <container-name>. However, since I was able to see the output from Puma starting and from a shell script that ran rake tasks, I soon realized that the issue might be with Rails.
To help assist with finding the issue:
Tore down and rebuilt the docker environment, several times
Stopped writing via STDOUT in favor of logs/development.log
Disabled lograge and elastic-apm, just in case
Reverted my development.rb config back to what's generated with a rails new
Followed the suggestions here
However, when running the rails console via docker exec -it <container-name>:
Running Rails.logger.level returns 2, which is warn, despite the default logging level being debug
I'm able to see log output when running Rails.logger.warn 'foo'
After setting Rails.logger.level = 0 I'm able to see output when running Rails.logger.debug 'foo'
I tried setting the value explicitly as config.log_level = :debug in development.rb yet it still set itself to the warn level.
However, I'm still not able to see any logs when navigating the application. Any thoughts?
Ugh. I feel like the biggest schmuck but I've figured out the issue.
I went back through source control to see what had changed recently. In addition to the elastic-apm gem, I had also added the Unleash gem.
I went to check out its configuration, and it looks like following their recommended configuration causes logging to break. The line that was specifically causing offense was in the unleash initializer, setting config.logger = Rails.logger.
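If you want to keep Unleash but stop it from taking over request logging, one option is to give it its own logger instead of Rails.logger. A sketch (the exact option names may vary by gem version, so double-check the unleash README):
# config/initializers/unleash.rb (sketch)
Unleash.configure do |config|
  config.app_name = 'my-rails-app'             # hypothetical value
  config.url      = ENV['UNLEASH_SERVER_URL']  # hypothetical value
  # Give Unleash a separate logger; pointing it at Rails.logger is what
  # broke request logging for me.
  config.logger    = Logger.new(STDOUT)
  config.log_level = Logger::WARN
end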

Stackdriver logging in a Rails app in GKE is not working

Ruby 2.3.6, Rails 4.2.6, Stackdriver 0.15.0
Following the directions at https://cloud.google.com/logging/docs/setup/ruby, I've added the stackdriver gem to Gemfile. From what I can tell, I shouldn't need to do anything else.
However, I'm not seeing any log messages from within Rails showing up in Stackdriver. I've even tried execing a shell in the container, running bundle exec rails console, and explicitly logging with Rails.logger.error "this is a log message".
config.google_cloud.use_logging = true and config.log_level = :debug, FWIW.
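For context, the relevant pieces of my setup look roughly like this (a sketch of what the linked setup guide describes):
# Gemfile
gem 'stackdriver'

# config/environments/production.rb
Rails.application.configure do
  config.google_cloud.use_logging = true
  config.log_level = :debug
end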
UPDATE: I took a closer look at the logger instance with pp Rails.logger, and it contains this:
@resource=
  #<Google::Cloud::Logging::Resource:0x0000558b075d1f50
    @labels={:cluster_name=>"onehq-cluster", :namespace_id=>"default"},
    @type="container">,
My container is running in the staging namespace, which would explain why I'm not seeing log messages. When I look in the Stackdriver log viewer in the default namespace, lo, there are logs. Unfortunately, sending all messages to default won't do me any good, because I have apps running in two namespaces, and I need to be able to tell them apart.
I tried adding additional configuration:
config.google_cloud.logging.monitored_resource.type = "container"
config.google_cloud.logging.monitored_resource.labels = { cluster_name: "my-cluster", namespace_id: "staging" }
And now Rails.logger.error "this is a log message" doesn't show up anywhere.
UPDATE 2: After much web searching, I managed to turn up https://github.com/GoogleCloudPlatform/google-cloud-ruby/issues/2025 which suggests that the namespace to which log messages are sent is determined by the value of an environment variable, and that said env var is not being set automatically in GKE containers. (It has supposedly been resolved by https://github.com/GoogleCloudPlatform/google-cloud-ruby/pull/2071 but I don't think that's live yet, at least not in my cluster.) I'll try forcing the env var in my deployment file, and if that works I'll convert the last part of this to an answer.
After much web searching, I managed to turn up https://github.com/GoogleCloudPlatform/google-cloud-ruby/issues/2025 which suggests that the namespace to which log messages are sent is determined by the value of an environment variable, and that said env var is not being set automatically in GKE containers. (It has supposedly been resolved by https://github.com/GoogleCloudPlatform/google-cloud-ruby/pull/2071 but I don't think that's live yet, at least not in my cluster.) I forced the env var in my deployment file, and IIRC that worked. (I no longer have access to that cluster, so I can't verify that.)
The google-cloud-env gem released version 1.0.2 this morning; you may want to update your bundle.

Travis builds are failing with fatal listen error

My Travis tests for a Rails app have been working fine, but have suddenly started failing about one time in three with:
$ bundle exec rails test
FATAL: Listen error: unable to monitor directories for changes.
Visit
https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
for info on how to fix this.
Looking at that suggested URL, it proposes ways to increase the number of inotify-watchers, but requires the use of sudo to change the limit. That's fine on my dev machine (though I'm actually not experiencing the error on my machine), but I don't know if that's possible (or advisable) in the Travis environment.
I looked at the Travis docs to see if there's a config setting for increasing the number of watchers, but I couldn't find anything.
So: what's the best way to deal with this error in a Travis CI test?
If you're running this on Travis CI or a CI/staging/testing server, you should not need to watch files for changes. The code should be deployed to the server, then bundle exec rails test should run, and that's it. No file watching necessary.
What I suspect is that the config for your environment is not set up correctly and that the listen gem is somehow activated for the testing environment when it should only be activated for the development environment.
Try running the tests locally with the same environment as TravisCI (testing in this example):
RAILS_ENV=testing bundle exec rails test
and see what it says. If it gives you that error, check the config/environments/testing.rb file and look for config.cache_classes.
When config.cache_classes is set to true, the classes are cached and the listen/file-watcher will not be active. In your local development environment, config/environments/development.rb, the config.cache_classes setting should be set to false so that file-watching and reloading happens.
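Concretely, the settings to compare would look something like this sketch of stock Rails environment files (note that a default Rails app's test environment is usually called test rather than testing):
# config/environments/test.rb
Rails.application.configure do
  # With cached classes there is no code reloading, so the listen/inotify
  # file watcher never starts and the Travis error cannot occur.
  config.cache_classes = true
end

# config/environments/development.rb
Rails.application.configure do
  config.cache_classes = false
  # This is the part that needs inotify watchers; keep it in development only.
  config.file_watcher = ActiveSupport::EventedFileUpdateChecker
end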

How do I see delayed_job jobs in production?

I have a server where I deploy using Capistrano, and I use delayed_job to do some mailing, but on my server for some reason the jobs do not execute. The delayed_job process is running (running bin/delayed_job status answers me correctly, saying there's a process there with some pid), but I don't know if the process just isn't executing my jobs or if my jobs aren't even being enqueued. Locally it all works fine, but in production on the server it does not.
I'd like to know if there's a way I can at least check what jobs are there, since I can't do it by accessing the console
Another gem that works with delayed job is delayed-web which you can find here https://github.com/tatey/delayed-web
you add it to your gemfile
gem 'delayed-web'
then run
rails generate delayed:web:install
this will generate an initializer file delayed_web.rb under config/initializers with the following:
Rails.application.config.to_prepare do
  Delayed::Web::Job.backend = 'active_record'
end
and in config/application.rb this will be added for you as well by the generator
# config/application.rb
config.assets.enabled = true
config.assets.precompile << 'delayed/web/application.css'
In routes.rb it may add a route as well but if you are using devise then maybe you want to restrict access to admin user(s) only as follows:
authenticated :user, -> user { user.admin? } do
  mount Delayed::Web::Engine, at: '/jobs'
end
Ok so I checked my jobs through the database itself: I entered psql as the postgres user and ran some queries on the delayed_jobs table. You can also try running RAILS_ENV=production bin/delayed_job run (for Rails 4; in Rails 3 use script/ instead of bin/), which will show you what the workers are doing while they execute the jobs.
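If you can run commands on the server but not open an interactive console, the same check can be done non-interactively with rails runner; a sketch, assuming the standard delayed_job_active_record Delayed::Job model:
# RAILS_ENV=production bin/rails runner path/to/check_jobs.rb
pending = Delayed::Job.where(failed_at: nil)
failed  = Delayed::Job.where.not(failed_at: nil)

puts "pending jobs: #{pending.count}"
puts "failed jobs:  #{failed.count}"

# Show the last error of the most recent failure, if any
if (job = failed.order(:failed_at).last)
  puts job.last_error
end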
You can also, as Swards commented above, use a gem to have a web interface for delayed_jobs: https://github.com/ejschmitt/delayed_job_web
If you still wanna see what was my problem with the email sending I've opened another question because it got to far away from what this one was about: What port to use sending email with SMTP (mailgun) in rails app on production server (DigitalOcean)?
