I'm using websocket_rails to provide an API for a JS client. Locally it works great, but the exact same setup in production will (seemingly randomly) decide to stop working.
My production.log yields RuntimeError (eventmachine not initialized: evma_install_oneshot_timer)
At first I thought this was the root issue, but my Puma error log yields this when I restart the server and try again: RuntimeError: async response must have empty headers and body
I added some logging in the puma gem, and indeed, it's receiving Rails session headers when doing GET /websocket.
Sometimes there is no issue at all and everything works fine for a few days; then it stops, and no matter what I do it refuses to work again.
Thanks in advance. I've wasted days on this problem!
Puma config:
# Change to match your CPU core count
workers 1
# Min and Max threads per worker
threads 1, 6
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env
# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  require "active_record"
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end
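For context, nothing in this config starts an EventMachine reactor, and "eventmachine not initialized: evma_install_oneshot_timer" usually means no reactor is running in the worker process. Below is a minimal sketch of the commonly suggested workaround, assuming websocket_rails needs a running reactor inside each Puma worker; the initializer file name is my own:
# config/initializers/eventmachine.rb (hypothetical file name)
# Start an EventMachine reactor thread in each server process if one is not
# already running, so EM timers installed by websocket_rails have a reactor.
require "eventmachine"

unless EventMachine.reactor_running?
  Thread.new { EventMachine.run }
  # Wait briefly so the first EM call does not race the reactor boot.
  sleep 0.1 until EventMachine.reactor_running?
end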
Related
I successfully installed a Rails app on my FreeBSD server, but when I test rails s -e production or rails s -e development, the Puma server returns Read: #<NameError: uninitialized constant Puma::Server::UNPACK_TCP_STATE_FROM_TCP_INFO> after I send a request.
Did I miss a step somewhere?
PS. I use Rails 6 with SQLite3.
config/puma.rb
# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum; this matches the default thread size of Active Record.
#
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count
# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port ENV.fetch("PORT") { 3000 }
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Specifies the `pidfile` that Puma will use.
pidfile ENV.fetch("PIDFILE") { "tmp/pids/server.pid" }
# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked web server processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }
# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory.
#
# preload_app!
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
I just made a monkey patch that seems to work.
(This problem happens on FreeBSD, not on Mac OS X)
Place this content in an initializer file. For example: config/initializers/puma_missing_constant_monkey_patch.rb.
Rails.application.config.after_initialize do
  if defined?(::Puma) && !Object.const_defined?('Puma::Server::UNPACK_TCP_STATE_FROM_TCP_INFO')
    ::Puma::Server::UNPACK_TCP_STATE_FROM_TCP_INFO = "C".freeze
  end
end
It just defines the missing constant. I've got no clue whether it breaks something else; on the other hand, Puma is using a constant that isn't defined. The definition of this constant in Puma (lib/puma/server.rb) is conditional.
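For anyone who wants a slightly more defensive variant, here is a sketch that does the same thing as the patch above but skips cleanly when Puma isn't loaded or already defines the constant, and logs when it fires (the logging line and the const_set spelling are my own):
# config/initializers/puma_missing_constant_monkey_patch.rb (defensive variant)
Rails.application.config.after_initialize do
  next unless defined?(::Puma::Server)
  next if ::Puma::Server.const_defined?(:UNPACK_TCP_STATE_FROM_TCP_INFO)

  # Same value as the patch above.
  ::Puma::Server.const_set(:UNPACK_TCP_STATE_FROM_TCP_INFO, "C".freeze)
  Rails.logger.info "Defined Puma::Server::UNPACK_TCP_STATE_FROM_TCP_INFO (Puma #{Gem.loaded_specs['puma']&.version})"
end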
I am trying to use MidiSmtpServer to receive email in a Heroku application, and have been using the code from one of the examples in its documentation. However, I don't know where to put that code for the SMTP server to start after Puma, or where to put it for it to start at all. Using on_worker_boot in puma.rb doesn't work.
puma.rb:
# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum; this matches the default thread size of Active Record.
#
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count
# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port ENV.fetch("PORT") { 3000 }
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Specifies the `pidfile` that Puma will use.
pidfile ENV.fetch("PIDFILE") { "tmp/pids/server.pid" }
# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked web server processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }
require "midi-smtp-server"
require "mail"
on_worker_boot do
  class MySmtpd < MidiSmtpServer::Smtpd
    def on_message_data_event(ctx)
      puts "[#{ctx[:envelope][:from]}] for recipient(s): [#{ctx[:envelope][:to]}]..."
      # Just decode the message once to make sure it is readable
      mail = Mail.read_from_string(ctx[:message][:data])
      # Handle the incoming mail; just show the message source
      puts mail.to_s
    end
  end

  # Try to gracefully shut down on Ctrl-C
  trap("INT") do
    puts "Interrupted, exit now..."
    exit 0
  end

  # Output for debug
  puts "#{Time.now}: Starting MySmtpd..."

  # Create a new server instance listening on 0.0.0.0:2525
  # and accepting a maximum of 4 simultaneous connections
  server = MySmtpd.new(2525, "0.0.0.0", 4)

  # Set up exit code
  at_exit do
    # Check whether to shut down the connection
    if server
      # Output for debug
      puts "#{Time.now}: Shutdown MySmtpd..."
      # Stop all threads and connections gracefully
      server.stop
    end
    # Output for debug
    puts "#{Time.now}: MySmtpd down!\n"
  end

  # Start the server
  server.start
  # Run on server forever
  server.join
end
# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory.
#
# preload_app!
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
Applications running on Heroku are containerized, and running an SMTP server in or with the web process is not possible.
You need to instead look at services that provide inbound mail delivery. If you're using Rails 6, follow the documentation to set up ActionMailbox.
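For a concrete starting point, here is a minimal ActionMailbox sketch for Rails 6. The mailbox and routing names are placeholders, and you still have to run the installer and configure an inbound ingress (SendGrid, Mailgun, Postmark, etc.) as described in the guide:
# Run `bin/rails action_mailbox:install` and `bin/rails db:migrate` first.

# app/mailboxes/application_mailbox.rb
class ApplicationMailbox < ActionMailbox::Base
  # Route every inbound email to the mailbox below (placeholder name).
  routing all: :inbound_messages
end

# app/mailboxes/inbound_messages_mailbox.rb
class InboundMessagesMailbox < ApplicationMailbox
  def process
    # `mail` is a Mail::Message, the same kind of object the MidiSmtpServer
    # example decodes with Mail.read_from_string.
    Rails.logger.info "[#{mail.from&.join(',')}] for recipient(s): [#{mail.to&.join(',')}]"
    Rails.logger.info mail.to_s
  end
end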
If I add the following lines in application.rb
puts 'in application.rb'
pid = spawn('rake jobs:work')
Process.detach pid
I see the following output
in application.rb
=> Booting Puma
...
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
in application.rb
[Worker(host:local pid:9966)] Starting job worker
in application.rb
[Worker(host:local pid:9998)] Starting job worker
in application.rb
[Worker(host:local pid:10032)] Starting job worker
If I remove the spawn call, "in application.rb" is displayed only once at the beginning, as expected.
This output is written about once per second. All those processes are healthy and are stopped when I kill Puma.
I can't figure out what is happening. Why is this code getting executed every second?
... rake is also requiring the file, that's why: each spawned rake jobs:work process loads the Rails environment, which runs application.rb again and spawns yet another worker.
To get this to work as expected:
if $0 =~ /rails/
  pid = spawn('rake jobs:work')
  Process.detach pid
end
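Note that $0 =~ /rails/ also matches rails console and rails runner, so those would spawn a worker too. If that matters, here is a sketch of an alternative guard using an environment flag (the variable names are made up):
# config/application.rb (sketch)
# Only a process explicitly asked to spawn a worker does so, and the flag set
# below is inherited by the child rake process, so it cannot spawn another one.
if ENV["SPAWN_JOB_WORKER"] == "1" && ENV["JOB_WORKER_SPAWNED"].nil?
  ENV["JOB_WORKER_SPAWNED"] = "1"
  pid = spawn("rake jobs:work")
  Process.detach(pid)
end
Then start the app with SPAWN_JOB_WORKER=1 rails s.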
Hi, I'm deploying my first Rails app to an Ubuntu 16 server using Capistrano. Everything went smoothly except that the images are not showing in the production environment.
On the production server the images are located at this path: /myapp/current/public/assets
But if I look at it in the browser, my broken image links give me this (see picture); this is a broken link for the header image.
The strange thing is that there is a .svg file in /myapp/current/public/assets which shows up perfectly in the browser; the path is shown in the picture below.
this is my Capfile
# Load DSL and set up stages
require "capistrano/setup"
# Include default deployment tasks
require "capistrano/deploy"
set :rbenv_type, :user # or :system, depends on your rbenv setup
set :rbenv_ruby, '2.3.1'
require 'capistrano/rbenv'
require 'capistrano/bundler'
require 'capistrano/rails'
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
This is the config/deploy.rb
# config valid only for current version of Capistrano
lock '3.6.1'
set :application, 'myapp'
set :repo_url, 'git#github.com:DadiHall/myapp.git'
# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, '/home/deploy/myapp'
set :linked_files, %w{config/database.yml config/secrets.yml}
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      execute :touch, release_path.join('tmp/restart.txt')
    end
  end

  after :publishing, 'deploy:restart'
  after :finishing, 'deploy:cleanup'
end
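Since capistrano/rails is required in the Capfile, deploy:assets:precompile should already run on each deploy. If you want the deploy log to make an empty public/assets obvious, here is a sketch of a small custom task (the task name and hook are my own):
# lib/capistrano/tasks/check_assets.rake (hypothetical helper)
namespace :deploy do
  desc 'List compiled assets so an empty public/assets stands out in the deploy log'
  task :check_assets do
    on roles(:app) do
      execute :ls, '-l', release_path.join('public/assets')
    end
  end
  after 'deploy:publishing', 'deploy:check_assets'
end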
Here is the environments/production.rb
Rails.application.configure do
  config.cache_classes = true
  config.consider_all_requests_local = false
  config.action_controller.perform_caching = true
  config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present?
  config.assets.js_compressor = :uglifier
  # Do not fallback to assets pipeline if a precompiled asset is missed.
  config.assets.compile = false
  config.assets.digest = true
  config.assets.initialize_on_precompile = false
  # `config.assets.precompile` and `config.assets.version` have moved to config/initializers/assets.rb
  config.log_level = :debug
  config.i18n.fallbacks = true
  config.active_support.deprecation = :notify
  config.log_formatter = ::Logger::Formatter.new
  config.active_record.dump_schema_after_migration = false

  Braintree::Configuration.environment = :sandbox
  Braintree::Configuration.merchant_id = ENV['merchant_id']
  Braintree::Configuration.public_key = ENV['public_key']
  Braintree::Configuration.private_key = ENV['private_key']
end
In /etc/nginx/sites-enabled/default I have the following lines:
server {
  listen 80 default_server;
  listen [::]:80 default_server ipv6only=on;
  server_name mydomain.com;

  passenger_enabled on;
  rails_env production;
  root /home/deploy/myapp/current/public;

  # redirect server error pages to the static page /50x.html
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root html;
  }
}
This is the nginx error log:
[ 2016-09-28 06:25:02.4500 1594/7f900ee89700 age/Sha/ApiServerUtils.h:794 ]: Log file reopened.
[ 2016-09-28 09:45:43.7508 1597/7f2326502700 age/Cor/CoreMain.cpp:819 ]: Checking whether to disconnect long-running connections for process 1978, application /home/deploy/hlinreykdal/current/public (production)
App 21337 stdout:
App 21405 stdout:
[ 2016-09-28 10:30:31.0631 1597/7f2326502700 age/Cor/CoreMain.cpp:819 ]: Checking whether to disconnect long-running connections for process 21405, application /home/deploy/hlinreykdal/current/public (production)
App 23240 stdout:
App 23308 stdout:
[ 2016-09-28 10:41:40.1769 1597/7f2326502700 age/Cor/CoreMain.cpp:819 ]: Checking whether to disconnect long-running connections for process 23308, application /home/deploy/hlinreykdal/current/public (production)
App 24329 stdout:
App 24397 stdout:
I have tried bundle exec rake assets:precompile without any luck.
I have deployed and restarted nginx again and again, without any luck.
I have tried almost every answer to similar questions here on stack overflow, but nothing seems to work.
Am I missing something here?
I'm sure this problem has something to do with the asset pipeline, but I'm not sure how to fix it.
Can anyone please take a look at this and advise me.
thanks in advance
OK, if anyone is experiencing a similar problem, you might want to check config.assets.compile. In my case I only had to change config.assets.compile in config/environments/production.rb from false to true, and now everything works... It only took me two days to figure it out :D
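That works, but compiling assets on demand in production is slow and is usually treated as a fallback rather than a fix. Here is a sketch of the setup I'd aim for instead, with the stock settings and precompilation on the server (capistrano/rails normally runs the rake task for you during deploy):
# config/environments/production.rb (sketch)
config.assets.compile = false   # do not compile on demand
config.assets.digest  = true    # use fingerprinted filenames

# On the server (normally done by capistrano/rails during deploy):
#   RAILS_ENV=production bundle exec rake assets:precompile
# and reference images via helpers (image_tag / asset_path) so the digested
# filename is resolved at render time.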
Note that public/assets is where the asset pipeline puts its stuff. If this is for static assets, I would put them in app/assets/images in order to use the asset pipeline, or choose another directory name.
I was using HTML's img tag to serve image assets as follows:
<img src="/assets/AdminLTELogo.png" alt="AdminLTE Logo" class="brand-image im">
I just had to change it to the following and it worked, because precompiled assets get fingerprinted filenames and image_tag resolves the digested path:
<%= image_tag 'AdminLTELogo.png', alt: "AdminLTE Logo", class: "brand-image im" %>
Cheers.
I have a Rails app using an Nginx HTTP server and a Unicorn app server. I'm getting the following error when the app server doesn't receive any requests for about 15min:
OCIError: ORA-02396: exceeded maximum idle time, please connect again
After a page refresh, the page loads fine.
I'm using Rails 4.2.1, ruby-oci8 2.1.0, and activerecord-oracle_enhanced-adapter 1.6.0.
I'm still relatively new to web development, but I'm thinking this error occurs when the Oracle connection times out from idling and the app server doesn't know that the connection is bad.
I've tried setting the reaping_frequency to every 15 minutes, but that didn't fix the problem.
How can I manage the database connections and make sure that these idle connections are dropped? Can I set a timeout to drop database connections before oracle times out?
this is my config/unicorn.rb
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir
worker_processes 2
preload_app true
timeout 15
listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
pid "#{shared_dir}/pids/unicorn.pid"
before_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.establish_connection
  end
end
Here is my fix, which isn't that great.
in the application controller:
before_action :refresh_connection

def refresh_connection
  puts Time.now.to_s + ' - refreshing connection'
  ActiveRecord::Base.connection.disconnect!
  if ActiveRecord::Base.establish_connection
    puts Time.now.to_s + ' - new connection established'
  else
    puts Time.now.to_s + ' - new connection cannot be established'
  end
end
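A lighter variant is to reconnect only when the connection has actually gone stale instead of tearing it down on every request. A sketch, assuming Rails 4.2 with the Oracle enhanced adapter:
# app/controllers/application_controller.rb (sketch)
before_action :ensure_active_connection

def ensure_active_connection
  # verify! checks the connection and reconnects it only if the check fails,
  # so a healthy connection is reused instead of being rebuilt each request.
  ActiveRecord::Base.connection.verify!
end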