I'm new to Elixir. My goal is to read a simple text file.

I started with:

mix new pluralsight_tweet --sup

The scaffolded project compiled and ran successfully. Here is the code in file_reader.ex:
defmodule PluralsightTweet.FileReader do
  def get_strings_to_tweet(path) do
    File.read!(path)
  end
end
I try to compile it using:

iex -S mix

but it gives errors. What's the issue and how can I fix it?
application.ex
defmodule PluralsightTweet.Application do
  # See http://elixir-lang.org/docs/stable/elixir/Application.html
  # for more information on OTP Applications
  @moduledoc false

  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    # Define workers and child supervisors to be supervised
    children = [
      # Starts a worker by calling: PluralsightTweet.Worker.start_link(arg1, arg2, arg3)
      # worker(PluralsightTweet.Worker, [arg1, arg2, arg3]),
    ]

    # See http://elixir-lang.org/docs/stable/elixir/Supervisor.html
    # for other strategies and supported options
    opts = [strategy: :one_for_one, name: PluralsightTweet.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
When my Sidekiq worker runs out of memory, I don't know what it was working on when that happened. Is there a way to log this (other than the obvious solution of logging at the start of each job)?
One idea I have is to catch SIGTERM, something like this:
Sidekiq.configure_server do |config|
  signals = %w[INT TERM]
  signals.each do |signal|
    old_handler = Signal.trap(signal) do
      info = []
      workers = Sidekiq::Workers.new
      workers.each do |_process_id, _thread_id, worker|
        info << worker
      end
      ## -> Do something with info <- ##
      ## StatsTracker.count("#{job.name} being processed when shutdown")
      if old_handler.respond_to?(:call)
        old_handler.call
      else
        exit
      end
    end
  end
end
I would like to create a middleware that searches the queues (typically default) for jobs with the same arguments.

Typically I send arguments like this:

perform(client_id, port_number)

Using the middleware, I would like to see whether there is already a job with that client_id and port_number in the default queue; if there is, I just return and log it to the logger.
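The duplicate check itself can be sketched independently of Sidekiq. Assuming each queued job is a hash with an 'args' array (as in Sidekiq's job format), a hypothetical helper might look like the one below; in a real middleware the job hashes would come from iterating Sidekiq::Queue.new('default'). The PortScanJob class name is made up for illustration:

```ruby
# Sketch of the duplicate-detection logic. duplicate_enqueued? is a
# hypothetical helper, not Sidekiq API; `queued_jobs` stands in for the
# job hashes obtained by iterating Sidekiq::Queue.new('default').
def duplicate_enqueued?(queued_jobs, client_id, port_number)
  queued_jobs.any? { |job| job['args'] == [client_id, port_number] }
end

jobs = [
  { 'class' => 'PortScanJob', 'args' => [42, 8080] },
  { 'class' => 'PortScanJob', 'args' => [7, 443] },
]

duplicate_enqueued?(jobs, 42, 8080) # => true
duplicate_enqueued?(jobs, 42, 9090) # => false
```

In a client middleware, returning false/nil from call when this check matches would stop the duplicate job from being enqueued.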
This is what I have so far (I want to log the data for now).
In config/initializers/sidekiq.rb I have:
require 'sidekiq/api'

# Sidekiq default host redis://localhost:6379
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://localhost:6379/12' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://localhost:6379/12' }
  config.client_middleware do |chain|
    chain.add ArgumentLogging
  end
end
Now, ArgumentLogging should be the class that logs the arguments, but honestly I don't know where to put the middleware code. I added it in app/middleware/argument_logging.rb:
module Sidekiq
  class ArgumentLogging
    # @param [String, Class] worker_class the string or class of the worker class being enqueued
    # @param [Hash] job the full job payload
    #   @see https://github.com/mperham/sidekiq/wiki/Job-Format
    # @param [String] queue the name of the queue the job was pulled from
    # @param [ConnectionPool] redis_pool the redis pool
    # @return [Hash, FalseClass, nil] if false or nil is returned,
    #   the job is not to be enqueued into redis, otherwise the block's
    #   return value is returned
    # @yield the next middleware in the chain or the enqueuing of the job
    def call(worker_class, job, queue, redis_pool)
      # return false/nil to stop the job from going to redis
      Rails.logger.info("Reading and logging #{queue} with arguments: #{job.inspect}")
      yield
    end
  end
end
This is what I have so far, but I cannot see any line like "Reading and logging #{queue} with arguments: #{job.inspect}" in my development.log file.

Does anyone know how to use middleware and where I should put the code above? I have read https://github.com/mperham/sidekiq/wiki/Middleware as well as other Stack Overflow questions and blog posts, but I still can't understand why my code is not being triggered.
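Two things worth checking (assumptions drawn from the code shown, not a confirmed diagnosis): because the class is nested in module Sidekiq, its full name is Sidekiq::ArgumentLogging, while the initializer adds the bare constant ArgumentLogging; and jobs enqueued from inside a running Sidekiq process go through the server's client middleware chain, which the initializer never configures. A sketch of the initializer with both points addressed:

```ruby
# config/initializers/sidekiq.rb -- sketch; assumes the middleware class
# stays nested inside `module Sidekiq` in app/middleware/argument_logging.rb
require 'sidekiq/api'

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://localhost:6379/12' }
  config.client_middleware do |chain|
    chain.add Sidekiq::ArgumentLogging # fully qualified constant
  end
end

Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://localhost:6379/12' }
  # jobs enqueued from within a worker go through the server's client chain
  config.client_middleware do |chain|
    chain.add Sidekiq::ArgumentLogging
  end
end
```

If app/middleware is not in the autoload paths, the file may also need an explicit require at the top of the initializer.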
Thank you and I look forward to hearing from you.
Rob
Consider a site where Rails is used only for the API, with no server-side rendering.

With server-side rendering it's more or less clear: Capybara starts Puma, after which tests can request pages from Puma. But with no server-side rendering, there's no Puma to ask for pages. How do I do that?
Have a look at http://ruby-hyperloop.org. You can drive your client test suite from RSpec and easily integrate with Rails.
While server-side rendering may be very common these days, I decided to take an alternative approach.
Add the following gems to the Gemfile:
gem 'httparty', '~> 0.16.2'
gem 'childprocess', '~> 0.7.0'
Move the following lines from config/environments/production.rb to config/application.rb to make RAILS_LOG_TO_STDOUT available in the test environment.
if ENV['RAILS_LOG_TO_STDOUT'].present?
  config.logger = Logger.new(STDOUT)
end
Regarding webpack, make sure publicPath is set to http://localhost:7777/, and UglifyJsPlugin is not used in the test environment.
And add these two files:
test/application_system_test_case.rb:
# frozen_string_literal: true
require 'uri'
require 'test_helper'
require 'front-end-server'

FRONT_END = ENV.fetch('FRONT_END', 'separate_process')
FRONT_END_PORT = 7777
Capybara.server_port = 7778
Capybara.run_server = ENV.fetch('BACK_END', 'separate_process') == 'separate_thread'

require 'action_dispatch/system_test_case' # force registering and setting server
Capybara.register_server :rails_puma do |app, port, host|
  Rack::Handler::Puma.run(app, Port: port, Threads: "0:1",
                          Verbose: ENV.key?('BACK_END_LOG'))
end
Capybara.server = :rails_puma

DatabaseCleaner.strategy = :truncation

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]
  self.use_transactional_tests = false

  def setup
    DatabaseCleaner.start
  end

  def teardown
    DatabaseCleaner.clean
  end

  def uri(path)
    URI::HTTP.build(host: 'localhost', port: FRONT_END_PORT, path: path)
  end
end

unless ENV.key?('NO_WEBPACK')
  system(
    {'NODE_ENV' => 'test'},
    './node_modules/.bin/webpack', '--config', 'config/webpack/test.js', '--hide-modules') \
    or abort
end

if FRONT_END == 'separate_process'
  front_srv = ChildProcess.build(
    'bundle', 'exec', 'test/front-end-server.rb',
    '-f', FRONT_END_PORT.to_s,
    '-b', Capybara.server_port.to_s
  )
  if ENV.key?('FRONT_END_LOG')
    front_srv.io.inherit!
  end
  front_srv.start
  Minitest.after_run {
    front_srv.stop
  }
else
  Thread.new do
    FrontEndServer.new({
      Port: FRONT_END_PORT,
      back_end_port: Capybara.server_port,
      Logger: Rails.logger,
    }).start
  end
end

unless Capybara.run_server
  back_srv = ChildProcess.build(
    'bin/rails', 'server',
    '-P', 'tmp/pids/server-test.pid', # to not conflict with dev instance
    '-p', Capybara.server_port.to_s
  )
  back_srv.start
  # wait for the server to start
  begin
    socket = TCPSocket.new 'localhost', Capybara.server_port
  rescue Errno::ECONNREFUSED
    retry
  end
  socket.close
  Minitest.after_run {
    back_srv.stop
  }
end
test/front-end-server.rb:
#!/usr/bin/env ruby
require 'webrick'
require 'httparty'
require 'uri'
class FrontEndServer < WEBrick::HTTPServer
  class FallbackFileHandler < WEBrick::HTTPServlet::FileHandler
    def service(req, res)
      super
    rescue WEBrick::HTTPStatus::NotFound
      req.instance_variable_set('@path_info', '/index.html')
      super
    end
  end

  class ProxyHandler < WEBrick::HTTPServlet::AbstractServlet
    def do_GET(req, res)
      req.header.each do |k, v|
        @logger.debug("-> #{k}: #{v}")
      end
      @logger.debug("-> body: #{req.body}")
      uri2 = req.request_uri.dup
      uri2.port = @config[:back_end_port]
      res2 = HTTParty.send(req.request_method.downcase, uri2, {
        headers: Hash[req.header.map { |k, v| [k, v.join(', ')] }],
        body: req.body,
      })
      res.content_type = res2.headers['content-type']
      res.body = res2.body
      res2.headers.each do |k, v|
        @logger.debug("<- #{k}: #{v}")
      end
      if res.body
        body = res.body.length < 100 ? res.body : res.body[0, 97] + '...'
        @logger.debug("<- body: #{body}")
      end
    end

    alias do_POST do_GET
    alias do_PATCH do_GET
    alias do_PUT do_GET
    alias do_DELETE do_GET
    alias do_MOVE do_GET
    alias do_COPY do_GET
    alias do_HEAD do_GET
    alias do_OPTIONS do_GET
    alias do_MKCOL do_GET
  end

  def initialize(config={}, default=WEBrick::Config::HTTP)
    config = {AccessLog: config[:Logger] ? [
      [config[:Logger], WEBrick::AccessLog::COMMON_LOG_FORMAT],
    ] : [
      [$stderr, WEBrick::AccessLog::COMMON_LOG_FORMAT],
    ]}.update(config)
    super
    if ENV.key?('FRONT_END_LOG_LEVEL')
      logger.level = WEBrick::BasicLog.const_get(ENV['FRONT_END_LOG_LEVEL'])
    end
    mount('/', FallbackFileHandler, 'public')
    mount('/api', ProxyHandler)
    mount('/uploads', ProxyHandler)
  end
end
if __FILE__ == $0
  require 'optparse'

  options = {}
  OptionParser.new do |opt|
    opt.on('-f', '--front-end-port PORT', OptionParser::DecimalInteger) { |o|
      options[:front_end_port] = o
    }
    opt.on('-b', '--back-end-port PORT', OptionParser::DecimalInteger) { |o|
      options[:back_end_port] = o
    }
  end.parse!

  server = FrontEndServer.new({
    Port: options[:front_end_port],
    back_end_port: options[:back_end_port],
  })
  trap('INT') { server.shutdown }
  trap('TERM') { server.shutdown }
  server.start
end
Tested with rails-5.1.1, webpack-2.4.1.
To run the tests you can use the following commands:
$ xvfb-run TESTOPTS=-h bin/rails test:system
$ xvfb-run bin/rails test -h test/system/application_test.rb:6
$ xvfb-run TEST=test/system/application_test.rb TESTOPTS=-h bin/rake test
You can simplify running tests by adding package scripts:
"scripts": {
  "test": "xvfb-run bin/rails test:system",
  "test1": "xvfb-run bin/rails test"
}
Then:
$ yarn test
$ yarn test1 test/system/application_test.rb:6
Or so I'd like to say. Unfortunately, yarn has an issue where it prepends extra paths to the PATH variable, in particular /usr/bin, which leads to the system Ruby being executed, with all sorts of outcomes (e.g. Ruby not finding gems).
To work around it you can use the following script (referenced below as fix-path.sh):
#!/usr/bin/env bash
set -eu
# https://github.com/yarnpkg/yarn/issues/5935
s_path=$(printf "%s" "$PATH" | tr : \\n)
_IFS=$IFS
IFS=$'\n'
a_path=($s_path)
IFS=$_IFS
usr_bin=$(dirname -- "$(which node)")
n_usr_bin=$(egrep "^$usr_bin$" <(printf "%s" "$s_path") | wc -l)
r=()
for (( i = 0; i < ${#a_path[@]}; i++ )); do
  if [ "${a_path[$i]}" = "$usr_bin" ] && (( n_usr_bin > 1 )); then
    (( n_usr_bin-- ))
  else
    r+=("${a_path[$i]}")
  fi
done
PATH=$(
  for p in ${r[@]+"${r[@]}"}; do
    printf "%s\n" "$p"
  done | paste -sd:
)
"$@"
Then the package scripts are to be read as follows:
"scripts": {
  "test": "./fix-path.sh xvfb-run bin/rails test:system",
  "test1": "./fix-path.sh xvfb-run bin/rails test"
}
By default, Rails starts Puma in a separate thread to handle API requests while running tests. With this setup it runs in a separate process by default. That way you can drop a byebug line anywhere in your test, and the site in the browser remains functional (XHR requests don't get stuck). You can still make it run in a separate thread if you prefer by setting BACK_END=separate_thread.
Additionally, another process (or thread, depending on the value of the FRONT_END variable) will start to handle requests for static files (or proxy requests to the back end). For that webrick is used.
To see rails's output, run with RAILS_LOG_TO_STDOUT=1, or see log/test.log. To prevent rails from colorizing the log, add config.colorize_logging = false (which will strip colors in the console as well) to config/environments/test.rb, or use less -R log/test.log. puma's output can be seen by running with BACK_END_LOG=1.
To see webrick's output, run with FRONT_END_LOG=1 (separate process), RAILS_LOG_TO_STDOUT=1 (separate thread), or see log/test.log (separate thread). To make webrick produce more info, set FRONT_END_LOG_LEVEL to DEBUG.
Also, every time you run the tests, webpack recompiles the bundle. You can avoid that with NO_WEBPACK=1.
Finally, to see Selenium requests:
Selenium::WebDriver.logger.level = :debug # full logging
Selenium::WebDriver.logger.level = :warn # back to normal
Selenium::WebDriver.logger.output = 'selenium.log' # log to file
I have a Rails application that runs some background jobs via ActiveJob and Sidekiq. The sidekiq logs in both the terminal and the log file show the following:
2016-10-18T06:17:01.911Z 3252 TID-oukzs4q3k ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-97318b38b1391672d21feb93 INFO: start
Is there some way to show the class names of the jobs here similar to how logs work for a regular Sidekiq Worker?
Update:
Here is how a Sidekiq worker logs:
2016-10-18T11:05:39.690Z 13678 TID-or4o9w2o4 ClientJob JID-b3c71c9c63fe0c6d29fd2f21 INFO: start
Update 2:
My sidekiq version is 3.4.2
I'd like to replace ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper with ClientJob.
So I managed to do this by removing Sidekiq::Middleware::Server::Logging from the middleware configuration and adding a modified class that displays the arguments in the logs. The arguments themselves contain the job and action names as well.
For the latest version (currently 4.2.3), in sidekiq.rb:
require 'sidekiq'
require 'sidekiq/middleware/server/logging'

class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def log_context(worker, item)
    klass = item['wrapped'.freeze] || worker.class.to_s
    "#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}"
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.remove Sidekiq::Middleware::Server::Logging
    chain.add ParamsLogging
  end
end
For version 3.4.2, or similar, override the call method instead:
class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def call(worker, item, queue)
    klass = item['wrapped'.freeze] || worker.class.to_s
    Sidekiq::Logging.with_context("#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}") do
      begin
        start = Time.now
        logger.info { "start" }
        yield
        logger.info { "done: #{elapsed(start)} sec" }
      rescue Exception
        logger.info { "fail: #{elapsed(start)} sec" }
        raise
      end
    end
  end
end
You must be running some ancient version. Upgrade.
Sorry, looks like that's a Rails 5+ feature only. You'll need to upgrade Rails. https://github.com/rails/rails/commit/8d2b1406bc201d8705e931b6f043441930f2e8ac
I have created a very basic Elixir supervisor and worker, to test hot code reloading feature of Erlang VM. Here is the supervisor:
defmodule RelTest do
  use Application

  # See http://elixir-lang.org/docs/stable/elixir/Application.html
  # for more information on OTP Applications
  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    port = Application.get_env(:APP_NAME, :listen_port, 9000)
    {:ok, socket} = :gen_tcp.listen(port, [:binary, active: false, reuseaddr: true])

    # Define workers and child supervisors to be supervised
    children = [
      # Starts a worker by calling: RelTest.Worker.start_link(arg1, arg2, arg3)
      # worker(RelTest.Worker, [arg1, arg2, arg3]),
      worker(Task, [fn -> TestListener.start(socket) end])
    ]

    # See http://elixir-lang.org/docs/stable/elixir/Supervisor.html
    # for other strategies and supported options
    opts = [strategy: :one_for_one, name: RelTest.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
Basically, I'm starting a Task worker, which is:
defmodule TestListener do
  require Logger

  def start(socket) do
    {:ok, client} = :gen_tcp.accept(socket)
    Logger.info "A client connected"
    Task.async(fn -> loop(client) end)
    start(socket)
  end

  def loop(socket) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, _} ->
        say_hello(socket)
        Logger.info "Said hello to client ;)"
        loop(socket)
      {:error, _} ->
        Logger.info "Oops, client had error :("
        :gen_tcp.close(socket)
    end
  end

  def say_hello(socket) do
    :ok = :gen_tcp.send(socket, <<"Hey there!\n">>)
  end
end
This is version 0.1.0. So I run these:
MIX_ENV=prod mix compile
MIX_ENV=prod mix release

and I get a nice release. I run it with ./rel/rel_test/bin/rel_test console and everything works. Now I bump the code and version; here is version 0.1.1 of the listener:
defmodule TestListener do
  require Logger

  def start(socket) do
    {:ok, client} = :gen_tcp.accept(socket)
    Logger.info "A client connected"
    Task.async(fn -> loop(client) end)
    start(socket)
  end

  def loop(socket) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, _} ->
        say_hello(socket)
        Logger.info "Said hello to client ;)"
        loop(socket)
      {:error, _} ->
        Logger.info "Oops, client had error :("
        :gen_tcp.close(socket)
    end
  end

  def say_hello(socket) do
    :ok = :gen_tcp.send(socket, <<"Hey there, next version!\n">>)
  end
end
Now I run
MIX_ENV=prod mix compile
MIX_ENV=prod mix release
and the appup is created successfully. Then, to perform the hot upgrade:

./rel/rel_test/bin/rel_test upgrade "0.1.1"

The upgrade works, but it kills my listener afterwards. I tested with nc localhost 9000 (9000 is the listener's port), staying connected while running the upgrade command. The connection gets killed and I get a message in the console:
=SUPERVISOR REPORT==== 31-Aug-2016::23:40:09 ===
Supervisor: {local,'Elixir.RelTest.Supervisor'}
Context: child_terminated
Reason: killed
Offender: [{pid,<0.601.0>},
{id,'Elixir.Task'},
{mfargs,
{'Elixir.Task',start_link,
[#Fun<Elixir.RelTest.0.117418367>]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
So why does this happen? Is it something I'm missing, or is it expected behavior? Is this not a use case for hot code reloading?
I have read LYSE, where the author says the running code should keep running; only external calls made after the upgrade are served by the new version.
Then why kill the worker?