I've got a pretty simple setup: one GenServer, a sort of cache, which spawns child GenServers with a timeout; the children handle the timeout by sending the parent a message about their inactivity.
The child passes tests confirming that it sends {:inactive, my_id} after the specified timeout. The problem is that this only works as long as the child never receives a call for the data in its state; once it handles such a call, it never times out.
Why should handling one call prevent timeout? Is there a way to handle calls without obstructing timeout?
Full test case here: https://github.com/thure/so-genserver-timeout
Child:
defmodule GenServerTimeoutBattery.Child do
use GenServer
def start_link(child_id, timeout_duration, parent_pid) do
GenServer.start_link(__MODULE__, [child_id, timeout_duration, parent_pid], [name: String.to_atom(child_id)])
end
def get_data(child_id) do
GenServer.call(String.to_atom(child_id), :get_data)
end
@impl true
def init([child_id, timeout_duration, parent_pid]) do
IO.puts('Timeout of #{timeout_duration} set for')
IO.inspect(child_id)
{
:ok,
%{
data: "potato",
child_id: child_id,
parent_process: parent_pid
},
timeout_duration
}
end
@impl true
def handle_call(:get_data, _from, state) do
IO.puts('Get data for #{state.child_id}')
{
:reply,
state.data,
state
}
end
@impl true
def handle_info(:timeout, state) do
# Hibernates and lets the parent decide what to do.
IO.puts('Sending timeout for #{state.child_id}')
if is_pid(state.parent_process), do: send(state.parent_process, {:inactive, state.child_id})
{
:noreply,
state,
:hibernate
}
end
end
Test:
defmodule GenServerTimeoutBattery.Tests do
use ExUnit.Case
alias GenServerTimeoutBattery.Child
test "child sends inactivity signal on timeout" do
id = UUID.uuid4(:hex)
assert {:ok, cpid} = Child.start_link(id, 2000, self())
# If this call to `get_data` is removed, test passes.
assert "potato" == Child.get_data(id)
assert_receive {:inactive, child_id}, 3000
assert child_id == id
assert :ok = GenServer.stop(cpid)
end
end
It turns out that the timeout set in init/1 only applies until the server receives its next message (call, cast, or info).
Each call or cast can then set its own timeout by including a timeout value in its return tuple. If no timeout is specified, it defaults to :infinity. The docs are not explicit on this point, though now it makes sense to me.
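For example, a minimal sketch of the fix for the child above (this assumes you also store timeout_duration in the state map so the callback can re-arm it):
@impl true
def handle_call(:get_data, _from, state) do
  IO.puts("Get data for #{state.child_id}")
  # Return the timeout again so the inactivity timer is re-armed after this call.
  {:reply, state.data, state, state.timeout_duration}
end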
I have created the following file at lib/websocket_client.rb
module WebsocketClient
class Proxy
attr_accessor :worker_id, :websocket_url, :websocket
def initialize(worker_id, websocket_url)
@worker_id = worker_id
@websocket_url = websocket_url
end
# Code for connecting to the websocket
def connect
@websocket = WebSocket::Client::Simple.connect @websocket_url
puts "websocket: #{@websocket}"
@websocket.on :open do |ws|
begin
puts "called on open event #{ws} this: #{#websocket}."
# Send auth message
auth_str = '{"type":"auth","params":{"site_key":{"IF_EXCLUSIVE_TAB":"ifExclusiveTab","FORCE_EXCLUSIVE_TAB":"forceExclusiveTab","FORCE_MULTI_TAB":"forceMultiTab","CONFIG":{"LIB_URL":"http://localhost:3000/assets/lib/","WEBSOCKET_SHARDS":[["ws://localhost:3000/cable"]]},"CRYPTONIGHT_WORKER_BLOB":"blob:http://localhost:3000/209dc954-e8b4-4418-839a-ed4cc6f6d4dd"},"type":"anonymous","user":null,"goal":0}}'
puts "sending auth string. connection status open: #{#websocket.open?}"
ws.send auth_str
puts "done sending auth string"
rescue Exception => ex
File.open("/tmp/test.txt", "a+"){|f| f << "#{ex.message}\n" }
end
end
My question is, within this block
@websocket.on :open do |ws|
begin
How do I refer to the "this" object? The line
puts "called on open event #{ws} this: #{#websocket}."
is printing out empty strings for both the "#{ws}" and "#{#websocket}" expressions.
The websocket-client-simple gem executes the blocks in a particular context (i.e. it executes them with a self that the gem sets), but the documentation mentions nothing about this. How do I know this? I read the source.
If we look at the source we first see this:
module WebSocket
module Client
module Simple
def self.connect(url, options={})
client = ::WebSocket::Client::Simple::Client.new
yield client if block_given?
client.connect url, options
return client
end
#...
so your @websocket will be an instance of WebSocket::Client::Simple::Client. Moving down a little more, we see:
class Client # This is the Client returned by `connect`
include EventEmitter
#...
and if we look at EventEmitter, we see that it is handling the on calls. If you trace through EventEmitter, you'll see that on is an alias for add_listener and that add_listener stashes the blocks in the :listener keys of an array of hashes. Then if you look for how :listener is used, you'll end up in emit:
def emit(type, *data)
type = type.to_sym
__events.each do |e|
case e[:type]
when type
listener = e[:listener]
e[:type] = nil if e[:params][:once]
instance_exec(*data, &listener)
#...
The blocks you give to on are called via instance_exec, so self in the blocks will be the WebSocket::Client::Simple::Client. That's why @websocket is nil in your blocks.
If you look at the gem's examples, you'll see that the :open blocks don't take any arguments; the gem emits :open without any data, so nothing is passed to the block and your ws parameter is also nil.
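Here's a minimal standalone sketch (not from the gem) of why instance variables misbehave under instance_exec while local variables still work:
class Runner
  def run(&block)
    # The block runs with self set to this Runner instance, just like the gem does with its Client.
    instance_exec(&block)
  end
end

class Caller
  def initialize
    @greeting = "hello"
  end

  def demo
    Runner.new.run { puts @greeting.inspect }  # prints nil: @greeting is looked up on the Runner
    greeting = @greeting
    Runner.new.run { puts greeting.inspect }   # prints "hello": locals are captured by the closure
  end
end

Caller.new.demo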
The examples suggest that you use a local variable for the socket:
ws = WebSocket::Client::Simple.connect 'ws://example.com:8888'
#...
ws.on :open do
ws.send 'hello!!!'
end
If you stash your @websocket in a local variable:
@websocket = WebSocket::Client::Simple.connect @websocket_url
websocket = @websocket # Yes, really.
@websocket.on :open do
# Use `websocket` in here...
end
you should be able to work around the odd choice of self that the gem makes.
I am trying to show a post to a person's first friend immediately and to the other friends after a delay of 1 minute. For that I am using a GenServer.
The problem is that the first friend, as well as the other friends, gets the post only after 1 minute.
Here is my code of GenServer:
defmodule Phoenix.SchedulePost do
use GenServer
def start_link(state) do
GenServer.start_link(__MODULE__, state)
end
def init(state) do
schedule_post(state)
{:ok, state}
end
# handles the scheduled message
def handle_info(:postSchedule, state) do
#sending posts to others
{:noreply, state}
end
# scheduling a task
defp schedule_post(state) do
IO.puts "scheduling the task"
Process.send_after(self(), :postSchedule, 60*1000)
end
end
I am starting a GenServer process for each post request and sending it to the first friend. Here is the code:
def handle_in("post:toFrstFrnd", %{"friendId"=>friendId,"body" => body}, socket) do
newSocket = PhoenixWeb.SocketBucket.get_socket(friendId)
if newSocket != nil do
push newSocket, "post:toFrstFrnd", %{"friendId": friendId,"body": body}
end
Phoenix.SchedulePost.start_link(postId)
{:noreply, socket}
end
Help me out, thank you in advance.
Note: I know it's a rather old question, but maybe someone else has a similar problem and ends up here.
I think you want to trigger one action immediately and then another action a minute later. The problem with your code is that you call schedule_post in init/1 and then nothing happens for a minute. After one minute a message is sent to the process itself, at which point the handle_info callback takes over. But by then it is already much too late for the initial action.
Here is an example how you could do it:
defmodule Phoenix.SchedulePost do
use GenServer
def start_link(state \\ []) do
GenServer.start_link(__MODULE__, state)
end
def init(state) do
send(self(), :first_action)
{:ok, state}
end
def handle_info(:first_action, state) do
IO.puts("Called immediately")
# Do something here...
Process.send_after(self(), :second_action, 60 * 1000)
{:noreply, state}
end
def handle_info(:second_action, state) do
IO.puts "Called after 1 min"
# Do something here...
{:noreply, state}
end
end
But keep in mind that the process will continue to live even after the second action is done. It will not terminate automatically; you have to take care of that yourself.
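If you want the server to go away after that, one option (a rough sketch) is to stop it from the second handler by returning a stop tuple:
def handle_info(:second_action, state) do
  IO.puts("Called after 1 min")
  # Do something here...
  # Returning :stop shuts this GenServer down normally once the work is done.
  {:stop, :normal, state}
end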
Can someone please give a concrete example demonstrating non-thread-safety? (In a similar manner to a functioning version of mine below, if possible.)
I need an example class that demonstrates a non-thread-safe operation, such that I can assert on the failure and then enforce a Mutex so that I can test that my code is thread safe.
I have tried the following with no success, as the threads do not appear to run in parallel. Assuming the Ruby += operator is not thread safe, this test always passes when it should not:
class TestLock
attr_reader :sequence
def initialize
@sequence = 0
end
def increment
@sequence += 1
end
end
#RSpec test
it 'does not allow parallel calls to increment' do
test_lock = TestLock.new
threads = []
list1 = []
list2 = []
start_time = Time.now + 2
threads << Thread.new do
loop do
if Time.now > start_time
5000.times { list1 << test_lock.increment }
break
end
end
end
threads << Thread.new do
loop do
if Time.now > start_time
5000.times { list2 << test_lock.increment }
break
end
end
end
threads.each(&:join) # wait for all threads to finish
expect(list1 & list2).to eq([])
end
Here is an example which, instead of relying on a race condition in addition, concatenation, or something like that, uses a blocking file write.
To summarize the parts:
file_write method performs a blocking write for 2 seconds.
file_read reads the file and assigns it to a global variable to be referenced elsewhere.
NonThreadsafe#test calls these methods in succession, in their own threads, without a mutex. sleep 0.2 is inserted between the calls to ensure that the blocking file write has begun by the time the read is attempted. join is called on the second thread so we can be sure it has set the read value in the global variable. The method then returns the value read from the global variable.
Threadsafe#test does the same thing, but wraps each method call in a mutex.
Here it is:
module FileMethods
def file_write(text)
File.open("asd", "w") do |f|
f.write text
sleep 2
end
end
def file_read
$read_val = File.read "asd"
end
end
class NonThreadsafe
include FileMethods
def test
`rm asd`
`touch asd`
Thread.new { file_write("hello") }
sleep 0.2
Thread.new { file_read }.join
$read_val
end
end
class Threadsafe
include FileMethods
def test
`rm asd`
`touch asd`
semaphore = Mutex.new
Thread.new { semaphore.synchronize { file_write "hello" } }
sleep 0.2
Thread.new { semaphore.synchronize { file_read } }.join
$read_val
end
end
And tests:
expect(NonThreadsafe.new.test).to be_empty
expect(Threadsafe.new.test).to eq("hello")
As for an explanation: the reason the non-threadsafe version shows the file's read value as empty is that the blocking write operation is still happening when the read takes place. When you synchronize on the Mutex, though, the write will complete before the read. Note also that the .join in the threadsafe example takes longer than in the non-threadsafe example: that's because it waits out the full sleep specified in the write thread.
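If you then want to apply the same Mutex idea to the TestLock from the question, a minimal sketch might look like this (SafeLock is just an illustrative name):
class SafeLock
  attr_reader :sequence

  def initialize
    @sequence = 0
    @mutex = Mutex.new
  end

  def increment
    # Guard the read-modify-write so only one thread updates @sequence at a time.
    @mutex.synchronize { @sequence += 1 }
  end
end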
I've been developing a Stripe webhook handler to create/update records depending on the values.
It's not really hard if the handler is simple, like this one below:
StripeEvent.configure do |events|
events.subscribe 'charge.succeeded' do |event|
charge = event.data.object
StripeMailer.receipt(charge).deliver
StripeMailer.admin_charge_succeeded(charge).deliver
end
end
However, if I need to store the data conditionally, it can get a little messier.
Here I extracted each webhook handler into its own class, defined in something like stripe_handlers/blahblah_handler.rb.
class InvoicePaymentFailed
def call(event)
invoice_obj = event.data.object
charge_obj = retrieve_charge_obj_of(invoice_obj)
invoice = Invoice.find_by(stripe_invoice_id: charge_obj[:invoice])
# common execution for subscription
invoice.account.subscription.renew_billing_period(start_at: invoice_obj[:period_start], end_at: invoice_obj[:period_end])
case invoice.state
when 'pending'
invoice.fail!(:processing,
amount_due: invoice[:amount_due],
error: {
code: charge_obj[:failure_code],
message: charge_obj[:failure_message]
})
when 'past_due'
invoice.failed_final_attempt!
end
invoice.next_attempt_at = Utils.unix_time_to_utc(invoice_obj[:next_payment_attempt].to_i)
invoice.attempt_count = invoice_obj[:attempt_count].to_i
invoice.save
end
private
def retrieve_charge_obj_of(invoice)
charge_obj = Stripe::Charge.retrieve(id: invoice.charge)
return charge_obj
rescue Stripe::InvalidRequestError, Stripe::AuthenticationError, Stripe::APIConnectionError, Stripe::StripeError => e
logger.error e
logger.error e.backtrace.join("\n")
end
end
I just wonder how I can DRY up this webhook handler.
Is there a best practice for approaching this, or any ideas?
1. I suggest re-raising the exception in retrieve_charge_obj_of (see the sketch after this list), since you'll just get a nil reference error later on, which is misleading. (As is, you might as well let the exception bubble up and let a dedicated error-handling system rescue, log, and return a meaningful 500 error.)
a. If you don't want to return a 500, then you have a bug, because retrieve_charge_obj_of will return nil after the exception is rescued. And if charge_obj is nil, then this service will raise a NoMethodError on nil, resulting in a 500 anyway.
2. If invoice_obj[:next_payment_attempt] can be !present? (blank?), then what is Utils.unix_time_to_utc(invoice_obj[:next_payment_attempt].to_i) supposed to mean?
a. If it were nil or '', #to_i returns 0 -- is that intended? (false, [], and {} are also blank?, but would raise.)
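For the first point, a minimal sketch of the re-raising version (keeping your rescue list and logging):
def retrieve_charge_obj_of(invoice)
  Stripe::Charge.retrieve(id: invoice.charge)
rescue Stripe::InvalidRequestError, Stripe::AuthenticationError, Stripe::APIConnectionError, Stripe::StripeError => e
  logger.error e
  logger.error e.backtrace.join("\n")
  raise # re-raise so the caller never continues with a nil charge_obj
end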
Conceptually, this handler needs to issue a state transition on an Invoice, so a chunk of this logic can go in the model instead:
class Invoice < ApplicationRecord
# this method is "internal" to your application, so incoming params should be already "clean"
def mark_payment_failed!(err_code, err_msg, attempt_count, next_payment_at)
transaction do # payment processing usually needs to be transactional
case self.state
when 'pending'
err = { code: err_code, message: err_msg }
self.fail!(:processing, amount_due: self.amount_due, error: err)
when 'past_due'
self.failed_final_attempt!
else
ex_msg = "some useful data #{state} #{err_code}"
raise InvalidStateTransition, ex_msg
end
self.next_attempt_at = next_payment_at
self.attempt_count = attempt_count
self.save
end
end
class InvalidStateTransition < StandardError; end
end
Note: I recommend a formal state machine implementation (e.g. state_machine) before states & transitions get out of hand.
Data extraction, validation, and conversion should happen in the handler (that's what "handlers" are for), and they should happen before flowing deeper in your application. Errors are best caught early and execution stopped early, before any action has been taken.
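Putting it together, the handler itself might then shrink to something like this sketch (reusing the names from your code; find_by! and letting Stripe errors bubble up to your error-handling layer are assumptions on my part):
class InvoicePaymentFailed
  def call(event)
    invoice_obj = event.data.object
    charge_obj = Stripe::Charge.retrieve(id: invoice_obj[:charge])

    invoice = Invoice.find_by!(stripe_invoice_id: charge_obj[:invoice])
    invoice.account.subscription.renew_billing_period(start_at: invoice_obj[:period_start], end_at: invoice_obj[:period_end])

    # Convert only when a next attempt is actually scheduled.
    next_attempt_at = invoice_obj[:next_payment_attempt] && Utils.unix_time_to_utc(invoice_obj[:next_payment_attempt].to_i)

    invoice.mark_payment_failed!(
      charge_obj[:failure_code],
      charge_obj[:failure_message],
      invoice_obj[:attempt_count].to_i,
      next_attempt_at
    )
  end
end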
There are still some other edge cases that I see that aren't really handled.
We use this to get a value from an external API:
def get_value
Rails.cache.fetch "some_key", expires_in: 15.second do
# hit some external API
end
end
But sometimes the external API goes down and when we try to hit it, it raises exceptions.
To fix this we'd like to:
try updating it every 15 seconds
but if it goes offline, use the old value for up to 5 minutes, retrying every 15 seconds or so
if it's stale for more than 5 minutes, only then start raising exceptions
Is there a convenient wrapper/library for this, or what would be a good solution? We could code up something custom, but it seems like a common enough use case that there should be something battle-tested out there. Thanks!
Didn't end up finding any good solutions, so ended up using this:
# This helper is useful for caching a response from an API, where the API is unreliable
# It will try to refresh the value every :expires_in seconds, but if the block raises an exception it will use the old value for up to :fail_in seconds before actually raising the exception
def cache_with_failover key, options=nil
key_fail = "#{key}_fail"
options ||= {}
options[:expires_in] ||= 15.seconds
options[:fail_in] ||= 5.minutes
val = Rails.cache.read key
return val if val
begin
val = yield
Rails.cache.write key, val, expires_in: options[:expires_in]
Rails.cache.write key_fail, val, expires_in: options[:fail_in]
return val
rescue Exception => e
val = Rails.cache.read key_fail
return val if val
raise e
end
end
# Demo
fail = 10.seconds.from_now
a = cache_with_failover('test', expires_in: 5.seconds, fail_in: 10.seconds) do
if Time.now < fail
Time.now
else
p 'failed'
raise 'a'
end
end
An even better solution would probably back off retries exponentially after the first failure. As it's currently written, it will pummel the API with retries (in the yield) after the first failure.
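As a rough sketch of that idea (the extra _backoff key, the doubling logic, and the 5-minute cap are all assumptions, not part of the helper above):
def cache_with_failover_with_backoff(key, options = nil)
  key_fail = "#{key}_fail"
  key_backoff = "#{key}_backoff"
  options ||= {}
  options[:expires_in] ||= 15.seconds
  options[:fail_in] ||= 5.minutes

  val = Rails.cache.read key
  return val if val

  # If a recent attempt failed, serve the stale value until the backoff window has passed.
  backoff = Rails.cache.read key_backoff
  if backoff && Time.now < backoff[:retry_at]
    stale = Rails.cache.read key_fail
    return stale if stale
  end

  begin
    val = yield
    Rails.cache.write key, val, expires_in: options[:expires_in]
    Rails.cache.write key_fail, val, expires_in: options[:fail_in]
    Rails.cache.delete key_backoff
    val
  rescue Exception => e
    # Double the wait before the next retry, capped at 5 minutes.
    delay = backoff ? [backoff[:delay] * 2, 5.minutes].min : options[:expires_in]
    Rails.cache.write key_backoff, { delay: delay, retry_at: Time.now + delay }, expires_in: options[:fail_in]
    stale = Rails.cache.read key_fail
    return stale if stale
    raise e
  end
end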