Timing mistake in Elixir GenServer tutorial?

I'm going through the Elixir "Getting Started" tutorial, where the following code snippet is used:
test "removes buckets on exit", %{registry: registry} do
KV.Registry.create(registry, "shopping")
{:ok, bucket} = KV.Registry.lookup(registry, "shopping")
Agent.stop(bucket)
assert KV.Registry.lookup(registry, "shopping") == :error
end
Now, create/2 uses the cast operation whereas lookup uses call. That means an asynchronous cast is issued and then, immediately after it, a synchronous call that assumes the asynchronous action has already been performed. Could timing issues make the test fail even though the code itself is correct, or is there some aspect of cast and call that I am missing?

Since a GenServer processes all messages sequentially, and messages sent from one process to another are delivered in the order they were sent, the lookup call will not be handled until the earlier cast has been processed. So there are no timing issues here: by the time lookup returns, the create cast is guaranteed to have completed.
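A small, self-contained illustration of that ordering guarantee (the Counter module is made up for this example, not part of the tutorial):

defmodule Counter do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, 0)
  def increment(pid), do: GenServer.cast(pid, :increment) # asynchronous
  def value(pid), do: GenServer.call(pid, :value)         # synchronous

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast(:increment, n), do: {:noreply, n + 1}

  @impl true
  def handle_call(:value, _from, n), do: {:reply, n, n}
end

{:ok, pid} = Counter.start_link([])
Counter.increment(pid)          # the cast is queued first...
IO.inspect(Counter.value(pid))  # ...so this reliably prints 1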

Related

Catch backtrace information without exit

I'm trying to run a series of tests and collect some metadata on each test. If there is an error during one of the tests, I would like to save the backtrace information but not exit the script. For example:
-- Example program
for _, v in ipairs(tests) do
  -- check some results of function calls
  if v == nil then
    -- error("function X failed") -- no exit
    -- save backtrace to variable/file
    -- continue with program
  end
end
I'm not aware of any way in Lua to tell the error() function not to stop the script after creating the backtrace. Any thoughts on how to do this?
debug.traceback([thread,] [message [, level]]) (see the Lua reference manual) is what you're looking for. You can write a function that (1) gets a traceback, (2) opens a file, (3) writes the traceback to the file, and (4) closes the file.
In that case you'd have to use a level of 2, since 0 would be the debug.traceback function itself, 1 would be the function calling it (i.e. your function), and 2 the function calling that one. message could be your error code. Then you just override the error function locally in your script and you're done; calling error will just log the error and not exit the program.
EDIT: You can also override error globally, if you want, but that might lead to unexpected results if something goes terribly wrong somewhere else (in code that you didn't write yourself) and the program continues nonetheless.
You'd be better off with a construct like this:
if os.getenv 'DEBUG' then
  my_error = function()
    -- what I explained above
  end
else
  my_error = error
end
and just use my_error in all the places where you'd usually use error.
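For completeness, a minimal sketch of what that logging function might look like (the log file name is an assumption; the message is passed straight through):

local function my_error(message)
  -- level 2 starts the traceback at my_error's caller, skipping
  -- my_error itself, as explained above
  local tb = debug.traceback(message, 2)
  local f = assert(io.open('errors.log', 'a'))
  f:write(tb, '\n')
  f:close()
end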

How to write a spec for trap handling in RSpec?

Say I have a class which traps SIGTERM, and I want to write a spec to verify that that specific code is run when SIGTERM is received. What is the proper way of doing it?
I've followed the answer for this topic: How to test signal handling in RSpec, particularly handling of SIGTERM?, but RSpec is terminated when Process.kill happens.
I've also tried it like this:
raise SignalException.new('TERM')
But it doesn't seem to do anything (the trap is not triggered). Finally, I've tried using 'allow' to substitute a method which is called during the spec, making it raise the signal or call Process.kill, like this:
allow(<Class>).to receive(<method>).and_raise(SignalException.new('TERM'))
allow(<Class>).to receive(<method>).and_return(Process.kill 'TERM',0)
When raising the signal it also doesn't seem to do anything, and calling Process.kill simply ends RSpec without a stack trace (just the word 'Terminated').
The trap code is like this:
trap('SIGTERM') {
  Rails.logger.error('term')
  @received_sigterm = true
}
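I haven't verified this against the failing setup above, but here is a minimal, self-contained sketch of the approach of signalling your own pid (all class and variable names are made up). With the trap installed, sending TERM to Process.pid should run the handler instead of killing the process; pid 0, as used above, signals the whole process group, which would take the RSpec runner down with it:

class Worker
  def initialize
    trap('SIGTERM') do
      @received_sigterm = true # the code the spec should verify
    end
  end

  def received_sigterm?
    !!@received_sigterm
  end
end

RSpec.describe Worker do
  it 'runs the SIGTERM handler' do
    worker = Worker.new
    Process.kill('TERM', Process.pid) # signal ourselves, not pid 0
    sleep 0.1                         # give the handler a moment to run
    expect(worker.received_sigterm?).to be(true)
  end
end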

#<Mongoid::Locker::LockError: could not get lock> in mongoid-locker rails

I applied the "mongoid-locker" gem to my app, but during concurrent requests it failed with the error "LockError: could not get lock". Can anyone help me out?
By default, with_lock does not wait for other locks to complete, so if you actually have concurrent access, LockError will be raised unless you tell it to wait.
Try it like so:
object = Object.first
object.with_lock wait: true do
  object.foo = "bar"
  object.save!
end
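To make the failure mode concrete, a rough sketch of the race being described (the Order model and the timings are made up; only the wait: true option comes from this answer):

order = Order.first

locker = Thread.new do
  order.with_lock do
    sleep 1 # hold the lock for a while, like a slow concurrent request
  end
end

sleep 0.1 # let the other thread grab the lock first

# Without wait: true this second with_lock would raise
# Mongoid::Locker::LockError, since the document is already locked.
Order.first.with_lock wait: true do
  # runs once the first thread releases the lock
end

locker.join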

How to retry a rake task if you get a Bad Gateway error response from a web source

I am trying to run a rake task to get all the data with a specific tag from Instagram, and then input some of the data into my server.
The task runs just fine, except sometimes I'll get an error response. It's sort of random, so I think it just happens sometimes, and since it's a fairly long running task, it'll happen eventually.
This is the error on my console:
Instagram::BadGateway: GET https://api.instagram.com/v1/tags/xxx/media/recent.json?access_token=xxxxx&max_id=996890856542960826: 502: The server returned an invalid or incomplete response.
When this happens, I don't know what else to do except run the task again starting from that max_id. However, it would be nice if I could get the whole thing to automate itself, and retry itself from that point when it gets that error.
My task looks something like this:
task :download => :environment do
  igs = Instagram.tag_recent_media("xxx")
  begin
    sleep 0.2
    igs.each do |ig|
      dl = Instadownload.new
      dl.instagram_url = ig.link
      dl.image_url = ig.images.standard_resolution.url
      dl.caption = ig.caption.text if ig.caption
      dl.taken_at = Time.at(ig.created_time.to_i)
      dl.save!
    end
    if igs.pagination.next_max_id?
      igs = Instagram.tag_recent_media("xxx", max_id: igs.pagination.next_max_id)
      moreigs = true
    else
      moreigs = false
    end
  end while moreigs
end
Chad Pytel and Tammer Saleh call this the "fire and forget" antipattern in their Rails Antipatterns book:
Assuming that the request always succeeds or simply not caring if it fails may be valid in rare circumstances, but in most cases it's insufficient. On the other hand, rescuing all the exceptions would be a bad practice as well. The proper solution would be to understand the actual exceptions that will be raised by the external service and rescue those only.
So, what you should do is wrap your code block in a begin/rescue block with the appropriate set of errors raised by Instagram (the list of those errors can be found here). I'm not sure which particular line of your code snippet ends up with the 502 code, so just to give you an idea of what it could look like:
begin
  dl = Instadownload.new
  dl.instagram_url = ig.link
  dl.image_url = ig.images.standard_resolution.url
  dl.caption = ig.caption.text if ig.caption
  dl.taken_at = Time.at(ig.created_time.to_i)
  dl.save!
rescue Instagram::BadGateway => e # the list of rescued errors can be expanded
  retry # restart from the beginning of the begin block
end
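One caveat worth noting: a bare retry will loop forever if the endpoint keeps returning 502s. A minimal sketch of capping the retries with a simple backoff (the attempt limit and sleep times are arbitrary choices, not part of the original answer):

attempts = 0
begin
  dl = Instadownload.new
  dl.instagram_url = ig.link
  dl.image_url = ig.images.standard_resolution.url
  dl.caption = ig.caption.text if ig.caption
  dl.taken_at = Time.at(ig.created_time.to_i)
  dl.save!
rescue Instagram::BadGateway
  attempts += 1
  raise if attempts >= 5 # give up and surface the error after five tries
  sleep 2**attempts      # back off a little longer on each failure
  retry
end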

How can we tell delayed_job when a delayed task fails so it will auto-retry?

Our app is hosted on Heroku and we use delayed_job when sending info to a remote system (via a GET to a URL with some URL params).
The remote system usually returns a success code, but if it's really busy it returns a try-again code.
Suppose our method is:
def send_info
  the_url = "http://mydomain.com/dosomething?arg=#{self.someval}"
  the_result = open(the_url).read
  successflag = get_success_flag_from(the_result)
end
and so somewhere in our code we do
@widget.delay.send_info
and that all works fine, except it does not automatically handle the case where the remote system said to try again later.
Is there any way for the send_info method (which is what delayed_job will execute) to "tell" delayed_job "retry me again"? Do we need to throw some custom exception or something?
Raising any kind of exception ought to cause delayed_job to requeue the job (subject to its maximum number of attempts); if you don't especially need a custom exception, you can just raise a RuntimeError.
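A minimal sketch of how send_info could do that (the TryAgainError name is made up; the success check reuses the question's get_success_flag_from):

require 'open-uri'

class TryAgainError < RuntimeError; end

def send_info
  the_url = "http://mydomain.com/dosomething?arg=#{self.someval}"
  the_result = URI.open(the_url).read # Kernel#open, as in the question, also works on older Rubies
  # Raising marks this attempt as failed, so delayed_job reschedules
  # the job until it succeeds or hits its max attempts limit.
  raise TryAgainError, 'remote busy, try again later' unless get_success_flag_from(the_result)
end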
