I am attempting to make it so that people can define arbitrary workflows in classes. The code for this is probably too long for Stack Overflow and so I've got a gist for it.
If you run the code, the first couple of tests pass, but when it attempts to transition to the payment state, it checks for the payment_required? method on the completely wrong object. I want it to check for it on the current Order instance, but instead it (seemingly) looks for that method on the state machine's anonymous class.
How do I get it to call the method correctly on the Order instance?
The problem is at the definition of the anonymous state machine, around line 42:
order.class.transitions.each { |attrs| transition(attrs) }
This means the transition guards are evaluated in the anonymous state machine's context rather than in the context of the Order class.
One solution would be to translate the transition guards. Replace the above line with this to pass your test suite:
order.class.transitions.each do |attrs|
  if attrs[:if].is_a? Symbol
    if_method = attrs[:if]
    attrs[:if] = lambda { order.send(if_method) }
  end
  transition(attrs)
end
For a full solution you will need to support all of the types of transition guards. I would recommend looking at StateMachine::EvalHelpers for the complete set.
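For illustration, here is a rough sketch of translating the other common guard shapes with the same closure-over-order trick; the set of shapes handled here (symbols, strings, procs) is my assumption, so check EvalHelpers for the authoritative list:

order.class.transitions.each do |attrs|
  case attrs[:if]
  when Symbol
    if_method = attrs[:if]
    attrs[:if] = lambda { order.send(if_method) }        # :payment_required?
  when String
    if_code = attrs[:if]
    attrs[:if] = lambda { order.instance_eval(if_code) } # "payment_required?"
  when Proc
    if_proc = attrs[:if]
    attrs[:if] = lambda { if_proc.call(order) }          # ->(o) { o.payment_required? }
  end
  transition(attrs)
end

You would want the same treatment for :unless guards.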
Correction:
As others have mentioned, you will also need to fix your test suite:
go_to :payment, :if => :payment_required? # Line 107
order.stub :payment_required? => true # Line 142
I've fixed this up by moving the state machine definition to the Spree::Order class. You can see the work in this pull request.
I'm unable to replicate this locally, but for some reason I am getting the following error when running tests in CircleCI:
<Double Mylogger> was originally created in one example but has leaked into another example and can no longer be used. rspec-mocks' doubles are designed to only last for one example, and you need to create a new one in each example you wish to use it for.
Here is a simplified version of my code:
# frozen_string_literal: true

describe 'my_rake_task' do
  let(:my_log) { Mylogger.new }

  subject { Rake::Task['my_rake_task'].execute }

  describe 'one' do
    context 'logs' do
      let(:logs) do
        [
          ['My message one'],
          ['My message two'],
        ]
      end

      after { subject }

      it 'correctly' do
        logs.each { |log| expect(my_log).to receive(:info).with(*log) }
      end
    end
  end

  describe 'two' do
    context 'logs' do
      let(:logs) do
        [
          ['My message three'],
          ['My message four'],
        ]
      end

      after { subject }

      it 'correctly' do
        logs.each { |log| expect(my_log).to receive(:info).with(*log) }
      end
    end
  end
end
Why is it saying Mylogger is a double? Why would it be leaking?
The error says that Mylogger is a double because it is one: when you call expect(my_log).to receive or allow(my_log).to receive, you transform the instance into a partial double.
As for why my_log is leaking: it's impossible to tell from the code that you posted. In order to cause a leak, some code either in your rake task or in the spec itself would need to be injecting my_log into some global state, like a class variable.
Most commonly this sort of thing is caused by storing something in a class variable. You will have to figure out where that happens, whether in your code or in a gem, and how to clear it or avoid using a class variable at all.
Where a class variable or an external system causes inter-test issues, best practice is to clear that state between tests where possible. For example, ActionMailer::Base.deliveries and Rails.cache are common things that should be cleared. You should also clear Faker::UniqueGenerator or RequestStore if you're using those gems, and there are surely more.
Once you have found the class variable, if it's in your code and you have determined a class variable is the correct approach, you can add a reset or clear class method to the class and call it in a before(:each) RSpec block in your spec_helper.rb or rails_helper.rb, as sketched below.
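For illustration only, a minimal sketch of that pattern; MyRakeHelper, its class variable, and reset! are hypothetical names, not taken from the posted code:

class MyRakeHelper
  def self.logger
    @@logger ||= Mylogger.new # cached in a class variable, survives across examples
  end

  # Reset hook so specs can clear the cached (possibly partial-double) instance.
  def self.reset!
    @@logger = nil
  end
end

# spec_helper.rb / rails_helper.rb
RSpec.configure do |config|
  config.before(:each) { MyRakeHelper.reset! }
end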
Note that while many things clear themselves automatically between tests (RSpec mocks, for example), which can make the whole process look automatic, in practice it is often anything but.
Tests only remain independent if you (a) mostly use objects created within the tests themselves, and store data only there, and (b) ensure anything else is cleared between tests, either by your own explicit code or by the responsible gem.
This can be especially annoying when dealing with external third-party systems, which rarely provide an API to clear a staging environment, and hence sometimes require considerable care even when using the vcr gem.
I am new to Ruby and to Rails, and am trying to understand fully what I'm reading.
I am looking at some of the Rails source code, in this case action_controller/metal/instrumentation.rb.
def render(*args)
  render_output = nil
  self.view_runtime = cleanup_view_runtime do
    Benchmark.ms { render_output = super }
  end
  render_output
end
I understand that *args is using the splat operator to collect the arguments together into an array. But after that, it stops making much sense to me.
I can't fathom why render_output is set to nil before being reassigned to the result of super, which is then called with no arguments. I gather that some speed test is being done, but coming from other languages I'd expect something more like Benchmark.ms(render_output), or perhaps Benchmark.start followed by the render followed by Benchmark.end. I'm having a hard time following how it works here.
But more importantly, I don't really follow why args isn't used again. Why bother defining a param that isn't used? And clearly it is getting used; I just don't see how. There's some hidden mechanism here that I haven't learned about yet.
In this context, it is important to note how super works, because in some cases it implicitly passes arguments, and you might not expect that.
When you have a method like
def method(argument)
  super
end
then super implicitly calls the overridden implementation of method with the exact same arguments the current method received. In this example, super will effectively call super(argument).
Of course, you can still explicitly pass other arguments to the original implementation, like in this example:
def method(argument)
  super(argument + 1)
end
Another important edge case: when you want to call super without any arguments even though the current method was called with arguments, you need to be very explicit, like this:
def method(argument)
  super() # note the empty parentheses
end
Let me try to describe what I think this code does.
*args is using the splat operator to collect the arguments together into an array
That is correct. However, the arguments aren't actually used here, and if you look at the master branch you'll see the signature has since been changed to a bare *. As for why a parameter is defined but never used: I'd call that a design smell. It should have been named _args, or written as a bare splat *, as it is now.
render_output is set to nil because of scoping: the variable has to be defined outside the block so that a value assigned inside the block is still visible after the block finishes; otherwise the variable would be local to that block, lambda, or proc. Refer to this article.
Benchmark.start: there's no need for that, because blocks are a great Ruby construct. You are right that a speed test is being done; we can see Benchmark.ms is just a thin wrapper around the standard benchmark library (source).
You ask why we cannot just pass the output, as in Benchmark.ms(render_output). Think about what would be given to Benchmark.ms in that case: the already-rendered result, something like <div>my html</div>, and there is no way to measure how long a finished string took to produce. That's why super is called inside the block: we don't invoke the parent implementation ourselves, we wrap the call in a block so the benchmark library can invoke it itself and measure its execution time, like this:
module Benchmark
  # ...
  def realtime # :yield:
    r0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - r0
  end
  # ...
end
Here realtime measures how long the block's execution takes; this is the code from the original library.
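The same block-deferral pattern in miniature, as a self-contained sketch you can run; measure_ms is a made-up name for illustration, not part of Rails or the benchmark library:

def measure_ms
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield # the expensive work only runs here, inside the timer
  elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000
  [result, elapsed_ms]
end

output = nil
_, ms = measure_ms { output = "<div>my html</div>" } # stands in for `super`
puts "rendered #{output.bytesize} bytes in #{ms.round(3)} ms"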
I have this code that executes a dynamic method. I'm using eval to execute it, but what I want to do is change it to public_send, because I was told that's much safer.
Current code:
# update workstep logic here
incoming_status = params[params[:name]]
# grab the current workflow; we compare its status to the incoming status
workflow = get_workorder_product_workstep(params[:workflow_id])
# check the current status; if it's pending, allow the update
# security concern: EVAL!
if eval("workflow.can_#{incoming_status}?")
  # update status
  eval("workflow.#{incoming_status}")
  # update the handled_by attribute
  workflow.update_attributes(handled_by_id: @curr_user.id)
  workflow.save
else
  flash[:notice] = 'Action not allowed'
end
The eval here is the concern. How can I change this to public_send?
Here's what I did:
public_send("workflow.can_#{incoming_status}?")
public_send("#{workflow}.can_#{incoming_status}?")
Neither of them works; both raise a no-method error. The first returns:
undefined method workflow.can_queue? for #<Spree::Admin::WorkordersController:0x00007ff71c8e6f00>
But it should work, because workflow does have a can_queue? method.
The second returns:
undefined method #<Spree::WorkorderProductWorkstep:0x00007ff765663550>.can_queue? for #<Spree::Admin::WorkordersController:0x00007ff76597f798>
I think in the second one workflow is being interpolated separately? I'm not sure.
Working with public_send, you can change the relevant lines to:
if workflow.public_send("can_#{incoming_status}?")
  # update status
  workflow.public_send(incoming_status.to_s)
  # ...
A note about security and risks
workflow.public_send("can_#{xyz}?") can only call methods on workflow that are public and which start with the prefix can_ and end with ?. That is probably only a small number of methods and you can easily decide if you want to allow all those methods.
workflow.public_send("#{incoming_status'}) is different because it allows all public methods on workflow – even destroy. That means using this without the "can_#{incoming_status}?" is probably a bad idea. Or you should at least first check if incoming_status is in a whitelist of allowed methods.
eval is the worst, because it will evaluate the whole string without being anchored to any context (e.g. an object like workflow). Imagine you have eval("workflow.#{incoming_status}") without first checking that incoming_status is actually allowed. If someone then sends an incoming_status like "to_s; system('xyz')", then xyz could be anything – commands to send a hidden file via email, to install a backdoor, or to delete files.
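To illustrate the whitelist idea, a minimal sketch; the status names in ALLOWED_STATUSES are assumptions, not taken from the original code:

ALLOWED_STATUSES = %w[queue start complete].freeze

incoming_status = params[params[:name]]

if ALLOWED_STATUSES.include?(incoming_status) &&
   workflow.public_send("can_#{incoming_status}?")
  workflow.public_send(incoming_status)
  workflow.update_attributes(handled_by_id: @curr_user.id)
else
  flash[:notice] = 'Action not allowed'
end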
With delayed_job, I was able to do simple operations like this:
@foo.delay.increment!(:myfield)
Is it possible to do the same with Rails' new ActiveJob? (without creating a whole bunch of job classes that do these small operations)
ActiveJob is merely an abstraction on top of various background job processors, so many capabilities depend on which provider you're actually using. But I'll try not to depend on any particular backend.
Typically, a job provider consists of a persistence mechanism and runners. When offloading a job, you write it into the persistence mechanism in some way, and later one of the runners retrieves and runs it. So the question is: can you express your job data in a format compatible with any action you need?
That will be tricky.
Let's define what a job is, then. For instance, it could be a single method call. Assuming this syntax:
Model.find(42).delay.foo(1, 2)
We can use the following format:
{
  class: 'Model',
  id: '42', # whatever
  method: 'foo',
  args: [1, 2]
}
Now how do we build such a hash from a given call and enqueue it to a job queue?
First of all, as it turns out, we'll need to define a class with a method_missing hook to catch the called method name:
class JobMacro
  attr_accessor :data

  def initialize(record = nil)
    self.data = {}

    if record.present?
      self.data[:class] = record.class.to_s
      self.data[:id] = record.id
    end
  end

  def method_missing(action, *args)
    self.data[:method] = action.to_s
    self.data[:args] = args

    GenericJob.perform_later(data)
  end
end
The job itself will have to reconstruct that expression like so:
data[:class].constantize.find(data[:id]).public_send(data[:method], *data[:args])
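For completeness, here is a minimal sketch of what that job could look like; GenericJob is the placeholder name used above, and the base class and queue name are assumptions:

class GenericJob < ActiveJob::Base
  queue_as :default

  def perform(data)
    # Rebuild the receiver and replay the captured method call.
    data[:class].constantize
                .find(data[:id])
                .public_send(data[:method], *data[:args])
  end
end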
Of course, you'll have to define the delay macro on your model. It may be best to factor it out into a module, since the definition is quite generic:
def delay
  JobMacro.new(self)
end
It does have some limitations:
Only supports running jobs on persisted ActiveRecord models. A job needs a way to reconstruct the callee in order to call the method, and I've picked the most probable one. You could also use marshalling if you want, but I consider that unreliable: the unmarshalled object may be invalid by the time the job gets to execute. The same goes for GlobalID.
It uses Ruby's reflection. It's a tempting solution to many problems, but it isn't fast and is a bit risky in terms of security. So use this approach cautiously.
Only one method call. No procs (you could probably support those with the ruby2ruby gem). It relies on the job provider to serialize arguments properly; if it fails to, help it with your own code. For instance, que uses JSON internally, so whatever works in JSON works in que. Symbols, for instance, don't.
Things will break in spectacular ways at first.
So make sure to set up your debugging tools before starting off.
An example of this is Sidekiq's backward (Delayed::Job) compatibility extension for ActiveRecord.
As far as I know, this is currently not supported out of the box. You can easily simulate the feature using a custom-defined proxy job that accepts a model or instance, a method to be performed, and a list of arguments.
However, for the sake of testability and maintainability, this shortcut is not a good approach. It's more effective (even if you need to write a little more code) to have a specific job class for everything you want to enqueue. It forces you to think more about the design of your app.
I wrote a gem that can help you with that: https://github.com/cristianbica/activejob-perform_later. But be aware that having methods all around your code that might be executed in workers is the perfect recipe for disaster if not handled carefully :)
Which RSpec convention is more up to date and should be used in new projects?
subject { [] }
it { should == [] }
or
subject { [] }
it { expect(subject).to eq([]) }
I haven't found a way of composing the shorter version, using the implicit subject, with the expect method.
Using expect (your second example) is the more up-to-date version. RSpec is moving away from adding methods onto existing objects (such as should), because they can occasionally cause odd behavior.
As stated in http://myronmars.to/n/dev-blog/2012/06/rspecs-new-expectation-syntax
"In the future, we plan to change the defaults so that only expect is available unless you explicitly enable should. We may do this as soon as RSpec 3.0, but we want to give users plenty of time to get acquianted with it"
Quoting from this site: "The underlying problem is RSpec’s should syntax: for should to work properly, it must be defined on every object in the system… but RSpec does not own every object in the system and cannot ensure that it always works consistently. As we’ve seen, it doesn’t work as RSpec expects on proxy objects. Note that this isn’t just a problem with RSpec; it’s a problem with minitest/spec’s must_xxx syntax as well."
Personally, I still use a mix of both, as all the "shoulda" matchers use the old syntax anyway, and are so easy to use...
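For what it's worth, later RSpec releases (3.x) added is_expected, an expect-based one-liner that uses the implicit subject:

subject { [] }

it { is_expected.to eq([]) } # equivalent to expect(subject).to eq([])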