I am aware (and have found several posts here on SO) that one cannot pass along any additional parameters for a selector. For example whenever someone taps on my image view, I have the following taking place:
imageView.addGestureRecognizer(UITapGestureRecognizer(target: self, action:Selector("tapImage:")))
This works correctly, and many solutions say that if you wish to pass a parameter, you should simply assign the view's tag to something and then read that from the sender inside the tapImage function. The thing is, I'm actually using the tag value for something else, so I would need to store the extra value somewhere else.
What are some recommended ways I can pass a true/false (or 0/1) value into my TapGestureRecognizer action "tapImage" such that I can evaluate an expression? I also need to pass a collection of classes as well.
I think the only solution in this case is to use a different selector (for example "tapImageFunctionA" vs. "tapImageFunctionB"), which is fine, but before I go that route, is there another way? Even then, I would still need access to a collection of objects. Maybe I set a variable on the view controller and access it that way?
Thanks so much!
Use tag formatting, for example a bitmask: the lower 16 bits for one kind of info and the upper 16 bits for another.
Use associated references. Here are some answers about these on Stack Overflow: How do I use objc_setAssociatedObject/objc_getAssociatedObject inside an object?, Any gotchas with objc_setAssociatedObject and objc_getAssociatedObject?.
Personally, I would use the first solution (tags) to avoid the performance overhead of associated references (it's very minor, but it exists). Tags also feel like more of a "programming" solution, whereas associated references are more technical and implementation-dependent.
Status: sort of solved. Switching Lua.Ref (a close equivalent to LuaD's LuaObject) to a struct, as suggested in the answer, solved most issues related to freeing references, and I changed back to a mechanism similar to the one LuaD uses. More about this at the end.
In one of my projects, I am working on a Lua interface. I have borrowed the ideas mainly from LuaD. The LuaD mechanism uses lua_ref & lua_unref to be able to hold Lua table/function references on the D side, but this causes serious problems, because destructor calls, and their order, are not guaranteed. LuaD usually segfaults, at least at program exit.
Because it seems that LuaD is not maintained anymore, I decided to write my own interface for my purposes. My Lua interface class is here: https://github.com/mkoskim/games/blob/master/engine/util/lua.d
Usage examples can be found here:
https://github.com/mkoskim/games/blob/master/demo/luasketch/luademo.d
And in case you need, the Lua script used by the example is here:
https://github.com/mkoskim/games/blob/master/demo/luasketch/data/test.lua
The interface works like this:
Lua.opIndex pushes the global table and the index key to the stack, and returns a Top object. For example, lua["math"] pushes _G and "math" to the stack.
Further accesses go through Top object. Top.opIndex goes deeper in the table hierarchy. Other methods (call, get, set) are "final" methods, which perform an operation with the table and key at the top of the stack, and clean the stack afterwards.
Almost everything works fine, except that this mechanism has a nasty quirk/bug that I have no idea how to solve. If you don't call any of the "final" methods, Top leaves the table and key on the stack:
lua["math"]["abs"].call(-1); // Works. Final method (call) called.
lua["math"]["abs"]; // table ref & key left to stack :(
What I know for sure is that playing with the Top() destructor does not work, as it is not called immediately when the object is no longer referenced.
NOTE: If there were some sort of operator called when an object is accessed as an rvalue, I could replace the call(), set() and get() methods with operator overloads.
Questions:
Is there any way to prevent users from writing such expressions (getting a Top object without calling any of the "final" methods)? I really don't want users to write e.g. luafunc = lua["math"]["abs"] and then later try to call it, because it won't work at all. Not without starting to play with lua_ref & lua_unref and fighting the same issues that LuaD has.
Is there any kind of opAccess operator overloading, that is, overloading what happens when an object is used as an rvalue? That is, "a = b" would become "a.opAssign(b.opAccess)". opCast does not work; it is called only on explicit casts.
Any other suggestions? I have a feeling that I am approaching this from the wrong direction. The problem seems to reside in the realm of metaprogramming: I am trying to "scope" things at the expression level, which I feel is not well suited to classes and objects.
So far, I have tried to preserve the LuaD look and feel on the interface user's side, but I think that if I changed the interface to something like the following, I could get it working:
lua.call(["math", "abs"], 1);         // call math.abs(1)
lua.set(["table", "x", "y", "z"], 2); // set table.x.y.z = 2
...
Syntactically that would ensure that reference to lua object fetched by indexing is finally used for something in the expression, and the stack would be cleaned.
UPDATE: As said, changing Lua.Ref to a struct solved the problems related to dereferencing, and I am again using a reference mechanism similar to LuaD's. I personally feel this mechanism also suits the LuaD-style syntax I am using, and it can be quite a challenge to make the syntax work correctly with other mechanisms. I am still open to ideas for making it work.
The system I sketched to replace references (to tackle the problem of objects holding references that outlive the Lua sandbox) would probably need a different kind of interface, similar to the one sketched above.
You also have an issue when people do
auto math_abs = lua["math"]["abs"];
math_abs.call(1);
math_abs.call(3);
This will double pop.
Make Top a struct that holds the stack index of what they are referencing. That way you can use its known scoping and destruction behavior to your advantage. Make sure you handle this(this) correctly as well.
Only pop in the destructor when the value is the actual top value. If you are worried about excessive stack use, you can use a bitset in LuaInterface to track which stack positions are in use and move values into free slots with lua_replace.
At the risk of being tagged as "too broad": this is a genuine question.
Say I have
@my_model.complex_calculation_result to show in a view.
What are the pros and cons of:
1 - Calculating the value in the controller and sending it to the view
Controller:
@result = @my_model.complex_calculation_result # caching the value in the controller
View:
<%= @result %>
2 - Calculating it directly in the view
<%= @my_model.complex_calculation_result %>
I know the last alternative means less code and one less instance variable hanging around.
But are there performance differences?
Guess 1: the view already takes more memory to render everything, so if the calculation is memory-hungry, it could take longer when run from inside the view.
Any light shed on this and comments will be highly appreciated. :)
While I'm not answering the performance part of your question: you are breaking the Rails MVC principle by going the 2nd way. The view is not meant to perform any (especially complex) calculations on model data.
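One common way to keep the choice of call site from mattering for performance (an assumption here, not something the question states; class and method names are illustrative) is to memoize the result on the model, so the expensive work runs at most once per instance no matter who calls it:

```ruby
class MyModel
  def complex_calculation_result
    # ||= memoizes: the expensive work runs only on the first call,
    # later calls return the cached instance variable.
    @complex_calculation_result ||= expensive_work
  end

  private

  def expensive_work
    6 * 7  # stand-in for the real calculation
  end
end

m = MyModel.new
m.complex_calculation_result  # computed now
m.complex_calculation_result  # served from the cache
```

With this in place, whether the controller or the view triggers the call, the calculation itself happens only once per request.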
TL;DR
On the memory side, I think the only way it could "slow down" is if you exceed available memory and start hitting swap. I haven't researched Rails that deeply, but taking your guess: if an average controller action takes X memory and an average view takes 1.3 * X (a completely made-up coefficient), then the chance of hitting swap from the view is slightly higher than from the controller. If that reasoning holds, the controller wins on the technical side.
On the conceptual side: views are just for rendering results, and in your case I would definitely move this heavy method out of the view. You are concerned about an "additional instance variable", and that concern is valid...
My team follows Sandi Metz's rules. One of them states:
Pass only a single instance variable to the view. If you need multiple
instance variables, wrap the whole logic in a facade and let the facade
provide all the interfaces you need.
So... I would set up a facade and wrap this heavy method in one of its properties.
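A minimal sketch of that facade (class and method names are illustrative, and the stub model stands in for the real one): the view receives a single object and pulls everything it needs through it.

```ruby
class DashboardFacade
  def initialize(model)
    @model = model
  end

  # The single interface the view uses; memoized so the expensive
  # calculation runs at most once per request.
  def calculation_result
    @calculation_result ||= @model.complex_calculation_result
  end
end

# Hypothetical model standing in for the real one:
StubModel = Struct.new(:value) do
  def complex_calculation_result
    value * 2  # pretend this is expensive
  end
end

facade = DashboardFacade.new(StubModel.new(21))
facade.calculation_result  # => 42
```

The controller then assigns only `@dashboard = DashboardFacade.new(@my_model)`, and the view calls `@dashboard.calculation_result`.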
So it's 2-0 in favor of putting it in the controller.
PS: I have concerns about lazy loading. A facade method will probably be evaluated only when it is actually called (not during instance initialization), so your slow logic would still effectively run in the view (on the first real reference to the method). You may be able to prevent this by doing the calculation inside the facade's constructor and storing the result in an instance variable, then referring to that variable from the view... But...
I am not even sure that the variable will be calculated before its first real use (there may be an optimization that defers execution of what has not yet been used, the way ActiveRecord defers SQL during method chaining: no real SQL is executed until you actually refer to one of the objects). So I have concerns that even moving the method into the facade constructor could still end up with it being calculated only in the view. Check with logs or a debugger to be sure.
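For plain Ruby objects, the lazy-loading concern above can be checked directly: Ruby evaluates a constructor body immediately, so a calculation placed in `initialize` runs when the facade is built (in the controller), not on first access in the view. A small sketch (names are illustrative):

```ruby
class EagerFacade
  attr_reader :calculation_result

  def initialize(model)
    # Plain Ruby runs this line immediately, so the expensive call
    # happens here, at construction time.
    @calculation_result = model.complex_calculation_result
  end
end

calls = []
model = Object.new
model.define_singleton_method(:complex_calculation_result) { calls << :ran; 42 }

facade = EagerFacade.new(model)
calls                      # => [:ran] -- already computed before any access
facade.calculation_result  # => 42
```

ActiveRecord relations are a special case (they defer SQL until the records are actually used), but the facade's own constructor code is not deferred.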
This is a conceptual question and I haven't been able to find the answer in SO, so here I go:
Why are instance variables used to connect controllers and views? Don't we have two different objects of two different classes (controller vs. view)? So when the view is rendered we are in a different context, yet we are using instance variables of another object. Isn't this breaking encapsulation somehow?
How does Rails manage to do that matching from one object to another? Does it copy all the instance variables of the controller into the view?
In a sense, you could say that it is breaking encapsulation. I have found that if you are not careful, it is easy to get your business/presentation logic mixed together in Rails. It usually starts when I am writing a view template, and discover that I need some value which I didn't pass from the controller. So I go back, and tweak the controller to suit what I need in the view. After one tweak, and another, and another, you look at the controller method, and it is setting all kinds of instance variables which don't make sense unless you look at the view to see what they are for. So you end up in a situation where you need to look at both controller and view to understand either, rather than being able to take one or the other in isolation.
I think that using instance variables (together with the Binding trick) is simply a way to pass whatever values you need from controller to view, without having to declare parameters in advance (as you would when defining a method). No declarations means less code to write, and less to change when you want to refactor and reorganize things.
Rails uses eval and Binding to pass controller instance variables to views. See this presentation from Dave Thomas, there's a small example at minute 46' that explains how this is done.
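The mechanics can be sketched in a few lines of plain Ruby (this is an illustration of the idea, not Rails' actual implementation; class and method names are invented): the controller's instance variables are enumerated and set again on the view object, which is why the template can read them as if they were its own.

```ruby
class SomeController
  def initialize
    @result = 42  # set in the controller action
  end
end

class SomeView
  # Copy every instance variable from the controller onto this view object.
  def copy_assigns_from(controller)
    controller.instance_variables.each do |name|
      instance_variable_set(name, controller.instance_variable_get(name))
    end
  end

  def render
    "Result: #{@result}"  # the "template" sees the controller's ivar
  end
end

view = SomeView.new
view.copy_assigns_from(SomeController.new)
view.render  # => "Result: 42"
```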
I'm currently working on a Rails project, and have found times where it's easiest to do
if object.class == Foo
  ...
elsif object.class == Bar
  ...
else
  ...
end
I started doing this in views where I needed to display different objects in different ways, but have found myself using it in other places now, such as in functions that take objects as arguments. I'm not precisely sure why, but I feel like this is not good practice.
If it's not good practice, why so?
If it's totally fine, when are times that one might want to use this specifically?
Thanks!
First, a note on the test itself: comparing with object.class == Foo does work, but it ignores subclasses. The idiomatic way to test whether object is an instance of Foo (or one of its subclasses) is
object.is_a? Foo
But it's not good practice in Ruby anyway. It'd be much better to use polymorphism whenever possible. For example, if somewhere in the code you can have objects of two different classes that need to be displayed differently, you can define a display method in both classes. After that you can call object.display, and the object will be displayed using the method defined in its own class.
The advantage of that approach is that when you need to support a third class, or a whole bunch of new classes, all you have to do is define a display method in each of them. Nothing changes in the places where you actually use the method.
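A minimal sketch of that approach, reusing the Foo/Bar names from the question (the display strings are invented):

```ruby
class Foo
  def display
    "rendering a Foo"
  end
end

class Bar
  def display
    "rendering a Bar"
  end
end

# Each object picks its own implementation; no class checks needed.
[Foo.new, Bar.new].map(&:display)  # => ["rendering a Foo", "rendering a Bar"]
```

Adding support for a third class is then just a matter of giving it its own `display` method.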
It's better to express type specific behavior using subtyping.
Let the objects know how they are displayed. Create a display method and pass everything it needs from outside as parameters. Let Foo know how to display a foo and Bar know how to display a bar.
There are many articles on replacing conditionals with polymorphism.
It’s not a good idea for several reasons. One of them is duck typing – once you start explicitly checking for object class in the code, you can no longer simply pass an instance of a different class that conforms to a similar interface as the original object. This makes proxying, mocking and other common design tricks harder. (The point can be also generalized as breaking encapsulation. It can be argued that the object’s class is an implementation detail that you as a consumer should not be interested in. Broken encapsulation ≈ tight coupling ≈ pain.)
Another reason is extensibility. When you have a giant switch over the object type and want to add one more case, you have to alter the switch code. If this code is embedded in a library, for example, the library users can’t simply extend the library’s behaviour without altering the library code. Ideally all behaviour of an object should be a part of the object itself, so that you can add new behaviour just by adding more object types.
If you need to display different objects in a different way, can’t you simply make the drawing code a part of the object?
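The duck-typing point above can be made concrete with a small sketch (class and method names are invented): code that simply calls a method accepts any conforming object, including test doubles, while an explicit class check rejects them.

```ruby
class Circle
  def draw
    "drawing a circle"
  end
end

# A stand-in (say, a test double) conforming to the same interface.
class FakeCircle
  def draw
    "pretending to draw"
  end
end

# Duck typing: any object responding to #draw is acceptable.
def render_shape(shape)
  shape.draw
end

# Explicit class check: rejects conforming stand-ins.
def render_shape_strict(shape)
  raise TypeError, "not a Circle" unless shape.class == Circle
  shape.draw
end

render_shape(FakeCircle.new)          # works: "pretending to draw"
# render_shape_strict(FakeCircle.new) # raises TypeError
```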
I have a function that converts an array to a hash, which I would like to use across all the model, controller and view files in a Rails app.
Does this violate some core design principle, or am i missing something really obvious?
UPDATE: This is really a software engineering question. I want to understand why some "convenient" things are not allowed in Rails, and I suspect it is precisely because they do not want us to do them.
This is actually likely bad practice. It would be better to always work with arrays and hashes in your controllers and models and, if necessary, convert them in the view.
That is, if the data is natively an array throughout your application, work with it that way; if the view needs a hash, either convert it first and assign it, or convert it in the view using a helper.
View global helpers go in: helpers/application_helper.rb
If you must call a helper from a controller you can still define it there and I believe you can do:
def something
  ...
  hash_data = @template.helper(array_data)
end
Calling helpers in a model is REALLY not a good idea, there's no point.
As a final note, encapsulating this logic in a library would likely be ideal, your controllers can call a library & your view helpers can as well.
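The library approach could look something like this (the module and method names are illustrative, and the conversion shown is a guess at what the asker's function does): the module lives in lib/, controllers call it directly, and a one-line helper in application_helper.rb can delegate to it for views.

```ruby
# Hypothetical shared library, e.g. lib/array_utils.rb:
module ArrayUtils
  # Convert an array of [key, value] pairs into a hash.
  def self.to_hash(pairs)
    pairs.each_with_object({}) { |(k, v), h| h[k] = v }
  end
end

ArrayUtils.to_hash([[:a, 1], [:b, 2]])  # => {:a=>1, :b=>2}
```

Keeping the logic in one plain module means neither the controllers nor the view helpers need to know how the conversion is implemented.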
I think you are: views should not need that method. The controller ought to do it and pass it along to the view for display. The controller or, better yet, service layer might apply that method to a model object, but there's little reason for a model object to know about it.