I was wondering if I can use encode/decode with an NSManagedObject. Can I use that, or do I need an NSObject?
It doesn't make a lot of sense to do so. Technically you can implement the methods, but you need to think carefully about what they should do. You could use them to create a binary representation of the object, but it might not be meaningful for what you want to use it for. And when reading that data back in, you could implement it to search some context and return an existing object, or insert a new one if it's missing, but then you have the problem of supplying the context to be used.
So, while you might be able to do it, it probably isn't the correct approach to be taking.
I'm refactoring my Rails 5.2 app in such a way that one model (OrigM) is being split into two (OrigM, NewM). Out in the codebase, there is code that does update on OrigM objects, and I want this to now result in specialized behavior in which associated OrigM and NewM objects are updated appropriately.
It seems like a straightforward way to do this is to override update in the OrigM model, but trying to do it in a simpleminded way results in infinite recursion, because apparently even the class method OrigM.update(id, new_attrs) is, under the hood, just finding the object with id id and calling update as an instance method on it.
I have figured out a couple of other ways I could perhaps make this work, although I haven't fully tested them yet: (1) update OrigM attributes by calling write_attribute on them individually (it seems like this will not let me check validations), or (2) the following:
self.assign_attributes(new_attrs) ## resulting in a "dirty" object, then
self.save!
I'm leaning toward this second solution, as it seems like it ought to be pretty close to functionally identical to what the standard update is doing.
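For concreteness, here is roughly what that second approach would look like inside an overridden update (the new_m association, :shared_field, and the way the attributes are split between the two models are placeholders, not my real schema; assume new_attrs is a symbol-keyed hash):

class OrigM < ApplicationRecord
  belongs_to :new_m, optional: true

  def update(new_attrs)
    # placeholder split: hand whatever moved to NewM over to the associated record
    new_m_attrs = new_attrs.slice(:shared_field)
    new_m&.update(new_m_attrs) if new_m_attrs.any?
    # option (2): assign the rest, then save, so validations still run
    assign_attributes(new_attrs.except(:shared_field))
    save! # unlike the stock update, save! raises on a validation failure
  end
end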
EDIT:
I've realized that no one can really answer this unless I am asking a question, so I will now recast it as:
QUESTION: Should I, or should I not, override AR's update method (in OrigM class)?
Commenters have said: "do not do this or you will be reviled" (which honestly I was sure someone would say lol). But... is it really such a crime? Is there a "killer" reason not to do it (something you couldn't argue with)? Or is it just a vaguely encouraged best practice?
Also: "why not just define a new method if it does something different?" Well... I guess I need to say more about the models... I don't want to try to give actual code, because it's too complex. Suffice to say: most of the code still interacts with OrigM objects, and will not know about NewM. NewM was only needed architecturally because I needed to allow multiple OrigMs to share (belong_to) a single NewM to avoid duplication of data. So I would like to allow most of the rest of the code to maintain the illusion that there is still only OrigM, and be able to update it as usual. This feels like the most elegant solution to me. If I create a new OrigM#myupdate, future developers (including myself) might curse me because they tried to do a simple update and it broke things. Iow, isn't having to know that there is a special updating function just as confusing as having to know that AR's update has been overridden?
There are a number of questions and answers on SO that ask how to serialise an object to JSON in Objective-C.
Serialize and Deserialize Objective-C objects into JSON
Objective C serialize list of complex objects
Serialize custom object to JSON which contains NSMutableArray
How to serialize a class in IOS sdk (Objective-c)?
The following 3 methods are all mentioned in the above links.
1) Use NSJSONSerialization to serialise the object to JSON. Seems good, but this requires the object in question to be either an array or a dictionary at its top level. The common solution is to declare a custom toDictionary or serialise method that loops over the properties and sets the relevant keys and values.
2) Conform to the NSCoding protocol, a little like the approach above, but there seems to be some confusion around whether this can serialise to JSON or only to NSData.
3) Third party library.
I'm getting slightly confused as to which approach to take. I want to serialise to JSON, but there are contradictory answers: some state you can use NSCoding, some say you can't. I know that a third-party library will work; however, I'd rather implement something simple like option 1 or 2.
Thoughts?
With 1, you'd basically be writing a JSON-based implementation of NSCoder from scratch. Certainly doable, though.
With 2, I believe it might be possible, since I think the output of NSCoder is some variant of XML (though compressed into a binary blob). However, I don't know if this is a great approach, since the format is proprietary and not really meant to be human-editable. There might also be a mismatch between what's allowed in JSON vs. the NSCoder format in terms of keys and leaf nodes, forcing you to do a messy conversion.
I've been trying to do something similar, and based on my research, I actually suggest 3. Using something like Mantle — a stable, polished framework that gets frequent updates — you can specify exactly how your model objects will be serialized to and deserialized from JSON. It even supports the NSCoder protocol as an option! (This is effectively solution 1, but vetted and maintained by a 3rd party.)
I'm currently working on a Rails project, and have found times where it's easiest to do
if object.class == Foo
  # ...
elsif object.class == Bar
  # ...
else
  # ...
end
I started doing this in views where I needed to display different objects in different ways, but have found myself using it in other places now, such as in functions that take objects as arguments. I'm not precisely sure why, but I feel like this is not good practice.
If it's not good practice, why so?
If it's totally fine, when might one specifically want to use it?
Thanks!
Note that object.class == Foo only matches that exact class. When you need to test whether object is an instance of Foo (including its subclasses), use
object.is_a? Foo
But it's not good practice in Ruby anyway. It'd be much better to use polymorphism whenever possible. For example, if somewhere in the code you can have an object of two different classes and you need to display them differently, you can define a display method in both classes. After that you can call object.display, and the object will be displayed using the method defined in its own class.
The advantage of that approach is that when you need to add support for a third class, or a whole bunch of new classes, all you need to do is define a display method in each of them; nothing changes in the places where you actually call the method.
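For example, a minimal sketch (the class names and what each display method prints are purely illustrative):

class Foo
  def display
    puts "rendering a Foo one way"
  end
end

class Bar
  def display
    puts "rendering a Bar another way"
  end
end

# the caller no longer cares which class it received
[Foo.new, Bar.new].each { |object| object.display }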
It's better to express type-specific behavior using subtyping.
Let the objects know how they are displayed. Create a Display() method and pass everything you need from outside as parameters. Let "Foo" know how to display a foo and "Bar" know how to display a bar.
There are many articles on replacing conditionals with polymorphism.
It’s not a good idea for several reasons. One of them is duck typing – once you start explicitly checking an object's class in the code, you can no longer simply pass in an instance of a different class that conforms to a similar interface as the original object. This makes proxying, mocking and other common design tricks harder. (The point can also be generalized as breaking encapsulation. It can be argued that the object’s class is an implementation detail that you as a consumer should not be interested in. Broken encapsulation ≈ tight coupling ≈ pain.)
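To make that concrete, here is a small sketch (the classes are made up): a stand-in that implements the same interface works fine with the polymorphic call, but is rejected by an explicit class check:

class Foo
  def display
    puts "a real Foo"
  end
end

class FooStandIn # e.g. a proxy or a test double
  def display
    puts "something pretending to be a Foo"
  end
end

[Foo.new, FooStandIn.new].each do |object|
  object.display              # duck typing: both respond to the same interface
  puts object.is_a?(Foo)      # explicit class check: only the real Foo passes
end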
Another reason is extensibility. When you have a giant switch over the object type and want to add one more case, you have to alter the switch code. If this code is embedded in a library, for example, the library users can’t simply extend the library’s behaviour without altering the library code. Ideally all behaviour of an object should be a part of the object itself, so that you can add new behaviour just by adding more object types.
If you need to display different objects in a different way, can’t you simply make the drawing code a part of the object?
I've noticed that luasocket doesn't seem to provide a way to know if a value is a luasocket object or not.
The usual approach of comparing metatables doesn't work, as different socket object types have different metatables.
There don't seem to be any consistent values in the metatable to check either (e.g., they don't all share the same __tostring metamethod).
So: how can one know if a value they have is a luasocket object?
Since you only want to know if it's a LuaSocket object so you can get the fd, why not just look to see if the object has a getfd() method? As a bonus this will work with current and future libraries that provide this method on objects, not just LuaSocket.
This technique is known as 'duck typing'.
You don't. Generally, you're expected to keep track of that sort of thing yourself. You trust that objects you are passed are what you expect them to be. And if you're not sure, you can always use pcall to call functions on them and catch any errors.
The problem is whether an instance method should in any way alter the object that contains the method, or whether it should return a new instance. I'm new to F# and the concept of full immutability that is suggested for F#.
Sketching this in rough F# for now, unless I need to be more specific.
First thought is to just add the message to the message list on the object:
type Something(messages: string list) =
    // keep a mutable list inside the instance
    let _messages = ResizeArray(messages)
    // mutate the existing instance in place
    member this.AddMessage(message: string) = _messages.Add(message)
Second thought is to construct a new list that joins the old list and the new message, then create a new instance altogether and return that:
type Something(messages: string list) =
    member this.Messages = messages
    // build a new list and return a brand-new instance, leaving this one untouched
    member this.AddMessage(message: string) = Something(messages @ [ message ])
Am I overthinking immutability?
In my opinion, the answer depends on your requirements. The immutable style is probably more idiomatic, and would be a sensible default. However, one nice thing about F# is that you can choose what to do based on your needs; there's nothing inherently wrong with code that uses mutation. Here are some things to consider:
Sometimes the mutable approach leads to better performance, particularly when used in a single-threaded context (but make sure to measure realistic scenarios to be sure!)
Sometimes the immutable approach lends itself better to use in multi-threaded scenarios
Sometimes you want to interface with libraries that are easier to use with imperative code (e.g. an API taking a System.Action<_>).
Are you working on a team? If so, are they experienced C# developers? Experienced F# developers? What kind of code would they find easiest to read (perhaps the mutable style)? What kind of code will you find easiest to maintain (probably the immutable style)?
Are you just doing this as an exercise? Then practicing the immutable style may be worthwhile.
Stepping back even further, there are a few other points to consider:
Do you really even need an instance method? Often, using a let-bound function in a module is more idiomatic.
Do you really even need a new nominal type for what you're doing? If it's just a thin wrapper around a list, you might consider just using lists directly.
As you are doing "class based" programming, which is one (rather unfortunate) way to do object-oriented programming, you would be doing in-place state modification rather than returning a new state (as that's what would be expected when you are doing OO).
If you really want to go towards immutability, then I would suggest using more FP concepts like modules, functions (rather than methods, which are what you have in class-based programming), recursive data types, etc.
My answer is way too general; the appropriate answer depends on how this class of yours will fit into the big picture of your application design.