I would like one instance of a model in memory to serve as a template for creating other objects, for performance reasons. The duplicates should look like the original object but share no components with it, as if they had been loaded with Model.find(template_object.id). I've tried some of the available solutions, but none does what I need: .dup and .deep_dup create a new object with a nil id, and .clone leaves some fields shared between the original and the copy.
Currently my API is giving out the original objects that I keep as class variables, but I discovered that it leads to obscure memory leaks when the code using the objects manipulates their associations - these are kept in memory indefinitely. I hope that by giving out copies the associations of the template objects will stay untouched and the leak will be gone.
This sounds like a use case for defining a plain class and just initializing instances of it. You can customize whatever properties you want shared in MyClass.new (or rather its initialize method). Without knowing more about your needs, I'll add that if you must store a template in memory you could keep it in a class variable such as @@template, but I'd need to hear more to opine further. 😄
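A minimal sketch of that idea with a plain Ruby class (all names here are made up):

class MyClass
  @@template = { name: "default", settings: { "theme" => "dark" } }

  attr_accessor :name, :settings

  def initialize
    # Copy the template values so instances never share mutable state.
    @name     = @@template[:name].dup
    @settings = @@template[:settings].dup # use ActiveSupport's deep_dup for deeper nesting
  end
end

a = MyClass.new
b = MyClass.new
a.settings["theme"] = "light"
b.settings["theme"] # => "dark": the instances share nothing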
What I found when browsing the Rails source is the .instantiate method:
MyModel.instantiate(my_other_instance.attributes_before_type_cast.deep_dup)
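For what it's worth, instantiate is what the finder methods themselves use to build records from raw database attributes, so the copy comes back marked as persisted with the template's id. A minimal sketch, assuming the template is cached in a class variable (the class names are illustrative):

class ProductTemplates
  @@template = Product.find(42) # hypothetical cached template

  # Returns a copy that behaves like Product.find(@@template.id)
  # but shares no mutable state with the template.
  def self.checkout
    Product.instantiate(@@template.attributes_before_type_cast.deep_dup)
  end
end

copy = ProductTemplates.checkout
copy.persisted? # => true, unlike a .dup copy whose id would be nil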
Why is it allowed to declare properties in categories when neither they nor their accessor methods are synthesized? Is there any performance overhead involved?
Is categorisation purely a compiler technique?
I'm trying to understand how categories work. What I've found so far just explains what to do and what not to do. Are there any sources that go into more detail?
EDIT: I know that I can use associative references. That's not what I'm asking for. I want to know why the properties are not synthesised. Is there a performance or security issue if the compiler synthesises them? If there is, I want to know what it is and how it arises.
Why is it allowed to declare properties in categories [...] ?
Properties have many aspects (during compile- and runtime).
They always declare one or two accessor methods on the class.
They can change the selector used when the compiler transforms dot notation into message sends.
In combination with the @synthesize directive (or by default) they can make the compiler synthesize accessor methods and, optionally, ivars.
They add introspection information to the class which is available during runtime.
Most of these aspects are still useful when declaring properties in categories (or protocols), where synthesizing is not available.
Is categorisation purely a compiler technique?
No. Categories, like properties, have both compile-time and runtime aspects.
Categories, for example, can be loaded from dynamic libraries at a later time, so there might already be instances of a class that suddenly gets new methods. That's one of the reasons categories cannot add ivars: existing objects would be missing those ivars, and the runtime would have no way to tell whether an object was created before or after the category was loaded.
Before you go into categories, please reconsider the concept of properties in Obj-C: a property is something you can write to and read from in an abstract sense, using accessors. Usually there is an instance variable backing it, but there is no need for one.
A property may also be useful, e.g., to set a number of different instance variables in a consistent way, or to read from several variables and do some calculation.
The crucial fact here: there is no need to have an instance variable assigned to a property.
A category serves as an extension of an object's behavior, i.e., it extends the object's set of methods without changing its data. If you see a property in its abstract sense, then it adds accessors, so it matches the idea of a category.
But if you synthesize it, an instance variable would be generated, which contradicts the idea of a category.
Thus, a property in a category only makes sense if you use it in the uncommon, abstract way; @synthesize exists to ease the common way.
You may want to read the NSHipster article on associated objects, which shows how to implement property storage in categories.
Quoting from the article: "Why is this useful? It allows developers to add custom properties to existing classes in categories, which is an otherwise notable shortcoming for Objective-C."
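For reference, a minimal sketch of that technique using the runtime's associated objects (the category and property names here are made up; the runtime functions are from objc/runtime.h):

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@interface NSObject (Tagging)
@property (nonatomic, copy) NSString *tag;
@end

@implementation NSObject (Tagging)

static const void *TagKey = &TagKey;

- (NSString *)tag {
    // Reads the value previously associated with this instance.
    return objc_getAssociatedObject(self, TagKey);
}

- (void)setTag:(NSString *)tag {
    // Stores the value in the runtime's association table instead of an ivar.
    objc_setAssociatedObject(self, TagKey, tag, OBJC_ASSOCIATION_COPY_NONATOMIC);
}

@end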
@synthesize informs the compiler to go ahead and provide a default implementation for the setter and the getter.
Said default setters/getters rely on the existence of some kind of storage inside the object.
Categories do not offer any extra storage, so default setters/getters would have no place to store into, or read from.
An alternative is to use:
@dynamic
and then provide your own implementation and your own storage for those properties.
One way is to use associated objects.
Another would be to store into/read from some completely unrelated place, such as an accessible dictionary, NSUserDefaults, or ...
In some cases, for read only properties, you can also reconstruct/compute their values at runtime without any need to store them.
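As a sketch of that last option, here is a read-only property in a category that is computed on each access and needs no storage at all (the category is made up):

#import <Foundation/Foundation.h>

@interface NSString (Words)
@property (nonatomic, readonly) NSUInteger wordCount;
@end

@implementation NSString (Words)

- (NSUInteger)wordCount {
    // Naive count: split on whitespace and ignore empty fragments.
    NSArray *parts = [self componentsSeparatedByCharactersInSet:
                      [NSCharacterSet whitespaceAndNewlineCharacterSet]];
    NSUInteger count = 0;
    for (NSString *part in parts) {
        if (part.length > 0) count++;
    }
    return count;
}

@end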
Where are the static attributes of a class stored? My guess would be in the ABAP memory of the main session, but I'm not sure and cannot find anything in the documentation. Does anyone know for sure?
Check this article for the basic memory layout and terminology, unless you have already done so. The static attributes of a class are handled the same way as the global variables of a function pool (you might think of them as global variables of the class pool, but don't hit me too hard for that analogy). Whenever you open a new internal session (e.g. with SUBMIT), they are reinitialized. You could check this for yourself with a small program that recursively calls itself using SUBMIT ... AND RETURN, as in the sketch below.
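An untested sketch of such a test program (the report name is made up; requires a release with inline declarations):

REPORT zstatic_check.

PARAMETERS p_depth TYPE i DEFAULT 0.

CLASS lcl_counter DEFINITION.
  PUBLIC SECTION.
    CLASS-DATA counter TYPE i.
ENDCLASS.

START-OF-SELECTION.
  lcl_counter=>counter = lcl_counter=>counter + 1.
  WRITE: / 'Depth:', p_depth, 'Counter:', lcl_counter=>counter.
  " If static attributes survived across internal sessions, the counter
  " would grow with the depth; because each SUBMIT opens a new internal
  " session, it is reinitialized and prints 1 every time.
  IF p_depth < 2.
    DATA(lv_next) = p_depth + 1.
    SUBMIT zstatic_check WITH p_depth = lv_next AND RETURN.
  ENDIF.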
I have an application using the Entity Framework code first. My setup is that I have a core service which all other services inherit from. The core service contains the following code:
public static DatabaseContext db = new DatabaseContext();

public CoreService()
{
    db.Database.Initialize(force: false);
}
Then, another class will inherit from CoreService and when it needs to query the database will just run some code such as:
db.Products.Where(blah => blah.IsEnabled);
However, I seem to be getting conflicting stories as to which is best.
Some people advise NOT to do what I'm doing.
Other people say that you should define the context for each class (rather than use a base class to instantiate it)
Others say that for EVERY database call, I should wrap it in a using block. I've never seen this in any of the examples from Microsoft.
Can anyone clarify?
I'm currently at a point where refactoring is possible and quite quick, so I'd like some general advice if possible.
You should use one context per web request. Hold it open for as long as you need it, then get rid of it when you are finished. That's what the using is for.
Do NOT wrap up your context in a Singleton. That is not a good idea.
If you are working with clients like WinForms then I think you would wrap the context around each form but that's not my area.
Also, make sure you know when you are going to be actually executing against your datasource so you don't end up enumerating multiple times when you might only need to do so once to work with the results.
Lastly, you have seen this practice from MS: a lot of the ADO.NET stuff supports being wrapped in a using, but hardly anyone realises it. Something like the sketch below.
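A sketch of the using pattern in a service method, reusing the DatabaseContext and Products names from the question (ProductService is a made-up name):

using System.Collections.Generic;
using System.Linq;

public class ProductService
{
    public List<Product> GetEnabledProducts()
    {
        using (var db = new DatabaseContext())
        {
            // ToList() executes the query here, so nothing tries to
            // enumerate against a disposed context later on.
            return db.Products.Where(p => p.IsEnabled).ToList();
        }
    }
}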
I suggest using the design principle "prefer composition over inheritance".
You can hold a reference to the database context in your base class.
Implement a singleton for getting the DataContext and assign it to that reference.
The conflicts you get are not related to sharing the context between classes but are caused by the static declaration of your context. If you make the context an instance field of your service class, so that every service instance gets its own context, there should be no issues.
The using pattern you mention is not required; instead, you should make sure that context.Dispose() is called when the service is disposed, along the lines of the sketch below.
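A sketch of that arrangement, keeping the question's class names:

using System;

public class CoreService : IDisposable
{
    // Instance field: every service instance gets its own context.
    protected readonly DatabaseContext db = new DatabaseContext();

    public void Dispose()
    {
        db.Dispose();
    }
}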
I'm currently working on a Rails project, and have found times where it's easiest to do
if object.class == Foo
  ...
elsif object.class == Bar
  ...
else
  ...
end
I started doing this in views where I needed to display different objects in different ways, but have found myself using it in other places now, such as in functions that take objects as arguments. I'm not precisely sure why, but I feel like this is not good practice.
If it's not good practice, why so?
If it's totally fine, when are times that one might want to use this specifically?
Thanks!
Not sure why that works for you at all. When you need to test whether an object is an instance of class Foo, you should use
object.is_a? Foo
But it's not good practice in Ruby anyway. It'd be much better to use polymorphism whenever possible. For example, if somewhere in the code you can have objects of two different classes and you need to display them differently, you can define a display method in both classes. After that you can call object.display, and the object will be displayed using the method defined in its own class.
The advantage of that approach is that when you need to add support for a third class, or a whole bunch of new classes, all you'll need to do is define a display method in each of them. Nothing changes in the places where you actually use the method.
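A minimal sketch of that (the class names and the display method are made up):

class Foo
  def display
    "rendering a Foo"
  end
end

class Bar
  def display
    "rendering a Bar"
  end
end

# The caller never checks classes; each object brings its own behavior.
[Foo.new, Bar.new].each { |object| puts object.display }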
It's better to express type-specific behavior using subtyping.
Let the objects know how they are displayed. Create a display method and pass whatever it needs from outside as parameters. Let Foo know how to display a foo and Bar know how to display a bar.
There are many articles on replacing conditionals with polymorphism.
It’s not a good idea for several reasons. One of them is duck typing – once you start explicitly checking for object class in the code, you can no longer simply pass an instance of a different class that conforms to a similar interface as the original object. This makes proxying, mocking and other common design tricks harder. (The point can be also generalized as breaking encapsulation. It can be argued that the object’s class is an implementation detail that you as a consumer should not be interested in. Broken encapsulation ≈ tight coupling ≈ pain.)
Another reason is extensibility. When you have a giant switch over the object type and want to add one more case, you have to alter the switch code. If this code is embedded in a library, for example, the library users can’t simply extend the library’s behaviour without altering the library code. Ideally all behaviour of an object should be a part of the object itself, so that you can add new behaviour just by adding more object types.
If you need to display different objects in a different way, can’t you simply make the drawing code a part of the object?
The demos included in the Prevayler distribution show how to pass a couple of strings (or something similarly simple) into a command constructor in order to create or update an object. The problem is that I have an object called MyObject that has a lot of fields. If I had to pass all of them into the CreateMyObject command manually, it would be a pain.
So an alternative I thought of is to pass my business object itself into the command, but to hang onto a clone of it (keeping in mind that I can't store the BO directly in the command). Of course after executing this command, I would need to make sure to dispose of the original copy that I passed in.
public class CreateMyObject implements TransactionWithQuery {

    private MyObject object;

    public CreateMyObject(MyObject business_obj) {
        this.object = (MyObject) business_obj.clone();
    }

    public Object executeAndQuery(...) throws Exception {
        ...
    }
}
The Prevayler wiki says:
Transactions can't carry direct object references (pointers) to business objects. This has become known as the baptism problem because it's a common beginner pitfall. Direct object references don't work because once a transaction has been serialized to the journal and then deserialized for execution, its object references no longer refer to the intended objects - any objects they may have referred to at first will have been copied by the serialization process! Therefore, a transaction must carry some kind of string or numeric identifiers for any objects it wants to refer to, and it must look up the objects when it is executed.
I think by cloning the passed-in object I will be getting around the "direct object pointer" problem, but I still don't know whether or not this is a good idea...
Cloning doesn't help you with the baptism problem, unless you make sure that the original object has no references to other objects. But that is a different problem than what you described.
If you don't want to write so many create commands, pass in a dictionary (e.g. a Map) of name/value pairs and a key identifying the class to create, as in the sketch below.
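A hedged sketch of that idea. Only org.prevayler.TransactionWithQuery is real Prevayler API here; MyDomain and its create factory method are made-up stand-ins for the prevalent system:

import java.io.Serializable;
import java.util.Date;
import java.util.Map;
import org.prevayler.TransactionWithQuery;

public class CreateEntity implements TransactionWithQuery {

    private final String entityType;                 // key to the class to create
    private final Map<String, Serializable> fields;  // name/value pairs

    public CreateEntity(String entityType, Map<String, Serializable> fields) {
        this.entityType = entityType;
        this.fields = fields;
    }

    public Object executeAndQuery(Object prevalentSystem, Date executionTime)
            throws Exception {
        // The prevalent system resolves the key to a class and builds the
        // object from the field map (hypothetical factory method).
        return ((MyDomain) prevalentSystem).create(entityType, fields);
    }
}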
I have never used Prevayler and I'm not sure I understand your problem, but I think you gave yourself the answer:
Direct object references don't work because once a transaction has been serialized to the journal and then deserialized for execution, its object references no longer refer to the intended objects - any objects they may have referred to at first will have been copied by the serialization process.
In CreateMyObject, keep a unique identifier of MyObject, not a reference. Cloning has nothing to do with it. A sketch of what that looks like is below.
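Here only org.prevayler.Transaction is real API; the findById lookup on the prevalent system is a made-up helper:

import java.util.Date;
import org.prevayler.Transaction;

public class RenameMyObject implements Transaction {

    private final long objectId;   // carries an identifier, not a reference
    private final String newName;

    public RenameMyObject(long objectId, String newName) {
        this.objectId = objectId;
        this.newName = newName;
    }

    public void executeOn(Object prevalentSystem, Date executionTime) {
        // Baptism: resolve the id against the prevalent system at
        // execution time instead of serializing a direct reference.
        MyObject target = ((MyDomain) prevalentSystem).findById(objectId);
        target.setName(newName);
    }
}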