Does factoring class functions into modules improve performance in Rails? - ruby-on-rails

I'm trying to clean up the models of my Rails apps by extracting repeated code into modules. I know that makes my code DRYer, but does it make my app more performant? I guess the question is: is the code from the module loaded into memory once and for all, or is it loaded every time it's included in a class? And consequently, is it advisable to include modules into ActiveRecord::Base, or should they be included only in the classes that need them?

Since require is documented to return a different status when asked to load a file that has already been loaded, I assume the code is only parsed once.
As to performance gains: in Java, indirection adds a penalty to method execution, which can become significant if the method itself is small and is only found several levels of inheritance away.
On the other hand, the JVM will try to compile the Java bytecode into native code, starting with the most frequently called methods. For JRuby this means that methods which are only defined once are more likely to be optimized (early) than other methods.
I don't know the exact behavior of the Ruby interpreter, but I would assume that it also optimizes the most frequently called methods first.
As with any performance issue, I would also assume that there is no "this always performs better" and that individual benchmarks are needed to find the best approach for each situation.
I would advise you to:
do DRY programming, so you can easily adjust your code to new requirements
look at the performance of the resulting program
if you are unsatisfied with the performance (and only then)
identify frequently called and long-running functions
try to move those functions directly into the classes, or rewrite the code so that you call fewer functions overall
Including modules in classes that don't use them can become confusing, and would at best boost performance right after application start because the code may be parsed earlier. I would advise against doing so.

Related

Debating the use of dependency injection in iOS app

I'm working on an iOS app where we need different binaries for each customer based on their needs. A customer may want to change all the colors, icons and texts; we can do that through a white-labeling process. The problem, though, is when they ask for different behavior, for instance removing the login screen and making login optional.
I thought we could use dependency injection and use different handlers for each customer if needed. For instance, we could have LoginHandler1 and LoginHandler2, both implementing ILoginHandler and inheriting from UIViewController.
However, using dependency injection is costly: it slows down the app because resolving is expensive compared to normal instantiation.
The other way is to define all these behaviors in the app and enable/disable them in a plist file, like "is login optional? yes/no".
Any suggestions?
Thanks
You should create the entire object graph up-front, in the composition root. Object creation, and constructor injection, should not take much time at all as long as your constructors are not doing any actual work.
That being said, there are times when creating the entire object graph at the start of the application may take longer than is acceptable. In those cases, you can use lazy-loading to defer the costly initialization until later - while still creating the objects in the composition root.
Mark Seemann describes this approach in more detail here: Compose object graphs with confidence.
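As a rough illustration of that lazy-loading idea, here is a Java sketch (the principle is language-agnostic; ReportService and the wrapper class are made-up names, not part of the question):

import java.util.function.Supplier;

// Hypothetical expensive dependency.
interface ReportService {
    String render();
}

// Virtual proxy: still wired up in the composition root, but the real
// service is only built on first use, deferring the costly initialization.
class LazyReportService implements ReportService {
    private final Supplier<ReportService> factory;
    private ReportService delegate;

    LazyReportService(Supplier<ReportService> factory) {
        this.factory = factory;
    }

    public String render() {
        if (delegate == null) {
            delegate = factory.get(); // costly construction happens here, once
        }
        return delegate.render();
    }
}

The consumer still receives its dependency from the composition root; the expensive construction only happens on the first call to render().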
I thought we could use dependency injection and use different handlers for each customer if needed.
You thought right. Flexibility is one of the main reasons people use DI.
However, using dependency injection is costly: it slows down the app because resolving is expensive compared to normal instantiation.
It really doesn't cost that much at all. Have you tried it yourself? Unless the object in question (i.e. the object being injected) is very expensive to instantiate, you have no real reason to stay away from DI and Inversion of Control. Also, as @Lilshieste noted above, creating the object graph up front (see AppDelegate) will probably make this even less of a problem.
A good way of doing that is described here:
http://cocoapatterns.com/passing-data-between-view-controllers/ and here http://cocoapatterns.com/ios-view-controller-transitions-mediator-pattern/
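To put the cost concern in perspective, constructor injection by itself is just the storing of a reference. A minimal sketch, using Java stand-ins for the question's iOS types (the names are hypothetical):

// Stand-ins for ILoginHandler / UIViewController from the question.
interface LoginHandler {
    void login();
}

class LoginScreen {
    private final LoginHandler handler; // injection is just storing a reference

    LoginScreen(LoginHandler handler) {
        this.handler = handler;
    }

    void submit() {
        handler.login();
    }
}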
The other way is to define all these behaviors in the app and enable/disable them in a plist file, like "is login optional? yes/no".
While less "elegant", this solution is a pretty useful one, especially if the project is not really big in terms of the number of classes and VCs. It is also the easiest one to implement if the app code is already laid out and introducing major design changes would require a lot of refactoring.
Always decide based on the task at hand; there is rarely, if ever, a single solution to a software design problem.

What strategy should be used when exposing C++ to Lua

I have a C++ library which has functionality exposed to Lua, and I am seeking opinions on the best ways to organise my Lua code.
The library is a game engine with a component-based game object system. I want to be able to write some of these components as classes in Lua. I am using LuaBind, so I can do this, but there are some implementation choices I must make, and I would like to know how others have done it.
Should I have just one global lua_State, or one per object, one per scene, etc?
This sounds like a lot of memory overhead, but will keep everything nice and separate.
Should I have one GLOBALS table, or one per object, which can be put in place before a call to a member? This would seem to minimize the chances of some class deciding to use globals, and another accidentally overwriting it, with less memory overhead than having many lua_States.
Or should I just bung everything in the one globals table?
Another question involves the Lua code itself. Two strategies occur to me: first, putting all class definitions in one place and loading them when the application launches; second, putting one class definition per file, and simply making sure that file is loaded when I need to instantiate it.
I'd appreciate anyone's thoughts on this, thanks.
While LuaBind is certainly very nifty and convenient, as your engine grows, so will your compile times, drastically.
If you already have, or are planning to add, a messaging system (which I heavily recommend, specially for networking), then it simplifies problems significantly. In this case, what you will need to do is simply bind a few key functions to interface with the messaging system. This will keep your compile times down, and give you a very flexible system.
Since you are doing a component-based engine (good choice, BTW), it makes more sense to integrate scripting as an object component. It then usually makes sense to make each scripting component a new coroutine that runs the behavior of that particular object. You need not worry about memory too much; Lua states are very light, and can be made really fast if you interface your memory manager with Lua.
If you implement scripting as a component, it is still a good idea to have global or per-level scripts loaded (to coordinate event triggers from other objects, or enemy spawn timers, for example).
As far as loading scripts goes, it would not be bad practice to just load all the scripts you will need for a level at once and keep them in a global table for fast access; loading Lua scripts is pretty fast, especially if you pre-compile them.
One consideration is how you're planning to thread things. If you want to run the code for two Game Objects in parallel, for example, then they really ought to have their own separate lua_States so that they can both be running at the same time. (Of course, that also means that they can't really share any state, except via the C code where you'd need to be conscious of thread-safety.)
As to the Lua code, I'd recommend loading everything when the app launches (unless you really need to do "lazy" loading of your core classes on demand). It typically simplifies maintenance and debugging. And in the case of code being loaded that's no longer needed, the garbage collector will clean that up with a quickness. :-)

Erlang: multiple behaviors defined in the same module?

Q: I'd like to have an idea of the pros and cons of defining multiple behaviors in the same module file.
E.g.
-module(someapp_sup).
-behavior(supervisor).
-behavior(application).
Using this sort of layout, I can save a module file while not losing much on the maintainability side (the whole application is started through someapp_sup:start()).
As long as the callbacks defined in one behavior don't conflict with the callbacks of another behavior (say you defined your own behavior, for example), there's nothing wrong with doing this, other than potentially more confusing code. In the supervisor/application case above the callback sets are disjoint (init/1 versus start/2 and stop/1), so they coexist fine. Obviously you can curb the confusion with some well-placed comments and by laying the code out sensibly in the file.

Is there any hard data on the value of Inversion of Control or dependency injection?

I've read a lot about IoC and DI, but I'm not really convinced that you gain a lot by using them in most situations.
If you are writing code that needs pluggable components, then yes, I see the value. But if you are not, then I question whether changing a dependency from a class to an interface is really gaining you anything, other than more typing.
In some cases, I can see where IoC and DI help with mocking, but if you're not using mocking or TDD, then what's the value? Is this a case of YAGNI?
I doubt you will have any hard data on it, so I will add some thoughts on it.
First, you don't use DI (or the other SOLID principles) because it helps you do TDD. It's the other way around: you do TDD because it helps you with the design, which usually means you end up with code that follows those principles.
Discussing why to use interfaces is a different matter, see: https://stackoverflow.com/questions/667139/what-is-the-purpose-of-interfaces.
I will assume you agree that having your classes do many different things results in messy code. Thus, I am assuming you are already going for SRP.
Because you have different classes that do specific things, you need a way to relate them. If you relate them inside the classes (i.e. the constructors), you get plenty of code that uses specific versions of the classes. This means that making changes to the system will be hard.
You are going to need to change the system; that's a fact of software development. You can invoke YAGNI about not adding specific extra features, but not about the fact that you will need to change the system. In my case that's really important, as I work in weekly sprints.
I use a DI framework where configuration is done through code. With a really small amount of configuration code you hook up lots of different relations. So, once you take away the interface-vs-concrete-class discussion, you are actually saving typing, not the other way around. Also, for the cases where a concrete class appears in a constructor, the framework hooks it up automatically (I don't have to configure it) while building the rest of the relations. It also allows me to control some objects' lifetimes; in particular, I can configure an object to be a Singleton and it hands out a single instance every time.
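The answer doesn't name the framework, but as an illustration of configuration-in-code with lifetime control, a Java container such as Google Guice looks roughly like this (the application types here are made up):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.Singleton;

// Hypothetical application types.
interface MessageSender {
    void send(String text);
}

class SmtpSender implements MessageSender {
    public void send(String text) { /* ... */ }
}

class Notifier {
    private final MessageSender sender;

    @Inject
    Notifier(MessageSender sender) { // concrete classes need no explicit binding
        this.sender = sender;
    }

    void greet(String user) {
        sender.send("Hello " + user);
    }
}

class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        // One line relates an interface to an implementation and controls its lifetime.
        bind(MessageSender.class).to(SmtpSender.class).in(Singleton.class);
    }
}

class Main {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        injector.getInstance(Notifier.class).greet("world");
    }
}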
Also note that just using these practices isn't more overhead. Using them for the first time is what causes the overhead (because of the learning process and, in some cases, a mindset change).
Bottom line: you ain't gonna need to put all those constructor calls all over the place to go faster.
The most significant gains from DI are not necessarily due to the use of interfaces. You do not actually need to use interfaces to have beneficial effects of dependency injection. If there's only one implementation you can probably inject that directly, and you can use a mix of classes and interfaces.
You still get loose coupling, and in quite a few development environments you can introduce that interface with a few keypresses if needed.
Hard data on the value of loose coupling I cannot give, but it's been a vision in textbooks for as long as I can remember. Now it's real.
DI frameworks also give you some quite amazing features when it comes to hierarchical construction of large structures. Instead of looking for the leanest DI framework around, I'd recommend you look for a full-featured one. Less isn't always more, at least when it comes to learning about new ways of programming. Then you can go for less.
Apart from testing, the loose coupling is also worth it.
I've worked on components for an embedded Java system, which had a fixed configuration of objects after startup (about 50 mostly different objects).
The first component was legacy code without dependency injection, and the sub-objects were created all over the place. It happened several times that, for some modification, some code needed to talk to an object which was only available three constructors away. So what can you do but add another parameter to the constructor and pass it through, or even store it in a field to pass it on later? In the long run things became even more tangled than they already were.
The second component I developed from scratch, and I used dependency injection (without knowing it at the time). That is, I had one factory which constructed all objects and injected them on a need-to-know basis. Adding another dependency was easy: just add it to the factory and the object's constructor (or add a setter to avoid loops). No unrelated code needed to be touched.
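A minimal sketch of that hand-rolled factory approach (the classes are made-up stand-ins, not the original system):

// Hypothetical fixed object graph, wired in one place.
interface Clock {
    long now();
}

class SystemClock implements Clock {
    public long now() {
        return System.currentTimeMillis();
    }
}

class Logger {
    private final Clock clock;

    Logger(Clock clock) {
        this.clock = clock;
    }

    void log(String msg) {
        System.out.println(clock.now() + " " + msg);
    }
}

class Worker {
    private final Logger logger;

    Worker(Logger logger) {
        this.logger = logger;
    }

    void run() {
        logger.log("working");
    }
}

// The single factory: the only code that knows how the graph is wired.
// Adding a dependency means touching this class and the consuming
// constructor; no unrelated code is affected.
class ObjectFactory {
    private final Clock clock = new SystemClock();
    private final Logger logger = new Logger(clock);
    private final Worker worker = new Worker(logger);

    Worker worker() {
        return worker;
    }
}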

How do you make code reusable? [closed]

Any code can be reused in one way or another, at least if you modify it. Random code is not very reusable as such. When I read books on the subject, they usually say you should explicitly make the code reusable by taking other usage situations into account too. But a piece of code should not become an omnipotent, all-doing class either.
I would like to have reusable code that I don't have to change later. How do you make code reusable? What are the requirements for code being reusable? What are the things that reusable code should definitely have and what things are optional?
See 10 tips on writing reusable code for some help.
Keep the code DRY. DRY means "Don't Repeat Yourself".
Make a class/method do just one thing.
Write unit tests for your classes AND make it easy to test classes.
Keep the business logic or main code separate from any framework code.
Try to think more abstractly and use Interfaces and Abstract classes.
Code for extension. Write code that can easily be extended in the future.
Don't write code that isn't needed.
Try to reduce coupling.
Be more Modular
Write code as if it were an external API (see the sketch after this list)
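As a tiny illustration of several of these tips at once (a single-purpose class behind a small interface; the names are made up):

// A small, single-purpose class with a narrow surface: easy to test,
// easy to reuse. (Names are illustrative.)
interface PriceFormatter {
    String format(long cents);
}

class UsdPriceFormatter implements PriceFormatter {
    public String format(long cents) {
        // Sketch only: assumes a non-negative amount.
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}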
If you take the test-driven development approach, then your code only becomes reusable as you refactor it based on forthcoming scenarios.
Personally I find that constant refactoring produces cleaner code than trying to second-guess what scenarios I need to code a particular class for.
More than anything else, maintainability makes code reusable.
Reusability is rarely a worthwhile goal in itself. Rather, it is a by-product of writing code that is well structured, easily maintainable and useful.
If you set out to make reusable code, you often find yourself trying to take into account requirements for behaviour that might be required in future projects. No matter how good you become at this, you'll find that you get these future-proofing requirements wrong.
On the other hand, if you start with the bare requirements of the current project, you will find that your code can be clean and tight and elegant. When you're working on another project that needs similar functionality, you will naturally adapt your original code.
I suggest looking at the best practices for your chosen programming language / paradigm (e.g. patterns and SOLID for Java / C# types), the Lean / Agile programming literature, and (of course) the book "Code Complete". Understanding the advantages and disadvantages of these approaches will improve your coding practice no end. All your code will then become reusable - but 'by accident', rather than by design.
Also, see here: Writing Maintainable Code
You'll write various modules (parts) when writing a relatively big project. Reusable code in practice means you'll have created libraries that other projects needing the same functionality can use.
So, you have to identify modules that can be reused. For that:
Identify the core competence of each module. For instance, if your project has to compress files, you'll have a module that will handle file compression. Do NOT make it do more than ONE THING. One thing only.
Write a library (or class) that will handle file compression, without needing anything more than the file to be compressed, the output and the compression format (see the sketch below). This will decouple the module from the rest of the project, enabling it to be (re)used in a different setting.
You don't have to get it perfect the first time. When you actually reuse the library you will probably find flaws in the design (for instance, you didn't make it modular enough to be able to add new compression formats easily), and you can fix them the second time around and improve the reusability of your module. The more you reuse it (and fix the flaws), the easier it'll become to reuse.
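A minimal sketch of such a decoupled compression module, assuming Java and illustrative names:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// The compressor needs nothing from the rest of the project:
// just input, output and a format. (Names are illustrative.)
enum CompressionFormat { GZIP, ZIP }

interface FileCompressor {
    void compress(InputStream in, OutputStream out, CompressionFormat format) throws IOException;
}

class BasicFileCompressor implements FileCompressor {
    public void compress(InputStream in, OutputStream out, CompressionFormat format) throws IOException {
        if (format != CompressionFormat.GZIP) {
            throw new UnsupportedOperationException("only GZIP in this sketch");
        }
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            in.transferTo(gz); // Java 9+ stream copy
        }
    }
}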
The most important thing to consider is decoupling: if you write tightly coupled code, reusability is the first casualty.
Leave all the needed state or context outside the library. Add methods to specify the state to the library.
For most definitions of "reuse", reuse of code is a myth, at least in my experience. Can you tell I have some scars from this? :-)
By reuse, I don't mean taking existing source files and beating them into submission until a new component or service falls out. I mean taking a specific component or service and reusing it without alteration.
I think the first step is to get yourself into a mindset that it's going to take at least 3 iterations to create a reusable component. Why 3? Because the first time you try to reuse a component, you always discover something that it can't handle. So then you have to change it. This happens a couple of times, until finally you have a component that at least appears to be reusable.
The other approach is to do an expensive forward-looking design. But then the cost is all up-front, and the benefits (may) appear some time down the road. If your boss insists that the current project schedule always dominates, then this approach won't work.
Object-orientation allows you to refactor code into superclasses. This is perhaps the easiest, cheapest and most effective kind of reuse. Ordinary class inheritance doesn't require a lot of thinking about "other situations"; you don't have to build "omnipotent" code.
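For example, a minimal Java sketch of that kind of superclass reuse (the report classes are made-up names):

// Shared behavior lives once in the superclass; subclasses supply only
// what differs.
abstract class Report {
    final String render() {
        return header() + "\n" + body();
    }

    private String header() {
        return "=== Report ===";
    }

    protected abstract String body();
}

class SalesReport extends Report {
    protected String body() {
        return "sales: 42";
    }
}

class InventoryReport extends Report {
    protected String body() {
        return "items: 7";
    }
}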
Beyond simple inheritance, reuse is something you find more than you invent. You find reuse situations when you want to reuse one of your own packages to solve a slightly different problem. When you want to reuse a package that doesn't precisely fit the new situation, you have two choices.
Copy it and fix it. You now have two nearly identical packages -- a costly mistake.
Make the original package reusable in two situations.
Just do that for reuse. Nothing more. Too much thinking about "potential" reuse and undefined "other situations" can become a waste of time.
Others have mentioned these tactics, but here they are formally. These three will get you very far:
Adhere to the Single Responsibility Principle - it ensures your class only "does one thing", which means it's more likely it will be reusable for another application which includes that same thing.
Adhere to the Liskov Substitution Principle - it ensures your code "does what it's supposed to without surprises", which means it's more likely it will be reusable for another application that needs the same thing done.
Adhere to the Open/Closed Principle - it ensures your code can be made to behave differently without modifying its source, which means it's more likely to be reusable without direct modification (see the sketch after this list).
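As a small sketch of the Open/Closed idea in Java (hypothetical names): new behavior is added by writing a new implementation rather than by editing existing code.

interface DiscountRule {
    double apply(double price);
}

class NoDiscount implements DiscountRule {
    public double apply(double price) {
        return price;
    }
}

class SeasonalDiscount implements DiscountRule {
    public double apply(double price) {
        return price * 0.9;
    }
}

class Checkout {
    private final DiscountRule rule;

    Checkout(DiscountRule rule) {
        this.rule = rule;
    }

    double total(double price) {
        return rule.apply(price); // Checkout never changes when new rules are added
    }
}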
To add to the above-mentioned items, I'd say:
Make the functions you need to reuse generic
Use configuration files and make the code use the properties defined in files/db (see the sketch after this list)
Clearly factor your code into functions/classes that provide independent functionality and can be used in different scenarios, and define those scenarios using the config files
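For instance, a minimal sketch using java.util.Properties (the file name and property names are made up):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class FeatureConfig {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("app.properties")) {
            props.load(in);
        }
        // Behavior is selected by configuration, not by editing code.
        boolean loginOptional = Boolean.parseBoolean(props.getProperty("login.optional", "false"));
        System.out.println("login optional: " + loginOptional);
    }
}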
I would add the concept of "Class composition over class inheritance" (which is derived from other answers here).
That way the "composed" object doesn't care about the internal structure of the object it depends on - only its behavior - which leads to better encapsulation and easier maintainability (testing, fewer details to care about).
In languages such as C# and Java it is often crucial, since there is no multiple inheritance, so it helps you avoid the inheritance-graph hell you might otherwise end up with.
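A minimal Java sketch of composition (hypothetical names):

// The Player depends only on the Weapon behavior, not on its internals.
interface Weapon {
    int damage();
}

class Sword implements Weapon {
    public int damage() {
        return 7;
    }
}

class Player {
    private final Weapon weapon; // composed, not inherited

    Player(Weapon weapon) {
        this.weapon = weapon;
    }

    int attack() {
        return weapon.damage(); // swapping or mocking the Weapon needs no change to Player
    }
}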
As mentioned, modular code is more reusable than non-modular code.
One way to help towards modular code is to use encapsulation, see encapsulation theory here:
http://www.edmundkirwan.com/
Ed.
Avoid reinventing the wheel. That's it. And that by itself has many benefits mentioned above. If you do need to change something, then you just create another piece of code, another class, another constant, library, etc... it helps you and the rest of the developers working in the same application.
Comment, in detail, everything that seems like it might be confusing when you come back to the code next time. Excessively verbose comments can be slightly annoying, but they're far better than sparse comments, and can save hours of trying to figure out WTF you were doing last time.

Resources