Creating dummy nodes for LeetCode LinkedList questions

I'm seeing that a lot of solutions to these LeetCode LinkedList problems involve creating a dummy node, e.g. the pattern sketched below.
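The original post linked to an example that is no longer shown, so here is a minimal sketch of the idiom in Java. The problem chosen (removing every node with a given value) and the exact ListNode shape are assumptions for illustration, not the code the poster linked to.

```java
// Minimal ListNode definition, in the style commonly used in LeetCode problems.
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class Solution {
    // Removes all nodes whose value equals target.
    // The dummy node lets "delete the head" be handled exactly like any other deletion.
    static ListNode removeValue(ListNode head, int target) {
        ListNode dummy = new ListNode(0); // the one extra object in question
        dummy.next = head;
        ListNode prev = dummy;
        while (prev.next != null) {
            if (prev.next.val == target) {
                prev.next = prev.next.next; // unlink, no head special case needed
            } else {
                prev = prev.next;
            }
        }
        return dummy.next; // the new head, even if the original head was removed
    }
}
```

Without the dummy, the method would need a separate loop (or conditional) just to handle removals at the head.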
It seems to me that constructing this one additional ListNode object isn't a big deal and, more importantly, considerably simplifies the code. However, in my school, creating this kind of additional object is SEVERELY penalized.
I'm just wondering: is creating this extra dummy node acceptable in actual industry work? It makes code a lot more concise, yet is technically using extra resources.

is creating this extra dummy node acceptable in actual industry work?
Yes. Although there aren't that many examples I can think of where you would use linked lists in production code and would have to implement the linked list data structure yourself (as opposed to using an implementation provided by the programming language or a standard library that comes with it).
It makes code a lot more concise
Yes
...yet is technically using extra resources.
Yes, but no more than if you used a couple of additional local variables in your code, so this is not really an issue. A dummy node is O(1) auxiliary space, which is what you use anyway when iterating through a linked list with a node pointer/reference.
in my school, creating this kind of additional object is SEVERELY penalized.
Too bad.

Strategy to reduce duplicate code in many modules

The Situation
So I have created some code in the form of modules that each represent a medical questionnaire (I'm calling them Catalogs). Each questionnaire has its own module, as they may differ slightly in their content and associated calculations, but they are all essentially made up of simple questions with boolean/numeric responses. Here is an example:
http://www.janssenmedicalinformation.ca/assets/pdf/HarveyBradshaw_English.pdf
These Catalog modules are included in an Entry class that collects responses matching the question names. Each questionnaire is transformed into a DEFINITION, which is used in the Entry to do things like the following (a rough sketch appears after the list):
Validate inputs
Check completeness
Calculate scoring
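For concreteness, here is a hedged sketch of that shape, written in Java rather than the poster's Ruby; every name here (QuestionDef, CatalogDefinition, Entry) is an assumption, since the gists aren't reproduced in this excerpt. The idea is that one shared engine interprets per-catalog definitions instead of each catalog module re-implementing validation, completeness, and scoring.

```java
import java.util.*;

// A question with a declared response range (boolean questions could use 0..1).
record QuestionDef(String name, int min, int max) {}

class CatalogDefinition {
    final List<QuestionDef> questions;
    CatalogDefinition(List<QuestionDef> questions) { this.questions = questions; }
}

class Entry {
    private final CatalogDefinition definition;
    private final Map<String, Integer> responses = new HashMap<>();

    Entry(CatalogDefinition definition) { this.definition = definition; }

    void respond(String question, int value) { responses.put(question, value); }

    // Validate inputs: every given response must be in its question's declared range.
    boolean valid() {
        return definition.questions.stream().allMatch(q -> {
            Integer v = responses.get(q.name());
            return v == null || (v >= q.min() && v <= q.max());
        });
    }

    // Check completeness: every question must have a response.
    boolean complete() {
        return definition.questions.stream().allMatch(q -> responses.containsKey(q.name()));
    }

    // Calculate scoring: a plain sum here; custom groupings and oddities like
    // "Current Weight" would need per-catalog hooks, which is where the
    // duplication pressure described below comes from.
    int score() {
        return responses.values().stream().mapToInt(Integer::intValue).sum();
    }
}
```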
Here are two examples for reference that illustrate the problem of duplication... much of the code is similar, but not exactly the same.
https://gist.github.com/theworkerant/3a074d5d2a642ded1b96
The Problem
There is a lot of duplication here, but I'm not sure about the best strategy to remove it. A few things make this particular problem difficult and make me lean towards accepting some duplication, as opposed to a system that is too strict to work. The system needs to remain flexible enough to accommodate currently unknown medical questionnaires of a similar nature, so I need to be careful (which is the reason I've gone with a Module system so far).
Here are some examples:
Each Catalog can have slightly different scoring requirements and custom grouping of questions that represent one "score"
Potentially many Catalogs are included in an Entry class and can't step on each other
Some Catalogs incorporate things like "Current Weight" for calculations, breaking the 1-5 or 1-10 paradigm and not fitting very nicely into simple sum reductions.
One Catalog requires a week of previous entries in order to be valid, a sort of weird custom validation.
The Question:
What strategies might be employed here to reduce duplication overall? I'm not looking for tweaks that cut out a few lines from these specific examples. Implementation cost is a consideration.
Possibilities:
Put some of this into a database (sounds pretty good, but I think the cost of implementation could be high)
I fear there could be room for improvement in my metaprogramming here; perhaps there are better ways to accomplish this through dynamic method creation or other voodoo.
Thanks!

Rails object structure for reporting metrics

I've asked a couple of questions around this subject recently, and I think I'm managing to narrow down what I need to do.
I am attempting to create some "metrics" (quotes because these should not be confused with metrics relating to the performance of the application; these are metrics generated from application data) in a Rails app. Essentially, I would like to be able to use something similar to the following in my view:
metric(@customer, 'total_profit', '01-01-2011', '31-12-2011').result
This would give the total profit for the given customer for 2011.
I can, of course, create a metric model with a custom result method, but I am confused about the best way to go about creating the custom metrics (e.g. total_profit, total_revenue, etc.) in such a way that they are easily extensible so that custom metrics can be added on a per-user basis.
My initial thoughts were to attempt to store the formula for each custom metric in a structure with operand, operation and operation_type models, but this quickly got very messy and verbose, and was proving very hard to do in terms of adding each metric.
My thoughts now are that perhaps I could create a custom metrics helper method to hold each of my metrics (thus I could just hard-code each one and pass variables to each method), but how extensible would this be? This option doesn't seem very Rails-esque.
Can anyone suggest a better alternative for approaching this problem?
EDIT: The answer below is a good one in that it keeps things very simple, though I'm concerned it may be fraught with danger, as it uses eval (thus there is no prospect of ever running user-supplied code). Is there another option for doing this? My previous approach, where operands etc. were broken down into chunks, used a combination of constantize and instance_variable_get; is there a way these could be used to make the execution of a string safer?
This question was largely answered with some discussion here: Rails - Scalable calculation model.
For anyone who comes across this, the solution is essentially to ensure that an operation always has exactly two operands, but an operand can be either an attribute or the result of a previous calculation (i.e. it can be a metric itself), which makes it highly scalable. This avoids the need to eval anything, and thus avoids the potential security holes that entails. A rough sketch of the idea follows.
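As a hedged illustration of that composite idea, here it is in plain Java rather than Rails; every name below is an assumption for the sketch, not the API from the linked discussion.

```java
import java.util.function.ToDoubleFunction;

// Anything that can produce a number from a record.
interface Operand<T> {
    double value(T record);
}

// Leaf operand: reads an attribute off the record.
class Attribute<T> implements Operand<T> {
    private final ToDoubleFunction<T> getter;
    Attribute(ToDoubleFunction<T> getter) { this.getter = getter; }
    public double value(T record) { return getter.applyAsDouble(record); }
}

// A metric combines exactly two operands with an operation, and is itself
// an Operand, so metrics can nest arbitrarily without eval.
class Metric<T> implements Operand<T> {
    enum Op { ADD, SUBTRACT, MULTIPLY, DIVIDE }
    private final Operand<T> left, right;
    private final Op op;
    Metric(Operand<T> left, Op op, Operand<T> right) {
        this.left = left; this.op = op; this.right = right;
    }
    public double value(T record) {
        double l = left.value(record), r = right.value(record);
        return switch (op) {
            case ADD -> l + r;
            case SUBTRACT -> l - r;
            case MULTIPLY -> l * r;
            case DIVIDE -> l / r;
        };
    }
}
```

A total_profit metric would then be something like new Metric<>(revenue, Op.SUBTRACT, costs), and that metric can in turn serve as an operand of another metric.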

How crazy should I get with turning things into objects?

I'm still new to OOP, and the way I initially perceived it was to throw a lot of procedural-looking code inside of objects and think I'd done my job. But as I've spent the last few weeks doing a lot of thinking, reading, and coding (and looking at good code, which is a hugely under-rated resource), I believe I'm starting to grasp the different outlook. It's really just a matter of clarity, simplicity, and organization once you get down to it.
But now I'm starting to look at things as objects that are not as black-and-white a slam-dunk case for being an object. For example, I have a parser, and usually the parser returns some strings that I have to deal with. But it has one specialized case where it has to return an array, and what goes in that array and how it's formatted has specialized rules. This only amounts to two lines plus one method of code, but this code sticks out to me as not fitting cleanly in the Parser class, and I want to turn it into its own "ActionArray" object.
But is it going too far? Has OOP become a hammer that is making me look at everything like a nail? Is it possible to go too far with turning things into objects?
It's your call, but you should think of objects as real life objects.
Take for example a car. You could describe a car with different objects:
Engine
Wheels
Chassis
Or you could describe a car with just one object:
Car
You can keep it simple and stupid or you can spread the dependency to different objects.
As a general guideline, I think Sesame Street says it best: you need a new object when "one of these things is not like the others".
Listen to your code. If it is telling you that your objects are becoming polluted with non-essential state and behavior (and thus violating the "Single Responsibility Principle"), or that one part of your object has a rate of change that is different from the rest, and so on, it is telling you that you are missing an object.
Do the simplest thing that could possibly work. When that no longer works, do the next simplest thing. And so on. In general, this means that a system tends to move from fewer, larger objects to more, smaller objects; but not always.
There are a number of great resources for OO design. In addition to the ones already mentioned, I highly recommend Smalltalk Best Practice Patterns and Implementation Patterns by Kent Beck. They use Smalltalk and Java examples, respectively, but I find the principles translate quite well to other OO languages.
Design patterns are your friend. A class rarely exists in a vacuum. It interacts with other classes, and the mechanisms by which your classes are coupled together is going to directly affect your ability to modify your code in the future. With poor class design, a change that you make in one class may ripple down and force changes in other classes, which cause you to have to change other classes, etc.
Design patterns force you to think about how classes relate to each other. For example, your Parser class might choose to implement the Strategy design pattern to abstract out the mechanism for parsing. Or you might decide to create your Parser using the Template Method design pattern, and then have each actual Parser subclass complete the template.
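A hedged sketch of the Strategy suggestion, with all names assumed (the poster's actual Parser isn't shown in the question):

```java
// The parsing mechanism is abstracted behind a strategy interface.
interface ParseStrategy {
    Object parse(String input);
}

// The usual case: the parser hands back a string.
class PlainTextStrategy implements ParseStrategy {
    public Object parse(String input) { return input.trim(); }
}

// The specialized case: an array with its own formatting rules,
// encapsulated here instead of leaking into Parser.
class ActionArrayStrategy implements ParseStrategy {
    public Object parse(String input) { return input.split(","); }
}

class Parser {
    private final ParseStrategy strategy;
    Parser(ParseStrategy strategy) { this.strategy = strategy; }
    Object parse(String input) { return strategy.parse(input); }
}
```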
The original book on design patterns (Design Patterns: Elements of Reusable Object-Oriented Software) is excellent, but it can be dense and intimidating reading if you are new to OOP. A more accessible book (and one specific to Ruby) is Design Patterns in Ruby, which has a nice introduction to design patterns and talks about the Ruby way of implementing them.
Object-oriented programming is a pretty tricky tool. Many people today run into the same conflict by forgetting the fundamental OOP purpose, which is improving code maintainability.
You can always brainstorm about your future OO code's reusability and maintainability, and decide for yourself if it's the best way to go. Take a look at this interesting study:
Potok, Thomas; Vouk, Mladen; Rindos, Andy (1999). "Productivity Analysis of Object-Oriented Software Developed in a Commercial Environment".

Questions about application service design

I come from a mostly n-tier background, and I'm trying to move more towards a DDD architecture. I'm trying to find best practices for designing the application service, and after a few searches, am still left with a few questions. Granted, I know I can't be the first person to ask these questions, so if you know where these are answered, just point me the way and I'll happily close this.
Here are my main questions:
How "open" should your signatures be? For example, is it better to be more rigid with your signatures and use simple types as parameters when possible, or is it better to use objects (messages?) that can later be modified without breaking the signature?
If you want to expose variations of a signature, for example, a UserSearch method that returns a list of users based on various (and sometimes optional) search criteria, is it better to:
A. Use overloads
B. Use optional (or nullable) parameters
C. Break each scenario into its own unique method
D. Use messages
I know that some of these answers are subjective, and also depend on what all will be calling your application service. But I'm just trying to get a general direction of things to consider and other best practices at this point.
Thanks in advance.
Good questions. Thinking about the API is obviously important.
1) How open they should be would, for me, depend on who the consumers are. If this application service is only being used within the context of your own solution and/or team, then I think it's fine to have specific messages (or rather their interfaces) or DTOs (data transfer objects). Though if it's just as easy, keeping to simple types is best in my book, and definitely better if the service is being consumed by others. If simple types don't suffice, then use interfaced messages that provide just enough. Again, if you are going to be distributed across different platforms, then simple messages of simple types are not a bad way to go.
2) Why not have a SearchCriteria object as a parameter? It could be a SearchCriteria message of simple types if you are looking at this as the start of a messaging bus. A rough sketch of that shape follows.
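A hedged sketch of that suggestion (the type and its fields are assumptions, not an established API):

```java
import java.util.List;

// Optional criteria are simply left null; the service ignores nulls.
class UserSearchCriteria {
    String nameContains;   // optional substring filter
    String email;          // optional exact match
    Boolean active;        // tri-state: null means "don't filter on this"
    Integer maxResults;    // optional cap
}

interface UserService {
    // One stable signature instead of an overload per criteria combination;
    // new criteria can be added without breaking callers.
    List<User> search(UserSearchCriteria criteria);
}

class User { /* details elided */ }
```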
As you say, your question is a little open but I'd be interested to hear more as it sounds like you are asking the right questions at least.
Jerad, those are tough questions to answer generally, as you noted.
My personal preference is to use primitives in method signatures where possible. If I need to pass 3+ primitives to a method, I define custom data transfer objects.
The thinking being: if multiple values are being passed together, it's likely they represent a concept in your problem space, and thus should become an object. For example, if you are passing X and Y coordinates to a method, I'd recommend creating a Point class or struct that represents that concept.
The only time I'd end up with variations on a signature, it would be to provide convenience methods that provide default values for method parameters. To continue the above example, a Draw method might not require a Point, in which case I'd use (0,0).
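A minimal sketch of both points, the promoted Point concept and the convenience default (names assumed):

```java
// Two primitives that travel together become a concept in the problem space.
record Point(int x, int y) {
    static final Point ORIGIN = new Point(0, 0);
}

class Canvas {
    void draw(Point at) { /* rendering elided */ }

    // Convenience variation of the signature: supplies the (0,0) default.
    void draw() { draw(Point.ORIGIN); }
}
```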
So, I'd answer #1 with "not very open" and #2 with A.
I hope that helps.

Going bananas with loose coupling and dependency injection

With the latest additions to our dependency injection framework (annotations in Spring), the marginal cost of creating DI-managed components seems to have crossed some critical threshold. While there previously was an overhead associated with Spring (tons of XML and additional indirections), dependency injection seems to have started going where lots of patterns go: under the hood, where they "disappear".
The consequence of this is that the conceptual overhead associated with a large number of components becomes acceptable. It's arguable that we could make a system where most classes expose only a single public method and build the whole system by just aggregating these pieces like crazy. In our case a few things are given: the user interface of the application has some functional requirements that shape the topmost services, and the back-end systems control the lower part. But in between these two, everything is up for grabs.
Our constant discussion is really: why are we grouping things in classes, and what should the principles be? A couple of things are certain: the facade pattern is dead and buried, and any service containing multiple unrelated features tends to get split up. "Unrelated feature" is interpreted in a much stricter sense than I have ever used before.
In our team there are two prevailing trains of thought. One faction holds that implementation dependencies restrict grouping: any functionality in a single class should preferably be a client of all of that class's injected dependencies. We are a DDD project, and the other faction thinks the domain restricts grouping (CustomerService, or finer-grained CustomerProductService and CustomerOrderService); to them, normalized usage of injected dependencies is unimportant.
So in the loosely coupled DI universe, why are we grouping logic in classes ?
edit: duffymo points out that this may be moving towards a functional style of programming, which brings up the issue of state. We have quite a few "State" objects that represent (small) pieces of relevant application state. We inject these into any service that has a legitimate need for that state. (The reason we use "State" objects instead of regular domain objects is that Spring constructs these at an unspecified time. I see this as a slight workaround, or an alternate solution to letting Spring manage the actual creation of domain objects. There may be better solutions here.)
So, for instance, any service that needs OrderSystemAccessControlState can just inject it, and the scope of this data is not readily known to the consumer. Some of the security-related state is typically used at a lot of different levels but is totally invisible on the levels in between. I really think this fundamentally violates functional principles. I even had a hard time adjusting to this concept from an OO perspective, but as long as the injected state is precise and strongly typed, the need is legitimate, i.e. the use case is proper.
The overriding principles of good OO design do not stop at loose coupling; they also include high cohesion, which gets ignored in most discussions.
High Cohesion
In computer programming, cohesion is a measure of how strongly-related or focused the responsibilities of a single module are. As applied to object-oriented programming, if the methods that serve the given class tend to be similar in many aspects, then the class is said to have high cohesion. In a highly-cohesive system, code readability and the likelihood of reuse are increased, while complexity is kept manageable.
Cohesion is decreased if:
* The functionality embedded in a class, accessed through its methods, has little in common.
* Methods carry out many varied activities, often using coarsely-grained or unrelated sets of data.
Disadvantages of low cohesion (or "weak cohesion") are:
* Increased difficulty in understanding modules.
* Increased difficulty in maintaining a system, because logical changes in the domain affect multiple modules, and because changes in one module require changes in related modules.
* Increased difficulty in reusing a module because most applications won't need the random set of operations provided by a module.
One thing that gets lost when people go crazy with IoC containers is cohesion, and the traceability of what something does and how it does it becomes a nightmare to figure out later on down the road, because all the relationships are obscured by a bunch of XML configuration files (Spring, I am looking at you) and poorly named implementation classes.
Why are we grouping things in classes and what should the principles be ?
Are you stressing the, "Grouping," or the, "Classes?"
If you're asking why are we grouping things, then I'd second Medelt's, "Maintainability," though I'd rephrase it as, "To reduce the potential cost of ripple effects."
Consider, for a moment, not the actual coupling between your components (classes, files, whatever they may be) but the potential coupling, that is, the maximum possible number of source code dependencies between those components.
There is a theorem which shows that, given a chain of components a - b - c - d - e, such that a depends on b, etc., the probability that changing e will then change the c component cannot be greater than the probability that changing e will then change d. And in real software systems, the probability that changing e will affect c is usually less than the probability that changing e will affect d.
Of course, you might say, that's obvious. But we can make it even more obvious. In this example, d has a direct dependency on e, and c has an indirect (via d) dependency on e. Thus we can say that, statistically, a system formed predominantly of direct dependencies will suffer from a greater ripple effect than a system formed predominantly from indirect dependencies.
Given that, in the real world, each ripple effect costs money, we can say that the cost of ripple effect of an update to system formed predominantly of direct dependencies will be higher than the cost of ripple effect of an update to system formed predominantly of indirect dependencies.
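To make the step explicit, here is a toy calculation under a simplifying assumption not in the original answer: suppose each dependency edge independently propagates a change with probability $p < 1$. Then, for the chain above,

$$P(d \text{ changes} \mid e \text{ changes}) = p, \qquad P(c \text{ changes} \mid e \text{ changes}) = p \cdot p = p^2 \le p.$$

Each extra level of indirection multiplies in another factor of $p$, which is why systems built predominantly from indirect dependencies suffer smaller expected ripple effects.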
Now, back to potential coupling. It's possible to show that, within an absolute encapsulation context (such as Java or C#, where recursive encapsulation is not widely employed), all components are potentially connected to one another via either a direct dependency or an indirect dependency with a single intermediate component. Statistically, a system which minimises the direct potential coupling between its components minimises the potential cost of ripple effect due to any update.
And how do we achieve this distinction between direct and indirect potential coupling (as if we haven't already let the cat out of the bag)? With encapsulation.
Encapsulation is the property that the information contained in a modelled entity is accessible only through interactions at the interfaces supported by that modelled entity. The information (which can be data or behavior) which is not accessible through these interfaces is called, "Information hidden." By information-hiding behavior within a component, we guarantee that it can only be accessed indirectly (via the interfaces) by external components.
This necessarily requires some sort of grouping container in which some sort of finer-grained functionality can be information-hidden.
This is why we are, "Grouping," things.
As to why we're using classes to group things:
A) Classes provide a language-supported mechanism for encapsulation.
B) We're not just using classes: we're also using namespaces/packages for encapsulation.
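A tiny illustration of both mechanisms in Java (names assumed): the interface is public, while the implementation is package-private, so external components can only reach the behavior indirectly.

```java
// Public interface: the only surface visible outside the package.
public interface Greeter {
    String greet(String name);
}

// Package-private implementation: information-hidden behind the interface.
class DefaultGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}
```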
Regards,
Ed.
I can think of two reasons.
Maintainability: You'll naturally expect some logic to go together. Logic that defines operations on one particular outside service, for example a database, should probably be grouped together in a logical way. You can do this in a namespace or a class.
State and identity: Objects do not only contain logic but also maintain state. Logic that is part of the interface for working with the state of a particular object should be defined on that object. Objects also maintain identity: an object that models one entity in the problem domain should be one object in your software.
As a side-note: the state-and-identity argument applies mostly to domain objects. In most solutions I've used the IoC container mainly for the services around those. My domain objects are usually created and destroyed as part of the program flow, and I usually use separate factory objects for this. The factories can then be injected and handled by the IoC container. I've had some success creating factories as wrappers around the IoC container; this way the container handles the lifetime of the domain objects too. A sketch of that wrapper idea follows.
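A hedged sketch of that factory-wrapper idea. The container hook is abstracted as a plain Supplier here, since the exact wiring (for example, a prototype-scoped bean lookup in Spring) depends on the container; all names are assumptions.

```java
import java.util.function.Supplier;

class Order { /* a domain object created and destroyed as part of program flow */ }

// Injected wherever Orders need to be created mid-flow; the container supplies
// the hook, so it still controls construction and lifetime of domain objects.
class OrderFactory {
    private final Supplier<Order> containerHook;

    OrderFactory(Supplier<Order> containerHook) {
        this.containerHook = containerHook;
    }

    Order create() {
        return containerHook.get(); // delegates wiring to the container
    }
}
```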
This is a very interesting question. If I look back at the way I've implemented things in the past, I can see a trend towards smaller and more granular interfaces and classes. Things certainly got better this way. I don't think the optimal solution has one function per class, though. That would effectively mean you're using an OO language as a functional language, and while functional languages are very powerful, there is a lot to be said for combining the two paradigms.
In a pure-DI perfect universe, I think the single-class-plus-single-method design is ideal. In reality, we need to balance the costs that make it less feasible.
Cost factors
Overhead of DI: spinning up all the underlying dependencies, and their related dependencies, for a single method is expensive. Grouping into a class allows us to offset some of that.
Skills: DI is new to many (myself ESPECIALLY), so understanding how to do it better, or how to get out of old/habitual designs, is tough.
Brownfield apps: where larger classes already exist, it's easier/cheaper/quicker to live with them and worry about this in future greenfield apps.
Hopefully my newbie-ness (yes, I am filled with made up words) with DI hasn't made me completely wrong with this.
