Linked list: single class vs multiple classes

In my second term as a computer science student, we spent almost the whole term writing linked lists in different variations (stack, queue, ...). The design of these lists always came down to this:
class List<T> {
    class ListElement {
        T value;
        ListElement next;
    }

    ListElement root;
}
with variations in which methods were implemented and how they worked (I have left out constructors and properties here for simplicity).
At some point I started learning Scala and focusing on functional programming. There, too, we eventually wrote a linked list, but with a different style of implementation:
class List[T](head: T, tail: List[T])
Leaving aside the different syntax and the immutability, this is in my opinion a different approach.
And I thought to myself, "Well, you could have implemented lists the same way in C# or Java, with one class fewer than the approach you learned."
I can see why you would implement a linked list like that in a functional language, where recursion is not as dangerous as in C# or Java; at least to my way of thinking, a recursive implementation of all the usual methods on a linked list is very intuitive for this design.
What I do not understand is why linked lists in C# or Java are typically implemented in the first fashion, when you could implement them the other way with less code and equal expressiveness. (I am not talking about the list implementations in the languages' standard libraries, but about the lists you typically write as a programmer-to-be.)
The only benefit I can see in the first approach is that you can hide the implementation from the user a bit better, but is that the reason, and is it worth the additional class?
I wouldn't even need to expose my implementation to the user, as I could still implement my list differently internally and merely choose to provide a constructor like that, plus functionality to retrieve the first element of the list as the head and the rest as the tail.
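For illustration, here is a minimal sketch of the single-type, recursive design the question is getting at, written in F# (the names are my own; a C# or Java version would be one small class with head and tail fields). The list is defined in terms of itself, and the usual operations fall out as simple recursions:

// A list is either empty, or a head value followed by another list.
type MyList<'T> =
    | Empty
    | Cons of head: 'T * tail: MyList<'T>

let rec length list =
    match list with
    | Empty -> 0
    | Cons (_, tail) -> 1 + length tail

let rec contains x list =
    match list with
    | Empty -> false
    | Cons (head, tail) -> head = x || contains x tail

// Cons (1, Cons (2, Cons (3, Empty))) |> length  evaluates to 3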

The reasons for lists being "implemented in the first fashion", as you put it, include:
Performance.
Time and space complexity are the two most important concerns when writing algorithms or implementing data structures that support operations like search and sort. As you mentioned, lists created the recursive way aren't mutable, so every modification has to build new nodes. A major purpose of hand-rolling a list is to get fast operations on it, so designers prefer the 'first fashion' (see the sketch after this list).
Object orientation
When solving real-world problems, the initial object-oriented analysis and design (OOAD) matters a lot. With an object model that resembles the real-world objects and things as closely as possible, designers can achieve better solutions. The recursive approach seems to miss out on this aspect.
Scalability
Designers of APIs and libraries keep scalability in mind when they draft their designs. Code written in the 'first fashion' is easier to extend and easier to comprehend.
Other design concerns
This is by no means an exhaustive list of the reasons. Many other factors, and a lot of experience-based lore in programming culture, lead to the choice of the first fashion.
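As a rough illustration of the performance point above (a sketch with made-up names, not a benchmark): a mutable node lets you append by overwriting a single reference, while the purely recursive, immutable design has to rebuild every cell in front of the insertion point.

// Mutable, 'first fashion': appending overwrites one reference, O(1).
type Node<'T>(value: 'T) =
    member _.Value = value
    member val Next : Node<'T> option = None with get, set

let append (last: Node<'T>) (value: 'T) =
    let node = Node(value)
    last.Next <- Some node
    node

// Immutable, recursive fashion: appending rebuilds the whole spine, O(n).
let rec appendImmutable value list =
    match list with
    | [] -> [ value ]
    | head :: tail -> head :: appendImmutable value tail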

Related

Why are mutables allowed in F#? When are they essential?

Coming from C#, trying to get my head around the language.
From what I understand, one of the main benefits of F# is that you ditch the concept of mutable state, which should (in many cases) make things much more robust.
If this is the case (and correct me if it's not), why allow us to break this principle with mutables? To me it feels like they don't belong in the language. I understand you don't have to use them, but the language still gives you the tools to go off track and think in an OOP manner.
Can anyone provide an example of where a mutable value is essential?
Current compilers for declarative (stateless) code are not very smart. This results in lots of memory allocations and copy operations, which are rather expensive. Mutating some property of an object allows the object to be re-used in its new state, which is much faster.
Imagine you make a game with 10000 units moving around at 60 ticks a second. You can do this in F#, including collisions with a mutable quad- or octree, on a single CPU core.
Now imagine the units and quadtree are immutable. The compiler would have no better idea than to allocate and construct 600000 units per second and create 60 new trees per second. And this excludes any changes in other management structures. In a real-world use case with complex units, this kind of solution will be too slow.
F# is a multi-paradigm language that enables the programmer to write functional, object-oriented, and, to an extent, imperative programs. Currently, each variant has its valid uses. Maybe, at some point in the future, better compilers will allow for better optimization of declarative programs, but right now, we have to fall back to imperative programming when performance becomes an issue.
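As a toy illustration of the allocation difference (hypothetical types, not from any game library): the mutable version updates positions in place, while the immutable version allocates a fresh record per unit per tick.

// Mutable record: a tick updates every unit in place, allocating nothing.
type MutableGameUnit = { mutable X: float; mutable Y: float }

let tickInPlace (units: MutableGameUnit[]) dx dy =
    for u in units do
        u.X <- u.X + dx
        u.Y <- u.Y + dy

// Immutable record: every tick allocates a new record for every unit.
type GameUnit = { X: float; Y: float }

let tickImmutable (units: GameUnit[]) dx dy =
    units |> Array.map (fun u -> { u with X = u.X + dx; Y = u.Y + dy })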
Having the ability to use mutable state is often important for performance reasons, among other things.
Consider implementing the API List.take : count:int -> list:'a list -> 'a list, which returns a list consisting of only the first count elements of the input list.
If you are bound by immutability, lists can only be built up back-to-front. Implementing take then boils down to:
Build up the result list back-to-front from the first count elements of the input: O(count)
Reverse that result and return it: O(count)
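A minimal sketch of that immutable approach (a hand-rolled helper, not the actual library code; unlike the real List.take it does not raise if the list is shorter than count):

// Cons onto an accumulator, then reverse once at the end.
let takeImmutable count (list: 'a list) =
    let rec loop n acc rest =
        match n, rest with
        | 0, _ | _, [] -> List.rev acc            // the second O(count) pass
        | n, x :: xs -> loop (n - 1) (x :: acc) xs
    loop count [] list

// takeImmutable 3 [1; 2; 3; 4; 5]  evaluates to [1; 2; 3]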
The F# core library, for performance reasons, has the magic, internal ability to build lists front-to-back when needed (i.e. to mutate the tail of the last cell so that it points to a new tail cell). The basic algorithm used for List.take is:
Build up the result list front-to-back from the first count elements of the input: O(count)
Return the result.
The asymptotic complexity is the same, but in practical terms it is roughly twice as fast to use mutation in this case, because the reversal pass disappears.
Pervasive mutable state can be a nightmare, as it makes code difficult to reason about. But if you factor your code so that mutable state is tightly encapsulated (e.g. in implementation details of List.take), then you can enjoy its benefits where it makes sense. Making immutability the default while still allowing mutability is therefore a very practical and useful feature of the language.
First of all, what makes F# powerful is, in my opinion, not just immutability by default but a whole mix of features: immutability by default, type inference, lightweight syntax, sum types (DUs) and product types (tuples), pattern matching, and currying by default. Possibly more.
These make F# very functional by default and they make you program in a certain way. In particular they make you feel uncomfortable when you use mutable state, as it requires the mutable keyword. Uncomfortable in this sense means more careful. And that is exactly what you should be.
Mutable state is not forbidden or evil per se, but it should be controlled. The need to write the mutable keyword explicitly is like a warning sign that makes you aware of the danger. A good way to control it is to use it only internally, within a function. That way you have your own internal mutable state and are still perfectly thread-safe, because there is no shared mutable state. In fact, your function can still be referentially transparent even if it uses mutable state internally.
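For example (a small sketch of my own), the following function mutates a local accumulator but remains referentially transparent: the same input always yields the same output, and nothing outside the function can observe the mutation.

let sumOfSquares (xs: int list) =
    let mutable acc = 0            // local, never escapes the function
    for x in xs do
        acc <- acc + x * x
    acc

// sumOfSquares [1; 2; 3]  always evaluates to 14, with no observable side effects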
As for why F# allows mutable state: it would be very difficult to write ordinary real-world code without the possibility of it. For instance, in Haskell, something like drawing a random number cannot be done the way it is done in F#; the state has to be threaded through explicitly.
When I write applications, I tend to have about 95% of the code base in a very functional style that would be pretty much portable 1:1 to, say, Haskell without any trouble. But at the system boundaries, or in some performance-critical inner loop, mutable state is used. That way you get the best of both worlds.

What are the advantages and disadvantages of "point-free" style in functional programming?

I know that in some languages (Haskell?) the aim is to achieve point-free style, that is, to never explicitly refer to function arguments by name. This is a very difficult concept for me to master, but it might help me to understand what the advantages (or maybe even disadvantages) of that style are. Can anyone explain?
The point-free style is considered by some authors to be the ultimate functional programming style. To put things simply, a function of type t1 -> t2 describes a transformation from one element of type t1 into another element of type t2. The idea is that "pointful" functions (written using variables) emphasize elements (when you write \x -> ... x ..., you are describing what happens to the element x), while "point-free" functions (expressed without using variables) emphasize the transformation itself, as a composition of simpler transformations. Advocates of the point-free style argue that transformations should indeed be the central concept, and that the pointful notation, while easy to use, distracts us from this noble ideal.
Point-free functional programming has been around for a very long time. It was already known to the logicians who studied combinatory logic after the seminal work of Moses Schönfinkel in 1924, and it was the basis of the first study of what would become ML-style type inference, by Robert Feys and Haskell Curry in the 1950s.
The idea of building functions from an expressive set of basic combinators is very appealing and has been applied in various domains, such as the array-manipulation languages derived from APL, or parser combinator libraries such as Haskell's Parsec. A notable advocate of point-free programming is John Backus. In his 1978 lecture "Can Programming Be Liberated From the Von Neumann Style?", he wrote:
The lambda expression (with its substitution rules) is capable of defining all possible computable functions of all possible types and of any number of arguments. This freedom and power has its disadvantages as well as its obvious advantages. It is analogous to the power of unrestricted control statements in conventional languages: with unrestricted freedom comes chaos. If one constantly invents new combining forms to suit the occasion, as one can in the lambda calculus, one will not become familiar with the style or useful properties of the few combining forms that are adequate for all purposes. Just as structured programming eschews many control statements to obtain programs with simpler structure, better properties, and uniform methods for understanding their behavior, so functional programming eschews the lambda expression, substitution, and multiple function types. It thereby achieves programs built with familiar functional forms with known useful properties. These programs are so structured that their behavior can often be understood and proven by mechanical use of algebraic techniques similar to those used in solving high school algebra problems.
So there you have it. The main advantage of point-free programming is that it forces a structured combinator style in which equational reasoning is natural. Equational reasoning has been particularly advertised by the proponents of the "Squiggol" movement (see [1], [2]), which indeed uses a fair share of point-free combinators and computation/rewriting/reasoning rules.
[1] "An introduction to the Bird-Merteens Formalism", Jeremy Gibbons, 1994
[2] "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire", Erik Meijer, Maarten Fokkinga and Ross Paterson, 1991
Finally, one cause for the popularity of point-free programming among Haskellites is its relation to category theory. In category theory, morphisms (which could be seen as "transformations between objects") are the basic object of study and computation. While partial results allow reasoning in specific categories to be performed in a pointful style, the common way to build, examine and manipulate arrows is still the point-free style, and other syntaxes such as string diagrams also exhibit this "pointfreeness". There are rather tight links between the people advocating "algebra of programming" methods and users of categories in programming (for example the authors of the banana paper [2] are/were hardcore categorists).
You may be interested in the Pointfree page of the Haskell wiki.
The downside of point-free style is rather obvious: it can be a real pain to read. The reason we still love to use variables, despite the numerous horrors of shadowing, alpha-equivalence, etc., is that the notation is just so natural to read and think about. The general idea is that a complex function (in a referentially transparent language) is like a complex plumbing system: the inputs are the parameters; they flow into pipes, are applied to inner functions, duplicated (\x -> (x,x)) or forgotten (\x -> (), a pipe leading nowhere), and so on. The variable notation is nicely implicit about all that machinery: you give a name to the input and names to the outputs (and auxiliary computations), but you never have to describe the whole plumbing plan, or where the small pipes have to run so as not to get in the way of the bigger ones. The amount of plumbing inside something as short as \(f,x,y) -> ((x,y), f x y) is amazing. You can follow each variable individually, or read each intermediate plumbing node, but you never have to see the whole machinery together. In point-free style, all the plumbing is explicit: you have to write everything down and look at it afterwards, and sometimes it is just plain ugly.
PS: this plumbing view is closely related to stack programming languages, which are probably the least pointful programming languages (barely) in use. I would recommend trying to do some programming in them just to get a feeling for it (as I would also recommend logic programming). See Factor, Cat, or the venerable Forth.
I believe the purpose is to be succinct and to express pipelined computations as a composition of functions rather than thinking of threading arguments through. Simple example (in F#) - given:
let sum = List.sum
let sqr = List.map (fun x -> x * x)
Used like:
> sum [3;4;5]
12
> sqr [3;4;5]
[9;16;25]
We could express a "sum of squares" function as:
let sumsqr x = sum (sqr x)
And use like:
> sumsqr [3;4;5]
50
Or we could define it by piping x through:
let sumsqr x = x |> sqr |> sum
Written this way, it's obvious that x is being passed in only to be "threaded" through a sequence of functions. Direct composition looks much nicer:
let sumsqr = sqr >> sum
This is more concise and it's a different way of thinking of what we're doing; composing functions rather than imagining the process of arguments flowing through. We're not describing how sumsqr works. We're describing what it is.
PS: An interesting way to get your head around composition is to try programming in a concatenative language such as Forth, Joy, Factor, etc. These can be thought of as being nothing but composition (Forth : sumsqr sqr sum ;) in which the space between words is the composition operator.
PPS: Perhaps others could comment on the performance differences. It seems to me that composition may reduce GC pressure by making it more obvious to the compiler that there is no need to produce intermediate values as in pipelining; helping make the so-called "deforestation" problem more tractable.
While I'm attracted to the point-free concept and have used it for some things, and I agree with all the positives mentioned before, I have found the following drawbacks (some are detailed above):
The shorter notation reduces redundancy, but in a heavily structured composition (ramda.js style, point-free Haskell, or any concatenative language) reading the code is more complex than linearly scanning a bunch of const bindings and using a symbol highlighter to see which binding feeds into which downstream calculation. Besides the tree-versus-linear structure, the loss of descriptive symbol names makes a function harder to grasp intuitively. Of course, both the tree structure and the loss of named bindings also have plenty of positives: functions feel more general, not bound to some application domain through the chosen symbol names, and the tree structure is semantically present even when bindings are laid out and can be read sequentially (Lisp let/let* style).
Point-free is simplest when you are just piping through or composing a series of functions, since this also results in a linear structure that we humans find easy to follow. However, threading an interim calculation to multiple recipients is tedious: all kinds of wrapping into tuples, lensing and other painstaking mechanisms go into just making a value accessible that would otherwise just be another use of a named binding (see the sketch after this list). Of course the repeated part can be extracted into a separate function, and maybe that is a good idea anyway, but there are also arguments for some non-short functions; even if it is extracted, its arguments have to be threaded through both call sites somehow, and then the function may need to be memoized so the calculation is not actually repeated. You end up using a lot of converge, lens, memoize, useWith and so on.
JavaScript-specific: it is harder to debug casually. With a linear flow of let bindings, it is easy to add a breakpoint anywhere. With point-free style, even if a breakpoint is somehow added, the value flow is hard to read; e.g. you can't just query or hover over some variable in the dev console. Also, since point-free is not native in JS, the library functions of ramda.js or similar will obscure the stack quite a bit, especially with the obligatory currying.
Code brittleness, especially in systems of nontrivial size and in production. If a new requirement comes in, the disadvantages above come into play (e.g. the code is harder to read for the next maintainer, who may be yourself a few weeks down the line, and the dataflow is harder to trace for inspection). Most importantly, even a seemingly small and innocent new requirement can necessitate restructuring the code completely. It may be argued that this is a good thing, in that the result will be a crystal-clear representation of the new requirement, but rewriting large swaths of point-free code is very time consuming, and we haven't even mentioned testing. So it feels that looser, less structured, lexical-assignment-based code can be repurposed more quickly. Especially when the coding is exploratory, and in the domain of human data with weird conventions (time, etc.) that can rarely be captured 100% accurately, there may always be an upcoming request to handle something more accurately or closer to the customer's needs, and whichever method allows faster pivoting matters a lot.
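As a small illustration of the 'multiple recipients' point above (a sketch with made-up names): as soon as a value is needed in two places, the point-free version needs an extra combinator, while the pointful version simply names the value.

// Pointful: the argument has a name, so using it twice is trivial.
let describe (xs: int list) =
    sprintf "%d items, total %d" (List.length xs) (List.sum xs)

// Point-free: feeding the same input to two consumers needs a combinator
// (a hand-rolled fork here; Ramda would use converge for this).
let fork f g h x = f (g x) (h x)

let describePointFree : int list -> string =
    fork (sprintf "%d items, total %d") List.length (List.fold (+) 0)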
On the point-free variant represented by concatenative programming languages, I have to add this:
I have a little experience with Joy. Joy is a very simple and beautiful concept built around lists. When converting a problem into a Joy function, you have to split your brain into one part for the stack-plumbing work and another part for the solution in Joy syntax. The stack is always handled from the back. Since composition is built into Joy itself, there is no runtime cost for a composition combinator.

Probabilistic Generation of Semantic Networks

I've studied some simple semantic network implementations and basic techniques for parsing natural language. However, I haven't seen many projects that try to bridge the gap between the two.
For example, consider the dialog:
"the man has a hat"
"he has a coat"
"what does he have?" => "a hat and coat"
A simple semantic network, based on the grammar tree parsing of the above sentences, might look like:
the_man = Entity('the man')
has = Entity('has')
a_hat = Entity('a hat')
a_coat = Entity('a coat')
Relation(the_man, has, a_hat)
Relation(the_man, has, a_coat)
print(the_man.relations(has))  # => ['a hat', 'a coat']
However, this implementation assumes the prior knowledge that the text segments "the man" and "he" refer to the same network entity.
How would you design a system that "learns" these relationships between segments of a semantic network? I'm used to thinking about ML/NL problems based on creating a simple training set of attribute/value pairs, and feeding it to a classification or regression algorithm, but I'm having trouble formulating this problem that way.
Ultimately, it seems I would need to overlay probabilities on top of the semantic network, but that would drastically complicate an implementation. Is there any prior art along these lines? I've looked at a few libraries, like NLTK and OpenNLP, and while they have decent tools for handling symbolic logic and parsing natural language, neither seems to have any kind of probabilistic framework for converting one to the other.
There is quite a lot of history behind this kind of task. Your best start is probably by looking at Question Answering.
The general advice I always give is that if you have some highly restricted domain where you know about all the things that might be mentioned and all the ways they interact then you can probably be quite successful. If this is more of an 'open-world' problem then it will be extremely difficult to come up with something that works acceptably.
The task of extracting relationships from natural language is called 'relationship extraction' (funnily enough), or sometimes fact extraction. It is a pretty large field of research; this guy did a PhD thesis on it, as have many others. There are a number of challenges here, as you've noticed, such as entity detection, anaphora resolution, etc. This means that there will probably be a lot of 'noise' in the entities and relationships you extract.
As for representing extracted facts in a knowledge base, most people tend not to use a probabilistic framework. At the simplest level, entities and relationships are stored as triples in a flat table (see the sketch below). Another approach is to use an ontology to add structure and allow reasoning over the facts. This makes the knowledge base vastly more useful, but adds a lot of scalability issues. As for adding probabilities, I know of the Prowl project, which aims to create a probabilistic ontology, but it doesn't look very mature to me.
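To make the 'triples in a flat table' idea concrete, here is a minimal sketch using the question's example (my own names, not any particular triple-store API):

// Simplest possible knowledge base: facts as (subject, relation, object) triples.
let facts = [
    "the man", "has", "a hat"
    "the man", "has", "a coat" ]

let query subject relation =
    facts
    |> List.filter (fun (s, r, _) -> s = subject && r = relation)
    |> List.map (fun (_, _, o) -> o)

// query "the man" "has"  evaluates to ["a hat"; "a coat"]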
There is some research into probabilistic relational modelling, mostly into Markov logic networks at the University of Washington and probabilistic relational models at Stanford and other places. I'm a little out of touch with the field, but this is a difficult problem and it's all early-stage research as far as I know. There are a lot of issues, mostly around efficient and scalable inference.
All in all, it's a good idea and a very sensible thing to want to do. However, it's also very difficult to achieve. If you want to look at a slick example of the state of the art, (i.e. what is possible with a bunch of people and money) maybe check out PowerSet.
Interesting question, I've been doing some work on a strongly-typed NLP engine in C#: http://blog.abodit.com/2010/02/a-strongly-typed-natural-language-engine-c-nlp/ and have recently begun to connect it to an ontology store.
To me it looks like the issue here is really: How do you parse the natural language input to figure out that 'He' is the same thing as "the man"? By the time it's in the Semantic Network it's too late: you've lost the fact that statement 2 followed statement 1 and the ambiguity in statement 2 can be resolved using statement 1. Adding a third relation after the fact to say that "He" and "the man" are the same is another option but you still need to understand the sequence of those assertions.
Most NLP parsers seem to focus on parsing single sentences or large blocks of text but less frequently on handling conversations. In my own NLP engine there's a conversation history which allows one sentence to be understood in the context of all the sentences that came before it (and also the parsed, strongly-typed objects that they referred to). So the way I would handle this is to realize that "He" is ambiguous in the current sentence and then look back to try to figure out who the last male person was that was mentioned.
In the case of my home for example, it might tell you that you missed a call from a number that's not in its database. You can type "It was John Smith" and it can figure out that "It" means the call that was just mentioned to you. But if you typed "Tag it as Party Music" right after the call it would still resolve to the song that's currently playing because the house is looking back for something that is ITaggable.
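A minimal sketch of that 'look back through the conversation history' idea (hypothetical types of my own, not the engine described above): scan the mentions from most recent to oldest and take the first one compatible with the pronoun.

type Gender = Male | Female | Neuter

type Mention = { Text: string; Gender: Gender }

// history is ordered most recent first
let resolvePronoun pronoun (history: Mention list) =
    let wanted =
        match pronoun with
        | "he" -> Some Male
        | "she" -> Some Female
        | "it" -> Some Neuter
        | _ -> None
    wanted |> Option.bind (fun g -> history |> List.tryFind (fun m -> m.Gender = g))

// resolvePronoun "he" [ { Text = "a hat"; Gender = Neuter }; { Text = "the man"; Gender = Male } ]
// evaluates to Some { Text = "the man"; Gender = Male }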
I'm not exactly sure if this is what you want, but take a look at natural language generation wikipedia, the "reverse" of parsing, constructing derivations that conform to the given semantical constraints.

Delphi non-visual TTree implementation

I'm looking for a non-visual, persistent tree (TStringTree) implementation. If anyone knows a good implementation of one, please let me know.
Thanks.
You'll find a flexible, non-visual tree structure in the DI Containers library (commercial). However, as others have noted above, it's really quite easy to roll your own, adding only the functionality that you need.
You can get by with just two base classes: TNode and TNodeList (e.g. a TObjectList descendant). At a minimum, TNode needs just three members: your string data, a reference to its parent node (nil if the node is the root), and a TNodeList holding its child nodes. What remains is the (somewhat tedious) implementation of the various attendant methods such as Add(), Delete(), IndexOf(), MoveTo(), GetFirstChild(), GetNext(), etc. The basic tree should take less than one evening.
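To illustrate just the shape being described (a sketch in F# for brevity, with names of my own; a Delphi TNode/TNodeList version would follow the same outline):

// A node holds its string data, a reference to its parent (None for the root),
// and a growable list of child nodes.
type TreeNode(data: string, parent: TreeNode option) =
    let children = ResizeArray<TreeNode>()
    member _.Data = data
    member _.Parent = parent
    member _.Children = children
    member this.Add(childData: string) =
        let child = TreeNode(childData, Some this)
        children.Add(child)
        child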
What kind of tree? B-tree? Splay tree? Red-black tree? These are all common kinds of tree data structures.
You might want to look at Julian Bucknall's book, The Tomes of Delphi: Algorithms and Data Structures. It has all sorts of tree implementations with full Delphi source; you could easily adapt any of them to work with strings.
And of course there is still the funky DECAL (previously Rosetta), an attempt at creating a kind of STL using interfaces and variants.
http://sourceforge.net/projects/decal/
It's more for people who favour flexibility over speed, though. The base tree structure is red-black, IIRC.
Why not simply use an XML DOM document?
It may be overkill for a truly trivial string tree, but it would not be too burdensome to use for that purpose, and it has the benefit of being able to accommodate just about any extension to a string tree (for storing additional data with each string in the tree, as attributes etc.) should the need arise.
In my experience often what starts out as a trivial need can quickly grow beyond the initial or anticipated requirement. :)
If you are concerned about the "overhead" of the COM-based XML implementation and the VCL wrapper around it, you might look into TNativeXML.
You could just use a TStringList and add objects that are themselves TStringLists... crude, but it works if your data can be represented as strings only.
Child := TStringList.Create;
ParentList.AddObject('Child', Child);
Of course, a better solution would be to create your own objects, each containing a TObjectList of child objects.

Do concepts like Map and Reduce apply to all Functional Programming Languages?

I have just started delving into the world of functional programming.
A lot of OOP (Object Oriented Programming) concepts such as inheritance and polymorphism apply to most modern OO languages like C#, Java and VB.NET.
But what about concepts such as Map, Reduce, Tuples and Sets: do they apply to all FP (functional programming) languages?
I have just started with F#. But do the aforementioned concepts apply to other FP languages like Haskell, Nemerle, Lisp, etc.?
You bet. The nice thing about functional programming is that mathematical concepts like the ones you describe are expressed more naturally in an FP language.
It's a bit of a tough read, but John Backus' Turing Award paper, in which he described functional (or "applicative") programming, is a good one. The Wikipedia article is good too.
Yes; higher-order functions, algebraic data types, folds/catamorphisms, etc. are common to almost all functional languages (though they sometimes go by slightly different names in each language).
Functional tools apply to all programming, not just to languages that support them explicitly. For example, Python has map and reduce (the latter now living in functools), and they do exactly what you expect, although they won't give you out-of-order or parallel evaluation; you'll need something like the multiprocessing module to get really clever.
Even if the language doesn't provide the exact primitives, most modern languages still make it possible to get the desired effect with a bit more work. This is similar to the way a class-like concept can be coded in pure C.
I would interpret what you're asking as, "Are higher-order functions (map, reduce, filter, ...) and immutable data structures (tuples, cons lists, records, maps, sets, ...) common across FP languages?" and I would say, absolutely yes.
Like you say, OOP has well known pillars (encapsulation, inheritance, polymorphism). The "pillars" of functional programming I'd say are 1) Using functions as first-class values and 2) Expressing yourself without side effects.
You'll likely find common tools to apply these ideas across various FP languages (F# is an excellent choice, BTW!), and you'll see them finding their way into more mainstream languages, maybe in a less recognizable form (e.g. LINQ's Select = map, Aggregate = reduce/fold, Where = filter; C# has lightweight lambda syntax, System.Tuple, etc.).
As an aside, the thing that seems to be generally missing from non-explicitly-FP languages is good immutable data structures and syntax support for them (not merely a library), which makes it hard to stick to pillar #2 in those languages. F# lists, records, tuples, etc. are all good examples of great combined language and library support for this.
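For instance, the F# core library spells these ideas out directly; a tiny sketch, with the rough LINQ counterparts in comments:

let numbers = [1; 2; 3; 4; 5]

let evens   = numbers |> List.filter (fun n -> n % 2 = 0)      // LINQ: Where
let squares = numbers |> List.map    (fun n -> n * n)          // LINQ: Select
let total   = numbers |> List.fold   (fun acc n -> acc + n) 0  // LINQ: Aggregate

// None of the inputs are mutated: numbers is still [1; 2; 3; 4; 5].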
If you really want to jump into the deep end and understand why these concepts are not just conventional but, ahem, foundational, check out the paper "Functional programming with bananas, lenses, envelopes and barbed wire".
They apply to all languages that contain data types that can be "mapped" and "reduced", i.e. maps, arrays/vectors, or lists.
In a "pure lambda calculus" language, where every data structure is defined via function application, you can of course apply functions in parallel (i.e., in a call fn(expr1, expr2), you can evaluate expr1 and expr2 in parallel), but that isn't really what map/reduce is about.

Resources