What are the steps in doing incrementation in Erlang?

increment([]) -> [];
increment([H|T]) -> [H+1|increment(T)].
decrement([]) -> [];
decrement([H|T]) -> [H-1|decrement(T)].
So I have this code but I don't know how they properly work like in java.

Java and Erlang are different beasts. I don't recommend trying to make comparisons to Java when learning Erlang, especially if Java is the only language you know so far. The code you've posted is a good example of the paradigm known as "functional programming". I'd suggest doing some reading on that subject to help you understand what's going on. To try to break this down as far as Erlang goes, you need to understand that an Erlang function is completely different from a Java method.
In Java, your method signature is composed of the method name and the types of its arguments. The return type can also be significant. A Java increment method like the function you wrote might be written like List<Integer> increment(List<Integer> input). The body of the Java method would probably iterate through the list an element at a time and set each element to itself plus one:
List<Integer> increment(List<Integer> input) {
    for (int i = 0; i < input.size(); i++) {
        input.set(i, input.get(i) + 1);
    }
    return input;
}
Erlang has almost nothing in common with this. To begin with, an Erlang function's "signature" is the name and arity of the function. Arity means how many arguments the function accepts. So your increment function is known as increment/1, and that's its unique signature. The way you write the argument list inside the parentheses after the function name has less to do with argument types than with the pattern of the data passed to it. A function like increment([]) -> ... can only successfully be called by passing it [], the empty list. Likewise, the function increment([Item]) -> ... can only be successfully called by passing it a list with one item in it, and increment([Item1, Item2]) -> ... must be passed a list with two items in it. This concept of matching data to patterns is quite aptly known as "pattern matching", and you'll find it in many functional languages. In Erlang functions, it's used to select which head of the function to execute. This bears a rough similarity to Java's method overloading, where you can have many methods with the same name but different argument types; however, a pattern in an Erlang function head can bind variables to different pieces of the arguments that match the pattern.
In your code example, the function increment/1 has two heads. The first head is executed only if you pass an empty list to the function. The second head is executed only if you pass a non-empty list to the function. When that happens, two variables, H and T, are bound. H is bound to the first item of the list, and T is bound to the rest of the list, meaning all but the first item. That's because the pattern [H|T] matches a non-empty list, including a list with one element, in which case T would be bound to the empty list. The variables thus bound can be used in the body of the function.
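For instance, you can try the same pattern directly in the Erlang shell to see what H and T get bound to (a small illustrative session):
1> [H | T] = [1, 2, 3].
[1,2,3]
2> H.
1
3> T.
[2,3]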
The bodies of your functions are a very typical form of iterating a list in Erlang to produce a new list. It's typical because of another important difference from Java, which is that Erlang data is immutable. That means there's no such concept as "setting an element of a list" like I did in the Java code above. If you want to change a list, you have to build a new one, which is what your code does. It effectively says:
The result of incrementing the empty list is the empty list.
The result of incrementing a non-empty list is:
Take the first element of the list: H.
Increment the rest of the list: increment(T).
Prepend H+1 to the result of incrementing the rest of the list.
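To make that concrete, here is roughly how a call expands for a two-element list:
increment([1,2])
= [1+1 | increment([2])]
= [2 | [2+1 | increment([])]]
= [2 | [3 | []]]
= [2,3]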
Note that you want to be careful about how you build lists in Erlang, or you can end up wasting a lot of resources. The List Handling User's Guide is a good place to learn about that. Also note that this code uses a concept known as "recursion", meaning that the function calls itself. In many popular languages, including Java, recursion is of limited usefulness because each new function call adds a stack frame, and your available memory space for stack frames is relatively limited. Erlang and many functional languages support a thing known as "tail call elimination", which is a feature that allows properly written code to recurse indefinitely without exhausting any resources.
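For illustration, here is a sketch of a tail-recursive variant of increment/1 using an accumulator (increment_acc/2 is just a name made up for this example). Every recursive call is the last thing the function does, so it runs in constant stack space:
increment(List) -> increment_acc(List, []).

increment_acc([], Acc) -> lists:reverse(Acc);
increment_acc([H | T], Acc) -> increment_acc(T, [H + 1 | Acc]).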
Hopefully this helps explain things. If you can ask a more specific question, you might get a better answer.

Related

Julia create array of functions with matching arguments to execute in a loop

1. It's easy to create an array of functions and execute them in a loop.
2. It's easy to provide arguments, either in a corresponding array of the same length, or as an array of tuples (fn, arg).
For 2, the loop is just
for fn_ar in arr   # arr is [(myfunc, [1,2,3]), (func2, [10,11,12]), ...]
    fn_ar[1](fn_ar[2])
end
Here is the problem: the arguments I am using are arrays of very large arrays. In #2, the argument that the function will be called with is the value of the array at the time the arg entry of the tuple is initially created. What I need is to provide the array names as the argument and defer evaluation of the arguments until the corresponding function is run in the loop body.
I could provide the arrays used as input as an expression and eval the expression in the loop to supply the needed arguments. But, eval can't eval in local scope.
What I did that worked (sort of) was to create a closure for each function that captured the arrays (which are really just a reference to storage). This works because the only argument to each function that varies in the loop body turns out to be the loop counter. The functions in question update the arrays in place. The array argument is really just a reference to the storage location, so each function executed in the loop body sees the latest values of the arrays. It worked. It wasn't hard to do. It is very, very slow. This is a known challenge in Julia.
I tried the recommended hints in the performance section of the manual: make sure the captured variables are typed before they are captured, so the JIT knows what they are. No effect on perf. The other hint is to put the definition of the curried function, with the data for the closure, in a let block. Tried this. No effect on perf. It's possible I implemented the hints incorrectly; I can provide a code fragment if it helps.
But, I'd rather just ask the question about what I am trying to do and not muddy the waters with my past effort, which might not be going down the right path.
Here is a small fragment that is more realistic than the above:
Just a couple of functions and arguments:
(affine!, "(dat.z[hl], dat.a[hl-1], nnw.theta[hl], nnw.bias[hl])")
(relu!, "(dat.a[hl], dat.z[hl])")
Of course, the arguments could be wrapped as an expression with Meta.parse. dat.z and dat.a are matrices used in machine learning. hl indexes the layer of the model for the linear result and non-linear activation.
A simplified version of the loop where I want to run through the stack of functions across the model layers:
function feedfwd!(dat::Union{Batch_view,Model_data}, nnw, hp, ff_execstack)
    for lr in 1:hp.n_layers
        for f in ff_execstack[lr]
            f(lr)
        end
    end
end
So, closures over the arrays are too slow, and eval I can't get to work.
Any suggestions...?
Thanks,
Lewis
I solved this with the beauty of function composition.
Here is the loop that runs through the feed forward functions for all layers:
for lr in 1:hp.n_layers
    for f in ff_execstack[lr]
        f(argfilt(dat, nnw, hp, bn, lr, f)...)
    end
end
The inner call to argfilt filters the generic list of all the inputs down to a tuple of just the arguments needed by the specific function f, and that tuple is splatted into the call. This also takes advantage of the beauty of method dispatch. Note that the function f is itself an input to argfilt, and function types are singletons: each function has a unique type, as in typeof(relu!), for example. So, without any crazy if branching, method dispatch enables argfilt to return just the arguments needed. The performance cost compared to passing the arguments directly to a function is about 1.2 ns per call. This happens in a very hot loop that typically runs 24,000 times, so that is about 29 microseconds over the entire training pass.
The other great thing is that this runs in less than 1/10 of the time of the version using closures. I am getting slightly better performance than my original version that used some function variables and a bunch of if statements in the hot loop for feedfwd.
Here is what a couple of the methods for argfilt look like:
function argfilt(dat::Union{Model_data, Batch_view}, nnw::Wgts, hp::Hyper_parameters,
                 bn::Batch_norm_params, hl::Int, fn::typeof(affine!))
    (dat.z[hl], dat.a[hl-1], nnw.theta[hl], nnw.bias[hl])
end

function argfilt(dat::Union{Model_data, Batch_view}, nnw::Wgts, hp::Hyper_parameters,
                 bn::Batch_norm_params, hl::Int, fn::typeof(relu!))
    (dat.a[hl], dat.z[hl])
end
Background: I got here by reasoning that I could pass the same list of arguments to all of the functions: the union of all possible arguments--not that bad as there are only 9 args. Ignored arguments waste some space on the stack but it's teeny because for structs and arrays an argument is a pointer reference, not all of the data. The downside is that every one of these functions (around 20 or so) all need to have big argument lists. OK, but goofy: it doesn't make much sense when you look at the code of any of the functions. But, if I could filter down the arguments just to those needed, the function signatures don't need to change.
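As a rough, self-contained illustration of the pattern (square!, shift! and pick are made-up names here, not my actual functions), dispatching on the function's singleton type picks out just the arguments each function needs:
# each function has its own singleton type, so we write one pick method per function
square!(out, x) = (out .= x .^ 2)
shift!(out, x, b) = (out .= x .+ b)

pick(state, ::typeof(square!)) = (state.out, state.x)
pick(state, ::typeof(shift!))  = (state.out, state.x, state.b)

state = (out = zeros(3), x = [1.0, 2.0, 3.0], b = 0.5)
for f in (square!, shift!)
    f(pick(state, f)...)   # dispatch selects exactly the arguments f needs
end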
It's sort of a cool pattern. No introspection or eval needed; just functions.

Why are there two kinds of functions in Elixir?

I'm learning Elixir and wonder why it has two types of function definitions:
functions defined in a module with def, called using myfunction(param1, param2)
anonymous functions defined with fn, called using myfn.(param1, param2)
Only the second kind of function seems to be a first-class object and can be passed as a parameter to other functions. A function defined in a module needs to be wrapped in a fn. There's some syntactic sugar which looks like otherfunction(&myfunction(&1, &2)) in order to make that easy, but why is it necessary in the first place? Why can't we just do otherfunction(myfunction)? Is it only to allow calling module functions without parentheses like in Ruby? It seems to have inherited this characteristic from Erlang, which also has module functions and funs, so does it actually come from how the Erlang VM works internally?
Is there any benefit to having two types of functions and converting from one type to another in order to pass them to other functions? Is there a benefit to having two different notations for calling functions?
Just to clarify the naming, they are both functions. One is a named function and the other is an anonymous one. But you are right, they work somewhat differently and I am going to illustrate why they work like that.
Let's start with the second, fn. fn is a closure, similar to a lambda in Ruby. We can create it as follows:
x = 1
fun = fn y -> x + y end
fun.(2) #=> 3
A function can have multiple clauses too:
x = 1
fun = fn
  y when y < 0 -> x - y
  y -> x + y
end
fun.(2) #=> 3
fun.(-2) #=> 3
Now, let's try something different. Let's try to define different clauses expecting a different number of arguments:
fn
  x, y -> x + y
  x -> x
end
** (SyntaxError) cannot mix clauses with different arities in function definition
Oh no! We get an error! We cannot mix clauses that expect a different number of arguments. A function always has a fixed arity.
Now, let's talk about the named functions:
def hello(x, y) do
  x + y
end
As expected, they have a name and they can also receive some arguments. However, they are not closures:
x = 1
def hello(y) do
  x + y
end
This code will fail to compile because every time you see a def, you get an empty variable scope. That is an important difference between them. I particularly like the fact that each named function starts with a clean slate and you don't get the variables of different scopes all mixed up together. You have a clear boundary.
We could retrieve the named hello function above as an anonymous function. You mentioned it yourself:
other_function(&hello(&1))
And then you asked: why can't I simply pass it around as hello, as in other languages? That's because functions in Elixir are identified by name and arity. So a function that expects two arguments is a different function from one that expects three, even if they have the same name. So if we simply passed hello, we would have no idea which hello you actually meant: the one with two, three, or four arguments? This is exactly the same reason why we can't create an anonymous function with clauses of different arities.
Since Elixir v0.10.1, we have a syntax to capture named functions:
&hello/1
That will capture the local named function hello with arity 1. Throughout the language and its documentation, it is very common to identify functions in this hello/1 syntax.
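For example, a captured named function can be passed around just like an anonymous one (Math here is just an illustrative module name):
defmodule Math do
  def square(x), do: x * x
end

Enum.map([1, 2, 3], &Math.square/1)              #=> [1, 4, 9]
Enum.map([1, 2, 3], fn x -> Math.square(x) end)  #=> same result, wrapping it in a fn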
This is also why Elixir uses a dot for calling anonymous functions. Since you can't simply pass hello around as a function, instead you need to explicitly capture it, there is a natural distinction between named and anonymous functions and a distinct syntax for calling each makes everything a bit more explicit (Lispers would be familiar with this due to the Lisp 1 vs. Lisp 2 discussion).
Overall, those are the reasons why we have two functions and why they behave differently.
I don't know how useful this will be to anyone else, but the way I finally wrapped my head around the concept was to realize that elixir functions aren't Functions.
Everything in elixir is an expression. So
MyModule.my_function(foo)
is not a function but the expression returned by executing the code in my_function. There is actually only one way to get a "Function" that you can pass around as an argument and that is to use the anonymous function notation.
It is tempting to refer to the fn or & notation as a function pointer, but it is actually much more. It's a closure of the surrounding environment.
If you ask yourself:
Do I need an execution environment or a data value in this spot?
and use fn when you need execution, then most of the difficulties become much clearer.
I may be wrong since nobody mentioned it, but I was also under the impression that the reason for this is also the ruby heritage of being able to call functions without brackets.
Arity is obviously involved, but let's put it aside for a while and use functions without arguments. In a language like JavaScript where brackets are mandatory, it is easy to tell the difference between passing a function as an argument and calling the function. You call it only when you use the brackets.
my_function // argument
(function() {}) // argument
my_function() // function is called
(function() {})() // function is called
As you can see, naming it or not does not make a big difference. But Elixir and Ruby allow you to call functions without the brackets. This is a design choice which I personally like, but it has the side effect that you cannot use just the name without the brackets, because it could mean you want to call the function. This is what the & is for. Leaving arity apart for a second, prepending your function name with & means that you explicitly want to use this function as an argument, not what this function returns.
Now the anonymous function is a bit different in that it is mainly used as an argument. Again this is a design choice, but the rationale behind it is that it is mainly used by iterator-style functions which take functions as arguments. So obviously you don't need to use &, because they are already considered arguments by default. It is their purpose.
Now the last problem is that sometimes you have to call them in your code, because they are not always used with an iterator kind of function, or you might be coding an iterator yourself. For the little story, since ruby is object oriented, the main way to do it was to use the call method on the object. That way, you could keep the non-mandatory brackets behaviour consistent.
my_lambda.call
my_lambda.call()
my_lambda_with_arguments.call :h2g2, 42
my_lambda_with_arguments.call(:h2g2, 42)
Now somebody came up with a shortcut which basically looks like a method with no name.
my_lambda.()
my_lambda_with_arguments.(:h2g2, 42)
Again, this is a design choice. Now Elixir is not object oriented and therefore cannot use the first form for sure. I can't speak for José, but it looks like the second form was used in Elixir because it still looks like a function call with an extra character. It's close enough to a function call.
I did not think about all the pros and cons, but it looks like in both languages you could get away with just the brackets as long as you make brackets mandatory for anonymous functions. It seems like it is:
Mandatory brackets VS Slightly different notation
In both cases you make an exception because you make both behave differently. Since there is a difference, you might as well make it obvious and go for the different notation. The mandatory brackets would look natural in most cases but very confusing when things don't go as planned.
Here you go. Now this might not be the best explanation in the world, because I simplified most of the details. Also, most of these are design choices, and I tried to give a reason for them without judging them. I love Elixir, I love Ruby, I like function calls without brackets, but like you, I find the consequences quite misleading once in a while.
And in elixir, it is just this extra dot, whereas in ruby you have blocks on top of this. Blocks are amazing and I am surprised how much you can do with just blocks, but they only work when you need just one anonymous function which is the last argument. Then since you should be able to deal with other scenarios, here comes the whole method/lambda/proc/block confusion.
Anyway... this is out of scope.
I've never understood why explanations of this are so complicated.
It's really just an exceptionally small distinction combined with the realities of Ruby-style "function execution without parens".
Compare:
def fun1(x, y) do
  x + y
end
To:
fun2 = fn
  x, y -> x + y
end
While both of these are just identifiers...
fun1 is an identifier that describes a named function defined with def.
fun2 is an identifier that describes a variable (that happens to contain a reference to function).
Consider what that means when you see fun1 or fun2 in some other expression? When evaluating that expression, do you call the referenced function or do you just reference a value out of memory?
There's no good way to know at compile time. Ruby has the luxury of introspecting the variable namespace to find out if a variable binding has shadowed a function at some point in time. Elixir, being compiled, can't really do this. That's what the dot notation does: it tells Elixir that the variable should contain a function reference and that it should be called.
And this is really hard. Imagine that there wasn't a dot notation. Consider this code:
val = 5
if :rand.uniform < 0.5 do
  val = fn -> 5 end
end
IO.puts val    # Does this work?
IO.puts val.() # Or maybe this?
Given the above code, I think it's pretty clear why you have to give Elixir the hint. Imagine if every variable de-reference had to check for a function? Alternatively, imagine what heroics would be necessary to always infer that variable dereference was using a function?
There's an excellent blog post about this behavior: link
Two types of functions
If a module contains this:
fac(0) -> 1;
fac(N) when N > 0 -> N * fac(N-1).
You can’t just cut and paste this into the shell and get the same
result.
It’s because there is a bug in Erlang. Modules in Erlang are sequences
of FORMS. The Erlang shell evaluates a sequence of
EXPRESSIONS. In Erlang FORMS are not EXPRESSIONS.
double(X) -> 2*X. in an Erlang module is a FORM
Double = fun(X) -> 2*X end. in the shell is an EXPRESSION
The two are not the same. This bit of silliness has been Erlang
forever but we didn’t notice it and we learned to live with it.
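To see the difference in practice, the module FORM above has to be rewritten as a fun EXPRESSION before the shell will accept it; in modern Erlang (OTP 17+) a named fun works, for example:
Fac = fun F(0) -> 1; F(N) when N > 0 -> N * F(N-1) end.
Fac(5).  %% -> 120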
Dot in calling fn
iex> f = fn(x) -> 2 * x end
#Function<erl_eval.6.17052888>
iex> f.(10)
20
In school I learned to call functions by writing f(10) not f.(10) -
this is “really” a function with a name like Shell.f(10) (it’s a
function defined in the shell) The shell part is implicit so it should
just be called f(10).
If you leave it like this expect to spend the next twenty years of
your life explaining why.
Elixir has optional parentheses for function calls, including calls to functions with 0 arity. Let's see an example of why this makes a separate calling syntax important:
defmodule Insanity do
  def dive(), do: fn() -> 1 end
end

Insanity.dive
# #Function<0.16121902/0 in Insanity.dive/0>
Insanity.dive()
# #Function<0.16121902/0 in Insanity.dive/0>
Insanity.dive.()
# 1
Insanity.dive().()
# 1
Without making a difference between 2 types of functions, we can't say what Insanity.dive means: getting a function itself, calling it, or also calling the resulting anonymous function.
The fn -> syntax is for defining anonymous functions. Writing var.() just tells Elixir to take the function stored in var and run it, instead of referring to var as something that merely holds that function.
Elixir has this common pattern where, instead of having logic inside a function to decide how something should execute, we pattern match different function clauses based on what kind of input we have. I assume this is why we refer to things by arity, in the function_name/1 sense.
It's kind of weird to get used to the shorthand function definitions (&func(&1), etc.), but handy when you're trying to pipe or keep your code concise.
In Elixir we use def to simply define a function, like we do in other languages.
fn creates an anonymous function; refer to this for more clarification.
Only the second kind of function seems to be a first-class object and can be passed as a parameter to other functions. A function defined in a module needs to be wrapped in a fn. There's some syntactic sugar which looks like otherfunction(&myfunction(&1, &2)) in order to make that easy, but why is it necessary in the first place? Why can't we just do otherfunction(myfunction)?
You can do otherfunction(&myfunction/2)
Since Elixir can execute functions without the brackets (like myfunction), using otherfunction(myfunction) it will try to execute myfunction/0.
So, you need to use the capture operator and specify the function, including arity, since you can have different functions with the same name. Thus, &myfunction/2.
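A small sketch of that (the module and function names are made up):
defmodule M do
  def add(a, b), do: a + b
end

Enum.reduce([1, 2, 3], 0, &M.add/2)        #=> 6
Enum.reduce([1, 2, 3], 0, &M.add(&1, &2))  #=> 6, the same thing via the &1/&2 sugar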

What's the difference between luaL_checknumber and lua_tonumber?

Clarification (sorry the question was not specific): They both try to convert the item on the stack to a lua_Number. lua_tonumber will also convert a string that represents a number. How does luaL_checknumber deal with something that's not a number?
There's also luaL_checklong and luaL_checkinteger. Are they the same as (long)luaL_checknumber and (int)luaL_checknumber respectively?
The reference manual does answer this question. I'm citing the Lua 5.2 Reference Manual, but similar text is found in the 5.1 manual as well. The manual is, however, quite terse. It is rare for any single fact to be restated in more than one sentence. Furthermore, you often need to correlate facts stated in widely separated sections to understand the deeper implications of an API function.
This is not a defect, it is by design. This is the reference manual to the language, and as such its primary goal is to completely (and correctly) describe the language.
For more information about "how" and "why" the general advice is to also read Programming in Lua. The online copy is getting rather long in the tooth as it describes Lua 5.0. The current paper edition describes Lua 5.1, and a new edition describing Lua 5.2 is in process. That said, even the first edition is a good resource, as long as you also pay attention to what has changed in the language since version 5.0.
The reference manual has a fair amount to say about the luaL_check* family of functions.
Each API entry's documentation block is accompanied by a token that describes its use of the stack, and under what conditions (if any) it will throw an error. Those tokens are described at section 4.8:
Each function has an indicator like this: [-o, +p, x]
The first field, o, is how many elements the function pops from the
stack. The second field, p, is how many elements the function pushes
onto the stack. (Any function always pushes its results after popping
its arguments.) A field in the form x|y means the function can push
(or pop) x or y elements, depending on the situation; an interrogation
mark '?' means that we cannot know how many elements the function
pops/pushes by looking only at its arguments (e.g., they may depend on
what is on the stack). The third field, x, tells whether the function
may throw errors: '-' means the function never throws any error; 'e'
means the function may throw errors; 'v' means the function may throw
an error on purpose.
At the head of Chapter 5 which documents the auxiliary library as a whole (all functions in the official API whose names begin with luaL_ rather than just lua_) we find this:
Several functions in the auxiliary library are used to check C
function arguments. Because the error message is formatted for
arguments (e.g., "bad argument #1"), you should not use these
functions for other stack values.
Functions called luaL_check* always throw an error if the check is not
satisfied.
The function luaL_checknumber is documented with the token [-0,+0,v] which means that it does not disturb the stack (it pops nothing and pushes nothing) and that it might deliberately throw an error.
The other functions that have more specific numeric types differ primarily in function signature. All are described similarly to luaL_checkint() "Checks whether the function argument arg is a number and returns this number cast to an int", varying the type named in the cast as appropriate.
The function lua_tonumber() is described with the token [-0,+0,-] meaning it has no effect on the stack and does not throw any errors. It is documented to return the numeric value from the specified stack index, or 0 if the stack index does not contain something sufficiently numeric. It is documented to use the more general function lua_tonumberx() which also provides a flag indicating whether it successfully converted a number or not.
It too has siblings named with more specific numeric types that do all the same conversions but cast their results.
Finally, one can also refer to the source code, with the understanding that the manual is describing the language as it is intended to be, while the source is a particular implementation of that language and might have bugs, or might reveal implementation details that are subject to change in future versions.
The source to luaL_checknumber() is in lauxlib.c. It can be seen to be implemented in terms of lua_tonumberx() and the internal function tagerror() which calls typerror() which is implemented with luaL_argerror() to actually throw the formatted error message.
They both try to convert the item on the stack to a lua_Number. lua_tonumber will also convert a string that represents a number. luaL_checknumber throws a (Lua) error when it fails a conversion - it long jumps and never returns from the POV of the C function. lua_tonumber merely returns 0 (which can be a valid return as well.) So you could write this code which should be faster than checking with lua_isnumber first.
double r = lua_tonumber(_L, idx);
if (r == 0 && !lua_isnumber(_L, idx))
{
    // Error handling code
}
return r;
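Alternatively, since Lua 5.2 you can use lua_tonumberx and its isnum out-parameter to avoid the double check. A sketch, assuming you want the same long-jumping behaviour as luaL_checknumber (the helper name is made up):
#include <lua.h>
#include <lauxlib.h>

/* Returns the number at idx, or raises a Lua error (long-jumps, never returns). */
static lua_Number get_number_or_error(lua_State *L, int idx)
{
    int isnum = 0;
    lua_Number r = lua_tonumberx(L, idx, &isnum);
    if (!isnum) {
        luaL_error(L, "bad argument #%d (number expected)", idx);
    }
    return r;
}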

Why don't F# lists have a tail pointer

Or phrased another way, what kind of benefits do you get from having a basic, singly linked list with only a head pointer? The benefits of a tail pointer that I can see are:
O(1) list concatenation
O(1) Appending stuff to the right side of the list
Both of which are rather convenient things to have, as opposed to O(n) list concatenation (where n is the length of the left-side list?). What advantages does dropping the tail pointer have?
F#, like many other functional[-ish] languages, has a cons-list (the terminology originally comes from LISP, but the concept is the same). In F# the :: operator (or List.Cons) is used for cons'ing: note the signature is 'a -> 'a list -> 'a list (see Mastering F# Lists).
Do not confuse a cons-list with an opaque Linked List implementation which contains a discrete first[/last] node - every cell in a cons-list is the start of a [different] list! That is, a "list" is simply the chain of cells that starts at a given cons-cell.
This offers some advantages when used in a functional-like manner: one is that all the "tail" cells are shared and since each cons-cell is immutable (the "data" might be mutable, but that's a different issue) there is no way to make a change to a "tail" cell and flub up all the other lists which contain said cell.
Because of this property, [new] lists can be efficiently built - that is, they do not require a copy - simply by cons'ing to the front. In addition, it is also very efficient to deconstruct a list to head :: tail - once again, no copy - which is often very useful in recursive functions.
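A tiny sketch of that sharing:
let tail = [2; 3; 4]
let a = 1 :: tail      // [1; 2; 3; 4] - the cells of tail are reused, not copied
let b = 0 :: tail      // [0; 2; 3; 4] - still no copy; a and b share tail

match a with
| h :: t -> printfn "head = %d, rest = %A" h t   // deconstruction, again no copying
| [] -> ()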
This immutable property generally does not exist in a [standard mutable] Double Linked List implementation in that appending would add side-effects: the internal 'last' node (the type is now opaque) and one of the "tail" cells are changed. (There are immutable sequence types that allow an "effectively constant time" append/update such as immutable.Vector in Scala -- however, these are heavy-weight objects compared to a cons-list that is nothing more than a series of cells cons'ed together.)
As mentioned, there are also disadvantages: a cons-list is not appropriate for all tasks. In particular, creating a new list except by cons'ing to the head is an O(n) operation (for some value of n), and for better (or worse) the list is immutable.
I would recommend creating your own version of concat to see how this operation is really done. (The article Why I love F#: Lists - The Basics covers this.)
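As a rough sketch of what such a hand-rolled concat looks like (myAppend is just an illustrative name):
let rec myAppend xs ys =
    match xs with
    | [] -> ys                        // the right-hand list is reused as-is
    | h :: t -> h :: myAppend t ys    // every cell of xs is rebuilt: O(length xs)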
Happy coding.
Also see related post: Why can you only prepend to lists in functional languages?
F# lists are immutable, there's no such thing as "append/concat", rather there's just creating new lists (that may reuse some suffixes of old lists). Immutability has many advantages, outside the scope of this question. (All pure languages, and most functional languages have this data structure, it is not an F#-ism.)
See also
http://diditwith.net/2008/03/03/WhyILoveFListsTheBasics.aspx
which has good picture diagrams to explain things (e.g. why consing on the front is cheaper than at the end for an immutable list).
In addition to what the others said: if you need efficient, but yet immutable data structures (which should be an idiomatic F# way), you have to consider reading Chris Okasaki, Purely Functional Data Structures. There is also a thesis available (on which the book is based).
In addition to what has been already said, the Introducing Functional Programming section on MSDN has an article about Working with Functional Lists that explains how lists work and also implements them in C#, so it may be a good way to understand how they work (and why adding reference to the last element would not allow efficient implementation of append).
If you need to append things to the end of the list, as well as to the front, then you need a different data structure. For example, Norman Ramsey posted source code for DList which has these properties here (The implementation is not idiomatic F#, but it should be easy to fix).
If you find you want a list with better performance for append operations, have a look at the QueueList in the F# PowerPack and the JoinList in the FSharpx extension libraries.
QueueList encapsulates two lists. When you prepend using the cons, it prepends an element to the first list, just as a cons-list. However, if you want to append a single element, it can be pushed to the top of the second list. When the first list runs out of elements, List.rev is run on the second list, and the two are swapped putting your list back in order and freeing the second list to append new elements.
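The idea behind that two-list trick looks roughly like this; this is only an illustrative sketch, not the actual PowerPack QueueList API:
type TwoLists<'a> = { front: 'a list; back: 'a list }

let cons x q = { q with front = x :: q.front }     // prepend: O(1)
let append x q = { q with back = x :: q.back }     // append one element: O(1)

let rec uncons q =
    match q.front with
    | h :: t -> Some (h, { q with front = t })
    | [] ->
        match q.back with
        | [] -> None
        | _  -> uncons { front = List.rev q.back; back = [] }  // swap when front runs out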
JoinList uses a discriminated union to more efficiently append whole lists and is a bit more involved.
Both are obviously less performant for standard cons-list operations but offer better performance for other scenarios.
You can read more about these structures in the article Refactoring Pattern Matching.
As others have pointed out, an F# list could be represented by a data structure:
List<T> { T Value; List<T> Tail; }
From here, the convention is that a list goes from the List you have a reference to until Tail is null. Based on that definition, the benefits/features/limitations in the other answers come naturally.
However, your original question seems to be why the list is not defined more like:
List<T> { Node<T> Head; Node<T> Tail; }
Node<T> { T Value; Node<T> Next; }
Such a structure would allow both appending and prepending to the list without any visible effects to a reference to the original list, since it still only sees a "window" of the now-expanded list. Although this would (sometimes) allow O(1) concatenation, there are several issues such a feature would face:
The concatenation only works once. This can lead to unexpected performance behavior where one concatenation is O(1), but the next is O(n). Say for example:
listA = makeList1 ()
listB = makeList2 ()
listC = makeList3 ()
listD = listA + listB //modified Node at tail of A for O(1)
listE = listA + listC //must now make copy of A to concat with C
You could argue that the time savings for the cases where possible are worth it, but the surprise of not knowing when it will be O(1) and when O(n) are strong arguments against the feature.
All lists now take up twice as much space, even if you never plan to concatenate them.
You now have a separate List and Node type. In the current implementation, I believe F# only uses a single type like the beginning of my answer. There may be a way to do what you are suggesting with only one type, but it is not obvious to me.
The concatenation requires mutating the original "tail" node instance. While this shouldn't affect programs, it is a point of mutation, which most functional languages tend to avoid.
Or phrased another way, what kind of benefits do you get from having a basic, singly linked list with only a head pointer? The benefits of a tail pointer that I can see are:
O(1) list concatenation
O(1) Appending stuff to the right side of the list
Both of which are rather convenient things to have, as opposed to O(n) list concatenation (where n is the length of the left-side list?).
If by "tail pointer" you mean a pointer from every list to the last element in the list, that alone cannot be used to provide either of the benefits you cite. Although you could then get at the last element in the list quickly, you cannot do anything with it because it is immutable.
You could write a mutable doubly-linked list as you say but the mutability would make programs using it significantly harder to reason about because every function you call with one might change it.
As Brian said, there are purely functional catenable lists. However, they are many times slower at common operations than the simple singly-linked list that F# uses.
What advantages does dropping the tail pointer have?
30% less space usage and better performance on virtually all list operations.

Comparing arrays elements in Erlang

I'm trying to learn how to think in a functional programming way. For this, I'm trying to learn Erlang and solving easy problems from codingbat. I came across the common problem of comparing elements inside a list, for example comparing the value at the i-th position with the value at the (i+1)-th position of the list. So, I have been thinking and searching for how to do this in a functional way in Erlang (or any functional language).
Please, be gentle with me, I'm very newb in this functional world, but I want to learn
Thanks in advance
Define a list:
L = [1,2,3,4,4,5,6]
Define a function f, which takes a list
If it matches a list of one element or an empty list, return the empty list
If the first element matches the second element, then take the first element and construct a new list by calling the function recursively on the rest of the list
Otherwise skip the first element of the list.
In Erlang code
f([]) -> [];
f([_]) -> [];
f([X, X|Rest]) -> [X | f(Rest)];
f([_|Rest]) -> f(Rest).
Apply function
f(L)
This should work... I haven't compiled and run it, but it should get you started, and it should be easy to modify in case you need it to behave differently.
Welcome to Erlang ;)
I'll try to be gentle ;-) The main thing in the functional approach is thinking in terms of: What is the input? What should the output be? There is no such thing as comparing the i-th element with the (i+1)-th element on its own; there always has to be a purpose to it, which leads to a data transformation. Even Mazen Harake's example does this: it is a function which returns only the elements that are followed by the same value, i.e. it filters the given list. Typically there are very different ways to do similar things, depending on the purpose. The list is the basic functional structure and you can do amazing things with it, as Lisp shows us, but you have to remember it is not an array.
Each time you need to access the i-th element repeatedly, it indicates you are using the wrong data structure. You can build up different data structures from lists and tuples in Erlang which can serve your purposes better. So when you face the problem of comparing the i-th with the (i+1)-th element, you should stop and think: what is the purpose of it? Do you need to perform some stream data transformation, as Mazen Harake does, or do you need random access? If the second, you should use a different data structure (an array, for example). Even then you should think about your task's characteristics. If you will mostly read and almost never write, then you can use list_to_tuple(L) and read using element/2. When you need to write occasionally, you will start thinking about partitioning it into several tuples, and as your write ratio grows you will end up with an array implementation.
So you can use lists:nth/2 if you will do it only once or a few times, on a short list, and you are not a performance freak as I am. You can improve on it using [X1,X2|_] = lists:nthtail(I-1, L) (L = lists:nthtail(0,L) works as expected). If you are facing bigger lists and you want to call it many times, you have to rethink your approach.
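For example, a small shell sketch contrasting the two (the values are only illustrative):
L = [10, 20, 30, 40],
T = list_to_tuple(L),
element(2, T),                         %% 20, constant-time access
[X1, X2 | _] = lists:nthtail(1, L),    %% X1 = 20, X2 = 30: the 2nd and 3rd elements
{X1, X2}.                              %% {20,30}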
P.S.: There are many other fascinating data structures besides lists and trees. Zippers, for example.
