Combiner never gets called in reduction operation (but is mandatory) - java-stream

I am trying to figure out what the accumulator and the combiner do in the reduce stream operation.
List<User> users = Arrays.asList(new User("John", 30), new User("Julie", 35));
int result = users.stream()
    .reduce(0,
        (partialAgeResult, user) -> {
            // accumulator is called twice
            System.out.println(MessageFormat.format("partialAgeResult {0}, user {1}", partialAgeResult, user));
            return partialAgeResult + user.getAge();
        },
        (integer, integer2) -> {
            // combiner is never called
            System.out.println(MessageFormat.format("integer {0}, integer2 {1}", integer, integer2));
            return integer * integer2;
        });
System.out.println(MessageFormat.format("Result is {0}", result));
I notice that the combiner is never executed, and the result is 65.
If I use users.parallelStream() then the combiner is executed once and the result is 1050.
Why do stream and parallelStream yield different results? I don't see any side effects of executing this in parallel.
What is the purpose of the combiner in the simple stream version?

The problem is here. You are multiplying and not adding in your combiner.
(integer, integer2) -> {
    // combiner is never called
    System.out.println(MessageFormat.format("integer {0}, integer2 {1}", integer, integer2));
    return integer * integer2; // <-- should be addition
});
The combiner is used to appropriately combine various parts of a parallel operation as these operations can perform independently on individual "pieces" of the original stream.
A simple example would be summing a list of elements. You could have a variety of partial sums in a parallel operation, so you need to sum the partial sums in the combiner to get the total sum (a good exercise for you to try and see for yourself).
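For instance, here is a minimal sketch that reuses the users list from the question, with addition in both the accumulator and the combiner; run it with stream() and parallelStream() to see that the results now agree and that the combiner only fires in the parallel case:

// Matching accumulator and combiner: both add, so the grouping of
// partial results cannot change the final answer.
int total = users.parallelStream()
    .reduce(0,
        (partial, user) -> partial + user.getAge(), // accumulator
        Integer::sum);                              // combiner
System.out.println(total); // 65, same as the sequential stream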

For a sequential stream whose accumulator mixes types (a BiFunction<U, ? super T, U>), you still have to supply a combiner, but it is never invoked: there are no partial results computed in parallel that would need combining.
You can avoid supplying a combiner by converting the elements before the reduce:
users.stream().map(e -> e.getAge()).reduce(0, (a, b) -> a + b);
So for a sequential stream, a combiner paired with a BiFunction<U, ? super T, U> accumulator serves no purpose, but you have to provide one because there is no overload like
reduce(U identity, BiFunction<U,? super T,U> accumulator)
For a parallel stream, however, the combiner is called.
And you are getting 1050 because you multiply in the combiner: 30 * 35 = 1050.
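If all you actually need is the sum of the ages, here is a sketch of a pipeline that sidesteps the three-argument reduce, and with it the combiner, entirely:

// mapToInt yields an IntStream, whose sum() needs no user-supplied combiner.
int totalAge = users.stream()
    .mapToInt(User::getAge)
    .sum(); // 65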


The question is to check whether a given linked list is a palindrome. Please tell me what I am doing wrong.

I understand other approaches such as using a stack or reversing the second half of the linked list. But what is wrong with my approach?
/**
 * Definition for singly-linked list.
 * public class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode() {}
 *     ListNode(int val) { this.val = val; }
 *     ListNode(int val, ListNode next) { this.val = val; this.next = next; }
 * }
 */
class Solution {
    public boolean isPalindrome(ListNode head) {
        if (head.next == null) { return true; }
        while (head != null) {
            ListNode ptr = head, preptr = head;
            while (ptr.next != null) { ptr = ptr.next; }
            if (ptr == head) { break; }
            while (preptr.next.next != null) { preptr = preptr.next; }
            if (head.val == ptr.val) {
                preptr.next = null;
                head = head.next;
            } else {
                return false;
            }
        }
        return true;
    }
}
The following can be said about your solution:
It fails with an exception when head is null. To avoid that, you can simply remove the first if statement; that case needs no separate handling. When the list is a single node, the first iteration will execute the break, so you'll get true as the return value, and you will no longer dereference next when head is null.
It mutates the given list. This is not very nice. The caller will not expect this will happen, and may need the original list for other purposes even after this call to isPalindrome.
It is slow. Its time complexity is quadratic. If this is part of a coding challenge, then the test data may be large, and the execution of your function may then exceed the allotted time.
Using a stack is indeed a solution, but it feels like cheating: then you might as well convert the whole list to an array and test whether the array is a palindrome using its direct addressing capabilities.
You can do this with just the list as follows:
1. Count the number of nodes in the list.
2. Use that count to identify the first node of the second half of the list. If the number of nodes is odd, let this be the node after the center node.
3. Apply a list-reversal algorithm on that second half. Now you have two shorter lists.
4. Compare whether the values in those two lists are equal (ignoring the center node if there was one). Remember the outcome (false or true).
5. Repeat step 3 so the reversal is rolled back and the list is back in its original state.
6. Return the result that was found in step 4.
This takes linear time, and so for larger lists, this should outperform your solution.
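Here is a rough sketch of those six steps in Java, reusing the ListNode class from the question; the reverse helper and all other names are my own, not from the original post:

class PalindromeSolution {
    public boolean isPalindrome(ListNode head) {
        // Step 1: count the nodes.
        int count = 0;
        for (ListNode n = head; n != null; n = n.next) count++;

        // Step 2: the second half starts after ceil(count / 2) nodes,
        // which skips the center node when the count is odd.
        ListNode second = head;
        for (int i = 0; i < (count + 1) / 2; i++) second = second.next;

        // Step 3: reverse the second half in place.
        second = reverse(second);

        // Step 4: compare the first half against the reversed second half.
        boolean result = true;
        for (ListNode a = head, b = second; b != null; a = a.next, b = b.next) {
            if (a.val != b.val) { result = false; break; }
        }

        // Step 5: reverse the second half again; the first half's last
        // link still points at the original split node, so this restores
        // the whole list.
        reverse(second);

        // Step 6: return the outcome of the comparison.
        return result;
    }

    private ListNode reverse(ListNode node) {
        ListNode prev = null;
        while (node != null) {
            ListNode next = node.next;
            node.next = prev;
            prev = node;
            node = next;
        }
        return prev;
    }
}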

Time Complexity Difference between Two Parsing Implementations Using Global Variables and Return Values

I'm trying to solve the following problem:
A string containing only lower-case letters can be encoded into NUM[encoded string] format. For example, aaa can be encoded into 3[a]. Given an encoded string, find its original string according to the following grammar.
S -> {E}
E -> NUM[S] | STR # NUM[S] means encoded, while STR means not.
NUM -> 1 | 2 | ... | 9
STR -> {LETTER}
LETTER -> a | b | ... | z
Note: in the above grammar {} represents "concatenate 0 or more times".
For example, given the encoded string 3[a2[c]], the result (original string) is accaccacc.
I think this can be parsed by recursive descent parsing, and there are two ways to implement it:
Method I: Have each parsing method return its result string directly.
Method II: Use a global variable, and each parsing method can just append characters to it.
I'm wondering if the two methods share the same time complexity. Suppose the result string is of length t. Then for method II, I think its time complexity should be O(t) because we read and write every character in the result string exactly once. For method I, however, my intuition was that it could be slower because the same substring can be copied multiple times, depending on the depth of recursions. But I'm not able to figure out the exact time complexity to justify my intuition. Can anyone give a hint?
My first suggestion is that your parser should produce an abstract syntax tree rather than directly interpret the string, no matter whether you choose to write a recursive descent parser, a state-based parser or use a parser generator. This greatly enhances maintainability and allows you to perform validation, analyses, and transformations much more easily.
Method I
If I understand you correctly, in Method I you have a function for each grammar construct that returns an immutable string; these strings are then recursively concatenated into larger strings. For example, for the top-level concatenation rule
S ::= E*
you would have an interpretation function that looks like this:
string interpretS(NodeS sNode) {
    string result = "";
    for (int i = 0; i < sNode.Expressions.Length; i++) {
        result = result + interpretE(sNode.Expressions[i]);
    }
    return result;
}
... and similarly for the other rules. It is easy to see that the time complexity of Method I is O(n²) where n is the length of the output. (NB: It makes sense to measure the time complexity in terms of the output rather than the input, since the output length is exponential in the length of the input, and so any interpretation method must have time complexity at least exponential in the input, which is not very interesting.) For example, interpreting the input abcdef requires concatenating a and b, then concatenating the result with c, then concatenating that result with d etc., resulting in 1+2+3+4+5 steps. (See here for a more detailed discussion why repeated string concatenation with immutable strings has quadratic complexity.)
Method II
I interpret your description of Method II like this: instead of returning individual strings which have to be combined, you keep a reference to a mutable structure representing a string that supports appending. This could be a data structure like StringBuilder in Java or .NET, or just a dynamic-length list of characters. The important bit is that appending a string of length b to a string of length a can be done in O(b) (rather than O(a+b)).
Note that for this to work, you don't need a global variable! A cleaner solution would just pass the reference to the resulting structure through (this pattern is called an accumulator parameter). So now we would have functions like these:
void interpretS2(NodeS sNode, StringBuilder accumulator) {
    for (int i = 0; i < sNode.Expressions.Length; i++) {
        interpretE2(sNode.Expressions[i], accumulator);
    }
}

void interpretE2(NodeE eNode, StringBuilder accumulator) {
    if (eNode is NodeNum numNode) {
        for (int i = 0; i < numNode.Repetitions; i++) {
            interpretS2(numNode.Expression, accumulator);
        }
    }
    else if (eNode is NodeStr strNode) {
        for (int i = 0; i < strNode.Letters.Length; i++) {
            interpretLetter2(strNode.Letters[i], accumulator);
        }
    }
}

void interpretLetter2(NodeLetter letterNode, StringBuilder accumulator) {
    accumulator.Append(letterNode.Letter);
}
...
As you stated correctly, here the time complexity is O(n), since at each step exactly one character of the output is appended to the accumulator, and no strings are ever copied (only at the very end, when the mutable structure is converted into the output string).
So, at least for this grammar, Method II is clearly preferable.
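To make Method II concrete for this grammar, here is a rough Java sketch; all names are my own, and for brevity it interprets the input directly instead of building the recommended AST first. The StringBuilder accumulator is threaded through every call, and repetition reuses the characters already appended:

public class Decoder {
    private final String input;
    private int pos = 0;

    Decoder(String input) { this.input = input; }

    // S ::= E*  - parse until end of input or a closing ']'
    private void parseS(StringBuilder out) {
        while (pos < input.length() && input.charAt(pos) != ']') {
            parseE(out);
        }
    }

    // E ::= NUM '[' S ']' | LETTER+
    private void parseE(StringBuilder out) {
        char c = input.charAt(pos);
        if (Character.isDigit(c)) {
            int num = c - '0';      // NUM is a single digit 1..9
            pos += 2;               // skip NUM and '['
            int start = out.length();
            parseS(out);            // decode the body once
            pos++;                  // skip ']'
            String body = out.substring(start);
            for (int i = 1; i < num; i++) out.append(body); // repeat
        } else {
            while (pos < input.length() && Character.isLowerCase(input.charAt(pos))) {
                out.append(input.charAt(pos++));
            }
        }
    }

    static String decode(String s) {
        StringBuilder out = new StringBuilder();
        new Decoder(s).parseS(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("3[a2[c]]")); // accaccacc
    }
}

Every appended character ends up in the final output, so the total work is linear in the output length.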
Update based on comment
Of course, my interpretation of Method I above is exceedingly naive. A more realistic implementation of the interpretS function would internally use a StringBuilder to concatenate the results from the subexpressions, resulting in linear complexity for the example given above, abcdef.
However, this wouldn't change the worst case complexity O(n²): consider the example
1[a1[b1[c1[d1[e1[f]]]]]]
Even the less naive version of Method I would first append f to e (1 step), then append ef to d (+ 2 steps), then append def to c (+ 3 steps) and so on, amounting to 1+2+3+4+5 steps in total.
The fundamental reason for the quadratic time complexity of Method I is that the results from the subexpressions are copied to create the new subresult to be returned.
Time Complexity is estimated by counting the number of elementary operations performed by an algorithm, supposing that each elementary operation takes a fixed amount of time to perform, see here. Of interest is, however, only how fast this number of operations increases, when the size of the input data set increases.
In your case, the size of the input data means the length of the string to be parsed.
I assume that by your 1st method you mean that when a NUM is encountered, its argument is processed by the parser completely NUM times. In your example, when "3" is read from the input string, "a2[c]" is processed completely 3 times. Processing here means traversing the syntax tree down to a leaf and appending the leaf's value, here the "c", to the output string.
I also assume that by your 2nd method you mean that when a NUM is encountered, its argument is only evaluated once and all intermediate results are stored and re-used. In your example, when "3" is read from the input string and stored, "a" is read from the input string and stored, and "2[c]" is processed, i.e. "2" is read from the input string and stored, and finally "c" is processed. Due to the stored "2", "c" is combined into "cc", and due to the stored "a" into "acc". Due to the stored "3", this is then combined into "accaccacc", and "accaccacc" is output.
The question now is: what is the elementary operation that is relevant to the time complexity? My feeling is that in the 1st case the stack operations during traversal of the syntax tree are important, while in the 2nd case string copying operations are important.
Strictly speaking, one can thus not compare the time complexities of both algorithms.
If you are, however, interested in run times instead of time complexities, my guess is that the stack operations take more time than string copying, and that then method 2 is preferable.

Creating an 'add' computation expression

I'd like the example computation expression and values below to return 6. For some reason the numbers aren't yielding like I'd expect. What's the step I'm missing to get my result? Thanks!
type AddBuilder() =
    let mutable x = 0
    member _.Yield i = x <- x + i
    member _.Zero() = 0
    member _.Return() = x

let add = AddBuilder()

(* Compiler tells me that each of the numbers in add don't do anything
   and suggests putting '|> ignore' in front of each *)
let result = add { 1; 2; 3 }

(* Currently the result is 0 *)
printfn "%i should be 6" result
Note: This is just for creating my own computation expression to expand my learning. Seq.sum would be a better approach. I'm open to the idea that this example completely misses the value of computation expressions and is no good for learning.
There is a lot wrong here.
First, let's start with mere mechanics.
In order for the Yield method to be called, the code inside the curly braces must use the yield keyword:
let result = add { yield 1; yield 2; yield 3 }
But now the compiler will complain that you also need a Combine method. See, the semantics of yield is that each of them produces a finished computation, a resulting value. And therefore, if you want to have more than one, you need some way to "glue" them together. This is what the Combine method does.
Since your computation builder doesn't actually produce any results, but instead mutates its internal variable, the ultimate result of the computation should be the value of that internal variable. So that's what Combine needs to return:
member _.Combine(a, b) = x
But now the compiler complains again: you need a Delay method. Delay is not strictly necessary, but it's required in order to mitigate performance pitfalls. When the computation consists of many "parts" (as in the case of multiple yields), it's often the case that some of them should be discarded. In these situations, it would be inefficient to evaluate all of them and then discard some. So the compiler inserts a call to Delay: it receives a function which, when called, evaluates a "part" of the computation, and Delay has the opportunity to put this function in some sort of deferred container, so that later Combine can decide which of those containers to discard and which to evaluate.
In your case, however, since the result of the computation doesn't matter (remember: you're not returning any results, you're just mutating the internal variable), Delay can simply execute the function it receives so that it produces its side effect, which is mutating the variable:
member _.Delay(f) = f ()
And now the computation finally compiles, and behold: its result is 6. This result comes from whatever Combine is returning. Try modifying it like this:
member _.Combine(a, b) = "foo"
Now suddenly the result of your computation becomes "foo".
And now, let's move on to semantics.
The above modifications will let your program compile and even produce expected result. However, I think you misunderstood the whole idea of the computation expressions in the first place.
The builder isn't supposed to have any internal state. Instead, its methods are supposed to manipulate complex values of some sort, some methods creating new values, some modifying existing ones. For example, the seq builder [1] manipulates sequences. That's the type of values it handles. Different methods create new sequences (Yield) or transform them in some way (e.g. Combine), and the ultimate result is also a sequence.
In your case, it looks like the values that your builder needs to manipulate are numbers. And the ultimate result would also be a number.
So let's look at the methods' semantics.
The Yield method is supposed to create one of those values that you're manipulating. Since your values are numbers, that's what Yield should return:
member _.Yield x = x
The Combine method, as explained above, is supposed to combine two of such values that got created by different parts of the expression. In your case, since you want the ultimate result to be a sum, that's what Combine should do:
member _.Combine(a, b) = a + b
Finally, the Delay method should just execute the provided function. In your case, since your values are numbers, it doesn't make sense to discard any of them:
member _.Delay(f) = f()
And that's it! With these three methods, you can add numbers:
type AddBuilder() =
    member _.Yield x = x
    member _.Combine(a, b) = a + b
    member _.Delay(f) = f ()

let add = AddBuilder()
let result = add { yield 1; yield 2; yield 3 }
I think numbers are not a very good example for learning about computation expressions, because numbers lack the inner structure that computation expressions are supposed to handle. Try instead creating a maybe builder to manipulate Option<'a> values.
Added bonus - there are already implementations you can find online and use for reference.
[1] seq is not actually a computation expression. It predates computation expressions and is treated in a special way by the compiler. But it is good enough for examples and comparisons.

Maps, filters, folds and more? Do we really need these in Erlang?

Maps, filters, folds and more : http://learnyousomeerlang.com/higher-order-functions#maps-filters-folds
The more I read, the more confused I get.
Can anybody help simplify these concepts?
I am not able to understand the significance of these concepts. In what use cases will these be needed?
I think my difficulty is mainly with the syntax; I find it hard to follow the flow.
The concepts of mapping, filtering and folding prevalent in functional programming actually are simplifications - or stereotypes - of different operations you perform on collections of data. In imperative languages you usually do these operations with loops.
Let's take map for an example. These three loops all take a sequence of elements and return a sequence of squares of the elements:
// C - a lot of bookkeeping
int data[] = {1,2,3,4,5};
int squares_1_to_5[sizeof(data) / sizeof(data[0])];
for (int i = 0; i < sizeof(data) / sizeof(data[0]); ++i)
    squares_1_to_5[i] = data[i] * data[i];

// C++11 - less bookkeeping, still not obvious
std::vector<int> data{1,2,3,4,5};
std::vector<int> squares_1_to_5;
for (auto i = begin(data); i < end(data); i++)
    squares_1_to_5.push_back((*i) * (*i));

// Python - quite readable, though still not obvious
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    squares_1_to_5.append(x * x)
The property of a map is that it takes a collection of elements and returns the same number of somehow modified elements. No more, no less. Is it obvious at first sight in the above snippets? No, at least not until we read loop bodies. What if there were some ifs inside the loops? Let's take the last example and modify it a bit:
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    if x % 2 == 0:
        squares_1_to_5.append(x * x)
This is no longer a map, though it's not obvious before reading the body of the loop. It's not clearly visible that the resulting collection might have less elements (maybe none?) than the input collection.
We filtered the input collection, performing the action only on some elements from the input. This loop is actually a map combined with a filter.
Tackling this in C would be even more noisy due to allocation details (how much space to allocate for the output array?) - the core idea of the operation on data would be drowned in all the bookkeeping.
A fold is the most generic one, where the result doesn't have to contain any of the input elements, but somehow depends on (possibly only some of) them.
Let's rewrite the first Python loop in Erlang:
lists:map(fun (E) -> E * E end, [1,2,3,4,5]).
It's explicit. We see a map, so we know that this call will return a list as long as the input.
And the second one:
lists:map(fun (E) -> E * E end,
lists:filter(fun (E) when E rem 2 == 0 -> true;
(_) -> false end,
[1,2,3,4,5])).
Again, filter will return a list at most as long as the input, map will modify each element in some way.
The latter of the Erlang examples also shows another useful property - the ability to compose maps, filters and folds to express more complicated data transformations. Composing imperative loops like that is far less direct.
They are used in almost every application, because they abstract different kinds of iteration over lists.
map is used to transform one list into another. Let's say you have a list of key-value tuples and you want just the keys. You could write:
keys([]) -> [];
keys([{Key, _Value} | T]) ->
    [Key | keys(T)].
Then you want to have values:
values([]) -> [];
values([{_Key, Value} | T]) ->
    [Value | values(T)].
Or a list of only the third element of each tuple:
third([]) -> [];
third([{_First, _Second, Third} | T]) ->
    [Third | third(T)].
Can you see the pattern? The only difference is what you take from the element, so instead of repeating the code, you can simply write what you do for one element and use map.
GetThird = fun({_First, _Second, Third}) -> Third end,
lists:map(GetThird, List).
This is much shorter, and the shorter your code is, the fewer bugs it has. Simple as that.
You don't have to think about corner cases (what if the list is empty?), and for an experienced developer it is much easier to read.
filter searches lists. You give it a function that takes an element: if it returns true, the element will be in the returned list; if it returns false, it won't. For example, you can filter the logged-in users from a list.
foldl and foldr are used when you have to do additional bookkeeping while iterating over the list - for example summing all the elements or counting something.
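For example, a short sketch of a sum written as a fold; the second argument is the accumulator's starting value, and the fun threads the running total through:

%% 0 is the initial accumulator; the fun adds each element to it.
Sum = lists:foldl(fun(X, Acc) -> X + Acc end, 0, [1,2,3,4,5]).
%% Sum is 15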
The best explanations I've found of these functions are in books about Lisp: "Structure and Interpretation of Computer Programs" and "On Lisp", Chapter 4.

Is there a difference between foreach and map?

Ok, this is more of a computer science question than a question based on a particular language, but is there a difference between a map operation and a foreach operation? Or are they simply different names for the same thing?
Different.
foreach iterates over a list and performs some operation with side effects to each list member (such as saving each one to the database for example)
map iterates over a list, transforms each member of that list, and returns another list of the same size with the transformed members (such as converting a list of strings to uppercase)
The important difference between them is that map accumulates all of the results into a collection, whereas foreach returns nothing. map is usually used when you want to transform a collection of elements with a function, whereas foreach simply executes an action for each element.
In short, foreach is for applying an operation on each element of a collection of elements, whereas map is for transforming one collection into another.
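A hedged illustration of that contrast in Java's Stream API (class and variable names are mine):

import java.util.List;
import java.util.stream.Collectors;

class MapVsForEach {
    public static void main(String[] args) {
        List<String> names = List.of("ada", "grace");
        // map: transforms each element and collects a new list
        List<String> upper = names.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        // forEach: one side effect per element, no return value
        upper.forEach(System.out::println); // prints ADA, then GRACE
    }
}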
There are two significant differences between foreach and map.
foreach has no conceptual restrictions on the operation it applies, other than perhaps accepting an element as an argument. That is, the operation may do nothing, may have a side effect, may return a value or may not return a value. All foreach cares about is to iterate over a collection of elements, and apply the operation on each element.
map, on the other hand, does have a restriction on the operation: it expects the operation to return an element, and probably also accept an element as argument. The map operation iterates over a collection of elements, applying the operation on each element, and finally storing the result of each invocation of the operation into another collection. In other words, the map transforms one collection into another.
foreach works with a single collection of elements. This is the input collection.
map works with two collections of elements: the input collection and the output collection.
It is not a mistake to relate the two algorithms: in fact, you may view the two hierarchically, where map is a specialization of foreach. That is, you could use foreach and have the operation transform its argument and insert it into another collection. So, the foreach algorithm is an abstraction, a generalization, of the map algorithm. In fact, because foreach has no restriction on its operation we can safely say that foreach is the simplest looping mechanism out there, and it can do anything a loop can do. map, as well as other more specialized algorithms, is there for expressiveness: if you wish to map (or transform) one collection into another, your intention is clearer if you use map than if you use foreach.
We can extend this discussion further, and consider the copy algorithm: a loop which clones a collection. This algorithm too is a specialization of the foreach algorithm. You could define an operation that, given an element, will insert that same element into another collection. If you use foreach with that operation you in effect performed the copy algorithm, albeit with reduced clarity, expressiveness or explicitness. Let's take it even further: We can say that map is a specialization of copy, itself a specialization of foreach. map may change any of the elements it iterates over. If map doesn't change any of the elements then it merely copied the elements, and using copy would express the intent more clearly.
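A small Java sketch of that idea - copy expressed through forEach, at the cost of some explicitness:

import java.util.ArrayList;
import java.util.List;

class CopyViaForEach {
    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3);
        List<Integer> output = new ArrayList<>();
        // forEach with an inserting operation: this performs the copy
        // algorithm, just less explicitly than a dedicated copy (or map).
        input.forEach(output::add);
        System.out.println(output); // [1, 2, 3]
    }
}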
The foreach algorithm itself may or may not have a return value, depending on the language. In C++, for example, foreach returns the operation it originally received. The idea is that the operation might have a state, and you may want that operation back to inspect how it evolved over the elements. map, too, may or may not return a value. In C++ transform (the equivalent for map here) happens to return an iterator to the end of the output container (collection). In Ruby, the return value of map is the output sequence (collection). So, the return value of the algorithms is really an implementation detail; their effect may or may not be what they return.
The Array.prototype.map and Array.prototype.forEach methods are quite similar.
Run the following code: http://labs.codecademy.com/bw1/6#:workspace
var arr = [1, 2, 3, 4, 5];
arr.map(function(val, ind, arr){
    console.log("arr[" + ind + "]: " + Math.pow(val, 2));
});
console.log();
arr.forEach(function(val, ind, arr){
    console.log("arr[" + ind + "]: " + Math.pow(val, 2));
});
They give exactly the same result.
arr[0]: 1
arr[1]: 4
arr[2]: 9
arr[3]: 16
arr[4]: 25
arr[0]: 1
arr[1]: 4
arr[2]: 9
arr[3]: 16
arr[4]: 25
But the twist comes when you run the following code, where I've simply assigned the return values of the map and forEach methods:
var arr = [1, 2, 3, 4, 5];
var ar1 = arr.map(function(val, ind, arr){
    console.log("arr[" + ind + "]: " + Math.pow(val, 2));
    return val;
});
console.log();
console.log(ar1);
console.log();
var ar2 = arr.forEach(function(val, ind, arr){
    console.log("arr[" + ind + "]: " + Math.pow(val, 2));
    return val;
});
console.log();
console.log(ar2);
console.log();
Now the result is something tricky!
arr[0]: 1
arr[1]: 4
arr[2]: 9
arr[3]: 16
arr[4]: 25
[ 1, 2, 3, 4, 5 ]
arr[0]: 1
arr[1]: 4
arr[2]: 9
arr[3]: 16
arr[4]: 25
undefined
Conclusion
Array.prototype.map returns a new array, but Array.prototype.forEach doesn't. In the callback passed to map you return the value that should go into the resulting array, which you can then work with afterwards.
Array.prototype.forEach only walks through the given array, so you do your work as side effects while walking it.
The most 'visible' difference is that map accumulates the result in a new collection, while foreach is done only for the execution itself.
But there are a couple of extra assumptions: since the 'purpose' of map is the new list of values, the order of execution doesn't really matter. In fact, some execution environments generate parallel code, or even introduce some memoizing to avoid calling the function for repeated values, or laziness, to avoid calling some of them at all.
foreach, on the other hand, is called specifically for the side effects; therefore the order is important, and usually it can't be parallelised.
Short answer: map and forEach are different. Also, informally speaking, map is a strict superset of forEach.
Long answer: First, let's come up with one line descriptions of forEach and map:
forEach iterates over all elements, calling the supplied function on each.
map iterates over all elements, calling the supplied function on each, and produces a transformed array by remembering the result of each function call.
In many languages, forEach is often called just each. The following discussion uses JavaScript only for reference. It could really be any other language.
Now, let's use each of these functions.
Using forEach:
Task 1: Write a function printSquares, which accepts an array of numbers arr, and prints the square of each element in it.
Solution 1:
var printSquares = function (arr) {
    arr.forEach(function (n) {
        console.log(n * n);
    });
};
Using map:
Task 2: Write a function selfDot, which accepts an array of numbers arr, and returns an array wherein each element is the square of the corresponding element in arr.
Aside: Here, in slang terms, we are trying to square the input array. Formally put, we are computing its element-wise product with itself.
Solution 2:
var selfDot = function (arr) {
    return arr.map(function (n) {
        return n * n;
    });
};
How is map a superset of forEach?
You can use map to solve both tasks, Task 1 and Task 2. However, you cannot use forEach to solve Task 2.
In Solution 1, if you simply replace forEach by map, the solution will still be valid. In Solution 2 however, replacing map by forEach will break your previously working solution.
Implementing forEach in terms of map:
Another way of realizing map's superiority is to implement forEach in terms of map. As we are good programmers, we won't indulge in namespace pollution. We'll call our forEach just each.
Array.prototype.each = function (func) {
    this.map(func);
};
Now, if you don't like the prototype nonsense, here you go:
var each = function (arr, func) {
    arr.map(func); // Or map(arr, func);
};
So, umm... why does forEach even exist?
The answer is efficiency. If you are not interested in transforming an array into another array, why should you compute the transformed array? Only to dump it? Of course not! If you don't want a transformation, you shouldn't do a transformation.
So while map can be used to solve Task 1, it probably shouldn't be. forEach is the right candidate for that.
Original answer:
While I largely agree with @madlep's answer, I'd like to point out that map() is a strict super-set of forEach().
Yes, map() is usually used to create a new array. However, it may also be used to change the current array.
Here's an example:
var a = [0, 1, 2, 3, 4], b = null;
b = a.map(function (x) { a[x] = 'What!!'; return x*x; });
console.log(b); // logs [0, 1, 4, 9, 16]
console.log(a); // logs ["What!!", "What!!", "What!!", "What!!", "What!!"]
In the above example, a was conveniently set such that a[i] === i for i < a.length. Even so, it demonstrates the power of map().
Here's the official description of map(). Note that map() may even change the array on which it is called! Hail map().
Hope this helped.
Here is an example in Scala using lists: map returns list, foreach returns nothing.
def map(f: Int ⇒ Int): List[Int]
def foreach(f: Int ⇒ Unit): Unit
So map returns the list resulting from applying the function f to each list element:
scala> val list = List(1, 2, 3)
list: List[Int] = List(1, 2, 3)
scala> list map (x => x * 2)
res0: List[Int] = List(2, 4, 6)
Foreach just applies f to each element:
scala> var sum = 0
sum: Int = 0
scala> list foreach (sum += _)
scala> sum
res2: Int = 6 // res1 is empty
If you're talking about JavaScript in particular, the difference is that map collects its results into a new array, while forEach is purely for iteration.
Use map when you want to apply an operation to each member of the list and get the results back as a new list, without affecting the original list.
Use forEach when you want to do something on the basis of each element of the list. You might be adding things to the page, for example. Essentially, it's great for when you want "side effects".
Other differences: forEach returns nothing (since it is really a control-flow function), whereas map returns the new list.
In Spark, foreach applies a function, such as writing each element to a database, to every element of an RDD without returning anything back, while map applies a function over the elements of an RDD and returns a new RDD. So in the code below, collecting num2 (the result of map) works, but num3 is None because foreach returns nothing, and calling collect() on it fails:

nums = sc.parallelize([1,2,3,4,5,6,7,8,9,10])
num2 = nums.map(lambda x: x + 2)
print("num2", num2.collect())
num3 = nums.foreach(lambda x: x * x)
print("num3", num3.collect())

File "<stdin>", line 5, in <module>
AttributeError: 'NoneType' object has no attribute 'collect'
