The question is to check whether the given linked list is a palindrome. Please tell me what I am doing wrong. - linked-list

I understand other approaches such as using a stack or reversing the second half of the linked list. But what is wrong with my approach?
/**
 * Definition for singly-linked list.
 * public class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode() {}
 *     ListNode(int val) { this.val = val; }
 *     ListNode(int val, ListNode next) { this.val = val; this.next = next; }
 * }
 */
class Solution {
    public boolean isPalindrome(ListNode head) {
        if (head.next == null) { return true; }
        while (head != null) {
            ListNode ptr = head, preptr = head;
            while (ptr.next != null) { ptr = ptr.next; }
            if (ptr == head) { break; }
            while (preptr.next.next != null) { preptr = preptr.next; }
            if (head.val == ptr.val) {
                preptr.next = null;
                head = head.next;
            }
            else { return false; }
        }
        return true;
    }
}

The following can be said about your solution:
• It fails with an exception if head is null. To avoid that, you could just remove the first if statement; that case does not need separate handling. When the list is a single node, the first iteration will execute the break, so you will still get true as the return value, but you will no longer access .next when head is null.
• It mutates the given list. This is not very nice: the caller will not expect this to happen, and may need the original list for other purposes even after this call to isPalindrome.
• It is slow: its time complexity is quadratic. If this is part of a coding challenge, the test data may be large, and the execution of your function may then exceed the allotted time.
Using a stack is indeed a solution, but it feels like cheating: then you might as well convert the whole list to an array and test whether the array is a palindrome using its direct addressing capabilities.
You can do this with just the list as follows:
1. Count the number of nodes in the list.
2. Use that count to identify the first node of the second half of the list. If the number of nodes is odd, let this be the node after the center node.
3. Apply a list reversal algorithm on that second half. Now you have two shorter lists.
4. Compare whether the values in those two lists are equal (ignore the center node if there was one). Remember the outcome (false or true).
5. Repeat step 3 so the reversal is rolled back and the list is back in its original state.
6. Return the result that was found in step 4.
This takes linear time, and so for larger lists this should outperform your solution.
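A minimal Java sketch of those six steps, assuming the same ListNode class as in your snippet (the reverse helper is my own addition, not part of your code):

public boolean isPalindrome(ListNode head) {
    // Step 1: count the nodes
    int count = 0;
    for (ListNode p = head; p != null; p = p.next) { count++; }

    // Step 2: advance to the first node of the second half,
    // skipping the center node when the count is odd
    ListNode second = head;
    for (int i = 0; i < (count + 1) / 2; i++) { second = second.next; }

    // Step 3: reverse the second half in place
    second = reverse(second);

    // Step 4: compare the first half against the reversed second half
    boolean result = true;
    ListNode a = head, b = second;
    while (b != null) {
        if (a.val != b.val) { result = false; break; }
        a = a.next;
        b = b.next;
    }

    // Step 5: reverse again, rolling the list back to its original state
    reverse(second);

    // Step 6: return the outcome recorded in step 4
    return result;
}

private ListNode reverse(ListNode node) {
    ListNode prev = null;
    while (node != null) {
        ListNode next = node.next;
        node.next = prev;
        prev = node;
        node = next;
    }
    return prev;
}

Note that the caller's list is left unchanged on return, which also addresses the mutation issue above.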

Related

Combiner never gets called in reduction operation (but is mandatory)

I am trying to figure out what accumulator and combiner do in reduce stream operation.
List<User> users = Arrays.asList(new User("John", 30), new User("Julie", 35));
int result = users.stream()
        .reduce(0,
                (partialAgeResult, user) -> {
                    // accumulator is called twice
                    System.out.println(MessageFormat.format("partialAgeResult {0}, user {1}", partialAgeResult, user));
                    return partialAgeResult + user.getAge();
                },
                (integer, integer2) -> {
                    // combiner is never called
                    System.out.println(MessageFormat.format("integer {0}, integer2 {1}", integer, integer2));
                    return integer * integer2;
                });
System.out.println(MessageFormat.format("Result is {0}", result));
I notice that the combiner is never executed, and the result is 65.
If I use users.parallelStream() then the combiner is executed once and the result is 1050.
Why do stream and parallelStream yield different results? I don't see any side effects of executing this in parallel.
What is the purpose of the combiner in the simple stream version?
The problem is here: you are multiplying, not adding, in your combiner.
(integer, integer2) -> {
    // combiner is never called
    System.out.println(MessageFormat.format("integer {0}, integer2 {1}", integer, integer2));
    return integer * integer2; // <----- should be addition
});
The combiner is used to appropriately combine various parts of a parallel operation as these operations can perform independently on individual "pieces" of the original stream.
A simple example would be summing a list of elements. You could have a variety of partial sums in a parallel operation, so you need to sum the partial sums in the combiner to get the total sum (a good exercise for you to try and see for yourself).
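A small sketch of that exercise (the class name CombinerDemo is mine):

import java.util.Arrays;
import java.util.List;

class CombinerDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        // Sequential: only the accumulator runs; the result is 15.
        int sequential = numbers.stream()
                .reduce(0, (partial, n) -> partial + n, Integer::sum);
        // Parallel: partial sums are computed independently and the
        // combiner adds them together; the result is still 15.
        int parallel = numbers.parallelStream()
                .reduce(0, (partial, n) -> partial + n, Integer::sum);
        System.out.println(sequential + " " + parallel);
    }
}

With a multiplying combiner the parallel result would be wrong, exactly as in the question.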
For a sequential stream whose accumulator has mismatched argument types (a BiFunction<U,? super T,U>), you still have to supply a combiner, but it is never invoked, because there are no partial results computed in parallel that would need combining.
So you can avoid having to supply a combiner by converting the elements to the result type before the reduce:
users.stream().map(e -> e.getAge()).reduce(0, (a, b) -> a + b);
So a combiner paired with a BiFunction<U,? super T,U> accumulator actually serves no purpose for a sequential stream, but you have to provide one because there is no overload like
reduce(U identity, BiFunction<U,? super T,U> accumulator)
For a parallel stream, however, the combiner is called.
And you are getting 1050 because you are multiplying in the combiner: 30 * 35 = 1050.
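A self-contained version of that simplification, assuming a minimal User class like the one in the question:

import java.util.Arrays;
import java.util.List;

class User {
    private final String name;
    private final int age;
    User(String name, int age) { this.name = name; this.age = age; }
    int getAge() { return age; }
}

class ReduceDemo {
    public static void main(String[] args) {
        List<User> users = Arrays.asList(new User("John", 30), new User("Julie", 35));
        // Map to the result type first; then the reduction is a plain
        // addition and no separate combiner needs to be supplied.
        int total = users.stream().map(User::getAge).reduce(0, Integer::sum);
        System.out.println(total); // 65
    }
}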

I don't understand the implementation of inserting a new node in linked list

In a linked list implementation, the insertion of a new node at the front of the list is usually written like this:
void push(struct node** head_ref, int new_data)
{
    /* 1. allocate node */
    struct node* new_node = (struct node*) malloc(sizeof(struct node));

    /* 2. put in the data */
    new_node->data = new_data;

    /* 3. Make next of new node as head */
    new_node->next = (*head_ref);

    /* 4. move the head to point to the new node */
    (*head_ref) = new_node;
}
(I took this code from http://quiz.geeksforgeeks.org/linked-list-set-2-inserting-a-node/)
and the node struct is:
struct node
{
    int data;
    struct node *next;
};
What I don't understand is steps 3 and 4 of the insertion. So you make the next pointer of new_node point to the head, and then the head points to new_node? Does that mean the next pointer points to new_node?
It seems like a stupid question but I'm having trouble understanding it, so I hope someone can explain it to me. Thank you.
Well, basically in a linked list all the nodes are connected to each other. It is up to you whether you insert a new node at the end or at the start. Each time we insert a new node we check the head pointer:
if (head == NULL) // the list is empty
{
    head = newNode; // so we assign the new node to the head
}
else
{
    node* temp = head; // a temp pointer that will walk
                       // to the end of the linked list
    while (temp->next != NULL) { temp = temp->next; }
    temp->next = newNode; // this appends the new node at the end
    newNode->next = NULL;
}
If I understood correctly, this is your scenario:
http://www.kkhsou.in/main/EVidya2/computer_science/comscience/313.gif
Your list is just a linked list where each node has a next pointer, and the last item's pointer is NULL.
Step 3 makes your new node point to the node that was at the beginning of the list before this operation (it becomes the second item).
Step 4 makes the list head point to the new node.
Hope this helps
/* 1. allocate node */
struct node* new_node = (struct node*) malloc(sizeof(struct node));
/* 2. put in the data */
new_node->data = new_data;
/* 3. Make next of new node as head */
new_node->next = (*head_ref);
/* 4. move the head to point to the new node */
(*head_ref) = new_node;
In steps 1 and 2, a new node is created and the data is assigned to it.
When your list is empty, *head_ref is NULL; otherwise it points to the first node.
Let's take an example.
Initially *head_ref is NULL.
When the input is 1:
newnode.data = 1
newnode.next = NULL (the old *head_ref)
*head_ref = newnode
Now *head_ref points to the node that was just added; this is what step 4 does.
When you insert 2:
newnode.data = 2
newnode.next = *head_ref (the node holding 1)
*head_ref = newnode
Now your list is
2 -> 1 (with *head_ref at the node holding 2)
and if you add 3 here it becomes
3 -> 2 -> 1 (with *head_ref at the node holding 3)
Hope you understand.
Rather than explaining it to you, I'm going to suggest a technique that will help you work out the answer for yourself.
Get a piece of paper, a pencil and an eraser.
Draw a box on the paper to represent each variable in your algorithm
Draw the initial linked list:
Draw a box to represent each existing node in the initial linked list.
Divide each box into sub-boxes representing the fields.
In each field write either a value, or a dot representing the "from" end of a pointer.
For each pointer, draw a line to the thing (e.g. node) that is pointed to, and put an arrowhead on the "to" end. A NULL pointer is a just a dot.
Now execute the algorithm.
Each time you allocate a new node, draw a new box.
Each time you assign something to a variable, or a field, rub out the current value and write / draw in the new value or the new dot / arrow.
If you do this carefully and systematically, you will be able to visualize exactly what the list insertion algorithm is doing.
This same technique can be used to visualize any list / tree / graph algorithm ... modulo your ability to get it all onto a sheet of paper, and the paper's ability to survive repeated rub-outs.
(This pencil and paper approach is very "old school". As in, this is what we were taught to do when I learned to program in the 1970s. A slightly more modern approach would be to use a whiteboard ...)
First of all, the head pointer points to the first node in the list.
In (1) a new node is created.
In (2) the data is saved into the new node.
In (3) the new node is made to point where the head is pointing (that is, to the current first node).
In (4) the head is made to point to the new node, so the new node is now the first node. That's it.
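If it helps to see the same four steps without C pointer syntax, here is a rough Java analogue (class and method names are mine, not from the question):

class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

class LinkedListDemo {
    Node head; // plays the role of *head_ref

    void push(int newData) {
        Node newNode = new Node(newData); // steps 1 and 2: allocate and fill
        newNode.next = head;              // step 3: new node points at the old first node
        head = newNode;                   // step 4: head now points at the new node
    }

    public static void main(String[] args) {
        LinkedListDemo list = new LinkedListDemo();
        list.push(1);
        list.push(2);
        list.push(3);
        // the list is now 3 -> 2 -> 1
        for (Node n = list.head; n != null; n = n.next) {
            System.out.println(n.data);
        }
    }
}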

maps, filters, folds and more? Do we really need these in Erlang?

Maps, filters, folds and more: http://learnyousomeerlang.com/higher-order-functions#maps-filters-folds
The more I read, the more I get confused.
Can anybody help simplify these concepts?
I am not able to understand their significance. In what use cases will they be needed?
I think it is mainly the syntax that makes the flow hard for me to follow.
The concepts of mapping, filtering and folding prevalent in functional programming actually are simplifications - or stereotypes - of different operations you perform on collections of data. In imperative languages you usually do these operations with loops.
Let's take map for an example. These three loops all take a sequence of elements and return a sequence of squares of the elements:
// C - a lot of bookkeeping
int data[] = {1,2,3,4,5};
int squares_1_to_5[sizeof(data) / sizeof(data[0])];
for (int i = 0; i < sizeof(data) / sizeof(data[0]); ++i)
    squares_1_to_5[i] = data[i] * data[i];

// C++11 - less bookkeeping, still not obvious
std::vector<int> data{1,2,3,4,5};
std::vector<int> squares_1_to_5;
for (auto i = begin(data); i != end(data); i++)
    squares_1_to_5.push_back((*i) * (*i));

// Python - quite readable, though still not obvious
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    squares_1_to_5.append(x * x)
The property of a map is that it takes a collection of elements and returns the same number of somehow modified elements. No more, no less. Is it obvious at first sight in the above snippets? No, at least not until we read loop bodies. What if there were some ifs inside the loops? Let's take the last example and modify it a bit:
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    if x % 2 == 0:
        squares_1_to_5.append(x * x)
This is no longer a map, though that's not obvious before reading the body of the loop. It's not clearly visible that the resulting collection might have fewer elements (maybe none?) than the input collection.
We filtered the input collection, performing the action only on some elements from the input. This loop is actually a map combined with a filter.
Tackling this in C would be even more noisy due to allocation details (how much space to allocate for the output array?) - the core idea of the operation on data would be drowned in all the bookkeeping.
A fold is the most generic one, where the result doesn't have to contain any of the input elements, but somehow depends on (possibly only some of) them.
Let's rewrite the first Python loop in Erlang:
lists:map(fun (E) -> E * E end, [1,2,3,4,5]).
It's explicit. We see a map, so we know that this call will return a list as long as the input.
And the second one:
lists:map(fun (E) -> E * E end,
          lists:filter(fun (E) when E rem 2 == 0 -> true;
                           (_) -> false end,
                       [1,2,3,4,5])).
Again, filter will return a list at most as long as the input, and map will modify each element in some way.
The latter of the Erlang examples also shows another useful property: the ability to compose maps, filters and folds to express more complicated data transformations. That kind of direct composition is not possible with imperative loops.
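For comparison, the same filter-plus-map composition, and a fold, can be sketched with Java streams (using the JDK's java.util.stream API; reduce plays the role of the fold):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class HigherOrderDemo {
    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
        // filter then map, composed just like the Erlang example
        List<Integer> evenSquares = data.stream()
                .filter(e -> e % 2 == 0)
                .map(e -> e * e)
                .collect(Collectors.toList());
        // a fold: collapse the whole list into a single value
        int sumOfSquares = data.stream()
                .map(e -> e * e)
                .reduce(0, (acc, e) -> acc + e);
        System.out.println(evenSquares + " " + sumOfSquares); // [4, 16] 55
    }
}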
They are used in almost every application, because they abstract different kinds of iteration over lists.
map is used to transform one list into another. Let's say you have a list of key-value tuples and you want just the keys. You could write:
keys([]) -> [];
keys([{Key, _Value} | T]) ->
    [Key | keys(T)].
Then you want the values:
values([]) -> [];
values([{_Key, Value} | T]) ->
    [Value | values(T)].
Or a list of only the third element of each tuple:
third([]) -> [];
third([{_First, _Second, Third} | T]) ->
    [Third | third(T)].
Can you see the pattern? The only difference is what you take from the element, so instead of repeating the code, you can simply write what you do for one element and use map.
Third = fun({_First, _Second, Third}) -> Third end,
map(Third, List).
This is much shorter, and the shorter your code is, the fewer bugs it has. Simple as that.
You don't have to think about corner cases (what if the list is empty?), and for an experienced developer it is much easier to read.
filter searches lists. You give it a function that takes an element; if it returns true, the element will be in the returned list, and if it returns false, it will not. For example, filtering the logged-in users out of a list.
foldl and foldr are used when you have to do additional bookkeeping while iterating over the list - for example, summing all the elements or counting something.
The best explanations I've found of those functions are in books about Lisp: "Structure and Interpretation of Computer Programs" and "On Lisp", Chapter 4.

Linked-list representation of disjoint sets - omission in Intro to Algorithms text?

Having had success with my last CLRS question, here's another:
In Introduction to Algorithms, Second Edition, pp. 501-502, a linked-list representation of disjoint sets is described, wherein for each list member the following three fields are maintained:
set member
pointer to next object
pointer back to first object (the set representative).
Although linked lists could be implemented by using only a single "Link" object type, the textbook shows an auxiliary "Linked List" object that contains a pointer to the "head" link and the "tail" link. Having a pointer to the "tail" facilitates the Union(x, y) operation, so that one need not traverse all of the links in a larger set x in order to start appending the links of the smaller set y to it.
However, to obtain a reference to the tail link, it would seem that each link object needs to maintain a fourth field: a reference to the Linked List auxiliary object itself. In that case, why not drop the Linked List object entirely and use that fourth field to point directly to the tail?
Would you consider this an omission in the text?
I just opened the text and the textbook description seems fine to me.
From what I understand the data-structure is something like:
struct Set {
    LinkedListObject *head;
    LinkedListObject *tail;
};

struct LinkedListObject {
    Value set_member;
    Set *representative;
    LinkedListObject *next;
};
The textbook does not talk of any "auxiliary" linked list structure in the edition I have (second edition). Can you post the relevant paragraph?
Doing a Union would be something like:
// No error checks.
Set *Union(Set *x, Set *y) {
    x->tail->next = y->head;
    x->tail = y->tail;
    LinkedListObject *tmp = y->head;
    while (tmp) {
        tmp->representative = x;
        tmp = tmp->next;
    }
    return x;
}
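For comparison, here is the same structure with make-set and union sketched in Java (class and field names are mine, not the textbook's):

class Member {
    int value;
    Member next;
    DisjointSet set; // back-pointer to the set object holding the representative

    Member(int value) { this.value = value; }
}

class DisjointSet {
    Member head, tail;

    // make-set: a singleton set
    DisjointSet(int value) {
        head = tail = new Member(value);
        head.set = this;
    }

    // union: splice y's list onto x's tail and update the back-pointers
    static DisjointSet union(DisjointSet x, DisjointSet y) {
        x.tail.next = y.head;
        x.tail = y.tail;
        for (Member m = y.head; m != null; m = m.next) {
            m.set = x;
        }
        return x;
    }

    public static void main(String[] args) {
        DisjointSet a = new DisjointSet(1);
        DisjointSet b = new DisjointSet(2);
        DisjointSet u = union(a, b);
        System.out.println(u.head.value + " .. " + u.tail.value); // 1 .. 2
    }
}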
why not drop the Linked List object entirely and use that fourth field to point directly to the tail?
An insight can be taken from path compression. There, all the elements are supposed to point to the head of the list; if one doesn't, the find-set operation repairs that (by changing p[x] and returning it). You are suggesting something similar for the tail, so a tail field in each link could only be relied upon if a corresponding function maintained it the same way.

Can someone define what a closure is in real world language? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicate:
What is a ‘Closure’?
I read more and more about closures but I don't really get what one is from a practical standpoint. I read the Wikipedia page on it, but it doesn't really clear anything up for me, because I have more of a practical background in programming (self-taught) as opposed to a computer science background. If this is a redundant question, I apologize, as my initial search didn't yield anything that really answered it for me.
edit: Thanks for pointing me in the right direction! I see this has already been clearly answered before so I will close the question out.
Eric Lippert's blog does a pretty good job at explaining this in a practical sense.
And so does Kyle at SO
An operation is said to be closed over a set when the result of that operation also belongs to that set.
For example - Consider a set of positive integers. Addition is a closed operation over this set because adding two positive integers would always give a positive integer.
However, subtraction is not closed over this set, because 2 - 3 gives -1, which does not belong to the set of positive integers.
cheers
Two one-sentence summaries:
• a closure is the local variables for a function - kept alive after the function has returned, or
• a closure is a stack-frame which is not deallocated when the function returns. (as if a 'stack-frame' were malloc'ed instead of being on the stack!)
http://blog.morrisjohns.com/javascript_closures_for_dummies
This is probably best demonstrated by an example. Program output is (artificially) prefixed by *.
Javascript:
js> function newCounter() {
        var i = 0;
        var counterFunction = function () {
            i += 1;
            return i;
        }
        return counterFunction;
    }
js> aCounter = newCounter()
* function () {
*     i += 1;
*     return i;
* }
js> aCounter()
* 1
js> aCounter()
* 2
js> aCounter()
* 3
js> bCounter = newCounter()
* function () {
*     i += 1;
*     return i;
* }
js> bCounter()
* 1
js> aCounter()
* 4
"Real world language" is a difficult measure to gauge objectively, but I'll give it a try since after all I also lack a CS background (EE major here) and am a little self-taught;-)
In many languages, a function "sees" more than one "scope" (group) of variables -- not only its local variables, and that of the module or namespace it's in, but also (if it's within another function) the local variables of the function that contains it.
So, for example (in Python, but many other languages work similarly!):
def outer(haystack):
    def inner(needle):
        eldeen = needle[::-1]
        return (needle in haystack) or (eldeen in haystack)
    return [x for x in ['gold','silver','diamond','straw'] if inner(x)]
inner can "see" haystack without needing to see it as an argument, just because its containing function outer has haystack "in scope" (i.e. "visible"). So far, so clear, I hope -- and this isn't yet about closures, it's about lexical scoping.
Now suppose the outer function (in a language treating functions as first-class objects, and in particular allowing them to be returned as results of other functions) returns the inner one instead of just calling it internally (in Python, that's for example what usually happens when you use the decorator syntax @something).
How can the inner function (returned as a result) still refer to the outer function's variable, since the outer function has finished?
The answer is exactly this "closure" business -- those variables from the outer function which the inner (returned) function may still need are preserved and attached as attributes of the inner-function object that's returned.
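The same "kept alive after the function has returned" behavior can be shown in Java, whose lambdas close over local state (a sketch; AtomicInteger stands in for a mutable captured variable, since Java captures must be effectively final):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

class ClosureDemo {
    // Returns a function whose captured counter outlives this call.
    static IntSupplier newCounter() {
        AtomicInteger i = new AtomicInteger(0); // captured by the returned lambda
        return i::incrementAndGet;
    }

    public static void main(String[] args) {
        IntSupplier a = newCounter();
        IntSupplier b = newCounter();
        System.out.println(a.getAsInt()); // 1
        System.out.println(a.getAsInt()); // 2
        System.out.println(b.getAsInt()); // 1 -- each call to newCounter gets its own closure
    }
}

This mirrors the JavaScript counter example above: each returned function keeps its own i alive.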
