Why does adding to a singly linked list require O(1) constant time? - linked-list

While doing LeetCode, it says that adding after a specific node in a singly linked list requires O(1) time complexity:
Unlike an array, we don’t need to move all elements past the inserted element. Therefore, you can insert a new node into a linked list in O(1) time complexity, which is very efficient.
When deleting it's O(n) time, which makes sense because you need to traverse to the node before the one being deleted and change its pointer. Isn't it the same when adding, which would mean it should also be O(n) time complexity?
Specifically, when adding, you still need to traverse to the node at index - 1 where you want to add, and change that node's .next to the new node.
Leetcode reference - adding: here
Leetcode reference - conclusion: chart

It is important to know what the given input is. For instance, for the insert operation on a singly linked list you have highlighted the case where the node is given after which a new node should be inserted. This is indeed an O(1) operation, because none of the nodes that precede the given node are affected by it.
Deletion is different: if the node that must be deleted is given, it cannot be done in O(1) in a singly linked list, because the node that precedes it must be updated, and that preceding node can only be found by iterating from the start of the list.
We can "turn the tables":
What would be the time complexity if we were given a node and need to insert a new node before it? Then it will not be O(1), but O(n), for the simple reason that a change must be made to the node that precedes the given node.
What would be the time complexity if for a delete action we were given the node that precedes it? Then it can be done in O(1).
Still, if the input for either an insert or delete action is not a node reference, but an index, or a node's value, then both have a time complexity of O(n): the list must be traversed to find the given index or the given value.
So the time complexity for an action on a singly linked list depends very much on what the input is.
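A minimal Python sketch of the distinction (class and function names are my own, not from LeetCode): inserting after a given node touches only two pointers, while inserting at a given index must first walk the list.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    # O(1): only the given node's pointer and the new node's pointer change.
    node.next = Node(value, node.next)

def insert_at_index(head, index, value):
    # O(n): must first walk to the node at index - 1.
    # Assumes 1 <= index <= length; the insert-at-head case is omitted for brevity.
    node = head
    for _ in range(index - 1):
        node = node.next
    insert_after(node, value)
    return head

def to_list(head):
    # Helper to read the list back out for inspection.
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

Both calls end by reassigning the same two pointers; the difference in cost is entirely in the traversal that the index-based version needs first.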

No, you do not need to traverse the list to insert an element past an existing, given element. For this, you only need to update the next pointers of the element you already have and of the element you are inserting. It's not necessary to know the previous element.
Note that even insertion past the last element can be implemented in O(1) on a singly-linked list, if you keep a reference to the last element of the list.
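As a sketch of that last point (names are my own), keeping a reference to the last element makes appending O(1) with no traversal:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        # O(1): no traversal, thanks to the stored tail reference.
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node
```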

Related

LRU cache with a singly linked list

Most LRU cache tutorials emphasize using both a doubly linked list and a dictionary in combination. The dictionary holds both the value and a reference to the corresponding node on the linked list.
When we perform a remove operation, we look up the corresponding node in the dictionary and then have to remove it from the linked list.
Now here's where it gets weird. Most tutorials argue that we need the preceding node in order to remove the current node from the linked list, and that is why they use a doubly linked list: it gives O(1) access to the predecessor.
However, there is a way to remove a node from a singly linked list in O(1) time: copy the next node's value into the current node, then unlink and discard the next node.
My question is: why do all these tutorials implement an LRU cache with a doubly linked list when we could save constant space by using a singly linked list?
You are correct: a singly linked list can be used instead of a doubly linked list, as shown below:
The standard way is a hashmap pointing into a doubly linked list to make delete easy. To do it with a singly linked list without using an O(n) search, have the hashmap point to the preceding node in the linked list (the predecessor of the one you care about, or null if the element is at the front).
Retrieve list node:
hashmap(key) ? hashmap(key)->next : list.head
Delete:
successornode = hashmap(key)->next->next
hashmap(successornode->key) = hashmap(key)   // the successor's predecessor is now the deleted node's predecessor
hashmap(key)->next = successornode
hashmap.delete(key)
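Here is a rough Python sketch of that idea (names and structure are my own, and it omits the actual LRU parts such as eviction and move-to-front): the hash map stores each key's predecessor node, and a dummy head node plays the role of the "null if the element is at the front" case, so delete never has to search.

```python
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.next = None

class SinglyLinkedStore:
    """Hash map points at the *predecessor* of each node; a dummy head
    means every real node has a predecessor, so delete is O(1)."""

    def __init__(self):
        self.head = Node(None, None)   # sentinel, never deleted
        self.tail = self.head
        self.pred = {}                 # key -> predecessor node

    def append(self, key, value):
        node = Node(key, value)
        self.pred[key] = self.tail
        self.tail.next = node
        self.tail = node

    def get(self, key):
        # The node itself is one hop past its stored predecessor.
        return self.pred[key].next.value

    def delete(self, key):
        # O(1): no traversal, only pointer and dictionary updates.
        before = self.pred.pop(key)
        node = before.next
        before.next = node.next
        if node.next is not None:
            # The successor's predecessor has changed.
            self.pred[node.next.key] = before
        else:
            self.tail = before
```

The extra bookkeeping (updating the successor's map entry on every delete) is exactly the complexity the doubly linked list avoids.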
Why is the doubly linked list so common in LRU solutions, then? It is easier to understand and use.
If optimization is an issue, then the trade-off of the slightly less simple singly-linked-list solution is definitely worth it.
There are a few complications with swapping the payload:
The payload could be large (such as buffers).
Part of the application code may still refer to the payload (have it pinned).
There may be locks or mutexes involved (which can be owned by both the DLL/hash nodes and/or the payload).
In any case, modifying the DLL affects at most 2*2 pointers; swapping the payload needs (a memcpy for the swap plus) walking the hash chain (twice), which could require access to any node in the structure.

Time Complexity for Insertion Sort on Ref. Based Linked-List?

Here is the actual question:
"what is the time complexity if the insertion sort is done on a reference-based linked list?"
I am thinking it would be O(1), right? Because you check the nodes until you find the PREVIOUS node and the node that should come AFTER, set the pointers, and you're good. Therefore, not EVERY node would need to be checked, so it can't be O(n).
Big O notation generally refers to the worst case complexity.
Inserting into an already sorted list (Which I think is how you are understanding the question based on your final paragraph) would have a complexity of O(n), since the worst case is inserting an element that goes at the end of the list, meaning there are n iterations.
Performing an insertion sort on an unsorted linked list would involve inserting n elements into a linked list, giving a complexity of O(n^2).
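A sketch of insertion sort over a singly linked list (helper names are mine): each node is spliced into a growing sorted list, and it is the inner scan over that sorted prefix that makes the total O(n^2).

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insertion_sort(head):
    # Repeatedly detach the front node and insert it into a sorted list.
    # Each insertion scans the sorted prefix: O(n) worst case per node,
    # O(n^2) overall for n nodes.
    sorted_head = None
    while head is not None:
        head, node = head.next, head
        if sorted_head is None or node.value <= sorted_head.value:
            node.next = sorted_head
            sorted_head = node
        else:
            cur = sorted_head
            while cur.next is not None and cur.next.value < node.value:
                cur = cur.next
            node.next = cur.next
            cur.next = node
    return sorted_head

def to_list(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

Note that the splice itself (the two pointer assignments) is O(1); it is finding the splice point, n times over, that produces the quadratic bound.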

In practice is Linked List addition O(N) or O(1)?

It is said that addition and deletion in a linked list happen in constant time, i.e. O(1), but access to elements happens in time proportional to the size of the list, i.e. O(N). My question is: how can you remove or add any element without first traversing to it? In that case, isn't addition or deletion also of the order O(N)?
Taking the example of Java, what happens when we use the API like this:
LinkedList<Stamp> stamps = new LinkedList<>();
stamps.add(new Stamp("Brazil"));
stamps.add(new Stamp("Spain"));
// ...
stamps.add(new Stamp("UnitedStates")); // say this is the kth element in the list
// ...
stamps.add(new Stamp("India"));
Then when someone does stamps.remove(k), how can this operation happen in constant time?
Deleting items from a linked list works in constant time only if you have a pointer to the actual node on the list. If the only thing you have is the information that you want to delete the "n"th node, then there is no way to know which one it is - in which case you are required to traverse the list first, which is of course O(n).
Adding, on the other hand, always works in constant time, since it is in no way connected to the number of elements already contained by the list. In the example provided, every call to add() is O(1), not including the cost of calling the constructor of class Stamp. Adding to a linked list is simply attaching another element to its end. This is, of course, assuming that the implementation of the linked list knows which node is currently at the end of the list. If it doesn't know that, then, of course, traversal of the entire list is needed.

Best Possible algorithm to check if two linked lists are merging at any point? If so, where? [duplicate]

Possible Duplicate:
Linked list interview question
This is an interview question for which I don't have an answer.
Given two lists. You cannot change the lists and you don't know their lengths.
Give best possible algorithm to:
Check if two lists are merging at any point?
If merging, at what point they are merging?
If I allow you to change the list how would you modify your algorithm?
I'm assuming that we are talking about simple linked lists and we can safely create a hash table of the list element pointers.
Q1: Iterate to the end of both lists. If the respective last elements are the same, the lists merge at some point.
Complexity - O(N), space complexity - O(1)
Q2:
Put all elements of one list into a hash table
Iterate over 2nd list, probing the hash table for each element of the list. The first hit (if any) is the merge point, and we have the position in the 2nd list.
To get the position in the 1st list, iterate over the first list again looking for the element found in the previous step.
Time complexity - O(N). Space complexity - O(N)
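The Q2 approach might look like this in Python (illustrative names; node identity is compared, not node values):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge_point(head_a, head_b):
    # Return the first node shared by both lists, or None.
    # O(N) time, O(N) extra space for the set of visited nodes.
    seen = set()
    node = head_a
    while node is not None:
        seen.add(id(node))       # store node identity, not value
        node = node.next
    node = head_b
    while node is not None:
        if id(node) in seen:
            return node          # first node also present in list A
        node = node.next
    return None
```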
Q3:
As Q1, but also reverse the direction of the list pointers.
Then iterate the reversed lists looking for the last common element - that is the merge point - and restoring the list to the original order.
Time complexity - O(N). Space complexity - O(1)
Number 1: Just iterate both lists and then check whether they end with the same element. That's O(n) and it can't be beaten (as the last element might be the only common one, and getting there always takes O(n)).
Walk the two lists in parallel, one element at a time, adding each element to a set of visited nodes (a hash map or a simple set will do; you only need to check whether you have visited a node before). At each step, check whether the node has already been visited (if so, it is the merging point), and add it to the set if you are visiting it for the first time. Another version (as pointed out by @reinier) is to walk only the first list, store its nodes in the set, and then check only the second list against that set. The first approach is faster when the lists merge early, since you don't need to store all the nodes of the first list. The second is better in the worst case, where the lists don't merge at all, since it doesn't store the second list's nodes in the set.
see 1.
Instead of a set, you can try to mark each node, but if you cannot modify the structure, that doesn't help. You could also unlink each visited node and link it to a guard node (checking at each step whether you have encountered the guard while traversing). This saves the memory for the set if the list is long enough.
Traverse both lists to the end and compare the nodes where the traversals terminate. If the lists merge at some point, both traversals end at the same final node (one shared tail); otherwise they end at two different nodes.

Why is inserting in the middle of a linked list O(1)?

According to the Wikipedia article on linked lists, inserting in the middle of a linked list is considered O(1). I would think it would be O(n). Wouldn't you need to locate the node which could be near the end of the list?
Does this analysis not account for the finding of the node operation (though it is required) and just the insertion itself?
EDIT:
Linked lists have several advantages over arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in an array may require moving half of the elements, or more.
The above statement is a little misleading to me. Correct me if I'm wrong, but I think the conclusion should be:
Arrays:
Finding the point of insertion/deletion O(1)
Performing the insertion/deletion O(n)
Linked Lists:
Finding the point of insertion/deletion O(n)
Performing the insertion/deletion O(1)
I think the only time you wouldn't have to find the position is if you kept some sort of pointer to it (as with the head and the tail in some cases). So we can't flatly say that linked lists always beat arrays for insert/delete options.
You are correct, the article considers "Indexing" as a separate operation. So insertion is itself O(1), but getting to that middle node is O(n).
The insertion itself is O(1). Node finding is O(n).
No, when you decide that you want to insert, it's assumed you are already in the middle of iterating through the list.
Operations on Linked Lists are often done in such a way that they aren't really treated as a generic "list", but as a collection of nodes--think of the node itself as the iterator for your main loop. So as you're poking through the list you notice as part of your business logic that a new node needs to be added (or an old one deleted) and you do so. You may add 50 nodes in a single iteration and each of those nodes is just O(1) the time to unlink two adjacent nodes and insert your new one.
For purposes of comparing with an array, which is what that chart shows, it's O(1) because you don't have to move all the items after the new node.
So yes, they are assuming that you already have the pointer to that node, or that getting the pointer is trivial. In other words, the problem is stated: "given node at X, what is the code to insert after this node?" You get to start at the insert point.
Insertion into a linked list is different from iterating across it. You aren't locating the item; you are resetting pointers to put the item in there. It doesn't matter whether it is going to be inserted near the front or near the end - the insertion still just involves reassigning pointers. It depends on the implementation, of course, but that is the strength of lists: you can insert easily. Accessing via index is where an array shines. For a list, however, it'll typically be O(n) to find the nth item. At least that's what I remember from school.
Inserting is O(1) once you know where you're going to put it.
Does this analysis not account for the finding of the node operation (though it is required) and just the insertion itself?
You got it. Insertion at a given point assumes that you already hold a pointer to the item that you want to insert after:
InsertItem(item * newItem, item * afterItem)
No, it does not account for searching. But if you already have hold of a pointer to an item in the middle of the list, inserting at that point is O(1).
If you have to search for it, you'd have to add on the time for searching, which should be O(n).
Because it does not involve any looping.
Inserting is like:
insert element
link to previous
link to next
done
This is constant time in any case.
Consequently, inserting n elements one after the other is O(n).
The most common cases are probably inserting at the beginning or at the end of the list (and the ends of the list might take no time to find).
Contrast that with inserting items at the beginning or the end of an array (which requires resizing the array if it's at the end, or resizing and moving all the elements if it's at the beginning).
The article is about comparing arrays with lists. Finding the insert position for both arrays and lists is O(N), so the article ignores it.
O(1) depends on the fact that you already have the item next to which you will insert the new item (before or after). If you don't, it's O(n) because you must find that item.
I think it's just a case of what you choose to count for the O() notation. In the case of insertion, the normal operation to count is copy operations. With an array, inserting in the middle involves copying everything above the location up in memory. With a linked list, this becomes setting two pointers. You need to find the location no matter what in order to insert.
If you have a reference to the node to insert after, the operation is O(1) for a linked list.
For an array it is still O(n), since you have to move all consecutive elements.
