Data management in the node of a doubly linked list

I am implementing a doubly linked list library, and each node carries a data element. I have a function
InitList(ListPtr) which takes the list pointer passed in, creates the first and last nodes, and sets their data to 1 and 2 respectively.
Now, if I append a node, I set the data of the appended node to 3 and make it the last node.
I was also thinking of a function Insert(ListPtr, node). This node would hold some number, say 4, and suppose the list already has 10 nodes. I insert the node at the 4th position and increment the data of all remaining nodes up to the last one by 1.
My question is: what if I have 100 nodes in the list? Every insert would then mean renumbering all the following nodes.
Is this kind of data management supposed to be done at all, i.e. do I need to care about the data field like this? It helped me during the initial development process, but now it seems unnecessary.
Please let me know your thoughts.
//Each node
typedef struct Node
{
    int data;
    struct Node *next;   /* the typedef name node_t is not yet visible inside the struct */
    struct Node *prev;
} node_t;

//List always begins with first and last nodes
typedef struct List
{
    node_t *first;   // Pointer to first node in List
    node_t *last;    // Pointer to last node in List
} list_t;

Your question seems a bit confusing/non-explanatory. What kind of data do you have, and why do you want to keep sequential numbers in it? As far as I can tell, you want to keep your data ordered by position. First of all, I don't think you need a doubly linked list; a simple singly linked list would do that for you. You might also consider dynamic arrays instead. But if you really want to implement a linked list, don't make it more complex by introducing a doubly linked list, as they are harder to manage.
I would be able to answer better if you told me about the nature of your data and what you want to implement. To me there is definitely something wrong with the logic.
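If the goal is simply to place a node at a given position, a minimal sketch like the following does it purely by rewiring pointers, without touching any other node's data field. The function name InsertAt and the linking logic are my own, not part of your library; the structs mirror the ones in the question.

#include <cstddef>

typedef struct Node
{
    int data;
    struct Node *next;
    struct Node *prev;
} node_t;

typedef struct List
{
    node_t *first;
    node_t *last;
} list_t;

/* Hypothetical helper: splice 'node' in at 1-based position 'pos'.
   Only pointers are rewired; no existing node's data is modified. */
void InsertAt(list_t *list, node_t *node, int pos)
{
    node_t *cur = list->first;
    for (int i = 1; i < pos && cur != NULL; ++i)  /* walk to the insertion point */
        cur = cur->next;

    if (cur == NULL) {                            /* position past the end: append */
        node->prev = list->last;
        node->next = NULL;
        if (list->last) list->last->next = node; else list->first = node;
        list->last = node;
        return;
    }

    node->next = cur;                             /* splice in front of cur */
    node->prev = cur->prev;
    if (cur->prev) cur->prev->next = node; else list->first = node;
    cur->prev = node;
}

Nothing here reads or rewrites the data of the other 10 (or 100) nodes, which is why the data field does not need to mirror the position.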

Related

LRU cache with a singly linked list

Most LRU cache tutorials emphasize using both a doubly linked list and a dictionary in combination. The dictionary holds both the value and a reference to the corresponding node on the linked list.
When we perform a remove operation, we look up the corresponding node in the dictionary and then have to remove it from the linked list.
Now here's where it gets weird. Most tutorials argue that we need the preceding node in order to remove the current node from the linked list in O(1) time.
However, there is a way to remove a node from a singly linked list in O(1) time: copy the next node's value into the current node and then delete the next node.
My question is: why do all these tutorials show how to implement an LRU cache with a doubly linked list when we could save constant space per node by using a singly linked list?
You are correct: a singly linked list can be used instead of the doubly linked list, as described here:
The standard way is a hashmap pointing into a doubly linked list to make delete easy. To do it with a singly linked list without using an O(n) search, have the hashmap point to the preceding node in the linked list (the predecessor of the one you care about, or null if the element is at the front).
Retrieve list node:
hashmap(key) ? hashmap(key)->next : list.head
Delete:
pred = hashmap(key)                              // null if the node is at the front
node = pred ? pred->next : list.head
successor = node->next
if successor: hashmap(successor->key) = pred     // the successor's predecessor changes
if pred: pred->next = successor, else: list.head = successor
hashmap.delete(key)
Why is the doubly linked list so common in LRU solutions then? It is easier to understand and use.
If optimization is an issue, then the trade-off of the slightly less simple singly linked list solution is definitely worth it.
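As a concrete illustration of the predecessor-map idea, here is a rough sketch with made-up names (SNode, SList, pred), not code from any particular tutorial; a real LRU cache would also maintain recency ordering on top of this.

#include <string>
#include <unordered_map>

// Singly linked list where a hash map stores, for each key, the node *before*
// the node holding that key (nullptr means the node is at the front).
struct SNode {
    int key;
    std::string value;
    SNode *next;
};

struct SList {
    SNode *head = nullptr;
    SNode *tail = nullptr;
    std::unordered_map<int, SNode *> pred;   // key -> predecessor node

    void push_back(int key, const std::string &value) {
        SNode *node = new SNode{key, value, nullptr};
        pred[key] = tail;                    // predecessor of the new node
        if (tail) tail->next = node; else head = node;
        tail = node;
    }

    SNode *find(int key) {
        auto it = pred.find(key);
        if (it == pred.end()) return nullptr;
        return it->second ? it->second->next : head;
    }

    void erase(int key) {
        auto it = pred.find(key);
        if (it == pred.end()) return;
        SNode *before = it->second;
        SNode *node = before ? before->next : head;
        SNode *after = node->next;

        if (before) before->next = after; else head = after;
        if (after) pred[after->key] = before;   // the successor's predecessor changes
        else tail = before;                     // we removed the last node
        pred.erase(it);
        delete node;
    }
};

The extra bookkeeping compared with a doubly linked list is exactly the pred-map update on the successor, which is the "slightly less simple" part mentioned above.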
There are a few complications with swapping the payload instead:
the payload could be large (such as buffers),
part of the application code may still refer to the payload (have it pinned),
there may be locks or mutexes involved (which can be owned by the DLL/hash nodes and/or the payload).
In any case, modifying the DLL affects at most 2*2 pointers; swapping the payload needs (a memcpy for the swap plus) walking the hash chain (twice), which could need access to any node in the structure.
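For completeness, a sketch of the "copy the successor and delete it" trick from the question, with my own names; the caveats above about large payloads, pinned references, and locks all apply, and it cannot remove the true tail.

#include <cassert>
#include <string>

struct VNode {
    std::string payload;
    VNode *next;
};

// Remove 'node' from a singly linked list without knowing its predecessor:
// overwrite it with its successor's payload and unlink the successor.
void erase_without_predecessor(VNode *node)
{
    assert(node != nullptr && node->next != nullptr);  // the tail cannot be removed this way
    VNode *victim = node->next;
    node->payload = victim->payload;   // the payload copy the discussion above warns about
    node->next = victim->next;
    delete victim;                     // any outside pointer to 'victim' is now dangling
}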

Having More than One Info Part in LinkedList

We have been given an assignment to implement a priority queue using linked lists. The logic in my mind is that if I add two info parts to the node, one containing the data to print and the other storing a key to prioritize the node, then I can dequeue nodes according to their priority.
Now I am just confused as to whether it is legal to add two info parts to a single node?
Like
private class Node {
    private int priority;
    private String job;
    private Node next;
}
If it is a doubly linked list then the reverse pointer is also necessary.
It's certainly fine to put two pieces of information in a node in a linked list. In fact, if you're building a priority queue, you will probably need some kind of priority 'key' to order your queue, as well as a 'value' (a.k.a. 'data' or 'payload') that the node holds onto for later use.
In your case, the String is the value and the int is the key / priority. You can think of this node as having one piece of information (the String), besides its key.
If that's not exactly what you're after, you could make a more flexible linked list that could hold any data in its node, including a single piece of data that contained both an int and a String. This could therefore be used for a priority queue or any other kind of abstract data structure built on a linked list.
Your code looks like Java, so if you'd like to know how to make this more flexible node in Java you can look into Generics in Java.
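Sketching the same idea with C++ templates (in Java you would reach for generics, as suggested above; the names PQNode and LinkedPriorityQueue are illustrative only): the node carries a priority key plus an arbitrary payload, and the list is kept sorted so dequeue just takes the head.

template <typename T>
struct PQNode {
    int priority;          // the key used for ordering
    T value;               // the payload ("job")
    PQNode *next;
};

template <typename T>
struct LinkedPriorityQueue {
    PQNode<T> *head = nullptr;

    // Insert in ascending priority order (smallest key dequeues first).
    void enqueue(int priority, const T &value) {
        PQNode<T> **link = &head;
        while (*link && (*link)->priority <= priority)   // '<=' keeps FIFO order for equal keys
            link = &(*link)->next;
        PQNode<T> *node = new PQNode<T>{priority, value, *link};
        *link = node;
    }

    // Remove and return the front element; returns false if the queue is empty.
    bool dequeue(T &out) {
        if (!head) return false;
        PQNode<T> *node = head;
        head = head->next;
        out = node->value;
        delete node;
        return true;
    }
};

For example, enqueue(2, "print report") followed by enqueue(1, "backup") would dequeue "backup" first, because its key is smaller.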

What would be the benefit of having a root node in a linked list

At community college, I was told to implement a linked list with an empty starting node and to append data nodes after it, but at university they don't use an empty node. I remember there were advantages to having an empty node, but I cannot recall them at this point.
What would be the benefit of having an empty node?
One that I can think of is that the empty starting node can store list properties, such as the size of the linked list, and because it never gets deleted, we can always read those properties from it.
This is an example of having an empty node: (also refer to empty node implementation)
(EmptyNode)->(1st Data)->(2nd Data)->null
And this is an example of not having an empty node which is more common.
(1st Data)->(2nd Data)->null
Thank you in advance.
The advantage of an empty node is that it's easier to represent an empty list that still otherwise exists.
While you can sometimes represent an empty list as simply null, the disadvantage is that it assumes lists are always represented as pointers. Another disadvantage is that you can't call any functions on null, which makes the interface awkward.
Imagine:
// With a root (sentinel) node:
RootNodedListNode<char> list;            // start empty
list.add('a');
list.add('b');

// Without a root node:
RootlessListNode<char> *list2 = nullptr; // start empty
//list2->add('a');                       // can't call add() on a null pointer
list2 = new RootlessListNode<char>('a');
list2->add('b');
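A rough sketch of how the sentinel removes those special cases (class and member names are mine, not from either course):

template <typename T>
class SentinelList {
    struct Node {
        T value;
        Node *next;
    };
    Node sentinel{T(), nullptr};   // never holds user data and is never deleted
    Node *tail = &sentinel;
    int count = 0;                 // a "list property" kept in the always-present root

public:
    void add(const T &value) {     // no null check, even on the very first add
        tail->next = new Node{value, nullptr};
        tail = tail->next;
        ++count;
    }
    int size() const { return count; }
    bool empty() const { return sentinel.next == nullptr; }
};

SentinelList<char> list; list.add('a'); works with no null handling at all, which is exactly the difference the two snippets above illustrate.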

To find the Nth node from last when visited nodes are getting deleted

So the question is: find the kth node from the last in a linked list if nodes are disappearing once read. This should be done in one pass only.
Try to avoid extra memory.
I am aware of the trivial solution to this problem where two pointers (say P and Q) start at the header node; P is incremented N times, and after that both pointers are incremented together. When P reaches the end, Q points to the Nth-from-last element.
But the question is somewhat different here: the nodes disappear once they are read, so there is no way the two-pointer approach can be used.
And kindly don't close the question before reading it, because this question is different.
Thanks
Keep storing the K most recently read elements somewhere. For example, if K is 6, store the 6 latest nodes read as you traverse the linked list; on reading the next node, store it and drop the oldest stored one. Once the linked list ends, you will have the last K elements stored (use a linked list or an array etc. for this), and the Kth element from the last will be the oldest stored element.
This may not be the most efficient solution, as I was typing while I was thinking, but it should work.
Create a queue of size K.
Sequentially read each element of the list.
As each node is read, copy its value and enqueue it onto the queue. If the queue is already full, dequeue the oldest element as well.
After reading the last node in the list, dequeue the queue; the dequeued value is the Kth-to-last element.
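A sketch of that approach; the node layout and the detail that reading a node frees it are assumptions on my part.

#include <deque>

struct KNode {
    int value;
    KNode *next;
};

// One pass over a list that is destroyed as it is read. Keeps copies of the
// last k values seen; the oldest of them is the kth from the end.
bool kth_from_last(KNode *head, int k, int &out)
{
    std::deque<int> window;
    while (head) {
        window.push_back(head->value);
        if ((int)window.size() > k)
            window.pop_front();        // drop the value that is now more than k from the end
        KNode *gone = head;
        head = head->next;
        delete gone;                   // the node "disappears" once read
    }
    if ((int)window.size() < k)
        return false;                  // the list had fewer than k nodes
    out = window.front();
    return true;
}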

Best Possible algorithm to check if two linked lists are merging at any point? If so, where? [duplicate]

Possible Duplicate:
Linked list interview question
This is an interview question for which I don't have an answer.
Given two lists, you cannot change the lists and you don't know their lengths.
Give the best possible algorithm to:
Check whether the two lists merge at any point.
If they merge, at what point do they merge?
If I allow you to change the lists, how would you modify your algorithm?
I'm assuming that we are talking about simple linked lists and we can safely create a hash table of the list element pointers.
Q1: Iterate to the end of both lists. If the respective last elements are the same, the lists merge at some point.
Complexity - O(N), space complexity - O(1)
Q2:
Put all elements of one list into a hash table
Iterate over 2nd list, probing the hash table for each element of the list. The first hit (if any) is the merge point, and we have the position in the 2nd list.
To get the position in the 1st list, iterate over the first list again looking for the element found in the previous step.
Time complexity - O(N). Space complexity - O(N)
Q3:
As in Q1, but also reverse the direction of the list pointers as you walk.
Then iterate the reversed lists looking for the last common element - that is the merge point - and restore the lists to their original order.
Time complexity - O(N). Space complexity - O(1)
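A compact sketch of Q1/Q2 combined, hashing node addresses (the node layout is assumed, since the question doesn't give one):

#include <unordered_set>

struct LNode {
    int value;
    LNode *next;
};

// Returns the first node shared by both lists, or nullptr if they never merge.
// O(N) time, O(N) extra space.
LNode *merge_point(LNode *a, LNode *b)
{
    std::unordered_set<const LNode *> seen;
    for (LNode *p = a; p != nullptr; p = p->next)
        seen.insert(p);                 // remember every node of the first list
    for (LNode *p = b; p != nullptr; p = p->next)
        if (seen.count(p) != 0)         // first node already seen is the merge point
            return p;
    return nullptr;
}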
Number 1: Just iterate both and then check if they end with the same element. That's O(n) and it can't be beaten (as it might possibly be the last element that is common, and getting there always takes O(n)).
Walk the two lists in parallel, one element at a time, adding each element to a Set of visited nodes (it can be a hash map or a simple set; you only need to check whether you visited a node before). At each step check whether you have already visited the node (if yes, it's the merge point), and add it to the set if you are visiting it for the first time. Another version (as pointed out by @reinier) is to walk only the first list, store its nodes in the Set, and then check only the second list against that Set. The first approach is faster when your lists merge early, as you don't need to store all nodes from the first list. The second is better in the worst case, where the lists don't merge at all, since it doesn't store nodes from the second list in the Set.
see 1.
Instead of a Set, you could try to mark each node, but if you cannot modify the structure then that's not so helpful. You could also try to unlink each visited node and link it to some guard node (which you check against at each step while traversing). This saves the memory a Set would use, if the list is long enough.
Traverse both lists and keep a global count of the NULLs encountered. If they merge at some point there will be only one NULL, else there will be two NULLs.

Resources