While doing LeetCode, it says that adding a node at a specific position in a singly linked list requires O(1) time complexity:
Unlike an array, we don’t need to move all elements past the inserted element. Therefore, you can insert a new node into a linked list in O(1) time complexity, which is very efficient.
When deleting, it's O(n), which makes sense because you need to traverse to the node before it and change the pointers. Isn't it the same when adding, which means it should also be O(n)?
Specifically, when adding, you still need to traverse to the node just before the index you want to insert at and change that node's .next to point to the new node.
Leetcode reference - adding: here
Leetcode reference - conclusion: chart
It is important to know what the given input is. For instance, for the insert operation on a singly linked list, you have highlighted the case where the node after which a new node should be inserted is given. This is indeed an O(1) operation, because none of the nodes that precede the given node are affected by it.
Deletion is different: if the node that must be deleted is given, it cannot be done in O(1) in a singly linked list, because the node that precedes it must be updated, and that preceding node can only be found by iterating from the start of the list.
We can "turn the tables":
What would be the time complexity if we were given a node and need to insert a new node before it? Then it will not be O(1), but O(n), for the simple reason that a change must be made to the node that precedes the given node.
What would be the time complexity if for a delete action we were given the node that precedes it? Then it can be done in O(1).
Still, if the input for either an insert or delete action is not a node reference, but an index, or a node's value, then both have a time complexity of O(n): the list must be traversed to find the given index or the given value.
So the time complexity for an action on a singly linked list depends very much on what the input is.
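To make that concrete, here is a rough Java sketch (a hypothetical bare-bones Node class, not LeetCode's exact classes, with error handling omitted): the operations that receive a node reference need no loop, while the index-based variant has to walk the list first.

// Minimal singly linked node, for illustration only.
class Node {
    int val;
    Node next;
    Node(int val) { this.val = val; }
}

class SinglyLinkedOps {
    // O(1): we already hold the node to insert after, so no traversal is needed.
    static void insertAfter(Node given, Node newNode) {
        newNode.next = given.next;
        given.next = newNode;
    }

    // O(1): we already hold the node that precedes the one to delete.
    static void deleteAfter(Node prev) {
        if (prev.next != null) {
            prev.next = prev.next.next;
        }
    }

    // O(n): only an index is given, so we must first walk from the head
    // to the k-th node (0-based) before doing the O(1) insert after it.
    static void insertAfterIndex(Node head, int k, Node newNode) {
        Node cur = head;
        for (int i = 0; i < k; i++) {
            cur = cur.next;
        }
        insertAfter(cur, newNode);
    }
}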
No, you do not need to traverse the list to insert an element after an existing, given element. For this, you only need to update the next pointers of the element you already have and of the element you are inserting. It's not necessary to know the previous element.
Note that even insertion after the last element can be implemented in O(1) on a singly-linked list, if you keep a reference to the last element of the list.
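A small sketch of that idea (again a hypothetical class, reusing the Node shape sketched above: a value plus a next pointer):

class ListWithTail {
    Node head;
    Node tail;

    // O(1) append: we never walk the list, because we always keep a reference to its last node.
    void append(Node newNode) {
        if (tail == null) {
            head = tail = newNode;   // the list was empty
        } else {
            tail.next = newNode;
            tail = newNode;
        }
    }
}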
I was reading the Lua 4.0 manual and I came across this "tag" thing but I have no idea what it is referring to.
http://www.lua.org/manual/4.0/manual.html#3
That's the section where it mentions it but I still have no idea what the manual is talking about.
TL;DR: Tags are the precursor to modern-day metatables. Where today the event-handler pairs are stored directly in the metatable using normal table manipulation (since it is just an ordinary table), back then we used tags, ordinary though unique numbers, together with special data structures; this restricted the events we could set and came with a different interface.
One of the disadvantages of tags was that, being plain numbers, they could not be garbage-collected, and so neither could their associated data.
Quoting the important parts of section 3 "types and tags":
Besides a type, all values also have a tag.
Each of the types nil, number, and string has a different tag. All values of each of these types have the same pre-defined tag. As explained above, values of type function can have two different tags, depending on whether they are Lua functions or C functions. Finally, values of type userdata and table can have variable tags, assigned by the programmer (see Section 4.8). The tag function returns the tag of a given value. User tags are created with the function newtag. The settag function is used to change the tag of a table (see Section 6.1). The tag of userdata values can only be set from C (see Section 5.7). Tags are mainly used to select tag methods when some events occur. Tag methods are the main mechanism for extending the semantics of Lua (see Section 4.8).
So, think of tags as unique ids.
Every value has a tag, depending on its type:
All values of the types nil, number, string, function (C-flavor), function (Lua flavor) have a type-specific tag set on the C side.
All values of the types table and userdata have tags too, but those are set by the programmer for each value independently of any other.
tag returns the tag, settag sets it for table and userdata, newtag creates a new one.
And looking at section 4.8 "tag methods", we understand that those unique ids are just used for comfortably associating all values of the same Lua type (or for tables and userdatas of the same semantic user-type) with special behavior:
Lua provides a powerful mechanism to extend its semantics, called tag methods. A tag method is a programmer-defined function that is called at specific key points during the execution of a Lua program, allowing the programmer to change the standard Lua behavior at these points. Each of these points is called an event.
The tag method called for any specific event is selected according to the tag of the values involved in the event (see Section 3). The function settagmethod changes the tag method associated with a given pair (tag, event). Its first parameter is the tag, the second parameter is the event name (a string; see below), and the third parameter is the new method (a function), or nil to restore the default behavior for the pair. The settagmethod function returns the previous tag method for that pair. A companion function gettagmethod receives a tag and an event name and returns the current method associated with the pair.
Which just boils down to settagmethod and gettagmethod being used to manage a mapping from tag+event to handler, and the runtime using that as an extension point.
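To make that mental model concrete (this is only an illustration of the idea, not Lua's C implementation or its API; the names below are invented), the registry behaves roughly like a map keyed by the (tag, event) pair:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Purely illustrative model of the tag-method registry described above.
class TagMethodRegistry {
    private final Map<String, Function<Object[], Object>> handlers = new HashMap<>();

    // Roughly what settagmethod does: store a handler under (tag, event)
    // and return whatever handler was registered for that pair before.
    Function<Object[], Object> set(int tag, String event, Function<Object[], Object> handler) {
        return handlers.put(tag + ":" + event, handler);
    }

    // Roughly what gettagmethod does: look the handler up again, so the
    // runtime can call it when the event fires for a value with that tag.
    Function<Object[], Object> get(int tag, String event) {
        return handlers.get(tag + ":" + event);
    }
}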
As LHF mentions below, there's a wealth of additional detail and history in The Evolution of Lua, for example how tag methods evolved from the earlier extension mechanism of "fallbacks", which was global and did not support different behavior for separate groups of values.
What is the basic difference between a stack and a queue?
Please help me, I am unable to find the difference.
How do you differentiate a stack and a queue?
I searched for the answer in various links and found this answer:
In high-level programming,
a stack is defined as a list or sequence of elements that is lengthened by placing new elements "on top" of existing elements and shortened by removing elements from the top of the existing elements. It is an ADT (Abstract Data Type) with the operations "push" and "pop".
A queue is a sequence of elements that is lengthened by placing new elements at the rear of the existing ones and shortened by removing elements from the front of the queue. It is an ADT (Abstract Data Type). There is more to these terms as they are understood in programming languages such as Java, C++, Python and so on.
Can I have a more detailed answer? Please help me.
A stack is a LIFO (last in, first out) data structure. The associated Wikipedia link contains a detailed description and examples.
A queue is a FIFO (first in, first out) data structure. The associated Wikipedia link contains a detailed description and examples.
Imagine a stack of paper. The last piece put into the stack is on the top, so it is the first one to come out. This is LIFO. Adding a piece of paper is called "pushing", and removing a piece of paper is called "popping".
Imagine a queue at the store. The first person in line is the first person to get out of line. This is FIFO. A person getting into line is "enqueued", and a person getting out of line is "dequeued".
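The same two behaviours in a short Java sketch (using java.util.ArrayDeque as the stack and java.util.LinkedList as the queue; any implementations would do):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedList;
import java.util.Queue;

public class StackVsQueueDemo {
    public static void main(String[] args) {
        // Stack of paper: the last sheet pushed is the first one popped (LIFO).
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first sheet");
        stack.push("second sheet");
        System.out.println(stack.pop());   // prints "second sheet"

        // Queue at the store: the first person enqueued is the first one dequeued (FIFO).
        Queue<String> queue = new LinkedList<>();
        queue.offer("first person");
        queue.offer("second person");
        System.out.println(queue.poll());  // prints "first person"
    }
}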
A Visual Model
Pancake Stack (LIFO)
The only way to add one and/or remove one is from the top.
Line Queue (FIFO)
When one arrives they arrive at the end of the queue and when one leaves they leave from the front of the queue.
Fun fact: the British refer to a line of people as a queue.
You can think of both as an ordered list of things (ordered by the time at which they were added to the list). The main difference between the two is how new elements enter the list and old elements leave the list.
For a stack, if I have a list a,b,c and I add d, it gets tacked on the end, so I end up with a,b,c,d. If I want to pop an element off the list, I remove the last element I added, which is d. After a pop, my list is a,b,c again.
For a queue, I add new elements in the same way: a,b,c becomes a,b,c,d after adding d. But now when I remove an element, I have to take it from the front of the list, so the queue becomes b,c,d.
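The same a,b,c,d walkthrough as a quick sketch (both sides use java.util.ArrayDeque; only the end you remove from differs):

import java.util.ArrayDeque;
import java.util.Deque;

public class AbcdDemo {
    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<>();
        Deque<String> queue = new ArrayDeque<>();
        for (String s : new String[] {"a", "b", "c", "d"}) {
            stack.addLast(s);   // both grow at the same end
            queue.addLast(s);
        }
        System.out.println(stack.removeLast());   // d  (the stack gives back the newest element)
        System.out.println(stack);                // [a, b, c]
        System.out.println(queue.removeFirst());  // a  (the queue gives back the oldest element)
        System.out.println(queue);                // [b, c, d]
    }
}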
It's very simple!
Queue
A queue is an ordered collection of items.
Items are deleted at one end, called the ‘front’ of the queue.
Items are inserted at the other end, called the ‘rear’ of the queue.
The first item inserted is the first to be removed (FIFO).
Stack
A stack is a collection of items.
It allows access to only one data item: the last item inserted.
Items are inserted and deleted at one end, called the ‘top’ of the stack.
It is a dynamic, constantly changing object.
All the data items are put on top of the stack and taken off the top.
This way of accessing is known as a Last In, First Out (LIFO) structure.
STACK:
A stack is defined as a list of elements in which we can insert or delete elements only at the top of the stack.
The behaviour of a stack is like a Last-In First-Out (LIFO) system.
A stack is used to pass parameters between functions. On a call to a function, the parameters and local variables are stored on a stack.
High-level programming languages such as Pascal, C, etc. that provide support for recursion use the stack for bookkeeping. Remember that in each recursive call there is a need to save the current values of the parameters, local variables, and the return address (the address to which control has to return after the call).
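To illustrate that bookkeeping (a sketch only, not how any particular compiler actually lays out its call frames): the recursive version below lets the language's call stack save each pending value, while the iterative version manages an explicit stack itself.

import java.util.ArrayDeque;
import java.util.Deque;

public class FactorialDemo {
    // The runtime's call stack saves each pending n and return address for us.
    static long factorialRecursive(int n) {
        return n <= 1 ? 1 : n * factorialRecursive(n - 1);
    }

    // The same computation with an explicit stack we push to and pop from ourselves.
    static long factorialIterative(int n) {
        Deque<Integer> pending = new ArrayDeque<>();
        for (int i = n; i > 1; i--) {
            pending.push(i);          // "save the parameter" of each would-be call
        }
        long result = 1;
        while (!pending.isEmpty()) {
            result *= pending.pop();  // unwind in LIFO order, like returning from the calls
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorialRecursive(5));  // 120
        System.out.println(factorialIterative(5));  // 120
    }
}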
QUEUE:
A queue is a collection of elements of the same type. It is a linear list in which insertions can take place at one end of the list, called the rear of the list, and deletions can take place only at the other end, called the front of the list.
The behaviour of a queue is like a First-In-First-Out (FIFO) system.
A stack is a collection of elements, which can be stored and retrieved one at a time. Elements are retrieved in reverse order of their time of storage, i.e. the latest element stored is the next element to be retrieved. A stack is sometimes referred to as a Last-In-First-Out (LIFO) or First-In-Last-Out (FILO) structure. Elements previously stored cannot be retrieved until the latest element (usually referred to as the 'top' element) has been retrieved.
A queue is a collection of elements, which can be stored and retrieved one at a time. Elements are retrieved in order of their time of storage, i.e. the first element stored is the next element to be retrieved. A queue is sometimes referred to as a First-In-First-Out (FIFO) or Last-In-Last-Out (LILO) structure. Elements subsequently stored cannot be retrieved until the first element (usually referred to as the 'front' element) has been retrieved.
To try and over-simplify the description of a stack and a queue,
They are both dynamic chains of information elements that can be accessed from one end of the chain and the only real difference between them is the fact that:
when working with a stack
you insert elements at one end of the chain and
you retrieve and/or remove elements from the same end of the chain
while with a queue
you insert elements at one end of the chain and
you retrieve/remove them from the other end
NOTE:
I am using the abstract wording retrieve/remove in this context because there are instances when you just retrieve the element from the chain (in a sense, just read it or access its value), instances when you remove the element from the chain, and finally instances when you do both actions with the same call.
Also, the word element is purposely used in order to abstract the imaginary chain as much as possible and decouple it from specific programming-language terms. This abstract information entity called an element could be anything: a pointer, a value, a string of characters, an object, ... depending on the language.
In most cases, though, it is actually either a value or a memory location (i.e. a pointer), and the rest just hide this fact behind the language's jargon.
A queue can be helpful when the order of the elements is important and needs to be exactly the same as when the elements first came into your program, for instance when you process an audio stream, when you buffer network data, or when you do any type of store-and-forward processing. In all of these cases the sequence of the elements needs to be output in the same order in which they came into your program, otherwise the information may stop making sense. So you could break your program into one part that reads data from some input, does some processing and writes it to a queue, and another part that retrieves data from the queue, processes it and stores it in another queue for further processing or for transmitting it.
A stack can be helpful when you need to temporarily store an element that is going to be used in the immediate step(s) of your program. For instance, programming languages usually use a stack structure to pass variables to functions. What they actually do is store (or push) the function arguments on the stack and then jump into the function, where they remove and retrieve (or pop) the same number of elements from the stack. That way the size of the stack depends on the number of nested function calls. Additionally, after a function has been called and has finished what it was doing, it leaves the stack in the exact same condition as before it was called. That way any function can operate with the stack while ignoring how other functions operate with it.
Lastly, you should know that there are other terms used out there for the same or similar concepts. For instance, a stack is sometimes loosely called a heap, although a heap usually names a different data structure altogether. There are also hybrid versions of these concepts; for instance, a double-ended queue can behave at the same time as a stack and as a queue, because it can be accessed from both ends. Additionally, the fact that a data structure is provided to you as a stack or as a queue does not necessarily mean that it is implemented as such; there are instances in which a data structure can be implemented as anything and be provided as a specific data structure simply because it can be made to behave like one. In other words, if you provide a push and a pop method to any data structure, it magically becomes a stack!
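For example, java.util.ArrayDeque is such a double-ended queue: whether it acts as a stack or as a queue is only a matter of which end you take elements from (a minimal sketch):

import java.util.ArrayDeque;
import java.util.Deque;

public class DequeBothWays {
    public static void main(String[] args) {
        Deque<Integer> deque = new ArrayDeque<>();
        deque.addLast(1);
        deque.addLast(2);
        deque.addLast(3);

        System.out.println(deque.removeLast());   // 3: remove from the end it grew at, i.e. stack behaviour
        System.out.println(deque.removeFirst());  // 1: remove from the opposite end, i.e. queue behaviour
    }
}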
STACK is a LIFO (last in, first out) list. That means: suppose 3 elements are inserted into the stack, i.e. 10, 20, 30.
10 is inserted first and 30 is inserted last, so 30 is the first to be deleted from the stack and 10 is the last to be deleted. This is a LIFO list (Last In, First Out).
QUEUE is a FIFO list (First In, First Out). That means the element inserted first is the one to be deleted first, e.g. a queue of people.
Stacks are considered a vertical collection. First understand that a collection is an OBJECT that gathers and organizes other, smaller OBJECTS. These smaller OBJECTS are commonly referred to as elements. These elements are "pushed" onto the stack in an A, B, C order, where A is first and C is last. Vertically it would look like this:
3rd element added) C
2nd element added) B
1st element added) A
Notice that the "A" which was first added to the stack is on the bottom.
If you want to remove the "A" from the stack, you first have to remove "C", then "B", and then finally your target element "A". The stack requires a LIFO (Last In, First Out) approach while dealing with its complexities. When removing an element from a stack, the correct term is pop: we don't remove an element off a stack, we "pop" it off.
Recall that "A" was the first element pushed onto the stack and "C" was the last item pushed onto it. Should you decide that you would like to see what is at the bottom of the stack, with the three elements ordered A first, B second and C third, the top would have to be popped off, and then the second element, in order to view the bottom of the stack.
Simply put, a stack is a data structure that retrieves data in the opposite order from that in which it was stored, meaning that insertion and deletion both follow the LIFO (Last In, First Out) system. You only ever have access to the top of the stack.
A queue retrieves data in the same order in which it was stored. You have access to the front of the queue when removing, and to the back when adding. This follows the FIFO (First In, First Out) system.
Stacks use push, pop, peek, size, and clear. Queues use enqueue, dequeue, peek, size, and clear.
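Written out as (hypothetical) Java interfaces, just to put the two operation sets side by side; the standard library has its own Stack class and Queue interface with slightly different method names:

// Hypothetical interfaces for illustration only.
interface MyStack<T> {
    void push(T item);  // add on top
    T pop();            // remove from the top (LIFO)
    T peek();           // look at the top without removing it
    int size();
    void clear();
}

interface MyQueue<T> {
    void enqueue(T item);  // add at the rear
    T dequeue();           // remove from the front (FIFO)
    T peek();              // look at the front without removing it
    int size();
    void clear();
}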
It is said that addition and deletion in a linked list happen in constant time, i.e. O(1), but access to elements happens in time proportional to the size of the list, i.e. O(n). My question is: how can you remove or add any element without first traversing to it? In that case, isn't addition or deletion also of the order O(n)?
Taking the example of Java, what happens when we use the API like this:
LinkedList<Stamp> stamps = new LinkedList<Stamp>();
stamps.add(new Stamp("Brazil"));
stamps.add(new Stamp("Spain"));
---
----
stamps.add(new Stamp("UnitedStates"); //say this is kth element in the list
----
stamps.add(new Stamp("India");
Then when someone does stamps.remove(k), how can this operation happen in constant time?
Deleting items from a linked list works in constant time only if you have a pointer to the actual node on the list. If the only thing you have is the information that you want to delete the "n"th node, then there is no way to know which one it is - in which case you are required to traverse the list first, which is of course O(n).
Adding, on the other hand, always works in constant time, since it is in no way connected to the number of elements already contained by the list. In the example provided, every call to add() is O(1), not including the cost of calling the constructor of class Stamp. Adding to a linked list is simply attaching another element to its end. This is, of course, assuming that the implementation of the linked list knows which node is currently at the end of the list. If it doesn't know that, then, of course, traversal of the entire list is needed.
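Roughly what an index-based remove has to do under the hood: a simplified singly linked sketch with bounds checking omitted (java.util.LinkedList is really doubly linked and walks from whichever end is closer, but the shape of the work is the same).

class SimpleLinkedList<T> {
    static class Node<E> {
        E value;
        Node<E> next;
        Node(E value) { this.value = value; }
    }

    Node<T> head;

    T removeAt(int k) {
        if (k == 0) {                      // removing the head needs no traversal
            T removed = head.value;
            head = head.next;
            return removed;
        }
        Node<T> prev = head;
        for (int i = 0; i < k - 1; i++) {  // O(n): walk to the node before index k
            prev = prev.next;
        }
        T removed = prev.next.value;
        prev.next = prev.next.next;        // O(1): the actual unlinking
        return removed;
    }
}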
According to the Wikipedia article on linked lists, inserting in the middle of a linked list is considered O(1). I would think it would be O(n). Wouldn't you need to locate the node which could be near the end of the list?
Does this analysis not account for the finding of the node operation (though it is required) and just the insertion itself?
EDIT:
Linked lists have several advantages over arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in an array may require moving half of the elements, or more.
The above statement is a little misleading to me. Correct me if I'm wrong, but I think the conclusion should be:
Arrays:
Finding the point of insertion/deletion O(1)
Performing the insertion/deletion O(n)
Linked Lists:
Finding the point of insertion/deletion O(n)
Performing the insertion/deletion O(1)
I think the only time you wouldn't have to find the position is if you kept some sort of pointer to it (as with the head and the tail in some cases). So we can't flatly say that linked lists always beat arrays for insert/delete operations.
You are correct, the article considers "Indexing" as a separate operation. So insertion is itself O(1), but getting to that middle node is O(n).
The insertion itself is O(1). Node finding is O(n).
No, when you decide that you want to insert, it's assumed you are already in the middle of iterating through the list.
Operations on linked lists are often done in such a way that they aren't really treated as a generic "list", but as a collection of nodes: think of the node itself as the iterator for your main loop. So as you're poking through the list you notice, as part of your business logic, that a new node needs to be added (or an old one deleted), and you do so. You may add 50 nodes in a single iteration, and each of those insertions is just O(1): the time to relink two adjacent nodes around your new one.
For purposes of comparing with an array, which is what that chart shows, it's O(1) because you don't have to move all the items after the new node.
So yes, they are assuming that you already have the pointer to that node, or that getting the pointer is trivial. In other words, the problem is stated: "given node at X, what is the code to insert after this node?" You get to start at the insert point.
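A sketch of that style of use (with a hypothetical Node class holding an int value and a next pointer): one O(n) walk, during which each individual insertion is O(1), because we are already standing at the insertion point.

class Node {
    int val;
    Node next;
    Node(int val) { this.val = val; }
}

class InsertWhileIterating {
    // Walk the list once; every insertion along the way is just two pointer updates.
    static void duplicateEvens(Node head) {
        for (Node cur = head; cur != null; cur = cur.next) {
            if (cur.val % 2 == 0) {
                Node copy = new Node(cur.val);
                copy.next = cur.next;   // O(1): link the new node in after the one we're standing on
                cur.next = copy;
                cur = copy;             // step over the copy so it isn't duplicated again
            }
        }
    }
}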
Insertion into a linked list is different than iterating across it. You aren't locating the item, you are resetting pointers to put the item in there. It doesn't matter if it is going to be inserted near the front end or near the end, the insertion still involves pointers being reassigned. It'll depend on how it was implemented, of course, but that is the strength of lists - you can insert easily. Accessing via index is where an array shines. For a list, however, it'll typically be O(n) to find the nth item. At least that's what I remember from school.
Inserting is O(1) once you know where you're going to put it.
Does this analysis not account for the finding of the node operation (though it is required) and just the insertion itself?
You got it. Insertion at a given point assumes that you already hold a pointer to the item that you want to insert after:
InsertItem(item * newItem, item * afterItem)
No, it does not account for searching. But if you already have hold of a pointer to an item in the middle of the list, inserting at that point is O(1).
If you have to search for it, you'd have to add on the time for searching, which should be O(n).
Because it does not involve any looping.
Inserting is like:
insert element
link to previous
link to next
done
this is constant time in any case.
Consequently, inserting n elements one after the other is O(n).
The most common cases are probably inserting at the beginning or at the end of the list (and the ends of the list might take no time to find).
Contrast that with inserting items at the beginning or the end of an array (which requires resizing the array if it's at the end, or resizing and moving all the elements if it's at the beginning).
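Those pointer updates, spelled out for a doubly linked list where "link to previous" and "link to next" are literal (a sketch with a hypothetical node type; a singly linked list simply drops the prev field):

class DNode {
    int val;
    DNode prev, next;
    DNode(int val) { this.val = val; }
}

class DoublyLinkedInsert {
    // Insert newNode immediately after the given node: a handful of pointer assignments, no loop.
    static void insertAfter(DNode given, DNode newNode) {
        newNode.prev = given;
        newNode.next = given.next;
        if (given.next != null) {
            given.next.prev = newNode;  // link the old successor back to the new node
        }
        given.next = newNode;
    }
}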
The article is about comparing arrays with lists. Finding the insert position for both arrays and lists is O(N), so the article ignores it.
O(1) depends on the fact that you already have the item where you will insert the new item (before or after). If you don't, it's O(n) because you must find that item.
I think it's just a case of what you choose to count for the O() notation. In the case of insertion, the normal operation to count is copy operations. With an array, inserting in the middle involves copying everything above the location upward in memory. With a linked list, it becomes setting two pointers. You need to find the location no matter what before you can insert.
If you have a reference to the node to insert after, the operation is O(1) for a linked list.
For an array it is still O(n), since you have to move all subsequent elements.