Dequeue an item from a one-element queue - linked list

Given a singly linked list implementation of a queue type with front and rear pointers, when you dequeue an item from a one-element queue, do you need to set the rear pointer to null?
I'm reading C++ Plus Data Structures by Nell Dale, and in chapter 5.2 her Dequeue method contains:
if (front == NULL)
    rear = NULL;
I'm wondering why this is necessary. The only reason I can think of is the way Enqueue is implemented with regard to an empty queue:
if (rear == NULL)
    front = newNode;
else
    rear->next = newNode;
rear = newNode;
But couldn't this condition be changed to if (front == NULL)?

For correct code, you need to maintain the data structure invariants with each operation. The way the original code is written, the invariants look like this:
A1: front points to the first element, if it exists; NULL otherwise
A2: rear points to the last element, if it exists; NULL otherwise
You're claiming that the invariants could alternatively be these:
B1: front points to the first element, if it exists; NULL otherwise
B2: rear points to the last element, if it exists
These are similar, but not identical. The B invariants don't require any particular value of rear when the list is empty. You can still use the B invariants to implement the list, since they let you discriminate between empty and non-empty states, and that's enough to provide a correct implementation.
But that doesn't say anything about whether A or B is better in practice. If you use the B invariants, a function to peek at the last element cannot simply return the value of rear, since its value can be anything if the queue is empty; you have to test the front value first.
// for the A2 invariant
return rear;
// for the B2 invariant
return (front == NULL) ? NULL : rear;
This boils down to whether you want a test-and-assign when you dequeue or a test when you peek at the last element. That is an optimization question, not a correctness one. If you never need to peek at the last element, you can optimize for that.
All this said, this is a prime case for the hazards of premature optimization. An extra pointer assignment to NULL is almost never going to be a performance problem. What's much more likely to be a problem is introducing a defect during maintenance when someone relies on the A invariants while the existing code uses the B ones. The A invariants are cognitively simpler, because front and rear are set up analogously, with neither having special behavior. The B invariants might perform better, but at the cost of complexity. Either choice might be better depending on your circumstances.
The Moral Of This Story: Always document your invariants.
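A minimal sketch in Python of a queue that maintains the A invariants (class and method names are my own, not from the book):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class Queue:
    """Singly linked queue maintaining the A invariants:
    front and rear point to the first/last element, or None when empty."""
    def __init__(self):
        self.front = None
        self.rear = None

    def enqueue(self, value):
        node = Node(value)
        if self.rear is None:       # empty queue
            self.front = node
        else:
            self.rear.next = node
        self.rear = node

    def dequeue(self):
        node = self.front
        self.front = node.next
        if self.front is None:      # queue became empty:
            self.rear = None        # keep A2 (rear is None when empty)
        return node.value
```

Note the dequeue ends with exactly the test-and-assign the question asks about; dropping it gives the B invariants instead.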


Lua - Most resource efficient way to compare a variable to a fixed list of values

So I have a variable, let's call it 'ID'. I need to check this value against a fixed set of values. The ID can only match one of the values, so there isn't an issue with stopping on the first matching value, as none of the others would match. There is also a chance that the variable does not match any of the given values. My question is: what is the most resource-efficient way to do this? I can think of two easy ways of tackling the problem. Since I know the values at the time of programming, I can set up a conditional with 'or' that just checks each value, like so:
if (ID == "1" or ID == "16" or ID == "58") then
    --do something--
end
The problem with this is that it's quite verbose and tedious to write. The other option involves a foreach loop where I define a table beforehand.
values = {"1", "16", "58"}
for _, value in ipairs(values) do
    if (ID == value) then
        return true
    end
end
The upside to this is that it's reusable, which is good since I'll need to do this exact check with a different set of values at least 10 times; the downside is that I suspect it takes more resources.
Any help would be greatly appreciated.
Tables can be used as sets:
interesting = {
    ["1"] = true, ["16"] = true, ["58"] = true
}
if interesting[ID] then
    -- ...
end
While it eats more memory (80 bytes per empty table plus 32 bytes per entry (IIRC, on x86_64), with the number of entries rounded up to the next power of two, versus 16 bytes per comparison plus storage for the value that you compare against), the lookup happens in C and is therefore faster than a chain of comparisons executed as a sequence of Lua instructions (at least once things get larger).
For small numbers of values, this doesn't really matter. (If you are CPU-bound and this is very important in your case, measure in the context of your program and see what performs better. Don't put too much weight on micro-benchmarks with this – cache behavior in particular might produce funny effects here.)
For large numbers of comparisons, this is the right approach. It's also more flexible than if-then-else chains. (You can change things at runtime without reloading code.)
Also note that the value you use to anchor an element in the set doesn't really matter, so a relatively common idiom (especially for input handling) is putting the action as a function into the table:
keybindings = {
    left = function() Player:move_left( ) end,
    right = function() Player:move_right( ) end,
    up = function() Player:jump( ) end,
    -- ...
}
function onKey( k )
    local action = keybindings[k]
    if action then action( ) end
end
While this certainly is slower than a direct comparison and inline code, speed is essentially irrelevant here (key events generally happen much less often than ~100 times per second) and the flexibility is of high value.

Swift 3 and Index of a custom linked list collection type

In Swift 3 Collection indices have to conform to Comparable instead of Equatable.
The full story can be read in swift-evolution/0065.
Here's a relevant quote:
Usually an index can be represented with one or two Ints that
efficiently encode the path to the element from the root of a data
structure. Since one is free to choose the encoding of the “path”, we
think it is possible to choose it in such a way that indices are
cheaply comparable. That has been the case for all of the indices
required to implement the standard library, and a few others we
investigated while researching this change.
In my implementation of a custom linked list collection a node (pointing to a successor) is the opaque index type. However, given two instances, it is not possible to tell if one precedes another without risking traversal of a significant part of the chain.
I'm curious, how would you implement Comparable for a linked list index with O(1) complexity?
The only idea that I currently have is to somehow count steps while advancing the index, storing it within the index type as a property and then comparing those values.
A serious downside of this solution is that indices must be invalidated when mutating the collection. And while that seems reasonable for arrays, I do not want to break the huge benefit linked lists have: they do not invalidate indices of unchanged nodes.
EDIT:
It can be done at the cost of two additional integer collection properties, assuming the singly linked list implements only front insert, front remove, and back append. Any meddling around in the middle would break the O(1) complexity requirement anyway.
Here's my take on it.
a) I introduced one private integer type property to my custom Index type: depth.
b) I introduced two private integer type properties to the collection: startDepth and endDepth, which both default to zero for an empty list.
Each front insert decrements the startDepth.
Each front remove increments the startDepth.
Each back append increments the endDepth.
Thus all indices startIndex..<endIndex have a reflecting integer range startDepth..<endDepth.
c) Whenever the collection vends an index via startIndex or endIndex, it inherits its corresponding depth value from the collection. When the collection is asked to advance the index by invoking index(after:), I simply initialize a new Index instance with an incremented depth value (depth += 1).
Conforming to Comparable boils down to comparing left-hand side depth value to the right-hand side one.
Note that because I expand the integer range from both sides as well, all the depth values for the middle indices remain unchanged (thus are not invalidated).
Conclusion:
I traded the benefit of O(1) index comparisons for a minor increase in memory footprint and a few integer increments and decrements. I expect index lifetimes to be short and the number of collections to be relatively small.
If anyone has a better solution I'd gladly take a look at it!
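The depth scheme above can be sketched language-agnostically; here is a Python rendering (names are illustrative; the real thing would conform to Swift's Collection and Comparable):

```python
from functools import total_ordering

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

@total_ordering
class Index:
    """Linked-list index carrying a depth counter, so comparison is O(1)."""
    def __init__(self, node, depth):
        self.node = node
        self.depth = depth
    def __eq__(self, other):
        return self.depth == other.depth
    def __lt__(self, other):
        return self.depth < other.depth

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None
        self.start_depth = 0   # decremented by each front insert
        self.end_depth = 0     # incremented by each back append

    def push_front(self, value):
        self.head = Node(value, self.head)
        if self.tail is None:
            self.tail = self.head
        self.start_depth -= 1

    def append(self, value):
        node = Node(value)
        if self.tail is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node
        self.end_depth += 1

    def start_index(self):
        return Index(self.head, self.start_depth)

    def end_index(self):
        return Index(None, self.end_depth)

    def index_after(self, i):
        return Index(i.node.next, i.depth + 1)
```

Because a front insert only decrements start_depth, the depth values of existing middle indices never change, which is how they stay valid.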
I may have another solution. If you use floats instead of integers, you can get kind-of O(1) insertion-in-the-middle performance if you set the sortIndex of the inserted node to a value between the predecessor's and the successor's sortIndex. This requires storing (and updating) the sortIndex on your nodes (I imagine this should not be too hard, since it only changes on insertion or removal and can always be propagated 'up').
In your index(after:) method you need to query the successor node, but since you use your node as the index, that is straightforward.
One caveat is the finite precision of floating point: if, on insertion, the distance between the two sort indices is too small, you need to reindex at least part of the list. Since you said you only expect small scale, I would just go through the whole list and use the position for that.
This approach has all the benefits of your own, with the added benefit of good performance on insertion in the middle.
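The fractional-key idea boils down to a tiny helper; here is a Python sketch (the name key_between is my own, purely illustrative):

```python
def key_between(prev_key, next_key):
    """Pick a sort key strictly between two neighbouring keys in O(1).

    Raises when float precision runs out, which is the signal that
    (part of) the list must be reindexed.
    """
    mid = (prev_key + next_key) / 2.0
    if not (prev_key < mid < next_key):
        raise OverflowError("gap too small; reindex the list")
    return mid
```

Repeated insertion at the same spot halves the gap each time, so with 64-bit floats you get roughly 50 insertions between two fixed neighbours before a reindex is forced.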

Is there a more efficient way to find the middle of a singly-linked list? (Any language)

Let's set the context/limitations:
A linked-list consists of Node objects.
Nodes only have a reference to their next node.
A reference to the list is only a reference to the head Node object.
No preprocessing or indexing has been done on the linked-list other than construction (there are no other references to internal nodes or statistics collected, i.e. length).
The last node in the list has a null reference for its next node.
Below is some code for my proposed solution.
Node cursor = head;
Node middle = head;
while (cursor != null) {
    cursor = cursor.next;
    if (cursor != null) {
        cursor = cursor.next;
        middle = middle.next;
    }
}
return middle;
Without changing the linked-list architecture (not switching to a doubly-linked list or storing a length variable), is there a more efficient way to find the middle element of singly-linked list?
Note: When this method finds the middle of an even number of nodes, it always finds the left middle. This is ideal as it gives you access to both, but if a more efficient method will always find the right middle, that's fine, too.
No, there is no more efficient way, given the information you have available to you.
Think about it in terms of transitions from one node to the next. You have to perform N transitions to work out the list length. Then you have to perform N/2 transitions to find the middle.
Whether you do this as a full scan followed by a half scan based on the discovered length, or whether you run the cursor (at twice speed) and middle (at normal speed) pointers in parallel is not relevant here, the total number of transitions remains the same.
The only way to make this faster would be to introduce extra information to the data structure which you've discounted but, for the sake of completeness, I'll include it here. Examples would be:
making it a doubly-linked list with head and tail pointers, so you could find it in N transitions by "squeezing" in from both ends to the middle. That doubles the storage requirements for pointers however so may not be suitable.
having a skip list with each node pointing to both its "child" and its "grandchild". This would speed up the cursor transitions, resulting in only about N in total (that's N/2 for each of cursor and middle). Like the previous point, this requires an extra pointer per node.
maintaining the length of the list separately so you could find the middle in N/2 transitions.
same as the previous point but caching the middle node for added speed under certain circumstances.
That last point bears some extra examination. Like many optimisations, you can trade space for time and the caching shows one way to do it.
First, maintain the length of the list and a pointer to the middle node. The length is initially zero and the middle pointer is initially set to null.
If you're ever asked for the middle node when the length is zero, just return null. That makes sense because the list is empty.
Otherwise, if you're asked for the middle node and the pointer is null, it must be because you haven't cached the value yet.
In that case, calculate it using the length (N/2 transitions) and then store that pointer for later, before returning it.
As an aside, there's a special case here when adding to the end of the list, something that's common enough to warrant special code.
When adding to the end when the length is going from an even number to an odd number, just set middle to middle->next rather than setting it back to null.
This will save a recalculation and works because you (a) have the next pointers and (b) you can work out how the middle "index" (one-based and selecting the left of a pair as per your original question) changes given the length:
Length   Middle (one-based)
------   ------------------
   0     none
   1     1
   2     1
   3     2
   4     2
   5     3
   :     :
This caching means, provided the list doesn't change (or only changes at the end), the next time you need the middle element, it will be near instantaneous.
If you ever delete a node from the list (or insert somewhere other than the end), set the middle pointer back to null. It will then be recalculated (and re-cached) the next time it's needed.
So, for a minimal extra storage requirement, you can gain quite a bit of speed, especially in situations where the middle element is needed more often than the list is changed.
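The caching scheme described above can be sketched in Python (names are illustrative; only append-at-end and the lazy recalculation are shown):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class CachedList:
    """Singly linked list caching its length and (left) middle node."""
    def __init__(self):
        self.head = None
        self.tail = None
        self.length = 0
        self.middle = None   # None means "not cached yet"

    def append(self, value):
        node = Node(value)
        if self.head is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node
        self.length += 1
        # Special case: appending while the middle is cached.
        # Per the table above, going even -> odd moves the left
        # middle one step to the right; odd -> even leaves it alone.
        if self.middle is not None and self.length % 2 == 1:
            self.middle = self.middle.next

    def get_middle(self):
        if self.length == 0:
            return None
        if self.middle is None:                  # recalculate and cache
            node = self.head
            for _ in range((self.length - 1) // 2):
                node = node.next
            self.middle = node
        return self.middle
```

A delete or a non-tail insert would simply set self.middle back to None, forcing a recalculation on the next get_middle call.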

Time Complexity of Doubly Linked List Element Removal?

A lot of what I'm reading says that removing an internal element in a doubly linked list (DLL) is O(1); but why is this the case?
I understand why it's O(n) for SLLs: traversing the list is O(n) and the removal itself is O(1). But don't you still need to traverse the list in a DLL to find the element?
For a doubly linked list, it's constant time to remove an element once you know where it is.
For a singly linked list, it's constant time to remove an element once you know where it and its predecessor are.
Since the link you point to shows singly linked list removal as O(n) and doubly linked removal as O(1), it's certain that's the cost once you already know where the element is that you want to remove, and nothing else.
In that case, for a doubly linked list, you can just use the prev and next pointers to remove it, giving you O(1). Ignoring the edge cases where you're at the head or tail, that means something like:
corpse->prev->next = corpse->next;
corpse->next->prev = corpse->prev;
free(corpse);
However, in a singly linked list where you only know the node you want deleted, you can't use corpse->prev to get the one preceding it because there is no prev link.
You have to instead find the previous item by traversing the list from the head, looking for one which has a next of the element you want to remove. That will take O(n), after which it's once again O(1) for the actual removal, such as (again, ignoring the edge cases for simplicity):
lefty = head
while lefty->next != corpse:
    lefty = lefty->next
lefty->next = corpse->next
free(corpse)
That's why the two complexities are different in that article.
As an aside, deleting by key in a singly-linked list is O(n) overall: the unlink itself is effectively O(1) once you've found the item you want to delete and its predecessor, but finding them costs O(n). In code terms, that goes something like:
# Delete a node, returns true if found, otherwise false.
def deleteItem(key):
    # Special cases (empty list and deleting head).
    if head == null: return false
    if head.data == key:
        curr = head
        head = head.next
        free curr
        return true
    # Search non-head part of list (so prev always exists).
    prev = head
    curr = head.next
    while curr != null:
        if curr.data == key:
            # Found it so delete (using prev).
            prev.next = curr.next
            free curr
            return true
        # Advance to next item.
        prev = curr
        curr = curr.next
    # Not found, so fail.
    return false
As stated in the article your link points to:
The cost for changing an internal element is based on already having a pointer to it, if you need to find the element first, the cost for retrieving the element is also taken.
So, for both DLL and SLL linear search is O(n), and removal via pointer is O(1).
The complexity of removal in DLL is O(1).
It can also be O(1) in SLL if provided pointer to preceding element and not to the element itself.
This complexity is assuming you know where the element is.
I.e. the operation signature is akin to remove_element(list* l, link* e)
Searching for the element is O(n) in both cases.
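To illustrate the SLL case from the answer above, here is a small Python sketch (illustrative names) of O(1) removal when the caller supplies the predecessor rather than the node itself:

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def remove_after(prev):
    """O(1): unlink and return the value of the node following `prev`
    in a singly linked list. No traversal needed, because the caller
    already holds the predecessor."""
    corpse = prev.next
    prev.next = corpse.next
    return corpse.value
```

This is exactly why the signature matters: remove_element(list, node) is O(n) for an SLL, while remove_after(prev) is O(1).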
@Matuku: You are correct.
I humbly disagree with most answers here trying to justify how the delete operation for a DLL is O(1). It's not.
Let me explain.
Why are we considering the scenario that we 'would' have the pointer to the node being deleted? LinkedLists (singly/doubly) are traversed linearly; that's their definition. They have pointers only to the head/tail. How can we suddenly have a pointer to some node in between? That defeats the purpose of this data structure. And going by that assumption, if I have a DLL of, say, 1 million nodes, do I also have to maintain 1 million pointers (let's call them access pointers), one to each of those nodes, so that I can delete them in O(1)? How would I store those 1 million access pointers? And how do I know which access pointer points to the correct data/node that I want to delete?
Can we have a real world example where we 'have' the pointer to the data that has to be deleted 100% of the time?
And if you know the exact location/pointer/reference of/to the node to be deleted, why even use a LinkedList? Just use an array! That's what arrays are for: direct access to what you want!
Assuming that you have direct access to any node you want in a DLL goes against the whole idea of the LinkedList as a conceptual data structure. So I agree with the OP; he's correct. I will stick with this: Doubly LinkedLists cannot have O(1) for deleting an arbitrary node. You still need to start from either the head or the tail, which brings it down to O(n).
If we have the pointer to the node to be deleted, say X, then of course it's O(1): because we have pointers to the next and prev nodes, we can delete X. But that big "if" is imaginary, not real.
We cannot play with the definition of the sacred Data Structure called LinkedLists for some weird assumptions we may have from time to time.

Worse sin: side effects or passing massive objects?

I have a function inside a loop inside a function. The inner function acquires and stores a large vector of data in memory (as a global variable... I'm using "R" which is like "S-Plus"). The loop loops through a long list of data to be acquired. The outer function starts the process and passes in the list of datasets to be acquired.
for (dataset in list_of_datasets) {
    for (datachunk in dataset) {
        <process datachunk>
        <store result? as vector? where?>
    }
}
I programmed the inner function to store each dataset before moving to the next, so all the work of the outer function occurs as side effects on global variables... a big no-no. Is this better or worse than collecting and returning a giant, memory-hogging vector of vectors? Is there a superior third approach?
Would the answer change if I were storing the data vectors in a database rather than in memory? Ideally, I'd like to be able to terminate the function (or have it fail due to network timeouts) without losing all the information processed prior to termination.
Use variables in the outer function instead of global variables. This gets you the best of both approaches: you're not mutating global state, and you're not copying a big wad of data. If you have to exit early, just return the partial results.
(See the "Scope" section in the R manual: http://cran.r-project.org/doc/manuals/R-intro.html#Scope)
Remember your Knuth. "Premature optimization is the root of all programming evil."
Try the side effect free version. See if it meets your performance goals. If it does, great, you don't have a problem in the first place; if it doesn't, then use the side effects, and make a note for the next programmer that your hand was forced.
It's not going to make much difference to memory use, so you might as well make the code clean.
Since R has copy-on-modify for variables, modifying the global object will have the same memory implications as passing something up in return values.
If you store the outputs in a database (or even in a file) you won't have the memory use issues, and the data will be incrementally available as it is created, rather than only at the end. Whether it's faster with the database depends primarily on how much memory you are using: will the reduction in garbage collection pay for the cost of writing to disk?
There are both time and memory profilers in R, so you can see empirically what the impacts are.
FYI, here's a full sample toy solution that avoids side effects:
outerfunc <- function(names) {
    templist <- list()
    for (aname in names) {
        templist[[aname]] <- innerfunc(aname)
    }
    templist
}
innerfunc <- function(aname) {
    retval <- NULL
    if ("one" %in% aname) retval <- c(1)
    if ("two" %in% aname) retval <- c(1,2)
    if ("three" %in% aname) retval <- c(1,2,3)
    retval
}
names <- c("one","two","three")
name_vals <- outerfunc(names)
for (name in names) assign(name, name_vals[[name]])
I'm not sure I understand the question, but I have a couple of solutions.
Inside the function, create a list of the vectors and return that.
Inside the function, create an environment and store all the vectors inside of that. Just make sure that you return the environment in case of errors.
in R:
help(environment)
# You might do something like this:
outer <- function(datasets) {
    # create the return environment
    ret.env <- new.env()
    for (set in datasets) {
        tmp <- inner(set)
        # check for errors however you like here. You might have inner return a list,
        # and have the list contain an error component
        assign(set, tmp, envir=ret.env)
    }
    return(ret.env)
}
# The inner function might be defined like this
inner <- function(dataset) {
    # I don't know what you are doing here, but let's pretend you are reading a
    # data file that is named by dataset
    filedata <- read.table(dataset, header=T)
    return(filedata)
}
leif
Third approach: inner function returns a reference to the large array, which the next statement inside the loop then dereferences and stores wherever it's needed (ideally with a single pointer store and not by having to memcopy the entire array).
This gets rid of both the side effect and the passing of large datastructures.
It's tough to say definitively without knowing the language/compiler used. However, if you can simply pass a pointer/reference to the object that you're creating, then the size of the object itself has nothing to do with the speed of the function calls. Manipulating this data down the road could be a different story.
