I have a set in my program
Set<MyObject> set = {};
after inserting some values
set.add(object);
how do I insert into the set at index 0?
set.insert(index, object);
(insert does not exist in the Set class: https://api.dart.dev/stable/2.8.4/dart-core/Set-class.html)
By default Set is a LinkedHashSet, which stores items in insertion order.
As Alok suggested, using a List instead of a Set would give you full control over element order, but it would come at the cost of needing to check for element uniqueness yourself. Consequently, insertions, removals, and lookups each would be O(n) instead of the usual O(1). If you frequently need to remove elements or frequently need to check if an element already exists, this would not be an efficient solution.
If you don't need full control over element order, you might be able to continue using a Set. To force an item to be at the beginning, you would need to remove and re-add all other items. One way to do that would be to create a new Set:
set = <MyObject>{object, ...set};
If you need to mutate an existing Set, you could add an extra step:
var temporarySet = <MyObject>{object, ...set};
set..clear()..addAll(temporarySet);
Note that in both cases, insertion would have runtime complexity of O(n) instead of the usual O(1). Removals and lookups would remain O(1).
If you need to insert at the beginning frequently and only at the beginning, you possibly could cheat by always iterating over the Set backwards, treating the last element as the first:
// Force `object` to be last.
set..remove(object)..add(object);
and then use set.last instead of set.first or use set.toList().reversed when iterating. This would allow insertions, removals, and lookups to continue being O(1).
Following OldProgrammer's suggestion, you can use a List. However, I can see that there is a requirement not to insert duplicates, which is why you are using a Set. To handle that, you can check whether the element is already in the list and insert accordingly:
List<MyObject> data = [];
// Add your object to the list, checking whether it is already there.
if (data.contains(object)) {
  print('Already there');
} else {
  data.insert(index, object);
}
// Print the data to check.
print(data);
I hope this gives you some clarity and gets you what you want.
While doing LeetCode, it says that adding after a specific node in a singly linked list requires O(1) time complexity:
Unlike an array, we don’t need to move all elements past the inserted element. Therefore, you can insert a new node into a linked list in O(1) time complexity, which is very efficient.
When deleting, it's O(n), which makes sense because you need to traverse to the node before the one being deleted and change the pointers. Isn't it the same when adding, which means it should also be O(n)?
Specifically, when adding, you still need to traverse to the node before the index you want to add at and change that node's .next to the new node.
It is important to know what the given input is. For instance, for the insert operation on a singly linked list you have highlighted the case where the node is given after which a new node should be inserted. This is indeed a O(1) operation. This is so because all the nodes that precede the given node are not affected by this operation.
The delete operation is different: if the node that must be deleted is given, the deletion cannot be done in O(1) in a singly linked list, because the node that precedes it must be updated. That preceding node must be retrieved by iterating from the start of the list.
We can "turn the tables":
What would be the time complexity if we were given a node and need to insert a new node before it? Then it will not be O(1), but O(n), for the simple reason that a change must be made to the node that precedes the given node.
What would be the time complexity if for a delete action we were given the node that precedes it? Then it can be done in O(1).
Still, if the input for either an insert or delete action is not a node reference, but an index, or a node's value, then both have a time complexity of O(n): the list must be traversed to find the given index or the given value.
So the time complexity for an action on a singly linked list depends very much on what the input is.
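The cases above can be sketched in code. Here is a minimal singly linked list in Java; the Node class and method names are made up for illustration. Note how insertAfter and deleteAfter touch only the nodes at hand, while insertBefore has to walk from the head to find the predecessor:

```java
// Minimal singly linked list node; names are illustrative.
class Node {
    int value;
    Node next;
    Node(int value) { this.value = value; }
}

class SinglyLinkedListOps {
    // O(1): only the given node and the new node are touched.
    static void insertAfter(Node given, Node newNode) {
        newNode.next = given.next;
        given.next = newNode;
    }

    // O(1): deleting the node *after* a given node needs no traversal either.
    static void deleteAfter(Node given) {
        if (given.next != null) {
            given.next = given.next.next;
        }
    }

    // O(n): inserting *before* a given node requires finding its predecessor.
    static Node insertBefore(Node head, Node given, Node newNode) {
        if (head == given) {              // new head, no traversal needed
            newNode.next = head;
            return newNode;
        }
        Node prev = head;
        while (prev != null && prev.next != given) {
            prev = prev.next;             // this traversal is what costs O(n)
        }
        if (prev != null) {
            newNode.next = given;
            prev.next = newNode;
        }
        return head;
    }
}
```

The same asymmetry explains the delete case in reverse: given the predecessor, deletion is deleteAfter and costs O(1); given only the node itself, you must pay the O(n) walk first.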
No, you do not need to traverse the list to insert an element after an existing, given element. For this, you only need to update the next pointers of the element you already have and of the element you are inserting. It's not necessary to know the previous element.
Note that even insertion after the last element can be implemented in O(1) on a singly-linked list, if you keep a reference to the last element of the list.
I am working with UISearchBar in Swift 4.0. I have originalList: [ModelItem] and filterList: [ModelItem]. While searching, let's say the user wants to delete the 5th item in filterList, which is the 10th item in the actual originalList. It makes sense to delete this item from both lists, right? The items have no id or similar field.
What would be the basic steps for such a two-way deletion? I am looking for a general idea of how to achieve this.
If the model is a class and the filterList is created directly from the originalList (no new objects created, but both lists reference the same objects), then you can use this code:
let itemToDelete = filterList.remove(at: indexPath.row)
if let index = originalList.index(where: { $0 === itemToDelete }) {
originalList.remove(at: index)
}
print(originalList)
print(filterList)
The === operator tests the identity of the instances (whether they are the same object), thus identifying the proper instance to be removed from originalList.
In case you are using struct as a model, you will have to implement Equatable with some heuristics that would be able to detect if two instances are equal or not even without having an explicit identifier and then use == to find the proper instance in originalList to be removed.
Another alternative might be implementing search with index method, that would use the same filtering algorithm as your current filter method, but would take one more parameter - index in the filterList (filterIndex) along with the filter text, and based on that would compute and return an index in the originalList that matches the provided pair of filter text and filterIndex.
Yet another alternative, which I would not recommend (I would call it a hack): you can keep a dictionary of indexes mapping originalList to filterList, giving an explicit mapping between the two lists. This would, however, require that you update that dictionary on every change made to either list - every search, deletion, or insertion would require an update of the mapping dictionary. This seems way too complicated and error prone.
You have a number of options.
You can maintain a mapping between the original and the filtered items positions, so you can perform deletion on both lists.
You can make your items identifiable, so you can search for the corresponding item in the original list and delete it. Note that all reference types can be tested for identity (===).
You can work with a filtered "view" to the original list, and not with a filtered copy, so the deletion will be performed on the original list naturally.
I don't think we have a standard solution for the latter option, which makes this approach the most complicated.
When choosing either of the first two options be careful with the original list updates that can happen while you operate on the filtered copy.
What is the suggested approach for updating an object's value in an array, bearing in mind the array may have been reordered?
I'm wondering how dangerous using index based paths is, when an array could have possibly changed via a deletion, or reorder.
Would it be better to use objects instead, I wonder.
If you are using a mutable list, it is inherently unsafe to update an object by its position in a list. The right thing to do is to use deref. Assuming you have a list of references (the most common case) you can dereference a Model at its position in the list. This will ensure it points to the object's identity path rather than the index in the list. Then you can update the object directly without worrying about whether it has moved around in the list.
It is said that addition and deletion in a linked list happen in constant time, i.e. O(1), but access to elements happens in time proportional to the size of the list, i.e. O(N). My question is: how can you remove or add any element without first traversing to it? In that case, isn't addition or deletion also O(N)?
Taking the example of Java, what happens when we use the API like this:
LinkedList<Stamp> stamps = new LinkedList<>();
stamps.add(new Stamp("Brazil"));
stamps.add(new Stamp("Spain"));
// ...
stamps.add(new Stamp("UnitedStates")); // say this is the kth element in the list
// ...
stamps.add(new Stamp("India"));
Then when someone does stamps.remove(k), how can this operation happen in constant time?
Deleting items from a linked list works in constant time only if you have a pointer to the actual node on the list. If the only thing you have is the information that you want to delete the "n"th node, then there is no way to know which one it is - in which case you are required to traverse the list first, which is of course O(n).
Adding, on the other hand, always works in constant time, since it is in no way connected to the number of elements already contained by the list. In the example provided, every call to add() is O(1), not including the cost of calling the constructor of class Stamp. Adding to a linked list is simply attaching another element to its end. This is, of course, assuming that the implementation of the linked list knows which node is currently at the end of the list. If it doesn't know that, then, of course, traversal of the entire list is needed.
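To make that concrete, here is a small sketch using the standard java.util.LinkedList. The remove(1) call must walk the links to reach index 1 first, while ListIterator.add splices a node in at the current cursor with no traversal at all:

```java
import java.util.LinkedList;
import java.util.ListIterator;

class StampDemo {
    static LinkedList<String> build() {
        LinkedList<String> stamps = new LinkedList<>();
        stamps.add("Brazil");        // O(1): java.util.LinkedList keeps a tail reference
        stamps.add("Spain");
        stamps.add("India");

        // remove(k) by index is O(n): the list must walk k links to find the node.
        stamps.remove(1);            // removes "Spain"

        // While already iterating, insertion at the cursor is O(1) per operation:
        ListIterator<String> it = stamps.listIterator();
        while (it.hasNext()) {
            if (it.next().equals("Brazil")) {
                it.add("Spain");     // no traversal: the iterator is the position
            }
        }
        return stamps;
    }

    public static void main(String[] args) {
        System.out.println(build()); // [Brazil, Spain, India]
    }
}
```

Note that java.util.LinkedList happens to be doubly linked, but the point stands either way: index-based access pays the traversal, cursor-based insertion does not.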
According to the Wikipedia article on linked lists, inserting in the middle of a linked list is considered O(1). I would think it would be O(n). Wouldn't you need to locate the node which could be near the end of the list?
Does this analysis not account for the finding of the node operation (though it is required) and just the insertion itself?
EDIT:
Linked lists have several advantages over arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in an array may require moving half of the elements, or more.
The above statement is a little misleading to me. Correct me if I'm wrong, but I think the conclusion should be:
Arrays:
Finding the point of insertion/deletion: O(1)
Performing the insertion/deletion: O(n)
Linked Lists:
Finding the point of insertion/deletion: O(n)
Performing the insertion/deletion: O(1)
I think the only time you wouldn't have to find the position is if you kept some sort of pointer to it (as with the head and the tail in some cases). So we can't flatly say that linked lists always beat arrays for insert/delete options.
You are correct, the article considers "Indexing" as a separate operation. So insertion is itself O(1), but getting to that middle node is O(n).
The insertion itself is O(1). Node finding is O(n).
No, when you decide that you want to insert, it's assumed you are already in the middle of iterating through the list.
Operations on Linked Lists are often done in such a way that they aren't really treated as a generic "list", but as a collection of nodes--think of the node itself as the iterator for your main loop. So as you're poking through the list you notice as part of your business logic that a new node needs to be added (or an old one deleted) and you do so. You may add 50 nodes in a single iteration and each of those nodes is just O(1) the time to unlink two adjacent nodes and insert your new one.
For purposes of comparing with an array, which is what that chart shows, it's O(1) because you don't have to move all the items after the new node.
So yes, they are assuming that you already have the pointer to that node, or that getting the pointer is trivial. In other words, the problem is stated: "given node at X, what is the code to insert after this node?" You get to start at the insert point.
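Here is a sketch of that node-as-iterator style in Java (IntNode is a made-up class for illustration). While walking the list once, the business logic decides a new node is needed and splices it in on the spot; each splice is just two pointer updates, O(1):

```java
// Hand-rolled node; the node itself serves as the "iterator".
class IntNode {
    int value;
    IntNode next;
    IntNode(int value) { this.value = value; }
}

class InsertWhileWalking {
    // Single pass over the list: after every (nonzero) even value,
    // splice in a 0 marker. Each splice is O(1).
    static void insertZeroAfterEvens(IntNode head) {
        IntNode cur = head;
        while (cur != null) {
            if (cur.value != 0 && cur.value % 2 == 0) {
                IntNode marker = new IntNode(0);
                marker.next = cur.next;   // two pointer updates: O(1)
                cur.next = marker;
                cur = marker.next;        // skip over the node we just added
            } else {
                cur = cur.next;
            }
        }
    }
}
```

No matter how many nodes get added during the pass, no element ever has to shift, which is exactly the contrast with an array that the chart is drawing.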
Insertion into a linked list is different than iterating across it. You aren't locating the item, you are resetting pointers to put the item in there. It doesn't matter if it is going to be inserted near the front end or near the end, the insertion still involves pointers being reassigned. It'll depend on how it was implemented, of course, but that is the strength of lists - you can insert easily. Accessing via index is where an array shines. For a list, however, it'll typically be O(n) to find the nth item. At least that's what I remember from school.
Inserting is O(1) once you know where you're going to put it.
Does this analysis not account for the finding of the node operation (though it is required) and just the insertion itself?
You got it. Insertion at a given point assumes that you already hold a pointer to the item that you want to insert after:
InsertItem(item * newItem, item * afterItem)
No, it does not account for searching. But if you already have hold of a pointer to an item in the middle of the list, inserting at that point is O(1).
If you have to search for it, you'd have to add on the time for searching, which should be O(n).
Because it does not involve any looping.
Inserting is like:
insert element
link to previous
link to next
done
this is constant time in any case.
Consequently, inserting n elements one after the other is O(n).
The most common cases are probably inserting at the beginning or at the end of the list (and the ends of the list might take no time to find).
Contrast that with inserting items at the beginning or the end of an array (which requires resizing the array if it's at the end, or resizing and moving all the elements if it's at the beginning).
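Here is a sketch of the array side of that contrast in Java: inserting at the front forces every existing element to be copied one slot over, which is the O(n) cost a linked list avoids (the list equivalent is just newHead.next = head).

```java
import java.util.Arrays;

class FrontInsertCost {
    // Inserting at the beginning of an array: every element shifts right, O(n).
    static int[] insertAtFront(int[] arr, int value) {
        int[] grown = new int[arr.length + 1];
        System.arraycopy(arr, 0, grown, 1, arr.length); // n element copies
        grown[0] = value;
        return grown;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(insertAtFront(new int[]{2, 3, 4}, 1)));
        // [1, 2, 3, 4]
    }
}
```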
The article is about comparing arrays with lists. Finding the insert position for both arrays and lists is O(N), so the article ignores it.
O(1) depends on the fact that you have the item before or after which you will insert the new item. If you don't, it's O(n), because you must find that item.
I think it's just a case of what you choose to count in the O() notation. In the case of insertion, the usual operation to count is copies. With an array, inserting in the middle involves copying everything above the location up in memory. With a linked list, this becomes setting two pointers. Either way, you need to find the location first.
If you have a reference to the node to insert after, the operation is O(1) for a linked list.
For an array it is still O(n), since you have to move all subsequent elements.