In my scene world I have "blocks" with a category ("blockCategory") and an object that collides with these blocks. I want to identify which specific block (by block index) the object has touched.
The only way I found is to give the blocks a name, but I already use:
block.name = blockOfType ..
in order to identify the block types. I also found that I could use node.userData to store custom data, but using a dictionary for this seems like a lot of overhead. Is there a simpler way?
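For reference, this is roughly what the userData approach I'm describing looks like (a sketch assuming SpriteKit-style nodes; the "blockIndex" key and the values are just placeholders):

import SpriteKit

// Tag each block node with its index when it is created.
let block = SKSpriteNode(color: .gray, size: CGSize(width: 32, height: 32))
block.userData = NSMutableDictionary(dictionary: ["blockIndex": 7])

// Later, e.g. in a contact handler, read the index back.
if let index = block.userData?["blockIndex"] as? Int {
    print("touched block \(index)")
}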
This one's kind of an open-ended design question, I'm afraid.
Anyway: I have a big two-dimensional array of stuff. This array is mutable and is accessed by a bunch of threads. For now I've just been dealing with this as an Arc<Mutex<Vec<Vec<--owned stuff-->>>>, which has been fine.
The problem is that stuff is about to grow considerably in size, and I'll want to start holding references rather than complete structures. I could do this by inverting everything and going to Vec<Vec<Arc<Mutex<...>>>>, but I feel like that would be a ton of overhead, especially because each thread would need a complete copy of the grid rather than a single Arc/Mutex.
What I want to do is have this be an array of references, but somehow communicate that the items being referenced all live long enough according to a single top-level Arc or something similar. Is that possible?
As an aside, is Vec even the correct data type for this? For the grid in particular I really want a large, fixed-size block of memory that will live for the entire length of the program once it's initialized, and has a lot of reference locality (along either dimension.) Is there something else/more specialized I should be using?
EDIT: Giving some more specifics on my code (away from home, so this is rough):
What I want:
Outer scope initializes a bunch of Ts and somehow collectively ensures they live long enough (that's the hard part)
Outer scope initializes a grid: Something<Vec<Vec<&T>>> that stores references to the Ts
Outer scope creates a bunch of threads and passes grid to them
Threads dive in and out of some sort of (probably RW) lock on grid, reading the Ts and changing the &Ts in the process.
What I have:
Outer thread creates a grid: Arc<RwLock<Vec<Vec<T>>>>
Arc::clone(&grid)s are passed to individual threads
Read-heavy threads mostly share the lock and sometimes kick each other out for the writes.
The only problem with this is that the grid is storing actual Ts which might be problematically large. (Don't worry too much about the RwLock/thread exclusivity stuff, I think it's perpendicular to the question unless something about it jumps out at you.)
What I don't want to do:
Top level creates a bunch of Arc<Mutex<T>> for individual T
Top level creates a grid: Vec<Vec<Arc<Mutex<T>>>> and passes it to threads
The problem with that is that I worry about the size of Arc/Mutex on every grid element (I've been going up to 2000x2000 so far and may go larger). Also while the threads would lock each other out less (only if they're actually looking at the same square), they'd have to pick up and drop locks way more as they explore the array, and I think that would be worse than my current RwLock implementation.
Let me start off with your "aside" question, as I feel it's the one that can be answered:
As an aside, is Vec even the correct data type for this? For the grid in particular I really want a large, fixed-size block of memory that will live for the entire length of the program once it's initialized, and has a lot of reference locality (along either dimension.) Is there something else/more specialized I should be using?
The documentation of std::vec::Vec specifies that the layout is essentially a pointer with size information. That means that any Vec<Vec<T>> is a pointer to a densely packed array of pointers to densely packed arrays of Ts. So if "block of memory" means a contiguous block to you, then no, Vec<Vec<T>> cannot give you that. If that is part of your requirements, you'd have to deal with a datatype (let's call it Grid) that is basically a (pointer, n_rows, n_columns) and define for yourself whether the layout should be row-first or column-first.
The next part is that if you want different threads to mutate e.g. columns/rows of your grid at the same time, Arc<Mutex<Grid>> won't cut it, but you already figured that out. You should get clarity on whether you can split your problem such that each thread only operates on rows OR columns. Remember that if any thread holds a &mut Row, no other thread must hold a &mut Column: there will be an overlapping element, and it will be very easy for you to create a data race. If you can assign a static range of rows to each thread (e.g. thread 1 processes rows 1-3, thread 2 processes rows 4-6, etc.), that should make your life considerably easier. To get into "row-wise" processing if it doesn't arise naturally from the problem, you might consider breaking the work into e.g. a row-wise step, where all threads operate on rows only, and then a column-wise step, possibly repeating those.
Speculative starting point
I would suggest that your main thread holds the Grid struct, which will almost inevitably be implemented with some unsafe methods, e.g. get_row(usize) and get_row_mut(usize) if you can split your problem into rows/columns, or get(usize, usize) and get_mut(usize, usize) if you can't. I cannot tell you what exactly these should return, but they might even be custom references into the Grid, which:
can only be obtained when the usual borrowing rules are fulfilled (e.g. by blocking the thread until any other GridRefMut is dropped)
implement Drop such that you don't create a deadlock
Every thread holds an Arc<Grid>, and can draw cells/rows/columns for reading/mutating out of the grid as needed, while the grid itself keeps track of the references being created and dropped.
The downside of this approach is that you basically implement a runtime borrow-checker yourself. It's tedious and probably error-prone. You should browse crates.io before you do that, but your problem sounds specific enough that you might not find a fitting solution, let alone one that's sufficiently documented.
I'm trying to detect collisions between nodes that have not collided before in my SpriteKit game by calling node.hash and storing new nodes in a set. I'm seeing that after some time, new nodes have the same hash as nodes that I had previously called node.removeFromParent() on.
I'm guessing that because I am removing from parent and recreating very similar nodes over and over, SK is automatically recycling some nodes.
How can I get a truly unique hash from nodes in SpriteKit?
Please let me know if further clarification is needed. I feel like posting my code wouldn't be too relevant to this post.
Furthermore, I am not able to reproduce this issue when I'm debugging with my phone attached to Xcode, but I have added logging that shows node.hash not being unique for newly created nodes. Does anyone know why the recycling behavior would be different with my phone connected to Xcode?
I think you may be misunderstanding what a hash is and does.
A hash is not necessarily a unique value. It is a one way function of some kind (not necessarily cryptographic) that takes arbitrary data and produces a value. If the same data is hashed more than one time, it will produce the same hash value, not a different value.
Working against you, however, is the fact that the .hash value is not a cryptographic hash (which is somewhat computationally intensive). The quality of a hash function, cryptographic or not, is based on how frequently there are hash collisions. A hash collision occurs when two different values produce the same hash.
Cryptographic hashing functions are selected, amongst other things, based on a low hash collision rate. The .hash function may have a high collision rate, even if your data is different, depending on your particular data.
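To make the terminology concrete, here is a small Swift sketch (the GridPoint type and its deliberately weak hash are invented for illustration): hashing the same data always gives the same value, and two different values can legitimately share a hash.

struct GridPoint: Hashable {
    let x: Int
    let y: Int

    // A deliberately poor hash: it only mixes in x + y,
    // so (1, 2) and (2, 1) collide by construction.
    func hash(into hasher: inout Hasher) {
        hasher.combine(x + y)
    }
}

let a = GridPoint(x: 1, y: 2)
let b = GridPoint(x: 2, y: 1)

print(a == b)                                           // false: different values
print(a.hashValue == b.hashValue)                       // true: a hash collision
print(a.hashValue == GridPoint(x: 1, y: 2).hashValue)   // true: same data, same hash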
A far better solution is to add a property to your nodes that can be easily checked:
class MyNodeClass: SKShapeNode {
    var hasCollided = false // Publicly accessible property
}
I do notice that in other comments you say, "I am interesting in finding the proper hash." I'd strongly recommend against this approach since, again, hash functions will definitely carry a computational load. The better the function, the higher that load.
If what you are really looking for is a unique identifier for each node (rather than a collision tracker), then why not implement an internal property that is initialized in the initializer from a class-level value that simply produces a unique, incrementing ID?
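A minimal sketch of that idea, assuming nodes are created on a single thread (as is typical in SpriteKit); the class and property names are just placeholders:

import SpriteKit

class IdentifiedNode: SKShapeNode {
    // Class-level counter that hands out the next ID.
    private static var nextID = 0

    // Unique, incrementing identifier assigned at creation time.
    let id: Int

    override init() {
        IdentifiedNode.nextID += 1
        id = IdentifiedNode.nextID
        super.init()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}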
SKNode uses NSObject's default implementation of hash, which just returns the object's memory address:
import Foundation
import SpriteKit
let node = SKNode()
let hex = String(node.hash, radix:16)
let addr = unsafeAddressOf(node)
print(hex) // 7f9a21d080a0
print(addr) // 0x00007f9a21d080a0
So basically, once a memory location is reused, the hash value is not going to be unique. The difference in behaviour while debugging is likely due to compiler optimisations.
To get a unique hash you'll need to override the hash property of your SKNode and have it return something that is actually unique. A simple strategy would be to assign each node an id property at creation, something like:
class MyNode: SKNode {
    var uid: Int

    init(uid: Int) {
        self.uid = uid
        super.init()
    }

    override var hash: Int {
        return self.uid
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("not implemented, required to compile")
    }
}
If you start a counter off at Int.min and increment it towards Int.max, you'll have to create 18,446,744,073,709,551,616 nodes (2^64 of them) before you run out of uniqueness.
If I understand you correctly, you seem to be trying to determine whether you're colliding with new objects. What you could do is simply create a custom SKShapeNode subclass with a boolean property. This would remove the need to deal with colliding hashes, and instead provide a foolproof way of checking whether the node has already been dealt with.
class CustomShapeNode: SKShapeNode {
    var hasAlreadyCollided = false

    // Any required overridden methods
}
I am in a situation where I need to get the items in an array and time is sensitive. I have the option of using a separate variable to hold the current count or just using NSMutableArray's count method.
ex: if (myArray.count == ...) or if (myArrayCount == ...)
How expensive is it to get the number of items from an array's count method?
The correct answer is, there is no difference in speed, so access the count of the array as you wish my child :)
Fetching NSArray's count is no more expensive than fetching a local variable in which you've stored this value. It's not calculated when it's called; it's calculated when the array is created, and stored.
For NSMutableArray, the only difference is that the stored count is recalculated any time you modify the contents of the array. The end result is still the same: when you call count, the number returned was precalculated. It's just returning the precalculated number it already stored.
Storing the count in a separate variable, particularly for an NSMutableArray, is actually the worse option, because the size of the array could change, and accessing the count through this variable is not faster whatsoever. It only adds the risk of inaccuracy.
The best way to prove to yourself that this is a stored value, not something calculated each time the count method is called, is to create two arrays: one with only a few elements, and one with tens of thousands of elements. Now time how long it takes count to return. You'll find the time is identical no matter the size of the array.
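A rough Swift version of that experiment (the array sizes and iteration counts here are arbitrary):

import Foundation

let small = NSMutableArray(array: Array(0..<10))
let large = NSMutableArray(array: Array(0..<1_000_000))

// Time repeated calls to count for each array.
func time(_ label: String, _ body: () -> Void) {
    let start = CFAbsoluteTimeGetCurrent()
    body()
    print(label, CFAbsoluteTimeGetCurrent() - start)
}

time("small array") { for _ in 0..<1_000_000 { _ = small.count } }
time("large array") { for _ in 0..<1_000_000 { _ = large.count } }
// Both loops should take roughly the same time, because count returns
// a stored value rather than walking the elements.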
As a correction to everyone above, NSArray does not have a count property; it has a count method. The method itself either physically counts all of the elements within the array or is a getter for a private variable the array stores. Unless you plan on subclassing NSArray and creating a more efficient system for counting dynamic and/or static arrays, you're not going to get better performance than using the count method on an NSArray. As a matter of fact, you should count on Apple having already optimized this method to its max.
My main concern after this is that if you are making an asynchronous call and your focus is on optimizing the count of an NSArray, you are very likely doing something seriously wrong elsewhere. If you are running some performance-heavy method on the main thread or the like, you should consider optimizing that instead. The performance hit of counting the array through NSArray's count method should in no way affect your performance to any noticeable degree.
You should read up more on performance for NSArrays and NSMutableArrays if this is truly a concern for you. You can start here: link
If you need to get the items themselves, then getting the count is not the time-critical part. You'd also want to look at fast enumeration, or at using enumeration with dispatch blocks, especially with parallel execution.
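For example, NSArray's block-based enumeration can be asked to run concurrently (the per-element work here is just a stand-in):

import Foundation

let items = NSArray(array: Array(0..<1_000))

// With the .concurrent option the blocks may run in parallel,
// so anything touched inside them must be thread-safe.
items.enumerateObjects(options: .concurrent) { element, _, _ in
    if let value = element as? Int {
        _ = value * value   // placeholder for real per-element work
    }
}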
Edit:
Asa's is the most correct answer. I misunderstood the question.
Asa is right because the compiler will automatically optimize this and use the fastest way on its own.
TheGamingArt is correct about NSArray being as optimal as could be. However, this is only for obj-c.
Don't forget you have access to C and C++, which means you can use vectors, which should be only 'slightly' faster considering they don't use Obj-C messaging. However, it wouldn't surprise me if the difference isn't noticeable. C++ vector benchmarks: http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html
This is a good example of Premature Optimization (http://c2.com/cgi/wiki?PrematureOptimization). I suggest you look into GCD or NSOperations (http://www.raywenderlich.com/19788/how-to-use-nsoperations-and-nsoperationqueues)
TDictionary<TKey,TValue> uses an internal array that is doubled if it is full:
newCap := Length(FItems) * 2;
if newCap = 0 then
  newCap := 4;
Rehash(newCap);
This performs well with a medium number of items, but when you get near the upper limit it is very unfortunate, because it might throw an EOutOfMemory exception even though almost half of the memory is still available.
Is there any way to influence this behaviour? How do other collection classes deal with this scenario?
You need to understand how a dictionary works. A dictionary contains a list of "hash buckets" where the items you insert are placed. That's a finite number, so once you fill it up you need to allocate more buckets; there's no way around it. Since the assignment of objects to buckets is based on the result of a hash function, you can't simply add buckets to the end of the array and put stuff in there; you need to re-allocate the whole list of buckets, re-hash everything and put it into the (new) corresponding buckets.
Given this behavior, the only way to make the dictionary not re-allocate once full is to make sure it never gets full. If you know the number of items you'll insert into the dictionary, pass it as a parameter to the constructor and you'll be done: no more dictionary reallocations.
If you can't do that (you don't know the number of items you'll have in the dictionary), you'll need to reconsider what made you select TDictionary in the first place and select a data structure that offers a better compromise for your particular algorithm. For example, you could use binary search trees, as they do their balancing by rotating information in existing nodes, so there is never a need for re-allocation.
I am creating a snake game in C#/XNA for fun, but it's also a good opportunity to practice some good object design.
Snake Object
There's a snake object which is essentially a linked list, each node being a vector with X & Y co-ordinates which relate to the map.
There are also a few properties, such as whether the snake has just eaten (in which case the last body node is not removed for this update), the direction the snake is moving in, etc.
Map Object
The map (game area) holds its contents inside a 2D array of integers - using an array of primitives to store the map should keep memory consumption down and be quicker (and easier) to iterate over than an array of vectors.
The contents are defined by an enum {Empty, Wall, Snake, Food}, whose values are then stored in the array at the relevant co-ordinates.
A reference to the snake object is also kept within the map, so that every call to render loops through the nodes that make up the snake and renders them at the correct positions on the map.
Question!!
My question is... is this coupling too tight? If so, are there any suggestions (e.g. the observer pattern), or is it okay for this situation?
I've tried to think of a way to decouple the snake from needing to know the co-ordinate system used by the map, but I can't think of a way to make it work while keeping the positions of the nodes relative to each other.
Any answers appreciated, cheers!
"is this coupling too tight?" No, it isn't.
In this case, the code required to decouple it is bigger, more complicated, and harder to maintain than the code required to simply implement it with the coupling.
Furthermore, there is always going to be some level of coupling required. Coupling to "a coordinate system" is usually one of them in game development. You could, in fact, rigorously decouple your map and snake objects (again, not worth the effort), but they still need to share a common coordinate system in order to communicate.
I think you have already hinted at the answer yourself. The current design, with the Snake referenced by the Map, creates tight coupling between the two.
You might want to consider creating another interface, such as MapListener, that the Snake will implement. The Snake will listen for the events that the Map publishes and react to them, effectively making the Snake a subscriber to the events the Map is publishing (such as rendering in the correct position, as you say). You could even have an ArrayList of listeners so you have the flexibility of adding new objects to the map that react to its events as your game becomes more complex.
For reference on creating the listener, see this SO question: How can I code a custom listener. While that example listens for a download finishing, you should be able to see the pattern in the accepted answer for creating a custom listener for your game. Let me know if you need any clarification and I will adapt the code to fit your case.
Here is a simple first-thought structure: create a marker interface called MapContainable, a MapMovable interface that extends it, and have Snake implement MapMovable (sketched in C# to match your project):

// Marker interface: anything that can be placed in the map.
interface MapContainable
{
}

// Map-movement-specific contract methods go here.
interface MapMovable : MapContainable
{
}

class Snake : MapMovable
{
    // ...
}
This way, your map need not know that there are concrete objects called Snake, Food, etc. Your snake object need not know about the existence of a Map. A snake just moves!