The real use of Autorelease pool [duplicate] - ios

On page 17 of this WWDC14 presentation, it says
Working with Objective-C? Still have to manage autorelease pools
autoreleasepool { /* code */ }
What does that mean? Does it mean that if my code base doesn't have any Objective-C files, autoreleasepool {} is unnecessary?
In an answer to a related question, there is an example where autoreleasepool can be useful:
- (void)useALoadOfNumbers {
    for (int j = 0; j < 10000; ++j) {
        @autoreleasepool {
            for (int i = 0; i < 10000; ++i) {
                NSNumber *number = [NSNumber numberWithInt:(i+j)];
                NSLog(@"number = %p", number);
            }
        }
    }
}
If the code above gets translated into Swift with the autoreleasepool dropped, will Swift be smart enough to know that the number variable should be released after the first } (as some other languages do)?

The autoreleasepool pattern is used in Swift when working with autoreleased objects (returned by your own Objective-C code or by Cocoa classes). The autorelease pattern in Swift functions much like it does in Objective-C. For example, consider this Swift rendition of your method (instantiating NSImage/UIImage objects):
func useManyImages() {
    let filename = pathForResourceInBundle

    for _ in 0 ..< 5 {
        autoreleasepool {
            for _ in 0 ..< 1000 {
                let image = NSImage(contentsOfFile: filename)
            }
        }
    }
}
If you run this in Instruments, you'll see an allocations graph with five small hills (one per iteration of the outer for-loop), since memory is released each time the pool drains. But if you do it without the autorelease pool, you'll see that peak memory usage is higher.
The autoreleasepool allows you to explicitly manage when autorelease objects are deallocated in Swift, just like you were able to in Objective-C.
Note: When dealing with Swift native objects, you generally will not receive autoreleased objects. This is why the presentation mentioned the caveat about only needing this when "working with Objective-C", though I wish Apple were clearer on this point. But if you're dealing with Objective-C objects (including Cocoa classes), they may be autoreleased, in which case this Swift rendition of the Objective-C @autoreleasepool pattern is still useful.

If you would use it in the equivalent Objective-C code, then you would use it in Swift.
will Swift be smart enough to know that the number variable should be
released after the first }
Only if Objective-C does. Both follow the Cocoa memory-management rules.
Of course ARC knows that number goes out of scope at the end of that iteration of the loop, and if it retained it, it will release it there. However, that does not tell you whether the object was autoreleased, because +[NSNumber numberWithInt:] may or may not have returned an autoreleased instance. There is no way you can know, because you don't have access to the source of +[NSNumber numberWithInt:].

@autoreleasepool can be used in both Objective-C and Swift code to keep memory under control when working with Objective-C code that relies on autorelease.

Related

How does Vala handle reference counting with multithreading?

As far as I understand, in a multithreaded environment reference counting should be performed with locking to ensure all threads see the same snapshot of memory. But locking slows down performance. How does Vala solve this problem?
Reference counting is mostly handled in GObject (for GLib.Object-derived types), which in turn uses the Atomic Operations in GLib. Atomics are a tricky subject; if you want to get into details a good place to start is Herb Sutter's Atomic Weapons talk from a few years ago. I would recommend watching the videos even if you're never going to put them to use (and 99.9% of programmers should never put them to use) because it will give you a much better understanding of how computers really work.
The name "atomics" can be a bit misleading; it's not really about atomicicity, though that's part of it. The operations are atomic in the sense that the change is either made in its entirety or not at all, which is vital, but the more interesting part is that atomics act as barriers which prevent the compiler from re-ordering operations across the barrier. Herb Sutter's talk goes into a lot of detail about this which I'm not going to repeat here.
For example, think about a simple unprotected reference counter:
typedef struct {
    int reference_count;
} Foo;

Foo* foo_create(void) {
    Foo* foo = malloc(sizeof(Foo));
    foo->reference_count = 1;
    return foo;
}
void ref(Foo* foo) {
    ++(foo->reference_count);
}

void unref(Foo* foo) {
    if (--(foo->reference_count) == 0) {
        free(foo);
    }
}
I'm going to assume you can see the problems with leaving this unprotected because I'm writing a SO post not a book.
The specific atomic operation we're interested in is compare-and-swap (CAS), which basically provides the ability to perform this operation safely:
bool cas(int* value, int* expected, int desired) {
    if (*value == *expected) {
        *value = desired;
        return true;
    } else {
        return false;
    }
}
Using this, we would change our refcounting implementation above to something like:
typedef struct {
    int reference_count;
} Foo;

Foo* foo_create(void) {
    Foo* foo = malloc(sizeof(Foo));
    /* No atomics needed, we haven't made the value public yet */
    foo->reference_count = 1;
    return foo;
}
void ref(Foo* foo) {
    int old_refcount;
    int new_refcount;
    do {
        old_refcount = foo->reference_count;
        new_refcount = old_refcount + 1;
    } while (!cas(&(foo->reference_count), &old_refcount, new_refcount));
}
void unref(Foo* foo) {
    int old_refcount;
    int new_refcount;
    do {
        old_refcount = foo->reference_count;
        new_refcount = old_refcount - 1;
    } while (!cas(&(foo->reference_count), &old_refcount, new_refcount));
    if (new_refcount == 0) {
        free(foo);
    } else if (new_refcount < 0) {
        /* Double-free bug, this code should never be reached! */
    }
}
But locking slows down performance.
So do atomics. A lot. But also a lot less than a higher-level lock would. For one thing, if you were working with a mutex what you are doing would basically be:
Acquire the lock.
Perform the operation.
Release the lock.
With atomics, we're basically begging forgiveness instead of asking permission:
Attempt to perform the operation.
Then we just look to see whether the operation was successful (i.e., if cas() returned true).
The operation is also a lot smaller and faster; with a mutex, you would probably acquire the lock, then read the current value, increment / decrement it, then release the lock. With atomics, the CAS operation gets wrapped up in a single CPU instruction.
The CPU still has to deal with cache coherency by making sure that next time any other core (a bit oversimplified since even within a core there are multiple caches) asks to read the data they are presented with the new data. In other words, atomic reference counting is bad for performance, but it's a lot less bad than a mutex. Frankly, if you want reference counting instead of tracing garbage collection atomics are pretty much your least-bad option.

Converting Objective-C malloc to Swift

I am working on a project that was written in Objective-C and needs to be updated to Swift. We use a C file for transferring data.
Here is the code I was given in Objective-C:
- (NSData *)prepareEndPacket {
    UInt8 *buff_data;
    buff_data = (uint8_t *)malloc(sizeof(uint8_t)*(PACKET_SIZE+5));
    // Call to C File
    PrepareEndPacket(buff_data);
    NSData *data_first = [NSData dataWithBytes:buff_data length:sizeof(uint8_t)*(PACKET_SIZE+5)];
    return data_first;
}
In the C .h file I have this for reference:
#define PACKET_SIZE ((uint32_t)128)
I can not seem to find a good way of converting this to Swift. Any help would be appreciated.
malloc and free actually work fine in Swift; however, the UnsafeMutablePointer API is more "native". I'd probably use Data's bytesNoCopy for better performance. If you want, you can use Data(bytes:count:), but that will make a copy of the data (and then you need to make sure to deallocate the pointer after making the copy, or you'll leak memory, which is actually a problem in the Objective-C code above since it fails to free the buffer).
So, something like:
func prepareEndPacket() -> Data {
    let count = Int(PACKET_SIZE) + 5   // PACKET_SIZE imports as UInt32, so convert to Int
    let buf = UnsafeMutablePointer<UInt8>.allocate(capacity: count)
    PrepareEndPacket(buf)
    return Data(bytesNoCopy: buf, count: count, deallocator: .custom { ptr, _ in
        ptr.deallocate()
    })
}
By using bytesNoCopy, the Data object returned is basically a wrapper around the original pointer, which will be freed by the deallocator when the Data object is destroyed.
Alternatively, you can create the Data object from scratch and get a pointer to its contents to pass to PrepareEndPacket():
func prepareEndPacket() -> Data {
    var data = Data(count: Int(PACKET_SIZE) + 5)
    data.withUnsafeMutableBytes { (ptr: UnsafeMutablePointer<UInt8>) in
        PrepareEndPacket(ptr)
    }
    return data
}
This is slightly less efficient, since the Data(count:) initializer will initialize all the Data's bytes to zero (similar to using calloc instead of malloc), but in many cases, that may not make enough of a difference to matter.

Passing an object around increases retain count

iOS, transitioning to ARC. I've observed a curious behavior regarding CF/NS bridging. In the following scenario:
CFStringRef cfs = ComesFromSomewhere();
NSString *ns = (__bridge NSString*)cfs;
the retain count of the string object is 2 at the end. However, in the following:
NSString *ToNS(CFStringRef cfs)
{
    return (__bridge NSString*)cfs;
}

CFStringRef cfs = ComesFromSomewhere();
NSString *ns = ToNS(cfs);
the retain count is 3 at the end. What's going on, please? Who holds the extra reference? Is the object being added to the autorelease pool by the mere act of passing it around?
Preemptive response to "don't worry, ARC just works": I'm mixing Core Foundation with Cocoa here, no way around it. This is leak prone. Without the ability to account for the retain counts explicitly, I'm flying blind.
EDIT: it's an artifact of the debug build. In the release build, the retain count under the latter scenario is still 2.
There's a tangible difference between a fragment that leaves large autoreleased objects around and one that doesn't; you don't want the former in a big loop without a pool in the loop body. Helps to know it's an artifact of zero optimization, but still, not cool.
CFStringRef cfs = ComesFromSomewhere();
// retainCount -> 1
NSString *ns = ToNS(cfs);
// ToNS(cfs)
//
// ToNS is not an object-creating method,
// so the returned object is automatically autoreleased
// retainCount += 1
// NSString *ns
//
// ns is a __strong variable, so it takes ownership of the object
// retainCount += 1
// retainCount -> 3
An object-creating method is one whose name begins with "alloc", "new", "copy", or "mutableCopy" on an Objective-C class. See the Basic Memory Management Rules ("You own any object you create").
In the release build, the retain count under the latter scenario is still 2.
Also, the compiler can omit sending the autorelease message to the object if it is eligible for that optimization.
EDITED
You can use C++ reference to avoid autorelease.
void ToNS(CFStringRef cfs, NSString __strong *& ns)
{
    ns = (__bridge NSString*)cfs;
}

NSString *nsstr;
ToNS(cfstr, nsstr);
// retainCount -> 2
EDITED
NS_RETURNS_RETAINED NSString *ToNS(CFStringRef cfs)
{
    return (__bridge NSString*)cfs;
}
NS_RETURNS_RETAINED makes the compiler treat the function as an object-creating one (which it really is). Cocoa has a naming convention that lets you designate a method as an object creator, but the convention only applies to Objective-C class methods, not to C-style functions and not to C++ class member functions.

NSMutableArray Interaction Troubles With Collisions

I am having trouble getting objects added to my NSMutableArray to log properly (which definitely means they won't process any of the appropriate functions correctly) with Spritebuilder [version 1.4.9, from the Apple App Store]. I am creating several objects using the same class, but each new one is overriding the older objects which exist. I thought an array would help keep things in order (and then on collision, I could call the array to check for which object was collided with), but it simply is not working that way - at all. Here is the relevant code.
Main.h
@property Coven *coven;
@property Nellie *nellie;
@property NSMutableArray *array;
// Physics, other things
Main.m
// Adding other things...
-(void) addCovenMember{
    // This function is called on a RANDOM time interval
    _array = [[NSMutableArray alloc] init];
    for (i = 0, i < 15, i++){
        _coven = (Coven*) [CCBReader load:@"CovenMember"];
        [_array addChild:_coven];
    }
    [_physicNode addChild:_coven];
}
-(BOOL)ccPhysicsCollisionBegin:(CCPhysicsCollisionPair *)pair nellie:(Nellie*)nellie coven:(Coven*)coven{
    for (_coven in _array){
        NSLog(@"%@", _coven.name);
        if (CGRectIntersectsRect(_nellie.boundingBox, _coven.boundingBox)){
            NSLog(@"We're intersecting!");
        }
    }
}
Coven.h
//Nothing Important Here
Coven.m
-(void)didLoadFromCCB{
    self.physicsBody.collisionType = @"coven";
}
Nellie.h
//Nothing Here
Nellie.m
-(void) didLoadFromCCB{
    self.physicsBody.collisionType = @"nellie";
}
The collision is logging with every collision - but only with the name of the LATEST _coven member to be generated, no matter what I am colliding with. This also means that the _coven.boundingBox belongs solely to the latest _coven member, and interaction only occurs when I hit the new member as soon as it generates onto the screen.
Any ideas? Any help?
Note: This is also posted on the Spritebuilder website - I decided to post it here as well because answers can be a little slow on those forums.
The -(void) addCovenMember method overwrites _array (creates a new instance) every time it's called. Thus, when you try to iterate in -ccPhysicsCollisionBegin: you'll only ever see 1 coven.
Add a nil check around your array creation:
if(_array == nil) {
    _array = [[NSMutableArray alloc] init];
}
The for loop in the -addCovenMember method looks broken (at least it isn't a valid C loop). Replace the , with ;:
for (i = 0; i < 15; i++){
Also, using for(_coven in _array) seems wrong; you already have a property self.coven (presumably) with a backing _coven ivar. Try changing it to for(Coven * c in self.array) and use the local c in the loop:
for (Coven * c in _array){
    NSLog(@"%@", c.name);
    if (CGRectIntersectsRect(_nellie.boundingBox, c.boundingBox)){
        NSLog(@"We're intersecting!");
    }
}
To everyone out in the world struggling with their ccPhysicsCollisions, arrays may not be the answer - this was a simple fix that left me incapacitated for days.
Using the basic ccPhysicsCollisionBegin that ships with SpriteBuilder, try this without arrays first:
Scene.m
-(BOOL)ccPhysicsCollisionBegin:(CCPhysicsCollisionPair *)pair nellie:(Nellie*)nellie coven:(Coven*)coven{
    [_coven stopAction:coven.path];
}
I initially created the method with:
[_coven stopAction:_coven.path];
Yes, that (underscore) set me back three weeks. Be sure you refer to the object interacting through the physics delegate, and not the object itself, which in my case, was constantly being overwritten by the new ones being generated.
Check your underscores.
Solved! :D
Thanks to @thelaws for your help! I'll get better at Obj-C... eventually.

Xcode / iOS: Simple example of a mutable C-Array as a class instance variable?

For some reason I just can't seem to get my head around the process of creating a C-array instance variable for a class that can have elements added to it dynamically at runtime.
My goal is to create a class called AEMesh. All AEMesh objects will have a C array storing the vertex data for that specific AEMesh's 3D model, for use with OpenGL ES (more specifically, its functionality for drawing a model by passing it a simple C array of vertex data).
Initially I was using an NSMutableArray, on the assumption that I could simply pass this array to OpenGL ES; however, that isn't the case, as the framework requires a C array. I got around the issue by essentially creating a C array of all of the vertex data for the current AEMesh when it came time to render that specific mesh, and passing that array to OpenGL ES. Obviously the issue here is performance, as I am constantly allocating and deallocating enough memory to hold every 3D model's vertex data in the app about a dozen times a second.
So, I'm not one to want the answer spoon-fed to me, but if anyone would be willing to explain to me the standard idiom for giving a class a mutable C array (some articles I've read mention using malloc?) I would greatly appreciate it. From the information I've gathered, using malloc might work, but this isn't creating a standard C array I can pass to OpenGL ES; instead it's more of a pseudo-C-array that works like a C array?
Anyways, I will continue to experiment and search the internet, but again, if anyone can offer a helping hand I would greatly appreciate it.
Thanks,
- Adam Eisfeld
The idea would just be to add a pointer to an array of AEMesh structures to your class, and then maintain the array as necessary. Following is a little (untested) code that uses malloc() to create such an array and realloc() to resize it. I'm growing the array 10 meshes at a time:
@interface MyClass : NSObject
{
    int meshCount;
    AEMesh *meshes;
}
@end

@implementation MyClass

-(id)init {
    if ((self = [super init])) {
        meshCount = 0;
        meshes = malloc(sizeof(AEMesh) * 10);
    }
    return self;
}

-(void)addMesh:(AEMesh)mesh {
    // Grow the buffer by another 10 meshes whenever it fills up
    if (meshCount % 10 == 0) {
        meshes = realloc(meshes, sizeof(AEMesh) * (meshCount + 10));
    }
    if (meshes != NULL) {
        meshes[meshCount] = mesh;
        meshCount++;
    }
}

@end
It might be worthwhile to factor the array management into its own Objective-C class, much as Brian Coleman's answer uses std::vector to manage the meshes. That way, you could use it for C arrays of any type, not just AEMesh.
From the information I've gathered, using malloc might work, but this isn't creating a standard C array I can pass to OpenGL ES; instead it's more of a pseudo-C-array that works like a C array?
A C array is nothing more than a series of objects ("objects" used here in the C sense of contiguous memory, not the OO sense) in memory. You can create one by declaring it on the stack:
int foo[10]; // array of 10 ints
or dynamically on the heap:
int *foo = malloc(sizeof(int)*10); // array of 10 ints, not on the stack
Don't forget to use free() to deallocate any blocks of memory you've created with malloc(), realloc(), calloc(), etc. when you're done with them.
I know it doesn't directly answer your question, but an even easier approach would be to work with an NSMutableArray instance variable until the point where you need to pass it to the API, where you would use getObjects:range: in order to convert it to a C-Array. That way you won't have to deal with "mutable" C-Arrays and save yourself the trouble.
If you're willing to use Objective-C++ and stray outside the bounds of C and Objective-C, then you can use a std::vector to amortise the cost of resizing the array of vertex data. Here's what things would look like:
#include <vector>
#include <gl.h>

@interface MyClass {
    std::vector<GLfloat> vertexData;
}
-(void) createMyVertexData;
-(void) useMyVertexData;
@end
@implementation MyClass

-(void) createMyVertexData {
    // Erase all current data from vertexData
    vertexData.clear();
    // The number of vertices in a triangle
    std::size_t nVertices = 3;
    // The number of coordinates required to specify a vertex (x, y, z)
    std::size_t nDimensions = 3;
    // Reserve sufficient capacity to store the vertex data
    vertexData.reserve(nVertices * nDimensions);
    // Add the vertex data to the vector
    // First vertex
    vertexData.push_back(0);
    vertexData.push_back(0);
    vertexData.push_back(0);
    // And so on
}
-(void) useMyVertexData {
    // Get a pointer to the first element in the vertex data array
    GLfloat* rawVertexData = &vertexData[0];
    // Get the size of the vertex data
    std::size_t sizeVertexData = vertexData.size();
    // Use the vertex data
}

@end
The neat bit is that vertexData is automatically destroyed along with the instance of MyClass. You don't have to add anything to the dealloc method in MyClass. Remember to define MyClass in a .mm file.
