Irregular retain counts of NSNumbers - ios

I am using NSNumbers throughout my app (non-ARC), created with different syntaxes. Just to be a little more informed, I tried to see how NSNumbers are retained depending on their initialization syntax. So I did the following:
NSNumber* a = @1;
NSNumber* b = [[NSNumber alloc] initWithInt:2];
NSNumber* c = [NSNumber numberWithInt:3];
NSLog(@"%d | %d | %d", a.retainCount, b.retainCount, c.retainCount);
This code fragment is executed on a button tap, and the output (over repeated taps) has perplexed me:
73 | 27 | 6
78 | 159 | 22
78 | 160 | 22
78 | 161 | 22
78 | 162 | 22
78 | 163 | 22
85 | 169 | 22
85 | 170 | 22
85 | 171 | 22
85 | 172 | 22
Now this does not really have a purpose (at least not in my case), but I would like to know how these NSNumbers end up with such retain counts.

You should never use retainCount. NEVER. See here.

In Objective-C, retainCount is the number that controls the lifespan of an object. The object remains alive until its retainCount drops to 0, at which point it gets deallocated. This is the big picture, with many exceptions, but it is the rule that applies here.
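As a minimal non-ARC sketch (illustrative only; these are the conceptual ownership counts, not values you should ever read back through retainCount):
NSObject *obj = [[NSObject alloc] init]; // ownership count starts at 1
[obj retain];                            // +1 -> 2
[obj release];                           // -1 -> 1
[obj release];                           // -1 -> 0, the object is deallocated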
Those retain counts mean that those numbers are used somewhere in your application. Some other objects have retained them. Since your own code does not, this means that some other system objects do.
We'll profile your application with the Allocations instrument and see what it can tell us. Here is the code we'll run:
NSNumber* a = @1;
NSNumber* b = [[[NSNumber alloc] initWithInt:2] autorelease];
NSNumber* c = [NSNumber numberWithInt:3];
NSLog(@"%d | %d | %d", a.retainCount, b.retainCount, c.retainCount);
[[[UIAlertView alloc] initWithTitle:@"number b"
                            message:[NSString stringWithFormat:@"address: %p, retainCount: %d", b, b.retainCount]
                           delegate:nil
                  cancelButtonTitle:nil
                  otherButtonTitles:nil] show];
This alert will tell us the address of the number. Instruments will let us track this object's life.
Let's choose the Debug configuration in the Profile action of our scheme, check "Record reference counts" in the Allocations instrument options, and see what we can get.
See? This number is indeed used by many system frameworks. Now you know why it has such a big retain count :-)

Related

Decode UDP message with LUA

I'm relatively new to lua and programming in general (self taught), so please be gentle!
Anyway, I wrote a lua script to read a UDP message from a game. The structure of the message is:
DATAxXXXXaaaaBBBBccccDDDDeeeeFFFFggggHHHH
DATA = the 4-letter ID and x = a control character
XXXX = integer shows the group of the data (groups are known)
aaaa...HHHH = 8 single-precision floating point numbers
Those last eight numbers are the ones I need to decode.
If I print the message as received, it's something like:
DATA*{V???A?A?...etc.
Using string.byte(), I'm getting a stream of bytes like this (I have "formatted" the bytes to reflect the structure above):
68 65 84 65/42/20 0 0 0/237 222 28 66/189 59 182 65/107 42 41 65/33 173 79 63/0 0 128 63/146 41 41 65/0 0 30 66/0 0 184 65
The first 5 bytes are of course the DATA*. The next 4 are the 20th group of data. The next bytes are the ones I need to decode, and they are equal to these values:
237 222 28 66 = 39.218
189 59 182 65 = 22.779
107 42 41 65 = 10.573
33 173 79 63 = 0.8114
0 0 128 63 = 1.0000
146 41 41 65 = 10.573
0 0 30 66 = 39.500
0 0 184 65 = 23.000
I've found C# code that does the decoding with BitConverter.ToSingle(), but I haven't found anything like this for Lua.
Any idea?
What Lua version do you have?
This code works in Lua 5.3
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
-- Read two float values starting from position 10 in the string
print(string.unpack("<ff", str, 10)) --> 39.217700958252 22.779169082642 18
-- 18 (third returned value) is the next position in the string
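If you need all eight floats at once (assuming str holds the complete message rather than the truncated sample above), the format string can simply be widened:
-- Read the eight single-precision floats that follow the 9-byte header
local f1, f2, f3, f4, f5, f6, f7, f8 = string.unpack("<ffffffff", str, 10)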
For Lua 5.1 you have to write a special function (or borrow one from François Perrad's git repo):
local function binary_to_float(str, pos)
    local b1, b2, b3, b4 = str:byte(pos, pos + 3)  -- little-endian: b4 is the most significant byte
    local sign = b4 > 0x7F and -1 or 1
    local expo = (b4 % 0x80) * 2 + math.floor(b3 / 0x80)
    local mant = ((b3 % 0x80) * 0x100 + b2) * 0x100 + b1
    local n
    if mant + expo == 0 then
        n = sign * 0.0
    elseif expo == 0xFF then
        n = (mant == 0 and sign or 0) / 0          -- +/-infinity or NaN
    else
        n = sign * (1 + mant / 0x800000) * 2.0^(expo - 0x7F)
    end
    return n
end
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
print(binary_to_float(str, 10)) --> 39.217700958252
print(binary_to_float(str, 14)) --> 22.779169082642
It's the little-endian byte order of an IEEE-754 single-precision binary value:
E.g., 0 0 128 63 is:
00111111 10000000 00000000 00000000
(63) (128) (0) (0)
Understanding why that equals 1 requires the very basics of the IEEE-754 representation, namely its use of an exponent and a mantissa. See here to start.
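As a quick worked check (the 32 bits split into 1 sign bit, 8 exponent bits, and 23 mantissa bits):
-- 0x3F800000: sign = 0, exponent = 01111111 (127), mantissa = 0
-- value = (1 + 0/2^23) * 2^(127 - 127) = 1.0
-- In Lua 5.3 you can confirm it directly:
local v = string.unpack("<f", string.char(0, 0, 128, 63))
print(v) --> 1.0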
See @Egor's answer above for how to use string.unpack() in Lua 5.3, and one possible implementation you could use in earlier versions.

iOS CoreMIDI Skipping MidiPackets

I'm having issues implementing MIDI in my iOS app: the receiver callback seems to be skipping MIDI messages and packets. I'm using MIDI Monitor to check which MIDI messages I'm missing, skipping over, etc.
So the million dollar question is why is iOS skipping certain MIDI messages? Sometimes it doesn't skip MIDI messages, but other times it does. I'm not sure how to approach debugging this as I have exhausted my brain at this point.
My receiver code:
void MidiReceiver(const MIDIPacketList *packets,
                  void *context, void *sourceContext) {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (packets->numPackets > 0) {
            MIDIPacket *packet = (MIDIPacket *)packets->packet;
            // Loop through total number of packets
            for (int i = 0; i < packets->numPackets; i++) {
                // Go through each packet; iOS sometimes clumps all data into one packet
                // if the MIDI messages are triggered at the same time.
                // (Assumes every message is 3 bytes long.)
                for (int j = 0; j < packet->length; j += 3) {
                    NSArray *array = [[NSArray alloc] initWithObjects:[NSNumber numberWithUnsignedInt:packet->data[j]],
                                      [NSNumber numberWithUnsignedInt:packet->data[j+1]],
                                      [NSNumber numberWithUnsignedInt:packet->data[j+2]], nil];
                    // Use the data to do something meaningful in the app
                    [myViewController processMidiData:array];
                }
                // Next packet
                packet = MIDIPacketNext(packet);
            }
        }
    });
}
The monitor output format is: (Time) - (MIDI command type) - (CC value or velocity)
Midi Monitor Debug:
12:45:32.697 Control 0
12:45:32.720 Control 1
12:45:32.737 Control 1
12:45:32.740 Control 2
12:45:32.750 Control 3
12:45:32.763 Note Off A♯1 0
12:45:32.763 Note Off F2 0
12:45:32.763 Note Off D3 0
12:45:32.763 Control 4
12:45:32.770 Control 5
12:45:32.780 Control 6
12:45:32.790 Control 8
12:45:32.800 Control 9
12:45:32.810 Control 11
12:45:32.820 Control 13
12:45:32.832 Control 14
12:45:32.845 Control 16
12:45:32.850 Control 18
12:45:32.873 Control 21
12:45:32.883 Control 22
12:45:32.898 Control 24
12:45:32.913 Control 26
12:45:32.933 Control 27
12:45:32.948 Control 28
12:45:33.020 Control 27
12:45:33.030 Control 26
12:45:33.040 Control 25
12:45:33.050 Control 24
12:45:33.060 Control 22
My App's Debug Monitor:
12:45:33.050 Control 0
12:45:33.051 Control 1
12:45:33.051 Control 1
12:45:33.051 Control 2
12:45:33.051 Control 3
12:45:33.083 Note Off D3 0 <----- Where's A#1 and F2!!! :(
12:45:33.087 Control 4
12:45:33.087 Control 4
12:45:33.097 Control 5
12:45:33.100 Control 6
12:45:33.110 Control 8
12:45:33.120 Control 9
12:45:33.130 Control 11
12:45:33.140 Control 13
12:45:33.153 Control 14
12:45:33.165 Control 16
12:45:33.170 Control 18
12:45:33.193 Control 21
12:45:33.203 Control 22
12:45:33.218 Control 24
12:45:33.233 Control 26
12:45:33.256 Control 27
12:45:33.268 Control 28
12:45:33.341 Control 27
12:45:33.351 Control 26
12:45:33.361 Control 25
12:45:33.374 Control 24
12:45:33.381 Control 22
Got some help from Kurt Revis; it turned out I was reading the packets too late because of my use of dispatch_async: the MIDIPacketList handed to the read callback isn't guaranteed to stay valid after the callback returns, so the data has to be copied out before being passed to the main queue.
My revised code (the packets are parsed first, then the parsed data is dispatched):
void MidiReceiver(const MIDIPacketList *packets,
                  void *context, void *sourceContext) {
    NSMutableArray *packetData = [[NSMutableArray alloc] init];
    if (packets->numPackets > 0 && object != nil) {
        MIDIPacket *packet = &packets->packet[0];
        // Loop through total number of packets
        for (int i = 0; i < packets->numPackets; ++i) {
            int idx = 0;
            while (idx < packet->length) {
                NSArray *array = [[NSArray alloc] initWithObjects:[NSNumber numberWithUnsignedInt:packet->data[idx]],
                                  [NSNumber numberWithUnsignedInt:packet->data[idx+1]],
                                  [NSNumber numberWithUnsignedInt:packet->data[idx+2]], nil];
                [packetData addObject:array];
                idx += 3;
            }
            packet = MIDIPacketNext(packet);
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        for (NSArray *packet in packetData) {
            [object receiveMIDIInput:packet];
        }
    });
}

Golang append memory allocation VS. STL push_back memory allocation

I compared Go's append function with the STL's vector::push_back and found different memory allocation strategies, which confused me. The code is as follows:
// CPP STL code
void getAlloc() {
    vector<double> arr;
    int s = 9999999;
    int precap = arr.capacity();
    for (int i = 0; i < s; i++) {
        if (precap < i) {
            arr.push_back(rand() % 12580 * 1.0);
            precap = arr.capacity();
            printf("%d %p\n", precap, &arr[0]);
        } else {
            arr.push_back(rand() % 12580 * 1.0);
        }
    }
    printf("\n");
    return;
}
// Golang code
func getAlloc() {
    arr := []float64{}
    size := 9999999
    pre := cap(arr)
    for i := 0; i < size; i++ {
        if pre < i {
            arr = append(arr, rand.NormFloat64())
            pre = cap(arr)
            log.Printf("%d %p\n", pre, &arr)
        } else {
            arr = append(arr, rand.NormFloat64())
        }
    }
    return
}
But in Go the printed address never changes as the slice keeps growing, which really confused me.
By the way, the memory allocation strategy (how the capacity grows) is different in these two implementations (STL vs. Go). Is there any advantage or disadvantage to either? Here is the simplified output of the code above [capacity and printed address]:
Golang CPP STL
2 0xc0800386c0 2 004B19C0
4 0xc0800386c0 4 004AE9B8
8 0xc0800386c0 6 004B29E0
16 0xc0800386c0 9 004B2A18
32 0xc0800386c0 13 004B2A68
64 0xc0800386c0 19 004B2AD8
128 0xc0800386c0 28 004B29E0
256 0xc0800386c0 42 004B2AC8
512 0xc0800386c0 63 004B2C20
1024 0xc0800386c0 94 004B2E20
1280 0xc0800386c0 141 004B3118
1600 0xc0800386c0 211 004B29E0
2000 0xc0800386c0 316 004B3080
2500 0xc0800386c0 474 004B3A68
3125 0xc0800386c0 711 004B5FD0
3906 0xc0800386c0 1066 004B7610
4882 0xc0800386c0 1599 004B9768
6102 0xc0800386c0 2398 004BC968
7627 0xc0800386c0 3597 004C1460
9533 0xc0800386c0 5395 004B5FD0
11916 0xc0800386c0 8092 004C0870
14895 0xc0800386c0 12138 004D0558
18618 0xc0800386c0 18207 004E80B0
23272 0xc0800386c0 27310 0050B9B0
29090 0xc0800386c0 40965 004B5FD0
36362 0xc0800386c0 61447 00590048
45452 0xc0800386c0 92170 003B0020
56815 0xc0800386c0 138255 00690020
71018 0xc0800386c0 207382 007A0020
....
UPDATE:
See comments for Golang memory allocation strategy.
For STL, the strategy depends on the implementation. See this post for further information.
Your Go and C++ code fragments are not equivalent. In the C++ function, you are printing the address of the first element in the vector, while in the Go example you are printing the address of the slice itself.
Like a C++ std::vector, a Go slice is a small data type that holds a pointer to an underlying array that holds the data. That data structure has the same address throughout the function. If you want the address of the first element in the slice, you can use the same syntax as in C++: &arr[0].
You're getting the pointer to the slice header, not the actual backing array. You can think of the slice header as a struct like
type SliceHeader struct {
    len, cap     int
    backingArray unsafe.Pointer
}
When you append and the backing array is reallocated, the pointer backingArray will likely be changed (not necessarily, but probably). However, the location of the struct holding the length, cap, and pointer to the backing array doesn't change -- it's still on the stack right where you declared it. Try printing &arr[0] instead of &arr and you should see behavior closer to what you expect.
This is pretty much the same behavior as std::vector, incidentally. Think of a slice as closer to a vector than a magic dynamic array.
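A minimal sketch (not from the original post) that makes the difference between the two addresses visible:
package main

import "fmt"

func main() {
    arr := []float64{}
    for i := 0; i < 5; i++ {
        arr = append(arr, float64(i))
        // &arr    -> address of the slice header itself, stable for the whole function
        // &arr[0] -> address of the first element of the backing array,
        //            which moves whenever append has to reallocate
        fmt.Printf("cap=%d header=%p first=%p\n", cap(arr), &arr, &arr[0])
    }
}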

Creating Core Data Entity working but fetching an Entity doesn't work

So, I have NSManagedObject subclasses User, Boundary, and Preset. A Preset is always tied to a single user. A user can have many Presets. Each Preset may be tied to a boundary.
Basically, the User has Presets that he can save to each Boundary he has.
User has a to-many relationship to Preset.
Boundary has a many-to-many relationship to Preset.
I am trying to generate a list of Presets that the User has minus the ones already tied to the boundary.
I am using MagicalRecord.
My issue is when I create a new User and Boundary, this works:
Boundary *boundary = [Boundary MR_createEntity];
boundary.name = @"test boundary";
UserDB *user = [UserDB MR_createEntity];
user.username = @"test User";
Preset *preset01 = [Preset MR_findFirstByAttribute:@"nameDisplay" withValue:@"C4"];
DLog(@"preset01.nameDisplay: %@", preset01.nameDisplay);
Preset *preset02 = [Preset MR_findFirstByAttribute:@"nameDisplay" withValue:@"B"];
DLog(@"preset02.nameDisplay: %@", preset02.nameDisplay);
[boundary setPresets:[NSSet setWithObject:preset01]];
[user setPresets:[NSSet setWithObjects:preset01, preset02, nil]];
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"user == %@ AND boundary != %@", user, boundary];
NSArray *presetsList = [Preset MR_findAllWithPredicate:predicate];
DLog(@"presetsList: %@", presetsList);
So I set preset01 on the boundary, and preset01 and preset02 on the user. The list shown to the user should therefore contain only preset02 (since preset01 is already tied to the boundary, the user shouldn't be able to add it again).
DEBUG | -[LoginViewController viewDidLoad] | preset01.nameDisplay: C4
DEBUG | -[LoginViewController viewDidLoad] | preset02.nameDisplay: B
DEBUG | -[LoginViewController viewDidLoad] | presetList: (
"<SoilTestPointPreset: 0x1e06b8b0> (entity: Preset; id: 0x1e06bca0 <x-coredata://7476DB86-AF79-445C-B3AE-6C91088704A0/Preset/p98> ; data: {\n attributes = \"<relationship fault: 0x1e074000 'attributes'>\";\n boundary = nil;\n gpsLocation = nil;\n nameDisplay = B;\n nameTitle = nil;\n rgbColor = \"0x1e06b1d0 <x-coredata://7476DB86-AF79-445C-B3AE-6C91088704A0/RGBColor/p49>\";\n testing = nil;\n user = \"0x1e05ff60 <x-coredata:///UserDB/t59757D99-2FBF-4FBF-97AF-39582FC4B5503>\";\n})"
)
That's what I expected. But now when I fetch the User and Boundary objects:
Boundary *boundary = [Boundary MR_findFirstByAttribute:@"boundaryID" withValue:@3748];
DLog(@"boundary.name: %@", boundary.name);
UserDB *user = [UserDB MR_findFirstByAttribute:@"uid" withValue:@99];
DLog(@"user.username: %@", user.username);
My array is empty:
DEBUG | -[LoginViewController viewDidLoad] | boundary.name: 997677
DEBUG | -[LoginViewController viewDidLoad] | user.username: thatPerson
DEBUG | -[LoginViewController viewDidLoad] | preset01.nameDisplay: C4
DEBUG | -[LoginViewController viewDidLoad] | preset02.nameDisplay: B
DEBUG | -[LoginViewController viewDidLoad] | presetList: (
)
Why does fetching the User and Boundary from Core Data change the results as opposed to creating them?
UPDATE:
I added:
Boundary *boundary = [Boundary MR_findFirstByAttribute:@"boundaryID" withValue:@3748];
DLog(@"boundary.name: %@", boundary.name);
DLog(@"boundary.presets.count: %d", boundary.presets.count); // Added
UserDB *user = [UserDB MR_findFirstByAttribute:AVI_UID withValue:@99];
DLog(@"user.username: %@", user.username);
DLog(@"user.presets.count: %d", user.presets.count); // Added
DLog(@"AFTER | boundary.presets.count: %d", boundary.presets.count); // Added
DLog(@"AFTER | user.presets.count: %d", user.presets.count); // Added
[[NSManagedObjectContext MR_contextForCurrentThread] MR_saveToPersistentStoreAndWait]; // Added
The User and Boundary do have Preset relationships:
DEBUG | -[LoginViewController viewDidLoad] | boundary.name: 997677
DEBUG | -[LoginViewController viewDidLoad] | BEFORE | boundary.presets.count: 0
DEBUG | -[LoginViewController viewDidLoad] | user.username: iDealer
DEBUG | -[LoginViewController viewDidLoad] | BEFORE | user.presets.count: 0
DEBUG | -[LoginViewController viewDidLoad] | preset01.nameDisplay: C4
DEBUG | -[LoginViewController viewDidLoad] | preset02.nameDisplay: B
DEBUG | -[LoginViewController viewDidLoad] | AFTER | boundary.presets.count: 1
DEBUG | -[LoginViewController viewDidLoad] | AFTER | user.presets.count: 2
-[NSManagedObjectContext(MagicalSaves) MR_saveWithOptions:completion:](0x20831a80) → Saving <NSManagedObjectContext (0x20831a80): *** DEFAULT ***> on *** MAIN THREAD ***
-[NSManagedObjectContext(MagicalSaves) MR_saveWithOptions:completion:](0x20831a80) → Save Parents? 1
-[NSManagedObjectContext(MagicalSaves) MR_saveWithOptions:completion:](0x20831a80) → Save Synchronously? 1
-[NSManagedObjectContext(MagicalSaves) MR_saveWithOptions:completion:](0x1f59e860) → Saving <NSManagedObjectContext (0x1f59e860): *** BACKGROUND SAVING (ROOT) ***> on *** MAIN THREAD ***
-[NSManagedObjectContext(MagicalSaves) MR_saveWithOptions:completion:](0x1f59e860) → Save Parents? 0
-[NSManagedObjectContext(MagicalSaves) MR_saveWithOptions:completion:](0x1f59e860) → Save Synchronously? 1
__70-[NSManagedObjectContext(MagicalSaves) MR_saveWithOptions:completion:]_block_invoke21(0x1f59e860) → Finished saving: <NSManagedObjectContext (0x1f59e860): *** BACKGROUND SAVING (ROOT) ***> on *** MAIN THREAD ***
DEBUG | -[LoginViewController viewDidLoad] | presetList: (
)
You need to save your data before you can fetch it from a store. Use MR_saveToPersistentStoreAndWait or one of its variants. Fetching always goes to 'disk', so saving first should fix this.
After a bunch of playing around, I narrowed down the issue to the NSPredicate I was using:
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"user == %@ AND boundary != %@", user, boundary];
There seems to be an issue with NOT and NONE in the queries. I got the answer in a different question I posted: NSPredicate with a !=?
The answer involves using a SUBQUERY instead of NOT:
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"user == %@ AND SUBQUERY(boundary, $p, $p == %@).@count == 0", user, boundary];
The SUBQUERY walks each Preset's to-many boundary relationship and counts the members equal to the given boundary; keeping only the Presets where that count is 0 excludes the ones already tied to the boundary.
I'm guessing the fetches I was making were fine and the query simply returned nothing because of the bad NSPredicate, but I'm not positive.

Using yyparse() to make a two pass assembler?

I'm writing an assembler for a custom micro controller I'm working on. I've got the assembler to a point where it will assemble instructions down to binary.
However, I'm now having problems getting labels to work. Currently, when my assembler encounters a new label, it stores the name of the label and the memory location it refers to. When an instruction references a label, the assembler looks up the label and replaces the reference with the appropriate value.
This is fine and dandy, but what if the label is defined after the instruction referencing it? Because of this, I need to have my parser run over the code twice.
Here's what I currently have for my main function:
int main(int argc, char* argv[])
{
    if (argc < 2 || strcmp(argv[1], "-h") == 0 || strcmp(argv[1], "--help") == 0)
    {
        //printf("%s\n", usage);
        return 1;
    }
    // redirect stdin to the input file
    int savedStdin = dup(0);
    close(0);

    // pass 1 on the file
    int fp = open(argv[1], O_RDONLY);
    dup2(fp, 0);

    yyparse();

    lseek(fp, 0, SEEK_SET);

    // pass 2 on the file
    if (secondPassNeeded)
    {
        fp = open(argv[1], O_RDONLY);
        dup2(fp, 0);
        yyparse();
    }
    close(fp);

    // restore stdin
    dup2(savedStdin, 0);

    for (int i = 0; i < labels.size(); i++)
    {
        printf("Label: %s, Loc: %d\n", labels[i].name.c_str(), labels[i].memoryLoc);
    }
    return 0;
}
I'm using this inside a flex/bison configuration.
If that is all you need, you don't need a full two-pass assembler. If the label is not defined when you reference it, you simply output a stand-in address (say 0x0000) and keep a data structure that lists all of the places with forward references and which symbol they referred to. At the end of the file (or block, if you have local symbols), you simply go through that list and patch the addresses.
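A minimal sketch of that backpatching idea (hypothetical names, not the poster's code; it assumes the output is a flat vector of 16-bit words):
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct Fixup {
    std::string label;  // symbol that was not yet defined at the reference
    size_t offset;      // index of the placeholder word in the output image
};

std::vector<uint16_t> image;              // assembled output words
std::map<std::string, uint16_t> symbols;  // label -> address, filled in as labels are defined
std::vector<Fixup> fixups;                // forward references seen so far

// Called whenever an instruction references a label.
void emitLabelRef(const std::string& label) {
    auto it = symbols.find(label);
    if (it != symbols.end()) {
        image.push_back(it->second);              // label already known: emit its address
    } else {
        fixups.push_back({label, image.size()});  // remember where the hole is
        image.push_back(0x0000);                  // stand-in address
    }
}

// Called once at the end of the file (or block, for local symbols).
void resolveFixups() {
    for (const auto& f : fixups)
        image[f.offset] = symbols.at(f.label);    // patch the real address in
}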
