Trouble with phone number on iPad mini - iOS

I am developing an app that shows a 10-digit number. When I run it on my iPhone 6 it displays fine: it shows 8183874201. But when I run it on my old iPad Mini it shows the number as something like -467821389.
The code I am running is:
var telefonoCasaStaff = self.timelineData[0].objectForKey("TelCasa") as Int
self.telCasaTextLabel.text = String(telefonoCasaStaff)
Any ideas?

Int is a 64-bit integer on 64-bit devices, and a 32-bit integer
on 32-bit devices.
8183874201 = 0x1E7CC0299 exceeds the range of 32-bit integers, and
apparently is truncated to 32-bit.
You could use Int64, but generally, storing phone numbers as integers doesn't make much sense; you should use strings instead.
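A minimal sketch of the string approach (assuming the backend can store "TelCasa" as a string; the key's type here is an assumption):

// Hypothetical: read "TelCasa" as a String so it is never truncated on 32-bit devices
if let telefonoCasaStaff = self.timelineData[0].objectForKey("TelCasa") as? String {
    self.telCasaTextLabel.text = telefonoCasaStaff
}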

Related

Getting garbage value while converting into long - Objective-C

I am trying to convert an NSString to long but I am getting a garbage value. Below is my code:
long t1 = [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000;
long t2 = [[jsonDict valueForKeyPath:@"detail.fee"] doubleValue] * 10000000000000000;
NSLog(@"t1: %ld", t1);
NSLog(@"t2: %ld", t2);
detail.amount = 51.74
detail.fee = 2.72
Output:
t1: 9223372036854775807 (getting a garbage value here)
t2: 27200000000000000 (working fine)
Thanks in advance.
Each number type (int, long, double, float) has limits. For your 64-bit long (because your device is 64-bit) the upper limit is 9,223,372,036,854,775,807 (see here: https://en.wikipedia.org/wiki/9,223,372,036,854,775,807).
In your case, 51.74 * 1,000,000,000,000,000,000 = 51,740,000,000,000,000,000,
while a 64-bit long only has a maximum of 9,223,372,036,854,775,807.
So an overflow happens at 9,223,372,036,854,775,808 and above, which is what your calculation evaluates to.
Also note that what you are doing will cause problems even if you only cater for the 64-bit long range: what happens when your app runs on a 32-bit device (like the iPhone 5c or earlier)?
It is generally a bad idea to work with numbers this large unless you are doing complex maths. If exact accuracy is not critical, consider simplifying the number, e.g. 51,740G (G = giga).
It's because you're storing the product in the long variables t1 and t2.
Use either float or double, and you'll get the correct answer.
Based on C's data types:
Long signed integer type. Capable of containing at least the
[−2,147,483,647, +2,147,483,647] range; thus, it is at least 32
bits in size.
Ref: https://en.wikipedia.org/wiki/C_data_types
9223372036854775807 is the maximum value of a 64-bit signed long. I deduce that [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000 is larger than the maximum long value, so when you cast it to long, you get the closest value that long can represent.
As you can read above, this is not possible with long. Since it looks like you are doing financial math, you should use NSDecimalNumber instead of double to solve the problem.
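A minimal sketch of the NSDecimalNumber approach, assuming detail.amount arrives as an NSNumber or NSString (its description is used to build the decimal):

// NSDecimalNumber keeps up to 38 significant decimal digits, so
// 51.74 * 10^18 = 51,740,000,000,000,000,000 fits without overflowing.
NSDecimalNumber *amount = [NSDecimalNumber decimalNumberWithString:
    [[jsonDict valueForKeyPath:@"detail.amount"] description]];
NSDecimalNumber *t1 = [amount decimalNumberByMultiplyingByPowerOf10:18];
NSLog(@"t1: %@", t1);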

Integer literal '115000159351' overflows when stored into 'Int' - works in one project but not another - Swift

I have a library from zendesk for iOS and I use a number they give me to sort help desk items by category. This is how I tell it what category I want:
hcConfig.groupIds = [115000159351]
However, Xcode is throwing the error:
Integer literal '115000159351' overflows when stored into 'Int'
OK, I understand: probably because the number needs more than 32 bits. But I have another app I made with an equally long number, and that one builds just fine with no errors. The exact same code, except for a slightly different number.
hcConfig.groupIds = [115000158052]
Why will one project build but the other will not?
For reference, here are their instructions:
https://developer.zendesk.com/embeddables/docs/ios_support_sdk/help_center#filter-articles-by-category
The Swift Int is a 32-bit or 64-bit integer, depending on the platform.
To create an NSNumber array from a 64-bit literal on all platforms, use
let groupIds = [NSNumber(value: 115000159351 as Int64)]
or
let groupIds = [115000159351 as Int64 as NSNumber]
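Applied to the original snippet (assuming hcConfig.groupIds expects an array of NSNumber):

hcConfig.groupIds = [NSNumber(value: 115000159351 as Int64)]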
When both integers are converted to binary, they each need about 37 bits:
1101011000110100010110010110001110111 = 115000159351
1101011000110100010110010011101100100 = 115000158052
So it seems the project that works is compiled as a 64-bit app, whereas the failing one is being compiled as a 32-bit app.
Please verify this.
Please refer to How to convert xcode 32 bit app into 64 bit xcode app to convert your app from 32-bit to 64-bit.
To use large numbers in an NSNumber, use the following method:
NSNumber *value = [NSNumber numberWithLongLong:115000159351];
Further, the following code also works fine for me on 32-bit:
var groupIds = [NSNumber]();
groupIds = [115000158052, 115000158053, 115000158054]
groupIds = [115000158052 as NSNumber, 115000158053 as NSNumber, 115000158054 as NSNumber]
groupIds = [115000158052 as Int64 as NSNumber, 115000158053 as Int64 as NSNumber, 115000158054 as Int64 as NSNumber]
I think the elements of groupIds are not NSNumber but Int.

Different size of int type on different devices

Consider the following code fragment:
if (((int)[@"foo" rangeOfString:@"a"].location + 1) > 0)
{
// found a
}
else
{
// not found a
}
On release builds it works fine (i.e. it goes to // not found a) on newer devices such as the iPad Air. But on old devices such as the iPad 2, it does not (i.e. it goes to // found a).
When debugging via Xcode it works fine on all devices.
PS: I know the above is poor coding practice and I should be using the following. But I am trying to understand the above behavior.
if ([@"foo" rangeOfString:@"a"].location != NSNotFound)
{
// found a
}
else
{
// not found a
}
rangeOfString: returns a range whose location is NSNotFound if the string does not contain the substring.
NSNotFound is declared as NSIntegerMax, which is 32-bit on 32-bit systems and 64-bit on 64-bit systems.
The problem occurs by casting the value to int, which is always 32-bit.
Casting a 64-bit integer to int will lose precision and/or the sign.
I believe it may have to do with the 32-bit vs. 64-bit architecture, and perhaps the optimizer is interfering with it.
Print the values you are getting out of it.
NSInteger is either 32 or 64 bit. int is 32 bit. That should explain everything.
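A rough sketch of what the cast does on each architecture (the 32-bit case involves signed overflow, which is undefined behaviour, so the exact release-build result depends on the optimizer):

// 64-bit: NSNotFound == 0x7FFFFFFFFFFFFFFF; casting to int truncates it to -1,
//         so -1 + 1 == 0 and the "> 0" test is false ("not found").
// 32-bit: NSNotFound == 0x7FFFFFFF (INT_MAX); (int)location + 1 overflows a signed int,
//         and the optimizer may fold "x + 1 > 0" into "x >= 0", which is true ("found a").
NSUInteger location = [@"foo" rangeOfString:@"a"].location;   // NSNotFound here
NSLog(@"location = %lu, (int)location + 1 = %d", (unsigned long)location, (int)location + 1);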

Xcode 7 Memory Address

In Xcode 6:
int i = 17;
printf ("i stores its value at %p\n", &i);
I will see something like this:
i stores its value at 0xbffff738
But in Xcode 7, its output format is:
i stores its value at 0x7fff5fbff7cc
Can someone explain the difference?
It has to do with the build architecture. The first is a 32-bit address; the second is a 64-bit address.
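A quick way to confirm which case you are in is to print the pointer size: 4 bytes on a 32-bit build, 8 bytes on a 64-bit build.

int i = 17;
printf("i stores its value at %p (pointer size: %zu bytes)\n", (void *)&i, sizeof(void *));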

Best technique for iPad 1 vs iPad 2 GPU determination?

The performance of the iPad 2 GPU is way better than the iPad 1. I'd like to switch in my app and add some extra nice graphical subtlety when I know the GPU can handle it.
So I'd like to be able to detect essentially the distinction between the iPad 1 and 2 (and later), ideally using as close to a capability detection as I can. There are plenty of unrelated things I could switch on (presence of camera, etc), but ideally I'd like to find something, maybe an OpenGL capability, that distinguishes the GPU more directly.
This Apple page doesn't list anything useful for iPad 1 vs 2, and this article talks about benchmarking and GPU arch differences but doesn't pinpoint anything that looks like I can query directly (e.g. number of texture units or whatever).
Anyone have any thoughts on how to do this, or am I missing something obvious? Thanks.
One distinction you can query for is the maximum texture size. On the iPad 2 and iPhone 4S, the maximum texture size is 4096 x 4096, whereas on all other iOS devices it's 2048 x 2048. It seems a safe assumption that future, more powerful iOS devices will also have a maximum texture size at least this large.
To query for the maximum texture size, first create your OpenGL ES context, then set it as the current context and run the following query:
GLint maxTextureSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
On my iPhone 4, this returns 2048 in maxTextureSize, but on my iPad 2 and iPhone 4S this gives back the value of 4096.
You can also test for the presence of some new extensions that the iPad 2 supports, such as EXT_shadow_samplers (more are documented in "What's New in iOS: iOS 5.0"), but those tests will only work on iOS 5.0. Stragglers still on iOS 4.x won't report those capabilities.
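A minimal sketch of such an extension check, assuming a current OpenGL ES context (extension name as listed for iOS 5):

// Look for the shadow-samplers extension in the extension string
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
BOOL hasShadowSamplers = (extensions != NULL &&
                          strstr(extensions, "GL_EXT_shadow_samplers") != NULL);
NSLog(@"Shadow samplers supported: %@", hasShadowSamplers ? @"YES" : @"NO");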
Today, with more GPUs available, here is what I came up with for my own needs.
enum GpuClass {
    kGpuA5 = 0,
    kGpuA6,
    kGpuA7,
    kGpuA8,
    kGpuUnknown,
};

- (enum GpuClass)reportGpuClass {
    NSString *glVersion = [NSString stringWithUTF8String:(char *)glGetString(GL_VERSION)];
    if ([glVersion containsString:@"Apple A5"] || [glVersion containsString:@"S5L8"]) {
        NSLog(@"Running on an A5 GPU");
        return kGpuA5;
    }
    if ([glVersion containsString:@"Apple A6"] || [glVersion containsString:@"IMGSGX5"]) {
        NSLog(@"Running on an A6 GPU");
        return kGpuA6;
    }
    if ([glVersion containsString:@"Apple A7"] || [glVersion containsString:@"G6430"]) {
        NSLog(@"Running on an A7 GPU");
        return kGpuA7;
    }
    if ([glVersion containsString:@"Apple A8"] || [glVersion containsString:@"GXA6850"]) {
        NSLog(@"Running on an A8 GPU");
        return kGpuA8;
    }
    return kGpuUnknown;
}
You may further differentiate between specific chips by specifying fuller version strings, e.g. IMGSGX543 instead of just IMGSGX5.
