I'm using NSJSONSerialization to convert a dictionary into JSON.
If I include an NSDecimalNumber equal to 0 in that dictionary, it is output as 0. That is wrong for my purposes: 0 is an int, and I need it to be output as 0.0.
This is what I'm doing:
NSDecimalNumber *decimal = [[NSDecimalNumber alloc] initWithFloat:0.0f];
// when fed into NSJSONSerialization it outputs as 0
Is there any way to output as 0.0?
OR am I incorrect in my assumption? Is 0 a valid float?
There is no way to affect the way NSJSONSerialization outputs numbers. But you really should not worry about this. JSON doesn't distinguish between different types of numbers, so you should always accept numbers both with and without decimal points, no matter what type you actually use for your calculations.
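For illustration, here is a minimal Objective-C sketch (the names and payload are mine, not from the question) showing both directions: serializing an NSDecimalNumber zero, and parsing a payload that spells the value as 0.0. Both ends see an NSNumber, which is why the textual form should not matter to a well-behaved consumer.
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Serializing: NSJSONSerialization decides the textual form itself.
        NSDictionary *payload = @{ @"value" : [NSDecimalNumber zero] };
        NSData *json = [NSJSONSerialization dataWithJSONObject:payload options:0 error:NULL];
        NSLog(@"%@", [[NSString alloc] initWithData:json encoding:NSUTF8StringEncoding]);
        // Typically prints {"value":0}

        // Parsing: "0" and "0.0" both come back as NSNumber instances.
        NSData *withDecimal = [@"{\"value\":0.0}" dataUsingEncoding:NSUTF8StringEncoding];
        NSDictionary *parsed = [NSJSONSerialization JSONObjectWithData:withDecimal options:0 error:NULL];
        NSLog(@"%@", [parsed[@"value"] class]); // a subclass of NSNumber
    }
    return 0;
}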
Related
I found a bug in my code that is caused by NSDecimalNumber.notANumber.intValue returning 9, while I would expect NaN (as floatValue or doubleValue return). Does anybody know why?
As mentioned by Joakim Danielson and noted in the Apple Developer Documentation:
... Because numeric types have different storage capabilities, attempting to initialize with a value of one type and access the value of another type may produce an erroneous result ...
And since Swift's Int struct cannot represent NaN values, you get this erroneous result.
Instead you could use Int's failable initialiser init(exactly:), which converts your NSDecimalNumber to an Int? that will either contain its value or be nil if the value is not representable as an Int.
let strangeNumber = NSDecimalNumber.notANumber // nan
let integerRepresentation = Int(exactly: strangeNumber) // nil
I have a problem: I want to convert a decimal byte to a hexadecimal byte, and I go through a string to make the conversion easier. My question is: how can I convert a string like the one below into a byte?
example:
NSString *var = @"0x21";
To
Byte cmd = 0x21;
You can convert an instance of NSString to an instance of NSData with -dataUsingEncoding:allowLossyConversion: or to a C array with -getCString:maxLength:encoding:.
In both cases you get a pointer, either to an object or to a char[]. Putting that pointer into the Byte array will convert the pointer and copy its value, but not the data it references.
Additionally: in your example you try to save 0.4 (zero – period – four) and 0.5 (zero – period – five) into a Byte[]. This will not do the job you probably expect. It will convert each value to a value of type Byte (an integer type!) and store that. Integer values greater than 255 will be truncated as well.
Therefore you have to use a mutable data object and concatenate the binary representations of the different types individually.
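A hedged sketch of that approach (the NSScanner-based hex parsing is my addition, not part of the answer above): parse the "0x21" string into an integer, then append the raw bytes of each value to an NSMutableData individually.
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Parse the hex string into an integer value (assumption: NSScanner is acceptable here).
        unsigned int parsedValue = 0;
        [[NSScanner scannerWithString:@"0x21"] scanHexInt:&parsedValue];
        Byte cmd = (Byte)parsedValue;                      // 0x21

        // Concatenate the binary representations individually, as described above.
        double value = 0.4;
        NSMutableData *packet = [NSMutableData data];
        [packet appendBytes:&cmd length:sizeof(cmd)];      // 1 byte
        [packet appendBytes:&value length:sizeof(value)];  // 8 bytes of raw IEEE 754 data
        NSLog(@"%@", packet);
    }
    return 0;
}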
I have some very simple code that does a calculation and converts the resulting double to an int.
let startingAge = (Double(babyAge/2).rounded().nextDown)
print(startingAge)
for each in 0..<allQuestions.count {
if allQuestions[each] == "\(Int(startingAge))"
The first print of startingAge gives me the correct answer, for example 5.0. But when it converts to an Int, it gives me an answer of 4. When the Double is 6.0, the int is 5.
I'm feeling stupid, but can't figure out what I'm doing wrong.
When you call rounded(), you round your value to the nearest integer.
When you call .nextDown, you get the next possible value less than the existing value, which means you now have the highest value that's less than the nearest integer to your original value. This still displays as the integer when you print it, but that's just rounding; it's really slightly less than the integer. So if it's printing as "4.0", it's really something like 3.9999999999999 or some such.
When you convert the value to an Int, it keeps the integer part and discards the part to the right of the decimal. Since the floating-point value is slightly less than the integer you rounded to thanks to .nextDown, the integer part is going to be one less than that integer.
Solution: Get rid of the .nextDown.
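The same effect can be reproduced in Objective-C with the C function nextafter, which behaves roughly like Swift's .nextDown (this is just an illustration, not the asker's code):
#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    double rounded = 5.0;
    double justBelow = nextafter(rounded, -INFINITY); // largest double smaller than 5.0
    NSLog(@"%f -> %d", justBelow, (int)justBelow);    // prints 5.000000 -> 4
    return 0;
}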
When you cast you lose precision.
In your case the line returns a Double: assume babyAge is 9; then startingAge is 3.999999...
let startingAge = (Double(babyAge/2).rounded().nextDown)
and when you print it your answer becomes 3
print("\(Int(startingAge))")
To fix this use this line instead:
let startingAge = (Double(babyAge/2).rounded().nextDown).rounded()
This is what nextDown does: it does not round values, and since the number is a floating-point number it becomes slightly less. If the number were an Int, it would become 1 less, I presume.
Hi everyone,
I'm working with a private API and I need to send integer and double values.
For integers, I don't have any problem, I convert the integer to NSNumber and everything works fine.
But with doubles that have no decimal part (46, for instance) my request is rejected because the server sees an integer where there should be a double.
My sys admin told me to send round double value with ".0", so if I want to send the double 46, I have to send 46.0.
The problem is that I can't send an NSString or the server will also reject my request ( it will see a string where there should be a double ).
So here is my question: is there a way to force a decimal representation onto an NSNumber, so that my double 46 would become the NSNumber 46.0?
Can anyone help me ?
Thanks in advance.
Whenever you are initialising the NSNumber you should use
mynumber = [NSNumber numberWithDouble: myDoublevalue];
and when you want to send it to the server, use
[mynumber doubleValue];
I guess the question is: What do you use to create your NSNumber?
Using
NSNumber *intNumber = @(46);
Will give you an NSNumber that contains an int, whereas
NSNumber *doubleNumber = [NSNumber numberWithDouble:46];
should result in an NSNumber containing a double.
(You can check this by calling [intNumber objCType], which will give "i", while [doubleNumber objCType] will give "d".)
The other problem you might experience is that you (or the API you use) uses some format conversion to JSON or something else.
Normal JSON converters will omit the .0 and therefore you might get an error.
Therefore, if you have access to the JSON (or whatever other format conversion you use, since you will never send an NSNumber itself to any server), you can fix it there.
Last point: if nothing helps with the two points above, you could think about adding a very small number that will force a decimal point but won't make much of a difference in normal cases:
input += pow(2, log2(fabs(input)) - 50); // use 20 instead of 50 for float numbers!
NSNumber *result = #(input);
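A runnable sketch of that trick (how it is applied here is my assumption; the offset of roughly input * 2^-50 is taken from the line above):
#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    @autoreleasepool {
        double input = 46;
        input += pow(2, log2(fabs(input)) - 50);   // adds roughly 4e-14, enough to force a fraction
        NSDictionary *payload = @{ @"value" : @(input) };
        NSData *json = [NSJSONSerialization dataWithJSONObject:payload options:0 error:NULL];
        NSLog(@"%@", [[NSString alloc] initWithData:json encoding:NSUTF8StringEncoding]);
        // e.g. {"value":46.00000000000004}, no longer a bare integer
    }
    return 0;
}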
I am trying to create an NSDecimalNumber to store a currency value in Core Data based on a string that contains the value 11.90. I want to keep the 2 decimals, but the 0 is being ignored and it always turns into 11.9 as an NSDecimalNumber. I tried to use the behavior protocol's rounding and set the scale, but it didn't work for me.
Is it working as designed, and should I just use a formatter to add the 0 when retrieving the 11.9 from Core Data, or is there a step I'm missing?
Thanks.
NSDecimalNumberHandler *behavior = [[NSDecimalNumberHandler alloc] initWithRoundingMode:NSRoundPlain
scale:2
raiseOnExactness:NO
raiseOnOverflow:NO
raiseOnUnderflow:NO
raiseOnDivideByZero:NO];
[NSDecimalNumber setDefaultBehavior:behavior];
NSDecimalNumber *priceDecimal;
NSDictionary *locale = [NSDictionary dictionaryWithObject:currentLocaleSeparator forKey:NSLocaleDecimalSeparator];
priceDecimal = [NSDecimalNumber decimalNumberWithString:price locale:locale];
priceDecimal = [priceDecimal decimalNumberByRoundingAccordingToBehavior:behavior];
Right, a formatter is the correct approach. You have to distinguish between the precision of the decimal number being stored, which seems to be working fine, and its representation or formatting.
I have made lots of apps involving currencies, and I have always found it easier and more convenient to simply count the cents (or hundredths unit) using 64-bit integers.
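A small sketch of the counting-cents idea (the variable names are mine): keep an integer number of cents and only build a decimal value at the boundaries where one is needed.
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        int64_t cents = 1190; // 11.90 stored exactly, no floating point involved
        NSDecimalNumber *amount = [NSDecimalNumber decimalNumberWithMantissa:cents
                                                                    exponent:-2
                                                                  isNegative:NO];
        NSLog(@"%@", amount); // prints 11.9; the trailing zero is a formatting concern
    }
    return 0;
}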
Check out NSNumberFormatter's minimumFractionDigits.
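For completeness, a short sketch of that formatter approach, continuing the example from the question:
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSDecimalNumber *price = [NSDecimalNumber decimalNumberWithString:@"11.90"]; // stored as 11.9

        NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
        formatter.numberStyle = NSNumberFormatterDecimalStyle;
        formatter.minimumFractionDigits = 2;
        formatter.maximumFractionDigits = 2;

        NSLog(@"%@", [formatter stringFromNumber:price]); // "11.90" (the separator depends on the locale)
    }
    return 0;
}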