How can I express a number in Objective-C that is "infinitely" close to zero but still larger than zero? Essentially, I want the smallest positive number.
I want to express the number, .0000000000000001 in a simpler form.
What's the smallest number I can get without it being zero?
Use scientific notation when dealing with really small or really large numbers:
double reallyTiny = 1.0e-16; // .0000000000000001
But the best way to start from the smallest normalized positive double is to use:
double theTiniestPositive = DBL_MIN; // 2.2250738585072014e-308
Use the nextafter function from math.h. It has the signature nextafter(x, y) and returns the next representable value after x in the direction of y.
Try the value DBL_MIN or FLT_MIN; the C standard only guarantees they are no greater than 1E-37. On IEEE 754 systems, FLT_MIN is about 1.17549e-38 and DBL_MIN about 2.2250738585072014e-308, and subnormal values go smaller still.
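To make the options above concrete, here is a minimal C sketch; it only uses float.h and math.h, and nextafter(0.0, 1.0) yields the smallest positive (subnormal) double:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    double tiny = 1.0e-16;                 /* scientific notation */
    printf("%g\n", tiny);                  /* 1e-16 */
    printf("%g\n", DBL_MIN);               /* 2.22507e-308, smallest normalized double */
    printf("%g\n", nextafter(0.0, 1.0));   /* 4.94066e-324, smallest positive subnormal */
    return 0;
}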
I have some features that are zero-centered values and are supposed to represent the change between a current value and a previous value. Generally speaking, I believe there should be some symmetry between these values, i.e. there should be roughly the same number of positive values as negative values, and the values should operate on roughly the same scale.
When I try to scale my samples using MaxAbsScaler, I notice that my negative values for this feature get almost completely drowned out by the positive values. And I don't really have any reason to believe my positive values should be that much larger than my negative values.
What I've noticed is that, fundamentally, the magnitudes of percentage-change values are not symmetric in scale. For example, if I have a value that goes from 50 to 200, that is a 300.0% change; if the value goes from 200 to 50, that is a -75.0% change. I get that there is a reason for this, but for my feature I don't see why a change from 50 to 200 should be 3x+ more "important" than the same change in value in the opposite direction.
Given this, I do not believe there is any reason to want my model to treat a change from 200 to 50 as a "lesser" change than one from 50 to 200. Since I am trying to represent the change of a value over time, I want to abstract this pattern so that my model can "visualize" the change of a value over time the same way a person would.
Right now I am solving this with the following formula:
def change(prev, curr):  # wrapped as a function for runnability; e.g. change(50, 200) == 3.0
    if curr > prev:
        return curr / prev - 1
    else:
        return (prev / curr - 1) * -1
And this does seem to treat changes in value similarly regardless of direction, i.e. from the example above, 50→200 = +300% and 200→50 = -300%. Is there a reason why I shouldn't be doing this? Does this accomplish my goal? Has anyone run into similar dilemmas?
This is a discussion question, and it's difficult to know the right answer without knowing the physical relevance of your feature. You are calculating a percentage change, and a percentage change depends on the original value. I am not a big fan of a custom formula whose only purpose is to make percent change symmetric, since it adds a layer of complexity that is, in my opinion, unnecessary.
If you want change to be symmetric, you can try direct difference or factor change. There's nothing to suggest that difference or factor change is less correct than percent change. So, depending on the physical relevance of your feature, each of the following symmetric measures would be a correct way to measure change:
Difference change -> 50 to 200 yields 150, 200 to 50 yields -150
Factor change with logarithm -> 50 to 200 yields log(4), 200 to 50 yields log(1/4) = -log(4)
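As a quick illustration of the second option, here is a small C sketch (the function name log_change is just illustrative); log(curr/prev) is symmetric by construction, since log(b/a) = -log(a/b):

#include <stdio.h>
#include <math.h>

/* Log factor change: symmetric by construction, log(b/a) = -log(a/b). */
double log_change(double prev, double curr) {
    return log(curr / prev);
}

int main(void) {
    printf("%f\n", log_change(50.0, 200.0));  /*  1.386294 =  log(4) */
    printf("%f\n", log_change(200.0, 50.0));  /* -1.386294 = -log(4) */
    return 0;
}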
You're having trouble because you haven't brought the abstract questions into your paradigm.
"... my model can "visualize" ... same way a person would."
In this paradigm, you need a metric for "same way". There is no such empirical standard. You've dropped both of the simple standards -- relative error and absolute error -- and you posit some inherently "normal" standard that doesn't exist.
Yes, we run into these dilemmas: choosing a success metric. You've chosen a classic example from "How To Lie With Statistics"; depending on the choice of starting and finishing proportions and the error metric, you can "prove" all sorts of things.
This brings us to your central question:
Does this accomplish my goal?
We don't know. First of all, you haven't given us your actual goal. Rather, you've given us an indefinite description and a single example of two data points. Second, you're asking the wrong entity. Make your changes, run the model on your data set, and examine the properties of the resulting predictions. Do those properties satisfy your desired end result?
For instance, given your posted data points, (200, 50) and (50, 200), how would other examples fit in, such as (1, 4), (1000, 10), etc.? If you're simply training on the proportion of change over the full range of values involved in that transaction, your proposal is just what you need: use the higher value as the basis. Since you didn't post any representative data, we have no idea what sort of distribution you have.
I am reading from a txt file and populating a core data entity.
At some point I have read the value from the TXT file, and the value is @"0.9".
Now I assign it to a CGFloat.
CGFloat value = (CGFloat)[stringValue floatValue];
debugger shows value as 0.89999997615814208 !!!!!!?????
Why? A bug? Even if it thinks [stringValue floatValue] is a double, casting it to CGFloat should not produce that abnormality.
The binary floating point representation used for float can't store the value exactly. So it uses the closest representable value.
It's similar to decimal numbers: It's impossible to represent one third in decimal (instead we use an approximate representation like 0.3333333).
Because to store a value in a binary float you can only approximate it by summing fractions like 1/2, 1/4, 1/8, etc. For 0.9 (and many other values) there is no exact representation that can be constructed from summing fractions like this. Whereas if the value were, say, 0.25, you could represent that exactly as 1/4.
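To see this concretely, a minimal C sketch (the exact digits printed can vary slightly by platform, but on IEEE 754 hardware they look like this):

#include <stdio.h>

int main(void) {
    float f = 0.9f;           /* the nearest float to 0.9 */
    printf("%.17f\n", f);     /* 0.89999997615814209 */
    printf("%.17f\n", 0.9);   /* 0.90000000000000002 (nearest double) */
    return 0;
}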
Floating point imprecision, check out http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Basically it has to do with how floating point works: a float doesn't store your decimal digits directly, it stores a sign, a significand, and an exponent, and the value is reconstructed as significand × 2^exponent. That reconstruction doesn't always land exactly on the value you assigned.
Good morning all,
I'm having some issues with floating point math, and have gotten totally lost in ".to_f"'s, "*100"'s and ".0"'s!
I was hoping someone could help me with my specific problem, and also explain exactly why their solution works so that I understand this for next time.
My program needs to do two things:
Sum a list of decimals, determine if they sum to exactly 1.0
Determine a difference between 1.0 and a sum of numbers - set the value of a variable to the exact difference to make the sum equal 1.0.
For example:
[0.28, 0.55, 0.17] -> should sum to 1.0; however, I keep getting 1.xxxxxx. I am implementing the sum in the following fashion:
sum = array.inject(0.0) { |sum, x| sum + (x * 100) } / 100
The reason I need this functionality is that I'm reading in a set of decimals that come from Excel. They are not 100% precise (some decimal places have been truncated), so the sum usually comes out to 0.999999xxxxx or 1.000xxxxx. For example, I will get values like the following:
0.568887955,0.070564759,0.360547286
To fix this, I am OK taking the sum of the first n-1 numbers and then changing the final number slightly so that all of the numbers together sum to 1.0 (it must pass validation using the equation above, or whatever I end up with). I'm currently implementing this as follows:
sum = 0.0
array[0...-1].each do |item|  # sum the first n-1 values, scaled by 100
  sum += item * 100.0
end
array[-1] = (100 - sum.round) / 100.0  # set the last value so the total is exactly 1.0
I know I could do this with inject, but I was playing with it to see what works. I think this is generally working (from inspecting the output), but it doesn't always pass the validation sum above, so if need be I can adjust this one as well. Note that I only need two decimal places of precision in these numbers, i.e. 0.56, not 0.5623225. I can round them down either at presentation time or during this calculation; it doesn't matter to me.
Thank you VERY MUCH for your help!
If accuracy is important to you, you should not be using binary floating point values, which by design cannot represent most decimal fractions exactly. Ruby has exact numeric types for arithmetic where accuracy matters. They are, off the top of my head, BigDecimal, Rational and Complex, depending on what you actually need to calculate.
It seems that in your case, what you're looking for is BigDecimal, which is an arbitrary-precision decimal type: it stores exactly the decimal digits you give it, in contrast to a binary floating point, which can only approximate most decimal fractions.
When you read from Excel and deliberately cast those strings like "0.9987" to floating points, you're immediately losing the accurate value that is contained in the string.
require "bigdecimal"
BigDecimal("0.9987")
That value is precise. It is 0.9987. Not 0.998732109, or anything close to it, but 0.9987. You may use all the usual arithmetic operations on it. Provided you don't mix floating points into the arithmetic operations, the return values will remain precise.
If your array contains the raw strings you got from Excel (i.e. you haven't #to_f'd them), then this will give you a BigDecimal that is the difference between the sum of them and 1.
1 - array.map{|v| BigDecimal(v)}.reduce(:+)
Either:
continue using floats and round(2) your totals: 12.341.round(2) # => 12.34
use integers (i.e. cents instead of dollars; see the sketch after this list)
use BigDecimal and you won't need to round after summing them, as long as you start with BigDecimals that have only two decimals.
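Here is a minimal sketch of the integer option (in C rather than Ruby, purely to show the mechanics; the values are the ones from the question):

#include <stdio.h>

int main(void) {
    /* Work in integer cents so every sum is exact. */
    int cents[] = {28, 55, 17};   /* 0.28, 0.55, 0.17 */
    int total = 0;
    for (int i = 0; i < 3; i++)
        total += cents[i];
    printf("%d\n", total == 100); /* 1: the parts sum to exactly 1.00 */
    cents[2] += 100 - total;      /* nudge the last entry if they don't */
    return 0;
}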
I think that algorithms have a great deal more to do with accuracy and precision than a choice of IEEE floating point over another representation.
People used to do some fine calculations while still dealing with accuracy and precision issues. They'd do it by managing the algorithms they'd use and understanding how to represent functions more deeply. I think that you might be making a mistake by throwing aside that better understanding and assuming that another representation is the solution.
For example, no polynomial representation of a function will deal with an asymptote or singularity properly.
Don't discard floating point so quickly. It could be that being smarter about the way you use it will do just fine.
I am parsing some vertex information from an XML file, which reads as follows (partial extract):
21081.7 23447.6 2781.62 24207.4 18697.3 -2196.96
I save the string as an NSString and then convert to a float value (which I will later feed into OpenGL ES)
NSString * xPoint = [finishedParsingArray objectAtIndex:baseIndex];
NSLog(#"xPoiint is %#", xPoint);
float x = [xPoint floatValue];
The problem is that float x changes the values as follows :
21081.699219, 23447.599609, 2781.620117, 24207.400391, 18697.300781, -2196.959961
As you can see, it is changing the number of decimal places (I'm not sure how it is doing this; maybe there is hidden formatting in the XML file?).
My question is how can I store the float to match the original number in the NSString / XML file to the same number of decimal places ?
Thanks in advance !
Your issue seems to be that you don't understand how floats are stored in memory and don't know that floats aren't precise.
Exact values often can't be stored and so the system picks the closest number it can to represent it. If you look carefully, you can see that each of the outputted numbers is very close to your inputted values.
For better accuracy, try using double instead. Doubles encounter the same problem, but with better precision: floats carry about 6-7 significant decimal digits, doubles about 15-16.
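A small C sketch of the difference (strtof and strtod are the C equivalents of floatValue and doubleValue here; the printed digits assume IEEE 754):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *s = "21081.7";
    float  f = strtof(s, NULL);
    double d = strtod(s, NULL);
    printf("%f\n", f);  /* 21081.699219: float runs out of digits */
    printf("%f\n", d);  /* 21081.700000: double has digits to spare */
    printf("%g\n", f);  /* 21081.7: %g trims the noise when printing */
    return 0;
}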
Here are some other StackOverflow answers and external articles you should read:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Floating Points on Wikipedia
This answer on a similar question
All primitives which store floating point numbers have an accuracy issue. Most of the time it's so small it doesn't matter, but sometimes it's vital.
When it's important to keep the exact number, I would suggest using NSDecimalNumber.
I'm trying to implement decimal arithmetic in (La)TeX. I'm trying to use dimens to store the values. I want the arithmetic to be exact to some (fixed) number of decimal places. If I use 1pt as my base unit, then this fails, because \divide rounds down, so 1pt / 10 gives 0.09999pt. If I use something like 1000sp as my base unit, then I get working fixed point arithmetic with 3 decimal places, but I can't figure out an easy way to format the numbers. If I try to convert them to pt, so I can use TeX's display mechanism, I have the same problem with \divide.
How do I fix this problem, or work around it?
The fp package provides fixed point arithmetic for LaTeX. The LaTeX3 Project are currently implementing something similar as part of the expl3 bundle. The code is currently not on CTAN, but can be grabbed from the SVN (or will appear when the next update from the SVN to CTAN takes place).
I would represent all the values as integers and scale them appropriately. For example, when you need three decimal digits, 0.124 would be represented as 124. This is nice because addition and subtraction are trivial. When multiplying two numbers a and b, you have to divide the result by 1000 to get the proper representation. Division works the other way around: multiply the dividend by 1000 before the integer division, so the fractional digits survive.
You still have to get the rounding issues correct, but this isn't very difficult, at least as long as you stay away from the maximum representable integer (in TeX that is 2^31 - 1).
Here is some code:
\def\fixadd#1#2#3{% #1 := #2 + #3
  #1=#2\relax
  \advance #1 by #3\relax
}
\def\fixsub#1#2#3{% #1 := #2 - #3
  #1=#2\relax
  \advance #1 by -#3\relax
}
\def\fixmul#1#2#3{% #1 := #2 * #3, rescaled: the raw product carries the scale twice
  #1=#2\relax
  \multiply #1 by #3\relax
  \divide #1 by 1000\relax
}
\def\fixdiv#1#2#3{% #1 := #2 / #3, scaling the dividend first so the fraction survives
  #1=#2\relax
  \multiply #1 by 1000\relax
  \divide #1 by #3\relax
}
\newcount\numa
\newcount\numb
\newcount\numc
\numa=1414 % 1.414
\numb=2828 % 2.828
\fixmul\numc\numa\numb
\the\numc % prints 3998, i.e. 3.998
\bye
The operations are modeled after a three-register machine, where the first register is the destination and the other two are the operands. Rounding after the multiplication and division, including the corner cases for very large or very small numbers (the scaled intermediates above can overflow TeX's integer range), is left as an exercise.
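For the rounding part left as an exercise, here is a hedged C sketch of the same three-decimal scheme (the function names are illustrative; it assumes non-negative operands and ignores overflow):

#include <stdio.h>

/* Fixed point with 3 decimal digits: 1.414 is stored as 1414. */

long fixmul(long a, long b) {
    /* The raw product carries the scale twice; divide once, rounding to nearest. */
    return (a * b + 500) / 1000;
}

long fixdiv(long a, long b) {
    /* Scale the dividend first, then round to nearest. */
    return (a * 1000 + b / 2) / b;
}

int main(void) {
    printf("%ld\n", fixmul(1414, 2828)); /* 3999, i.e. 3.999 (1.414 * 2.828 = 3.998792) */
    printf("%ld\n", fixdiv(1000, 3000)); /* 333,  i.e. 0.333 (1.000 / 3.000) */
    return 0;
}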