SPSS does not accept numbers between 0.1 and 0.999

I can't change the output: whenever a value falls between .1 and .999, SPSS automatically changes it to 0.000.
I don't know what is wrong with SPSS; I tried running a Pearson analysis, but the output is 0.000 only.

Not sure exactly what you're after, so please clarify your question; note that 0.0E0 = 0.
If you want the output displayed with decimals, select the area, right-click, and specify your decimals. Alternatively, change the width and decimals of VAR00001 and VAR00002 before you run Pearson's.
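In syntax form, that would be something like the following (assuming the default variable names VAR00001 and VAR00002, and three decimal places purely for illustration):

```
formats VAR00001 VAR00002 (f8.3).
```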

Related

How to stop SPSS from altering cell decimal values at import from .csv

I'm importing roughly 50,000 cases/rows into SPSS via a .csv file.
The data in question consists of 17 variables, some of which contain numbers.
They're basically decimals, but SPSS changes them when I import them.
The problem is that I can't set a particular variable to 3 decimals, because the actual value sometimes has 2 decimals (which is important to keep as-is) and at other times 3. If I set the whole variable to 3 decimals, values with only 2 decimals get a 0 appended at the end, which breaks everything for me.
Snippet from actual data:
I need 1.667 to stay as-is. Then I need 1.50 to stay as-is. Then 1.40, 1.364 and so on for everything.
What happens when I import is that 1.50 becomes 1.500, 1.40 becomes 1.400, and so on and so forth.
Any suggestions?
If the original data is 1.25, then the actual value stored is 1.25, which is equal to 1.250, and to 1.250000 for that matter. So this shouldn't screw up any calculations you are making; only the display is affected.
You are forced to decide whether to round to two decimal places ('1.25') or three ('1.250'). If this is indeed what's bothering you: to the best of my knowledge there is no way (unlike in Excel) to have a different number of decimals for different parts of one column, nor is there a way to remove trailing zeros.
This being said, here is a weird workaround: changing the number format to 'restricted numeric' should, in theory, make your data unacceptable (numbers in this format aren't supposed to have fractions), but it will display the data without trailing zeros (at least it does in version 23 on my machine).
You can change the format through syntax like this:
formats var1 to var7 (n8).

SPSS percentile issue

I am working with SPSS 18.
I am using FREQUENCIES to calculate the 95th percentile of a variable.
FREQUENCIES SdrelPromSldDeu_Acr_5_0
/FORMAT=NOTABLE
/PERCENTILES 1,5,95,99.
The result is given in a table
Statistics
SdrelPromSldDeu_Acr_5_0
N            Valid      8881
             Missing    0
Percentiles  1          -1,001060644014
             5          -1,000541440102
             95         6619,140632636228
             99         9223372,036854776000
But if I double-click the 9223372,036854776 to copy it, another number appears: 1.0757943411193715E7.
If I use MEANS to get the maximum value, the result is 2.4329524990388575E8, so the number that appears on the double-click seems possible.
I have seen 9223372,03 in other cases as well, as if it were some kind of upper limit SPSS is able to display.
Can anybody tell me if the 9223372,03 represents anything useful? Should I trust the bigger number?
Thanks!
It appears to be a bug in the display of SPSS.
The number you have shown is eerily similar to
9223372036854775807
which is the highest value possible if a variable is declared as a long integer.
see also:
https://en.wikipedia.org/wiki/9223372036854775807
Since your actual number is about 11 orders of magnitude smaller, it should not be anywhere near this limit. Hence the conclusion that it must be a bug in the display software.
Do not trust it.
(The underlying number may or may not be right, but the displayed 9223372,03 is surely wrong.)
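The resemblance can be checked directly: 2^63 − 1 with a decimal comma inserted after the seventh digit is exactly the value SPSS displayed. A quick sketch (in Ruby, though any language with 64-bit integers would do):

```ruby
# The largest value of a signed 64-bit (long) integer:
long_max = 2**63 - 1
puts long_max          # => 9223372036854775807

# Scaling by 10^-12 reproduces the value shown in the SPSS table:
puts long_max / 1e12   # => 9223372.036854776
```

That the displayed value matches this constant so exactly is what points to an overflow or formatting bug rather than a real percentile.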

What is the smallest positive floating point value?

How can I express a number in Objective-C that is "infinitely" close to zero but still larger? Essentially I want the smallest positive number.
I want to express the number, .0000000000000001 in a simpler form.
What's the smallest number I can get without it being zero?
Use scientific notation when dealing with really small or really large numbers:
double reallyTiny = 1.0e-16; // .0000000000000001
But the best way to start with the smallest number possible is to use:
double theTiniestPositive = DBL_MIN; // 2.2250738585072014e-308
Use the nextafter function, as found here. It is of the format nextafter(x, y) and returns the closest value to x in direction of y.
Try the value DBL_MIN or FLT_MIN. Note that the C standard only guarantees these are no greater than 1E-37; on IEEE 754 systems DBL_MIN is actually 2.2250738585072014e-308 and FLT_MIN is 1.17549435e-38.
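For what it's worth, these limits are properties of the IEEE 754 double format rather than of any one language, so they can be checked quickly from, say, Ruby (a sketch; the C/Objective-C expressions DBL_MIN and nextafter(0.0, 1.0) yield the same values):

```ruby
# Smallest positive *normalized* double (C's DBL_MIN):
puts Float::MIN        # => 2.2250738585072014e-308

# Smallest positive double of any kind: the first value above 0.0,
# a subnormal (C's nextafter(0.0, 1.0)):
puts 0.0.next_float    # => 5.0e-324
```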

How do I winsorize data in SPSS?

Does anyone know how to winsorize data in SPSS? I have outliers for some of my variables and want to winsorize them. Someone taught me how to use the Transform -> Compute Variable command, but I forgot what to do. I believe they told me to just compute the square root of the subject's measurement that I want to winsorize. Could someone please elucidate this process for me?
There is a script online that already does it, it appears. It could perhaps be simplified (the saving to separate files is totally unnecessary), but it should do the job. If you don't need a script and you know the values of the percentiles you need, it is as simple as this.
Get the estimates of the percentiles for variable X (here I get the 5th and 95th percentiles):
freq var X /format = notable /percentiles = 5 95.
Then let's say (just by looking at the output) the 5th percentile is equal to 100 and the 95th percentile is equal to 250. Now let's make a new variable named winsor_X, replacing all values below the 5th percentile and above the 95th percentile with the associated percentile.
compute winsor_X = X.
if X <= 100 winsor_X = 100.
if X >= 250 winsor_X = 250.
You could do the last part a dozen different ways, but hopefully that is clear enough to realize what is going on when you winsorize a variable.
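As one example of those alternative ways (an assumption on my part, using SPSS's built-in MIN and MAX functions with the same hypothetical cutoffs of 100 and 250), the two IF lines can be collapsed into a single COMPUTE:

```
compute winsor_X = min(max(X, 100), 250).
execute.
```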

Ruby Floating Point Math - Issue with Precision in Sum Calc

Good morning all,
I'm having some issues with floating point math, and have gotten totally lost in ".to_f"'s, "*100"'s and ".0"'s!
I was hoping someone could help me with my specific problem, and also explain exactly why their solution works so that I understand this for next time.
My program needs to do two things:
Sum a list of decimals, determine if they sum to exactly 1.0
Determine a difference between 1.0 and a sum of numbers - set the value of a variable to the exact difference to make the sum equal 1.0.
For example:
[0.28, 0.55, 0.17] -> should sum to 1.0, however I keep getting 1.xxxxxx. I am implementing the sum in the following fashion:
sum = array.inject(0.0){|sum,x| sum+ (x*100)} / 100
The reason I need this functionality is that I'm reading in a set of decimals that come from Excel. They are not 100% precise (some decimal places have been truncated), so the sum usually comes out as 0.999999xxxxx or 1.000xxxxx. For example, I will get values like the following:
0.568887955,0.070564759,0.360547286
To fix this, I am ok taking the sum of the first n-1 numbers, and then changing the final number slightly so that all of the numbers together sum to 1.0 (must meet validation using the equation above, or whatever I end up with). I'm currently implementing this as follows:
sum = 0.0
array.each do |item|
sum += item * 100.0
end
array[i] = (100 - sum.round)/100.0
I know I could do this with inject, but was trying to play with it to see what works. I think this is generally working (from inspecting the output), but it doesn't always meet the validation sum above. So if need be I can adjust this one as well. Note that I only need two decimal precision in these numbers - i.e. 0.56 not 0.5623225. I can either round them down at time of presentation, or during this calculation... It doesn't matter to me.
Thank you VERY MUCH for your help!
If accuracy is important to you, you should not be using floating point values, which by design cannot represent most decimal fractions exactly. Ruby has some precise data types for doing arithmetic where accuracy is important. They are, off the top of my head, BigDecimal, Rational and Complex, depending on what you actually need to calculate.
It seems that in your case what you're looking for is BigDecimal, which is basically an arbitrary-precision decimal number: the digits you enter after the decimal point are stored exactly (in contrast to a binary floating point value, which can only approximate most decimal fractions).
When you read from Excel and deliberately cast those strings like "0.9987" to floating points, you're immediately losing the accurate value that is contained in the string.
require "bigdecimal"
BigDecimal("0.9987")
That value is precise. It is 0.9987. Not 0.998732109, or anything close to it, but 0.9987. You may use all the usual arithmetic operations on it. Provided you don't mix floating points into the arithmetic operations, the return values will remain precise.
If your array contains the raw strings you got from Excel (i.e. you haven't #to_f'd them), then this will give you a BigDecimal that is the difference between the sum of them and 1.
1 - array.map{|v| BigDecimal(v)}.reduce(:+)
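To make the suggestion concrete, here is a small end-to-end sketch (mine, not from the original answer) using the sample values from the question:

```ruby
require "bigdecimal"

strings = %w[0.568887955 0.070564759 0.360547286]  # raw strings, not #to_f'd

# Exact decimal arithmetic: these three values really do sum to 1.
sum = strings.map { |s| BigDecimal(s) }.reduce(:+)
sum == 1                                            # => true

# Round to the two decimals you need, then absorb the rounding error
# into the last entry so the total is exactly 1 again:
rounded = strings.map { |s| BigDecimal(s).round(2) }
rounded[-1] = 1 - rounded[0..-2].reduce(:+)
rounded.reduce(:+) == 1                             # => true
```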
Either:
continue using floats and round(2) your totals: 12.341.round(2) # => 12.34
use integers (i.e. cents instead of dollars)
use BigDecimal and you won't need to round after summing them, as long as you start with BigDecimal with only two decimals.
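The integer route from the second bullet can be sketched like this (a scale of 100 is assumed from the two-decimal requirement; the values are illustrative):

```ruby
values = [0.57, 0.07, 0.36]

# Work in integer hundredths so every subsequent sum is exact:
cents = values.map { |v| (v * 100).round }  # => [57, 7, 36]
puts cents.sum == 100                       # => true
```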
I think that algorithms have a great deal more to do with accuracy and precision than a choice of IEEE floating point over another representation.
People used to do some fine calculations while still dealing with accuracy and precision issues. They'd do it by managing the algorithms they'd use and understanding how to represent functions more deeply. I think that you might be making a mistake by throwing aside that better understanding and assuming that another representation is the solution.
For example, no polynomial representation of a function will deal with an asymptote or singularity properly.
Don't discard floating point so quickly. It could be that being smarter about the way you use it will do just fine.
