How can I mark all negative values of all variables as missing in SPSS? I have a dataset of more than 300 variables, but none of them have missing values defined, and for all of them -1 and -2 should be treated as missing values. Is there a better way than doing it by hand for every variable?
If you really mean all values <= -1 and all variables, you can use a missing range, like this:
missing values all (lo thru -1).
The simplest way is
mis val v1 to v399 (-2, -1).
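Either way, a quick sanity check afterwards (a hedged example; v1 is just a placeholder variable name) is to display the dictionary entry and the frequencies for one variable and confirm that -1 and -2 now show up as missing:
display dictionary /variables = v1.
fre v1.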
I calculated a velocity vector and its magnitude from the equations of motion of a point in wxMaxima:
x:3*sin(4*t);
y:2*cos(4*t);
r:[x,y];
v:diff(r,t,1);
v_mod:sqrt(v.v);
Now I would like to calculate the velocity for t=5. How can I do this? When I add (t) and := everywhere, like this:
x(t):=3*sin(4*t);
y(t):=2*cos(4*t);
r(t):=[x(t),y(t)];
v(t):=diff(r(t),t,1);
v_mod(t):=sqrt(v(t).v(t));
and then add this line at the end:
v_mod(5);
I get the following error:
diff: second argument must be a variable; found 5
What am I doing wrong here?
The problem is that when you say v(5), you're getting diff(<something>, 5) and Maxima is complaining about that.
Try v(t) := at(diff(r(u), u), u = t) -- i.e., differentiate wrt a dummy variable u, and then evaluate that derivative at u equal to the argument t.
There are other ways to go about it. If at doesn't work for you, we can try something else.
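For concreteness, here is a minimal sketch of that approach using the question's definitions (only the dummy variable u is new; if at balks at the list, subst(u = t, ...) is a drop-in alternative):
x(u) := 3*sin(4*u);
y(u) := 2*cos(4*u);
r(u) := [x(u), y(u)];
v(t) := at(diff(r(u), u, 1), u = t);  /* differentiate wrt dummy u, then set u = t */
v_mod(t) := sqrt(v(t) . v(t));
v_mod(5);        /* exact expression in sin(20) and cos(20) */
float(v_mod(5)); /* numeric value */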
In a loop like this:
cur := -999999; // intended to represent the smallest possible value a Single can hold
while ... do
begin
if some_value > cur then
cur := some_value;
end;
There are MaxSingle and NegInfinity defined in System.Math:
MaxSingle = 340282346638528859811704183484516925440.0;
NegInfinity = -1.0 / 0.0;
So should I use -MaxSingle or NegInfinity in this case?
I assume you are trying to find the largest value in a list.
If your values are in an array, just use the library function MaxValue(). (If you look at the implementation of MaxValue, you'll see that it takes the first value in the array as the starting point.)
If you must implement it yourself, use -MaxSingle as the starting value, which is approximately -3.40e38. This is the most negative value that can be represented in a Single.
Special values like Infinity and NaN have special rules in comparisons, so I would avoid these unless you are sure about what those rules are. (See also How do arbitrary floating point values compare to infinity?. In fact, it seems NegInfinity would work OK.)
It might help to understand the range of values that can be represented by a Single. In order, most negative to most positive, they are:
NegInfinity
-MaxSingle .. -MinSingle
0
MinSingle .. MaxSingle
Infinity
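To tie this together, here is a minimal Delphi-style console sketch (the program name and sample data are made up for illustration, and a System.Math with a Single overload of MaxValue is assumed):
program FindLargest;
{$APPTYPE CONSOLE}
uses System.Math;
const
  Data: array[0..3] of Single = (3.5, -1.2, 7.8, 0.0); // sample values
var
  cur: Single;
  i: Integer;
begin
  // Manual loop, seeded with the most negative finite Single.
  // NegInfinity would also work, since any finite Single compares greater.
  cur := -MaxSingle;
  for i := Low(Data) to High(Data) do
    if Data[i] > cur then
      cur := Data[i];
  Writeln('Largest (loop):     ', cur:0:2);
  // Or simply use the library routine.
  Writeln('Largest (MaxValue): ', MaxValue(Data):0:2);
end.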
I'm working on a project that uses IBM SPSS, and I'm having trouble creating a dummy (binary) variable. The process to get the variable is as follows: take any variable (width, for example); to get the dummy variable, we need
to sort this variable in decreasing order. The next step is to take a cumulative sum of the cases up to a limit; the cases before the limit receive the value 1 in the dummy variable and the other cases receive 0.
Your explanation is rather vague. And shouldn't the critical value you give in the screenshot be 2.009 instead of 20.09?
But I think you mean the following.
When using syntax, use:
compute newdummyvariable = (ABr gt 2.009477106).
To check if it's okay:
fre newdummyvariable.
UPDATE:
In order to compute a dummy based on the cumulative sum, the answer is as follows:
If your critical value is predetermined, the fastest way is to sort in descending order and to use the command create with csum() to compute an extra variable, which I called ABr_cumul. You then use this one to compute newdummyvariable, as follows:
sort cases by ABr (d).
create ABr_cumul = csum(VAR00001).
compute newdummyvariable = (ABr_cumul le 20.094771061766488).
fre newdummyvariable.
The dummy comes from the sum of all cases: after the cases are ranked in decreasing order, those that together account for 50% of the variable's total receive 1 and the others 0...
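For completeness, a hedged sketch of that 50%-of-total variant (ABr is assumed to be the variable of interest, and a reasonably recent SPSS version is assumed for aggregate with mode=addvariables):
sort cases by ABr (d).
create ABr_cumul = csum(ABr).
* Put the grand total of ABr on every case.
aggregate outfile=* mode=addvariables /break= /ABr_total = sum(ABr).
* Flag the top-ranked cases that together stay within half of that total.
compute newdummyvariable = (ABr_cumul le 0.5 * ABr_total).
fre newdummyvariable.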
I work with SPSS and have difficulty finding/generating a syntax for counting cases.
I have about 120 cases and five variables. I need to know the count/proportion of cases where just one, more than one, or all of the variables have a value of 1 (they are dichotomous variables). Then I need to compute a new variable (also dichotomous) that shows which cases fall under the conditions above.
For example case number one: var1=1, var2=1, var3=1, var4=0, var5=0 --> newvariable=1.
Case number two: var1=0, var2=0, var3=0, var4=0, var5=0 --> newvariable=1.
And so on...
Can anybody help me with a syntax?
Help would much appreciated!
Here we can use the sum of the variables to evaluate your conditions. Using a scratch variable that holds the sum, we can check whether it equals 1, is greater than 1, or equals 5 (all of them) in your example.
compute #sum = SUM(var1 to var5).
compute just_one = (#sum = 1).
compute more_one = (#sum > 1).
compute all_one = (#sum = 5).
Similarly, all_one could be computed with the ANY function by testing whether any zeroes exist and negating the result, i.e. compute all_one = 1 - ANY(0,var1 to var5).. These code snippets assume that var1 to var5 are contiguous in the dataset; if not, they just need to be replaced with var1,var2,var3,var4,var5 in all given instances.
You could read up on the logical function ANY in the Command Syntax Reference manual; if you negate a test for ANY with "0", that is effectively a test for all "1"s. Using the COUNT command would be another approach.
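To also get the counts and proportions the question asks for, a frequency table of the new flags shows both at once, and the COUNT route mentioned above would look like this (a hedged sketch reusing the same variable names):
fre just_one more_one all_one.
* Tally how many of the five variables equal 1, then derive the flags from that.
count n_ones = var1 to var5 (1).
compute just_one = (n_ones = 1).
compute more_one = (n_ones > 1).
compute all_one = (n_ones = 5).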
In Gforth, is there a way to add an integer value to a floating point value?
Something like 1 + 2.1? If I do 1 2.1e f+ I get an error which I'm guessing is because the values are not on the same stack. I know that I could just do 1.0e 2.1e f+, but that's not what I'm trying to figure out how to do.
Gforth has the s>f and d>f words that convert an integer (single-cell and double-cell, respectively) to a floating-point number; see the Gforth floating-point documentation for the full set of conversion words.
1 s>f 2.1e f+
should do the trick in this case.
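As a quick check at the Gforth prompt (f. prints the top of the floating-point stack):
1 s>f 2.1e f+ f.  \ prints 3.1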