Selecting a cut-off score in SPSS

I have 5 variables for one questionnaire about social support. I want to define the group with low vs. high support. According to the authors, low support is defined as a sum score <= 18 AND at least two items scoring <= 3.
It would be great to get a dummy variable which shows which people are low vs high in support.
How can I do this in the syntax?
Thanks ;)

Assuming your variables are named Var1, Var2, ..., Var5, and that they are consecutive in the dataset, this should work:
recode Var1 to Var5 (1 2 3=1)(4 thru hi=0) into L1 to L5.
compute LowSupport = sum(Var1 to Var5) <= 18 and sum(L1 to L5)>=2.
execute.
New variable LowSupport will have value 1 for rows that have the parameters you defined and 0 for other rows.
Note: If your variables are not consecutive you'll have to list all of them instead of using Var1 to Var5.
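If you want to double-check the resulting split, a short follow-up like this (the label texts are just suggestions) labels the dummy and tabulates it:
variable labels LowSupport 'Social support group'.
value labels LowSupport 0 'High support' 1 'Low support'.
frequencies variables = LowSupport.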

Related

Loop to select which variable to omit from analysis

I have datasets with a large number of variables and I need to run PCA over these datasets with one variable removed each time. Below are 20 variables from an example dataset. I would like to run PCA with one variable removed from each PCA solution. For example, the first PCA solution will include all variables excluding Var_1_GroupA, the second will include all variables excluding Var_2_GroupA, etc. I am familiar with using macros to write loops but am unsure how to complete this task using macros or code in Python.
Var_1_GroupA
Var_2_GroupA
Var_1_GroupB
Var_2_GroupB
Var_3_GroupB
Var_1_GroupC
Var_2_GroupC
Var_3_GroupC
Var_4_GroupC
Var_5_GroupC
Var_1_GroupD
Var_1_GroupE
new_Var_1_GroupA
new_Var_1_GroupB
new_Var_1_GroupC
new_Var_2_GroupC
Var_1_GroupF
Var_1_GroupG
Var_1_GroupH
Var_2_GroupH
In the example below I create 10 variables and then run a simple MEANS command repeatedly, excluding one of the variables each time. You can edit the code to match your variables and your analysis command.
data list list/var1 to var10 (10F1).
begin data
1 2 3 4 5 6 7 8 9 9
5 4 3 6 3 8 1 2 5 8
0 8 6 4 2 1 3 5 7 9
end data.
dataset name wrk.
define !loopit (!pos=!cmdend)
!do !a !in(!1)
means
!do !b !in(!1) !if (!b<>!a) !then !b !ifend !doend
.
!doend
!enddefine.
!loopit var1 var2 var3 var4 var5 var6 var7 var8 var9 var10 .
Note: you have to list the variable names individually in the macro call; you can't use var1 to var10.
If you run into trouble while adapting this to your exact needs, these are very helpful in debugging macros:
set mexpand=on.
set mprint=on.
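For the PCA case in the question, the same macro skeleton can wrap a FACTOR command instead of MEANS. This is an untested sketch (the macro name !loopit_pca, the PRINT keywords and the short variable list in the call are just one possible choice); as noted above, the variables still have to be listed individually:
define !loopit_pca (!pos=!cmdend)
!do !a !in(!1)
factor
 /variables !do !b !in(!1) !if (!b<>!a) !then !b !ifend !doend
 /extraction pc
 /print initial extraction.
!doend
!enddefine.
!loopit_pca Var_1_GroupA Var_2_GroupA Var_1_GroupB Var_2_GroupB Var_3_GroupB .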

How to compare all possible group combinations with EMMEANS in SPSS?

Suppose you have a 2x2 design and you're testing differences between those 4 groups using ANOVA in SPSS.
(The original question included a graph of the data for the four groups A, B, C and D; it is not reproduced here.)
After performing ANOVA, there are 6 possible pairwise comparisons between groups that we can perform. These are:
A - C
B - D
A - D
B - C
A - B
C - D
If I want to perform pairwise comparisons, I would usually use this script after the UNIANOVA command:
/EMMEANS=TABLES(Var1*Var2) COMPARE (Var1) ADJ(LSD)
/EMMEANS=TABLES(Var1*Var2) COMPARE (Var2) ADJ(LSD)
However, after running this script, the output only contains 4 of the 6 possible comparisons - there are two pairwise comparisons that are missing, and those are:
A - B
C - D
How can I calculate those comparisons?
EMMEANS in UNIANOVA does not provide all pairwise comparisons among the cells in an interaction like this. There are some other procedures, such as GENLIN, that do offer these, but use large-sample chi-square statistics rather than t or F statistics. In UNIANOVA, you can get these using the LMATRIX subcommand, or you can use some trickery with EMMEANS.
For the trickery with EMMEANS, create a single factor with four levels that indexes the 2x2 layout of cells, then handle the model as a one-way design. The main effect for that factor is the same as the overall 3-degree-of-freedom model for the 2x2 layout, and of course EMMEANS with COMPARE works fine on it.
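For example, a minimal sketch of that approach, assuming var1 and var2 are both coded 1/2 and that cell is a free variable name:
* combine the two factors into one four-level factor.
compute cell = 10*var1 + var2.
execute.
* one-way model: EMMEANS now gives all six pairwise comparisons.
unianova Y by cell
 /emmeans=tables(cell) compare(cell) adj(lsd).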
Without creating a new variable, you can use LMATRIX with:
/LMATRIX "(1,1) - (2,2)" var1 1 -1 var2 1 -1 var1*var2 1 0 0 -1
/LMATRIX "(1,2) - (2,1)" var1 1 -1 var1 -1 1 var1*var2 0 1 -1 0
The quoted pieces are labels, indicating the cells in the 2x2 design being compared.
Another trick you can use to make specifying the LMATRIX simpler, but without creating a new variable, is to specify the DESIGN with just the interaction term and suppress the intercept. That makes the parameter estimates just the four cell means:
UNIANOVA Y BY var1 var2
/INTERCEPT=EXCLUDE
/DESIGN var1*var2
/LMATRIX "(1,1) - (2,2)" var1*var2 1 0 0 -1
/LMATRIX "(1,2) - (2,1)" var1*var2 0 1 -1 0.
In this case the one effect shown in the ANOVA table is a 4 df effect testing all means against 0, so it's not of interest, but the comparisons you want are easily obtained. Note that this trick only works with procedures that don't reparameterize to full rank.

Possible to use less/greater than operators with IF ANY?

Is it possible to use the < and > operators with the ANY function? Something like this:
select if (any(>10,Q1) AND any(<2,Q2 to Q10))
You definitely need to create an auxiliary variable to do this.
Jignesh Sutar's solution works fine. However, there are often multiple ways in SPSS to accomplish a certain task.
Here is another solution where the COUNT command comes in handy.
It is important to note that the following solution assumes that the values of the variables are integers. If you have float values (1.5 for instance) you'll get a wrong result.
* count occurrences where Q2 to Q10 is less than 2.
COUNT #QLT2 = Q2 TO Q10 (LOWEST THRU 1).
* select if Q1>10 and
* there is at least one occurrence where Q2 to Q10 is less than 2.
SELECT IF (Q1>10 AND #QLT2>0).
There is also a variant of this solution that deals with float variables correctly, but I think it is less intuitive.
* count occurrences where Q2 to Q10 is 2 or higher.
COUNT #QGE2 = Q2 TO Q10 (2 THRU HIGHEST).
* select if Q1>10 and
* not all of (the 9 variables) Q2 to Q10 are two or higher.
SELECT IF (Q1>10 AND #QGE2<9).
Note: Variables beginning with # are temporary variables. They are not stored in the data set.
I don't think you can (would be nice if you could - you can do something similar in Excel with COUNTIF & SUMIF IIRC).
You'd have to construct a new variable which tests the multiple ANY less-than condition, as per the example below:
input program.
loop #j = 1 to 1000.
compute ID=#j.
vector Q(10).
loop #i = 1 to 10.
compute Q(#i) = trunc(rv.uniform(-20,20)).
end loop.
end case.
end loop.
end file.
end input program.
execute.
vector Q=Q2 to Q10.
* flag cases where at least one of Q2 to Q10 is less than 2.
compute #QLT2=0.
loop #i=1 to 9.
if (Q(#i)<2) #QLT2=1.
end loop.
select if (Q1>10 and #QLT2=1).
exe.

Generating means of a variable using dummy variables & foreach in Stata

My dataset includes TWO main variables X and Y.
Variable X represents distinct codes (e.g. 001X01, 001X02, etc) for multiple computer items with different brands.
Variable Y represents the tax charged for each code of variable X (e.g. 15 = 15% for 001X01) at a store.
I've created categories for these computer items using dummy variables (e.g. an HD dummy variable for hard drives, which takes a value of 1 when variable X represents a hard drive, etc.). I have a list of over 40 variables (two of them representing X and Y, and the rest are dummy variables for the different categories I've created for computer items).
I would like to display the averages of all these categories using a loop in Stata, but I'm not sure how to do this.
For example the code:
mean Y if HD == 1
Mean estimation                   Number of obs   =          5
--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
         Tax |        7.1   2.537716      1.154172   15.24583
gives me the mean Tax for the category representing Hard Drives. How can I use a loop in Stata to automatically display all the mean Taxes charged for each category? I would do it by hand without a problem, but I want to repeat this process for multiple years, so I would like to use a loop for each year in order to come up with this output.
My goal is to create a separate Excel file with each of the computer categories I've created (38 total) and the average tax for each category by year.
Why bother with the loop and creating the indicator variables? If I understand correctly, your initial dataset allows the use of a simple collapse:
clear all
set more off
input ///
code tax str10 categ
1 0.15 "hd"
2 0.25 "pend"
3 0.23 "mouse"
4 0.29 "pend"
5 0.16 "pend"
6 0.50 "hd"
7 0.54 "monitor"
8 0.22 "monitor"
9 0.21 "mouse"
10 0.76 "mouse"
end
list
collapse (mean) tax, by(categ)
list
To take this to Excel you can try export excel or putexcel.
Run help collapse and help export for details.
Edit
Because you insist, below is an example that gives the same result using loops.
I assume the same data input as before. Some testing using this example database
with expand 1000000 shows that speed is virtually the same. But almost surely,
you (including your future you) and your readers will prefer collapse.
It is much clearer, cleaner and more concise. It is even prettier.
levelsof categ, local(parts)
gen mtax = .
quietly {
foreach part of local parts {
summarize tax if categ == "`part'", meanonly
replace mtax = r(mean) if categ == "`part'"
}
}
bysort categ: keep if _n == 1
keep categ mtax
Stata has features that make it quite different from other languages. Once you
start getting a hold of it, you will find that many things done with loops elsewhere,
can be made loop-less in Stata. In many cases, the latter style will be preferred.
See the corresponding help files using help <command>, and if you are not familiar with saved results (e.g. r(mean)), type help return.
A supplement to Roberto's excellent answer: after collapse, you will need a loop to export the results to Excel, one file per category.
levelsof categ, local(levels)
foreach x of local levels {
export excel using "`x'.xlsx" if categ == "`x'", firstrow(variables) replace
}
I prefer to use numerical codes for variables such as your category variable. I then assign them value labels. Here's a version of Roberto's code which does this and which, for closer correspondence to your problem, adds a "year" variable:
input code tax categ year
1 0.15 1 1999
2 0.25 2 2000
3 0.23 3 2013
4 0.29 1 2010
5 0.16 2 2000
6 0.50 1 2011
7 0.54 4 2000
8 0.22 4 2003
9 0.21 3 2004
10 0.76 3 2005
end
#delim ;
label define catl
1 hd
2 pend
3 mouse
4 monitor
;
#delim cr
label values categ catl
collapse (mean) tax, by(categ year)
levelsof categ, local(levels)
foreach x of local levels {
export excel using "`: label (categ) `x''.xlsx" if categ == `x', firstrow(variables) replace
}
The #delim ; command makes it possible to list each code on a separate line. The "label" function in the export statement is an extended macro function that inserts a value label into the file name.

Syntax for counting cases

I work with SPSS and have difficulty finding/generating a syntax for counting cases.
I have about 120 cases and five variables. I need to know the count/proportion of cases where just one, more than one, or all of the variables have a value of 1 (they are dichotomous). Then I need to compute a new variable that flags the cases which meet the aforementioned conditions (also dichotomous).
For example case number one: var1=1, var2=1, var3=1, var4=0, var5=0 --> newvariable=1.
Case number two: var1=0, var2=0, var3=0, var4=0, var5=0 --> newvariable=1.
And so on...
Can anybody help me with a syntax?
Help would be much appreciated!
Here we can use the sum of the variables to determine your conditions. Using a scratch variable that holds the sum, we can check whether it is equal to 1, more than 1, or equal to 5 in your example.
compute #sum = SUM(var1 to var5).
compute just_one = (#sum = 1).
compute more_one = (#sum > 1).
compute all_one = (#sum = 5).
Similarly, all_one could be computed using the ANY function to check that no zeroes exist, i.e. compute all_one = ~ANY(0,var1 to var5). These code snippets assume that var1 to var5 are contiguous in the dataset; if not, they just need to be replaced with var1,var2,var3,var4,var5 in all given instances.
You could read up on the logical function ANY in the Command Syntax Reference manual; if you negate a test for ANY with 0, that is effectively a test for all 1s. Use of the COUNT command would be another approach.
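A minimal sketch of that COUNT-based alternative (again assuming var1 to var5 are contiguous), with a FREQUENCIES step added to get the proportions asked about:
count #ones = var1 to var5 (1).
compute just_one = (#ones = 1).
compute more_one = (#ones > 1).
compute all_one = (#ones = 5).
frequencies variables = just_one more_one all_one.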
