Can I search through and compare commonly named variables in SPSS?

I have a list of about 30 variables, all named something like test_1, test_2, test_3, etc. I need to check whether the values are all the same. I typically do so by exporting to Excel and using an IF statement that compares the minimum value to the maximum (i.e. if min = max, then all the values are the same).
Is there a way I can do this right in SPSS without having to export? It seems inefficient to compare test_1=test_2, test_2=test_3, and so on.

This is sort of a hack, but it gets the job done: you can calculate the standard deviation across all your variables:
compute sd_test=SD(test_1, test_2, ..., test_n).
EXECUTE.
sd_test will be 0 for records where all the test_i variables are equal.
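If you would rather mirror the min = max check you already use in Excel, SPSS's MIN and MAX functions can do the same thing in one COMPUTE. This is a minimal sketch assuming the variables really run test_1 through test_30 and sit next to each other in the file (adjust the names to your data):
compute all_same=(min(test_1 to test_30) = max(test_1 to test_30)).
EXECUTE.
all_same will be 1 for records where every test_i has the same value and 0 otherwise.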

Related

Optimized machine learning technique

Question: I'm looking for a technique that I can use to reduce the number of iterations my application has to perform to find the optimal variable combination, without testing every possible combination.
Current situation: I have a list of variables and each variable has a list of valid values. At the moment I create the cartesian product of the lists of valid values and run logic across each possible variable combination. This means I would need to run 2 000 000 different iterations, and that takes a lot of time. I'm not interested in how to run the 2 000 000 combinations more efficiently; instead I'm after a technique I could use to home in on an optimal variable combination without running through all the combinations.
Example: let's say I've got 3 variables named "one", "two" & "three". Each variable can be either 1 or 2. This means I have 2 to the power of 3, or 8, different variable combinations. My list of possible variable combinations would look something like:
[
[one:1,two:1,three:1],
[one:1,two:1,three:2],
[one:1,two:2,three:1],
[one:1,two:2,three:2],
[one:2,two:1,three:1],
[one:2,two:1,three:2],
[one:2,two:2,three:1],
[one:2,two:2,three:2]
]
I would then run logic against each possible variable combination, which gives me the result of that combination. The end result is that I know which variable combination gives me the best result. This works great across smaller variable sets but takes days across larger sets.

How to merge zero values (vector(0)) with metric values in PromQL

I'm using flexlm_exporter to export my license usage to Prometheus, and from Prometheus to a custom service (not Grafana).
As you know, Prometheus hides missing values.
However, I need those missing values in my metric values, so I added or vector(0) to my Prometheus query.
For example:
flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"} or vector(0)
This query adds an empty metric with zero values.
My question is whether there's a way to merge the zero vector with each metric's values.
Edit:
I need grouping, at least for the user and name labels, so vector(0) is probably not the best option here?
I tried multiple solutions from different Stack Overflow threads, but nothing works.
Please assist.
It would help to use absent() with labels, and clamp_max() to convert its value from 1 to zero:
( Metrics{label="a"} OR clamp_max(absent(notExists{label="a"}),0))
+
( Metrics2{label="a"} OR clamp_max(absent(notExists{label="a"}),0))
vector(0) has no labels.
clamp_max(absent(notExists{label="a"}),0) is 0 with the label attached.
If you do sum(flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"} or vector(0)) you should get what you're looking for, but you'll lose the possibility to group by, since vector(0) doesn't have any labels.
I needed a similar thing, and ended up flattening the options. What worked for me was something like:
(sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp1"}) + sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp2"})) or
sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp1"}) or
sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp2"})
There is no easy generic way to fill gaps in the returned time series with zeroes in Prometheus. But this can easily be done via the default operator in VictoriaMetrics:
flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"} default 0
The expression q default N fills gaps in each time series returned from q with the given default value N. See the MetricsQL docs for more details.

SPSS Populate Scratch Variable with Standard Deviation

I am just learning SPSS; I have a background in PL/SQL and T-SQL.
I have a dataset and need to make three groups based on deviation from the mean for a specific variable:
Greater than 1 Standard Deviation Above the Mean
Greater than 1 Standard Deviation Below the Mean
All Others
I wanted to use a scratch variable but have no idea how to find the standard deviation of an existing variable and populate it into a scratch variable to use for my grouping conditions.
Any help appreciated
The aggregate command can calculate the SD of a variable and add it to the dataset, like this:
aggregate /outfile=* mode=addvariables /break=
  /SDyourvar=sd(yourvar) /MEANyourvar=mean(yourvar).
Now you can use the variables to create groups like this for example:
do if yourvar < (MEANyourvar - SDyourvar).
compute group=-1.
else if yourvar > (MEANyourvar + SDyourvar).
compute group=1.
else.
compute group=0.
end if.
Or for a shorter version:
compute group=(yourvar > (MEANyourvar+SDyourvar)) - (yourvar < (MEANyourvar-SDyourvar)).
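As an optional extra, you could attach value labels so the three groups are self-explanatory in output. This uses the same group variable and codes as above; the label wording is just a suggestion:
value labels group
  -1 'More than 1 SD below the mean'
   0 'Within 1 SD of the mean'
   1 'More than 1 SD above the mean'.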

How do you include categories with 0 responses in SPSS frequency output?

Is there a way to display response options that have 0 responses in SPSS frequency output? The default is for SPSS to omit in the frequency table output any response option that is not selected by at least a single respondent. I looked for a syntax-driven option to no avail. Thank you in advance for any assistance!
It doesn't show because there is not a single case in the data with that attribute. So by forcing a row of zeros, realize that we're asking SPSS to do something it considers incorrect.
Having said that, you can introduce a fake case with the missing category. E.g. if you have Orange, Apple, and Pear, but no one answered that they like Pear, then add one fake case that says Pear.
Now make a new weight variable that is 1 for every case, but for the fake Pear case make it very, very small, like 0.00001. Then go to Data > Weight Cases > Weight cases by and move that new weight variable over. Click OK to apply. Now SPSS will treat the real cases with a weight of 1 and the fake case with a weight that is a tiny fraction of a normal case. If you rerun the frequencies, you should see that the category with a zero count shows up.
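If you prefer syntax to the menus, a minimal sketch of the same trick could look like the following. The names fruit and caseweight and the code 3 for Pear are made up for this example; it also assumes the fake Pear case has already been added to the data:
* Real cases get weight 1; only the fake Pear case gets a tiny weight.
compute caseweight=1.
if (fruit=3) caseweight=0.00001.
execute.
weight by caseweight.
frequencies variables=fruit.
Afterwards, weight off. returns you to unweighted analyses.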
If you have purchased the Custom Tables module you can also do that directly, as far as I can tell from its technical documentation. That module costs 637 to 3,630 depending on the license type, so it's probably only worth a try if your institution already has it.
So, I'm a noob with SPSS. I (shame on me) have a cracked version of SPSS 22, and if I understood your question correctly, this is my solution:
double click the Frequency table in Output
right click table, select Table Properties
go to General and then uncheck the Hide empty rows and columns option
Hope this helps someone!
If your SPSS version does not have Custom Tables installed and you haven't collected the money for that module yet, then run the following syntax:
*Note: please use variable names up to 8 characters long.
set mxloops 1000. /*in case your list of values is longer than 40
matrix.
get vars /vari= V1 V2 /names= names /miss= omit. /*V1 V2 here is your categorical variable(s)
comp vals= {1,2,3,4,5,99}. /*let this be the list of possible values shared by the variables
comp freq= make(ncol(vals),ncol(vars),0).
loop i= 1 to ncol(vals).
comp freq(i,:)= csum(vars=vals(i)).
end loop.
comp names= {'vals',names}.
print {t(vals),freq} /cnames= names /title 'Frequency'. /*here you are - the frequencies
print {t(vals),freq/nrow(vars)*100} /cnames= names /format f8.2 /title 'Percent'. /*and percents
end matrix.
*If variables have missing values, they are deleted listwise. To include missings, use
get vars /vari= V1 V2 /names= names /miss= -999. /*or other value
*To exclude missings individually from each variable, analyze by separate variables.

Please help with using SPSS to add up Likert-type scales

Since the last post was closed due to unclear expression, here is an edited one.
There are 20 items in total, from 5 Likert-type scale questions in a questionnaire. I need to add up the 20 items from the 5 separate questions to create a total scale. I already have the data.
The question is just like the picture above. How can I run a command to add up the 20 items from the 5 separate questions? What is the command?
Is it something like Transform > Compute Variable: enter a variable name, specify which items to add up, and hey presto (e.g. "V1+V2+V3" etc.)?
You can do exactly as you suggested, using the Transform -> Compute Variable... function. Simply type the name of your new scale into the Target Variable box and the sum you want into the Numeric Expression box.
You will see that the following SPSS syntax command is run:
COMPUTE total=v1 + v2 + v3 + v4.
EXECUTE.
If any of the variables has a missing value, then simply adding them will result in a missing value as well. If you don't want to impute missing values, using the MEAN function in syntax works well. Also, if the variables are contiguous in the data file, you can make the syntax much more readable by using the TO modifier.
COMPUTE myscore=MEAN(variable1 TO variable5)*5.
The resulting value estimates what the total score would have been if none of the items were missing.
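If the 20 variables really are 20 separate Likert items (rather than the dummy-coded situation described next), the straight sum and the missing-tolerant version for all 20 could be written like this, assuming they are contiguous in the file and named something like item1 to item20 (hypothetical names; substitute your own):
COMPUTE total=SUM(item1 TO item20).
COMPUTE total_adj=MEAN(item1 TO item20)*20.
EXECUTE.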
However, it seems that the problem in this case is that the data entry process has dummy-coded all of the items, producing 20 separate variables instead of 5, where each block of 4 variables has values of 0 or 1 but represents the values 1 to 4. In this case, you can use the following syntax:
COMPUTE mycounter=1.
COMPUTE myscore=0.
EXECUTE.
DO REPEAT a=variable1 TO variable20.
COMPUTE myscore=myscore+mycounter*a.
COMPUTE mycounter=mycounter+1.
IF (mycounter=5) mycounter=1.
END REPEAT.
EXECUTE.
Note that the variables from variable1 to variable20 must have each set of dummy codes from the original items clustered together in ascending order.
