Is there SPSS syntax that can help me suppress smaller values when displaying frequencies? I'm able to hide small values using crosstabs, but when I use the FREQUENCIES command I'm not able to suppress or hide them.
This is my current syntax. Is there a command I can add to suppress output for categories with counts less than 10? Thank you!
FREQUENCIES VARIABLES=alcoholany
 /ORDER=ANALYSIS.
As far as I know you can't get this through the FREQUENCIES command alone - you'll need to identify those cases first and filter them out before running it:
* Attach the per-category count to every case, then keep only categories with 10+ cases.
AGGREGATE /OUTFILE=* MODE=ADDVARIABLES /BREAK=alcoholany /Ncat=N.
COMPUTE filt = (Ncat >= 10).
FILTER BY filt.
FREQUENCIES VARIABLES=alcoholany /ORDER=ANALYSIS.
FILTER OFF.
I'm using flexlm_exporter to export my license usage to Prometheus, and from Prometheus to a custom service (not Grafana).
As you know, Prometheus simply omits missing values.
However, I need those missing values in my metric values, so I appended or vector(0) to my PromQL query.
For example:
flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"} or vector(0)
This query adds an empty metric with zero values.
My question is whether there's a way to merge the zero vector with each metric's values.
Edit:
I need grouping, at least by the user and name labels, so vector(0) is probably not the best option here?
I tried multiple solutions from different StackOverflow threads, but nothing worked.
Please assist.
You can use absent() here: it carries the labels you give it, and since it returns 1, wrapping it in clamp_max converts that 1 to 0:
( Metrics{label="a"} OR clamp_max(absent(notExists{label="a"}), 0) )
+
( Metrics2{label="a"} OR clamp_max(absent(notExists{label="a"}), 0) )
vector(0) has no labels.
clamp_max(absent(notExists{label="a"}), 0) is 0 with the label.
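Applied to the metric from the question, the same pattern would look something like this (a sketch; note absent() can only derive labels from equality matchers in the selector):
flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"}
  or clamp_max(absent(flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"}), 0)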
If you do sum(flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"} or vector(0)) you should get what you're looking for, but you'll lose the ability to group by, since vector(0) doesn't have any labels.
I needed a similar thing and ended up flattening the options. What worked for me was something like:
(
  sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp1"})
  + sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp2"})
)
or sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp1"})
or sum by (xyz) (flexlm_feature_used_users{app="vendor_lic-server01",name="Temp2"})
There is no easy, generic way to fill gaps in returned time series with zeroes in Prometheus. But this can be done easily via the default operator in VictoriaMetrics:
flexlm_feature_used_users{app="vendor_lic-server01",name="Temp"} default 0
The expression q default N fills gaps in each time series returned from q with the given default value N. See the MetricsQL docs for more details.
In my AnyLogic model there is a source where the parameter agent.type is one of the options from an OptionList called Types.
I want to create a histogram that shows how many agents there are with each of the different possible Types.
I can do it by setting up a variable for each Type that increments a count via a long-winded function, but I would prefer to use a dataset or histogram_data object (optionsHistogram) with the OptionList values as the horizontal axis and the count of agents with each type as the vertical axis.
Is this possible, and what would you recommend as the best way to achieve this?
Thanks
A histogram is used to plot the spread of one type of data.
If you want to plot the number of agents by type (as defined by an OptionList), use a simple bar chart instead, fed by a statistic on your agent population, as sketched below.
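A minimal sketch of that setup (all names here are assumptions, since the original screenshots are gone): assuming a population people and an OptionList value Types.TYPE_A, add a statistic element to the population with
name: numTypeA
type: Count
condition: item.type == Types.TYPE_A
AnyLogic then generates a function on the population, so a bar chart data item can use people.numTypeA() as its value expression - one such statistic and data item per option.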
PS: There is a lot of info in the help about how these agent-population statistics work; worth a read.
I'm using Cronbach's alpha to analyze data in order to build/refine a scale. This is a tedious process in SPSS, since it doesn't automatically optimize the scale, so I'm hoping there is a way to use syntax to speed it up.
So I start with a set of items, set up the OMS control panel to capture the item-total statistics table, and then run the alpha analysis. This pushes the item-total stats into a new dataset. Then I check the alpha value and use it in syntax to screen out items whose alpha-if-deleted value is greater.
Then I re-run the analysis with only the items that passed the screening. And I repeat, until all the items pass the screening. Here is the syntax:
* First syntax sets up OMS, and then runs the alpha analysis.
* In the reliability syntax, I have to manually add the variables and the Scale name.
* OMS.
DATASET DECLARE alpha_worksheet.
OMS
/SELECT TABLES
/IF COMMANDS=['Reliability'] SUBTYPES=['Item Total Statistics']
/DESTINATION FORMAT=SAV NUMBERED=TableNumber_
OUTFILE='alpha_worksheet' VIEWER=YES.
RELIABILITY
/VARIABLES=
points_18618
points_3286
points_3290
points_3583
points_4018
points_7775
points_7789
points_7792
points_18631
points_18652
/SCALE('2017 Fall CRN 4157 Exam 01 v. 1.0') ALL
/MODEL=ALPHA
/SUMMARY=TOTAL.
* Second syntax flags any items in the OMS dataset whose alpha-if-deleted is <= the overall alpha (those are kept).
* I have to manually enter the alpha value...
DATASET ACTIVATE alpha_worksheet.
IF (CronbachsAlphaifItemDeleted <= .694) Keep =1.
EXECUTE.
SORT CASES BY Keep(D).
Ideally, instead of having to repeat this process over and over, I'd like syntax that would automate it.
Hope that makes sense, and if you have a solution, thanks in advance (this has been bugging me for years!). Cheers
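For what it's worth, this kind of loop is usually automated with SPSS's Python programmability rather than plain syntax: wrap the RELIABILITY run in a loop and drop failing items until the item list stabilizes. A bare-bones sketch, assuming a recent SPSS version (BEGIN PROGRAM PYTHON3); the starting item list is illustrative, and read_item_stats() is a hypothetical helper that would pull the overall alpha and the per-item alpha-if-deleted values out of the OMS capture shown above:
BEGIN PROGRAM PYTHON3.
import spss

items = ["points_3286", "points_3290", "points_3583", "points_4018"]  # illustrative pool

while True:
    spss.Submit("RELIABILITY /VARIABLES=" + " ".join(items) +
                " /MODEL=ALPHA /SUMMARY=TOTAL.")
    # read_item_stats() is hypothetical: it would read the overall alpha and
    # the alpha-if-deleted column from the OMS dataset (e.g. via spssdata).
    alpha, alpha_if_deleted = read_item_stats()
    keep = [v for v in items if alpha_if_deleted[v] <= alpha]
    if len(keep) == len(items):
        break  # no item raises alpha by being dropped; the scale is stable
    items = keep
END PROGRAM.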
I am running a huge syntax, with lots of CTABLES and FREQUENCIES commands. Some of them have a filter:
TEMPORARY.
SELECT IF [condition].
FREQUENCIES VAR1.
In some cases, this results in no cases being selected, so the output is just a warning text. Is it possible to still get a table with 0 counts...?
If all cases are screened out, a procedure never gets a chance to run. However, suppose you create one case with everything missing but a filter value of 1. Then use CTABLES instead of FREQUENCIES and specify that empty categories should be shown (on the Categories subdialog if using the GUI).
If you want to make this perfectly accurate, create a weight variable where case 1 is weighted by a very small value (1e-8, say) and all the other cases have a weight of 1.
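A minimal syntax sketch of that idea (the variable names filt and wt are assumptions): the dummy case has every analysis variable missing but filt = 1, so it survives the selection and keeps the procedure alive.
WEIGHT BY wt.
TEMPORARY.
SELECT IF filt = 1.
CTABLES
  /TABLE VAR1 [COUNT]
  /CATEGORIES VARIABLES=VAR1 EMPTY=INCLUDE.
WEIGHT OFF.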
Since the last post was closed due to unclear expression, here is an edited version.
There are 20 items in total, from 5 Likert-type scale questions on a questionnaire. I need to add up the 20 items from the 5 separate questions to create a total scale score. I already have the data.
How can I run the command to add up the 20 items from the 5 separate questions? What is the command?
Is it something like Transform > Compute Variable: enter a variable name, specify which items to add up, and hey presto (e.g. "V1+V2+V3" etc.)?
You can do exactly as you suggested, using the Transform -> Compute Variable... function. Simply type the name of your new scale in the Target Variable box and the sum you want in the Numeric Expression box.
You will see that the following SPSS syntax command is run:
COMPUTE total=v1 + v2 + v3 + v4.
EXECUTE.
If any of the variables has a missing value, then simply adding them will result in a missing value as well. If you don't want to impute missing values, the MEAN function in syntax works well. Also, if the variables are contiguous in the data file, you can make the syntax much more readable by using the TO keyword.
COMPUTE myscore=MEAN(variable1 TO variable5)*5.
Multiplying the mean of the non-missing items by 5 rescales it back to the total-score metric, so the result is a reasonable expected value for the total.
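If you only want that rescaled score when enough items were actually answered, the MEAN.n form (which requires a minimum number of valid values) can guard it; a sketch with the same hypothetical names:
* Require at least 4 of the 5 items to be valid; otherwise the result is system-missing.
COMPUTE myscore=MEAN.4(variable1 TO variable5)*5.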
However, it seems the real problem in this case is that the data entry process has dummy-coded all of the items, producing 20 separate variables instead of 5, where each block of 4 variables holds 0/1 flags but represents the values 1 to 4. In that case, you can use the following syntax:
* Walk the 20 dummy variables in blocks of 4: mycounter cycles through
* the represented values 1-4 and weights each 0/1 flag accordingly.
COMPUTE mycounter=1.
COMPUTE myscore=0.
EXECUTE.
DO REPEAT a=variable1 TO variable20.
COMPUTE myscore=myscore + mycounter*a.
COMPUTE mycounter=mycounter + 1.
IF (mycounter=5) mycounter=1.
END REPEAT.
EXECUTE.
Note that the variables from variable1 to variable20 must have each set of dummy codes from the original items clustered together in ascending order. For example, if a respondent chose option 3 on the first question, then the third variable in the first block of four is 1 and the loop adds 3*1 to the score.