I am writing my master's thesis and am running a generalized linear mixed model in SPSS (version 28) on count data.
Research question: what effect does population mobility have on Covid-19 incidence at the federal-state level in Germany from February 2020 to November 2021?
To test the effect of population mobility (independent variable) on Covid-19 incidence (dependent variable), hierarchical models were used, with fixed factors:
mobility variables for 6 types of places (scale)
cumulative vaccination rate (second dose only) (scale)
season (summer as the reference category) (nominal)
and random effects:
first model with the days variable (time level) (scale)
second model with the federal-state variable (each state numbered from 1 to 16) (place level) (nominal)
third model with both days and federal states (time and place level).
First, I built an intercept-only model to check which type of regression is more suitable for the count data (Poisson or negative binomial) and to choose the better of two candidate variables as the offset. Based on AIC/BIC, negative binomial regression fit this data best.
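For readers outside SPSS, a minimal sketch of this comparison in Python with statsmodels, assuming a data frame df with a cases column and a hypothetical expected_cases offset candidate (these names are illustrative, not from the original analysis):

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Intercept-only models with a log offset, compared via AIC/BIC
offset = np.log(df["expected_cases"])  # hypothetical offset candidate

poisson_fit = smf.glm("cases ~ 1", data=df,
                      family=sm.families.Poisson(),
                      offset=offset).fit()
negbin_fit = smf.glm("cases ~ 1", data=df,
                     family=sm.families.NegativeBinomial(),
                     offset=offset).fit()

# The lower AIC/BIC indicates the better-fitting distribution
print("Poisson: AIC", poisson_fit.aic, "BIC", poisson_fit.bic)
print("NegBin:  AIC", negbin_fit.aic, "BIC", negbin_fit.bic)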
Secondly, I checked the collinearity between the original 6 mobility variables and excluded mobility variables that were highly correlated, based on VIF (only one variable was excluded).
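Again purely as an illustration outside SPSS, a VIF check could look like this, where mobility_cols is a hypothetical list of the 6 mobility column names:

import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(df[mobility_cols])  # add intercept column for a fair VIF
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, variance_inflation_factor(X.values, i))
# common rule of thumb: exclude variables with VIF above roughly 5-10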
Thirdly, I built 7 generalized linear models by gradually adding only the fixed effects (the 5 remaining mobility variables, the cumulative vaccination rate for dose 2, and season, with summer as the reference category) to the intercept-only model. From these 7 models, the one with the best model fit was selected as the final model.
Finally, I built generalized linear mixed models from this final model plus a classic random effect: first with the days variable only (random-intercept component for time; time level), then with the federal-state variable only (random-intercept component for place; place level), and finally with both of them together.
I am not sure whether I ran this last step (the generalized linear mixed models) correctly.
These are my steps:
Analyze -> Mixed Models -> Generalized Linear Mixed Models -> Fields and Effects:
1. Target -> case
Target distribution and relationship (link) with the linear model -> Custom:
Distribution -> negative binomial
Link function -> log
2. Fixed effects -> include intercept & 5 mobility variables & cumulative vaccination rate & season
3. Random effects -> no intercept & days variable (time level)
Random effect covariance type: variance components
4. Weight and offset -> use offset field -> log of expected cases adjusted by the wave variable
Build options (e.g., General and Estimation) remain unchanged (SPSS defaults)
Model options (e.g., Estimated Means) remain unchanged (SPSS defaults)
I followed the same steps for the other 2 models, except for the random effects:
3. Random effects -> no intercept & federal-state variable (place level)
3. Random effects -> no intercept & days variable & federal-state variable (time and place level)
Output:
1. The variance of the random effect of the days variable (time level) was very small, 5.565E-6, indicating only a marginal effect in the model. (Model 1)
2. The covariance of the random effect of the federal states was zero and the variance was 0.079 (place level). (Model 2)
3. The variance of the random effect of the days variable was very small, 4.126E-6; the covariance of the random effect of the federal states was zero and the variance was 0.060 (time and place level). (Model 3)
Can someone please check my steps, tell me which of the models from the last step is best for presenting the results, and also explain the last point of the output shown in the picture?
Thanks in advance to all of you.
This is my first data mining project. I am using SAS Enterprise Miner to train and test a classifier.
I have 3 files at my disposal:
Training file: 85 input variables and 1 target variable, with 5800+ observations
Prediction file: 85 input variables with 4000 observations
Verification file: 1 variable containing the correct predictions for the second file. Since this is an academic project, this file is there to tell us whether we are doing a good job or not.
My problem is that the dataset is unbalanced (95% 0s and 5% 1s for the target variable in the training file). So, naturally, I tried to resample the training data using the "sampling node" as described in the following link.
Here are the 2 approaches I used; they give slightly different results, but here is the general unsatisfactory result I am getting:
Without resampling: the model predicts fewer than ten solicited individuals (target variable = 1) out of 4000 observations.
With resampling: the model predicts about 1500 solicited individuals out of 4000 observations.
I am looking for 100 to 200 solicited individuals for the model to be considered acceptable.
Why do you think our predictions are so far off, and how can we remedy this situation?
Here is a screenshot of both models.
There are some techniques to deal with unbalanced data. One that I remember from many years ago is this approach:
say you have 100 solicited (minority) observations, which are 5% of all your observations
cluster the non-solicited (majority) class into 20 groups (each with 100 non-solicited observations) using a clustering algorithm like k-means, mean shift, DBSCAN, etc.
then, for each group of clustered majority observations, create a dataset together with all 100 solicited (minority) observations. This means you have 20 datasets, each balanced with 100 solicited and 100 non-solicited observations
train on each balanced dataset and create a model for each of them
at prediction time, run all 20 models and take a majority vote: for example, if 15 out of 20 models say it is solicited, it is solicited (see the sketch below)
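A minimal Python sketch of this idea with scikit-learn; the base classifier and the vote threshold here are arbitrary choices, not part of the original recipe:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# X_maj, X_min: feature matrices of the majority (non-solicited)
# and minority (solicited) observations
clusters = KMeans(n_clusters=20, random_state=0).fit_predict(X_maj)

models = []
for k in range(20):
    X_k = X_maj[clusters == k]              # one majority cluster
    X_bal = np.vstack([X_k, X_min])         # balanced training set
    y_bal = np.r_[np.zeros(len(X_k)), np.ones(len(X_min))]
    models.append(LogisticRegression(max_iter=1000).fit(X_bal, y_bal))

# majority vote over the 20 models
votes = sum(m.predict(X_new) for m in models)
y_pred = (votes >= 15).astype(int)          # e.g. 15 of 20 must agree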
Let's say I want to predict which courses a final-year student will take and which grades they will receive in those courses. We have data on previous students' courses and grades for each year (not just the final year) to train with. We also have the courses and grades from previous years for the students whose results we want to estimate. I want to use a recurrent neural network with long short-term memory (LSTM) to solve this problem. (I know this problem can be solved by regression, but I want the neural network specifically, to see if this problem can be properly solved by one.)
The way I want to set up the output (label) space is by having a feature for each possible course a student can take, with a value between 0 and 1 in each of those entries describing whether the student will attend the course (if not, the entry for that course is 0) and, if so, what their mark will be (e.g., if the student attends course A and gets 57%, the label for course A is 0.57).
Am I setting the output space properly?
If yes, what optimization and activation functions should I use?
If no, how can I re-shape my output space to get good predictions?
If I understood you correctly, you want the network to be given the history of a student and then output one entry for each course. This entry is supposed to simultaneously signify whether the student will take the course (0 for not taking the course, 1 for taking the course) and also give the expected grade? Then the interpretation of the output for a single course would be like this:
0.0 -> won't take the course
0.1 -> will take the course and get 10% of points
0.5 -> will take the course and get half of points
1.0 -> will take the course and get full points
If this is indeed your plan, I would definitely advise rethinking it.
Some obviously realistic cases do not fit this pattern. For example, how would you represent that an (A+) student is unlikely to take a course? Should the network output 0.9999, because (s)he is very likely to get the maximum number of points if (s)he takes the course, or should the network output 0.0001, because the student is very unlikely to take the course?
Instead, you should output two values in [0, 1] for each student and each course:
the first value in [0, 1] gives the probability that the student will participate in the course
the second value in [0, 1] gives the expected relative number of points.
As the loss, I'd propose something like binary cross-entropy on the first value and a simple squared error on the second, and then combine all the losses using some L^p metric of your choice (e.g., simply add everything up for p = 1, or square and add for p = 2).
A few examples:
(0.01, 1.0) : very unlikely to participate, would probably get 100%
(0.5, 0.8): 50%-50% whether participates or not, would get 80% of points
(0.999, 0.15): will participate, but probably pretty much fail
The quantity that you wanted to output seemed to be something like the product of these two, which is a bit difficult to interpret.
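As a concrete illustration, here is a minimal two-headed Keras sketch; the layer sizes, the LSTM over the per-year history, and the simple additive combination of the two losses (the p = 1 case above) are my assumptions:

from tensorflow import keras

n_courses = 50  # hypothetical number of possible courses

# input: per-year history, two values (took it?, grade) per course
inputs = keras.Input(shape=(None, 2 * n_courses))
h = keras.layers.LSTM(64)(inputs)

# head 1: probability of taking each course (binary cross-entropy)
p_take = keras.layers.Dense(n_courses, activation="sigmoid", name="take")(h)
# head 2: expected relative number of points in [0, 1] (squared error)
grade = keras.layers.Dense(n_courses, activation="sigmoid", name="grade")(h)

model = keras.Model(inputs, [p_take, grade])
# Keras adds the two losses, i.e. the p = 1 combination
model.compile(optimizer="adam",
              loss={"take": "binary_crossentropy", "grade": "mse"})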
There is more than one way to solve this problem. Andrey's answer gives one good approach.
I would like to suggest simplifying the problem by bucketing grades into categories and adding an additional category for "did not take", for both input and output.
This turns the task into a classification problem only, and solves the issue of trying to differentiate between receiving a low grade and not taking the course in your output.
For example, your training set might have m students, n possible classes, and six possible results: ['A', 'B', 'C', 'D', 'F', 'did_not_take'].
And you might choose the following architecture:
Input -> Dense Layer -> ReLU -> Dense Layer -> ReLU -> Dense Layer -> Softmax
Your input shape is (m, n, 6) and your output shape could be (m, n*6), where you apply a softmax over every group of 6 outputs (corresponding to one class) and sum into a single loss value. This is an example of multiclass, multilabel classification.
I would start by trying 2n neurons in each hidden layer (see the sketch below).
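A rough Keras sketch of this setup, with hypothetical sizes; the softmax per group of 6 is obtained by reshaping the output to (n, 6):

from tensorflow import keras

n = 50  # hypothetical number of possible classes

inputs = keras.Input(shape=(n, 6))           # one-hot result per class
x = keras.layers.Flatten()(inputs)
x = keras.layers.Dense(2 * n, activation="relu")(x)
x = keras.layers.Dense(2 * n, activation="relu")(x)
x = keras.layers.Dense(n * 6)(x)
x = keras.layers.Reshape((n, 6))(x)
outputs = keras.layers.Softmax(axis=-1)(x)   # softmax per group of 6

model = keras.Model(inputs, outputs)
# cross-entropy is computed per class (last axis) and averaged into one loss
model.compile(optimizer="adam", loss="categorical_crossentropy")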
If you really want a continuous output for grades, however, then I recommend using separate classification and regression networks. This way you don't have to combine classification and regression losses into one number, which can get messy with scaling issues.
You can keep the grade buckets for the input data only, so the two networks take the same input data, but for the grade-regression network your last layer can be n sigmoid units with log loss. These will output numbers between 0 and 1, corresponding to the predicted grade for each class.
If you want to go even further, consider using an architecture that takes into account the order in which students took previous classes. For example, if a student took French I the previous year, it is more likely he/she will take French II this year than if he/she took French freshman year and did not continue with French after that.
I have processed output for churn analysis in the telecom industry in the following format:
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
If I print it, the output looks like this:
[(0.0, 0.074), (0.0, 0.132),(1.0, 0.853),.....]
As you can see, each element has two parts: a label and a prediction. For example, the first element has label 0.0 (or 0) and its prediction is 0.074 (in other words, it has a 7% chance of churning, i.e., switching to another network). I used a set of 5 different features to train my model.
Now I want to visualize this output (the predictions) in IPython using a histogram or scatterplot, but I am having difficulty understanding how to plot the above result.
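A minimal matplotlib sketch, assuming the (label, prediction) pairs are small enough to collect to the driver:

import matplotlib.pyplot as plt

pairs = labelsAndPredictions.collect()  # [(label, prediction), ...]

# histogram of predicted churn probabilities, split by true label
plt.hist([p for l, p in pairs if l == 0.0], bins=20, alpha=0.5, label="label 0")
plt.hist([p for l, p in pairs if l == 1.0], bins=20, alpha=0.5, label="label 1")
plt.xlabel("predicted churn probability")
plt.ylabel("number of customers")
plt.legend()
plt.show()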
I was trying to fit a classification machine learning algorithm on a data set that has 32 attributes, the last column being the target class. I reduced the number of attributes from 32 to 6, which I felt would be more useful for my classification model.
I tried to run J48 and some incremental classification algorithms.
I expected an output structure consisting of a confusion matrix, correctly and incorrectly classified instances, and the kappa value.
But my result did not give any information on correctly and incorrectly classified instances. It also did not report a confusion matrix or kappa value. All I received was this:
=== Summary ===
Correlation coefficient 0.9482
Mean absolute error 0.2106
Root mean squared error 0.5673
Relative absolute error 13.4077 %
Root relative squared error 31.9157 %
Total Number of Instances 1461
Can anyone tell me why I did not get the confusion matrix, kappa, and correctly/incorrectly classified instances information?
Unfortunately, you didn't post your code or say which version of Weka you are using. (Note that the summary you got, with a correlation coefficient instead of accuracy, is what Weka prints when the class attribute is numeric, so check that your target is declared as nominal rather than numeric.)
By the way, to calculate the confusion matrix, kappa, etc., you can use the methods of the Evaluation class: http://weka.sourceforge.net/doc.dev/weka/classifiers/Evaluation.html
For example, after you train your model:
classifier.buildClassifier(train); // train is an Instances object
Evaluation eval = new Evaluation(train);
// evaluate your model with 10-fold cross-validation
eval.crossValidateModel(classifier, train, 10, new Random(1));
System.out.println(classifier);
// print different stats
System.out.println(eval.toSummaryString());
System.out.println(eval.toMatrixString());
System.out.println(eval.toClassDetailsString());