Specifying emmeans for negative binomial model - glm

I've been trying out different models for my count data, which is zero inflated, and after comparing the Akaike information criterion (AIC) and the rootograms of the different models, a negative binomial generalized linear model looks like the best fit. I'd like to run a post-hoc test on my data, but I keep running into errors and I'm not sure how best to frame the code to get what I'm looking for.
I used the following code for the model, where "flwrs" is my response variable (the count data), "tre" is the treatment group (there are four: F400, F800, GE400, and GE800), and "wa" is weeks after transplant (ranging from 1 to 26).
negbin <- glm.nb(flwrs ~ tre*wa + (1 | replic), data = test)
summary(negbin)
Anova(negbin)
With the anova I see p < 2.2e-16 for tre, wa, and tre:wa. I'd like to explore the significance of the tre and wa effects in particular.
Analysis of Deviance Table (Type II tests)

Response: flwrs
           LR Chisq Df Pr(>Chisq)    
tre           263.6  3  < 2.2e-16 ***
wa           4423.8 25  < 2.2e-16 ***
1 | replic            0              
tre:wa        531.1 72  < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
When I try to do a post-hoc with emmeans, though, I keep running into errors that I can't seem to work through.
This is what I tried, based on what I read in the emmeans help pages in R:
emm <- emmeans(negbin, ~ tre | wa)
pairs(emm, adjust = "tukey")
The response is this:
> emm <- emmeans(negbin, ~ tre | wa)
Error in `[.data.frame`(tbl, , vars, drop = FALSE) :
undefined columns selected
Error in (function (object, at, cov.reduce = mean, cov.keep = get_emm_option("cov.keep"), :
Perhaps a 'data' or 'params' argument is needed
> pairs(emm, adjust = "tukey")
Error in if (nc < 2L) stop("only one column in the argument to 'pairs'") :
argument is of length zero
So I'm not sure how to specify the parameters or define the columns. Mainly I'm hoping to get the p values for the different tre and wa combinations so that I can add significance values to a plot I created. I'm very new to statistical modelling in R, so any suggestions about what I'm doing are welcome!
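A minimal sketch of one likely fix, based on the second error message: pass the model's data frame explicitly so emmeans can rebuild its reference grid (assuming the data frame is test, as in the model call). Two caveats: since wa is numeric, emmeans conditions on its mean unless wa is converted to a factor or given explicit levels, and glm.nb() does not actually fit (1 | replic) as a random effect, so a mixed-model function such as glmer.nb() from lme4 may be the better tool.

library(emmeans)
# Pass the original data so emmeans can rebuild its reference grid;
# convert wa to a factor first if week-by-week comparisons are wanted
emm <- emmeans(negbin, ~ tre | wa, data = test)
pairs(emm, adjust = "tukey")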

Accounting for time with repeated-measures in lmer when not interested in time

I am trying to conduct a repeated-measures mixed-effects test with lmer and lmerTest, but I am not sure if I am doing it appropriately.
I have 6 sites with 3 plots per site that have been sampled once per year for 24 consecutive years. I have several environmental and species variables, but for simplicity, let's say I have two environmental variables (depth and temperature) and two species (species 1 and species 2). I am not interested in the time variable, changes over time, or the interactions, as this system has strong wet/dry seasonality in which the effects of the dry season outweigh any carry-over effects of species from the prior year. I do not necessarily have data for all variables and plots every year, as some plots were not sampled at times.
The question is whether species2 (a predator) has any effect on populations of species1, relative to the environmental variables.
Is it appropriate to include year as its own random effect in the model, along with plot within site?
model1 <- lmer(species1 ~ depth + temperature + species2 + (1|year) + (1|site/plot), data=data)
For this particular analysis, there were 435 total observations (plot/year), but I worry that it is not appropriately handling the repeated measures.
anova(model1)
Type III Analysis of Variance Table with Satterthwaite's method
             Sum Sq Mean Sq NumDF  DenDF F value    Pr(>F)    
depth        0.0221  0.0221     1 145.75  0.0908    0.7635    
temperature  9.0213  9.0213     1 422.19 37.0429 2.596e-09 ***
species2     0.0597  0.0597     1 418.95  0.2450    0.6208    
This does not seem right. Is there a better way to incorporate year, or should I include year at all?
If I exclude year, why does the DenDF for depth change so drastically?
model1 <- lmer(species1 ~ depth + temperature + species2 + (1|year) + (1|site/plot), data=data)
Type III Analysis of Variance Table with Satterthwaite's method
             Sum Sq Mean Sq NumDF  DenDF  F value    Pr(>F)    
depth         2.599   2.599     1 431.77   7.1096  0.007955 ** 
temperature  58.788  58.788     1 432.10 160.7955 < 2.2e-16 ***
species2      0.853   0.853     1 429.62   2.3336  0.127343    
summary(M1)
Linear mixed model fit by maximum likelihood. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: species1 ~ depth + temperature + species2 + (1 | site/plot)
   Data: data

     AIC      BIC   logLik deviance df.resid 
   833.4    861.9   -409.7    819.4      428 

Scaled residuals: 
     Min       1Q   Median       3Q      Max 
-2.20675 -0.66119 -0.07051  0.52722  2.99942 

Random effects:
 Groups    Name        Variance  Std.Dev.
 plot:site (Intercept) 0.0003221 0.01795 
 site      (Intercept) 0.2051143 0.45290 
 Residual              0.3656072 0.60465 
Number of obs: 435, groups: plot:site, 24; site, 6

Fixed effects:
              Estimate Std. Error         df t value Pr(>|t|)    
(Intercept) -0.538258   0.325072  50.071940  -1.656  0.10401    
depth        0.006338   0.002377 431.768539   2.666  0.00796 ** 
temperature  0.391023   0.030837 432.101095  12.681  < 2e-16 ***
species2    -0.353264   0.231252 429.615226  -1.528  0.12734    
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
            (Intr) depth  temp  
depth       -0.316              
temperature -0.467 -0.204       
species2    -0.544  0.040  0.007
I may have asked more questions than I answered, but I hope some of this is helpful.
"The question is whether species2 (a predator) has any effect on populations of species1, relative to the environmental variables."
I think when you word it this way, it is not entirely clear. Are you interested in the effect that species2 has on species1 depending on the environmental variables (in other words, can the effect of species2 on species1 change depending on depth or temperature)? Or do you mean you would like to compare the effect of species2 on species1 to the effects of depth or temperature on species1? Or what do you mean, exactly, by "relative to the environmental variables"?
Yes, (1|year) + (1|site/plot) specifies a random intercept for both year and plot within site. If you wanted a variable's effect to vary over each group (i.e. a random slope), you would write something like (temperature|year) + (1|site/plot) if you thought the effect of temperature on species1 might differ between years.
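For concreteness, a minimal sketch of such a random-slope model, assuming the question's variable names:

library(lme4)
library(lmerTest)
# Random intercepts for year and plot-within-site, plus a random slope
# that lets the temperature effect vary across years
model2 <- lmer(species1 ~ depth + temperature + species2 +
                 (temperature | year) + (1 | site/plot),
               data = data)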
Exactly how you specify the model is going to be based on your knowledge of the biological system and your knowledge of statistics. Based on the information in your question, the random effects formulation you have suggested appears completely reasonable to me. Yes, it allows you to account for grouped data (grouped by each year and by each plot within site). With only 435 observations you may run into convergence issues with an overly complex model - just something to look out for.
I am not sure what you mean by "this does not seem right" - what are you expecting to see? What is missing?
I am seeing the same model twice (below) with different output values. Is there a copy-and-paste error here, or am I missing something? The values shouldn't differ with the same model structure.
model1 <- lmer(species1 ~ depth + temperature + species2 + (1|year) + (1|site/plot), data=data)
You haven't removed year in the line above, but you have in the summary(M1) output below it.
My simple answer about the year question would be yes, I would include year. Every year is so different in any biological dataset I have seen that it is worth including as a random intercept at least, exactly as you have done. If the variance of the random effect is estimated to be zero, the model behaves as if the term weren't there in the first place. At that point you can choose to fit year as a fixed effect instead if you would still like to account for the grouped nature of the data, as sketched below.
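A sketch of that fallback, again assuming the question's variable names:

# Year as a fixed factor instead of a random intercept
model3 <- lmer(species1 ~ depth + temperature + species2 + factor(year) +
                 (1 | site/plot), data = data)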
Also, there are lots of resources on this. Some examples:
Bolker, Benjamin M., Mollie E. Brooks, Connie J. Clark, Shane W. Geange, John R. Poulsen, M. Henry H. Stevens, and Jada-Simone S. White. "Generalized linear mixed models: a practical guide for ecology and evolution." Trends in ecology & evolution 24, no. 3 (2009): 127-135.
Harrison, Xavier A., Lynda Donaldson, Maria Eugenia Correa-Cano, Julian Evans, David N. Fisher, Cecily ED Goodwin, Beth S. Robinson, David J. Hodgson, and Richard Inger. "A brief introduction to mixed effects modelling and multi-model inference in ecology." PeerJ 6 (2018): e4794.
https://peerj.com/articles/4794/

Difference between the weight parameter in xgb.DMatrix and scale_pos_weight in hyper params list?

I am having a little difficulty understanding the difference between the weight argument in xgb.DMatrix and the scale_pos_weight parameter in the param list. I am going through the following code, which uses the Higgs data.
Due to the data being unbalanced, the author defines a weight parameter:
weight <- as.numeric(dtrain[[32]]) * testsize / length(label)
sumwpos <- sum(weight * (label==1.0))
sumwneg <- sum(weight * (label==0.0))
However, column 32 is already a weight variable, so is the author modifying an already defined weight variable?
Then, the modified weight variable is being set as the "weight" argument of xgb.DMatrix:
xgmat <- xgb.DMatrix(data, label = label, weight = weight, missing = -999.0)
Additionally, in the param list the author has "scale_pos_weight" = sumwneg / sumwpos.
So scale_pos_weight is a function of sumwneg, which is a function of weight, which is a function of a previously defined weight (column 32). So I am confused.
What does the author do in the following line: weight <- as.numeric(dtrain[[32]]) * testsize / length(label)
What is the difference between setting the weight in xgb.DMatrix and setting scale_pos_weight in the params?
When you set
xgmat <- xgb.DMatrix(data, label = label, weight = weight, missing = -999.0)
weight should be a vector with one entry per data row.
If for example you have the following data:
  A B C
1 1 1 1
2 2 2 2
you need to set weight as a vector of 2 weights
weight <- c(1, 2)
So the first event gets a weight of 1 and the second event a weight of 2. You might ask yourself why this is useful: assume event 1 happened once and event 2 happened twice; you'd like the weights to correspond to the number of times each event occurred.
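A minimal R sketch of attaching such per-row weights (toy data, not the Higgs code):

library(xgboost)
X      <- matrix(c(1, 1, 1,
                   2, 2, 2), nrow = 2, byrow = TRUE)
label  <- c(0, 1)
weight <- c(1, 2)   # one weight per data row
dtrain <- xgb.DMatrix(X, label = label, weight = weight)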
Here are a few more examples of using weights:
If you want recent events to have more "value"
The amount of confidence you have in a data row: set all weights between 0 and 1, with the weight representing how sure you are of that row. For example, weight = 0.88 gives that row 88% confidence.
If you have repeated events: instead of creating more rows, you can include each event once and give it a weight equal to the number of times it was repeated.
scale_pos_weight is usually used when you have imbalanced data. For example, in a classification problem where 5% of the data is labelled 1 and 95% is labelled 0, you would like to give more weight to every positive event, so you can simply set scale_pos_weight = 19 (or, as the author wrote, sumwneg / sumwpos).
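For example, a sketch with hypothetical labels (5% positives) and unit weights, in which case the ratio reduces to a simple count ratio:

label <- c(rep(1, 50), rep(0, 950))
# Same idea as the author's sumwneg / sumwpos, with all weights equal to 1
scale_pos_weight <- sum(label == 0) / sum(label == 1)   # = 19
params <- list(objective = "binary:logistic",
               scale_pos_weight = scale_pos_weight)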
As for the author redefining weight: I can't know without the full code what he did there, but I assume he's doing some sort of normalization of the weights.

ML coursera submission (week 2) Feature Normalization

I have written the following code for the section "feature normalization".
Here X is the Feature matrix (m*n) such that
m = number of examples
n = number of features
Code
mu = mean(X);
sigma = std(X);
m = size(X,1);
% Subtracting the mean from each row
for i = 1:m
X_norm(i,:) = X(i,:)-mu;
end;
% Dividing the STD from each row
for i = 1:m
X_norm(i,:) = X(i,:)./sigma;
end;
But on submitting it to the server built for Andrew Ng's class, it gives me no confirmation of whether it's wrong or correct.
== 
==                                  Part Name |     Score | Feedback
==                                  --------- |     ----- | --------
==                           Warm-up Exercise |   10 / 10 | Nice work!
==          Computing Cost (for One Variable) |   40 / 40 | Nice work!
==        Gradient Descent (for One Variable) |   50 / 50 | Nice work!
==                      Feature Normalization |    0 /  0 | 
==    Computing Cost (for Multiple Variables) |    0 /  0 | 
==  Gradient Descent (for Multiple Variables) |    0 /  0 | 
==                           Normal Equations |    0 /  0 | 
==                           --------------------------------
==                                            | 100 / 100 | 
Is this a bug in the web frontend presentation layer or my code?
When submit() does not give you any points, it means your answer is not correct.
This usually means that either you have not implemented it yet or there is a mistake in your implementation.
From what I can see, your indices are not correct. However, in order not to violate the code of conduct of this course, you should ask your question in the Coursera forum (without posting your code).
There are also tutorials with each programming exercise. Those are usually very helpful and guide you through the entire exercise.
You need to iterate over EACH feature:
m = size(X,1);
What you are actually getting with m is the number of ROWS (examples), but you want the number of COLUMNS (features).
Solution:
m = size(X,2);
Try this; it worked for me. Also notice the mistake you're making: you divide each row of X without subtracting the mean first.
Combine both steps and do it with less code, like this:
% Subtract the mean and divide by the STD for each row:
for i = 1:m
X_norm(i,:) = (X(i,:) - mu) ./ sigma;
end;
At the end of the class, the final correct answer was given as featureNormalize.m:
function [X_norm, mu, sigma] = featureNormalize(X)
%description: Normalizes the features in X
% FEATURENORMALIZE(X) returns a normalized version of X where
% the mean value of each feature is 0 and the standard deviation
% is 1. This is often a good preprocessing step to do when
% working with learning algorithms.
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));
% Instructions: First, for each feature dimension, compute the mean
% of the feature and subtract it from the dataset,
% storing the mean value in mu. Next, compute the
% standard deviation of each feature and divide
% each feature by its standard deviation, storing
% the standard deviation in sigma.
%
% Note that X is a matrix where each column is a
% feature and each row is an example. You need
% to perform the normalization separately for
% each feature.
mu = mean(X);
sigma = std(X);
X_norm = (X - mu)./sigma;
end
If you're taking this class and feel the urge to copy and paste, you're in a grey area of academic honesty. You're supposed to figure it out from first principles, not google it and regurgitate the answer.

Counting or integrating multivariate Gaussian probabilities on opposite side of non-linear decision line

So I have something that looks like the following [figure: two Gaussian class distributions separated by a non-linear decision boundary]. However, I am having real trouble integrating the data on the other side of this decision line to get my errors.
In general, given an analytic form of the decision boundary, you could compute the integrals exactly. However, why not use Monte Carlo, which is fast, simple and generic (it will work for any distributions and decision boundaries)? All you have to do is repeatedly sample from your Gaussians, check whether each sampled point is on the correct side (N_c) or the incorrect side (N_i), and in the limit you will get your integrals from
INTEGRAL_of_distributions_being_on_correct_side ~ N_c / (N_c + N_i)
INTEGRAL_of_distributions_being_on_incorrect_side ~ N_i / (N_c + N_i)
thus, in pseudocode:
N_c = 0
N_i = 0
for i = 1 to N do
    y ~ P({-, +})          # sample a class label
    x ~ P(X | y)           # sample a point from that class
    if side_of_decision(x) == y then
        N_c += 1
    else
        N_i += 1
    end
end
return N_c, N_i
In your case P({-, +}) is probably just a 50-50 chance, and P(X|-) and P(X|+) are your two Gaussians.
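A runnable R sketch of this estimator; the class means, covariance, and decision function below are illustrative assumptions, not taken from the question:

library(MASS)   # mvrnorm, for sampling multivariate Gaussians
set.seed(42)
n      <- 1e5
mu_pos <- c(1, 1);  mu_neg <- c(-1, -1)       # illustrative class means
Sigma  <- diag(2)                             # illustrative shared covariance
f <- function(x) x[1] + x[2] - 0.5 * x[1]^2   # hypothetical rule: "+" iff f(x) > 0

y <- sample(c(-1, 1), n, replace = TRUE)      # P({-, +}) = 50-50
X <- matrix(NA_real_, n, 2)
X[y ==  1, ] <- mvrnorm(sum(y ==  1), mu_pos, Sigma)
X[y == -1, ] <- mvrnorm(sum(y == -1), mu_neg, Sigma)

pred     <- ifelse(apply(X, 1, f) > 0, 1, -1)
err_rate <- mean(pred != y)   # estimates N_i / (N_c + N_i)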

Learning Weka - Precision and Recall - Wiki example to .Arff file

I'm new to WEKA and to advanced statistics, starting from scratch to understand the WEKA measures. I've worked through all the #rushdi-shams examples, which are great resources.
The Wikipedia article on precision and recall (http://en.wikipedia.org/wiki/Precision_and_recall) explains the concepts with a simple example: a video-recognition program detects 7 dogs in a scene containing 9 real dogs and some cats.
I understand the example perfectly, including the recall calculation.
So as a first step, I wanted to see how to reproduce this data in Weka.
How do I create such a .ARFF file?
With the file below I get the wrong confusion matrix and the wrong accuracy by class:
recall is not 1; it should be 4/9 (0.4444).
@relation 'dogs and cat detection'
@attribute 'realanimal' {dog,cat}
@attribute 'detected' {dog,cat}
@attribute 'class' {correct,wrong}
@data
dog,dog,correct
dog,dog,correct
dog,dog,correct
dog,dog,correct
cat,dog,wrong
cat,dog,wrong
cat,dog,wrong
dog,?,?
dog,?,?
dog,?,?
dog,?,?
dog,?,?
cat,?,?
cat,?,?
Output Weka (without filters)
=== Run information ===
Scheme:weka.classifiers.rules.ZeroR
Relation: dogs and cat detection
Instances: 14
Attributes: 3
realanimal
detected
class
Test mode:10-fold cross-validation
=== Classifier model (full training set) ===
ZeroR predicts class value: correct
Time taken to build model: 0 seconds
=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances 4 57.1429 %
Incorrectly Classified Instances 3 42.8571 %
Kappa statistic 0
Mean absolute error 0.5
Root mean squared error 0.5044
Relative absolute error 100 %
Root relative squared error 100 %
Total Number of Instances 7
Ignored Class Unknown Instances 7
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure ROC Area Class
1 1 0.571 1 0.727 0.65 correct
0 0 0 0 0 0.136 wrong
Weighted Avg. 0.571 0.571 0.327 0.571 0.416 0.43
=== Confusion Matrix ===
a b <-- classified as
4 0 | a = correct
3 0 | b = wrong
There must be something wrong with the false-negative dogs. Or is my ARFF approach totally wrong, and do I need a different kind of attributes?
Thanks
Let's start with the basic definitions of precision and recall:
Precision = TP/(TP+FP)
Recall = TP/(TP+FN)
Where TP is True Positive, FP is False Positive, and FN is False Negative.
In the above dog.arff file, Weka took into account only the first 7 tuples; it ignored the remaining 7. It can be seen from the above output that it classified all 7 tuples as correct (4 correct tuples + 3 wrong tuples).
Let's calculate the precision and recall for the correct and wrong classes.
First, for the correct class:
Prec   = 4/(4+3) = 0.571428571
Recall = 4/(4+0) = 1
For the wrong class:
Prec   = 0/(0+0) = 0
Recall = 0/(0+3) = 0
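The same numbers can be recomputed outside Weka; a small R sketch using the confusion matrix above:

cm <- matrix(c(4, 0,
               3, 0), nrow = 2, byrow = TRUE,
             dimnames = list(actual    = c("correct", "wrong"),
                             predicted = c("correct", "wrong")))
TP <- cm["correct", "correct"]   # 4
FP <- cm["wrong",   "correct"]   # 3
FN <- cm["correct", "wrong"]     # 0
precision <- TP / (TP + FP)      # 4/7 = 0.571
recall    <- TP / (TP + FN)      # 4/4 = 1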
