I am fitting a Spatial Error Model using the errorsarlm() function in the spdep library.
The Breusch-Pagan test for spatial models, calculated using the bptest.sarlm() function, suggests the presence of heteroskedasticity.
A natural next step would be to obtain robust standard error estimates and update the p-values. The documentation of the bptest.sarlm() function says the following:
"It is also technically possible to make heteroskedasticity corrections to standard error estimates by using the “lm.target” component of sarlm objects - using functions in the lmtest and sandwich packages."
and the following code is presented as a reference:
lm.target <- lm(error.col$tary ~ error.col$tarX - 1)
if (require(lmtest) && require(sandwich)) {
  print(coeftest(lm.target, vcov = vcovHC(lm.target, type = "HC0"), df = Inf))
}
where error.col is the estimated spatial error model.
Now, I can easily adapt the code to my problem and get the robust standard errors.
Nevertheless, I was wondering:
What exactly is the “lm.target” component of sarlm objects? I cannot find any mention of it in the spdep documentation.
What exactly are $tary and $tarX? Again, they do not seem to be mentioned in the documentation.
Why does the documentation say it is only "technically possible to make heteroskedasticity corrections"? Does this mean that the proposed approach is not really recommended for dealing with heteroskedasticity?
I reported this issue on GitHub and received a response from Roger Bivand:
No, the approach is not recommended at all. Either use sphet or a Bayesian approach giving the marginal posterior distribution. I'll drop the confusing documentation. tary is $y - \rho W y$ and similarly for tarX in the spatial error model case. Note that tary etc. only occur in spdep in documentation for localmoran.exact() and localmoran.sad(); were you using out of date package versions?
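Following up on the suggestion to use sphet, here is a minimal sketch of a heteroskedasticity-robust GMM fit of a spatial error model. The formula, the data frame df and the weights list lw are placeholders for your own objects, and the spreg() interface with model = "error" and het = TRUE is assumed here; check the sphet documentation for your version.
## Minimal sketch (assumed interface): GMM spatial error model with
## heteroskedasticity-robust inference via sphet. `y ~ x1 + x2`, `df`
## and `lw` are placeholders for your own formula, data and listw object.
library(sphet)

fit_het <- spreg(y ~ x1 + x2, data = df, listw = lw,
                 model = "error",  # spatial error specification
                 het   = TRUE)     # heteroskedasticity-robust variance estimator
summary(fit_het)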
I am trying to understand and implement modifications to the Veins framework. Right now, I have some difficulties figuring out how the "Mapping" structure works. It is used to set the transmission power in "Mac1609_4.cc"
ConstMapping* txPowerMapping = createSingleFrequencyMapping(start, end, frequency, 5.0e6, power);
and to calculate received power, SNR and SINR in "Decider80211p.cc". Could you give some insight and some examples related to manipulating this structure?
The Mapping structure comes from MiXiM, which Veins was originally forked from. MiXiM, however, is now deprecated and should no longer be used [2]. Unfortunately, there is no real documentation available (anymore).
As a replacement, there is either INET, which is also supported by Veins, or, as will be introduced in the next release, a much simpler representation of Signals that removes the Mapping structure [4].
If you still need to understand the structure, you can have a look at this paper, in which the authors explain the physical layer, including the Mapping structure.
I'm looking for some guidance on the approach I should take to mapping some points with R.
Earlier this year I went off to a forest to map the spatial distribution of some seedlings. I created a grid: every two meters I set down a flag with a tag name, and then measured the distance from each flag to a seedling, as well as the bearing, using a military compass. I chose this method in the hope of getting better accuracy (Garmin GPS units prove useless for this sort of task under canopy cover).
I am really new to spatial distribution work altogether, so I was hoping someone could provide guidance on which R packages I should use.
The first step is to create a grid with my reference points (the flags). The second step is to tell R to use a reference point, distance, and bearing to mark the location of a seedling. From there come other analyses, such as cluster analysis.
The largest R package for analysing point pattern data like yours is spatstat, which has very detailed documentation and an accompanying book.
For more specific help you would need to upload (part of) your data so we can see how it is organised and how you should read it in and convert to standard x,y coordinates.
Full disclosure: I'm a co-author of both the package and the book.
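In the meantime, here is a minimal sketch of the conversion step, assuming your measurements sit in a data frame obs with columns flag_x, flag_y (grid coordinates of the reference flag, in metres), dist (tape distance to the seedling, in metres) and bearing (compass bearing in degrees, measured clockwise from north). All of these names are placeholders, and if your compass reads mils rather than degrees the angle conversion needs adjusting.
library(spatstat)

## Convert bearing/distance observations to x,y coordinates
## (bearing assumed in degrees, clockwise from north).
theta <- obs$bearing * pi / 180
x <- obs$flag_x + obs$dist * sin(theta)   # east-west offset from the flag
y <- obs$flag_y + obs$dist * cos(theta)   # north-south offset from the flag

## Build a spatstat point pattern on the extent of the grid
pp <- ppp(x, y, window = owin(range(x), range(y)))
plot(pp)
summary(pp)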
I have been using QuantLib 1.6.2 to bootstrap the hazard rates from a CDS curve. My code is similar to the example "CDS.cpp" that comes with the QuantLib distribution, i.e.,
boost::shared_ptr<PiecewiseDefaultCurve<HazardRate, BackwardFlat> >
    hazardRateStructure(new PiecewiseDefaultCurve<HazardRate, BackwardFlat>(
        todaysDate, instruments, Actual365Fixed()));
I tried to experiment with different non-linear interpolation methods (instead of BackwardFlat listed above) such as:
CubicNaturalSpline
KrugerCubic
Parabolic
FritschButlandCubic
MonotonicParabolic
but I am getting the error "no appropriate default constructor available". What is the proper way of passing one of these interpolators to the
PiecewiseDefaultCurve class?
Thank you,
Chris
[Note: in case someone stumbles on this question, I'm copying here the answer I gave you on the QuantLib mailing list.]
The classes you're listing are the actual interpolation classes, but the curve is expecting a corresponding factory class (for instance, BackwardFlat in the example is the factory for the BackwardFlatInterpolation class). In the case of cubic interpolations, you'll have to use the Cubic class. By default, it builds Kruger interpolations (I'm not aware of the reason for the choice) so if you write:
PiecewiseDefaultCurve<HazardRate, Cubic>(todaysDate, instruments, Actual365Fixed())
you'll get a curve using the KrugerCubic class. To get the other interpolations, you can pass a Cubic instance with the corresponding parameters (you can look them up in the constructors of the interpolation classes); for instance,
PiecewiseDefaultCurve<HazardRate, Cubic>(todaysDate, instruments, Actual365Fixed(),
                                         1e-12, Cubic(CubicInterpolation::Spline, false))
will give you a curve using the CubicNaturalSpline class, and
PiecewiseDefaultCurve<HazardRate, Cubic>(todaysDate, instruments, Actual365Fixed(),
                                         1e-12, Cubic(CubicInterpolation::Parabolic, true))
will use the MonotonicParabolic class.
I have run the following model using lmerTest and using lme4:
model2 <- lmer(log(RT) ~ Group*A*B*C + (1|item) + (1+A+B+C|subject), data = dt)
Using lmerTest I get the following error when typing the summary() command:
> summary(model2)
Error in `colnames<-`(`*tmp*`, value = c("Estimate", "Std. Error", "df", :
length of 'dimnames' [2] not equal to array extent
I saw that this has already been an issue for other users and that one user was able to bypass it by running lsmeans().
When I tried lsmeans, I got the error:
Error in asMethod(object) : not a positive definite matrix.
I did not see any NAs when looking at the covariance matrix.
Note that I am able to run this model if I simply reverse the contrasts in the Group factor.
I have difficulties understanding why this is the case.
When I run the same model using lme4 rather than lmerTest, I am able to get all the output of summary(), but no p-values (as expected). pvals.fnc is discontinued for current versions of lme4 and I have not found an alternative yet. It would also be nice to have the p-values estimated in the same way for model2 as for the other models for which I was able to use lmerTest successfully.
Does anyone know what I should do at this point? Any help would be much appreciated!
If A, B or C are factors, then you might get such errors: models of this kind are not yet supported by the lmerTest package (we will add the warning message, together with the restrictions for such models, to the help page).
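Until lmerTest supports this kind of model, one workaround for getting p-values with plain lme4 is a likelihood-ratio test between nested fits. A minimal sketch, reusing the data frame dt and the formula from the model above (here testing all terms involving Group jointly):
library(lme4)

## Fit with ML (REML = FALSE) so the likelihood-ratio test is valid
m_full    <- lmer(log(RT) ~ Group*A*B*C + (1|item) + (1+A+B+C|subject),
                  data = dt, REML = FALSE)
m_noGroup <- lmer(log(RT) ~ A*B*C + (1|item) + (1+A+B+C|subject),
                  data = dt, REML = FALSE)

## Chi-square test for the joint contribution of Group and its interactions
anova(m_noGroup, m_full)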
What settings does Hyperopt provide to adjust the balance between exploration and exploitation? There is something like "bandit" and "bandit_algo" in the code, but no explanation.
Could someone provide a code sample?
Thanks a lot for any help!
I just found hyperopt's partial(), a magical wrapper function for the optimizer algorithm. It lets you balance between different strategies and, through that, exploration/exploitation:
Partial returns the result of a randomly-chosen suggest function. For example to search by sometimes using random search, sometimes anneal, and sometimes tpe, type:
fmin(...,
     algo=partial(mix.suggest,
                  p_suggest=[
                      (.1, rand.suggest),
                      (.2, anneal.suggest),
                      (.7, tpe.suggest)]),
)
Parameter "p_suggest": list of (probability, suggest) pairs. Make a suggestion from one of the suggest functions, in proportion to its corresponding probability. sum(probabilities) must be [close to] 1.0.
If you want even sharper control of the algorithm's progression, you can use the fact that hyperopt's optimizer algorithms are stateless and work with the Trials object, which can be provided as input to a new fmin call to continue the process. You can then call fmin with max_evals set to 1 and handle the process in a loop, so that you can modify the trials and the suggest algorithm between iterations.
For the best bet, read the papers by Bergstra et al. 1, 2 and 3. I am not 100% clear on what the bandit_algo is, except that one of the papers mentions it as an alternative method to Gaussian Processes and the Tree of Parzen Estimators; maybe you can use it in the same way as those two?
My guess is that if it is not documented, it may not be finished yet. You can try raising an issue on GitHub; the devs are fairly responsive from what I have seen.
EDIT: Looking at this paper, these bandit algorithms may be the base class that the others inherit from.