What are the additional parameters reported by "summary.frontier" when "extraPar = TRUE"? (R package frontier)

According to the documentation of the "frontier" package, when extraPar is set to TRUE, some additional parameters are reported, such as "sigmaSqU", "sigmaSqV", etc. However, it also states that "the sigmaSqU and sigmaU are not the variance and standard errors respectively of u".
So my question is: what are sigmaSqU and sigmaU, if they are not the variance and standard error, respectively, of u?
Thank you very much.

I found a useful set of lecture notes that answers this question and is also informative with regard to stochastic frontier analysis using R:
https://files.itslearning.com/data/ku/103018/teaching/lecturenotes.pdf?
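For what it's worth, my reading of the frontier documentation is that sigmaSqU is not Var(u) but the variance parameter of the pre-truncation normal distribution from which the inefficiency term u is obtained (with sigmaSqU = gamma * sigmaSq and sigmaU = sqrt(sigmaSqU)). Assuming the default half-normal specification $u \sim N^{+}(0, \sigma_U^2)$, the actual moments of u would be $E[u] = \sigma_U \sqrt{2/\pi}$ and $\mathrm{Var}(u) = \sigma_U^2 (1 - 2/\pi)$, which is presumably why the package warns that sigmaSqU and sigmaU are not the variance and standard error of u.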

Related

Robust Standard Errors in spatial error models

I am fitting a Spatial Error Model using the errorsarlm() function in the spdep library.
The Breusch-Pagan test for spatial models, calculated using the bptest.sarlm() function, suggests the presence of heteroskedasticity.
A natural next step would be to get the robust standard error estimates and update the p-values. The documentation of the bptest.sarlm() function says the following:
"It is also technically possible to make heteroskedasticity corrections to standard error estimates by using the “lm.target” component of sarlm objects - using functions in the lmtest and sandwich packages."
and the following code (as reference) is presented:
# OLS on the spatially filtered response and design matrix stored in the sarlm object
lm.target <- lm(error.col$tary ~ error.col$tarX - 1)
# Heteroskedasticity-consistent (HC0) standard errors via the lmtest and sandwich packages
if (require(lmtest) && require(sandwich)) {
  print(coeftest(lm.target, vcov = vcovHC(lm.target, type = "HC0"), df = Inf))
}
where error.col is the spatial error model estimated.
Now, I can easily adapt the code to my problem and get the robust standard errors.
Nevertheless, I was wondering:
What exactly is the "lm.target" component of sarlm objects? I cannot find any mention of it in the spdep documentation.
What exactly are $tary and $tarX? Again, they do not seem to be mentioned in the documentation.
Why does the documentation say it is "technically possible to make heteroskedasticity corrections"? Does that mean the proposed approach is not really recommended for dealing with heteroskedasticity?
I reported this issue on GitHub and got a response from Roger Bivand:
No, the approach is not recommended at all. Either use sphet or a Bayesian approach giving the marginal posterior distribution. I'll drop the confusing documentation. tary is $y - \rho W y$ and similarly for tarX in the spatial error model case. Note that tary etc. only occur in spdep in documentation for localmoran.exact() and localmoran.sad(); were you using out of date package versions?

Tiny YOLOv3 (Darknet) training "too quickly" and produces different output

I am pretty new to YOLO/Darknet and am walking in circles with the possible solutions. I have looked at the GitHub and Stack Exchange pages covering similar issues, but none seems to directly address this output issue (i.e. the region IOU line is missing). Here is my output (training/testing):
Here is my directory structure:
Other details:
I am using the AlexeyAB fork.
6 classes in total (following this convention of annotating occluded and truncated items, so two "items" with three classes each)
I'm using 200+ training images (definitely too few, but I don't know if this is the root cause of my troubles).
There is no predictions.png, just predictions.jpg. However, I don't think this should be an issue.
I followed this tutorial.
Any help is very much appreciated; thank you in advance!
If training finishes too soon, try adding -clear 1 at the end of your training command.
EDIT:
This is the correct answer (hence why I accepted it), but it lacks an explanation. According to this answer, the "-clear 1" flag clears past training stats.

OpenCV Background Model Component Extraction

I am working with the BackgroundSubtractorMOG2 class in OpenCV (Python) and am trying to extract the individual components of the background model. As I understand it, each pixel is modeled by a mixture of a varying number of Gaussian distributions, each defined by a mean and a variance. So, how can I determine what all of these components (means and variances) are after feeding the background subtractor a given number of frames?
The documentation here:
https://docs.opencv.org/3.4.3/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html#adbb1d295befaff88a54a929e50aaf879
does not seem to discuss how to do this.
This information must be contained somewhere in the background subtractor object. Does anyone know how to get to it?
Thanks!
Edit: A little more searching has led me to believe that the cv2.Algorithm class is required to read the parameters from the BackgroundSubtractorMOG2 object. I think the two questions posed here:
http://answers.opencv.org/question/28008/how-to-derive-from-algorithm/
Reading algorithm parameters from file in OpenCV
are similar to what I am asking, but I am unable to interpret the answers. I thought the solution would be something along the lines of:
Parameters = cv2.Algorithm.read('name_of_backgroundsubtractorMOG2_object')
but this returns an error of: 'Required argument 'fn' (pos 1) not found'
Edit 2: Unfortunately I think this question has been answered here:
Save opencv BackgroundSubtractorMOG to file?
Short answer: It cannot be done! Sad!
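For anyone landing here later: the per-pixel mixture components (individual means and variances) do not appear to be exposed through the Python bindings, but the global model parameters and the current background estimate are. A minimal sketch of what is accessible, assuming the OpenCV 3.x/4.x Python API (the frames below are synthetic placeholders just to exercise the model):

import numpy as np
import cv2

# Create the subtractor; history, varThreshold and detectShadows are the documented constructor arguments
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

# Feed it some frames so the per-pixel mixtures have something to estimate
for _ in range(50):
    frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
    mog2.apply(frame)

# Global parameters of the mixture model are readable through getters
print(mog2.getNMixtures())     # maximum number of Gaussian components per pixel
print(mog2.getHistory())       # number of frames that influence the model
print(mog2.getVarThreshold())  # squared Mahalanobis distance threshold for the component-match test

# The per-pixel background estimate as an image, but not the underlying individual components
background = mog2.getBackgroundImage()
print(background.shape)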

Variation in BLEU Score

I have a question about BLEU score calculation for machine translation. I realized there seem to be different variants of BLEU: the code I use reports five values, namely BLEU-1, BLEU-2, BLEU-3, BLEU-4 and finally BLEU, which seems to be an exponential average of the previous four. It is still not clear to me what the difference between these is. Do you have any ideas? Thanks.
P.S. At first I thought this question was more theoretical and posted it on Meta Stack Exchange. A moderator closed it and commented that it was a Stack Overflow type question, so please don't punish me again. =)
source: http://www.statmt.org/book/slides/08-evaluation.pdf
I hadn't heard of BLEU-1 and BLEU-2 before, but I guess they refer to the 1-gram, 2-gram, 3-gram and 4-gram precisions in the BLEU formula, i.e. precision[i] = BLEU-i in your question.
Actually, BLEU-n doesn't use the n-gram scores only. It computes the 1-gram through n-gram scores and gives them equal weight to compute a final score. See the "Cumulative N-Gram Scores" section at this link for more info.
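To make the difference concrete, here is a small sketch using NLTK's sentence_bleu (assuming that implementation; the sentences are made up). Individual n-gram scores put all the weight on one order, while cumulative BLEU-N spreads equal weight over orders 1 to N:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]
smooth = SmoothingFunction().method1  # avoids zero scores when some n-gram order has no match

# Individual scores: weight only the 1-gram (or only the 2-gram) precision
bleu_1gram = sentence_bleu(reference, candidate, weights=(1, 0, 0, 0), smoothing_function=smooth)
bleu_2gram = sentence_bleu(reference, candidate, weights=(0, 1, 0, 0), smoothing_function=smooth)

# Cumulative scores: equal weights over the 1..N-gram precisions (a geometric mean)
bleu2 = sentence_bleu(reference, candidate, weights=(0.5, 0.5), smoothing_function=smooth)
bleu4 = sentence_bleu(reference, candidate, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)

print(bleu_1gram, bleu_2gram, bleu2, bleu4)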

Original paper for DisparityWLSFilter in openCV?

I am working on post-processing of a disparity map.
My disparity image, even though it is WLS filtered, has too many "holes".
This is what I get for now. It is rectified (in a fisheye way, but definitely rectified), yet it has many holes. The disparity matching algorithm is SGBM, the WLS filter sigma is 2.1 and lambda is 30000. The black regions are holes.
I am referring to the official OpenCV tutorial on disparity map post-filtering, which uses DisparityWLSFilter extensively. But I wonder how it works internally and would like to read the theoretical paper behind this implementation. I want to know what sigma and lambda do and how the filter will affect my image.
Also, is there any other good disparity filter I can use? The WLS filter cannot fill the "holes" effectively. Or any algorithm that is easy to use or implement, or a library that is not GPL?
Self-reply: I got an answer from OpenCV.
The original question is HERE.
The reply says:
References have been added here, documentation reference
cc #sbokov
Check out the comments here, and the code here. That should answer some of your questions. To see how the code author came up with this method, you should perhaps contact him directly, as there is no reference for it in the code comments.
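On the practical side of the sigma/lambda question: per the OpenCV documentation, Lambda is the amount of regularization (larger values make the filtered disparity edges adhere more closely to the source image edges; the typical value is around 8000), and SigmaColor defines how sensitive the filtering is to edges in the guide image (large values can let disparity leak across low-contrast edges; typical values are roughly 0.8 to 2.0). A rough sketch of the usual pipeline, assuming the opencv-contrib ximgproc module (file names and parameter values are placeholders):

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Left and right matchers; the right matcher is derived from the left one so the filter can build a confidence map
left_matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

disp_left = left_matcher.compute(left, right)
disp_right = right_matcher.compute(right, left)

# WLS filter guided by the left view
wls = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
wls.setLambda(8000.0)    # amount of regularization
wls.setSigmaColor(1.5)   # sensitivity to guide-image edges

filtered = wls.filter(disp_left, left, None, disp_right)
confidence = wls.getConfidenceMap()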
