What is the error correction capability of polar codes? For example, consider block length N = 8 and message length k = 4; then polar codes can correct up to 1 error. Along these lines, for an arbitrary N, what is the error correction capability?
This is a theory-based question. I am not fully comfortable with the polar codes concept.
I am using the Learning from Data textbook by Yaser Abu-Mostafa et al. I am curious about the following statement in the linear regression chapter and would like to verify that my understanding is correct.
After talking about the "pseudo-inverse" way to get the "best weights" (best for minimizing squared error), i.e. w_lin = (X^T X)^-1 X^T y,
the statement is: "The linear regression weight vector is an attempt to map the inputs X to the outputs y. However, w_lin does not produce y exactly, but produces an estimate X w_lin which differs from y due to in-sample error."
If the data lies exactly on a hyperplane, won't X w_lin match y exactly (i.e. in-sample error = 0)? In other words, the above statement is only talking about data that does not lie exactly on a hyperplane, correct?
Here, the 'w_lin' that would fit a given data point exactly is not the same for all data points (all pairs (x, y)).
The linear regression model finds the single best possible weight vector (the best possible 'w_lin') considering all data points, such that X*w_lin gives a result very close to 'y' for every data point.
Hence the error will not be zero unless all data points lie exactly on one line (or, in higher dimensions, one hyperplane).
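To make the point concrete, here is a minimal sketch (not from the book; the data is made up) that computes w_lin with the quoted pseudo-inverse formula. When the targets lie exactly on a hyperplane the in-sample residual is numerically zero; once noise is added it is not:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))            # 20 data points, 3 features
w_true = np.array([1.0, -2.0, 0.5])

# Case 1: y lies exactly on a hyperplane -> X w_lin reproduces y
y_exact = X @ w_true
w_lin = np.linalg.solve(X.T @ X, X.T @ y_exact)   # (X^T X)^-1 X^T y
print(np.linalg.norm(X @ w_lin - y_exact))        # ~1e-15, essentially zero

# Case 2: y has noise -> the single best w_lin cannot match every point
y_noisy = y_exact + 0.1 * rng.normal(size=20)
w_lin = np.linalg.solve(X.T @ X, X.T @ y_noisy)
print(np.linalg.norm(X @ w_lin - y_noisy))        # clearly nonzero
```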
The community might not get the whole context unless the book is opened, because not everything that the author of the book says may have been covered in your post. But let me try to answer.
Whenever any model is formed, there are certain constants whose values are not known beforehand but which are used to fit the line/curve as well as possible. Also, the equations often contain an element of randomness. Variables that take random values cause some error when the actual and expected outputs are compared.
Suggested reading: Errors and residuals
Studying ORB feature descriptors from the official paper, I found it stating:
We empirically choose r to be the patch size, so that x and y run from [−r, r]. As |C| approaches 0, ...
I did not understand how r is calculated; please tell me how to calculate r.
I tried hard to dig deeper on the internet, but I couldn't find a formula or an explanation, and I did not understand what the quoted statement means.
Would you please explain it to me, and give me the formula if there is one?
The paper says:
"We empirically choose r to be the patch size,..."
In OpenCV a patch size of 31 is used (this seems to be the standard value).
The intensity patches of the specified size are used for the description of a FAST keypoint. Since ORB uses the BRIEF descriptor, an image patch is converted into a binary string which is later compared to match the keypoints. More details can be found in the BRIEF paper.
So if you increase r you will increase the size of the image patch from which the binary string is computed.
So the radius is not calculated by some formula; it is simply chosen by the developers/user.
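In OpenCV's Python API, for instance, the patch size is just a constructor argument of the ORB detector. A minimal sketch (the filename is an assumption):

```python
import cv2

# patchSize=31 is OpenCV's default; a larger value means the descriptor is
# computed over a larger intensity patch around each keypoint.
orb = cv2.ORB_create(patchSize=31, edgeThreshold=31)

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)   # assumed input image
keypoints, descriptors = orb.detectAndCompute(img, None)

# Each ORB/BRIEF descriptor is a 256-bit (32-byte) binary string
print(len(keypoints), descriptors.shape)
```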
I've tried using the code given at https://docs.opencv.org/3.2.0/da/de9/tutorial_py_epipolar_geometry.html to find the epipolar lines, but instead of getting the output shown in the link, I am getting the following output.
But when changing the line F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS) to
F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_RANSAC), i.e. using the RANSAC algorithm to find the fundamental matrix instead of LMEDS, this is the output.
When the same line is replaced with F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_8POINT), i.e. using the eight-point algorithm, this is the output.
None of the above outputs is visually sane, nor anywhere close to the output given in the OpenCV documentation for finding epipolar lines. But oddly, if the same code is executed while changing the algorithm used to find the fundamental matrix in this particular sequence,
FM_LMEDS
FM_8POINT
FM_7POINT
FM_LMEDS
the most accurate results are generated. This is the output.
I thought we were supposed to get the above output in one run of any of the algorithms (with some variation in the matrix values and error). Am I running the code incorrectly? What do I have to do to get correct (i.e. visually sane) epipolar lines? I am using OpenCV version 3.3.0 and Python 2.7.
Looking forward to a reply.
Thank you.
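For reference, here is a minimal sketch of the flag swap being described, with the mask returned by cv2.findFundamentalMat used to keep only inlier matches before drawing the lines. The filenames are assumptions, and it uses ORB plus a brute-force matcher rather than the tutorial's SIFT/FLANN pipeline:

```python
import cv2
import numpy as np

img1 = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)    # assumed image pair
img2 = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

# Swap the flag to compare estimators:
# cv2.FM_LMEDS, cv2.FM_RANSAC, cv2.FM_8POINT, cv2.FM_7POINT
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# With RANSAC/LMEDS the mask marks inliers; drawing epipolar lines only for
# the inlier points usually makes the result look much more reasonable.
pts1_in = pts1[mask.ravel() == 1]
pts2_in = pts2[mask.ravel() == 1]
print(F)
print(len(pts1_in), 'inlier correspondences')
```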
In the function cv::calcOpticalFlowPyrLK(..) there's a parameter cv::OutputArray err. What does this parameter specify? Is it the distance at which the corresponding match was found for a feature?
Lucas Kanade | OpenCV
This question arose because I checked the difference between err[i] and the Euclidean distance between prevPts[i] and nextPts[i], and it turns out to be somewhere in the range −1 to +1, occasionally outside it.
Optical flow basically works by matching a patch, around each input point, from the input image to the second image.
The parameter err allows you to retrieve the matching error (e.g. you may think of that as the correlation error) for each input point. As said in the documentation, the actual error measure depends on what flags were specified.
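A minimal sketch of where err comes out of the call and how it relates to (but differs from) the displacement; the frame filenames are assumptions, and the comments on the default error measure follow the OpenCV documentation:

```python
import cv2
import numpy as np

prev_img = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)   # assumed frames
next_img = cv2.imread('frame2.png', cv2.IMREAD_GRAYSCALE)

prev_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=100,
                                   qualityLevel=0.01, minDistance=7)

# With the default flags, err[i] is the per-pixel average intensity difference
# between the patch around prevPts[i] and the patch around nextPts[i].
# With OPTFLOW_LK_GET_MIN_EIGENVALS it is a minimum-eigenvalue measure instead.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_img, next_img,
                                                 prev_pts, None,
                                                 winSize=(21, 21), maxLevel=3)

for p, q, s, e in zip(prev_pts.reshape(-1, 2), next_pts.reshape(-1, 2),
                      status.ravel(), err.ravel()):
    if s:  # the point was successfully tracked
        displacement = np.linalg.norm(q - p)   # how far the point moved
        # e is a patch-matching error, not the displacement, so the two
        # quantities need not agree.
        print(displacement, e)
```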
Hi (sorry for my English). I'm working on a university project in which I need to use the MBA (Multilevel B-Spline Approximation) algorithm to get some points (control points) of an image to use in other operations.
I have been reading a lot of papers about this algorithm, and I think I understand it, but I can't manage to write the code.
The idea is: read an image, process the image (OpenCV), then get the control points of the image and use those points.
So the problem here is:
The algorithm uses a set of points {(x, y, z)}; this set of points is approximated by a surface generated from the control points obtained by MBA. The set of points {(x, y, z)} represents the data we need to approximate (the image).
So, the image is in cv::Mat format; how can I transform this format into an ordinary array so I can simply access and manipulate the data?
Here are a couple of papers with an explanation of the method, plus a Matlab implementation:
(Paper) REGULARIZED MULTILEVEL B-SPLINE REGISTRATION
(Paper) Scattered Data Interpolation with Multilevel B-splines
(Matlab) MBA
If someone can help, a guideline, an idea, or anything at all would be appreciated.
Thanks in advance.
EDIT: I finally wrote the algorithm in C++ using Armadillo and OpenCV.
I'm using Armadillo, a C++ linear algebra library, to work with the matrices for the algorithm.
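Purely as an illustration of what the scattered data set {(x, y, z)} looks like (the original project is in C++ with Armadillo; this sketch uses Python/NumPy, where cv2.imread already returns an ordinary array, and the filename is an assumption):

```python
import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # assumed input image
h, w = img.shape

# Every pixel becomes one (x, y, z) triple, with z the pixel intensity;
# this is the scattered data that MBA approximates with a control lattice.
ys, xs = np.mgrid[0:h, 0:w]
points = np.column_stack([xs.ravel(),
                          ys.ravel(),
                          img.ravel().astype(np.float64)])

print(points.shape)   # (h*w, 3)
```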