I want to convert a grayscale image into a 3D object model with the software Halcon. Then, I want to export it into a .om3 file.
I'm able to export a 3D object model to .om3, but I'm stuck on converting the grayscale image.
Try xyz_to_object_model_3d, where X, Y, and Z are your range images. As far as I know you can't convert a Z image alone into a 3D model; you need the X and Y images too. You can simply generate those as gradients, with intensity running from 0 up to however wide your model is if you're using 8-bit images, or convert the images to reals so the gradients can also contain negative values.
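A minimal sketch of that pipeline, assuming the HALCON/C++ interface (HalconCpp): the operator names mirror the HDevelop operators xyz_to_object_model_3d and write_object_model_3d, the file names are placeholders, and the exact signatures should be verified against the HALCON reference for your version.

#include "HalconCpp.h"
using namespace HalconCpp;

int main()
{
    // The grayscale image serves directly as the Z (range) image;
    // convert it to 'real' so coordinates are not limited to 8-bit values.
    HObject z;
    ReadImage(&z, "depth_image.tif");
    ConvertImageType(z, &z, "real");

    HTuple width, height;
    GetImageSize(z, &width, &height);

    // X and Y as 'real' gradient ramps: gray = Alpha*row + Beta*column + Gamma,
    // so X increases along the columns and Y along the rows.
    HObject x, y;
    GenImageSurfaceFirstOrder(&x, "real", 0.0, 1.0, 0.0, 0.0, 0.0, width, height);
    GenImageSurfaceFirstOrder(&y, "real", 1.0, 0.0, 0.0, 0.0, 0.0, width, height);

    // Combine the three coordinate images and export as .om3.
    HTuple model;
    XyzToObjectModel3d(x, y, z, &model);
    WriteObjectModel3d(model, "om3", "model.om3", HTuple(), HTuple());
    return 0;
}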
I have a 512x512 grayscale image (or MultiArray) which is the output of a CoreML depth estimation model.
In Python, one can use Matplotlib or other packages to visualise grayscale images in different colormaps, like so:
[Example images: the same depth map rendered in grayscale and with the Magma colormap; images from https://ai.googleblog.com/2019/08/turbo-improved-rainbow-colormap-for.html]
I was wondering if there is any way to take said output and render it with such a colormap in Swift/iOS?
If you make the model output an image, you get a CVPixelBuffer object. This is easy enough to draw on the screen by converting it to a CIImage and then a CGImage.
If you want to draw it with a colormap, you'll have to replace each of the grayscale values with a color manually. One way to do this is to output an MLMultiArray and loop through each of the output values, and use a lookup table for the colors. A quicker way is to do this in a Metal compute shader.
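A minimal sketch of that lookup-table idea, written in C++ for illustration (on iOS the same per-pixel loop would run in Swift over the MLMultiArray values); the ramp used to fill the table below is a stand-in, not the real magma colormap:

#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Stand-in LUT: a simple black -> red -> yellow ramp. A real magma table
// would be 256 hard-coded triplets, e.g. exported from Matplotlib.
std::array<RGB, 256> makeLUT()
{
    std::array<RGB, 256> lut{};
    for (int i = 0; i < 256; ++i) {
        lut[i].r = static_cast<uint8_t>(std::min(2 * i, 255));
        lut[i].g = static_cast<uint8_t>(std::max(2 * i - 255, 0));
        lut[i].b = 0;
    }
    return lut;
}

// Replace each grayscale value with its LUT color: one table lookup per pixel.
std::vector<RGB> applyColormap(const std::vector<uint8_t>& gray)
{
    static const std::array<RGB, 256> lut = makeLUT();
    std::vector<RGB> out;
    out.reserve(gray.size());
    for (uint8_t v : gray) out.push_back(lut[v]);
    return out;
}

A Metal compute shader would do exactly the same lookup, just with one GPU thread per pixel instead of a CPU loop.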
How to take pixels from an input image using Gaussian sub-sampling (a shotgun-like pattern)?
I want the locations of pixels to be sampled in a shotgun-like pattern concentrated in the middle of the image, because I do not want to extract features for every pixel in the image. The output should be the coordinates of the sampled pixels.
Is there any function or existing code that could help me with this?
Your help is appreciated.
If you are looking for a method to define a Region of Interest (ROI) of an image in Matlab in order to perform some operation in a restricted area, remember that x coordinates represent columns and y the rows (Matlab reads images as matrices).
To cut an image from x1 to x2 and from y1 to y2, try something like
ROI = image(y1:y2, x1:x2);
but how to determine these four values without a specific example is up to you.
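As for the Gaussian, shotgun-style sampling the question actually asks about, here is a minimal sketch in C++: coordinates are drawn from a 2D normal distribution centered on the image, and samples falling outside the bounds are rejected. The image size, sigma, and sample count are assumptions to tune.

#include <cmath>
#include <random>
#include <utility>
#include <vector>

// Draw n pixel coordinates from a Gaussian centered on the image so the
// samples concentrate in the middle, shotgun-style.
std::vector<std::pair<int, int>> gaussianSample(int width, int height,
                                                double sigma, int n)
{
    std::mt19937 rng(42);  // fixed seed for reproducible sampling
    std::normal_distribution<double> dx(width / 2.0, sigma);
    std::normal_distribution<double> dy(height / 2.0, sigma);

    std::vector<std::pair<int, int>> points;
    while (static_cast<int>(points.size()) < n) {
        int x = static_cast<int>(std::lround(dx(rng)));
        int y = static_cast<int>(std::lround(dy(rng)));
        if (x >= 0 && x < width && y >= 0 && y < height)
            points.emplace_back(x, y);  // (column, row) of one sampled pixel
    }
    return points;
}

The returned coordinates are exactly the locations of the pixels to be taken; feature extraction then runs only on those points.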
I'm trying to do image recognition / tracking inside a video file played from Unity. Is it possible to do image recognition on a video file (not augmented reality) using the Vuforia API?
If not, does anyone else have any suggestions on how to accomplish this?
Thanks!
If you want to recognize a particular frame in your video stream, the simplest effective solution is to match the histogram of your sample frame against the histograms of the frames in the stream. I don't know whether this can be done through the Vuforia API, but if you are willing to implement the image processing yourself, the process is quite simple:
1) Convert your sample image to grayscale (if it is a color image).
2) Calculate the image histogram for a certain number of bins.
3) Store this histogram in a variable.
4) Now run through your video file in a loop, extract frames from it, apply the above three steps, and get a histogram of the same size as the sample image's.
5) Find the distance between the two histograms using a simple sum of squares and apply a similarity threshold: if the distance is less than your threshold, the frame is quite similar to the sample image (see the sketch after this list).
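A sketch of those five steps in OpenCV C++ (file names, bin count, and threshold are placeholder assumptions):

#include <iostream>
#include <opencv2/opencv.hpp>

// Steps 1-2: grayscale histogram with a fixed number of bins, L1-normalized
// so frames of different sizes remain comparable.
cv::Mat grayHistogram(const cv::Mat& bgr, int bins = 64)
{
    cv::Mat gray, hist;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    int channels[] = {0};
    int histSize[] = {bins};
    float range[] = {0.0f, 256.0f};
    const float* ranges[] = {range};
    cv::calcHist(&gray, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);
    return hist;
}

int main()
{
    cv::Mat sample = cv::imread("sample.png");
    cv::Mat sampleHist = grayHistogram(sample);     // step 3: store it

    cv::VideoCapture cap("video.mp4");
    const double threshold = 0.05;                  // tune for your data
    cv::Mat frame;
    while (cap.read(frame)) {                       // step 4: loop over frames
        double dist = cv::norm(sampleHist, grayHistogram(frame), cv::NORM_L2);
        if (dist < threshold)                       // step 5: distance + threshold
            std::cout << "frame matches the sample\n";
    }
    return 0;
}

cv::norm with NORM_L2 returns the square root of the sum of squared differences; since the square root is monotonic, thresholding it is equivalent to thresholding the raw sum of squares.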
Another approach may be:
1) Find the color covariance matrix of the input sample (if it is a color image).
2) To compute it, convert each color channel (R, G, B) into a column vector and put them column-wise in a single variable, e.g. [R,G,B].
3) Get the column-wise mean and subtract it from each value of the respective column (centering your data around the mean).
4) Now transpose your 3-column matrix and multiply, like:
Cov = [R,G,B]^T * [R,G,B];
5) The above will give you a 3-by-3 matrix.
6) Do the above for each frame and find the distance between the covariance matrix of the sample image and that of the query frame; put a threshold on it to decide similarity (see the sketch after this list).
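A sketch of steps 2-5 in OpenCV C++ (note that OpenCV loads images as BGR rather than RGB; the channel order only permutes the matrix, not the distances computed from it):

#include <opencv2/opencv.hpp>

// Reshape the pixels into an N x 3 matrix, center each column on its mean,
// then form Cov = M^T * M as in step 4, giving a 3 x 3 matrix.
cv::Mat colorCovariance(const cv::Mat& bgr)
{
    cv::Mat m;
    bgr.reshape(1, static_cast<int>(bgr.total())).convertTo(m, CV_64F);
    cv::Mat mean;
    cv::reduce(m, mean, 0, cv::REDUCE_AVG);   // column-wise mean (1 x 3)
    m -= cv::repeat(mean, m.rows, 1);         // center the data around the mean
    return m.t() * m;                         // 3 x 3 covariance (unnormalized)
}

For step 6, cv::norm(covA, covB, cv::NORM_L2) gives a simple distance between two such matrices, and cv::eigen(cov, eigenvalues) yields the eigenvalues mentioned as a further extension below.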
A further extension of the above would be to find the eigenvalues of the covariance matrix and use them as features for the similarity calculation.
You can also try extracting a color histogram rather than a grayscale histogram.
For more complex situations you can go with key-point detection and matching approaches.
Thank You
I have a vector of Point2f which has color space CV_8UC4, and I need to convert it to CV_64F. Is the following code correct?
points1.convertTo(points1, CV_64F);
More details:
I am trying to use this function to calculate the essential matrix (rotation/translation) via the 5-point algorithm, instead of using the findFundamentalMat included in OpenCV, which is based on the 8-point algorithm:
https://github.com/prclibo/relative-pose-estimation/blob/master/five-point-nister/five-point.cpp#L69
As you can see, it first converts the image to CV_64F. My input image is a CV_8UC4 BGRA image. When I tested the function, both BGRA and greyscale images produced mathematically valid matrices, but if I pass a greyscale image instead of a color one, it takes much longer to compute, which makes me think I'm not doing something correctly in one of the two cases.
I read that when the change in color space is not linear (which I suppose is the case when you go from 4 channels to 1, as here), you should normalize the intensity values. Is that correct? Which input should I give to this function?
Another note, the function is called like this in my code:
vector<Point2f> imgpts1, imgpts2;
for (vector<DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it)
{
    imgpts1.push_back(firstViewFeatures.second[it->queryIdx].pt);
    imgpts2.push_back(secondViewFeatures.second[it->trainIdx].pt);
}
Mat mask;
Mat E = findEssentialMat(imgpts1, imgpts2, [camera focal], [camera principal_point], CV_RANSAC, 0.999, 1, mask);
The fact I'm not passing a Mat, but a vector of Point2f instead, seems to create no problems, as it compiles and executes properly.
Is it the case I should store the matches in a Mat?
I am not sure what you mean by a vector of Point2f in some color space, but if you want to convert a vector of points into a vector of points of another type, you can use any standard C++/STL algorithm like copy(), assign(), or insert(). For example:
copy(floatPoints.begin(), floatPoints.end(), back_inserter(doublePoints));
(back_inserter grows the destination as it copies; copying into begin() of an empty vector would be undefined behavior)
or
doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());
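A self-contained version of the same idea, using the names from the fragments above (cv::Point_ converts between element types, which is what makes the plain copy work):

#include <algorithm>
#include <iterator>
#include <vector>
#include <opencv2/core.hpp>

int main()
{
    std::vector<cv::Point2f> floatPoints = {{1.5f, 2.5f}, {3.0f, 4.0f}};
    std::vector<cv::Point2d> doublePoints;

    // back_inserter grows the destination as elements are copied over.
    std::copy(floatPoints.begin(), floatPoints.end(),
              std::back_inserter(doublePoints));
    return 0;
}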
No, it is not. A std::vector<cv::Point2f> cannot make use of the OpenCV convertTo function.
I think you really mean that you have a cv::Mat points1 of type CV_8UC4. Note that that is RxCx4 values (where R and C are the numbers of rows and columns), while a CV_64F matrix will have RxC values only. So you need to be clearer about how you want to transform those values.
You can do points1.convertTo(points1, CV_64FC4) to get an RxCx4 matrix.
Update:
Some remarks after you updated the question:
Note that a vector<cv::Point2f> is a vector of 2D points that is not associated with any particular color space; they are just coordinates in the image axes. So they represent the same 2D points in a grey, RGB, or HSV image, and the execution time of findEssentialMat doesn't depend on the image color space. Getting the points may, though.
That said, I think your input for findEssentialMat is fine (the function takes care of the vectors and converts them into its internal representation). In such cases, it is very useful to draw the points on your image to debug the code.
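As an illustration of that debugging tip, a small helper (assuming OpenCV C++; the color and radius are arbitrary choices):

#include <opencv2/opencv.hpp>
#include <vector>

// Draw the matched points on the image before calling findEssentialMat,
// to check that they land where you expect.
void drawPoints(cv::Mat& image, const std::vector<cv::Point2f>& pts)
{
    for (const cv::Point2f& p : pts)
        cv::circle(image, p, 3, cv::Scalar(0, 255, 0), -1);  // filled green dot
}

Calling it with imgpts1 and imgpts2 on the two views and showing the results with cv::imshow quickly reveals mismatched or misplaced correspondences.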
When given an image such as this:
And not knowing the color of the object in the image, I would like to be able to automatically find the best H, S and V ranges to threshold the object itself, in order to get a result such as this:
In this example, I manually found the values and thresholded the image using cv::inRange. The output I'm looking for is the best H, S, and V ranges (a min and max value for each, six integer values in total) to threshold the given object in the image, without knowing in advance what color the object is. I need to use these values later on in my code.
Key points to remember:
- All given images will be of the same size.
- All given images will have the same dark background.
- All the objects I'll put in the images will be of full color.
I could brute-force over all possible permutations of the six HSV range values, threshold each one, and find a clever way to figure out when the best blob was found (blob size, maybe?). That seems like a very cumbersome, slow, and highly inefficient solution though.
What would be a good way to approach this? I did some research and found that OpenCV has some machine learning capabilities, but I need to end up with the actual six values, not just a thresholded image.
You could create a small two-layer neural network for the task of dynamic HSV masking.
Steps:
- Create/generate ground-truth annotations: for each image, the HSV range of the required object.
- Design a small neural network with at least one convolutional layer and one fully connected layer.
- Input: the m x n mask of the image after applying the HSV range from the ground truth.
- Output: an m x n binary mask of the image.
- Post-processing: multiply the mask with the original image to get the required object highlighted.
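Separately from the network idea, here is a minimal non-learning baseline in OpenCV C++ that exploits the stated dark background: threshold the V channel to isolate the object, then read the per-channel min/max inside that mask (percentiles would be more robust to outliers). The file name and the V threshold are assumptions:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("object.png");
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Object mask: pixels clearly brighter than the dark background.
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);                       // ch[0]=H, ch[1]=S, ch[2]=V
    cv::Mat mask = ch[2] > 40;                // V threshold, tune as needed

    // The six values: min and max of H, S, and V over the object pixels.
    double lo, hi;
    const char* names[] = {"H", "S", "V"};
    for (int i = 0; i < 3; ++i) {
        cv::minMaxLoc(ch[i], &lo, &hi, nullptr, nullptr, mask);
        std::cout << names[i] << " range: [" << lo << ", " << hi << "]\n";
    }
    return 0;
}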