Currently I'm using OpenCV's Java bindings to extract image features and store them in an HBase table. The problem is that the features of an image come back as Mat or MatOfKeyPoint objects in OpenCV, while inserting data into an HBase table requires byte[].
......
featureDetector.detect(trainImages, trainKeypoints);
descriptorExtractor.compute(trainImages, trainKeypoints, trainDescriptors);
// Save to HBase
Put put = new Put(key.getBytes());
put.add(family, keypoints, trainKeypoints); // problem: trainKeypoints is MatOfKeyPoint, not byte[]
put.add(family, descriptors, trainDescriptors); // problem: trainDescriptors is Mat, not byte[]
........
Does anyone know a good way to do this?
Thanks.
If MatOfKeyPoint and Mat are serializable to and deserializable from byte[], then you can just do that.
To store a MatOfKeyPoint or a Mat, get its byte[] representation and write those bytes to HBase.
To read the data back, read the byte[] from HBase and then rebuild the corresponding object from it.
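OpenCV's Java Mat does not implement java.io.Serializable, but you can pull the raw element data out with Mat.get and prepend the shape so the Mat can be rebuilt later (via Mat.put). The header+payload encoding itself needs no OpenCV at all; here is a minimal sketch using a plain float[], which is what Mat.get(0, 0, float[]) would hand you for a CV_32F descriptor Mat. The class and method names are illustrative, not part of any library:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class MatCodec {
    // Encode a rows x cols float matrix (e.g. the output of Mat.get(0, 0, float[]))
    // into a self-describing byte[]: [rows][cols][payload].
    static byte[] encode(int rows, int cols, float[] data) {
        ByteBuffer buf = ByteBuffer.allocate(8 + data.length * 4);
        buf.putInt(rows).putInt(cols);
        for (float f : data) buf.putFloat(f);
        return buf.array();
    }

    // Decode the byte[] back into the flat float payload; the rows/cols header
    // tells you the shape to pass when rebuilding the Mat with Mat.put.
    static float[] decode(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int rows = buf.getInt(), cols = buf.getInt();
        float[] data = new float[rows * cols];
        for (int i = 0; i < data.length; i++) data[i] = buf.getFloat();
        return data;
    }

    public static void main(String[] args) {
        float[] descriptors = {1.5f, 2.0f, 3.25f, 4.0f, 5.5f, 6.0f};
        byte[] bytes = encode(2, 3, descriptors);          // what you would store in HBase
        float[] back = decode(bytes);                      // what you rebuild the Mat from
        System.out.println(Arrays.equals(descriptors, back)); // prints "true"
    }
}
```

The same header+payload idea works for MatOfKeyPoint: convert it to KeyPoint[] with toArray() and encode each KeyPoint's fields the same way.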
I have an existing pre-allocated CV_8UC3 cv::Mat, and I would like to copy 24bpp (strided RGB) data from a buffer into the Mat's data buffer. Is there a routine for achieving this?
Note: I am familiar with the constructor route, but I do not want to create a new cv::Mat.
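The usual answer in C++ is a per-row memcpy from the source buffer into mat.data, honoring the source stride (rows are often padded past width * 3 bytes). The row-copy logic itself is language-agnostic; here is a small Java sketch of that pattern, with the 24bpp (3 bytes per pixel) layout and all names being illustrative:

```java
import java.util.Arrays;

public class StridedCopy {
    // Copy a 24bpp image whose rows are padded to `srcStride` bytes into a
    // tightly packed destination buffer (stride == width * 3), row by row.
    // This mirrors what a per-row memcpy into mat.data would do in C++.
    static void copyStrided(byte[] src, int srcStride,
                            byte[] dst, int width, int height) {
        int rowBytes = width * 3; // 3 bytes per pixel (RGB)
        for (int y = 0; y < height; y++) {
            System.arraycopy(src, y * srcStride, dst, y * rowBytes, rowBytes);
        }
    }

    public static void main(String[] args) {
        // A 2x2 RGB image with 2 bytes of row padding (stride = 8).
        byte[] src = {1, 2, 3, 4, 5, 6, 0, 0,   7, 8, 9, 10, 11, 12, 0, 0};
        byte[] dst = new byte[2 * 2 * 3];
        copyStrided(src, 8, dst, 2, 2);
        System.out.println(Arrays.toString(dst));
        // prints "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]"
    }
}
```

If the source stride happens to equal width * 3 (no padding), the whole thing collapses to a single bulk copy.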
I have a lot of code based on OpenCV, but there are many ways in which the Arm Compute Library improves performance, so I'd like to integrate some Arm Compute Library code into my project. Has anyone tried converting between the two corresponding image structures? If so, what did you do? Or is there a way to share a pointer to the underlying data buffer, setting strides and flags appropriately, without needing to copy image data?
I was able to configure an arm_compute::Image matching my cv::Mat's properties, allocate the memory, and point it at the data portion of my cv::Mat.
This way, I can process my image efficiently using arm_compute and maintain the opencv infrastructure I had for the rest of my project.
// cv::Mat mat defined and initialized above
arm_compute::Image image;
image.allocator()->init(arm_compute::TensorInfo(mat.cols, mat.rows, arm_compute::Format::U8));
image.allocator()->allocate();
image.allocator()->import_memory(arm_compute::Memory(mat.data)); // wrap the existing cv::Mat buffer, no copy
Update for ACL 18.05 or newer
You need to implement the interface defined in IMemoryRegion.h.
I have created a gist for that: link
When I try to read an image from a file, the Mat.Data array is always null after loading. But when I inspect the Mat object in the debugger, there is a byte array containing all the data from the image.
Mat image1 = CvInvoke.Imread("minion.bmp", Emgu.CV.CvEnum.LoadImageType.AnyDepth);
Do you have any idea why?
I recognize this question is super old, but I hit the same issue and I suspect the answer lies in the Emgu wiki. Specifically:
Accessing the pixels from Mat
Unlike the Image<,> class, where memory is pre-allocated and fixed, the memory of a Mat can be automatically re-allocated by OpenCV function calls. We cannot pre-allocate managed memory and assume the same memory is used throughout the lifetime of the Mat object. As a result, the Mat class does not contain a Data property like the Image<,> class, where the pixels can be accessed through a managed array. To access the data of the Mat, there are a few possible choices.
The easy and safe way, at the cost of an additional memory copy
The first option is to copy the Mat to an Image<,> object using the Mat.ToImage function. e.g.
Image<Bgr, Byte> img = mat.ToImage<Bgr, Byte>();
The pixel data can then be accessed using the Image<,>.Data property.
You can also convert the Mat to a Matrix<> object. Assuming the Mat contains 8-bit data:
Matrix<Byte> matrix = new Matrix<Byte>(mat.Rows, mat.Cols, mat.NumberOfChannels);
mat.CopyTo(matrix);
Note that you should create the Matrix<> with a type matching the Mat object. If the Mat contains 32-bit floating point values, you should replace Matrix<Byte> in the code above with Matrix<float>. The pixel data can then be accessed using the Matrix<>.Data property.
The fastest way, with no memory copy required. Be cautious!
The second option is a little bit tricky, but will provide the best performance. This will usually require you to know the size of the Mat object before it is created. So you can allocate managed data array, and create the Mat object by forcing it to use the pinned managed memory. e.g.
//load your 3 channel bgr image here
Mat m1 = ...;
//3 channel bgr image data; if it is single channel, the size should be m1.Width * m1.Height
byte[] data = new byte[m1.Width * m1.Height * 3];
GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
using (Mat m2 = new Mat(m1.Size, DepthType.Cv8U, 3, handle.AddrOfPinnedObject(), m1.Width * 3))
    CvInvoke.BitwiseNot(m1, m2);
handle.Free();
At this point the data array contains the pixel data of the inverted image. Note that if the Mat m2 was allocated with the wrong size, the data[] array will contain all 0s and no exception will be thrown, so be really careful when performing the above operations.
TL;DR: You can't use the Data object in the way you're hoping to (as of version 3.2 at least). You must copy it to another object which allows use of the Data object.
I want to copy the data from a cv::Mat to an std::vector. I could obviously go through the entire Mat and copy each value one by one, but I was hoping that there might be an easier way using copyTo, clone, or some sort of pointer manipulation.
Does anyone have any insight on this problem?
Thanks
Assuming your Mat is CV_8UC1, you can do the following.
cv::Mat mat(nrows, ncols, CV_8UC1);
...
std::vector<unsigned char> vec;
vec.assign(mat.data, mat.data + nrows * ncols); // assumes mat is continuous
For a multi-channel image or a different pixel type, you should be able to generalize the code above easily (e.g. use mat.total() * mat.elemSize() for the byte count).
Here is what worked for me. I had a Mat matVec2f of size Nx1 and type Vec2f, and a vector of size N. The following code copies the Mat's data into the vector. I believe this should work equally well for data types other than Vec2f.
int N = 10;
vector<Point2f> vec(N);
matVec2f.copyTo(Mat(vec, false)); // Mat(vec, false) wraps vec's memory, so copyTo writes straight into the vector
I have a SQLite .db file which contains data of string, int, and blob types. The blob data consists of images. Can anyone tell me how to convert the blob data into images and then display them in a list field? Any code snippets or tutorials would be a great help.
You should be able to read the BLOB data into a byte[] array b. Once you have that, a simple call to
EncodedImage icon = EncodedImage.createEncodedImage(b, 0, -1);
will give you an image, which you can later convert to a bitmap with icon.getBitmap().