I have a SQLite .db file containing string, int, and BLOB columns. The BLOB data consists of images. Can anyone tell me how to convert the BLOB data into images and then display them in a list field? Any code snippets or tutorials would be of great help...
You should be able to read the BLOB data into a byte[] array b. Once you have that, a simple call to
EncodedImage icon = EncodedImage.createEncodedImage(b, 0, -1);
will give you an image, which you can later convert to a Bitmap with icon.getBitmap().
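For completeness, here is the same decode step in plain Java SE, since the BlackBerry EncodedImage call is device-specific: ImageIO.read plays the role of createEncodedImage. The BLOB is simulated here by encoding a small image to PNG bytes first; in your app the bytes would come from the SQLite BLOB column.

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class BlobToImage {
    // Decode a BLOB (raw PNG/JPEG bytes) into an image object.
    // On BlackBerry you'd call EncodedImage.createEncodedImage(b, 0, -1)
    // instead; this is the plain Java SE equivalent for illustration.
    public static BufferedImage decodeBlob(byte[] blob) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(blob));
    }

    public static void main(String[] args) throws IOException {
        // Simulate a BLOB by encoding a small image to PNG bytes first.
        BufferedImage src = new BufferedImage(4, 3, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(src, "png", out);
        byte[] blob = out.toByteArray();

        BufferedImage decoded = decodeBlob(blob);
        System.out.println(decoded.getWidth() + "x" + decoded.getHeight()); // 4x3
    }
}
```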
I want to store the raw bytes of a picture captured with AVCaptureSession, but I have only seen examples using UIImagePNGRepresentation and UIImageJPEGRepresentation. I want to store the raw data bytes in the app's Documents directory on disk so they can be reopened later and converted into a UIImage for post-processing. Is there a way to do this?
for example:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
Can I store rawImageBytes in documents to open it later?
Sure you can. Create an NSData object containing your bytes and save that using one of the NSData file saving methods (e.g. writeToURL:atomically:.)
You'll need to know the number of bytes in your pixelBuffer though. It looks like you should use CVPixelBufferGetDataSize to get the number of bytes.
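The round trip itself is trivial once you have the byte pointer and the length; a language-neutral sketch in Java (chosen only for illustration, since NSData and CVPixelBuffer are Apple APIs) looks like this:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class RawBytesRoundTrip {
    // Save raw pixel bytes to disk and load them back unchanged.
    // On iOS the save step is NSData's writeToURL:atomically: and the byte
    // count comes from CVPixelBufferGetDataSize; this just shows the idea.
    public static void saveRaw(Path file, byte[] rawImageBytes) throws Exception {
        Files.write(file, rawImageBytes);
    }

    public static byte[] loadRaw(Path file) throws Exception {
        return Files.readAllBytes(file);
    }

    public static void main(String[] args) throws Exception {
        byte[] rawImageBytes = {10, 20, 30, 40, 50, 60}; // pretend pixel data
        Path file = Files.createTempFile("frame", ".raw");
        saveRaw(file, rawImageBytes);
        byte[] reloaded = loadRaw(file);
        System.out.println(Arrays.equals(rawImageBytes, reloaded)); // true
        Files.delete(file);
    }
}
```

Remember to also store the width, height, and bytes-per-row somewhere (a small header or a sidecar file), since raw bytes alone are not enough to rebuild a UIImage later.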
I am working with QccPack for hyperspectral image compression, which uses the .icb extension.
How can I convert from ENVI .hdr to .icb in order to work with QccPack?
I just had a quick look at the QccPack documentation (the first thing I found via Google; I guess this is what you are talking about):
http://qccpack.sourceforge.net/Documentation/QccIMGImageCubeFree.3.html
.icb is a file format that stores "image cubes", which the documentation describes as a data structure for saving volumetric image data.
An ENVI .hdr file, by contrast, stores only metadata for an image whose actual pixel data is kept in a separate file.
You cannot convert image metadata into image data; to produce an .icb you would also need the accompanying binary data file that the .hdr describes.
I have a 16-bit grayscale image. I have tried both .png and .tif; .tif works somewhat. I have the following code:
CGDataProviderRef l_Img_Data_Provider = CGDataProviderCreateWithFilename( [m_Name cStringUsingEncoding:NSASCIIStringEncoding] );
CGImageRef l_CGImageRef = CGImageCreate( m_Width, m_Height, 16, 16, m_Width * 2,
CGColorSpaceCreateDeviceGray(), kCGBitmapByteOrder16Big, l_Img_Data_Provider, NULL, false, kCGRenderingIntentDefault );
test_Image = [[UIImage alloc] initWithCGImage:l_CGImageRef];
[_test_Image_View setImage:test_Image];
This results in the following image:
faulty gradient
As you can see, there seems to be an issue at the beginning of the image (could it be trying to use the byte data from the header?), and the image is offset by about a fifth (a little harder to see: look at the left and the right; there is a faint line about a fifth of the way in from the right).
My goal is to convert this to a Metal texture and use it from there, where I'm also having issues. It seems like a byte-order issue, but maybe we can come back to that.
dave
CGDataProvider doesn't know about the format of the data that it stores. It is just meant for handling generic data:
"The CGDataProvider header file declares a data type that supplies
Quartz functions with data. Data provider objects abstract the
data-access task and eliminate the need for applications to manage
data through a raw memory buffer."
CGDataProvider
Because CGDataProvider is generic, you must describe the format of the image data yourself via the CGImageCreate parameters. PNGs and JPGs have their own CGImageCreateWith... functions (CGImageCreateWithPNGDataProvider and CGImageCreateWithJPEGDataProvider) for handling encoded data.
The CGImageCreate parameters in your example correctly describe a raw 16-bit grayscale byte format, but say nothing about TIF encoding, so I would guess you are right that the corrupted pixels you are seeing come from the file headers.
There may be other ways to load a 16-bit grayscale image on iOS, but to use that method (or the very similar Metal one) you would need to parse the image bytes out of the TIF file and pass those in, or store and parse the image data some other way.
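To make the parsing step concrete, here is the same idea sketched in Java (chosen only for illustration): building a 16-bit grayscale image from headerless raw bytes, big-endian to match kCGBitmapByteOrder16Big. Once the TIF header is stripped away, width * height * 2 bytes in this layout is exactly what CGImageCreate (or a Metal texture upload) expects:

```java
import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Gray16FromRaw {
    // Build a 16-bit grayscale image from headerless raw pixel bytes.
    // Big-endian shorts, to mirror kCGBitmapByteOrder16Big.
    public static BufferedImage fromRaw(byte[] raw, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN);
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                img.getRaster().setSample(x, y, 0, buf.getShort() & 0xFFFF);
        return img;
    }

    public static void main(String[] args) {
        // 2x1 image: pixel values 0x1234 and 0xFFFF.
        byte[] raw = {0x12, 0x34, (byte) 0xFF, (byte) 0xFF};
        BufferedImage img = fromRaw(raw, 2, 1);
        System.out.println(img.getRaster().getSample(0, 0, 0)); // 4660 (0x1234)
        System.out.println(img.getRaster().getSample(1, 0, 0)); // 65535
    }
}
```

If the bytes were little-endian instead, the image would look like noise rather than a gradient, which is one quick way to diagnose the byte-order issue you mention.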
Currently I'm using OpenCV Java to extract image features and store them in an HBase table. The problem is that the features come back as Mat or MatOfKeyPoint objects in OpenCV, while inserting data into an HBase table requires byte[].
......
featureDetector.detect(trainImages, trainKeypoints);
descriptorExtractor.compute(trainImages, trainKeypoints, trainDescriptors);
//Save to Hbase
Put put = new Put(key.getBytes());
put.add(family, keypoints, trainKeypoints);   // ??? trainKeypoints is MatOfKeyPoint, not byte[]
put.add(family, descriptors, trainDescriptors); // ??? trainDescriptors is Mat, not byte[]
........
Anyone who knows a good solution for this, please help me.
Thanks.
If MatOfKeyPoint and Mat are serializable/deserializable to byte[], then you can just do that.
To store a MatOfKeyPoint or Mat, get its byte[] representation and write those bytes to HBase.
To read the data back, read the byte[] from HBase and rebuild the corresponding object from it.
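A minimal sketch of that round trip, using a float[] to stand in for a CV_32F descriptor Mat. The OpenCV calls that copy data in and out of a real Mat (mat.get(0, 0, data) and mat.put(0, 0, data)) are assumed here, not shown; the point is the byte[] packing for the HBase cell:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class MatBytes {
    // Pack a rows x cols float matrix (e.g. descriptors, which are CV_32F
    // Mats) into one byte[] suitable for an HBase cell value.
    public static byte[] toBytes(int rows, int cols, float[] data) {
        ByteBuffer buf = ByteBuffer.allocate(8 + 4 * data.length);
        buf.putInt(rows).putInt(cols);        // small header: dimensions
        for (float f : data) buf.putFloat(f); // then the raw values
        return buf.array();
    }

    // Reverse step: read dimensions, then the values, from the cell bytes.
    public static float[] fromBytes(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int rows = buf.getInt(), cols = buf.getInt();
        float[] data = new float[rows * cols];
        for (int i = 0; i < data.length; i++) data[i] = buf.getFloat();
        return data;
    }

    public static void main(String[] args) {
        float[] descriptors = {1.5f, -2.0f, 3.25f, 0.5f}; // pretend 2x2 Mat
        byte[] cell = toBytes(2, 2, descriptors);          // goes into put.add(...)
        System.out.println(Arrays.equals(descriptors, fromBytes(cell))); // true
    }
}
```

MatOfKeyPoint works the same way, except each keypoint is seven floats (x, y, size, angle, response, octave, class_id), so you would pack rows * 7 values per cell.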
I have a database of images that I processed and saved as:
IplImage* database[10000];
database[0]= image1
database[1]= image2
... etc
Now I want to save this database of IplImages. How can I do this? I know I can loop over all the images and save them one by one, but that is not really what I am looking for.
I read about cvSave and cvLoad, which let me save and load in one command, but I get an error when I use them (cvSave("myimagedatabase.xml", &database);). Can you please guide me?
Thank you in advance
If you don't want to save them as individual images, what about using a sequence of images, also known as a movie?
See OpenCV's CvVideoWriter; if you select 0 as the codec, the frames will be saved uncompressed.
What I gather from your question is that you are interested in saving strictly the data to a file and reloading it later, and that you do not need the stored data to be openable in an image-editing program or anything like that.
First, consider the important parts of an IplImage:
* depth
* height
* width
* nChannels
* imageData array
What I would do: in a loop over each IplImage, write each data value into a binary file. Give the file a header noting how many images there are. Using depth, height, width, and nChannels you can compute the size of the imageData array. Assuming all images have the same depth (IPL_DEPTH_8U, for example) this should be easy; if they vary, it can get tricky.
To load the data, simply read the header and binary data back in, and loop through creating new IplImages from that data.
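A minimal sketch of that file layout, assuming 8-bit depth throughout (Java here only for illustration; the per-image fields mirror the IplImage members listed above): a count header, then depth/height/width/nChannels plus the pixel bytes for each image.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class ImageDbFile {
    // Write N raw images to one binary file: a count header, then per image
    // depth/height/width/nChannels followed by the pixel bytes.
    public static void save(File f, int[][] headers, byte[][] pixels) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
            out.writeInt(headers.length);                 // how many images
            for (int i = 0; i < headers.length; i++) {
                for (int v : headers[i]) out.writeInt(v); // depth, height, width, channels
                out.write(pixels[i]);                     // height * width * channels bytes
            }
        }
    }

    // Read the header back and rebuild each image's pixel buffer.
    public static byte[][] load(File f) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            int n = in.readInt();
            byte[][] pixels = new byte[n][];
            for (int i = 0; i < n; i++) {
                int depth = in.readInt(), h = in.readInt(), w = in.readInt(), ch = in.readInt();
                pixels[i] = new byte[h * w * ch];         // assumes 8-bit depth
                in.readFully(pixels[i]);
            }
            return pixels;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("imgdb", ".bin");
        byte[][] px = {{1, 2, 3, 4, 5, 6}, {9, 8, 7}};
        int[][] hdr = {{8, 2, 3, 1}, {8, 1, 3, 1}};       // depth, height, width, channels
        save(f, hdr, px);
        System.out.println(load(f).length); // 2
        f.delete();
    }
}
```

In C you would do the same with fwrite/fread over each IplImage's fields and its imageData pointer; the key point is that the fixed-size header per image lets the loader compute exactly how many pixel bytes to read next.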