How do I convert the bitmap format of a UIImage? - ios

I need to convert my bitmap from the normal camera format, kCVPixelFormatType_32BGRA, to kCVPixelFormatType_24RGB so it can be consumed by a third-party library.
How can this be done?
My C# code, which tries to do the conversion directly on the byte data, looks like this:
byte[] sourceBytes = UIImageTransformations.BytesFromImage(sourceImage);
// destination buffer is RGB: 3 bytes per 4-byte BGRA source pixel
byte[] finalBytes = new byte[(int)(sourceBytes.Length * .75)];
int length = sourceBytes.Length;
int finalByte = 0;
for (int i = 0; i < length; i += 4)
{
    byte blue = sourceBytes[i];
    byte green = sourceBytes[i + 1];
    byte red = sourceBytes[i + 2];
    finalBytes[finalByte] = red;
    finalBytes[finalByte + 1] = green;
    finalBytes[finalByte + 2] = blue;
    finalByte += 3;
}
UIImage finalImage = UIImageTransformations.ImageFromBytes(finalBytes);
However, I'm finding that the length of sourceBytes is not always divisible by 4, which doesn't make any sense to me.
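One common pitfall (an assumption here, not something stated in the question) is that the buffer returned by BytesFromImage is not a tightly packed 32-bit BGRA array: CGImage backing stores often pad each row so that it occupies bytesPerRow >= width * 4 bytes, and a helper may also hand back a different pixel layout altogether, so walking the whole array in fixed 4-byte steps is fragile. A minimal stride-aware sketch of the same conversion, where width, height and srcBytesPerRow are hypothetical inputs that would come from the source CGImage (e.g. CGImage.Width, CGImage.Height, CGImage.BytesPerRow):
// Stride-aware BGRA -> RGB conversion (sketch; width/height/srcBytesPerRow are assumed inputs)
static byte[] BgraToRgb(byte[] src, int width, int height, int srcBytesPerRow)
{
    byte[] dst = new byte[width * height * 3];
    int d = 0;
    for (int y = 0; y < height; y++)
    {
        int rowStart = y * srcBytesPerRow;   // rows may be padded beyond width * 4
        for (int x = 0; x < width; x++)
        {
            int s = rowStart + x * 4;        // one BGRA pixel
            dst[d++] = src[s + 2];           // R
            dst[d++] = src[s + 1];           // G
            dst[d++] = src[s];               // B
        }
    }
    return dst;
}
Whatever ImageFromBytes does internally, it will presumably also need the width, height and the new 3-bytes-per-pixel layout to rebuild the UIImage.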

Related

How to calculate the perimeter of a binary image using OpenCV 4.2 in C++

I want to calculate the perimeter of a white blob in a 512x512 binary image. The image will have only one blob. I used the following code earlier with OpenCV 3, but somehow it doesn't work in OpenCV 4.2: IplImage
is deprecated in the latest version, and I cannot pass a Mat object directly to cvFindContours. I am new to OpenCV and I don't know how this works. Other related questions about perimeter are still unanswered.
To summarize, the following works in OpenCV 3 but does not work in the current OpenCV version (4.2).
int getPerimeter(unsigned char* inImagePtr, int inW, int inH)
{
    int sumEven = 0; int sumOdd = 0;
    int sumCorner = 0; int prevCode = 0;
    //create a mat input Image
    cv::Mat inImage(inH, inW, CV_8UC1, inImagePtr);
    //create four connected structuring element
    cv::Mat element = cv::Mat::zeros(3, 3, CV_8UC1);
    element.data[1] = 1; element.data[3] = 1;
    element.data[4] = 1; element.data[5] = 1;
    element.data[7] = 1;
    //erode input image
    cv::Mat erodeImage;
    erode(inImage, erodeImage, element);
    //Invert eroded Image
    cv::threshold(erodeImage, erodeImage, 0, 255, THRESH_BINARY_INV);
    //multiply with original binary Image to get the edge Image
    cv::Mat edge = erodeImage.mul(inImage);
    //Get chain code of the blob
    CvChain* chain = 0;
    CvMemStorage* storage = 0;
    storage = cvCreateMemStorage(0);
    auto temp = new IplImage(edge);
    cvFindContours(temp, storage, (CvSeq**)(&chain), sizeof(*chain), CV_RETR_EXTERNAL, CV_CHAIN_CODE);
    delete temp;
    for (; chain != NULL; chain = (CvChain*)chain->h_next)
    {
        CvSeqReader reader;
        int i, total = chain->total;
        cvStartReadSeq((CvSeq*)chain, &reader, 0);
        for (i = 0; i < total; i++)
        {
            char code;
            CV_READ_SEQ_ELEM(code, reader);
            if (code % 2 == 0)
                sumEven++;
            else
                sumOdd++;
            if (i > 0) {
                if (code != prevCode)
                    sumCorner++;
            }
            prevCode = code;
        }
    }
    float perimeter = (float)sumEven*0.980 + (float)sumOdd*1.406 - (float)sumCorner*0.091;
    return (roundf(perimeter));
}
This worked just fine for me!
int getPerimeter(unsigned char* inImagePtr, int inW, int inH) {
    // create a mat input Image
    cv::Mat inImage(inH, inW, CV_8UC1, inImagePtr);
    // create four connected structuring element
    cv::Mat element = cv::Mat::zeros(3, 3, CV_8UC1);
    element.data[1] = 1;
    element.data[3] = 1;
    element.data[4] = 1;
    element.data[5] = 1;
    element.data[7] = 1;
    // erode input image
    cv::Mat erodeImage;
    erode(inImage, erodeImage, element);
    // Invert eroded Image
    cv::threshold(erodeImage, erodeImage, 0, 255, THRESH_BINARY_INV);
    // multiply with original binary Image to get the edge Image
    cv::Mat edge = erodeImage.mul(inImage);
    vector<vector<Point>> contours;
    findContours(edge, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE); // Retrieve only external contour
    int preValue[2];
    int nextValue[2];
    int sumEven = 0;
    int sumOdd = 0;
    //vector<Point>::iterator itr;
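    // Walk the simplified contour: consecutive points sharing an x or y coordinate
    // form horizontal/vertical runs (sumEven); anything else is treated as a diagonal
    // run (sumOdd). The weights applied below turn those counts into a perimeter estimate.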
    for (int ii = 0; ii < contours[0].size(); ii++) {
        Point pt = contours[0].at(ii);
        preValue[0] = pt.x;
        preValue[1] = pt.y;
        if (ii != contours[0].size() - 1) {
            Point pt_next = contours[0].at(ii + 1);
            nextValue[0] = pt_next.x;
            nextValue[1] = pt_next.y;
        } else {
            Point pt_next = contours[0].at(0);
            nextValue[0] = pt_next.x;
            nextValue[1] = pt_next.y;
        }
        if ((preValue[0] == nextValue[0]) or (preValue[1] == nextValue[1])) {
            sumEven = sumEven + abs(nextValue[0] - preValue[0]) + abs(nextValue[1] - preValue[1]);
        } else {
            sumOdd = sumOdd + abs(nextValue[0] - preValue[0]);
        }
    }
    int sumCorner = contours[0].size() - 1;
    float perimeter = round(sumEven * 0.980 + sumOdd * 1.406 - sumCorner * 0.091);
    return (roundf(perimeter));
}

How do I convert ByteArray from ImageMetaData() to Bitmap?

I have this code:
Frame frame = mSession.update();
Camera camera = frame.getCamera();
...
bytes=frame.getImageMetadata().getByteArray(0);
System.out.println("Byte Array "+frame.getImageMetadata().getByteArray(0));
Bitmap bmp = BitmapFactory.decodeByteArray(bytes,0,bytes.length);
System.out.println(bmp);
When I print the Bitmap, I get a null object. I'm trying to get the image from the camera; that's the reason I'm converting the byte array to a Bitmap. If there's an alternative way, it would also be helpful.
Thank You.
The ImageMetaData describes the background image, but does not actually contain the image itself.
If you want to capture the background image as a Bitmap, you should look at the computervision sample which uses a FrameBufferObject to copy the image to a byte array.
I've tried something similar. It works, but I don't recommend this approach: it is slow because of the nested loops.
CameraImageBuffer inputImage;
final Bitmap bmp = Bitmap.createBitmap(inputImage.width, inputImage.height, Bitmap.Config.ARGB_8888);
int width = inputImage.width;
int height = inputImage.height;
int frameSize = width*height;
// Write Bytebuffer to byte[]
byte[] imageBuffer= new byte[inputImage.buffer.remaining()];
inputImage.buffer.get(imageBuffer);
int[] rgba = new int[frameSize];
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        // mask with 0xff so that signed Java bytes don't corrupt the channel values
        int r = imageBuffer[(i * width + j) * 4 + 0] & 0xff;
        int g = imageBuffer[(i * width + j) * 4 + 1] & 0xff;
        int b = imageBuffer[(i * width + j) * 4 + 2] & 0xff;
        rgba[i * width + j] = 0xff000000 | (b << 16) | (g << 8) | r;
    }
}
bmp.setPixels(rgba, 0, width, 0, 0, width, height);
The ByteBuffer is converted to an array of packed pixel values, which is then written to the Bitmap. CameraImageBuffer is the class provided in the computervision sample app.
You may not be able to get a bitmap from the image metadata. Use the approach below: override the onDrawFrame method of the surface view renderer.
@Override public void onDrawFrame(GL10 gl) {
    int w = 1080;
    int h = 1080;
    int b[] = new int[w * h];
    int bt[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
    // glReadPixels returns the frame bottom-up with red and blue swapped relative to
    // ARGB_8888, so swap the two channels and flip the rows while copying.
    for (int i = 0, k = 0; i < h; i++, k++) {
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;
            int pr = (pix << 16) & 0x00ff0000;
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - k - 1) * w + j] = pix1;
        }
    }
    final Bitmap mBitmap = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
    runOnUiThread(new Runnable() {
        @Override public void run() {
            image_test.setImageBitmap(mBitmap);
        }
    });
}

iOS: Compare 2 images that are 80%-90% the same?

I want to compare two images that are 80%-90% the same. For example, if in the first image I'm standing in the middle of the frame and in the second I'm standing a little away from the centre in the same pose, the comparison should not match; likewise if one image is blurred and the other is sharp, it should not return true. But if one image is a little darker and the other is brighter, it must still return true.
How can I do this with a hashing technique, or any other technique? Any help will be appreciated a lot. Thank you.
One of the techniques I'm trying is the code below, but it's not working for me:
-(CGFloat)compareImage:(UIImage *)imgPre capturedImage:(UIImage *)imgCaptured
{
    int colorDiff;
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imgPre.CGImage));
    int myWidth = (int)CGImageGetWidth(imgPre.CGImage) / 2;
    int myHeight = (int)CGImageGetHeight(imgPre.CGImage) / 2;
    const UInt8 *pixels = CFDataGetBytePtr(pixelData);
    int bytesPerPixel_ = 4;
    int pixelStartIndex = (myWidth + myHeight) * bytesPerPixel_;
    UInt8 alphaVal = pixels[pixelStartIndex];
    UInt8 redVal = pixels[pixelStartIndex + 1];
    UInt8 greenVal = pixels[pixelStartIndex + 2];
    UInt8 blueVal = pixels[pixelStartIndex + 3];
    UIColor *color = [UIColor colorWithRed:(redVal/255.0f) green:(greenVal/255.0f) blue:(blueVal/255.0f) alpha:(alphaVal/255.0f)];
    NSLog(@"color of image=%@", color);
    NSLog(@"color of R=%hhu/G=%hhu/B=%hhu", redVal, greenVal, blueVal);
    CFDataRef pixelDataCaptured = CGDataProviderCopyData(CGImageGetDataProvider(imgCaptured.CGImage));
    int myWidthCaptured = (int)CGImageGetWidth(imgCaptured.CGImage) / 2;
    int myHeightCaptured = (int)CGImageGetHeight(imgCaptured.CGImage) / 2;
    const UInt8 *pixelsCaptured = CFDataGetBytePtr(pixelDataCaptured);
    int pixelStartIndexCaptured = (myWidthCaptured + myHeightCaptured) * bytesPerPixel_;
    UInt8 alphaValCaptured = pixelsCaptured[pixelStartIndexCaptured];
    UInt8 redValCaptured = pixelsCaptured[pixelStartIndexCaptured + 1];
    UInt8 greenValCaptured = pixelsCaptured[pixelStartIndexCaptured + 2];
    UInt8 blueValCaptured = pixelsCaptured[pixelStartIndexCaptured + 3];
    UIColor *colorCaptured = [UIColor colorWithRed:(redValCaptured/255.0f) green:(greenValCaptured/255.0f) blue:(blueValCaptured/255.0f) alpha:(alphaValCaptured/255.0f)];
    NSLog(@"color of captured image=%@", colorCaptured);
    NSLog(@"color of captured image R=%hhu/G=%hhu/B=%hhu", redValCaptured, greenValCaptured, blueValCaptured);
    colorDiff = sqrt((redVal-redValCaptured)*(redVal-redValCaptured) + (greenVal-greenValCaptured)*(greenVal-greenValCaptured) + (blueVal-blueValCaptured)*(blueVal-blueValCaptured));
    return colorDiff;
}
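The single-pixel comparison above only ever looks at one sample per image, so it cannot behave like an overall similarity check. For the hashing technique the question mentions, a perceptual hash such as an average hash ("aHash") is a common starting point: scale the image down to 8x8, convert it to grayscale, threshold each pixel against the mean, and compare the resulting 64-bit fingerprints with a Hamming distance. The sketch below is only an illustration (written in C# like the other managed examples on this page, not Objective-C), and GetGrayscalePixels is a hypothetical helper, not an existing API:
// Average hash: 64-bit fingerprint of an image scaled down to 8x8 grayscale.
static ulong AverageHash(byte[] gray8x8)   // 64 grayscale values, 0-255
{
    int sum = 0;
    foreach (byte p in gray8x8) sum += p;
    int mean = sum / 64;                   // thresholding against the mean largely
    ulong hash = 0;                        // cancels out uniform brightness changes
    for (int i = 0; i < 64; i++)
        if (gray8x8[i] >= mean) hash |= 1UL << i;
    return hash;
}

static int HammingDistance(ulong a, ulong b)
{
    ulong x = a ^ b;
    int count = 0;
    while (x != 0) { count++; x &= x - 1; }   // clear the lowest set bit
    return count;
}

// Usage (GetGrayscalePixels is hypothetical):
// ulong h1 = AverageHash(GetGrayscalePixels(imgPre));
// ulong h2 = AverageHash(GetGrayscalePixels(imgCaptured));
// bool probablySame = HammingDistance(h1, h2) <= 10;
Note that an average hash tolerates moderate brightness differences (which the question asks for), but it will usually still match a blurred copy of the same photo; telling a blurred image apart from a sharp one needs an extra sharpness measure on top of the hash.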

Convert matrix to UIImage

I need to convert a matrix representing a b/w image to UIImage.
For example:
A matrix like this (just a representation); this image would be the symbol '+':
1 0 1
0 0 0
1 0 1
This matrix represents an image in black and white, where black is 0 and white is 1. I need to convert this matrix to a UIImage. In this case the width would be 3 and the height would be 3.
I use this method to create an image for my Game Of Life app. The advantage over drawing to a graphics context is that this is ridiculously fast.
This was all written a long time ago, so it's a bit messier than what I might do now, but the method would stay the same. For some reason I defined these outside the method...
{
    unsigned int length_in_bytes;
    unsigned char *cells;
    unsigned char *temp_cells;
    unsigned char *changes;
    unsigned char *temp_changes;
    GLubyte *buffer;
    CGImageRef imageRef;
    CGDataProviderRef provider;
    int ar, ag, ab, dr, dg, db;
    float arf, agf, abf, drf, dgf, dbf, blah;
}
You won't need all of these for the image.
The method itself...
- (UIImage*)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
    //translate colours into rgb components
    if ([deadColor isEqual:[UIColor whiteColor]]) {
        dr = dg = db = 255;
    } else if ([deadColor isEqual:[UIColor blackColor]]) {
        dr = dg = db = 0;
    } else {
        [deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
        dr = drf * 255;
        dg = dgf * 255;
        db = dbf * 255;
    }
    if ([aliveColor isEqual:[UIColor whiteColor]]) {
        ar = ag = ab = 255;
    } else if ([aliveColor isEqual:[UIColor blackColor]]) {
        ar = ag = ab = 0;
    } else {
        [aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
        ar = arf * 255;
        ag = agf * 255;
        ab = abf * 255;
    }
    // dr = 255, dg = 255, db = 255;
    // ar = 0, ag = 0, ab = 0;
    //create bytes of image from the cell map
    //'buffer' is declared outside the method; it must point to width * height * 4 bytes
    if (buffer == NULL)
        buffer = (GLubyte *)malloc(self.width * self.height * 4);
    int yRef, cellRef;
    unsigned char *cell_ptr = cells;
    for (int y = 0; y < self.height; y++)
    {
        yRef = y * (self.width * 4);
        int x = 0;
        do
        {
            cellRef = yRef + 4 * x;
            if (*cell_ptr & 0x01) {
                //alive colour
                buffer[cellRef] = ar;
                buffer[cellRef + 1] = ag;
                buffer[cellRef + 2] = ab;
                buffer[cellRef + 3] = 255;
            } else {
                //dead colour
                buffer[cellRef] = dr;
                buffer[cellRef + 1] = dg;
                buffer[cellRef + 2] = db;
                buffer[cellRef + 3] = 255;
            }
            cell_ptr++;
        } while (++x < self.width);
    }
    //create image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //wrap the pixel buffer in a data provider so CGImageCreate can read it
    provider = CGDataProviderCreateWithData(NULL, buffer, self.width * self.height * 4, NULL);
    // render the byte array into an image ref
    imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
    // convert image ref to UIImage
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGDataProviderRelease(provider);
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    //return image
    return image;
}
You should be able to adapt this to create an image from your matrix.
In order to convert a matrix to a UIImage:
CGSize size = CGSizeMake(columns, lines);
UIGraphicsBeginImageContextWithOptions(size, YES, 0);
for (int i = 0; i < lines; i++)
{
    for (int j = 0; j < columns; j++)
    {
        // Choose color to draw
        if ( matrixDraw[i * columns + j] == 1 ) {
            [[UIColor whiteColor] setFill];
        } else {
            // Draw black pixel
            [[UIColor blackColor] setFill];
        }
        // Draw just one pixel at column j, line i
        UIRectFill(CGRectMake(j, i, 1, 1));
    }
}
// Create UIImage with the current context that we have just created
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically what we are doing is:
Create a context with the size of our image (columns wide, lines tall).
Loop over each pixel and check its value: black is 0 and white is 1, so depending on the value we set the fill color.
The most important function:
UIRectFill(CGRectMake(j, i, 1, 1));
This function lets us fill a single pixel at position (j, i), with a width and height of 1 to fill just that one pixel.
Finally we create a UIImage from the current context and end the image context.
Hope it helps someone!

How to convert an int* (BGR image) from C++ to a Unity3D texture?

I have a BGR image in uchar format from OpenCV C++.
The function looks like int* texture(int* data, int width, int height); it processes the image on the C++ side and returns a pointer to the data. How do I convert this data into a texture in Unity, i.e. make it available to be used as a texture? I don't want to write it to a file. Please help.
Code snippet (I am using DLLs):
public static WebCamTexture webCamTexture;
private Color32[] data;
private int[] imageData;
private int[] imdat;
void Start () {
    ....
    data = new Color32[webCamTexture.width * webCamTexture.height];
    imageData = new int[data.Length * 3];
}
void Update()
{
    webCamTexture.GetPixels32(data);
    // Convert the Color32[] into an int[] in BGR order
    for (int i = 0; i < data.Length; ++i)
    {
        imageData[i * 3] = (int)data[i].b;
        imageData[i * 3 + 1] = (int)data[i].g;
        imageData[i * 3 + 2] = (int)data[i].r;
    }
    // this is the function called from the DLL
    imdat = texture(imageData, webCamTexture.width, webCamTexture.height);
}
And the DLL end looks like:
char *tmp;
int* texture(int* imageData, int width, int height)
{
    int n = width * height * 3;
    tmp = new char[n];
    // imageData is flipped vertically here and copied into tmp (3-channel image)
    for (int i = 0; i < (width * 3); ++i)
        for (int j = 0; j < height; ++j)
            tmp[i + j * (width * 3)] = (char)imageData[i + (height - j - 1) * (width * 3)];
    return (int*)tmp;
}
I'm not sure what format your texture is in, but if you can convert it into a byte[] you can use Texture2D.LoadImage(byte[]) to turn it into a working texture.
You should be able to achieve what you want with BitConverter.GetBytes() and Texture2D.LoadImage(). Make sure you take special note of the image format restrictions in the Unity manual page for that method.
I'm not sure how the binding between your C++-land and C#-land code works, but you should be able to do something a little like this:
/* iImportedTexture = [Your c++ function here]; */
byte[] bImportedTexture = BitConverter.GetBytes(iImportedTexture);
Texture2D importedTexture = new Texture2D(2, 2);
importedTexture.LoadImage(bImportedTexture);
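If the C++ side returns raw 24-bit pixel data rather than a PNG/JPG-encoded buffer, Texture2D.LoadImage will not accept it, since it only decodes encoded image files; in that case Texture2D.LoadRawTextureData is a better fit. A minimal sketch, assuming (this is an assumption, not part of the question) that the native buffer has already been copied into a byte[] of width * height * 3 bytes in RGB order, bottom row first:
// Build a texture from a raw RGB24 buffer returned by native code (sketch).
Texture2D TextureFromRawRgb(byte[] rawRgb, int width, int height)
{
    Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
    tex.LoadRawTextureData(rawRgb);   // copies the raw pixels, no decoding involved
    tex.Apply();                      // upload the pixel data to the GPU
    return tex;
}
Copying the data behind the returned pointer into that byte[] can be done with System.Runtime.InteropServices.Marshal.Copy if the native function is declared to return an IntPtr, and the B and R channels would still need swapping if the native side keeps BGR order.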
