iOS - ImageMagick: how to apply ShepardsDistortion to an image

I am new to ImageMagick, and I want to apply a ShepardsDistortion effect to a source image. I have gone through many posts and sites, but I didn't find a way to implement "ShepardsDistortion" on iOS.
MagickWand *mw = NewMagickWand();
MagickSetFormat(mw, "png");
UIImage *sourceImage=[_sourceImgView image];
NSData *imgData=UIImagePNGRepresentation(sourceImage);
MagickReadImageBlob(mw, [imgData bytes], [imgData length]);
Image *image=GetImageFromMagickWand(mw);
DistortImage(image, ShepardsDistortion, , ,);
I have done this much, but I don't know what to pass as arguments to DistortImage(). If anyone knows, please help me.
EDIT:
-(void)distortImage {
    MagickWandGenesis();
    MagickWand *wand;
    MagickBooleanType status;
    wand = NewMagickWand();
    MagickSetFormat(wand, "png");
    status = MagickReadImage(wand, "chess.png");
    // Arguments for Shepards
    double points[8];
    points[0] = 250; // First X point (starting)
    points[1] = 250; // First Y point (starting)
    points[2] = 50;  // First X point (ending)
    points[3] = 150; // First Y point (ending)
    points[4] = 500; // Second X point (starting)
    points[5] = 380; // Second Y point (starting)
    points[6] = 600; // Second X point (ending)
    points[7] = 460; // Second Y point (ending)
    MagickDistortImage(wand, ShepardsDistortion, 8, points, MagickFalse);
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.png"];
    MagickWriteImage(wand, [tempFilePath cStringUsingEncoding:NSASCIIStringEncoding]);
    UIImage *imgObj = [UIImage imageWithContentsOfFile:tempFilePath];
    _resultImgView.image = imgObj;
    //
    // unsigned char *cBlob;
    // size_t data_size;
    // cBlob = MagickGetImageBlob(wand, &data_size);
    // NSData *nsBlob = [NSData dataWithBytes:cBlob length:data_size];
    // UIImage *uiImage = [UIImage imageWithData:nsBlob];
    // _resultImgView.image = uiImage;
    MagickWriteImage(wand, "out.png");
    wand = DestroyMagickWand(wand);
    MagickWandTerminus();
}
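One note on the file paths above (an assumption about the project setup, not something stated in the question): on iOS, bare filenames such as "chess.png" or "out.png" generally do not resolve to readable or writable locations. Reading the bundled resource explicitly would look something like this:

    // Resolve the bundled resource path before handing it to MagickReadImage
    NSString *chessPath = [[NSBundle mainBundle] pathForResource:@"chess" ofType:@"png"];
    status = MagickReadImage(wand, [chessPath UTF8String]);

Writes should likewise go to a writable directory such as NSTemporaryDirectory(), as the tempFilePath line above already does.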

This might help:
MagickWandGenesis();
magick_wand = NewMagickWand();
double points[24];
points[0] = 250;
points[1] = 250;
points[2] = 50;
points[3] = 150;
points[4] = 0;
points[5] = 0;
points[6] = 0;
points[7] = 0;
points[8] = self.frame.size.width;
points[9] = 0;
points[10] = self.frame.size.width;
points[11] = 0;
points[12] = self.frame.size.width;
points[13] = self.frame.size.height;
points[14] = self.frame.size.width;
points[15] = self.frame.size.height;
points[16] = self.frame.size.width;
points[17] = self.frame.size.height;
points[18] = self.frame.size.width;
points[19] = self.frame.size.height;
points[20] = 0;
points[21] = self.frame.size.height;
points[22] = 0;
points[23] = self.frame.size.height;
NSData *dataObject = UIImagePNGRepresentation([UIImage imageNamed:@"Imagemagick-logo.png"]); // or UIImageJPEGRepresentation([imageViewButton imageForState:UIControlStateNormal], 90);
MagickBooleanType status;
status = MagickReadImageBlob(magick_wand, [dataObject bytes], [dataObject length]);
if (status == MagickFalse) {
    ThrowWandException(magick_wand);
}
// posterize the image: this filter uses a configuration file, which means that everything in IM should be working correctly
status = MagickDistortImage(magick_wand, ShepardsDistortion, 24, points, MagickFalse);
//status = MagickOrderedPosterizeImage(magick_wand, "h8x8o");
if (status == MagickFalse) {
    ThrowWandException(magick_wand);
}
size_t my_size;
unsigned char *my_image = MagickGetImageBlob(magick_wand, &my_size);
NSData *data = [[NSData alloc] initWithBytes:my_image length:my_size];
my_image = (unsigned char *) MagickRelinquishMemory(my_image); // the blob buffer belongs to ImageMagick, so release it through the wand API
magick_wand = DestroyMagickWand(magick_wand);
MagickWandTerminus();
UIImage *image = [[UIImage alloc] initWithData:data];
[data release];
[imageViewButton setImage:image forState:UIControlStateNormal];
[image release];
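Note that ThrowWandException is not part of the MagickWand API itself; it is conventionally defined as a macro in ImageMagick's example programs, roughly along these lines:

    // Typical ThrowWandException macro from the MagickWand example programs
    #define ThrowWandException(wand) \
    { \
        char *description; \
        ExceptionType severity; \
        description = MagickGetException(wand, &severity); \
        (void) fprintf(stderr, "%s %s %lu %s\n", GetMagickModule(), description); \
        description = (char *) MagickRelinquishMemory(description); \
        exit(-1); \
    }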

Arguments are passed to DistortImage as a pointer to a list of doubles, together with the size of that list. Example:
size_t SizeOfPoints = 8;
double Points[SizeOfPoints];
DistortImage(image,
             ShepardsDistortion,
             SizeOfPoints,
             Points,
             MagickFalse,
             NULL);
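A side note on that last argument (not in the original answer): the MagickCore DistortImage call expects an ExceptionInfo pointer there, so passing NULL is unsafe. A minimal sketch:

    // MagickCore variant: DistortImage wants a real ExceptionInfo pointer
    ExceptionInfo *exception = AcquireExceptionInfo();
    Image *distorted = DistortImage(image, ShepardsDistortion,
                                    SizeOfPoints, Points,
                                    MagickFalse, exception);
    if (distorted == (Image *) NULL)
        CatchException(exception);
    exception = DestroyExceptionInfo(exception);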
In your example, you seem to be mixing MagickWand and MagickCore methods, which seems unnecessary and confusing. I would keep this distortion simple and only use MagickWand's MagickDistortImage method. Here's an example in C:
int main(int argc, const char **argv)
{
    MagickWandGenesis();
    MagickWand *wand;
    MagickBooleanType status;
    wand = NewMagickWand();
    status = MagickReadImage(wand, "logo:");
    // Arguments for Shepards
    double points[8];
    // 250x250 -> 50x150
    points[0] = 250; // First X point (starting)
    points[1] = 250; // First Y point (starting)
    points[2] = 50;  // First X point (ending)
    points[3] = 150; // First Y point (ending)
    // 500x380 -> 600x460
    points[4] = 500; // Second X point (starting)
    points[5] = 380; // Second Y point (starting)
    points[6] = 600; // Second X point (ending)
    points[7] = 460; // Second Y point (ending)
    MagickDistortImage(wand, ShepardsDistortion, 8, points, MagickFalse);
    MagickWriteImage(wand, "out.png");
    wand = DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}
Resulting in a distorted translated image (details)
Edit
For iOS, you can use NSTemporaryDirectory (like in this answer), or create an image dynamically using NSData (like in this question).
Example with temporary path:
NSString *tempFilePath = [NSTemporaryDirectory()
    stringByAppendingPathComponent:@"out.png"];
MagickWriteImage(self.wand,
    [tempFilePath cStringUsingEncoding:NSASCIIStringEncoding]);
UIImage *imgObj = [UIImage imageWithContentsOfFile:tempFilePath];
And an example with NSData + blob:
unsigned char * cBlob;
size_t data_size;
cBlob = MagickGetImageBlob(wand, &data_size);
NSData * nsBlob = [NSData dataWithBytes:cBlob length:data_size];
UIImage * uiImage = [UIImage imageWithData:nsBlob];
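One caveat on the blob variant (not in the original answer): the buffer returned by MagickGetImageBlob is allocated by ImageMagick, so once the NSData copy has been made it should be handed back through the wand API:

    cBlob = (unsigned char *) MagickRelinquishMemory(cBlob); // frees the blob returned by MagickGetImageBlob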

Related

OpenCV detect corners of a pattern hidden in an image

I have to create a mobile application able to detect a hidden (standard) pattern in an image.
The purpose is to detect the corners and get some information from the image (like a link).
I'm focusing on iOS for the moment, but I don't know how to implement the pattern and recognize it with OpenCV.
So the first question is: how can I add hidden information to an image?
I found this library that implements steganography to hide some information in an image. Is this the right way?
The next step is to detect the image and its corners with the phone's camera. My idea is to create a standard pattern (like points or lines) to add to a .png image and use template matching to detect, during capture, the area where the pattern is present. But reading online I have seen that this technique is not the best for this problem.
I have successfully implemented the HSV conversion for color tracking following this tutorial, but I don't know how to proceed to the next step.
So, the second question is: how can I recognize a standard pattern and detect its corners in a frame captured with the camera?
This is the code that I use to convert the sample buffer to a UIImage:
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *yBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    uint8_t *cbCrBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
    int bytesPerPixel = 4;
    uint8_t *rgbBuffer = (uint8_t *)malloc(width * height * bytesPerPixel);
    for (int y = 0; y < height; y++) {
        uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
        for (int x = 0; x < width; x++) {
            int16_t y = yBufferLine[x];
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;
            uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];
            int16_t r = (int16_t)roundf( y + cr * 1.4 );
            int16_t g = (int16_t)roundf( y + cb * -0.343 + cr * -0.711 );
            int16_t b = (int16_t)roundf( y + cb * 1.765 );
            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(quartzImage);
    free(rgbBuffer);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}
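A note on the snippet above: clamp is used but not defined in the code shown; presumably it is a small macro along these lines (an assumption, not part of the question):

    // hypothetical clamp macro assumed by the conversion loop above
    #define clamp(v) ((uint8_t)((v) > 255 ? 255 : ((v) < 0 ? 0 : (v))))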
And this is for the HSV conversion:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    @autoreleasepool {
        if (self.isProcessingFrame) {
            return;
        }
        self.isProcessingFrame = YES;
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
        cv::Mat matFrame = [self cvMatFromUIImage:image];
        cv::cvtColor(matFrame, matFrame, CV_BGR2HSV);
        cv::inRange(matFrame, cv::Scalar(0, 100, 100, 0), cv::Scalar(10, 255, 255, 0), matFrame);
        image = [self UIImageFromCVMat:matFrame];
        // Convert to base64
        NSData *imageData = UIImagePNGRepresentation(image);
        NSString *encodedString = [imageData base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];
        self.isProcessingFrame = NO;
    }
}
I hope someone can help, thanks!
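(Not part of the original question, but a rough sketch of one way to do the corner-detection step asked about here: once cv::inRange has produced a binary mask, as in the code above, candidate quadrilaterals and their corners can be extracted with contour approximation.)

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(matFrame, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // note: findContours modifies its input
    for (size_t i = 0; i < contours.size(); i++) {
        std::vector<cv::Point> poly;
        cv::approxPolyDP(contours[i], poly, 0.02 * cv::arcLength(contours[i], true), true);
        if (poly.size() == 4 && cv::contourArea(poly) > 1000.0) {
            // poly now holds the four corner points of a quadrilateral candidate
        }
    }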

iOS remove faces and vertices from .obj file

I use SceneKit to display an .obj file. To get the .obj file I use the SDK from a sensor: the sensor scans a man's arm and returns the .obj file as a result. But when I load the .obj file, there are a lot of unwanted parts (part of a chair, part of the surface, and so on). I need to remove these parts of the object so that, as a result, I see only the man's arm.
So, for example, I want to select a rectangle or a sphere and remove all vertices and faces inside it.
Are there any SDKs or frameworks on iOS to do that?
P.S. I tried Nineveh and some other frameworks, but they can only view objects; they can't edit them.
Edit
I found the code below to manipulate vertices (it merges vertices from different child nodes) in SceneKit. Can I use the same approach to find the vertices I need to remove (those inside my rectangle), or will it be very slow with 65K vertices? (See the sketch after the code below.)
//
// VertexManager.m
// Test
//
#import "VertexManager.h"
#import <SceneKit/SceneKit.h>
#import <GLKit/GLKit.h>
@implementation VertexManager
+ (SCNNode *) flattenNodeHierarchy:(SCNNode *) input
{
SCNNode *result = [SCNNode node];
NSUInteger nodeCount = [[input childNodes] count];
if(nodeCount > 0){
SCNNode *node = [[input childNodes] objectAtIndex:0];
NSArray *vertexArray = [node.geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticVertex];
SCNGeometrySource *vertex = [vertexArray objectAtIndex:0];
SCNGeometryElement *element = [node.geometry geometryElementAtIndex:0]; //todo: support multiple elements
NSUInteger primitiveCount = element.primitiveCount;
NSUInteger newPrimitiveCount = primitiveCount * nodeCount;
size_t elementBufferLength = newPrimitiveCount * 3 * sizeof(int); //nTriangle x 3 vertex * size of int
int* elementBuffer = (int*)malloc(elementBufferLength);
/* simple case: here we consider that all the objects to flatten are the same
In the regular case we should iterate on every geometry and accumulate the number of vertex/triangles etc...*/
NSUInteger vertexCount = [vertex vectorCount];
NSUInteger newVertexCount = vertexCount * nodeCount;
SCNVector3 *newVertex = malloc(sizeof(SCNVector3) * newVertexCount);
SCNVector3 *newNormal = malloc(sizeof(SCNVector3) * newVertexCount); //assume same number of normal/vertex
//fill
NSUInteger vertexFillIndex = 0;
NSUInteger primitiveFillIndex = 0;
for(NSUInteger index=0; index< nodeCount; index++){
@autoreleasepool {
node = [[input childNodes] objectAtIndex:index];
NSArray *vertexArray = [node.geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticVertex];
NSArray *normalArray = [node.geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticNormal];
SCNGeometrySource *vertex = [vertexArray objectAtIndex:0];
SCNGeometrySource *normals = [normalArray objectAtIndex:0];
if([vertex bytesPerComponent] != sizeof(float)){
NSLog(#"todo: support other byte per component");
continue;
}
float *vertexBuffer = (float *)[[vertex data] bytes];
float *normalBuffer = (float *)[[normals data] bytes];
SCNMatrix4 t = [node transform];
GLKMatrix4 matrix = MyGLKMatrix4FromCATransform3D(t);
//append source
for(NSUInteger vIndex = 0; vIndex < vertexCount; vIndex++, vertexFillIndex++){
GLKVector3 v = GLKVector3Make(vertexBuffer[vIndex * 3], vertexBuffer[vIndex * 3+1], vertexBuffer[vIndex * 3 + 2]);
GLKVector3 n = GLKVector3Make(normalBuffer[vIndex * 3], normalBuffer[vIndex * 3+1], normalBuffer[vIndex * 3 + 2]);
//transform
v = GLKMatrix4MultiplyVector3WithTranslation(matrix, v);
n = GLKMatrix4MultiplyVector3(matrix, n);
newVertex[vertexFillIndex] = SCNVector3Make(v.x, v.y, v.z);
newNormal[vertexFillIndex] = SCNVector3Make(n.x, n.y, n.z);
}
//append elements
//here we assume that all elements are SCNGeometryPrimitiveTypeTriangles
SCNGeometryElement *element = [node.geometry geometryElementAtIndex:0];
const void *inputPrimitive = [element.data bytes];
size_t bpi = element.bytesPerIndex;
NSUInteger offset = index * vertexCount;
for(NSUInteger pIndex = 0; pIndex < primitiveCount; pIndex++, primitiveFillIndex+=3){
elementBuffer[primitiveFillIndex] = offset + _getIndex(inputPrimitive, bpi, pIndex*3);
elementBuffer[primitiveFillIndex+1] = offset + _getIndex(inputPrimitive, bpi, pIndex*3+1);
elementBuffer[primitiveFillIndex+2] = offset + _getIndex(inputPrimitive, bpi, pIndex*3+2);
}
}
}
NSArray *sources = @[[SCNGeometrySource geometrySourceWithVertices:newVertex count:newVertexCount],
[SCNGeometrySource geometrySourceWithNormals:newNormal count:newVertexCount]];
NSData *newElementData = [NSMutableData dataWithBytesNoCopy:elementBuffer length:elementBufferLength freeWhenDone:YES];
NSArray *elements = @[[SCNGeometryElement geometryElementWithData:newElementData
primitiveType:SCNGeometryPrimitiveTypeTriangles
primitiveCount:newPrimitiveCount bytesPerIndex:sizeof(int)]];
result.geometry = [SCNGeometry geometryWithSources:sources elements:elements];
//cleanup
free(newVertex);
free(newNormal);
}
return result;
}
//helpers:
GLKMatrix4 MyGLKMatrix4FromCATransform3D(SCNMatrix4 transform) {
GLKMatrix4 m = {{transform.m11, transform.m12, transform.m13, transform.m14,
transform.m21, transform.m22, transform.m23, transform.m24,
transform.m31, transform.m32, transform.m33, transform.m34,
transform.m41, transform.m42, transform.m43, transform.m44}};
return m;
}
GLKVector3 MySCNVector3ToGLKVector3(SCNVector3 vector) {
GLKVector3 v = {{vector.x, vector.y, vector.z}};
return v;
}
@end
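(Not from the original post: as a rough sketch of the test mentioned in the question, once the vertices have been transformed into a common space as in flattenNodeHierarchy above, checking roughly 65K of them against an axis-aligned selection box on the CPU is normally fast enough.)

    // Hypothetical helper: does a (transformed) vertex fall inside an axis-aligned selection box?
    static BOOL VertexInsideBox(SCNVector3 v, SCNVector3 boxMin, SCNVector3 boxMax) {
        return v.x >= boxMin.x && v.x <= boxMax.x &&
               v.y >= boxMin.y && v.y <= boxMax.y &&
               v.z >= boxMin.z && v.z <= boxMax.z;
    }
    // Vertices for which this returns YES would be dropped; any triangle in the element
    // buffer that references a dropped vertex must also be removed, and the remaining
    // indices remapped before rebuilding the SCNGeometry.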
No. You'll want to use a 3d tool like Blender, Maya, 3ds Max, or Cheetah 3D.

How to convert from YUV to CIImage for iOS

I am trying to convert a YUV image to a CIImage and ultimately a UIImage. I am fairly new to this and trying to figure out an easy way to do it. From what I have learnt, from iOS 6 YUV can be used directly to create a CIImage, but when I try to create it the CIImage only holds a nil value. My code is like this:
NSLog(#"Started DrawVideoFrame\n");
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithBytes(
kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
lpData, bytesPerRow, 0, 0, 0, &pixelBuffer
);
if(ret != kCVReturnSuccess)
{
NSLog(#"CVPixelBufferRelease Failed");
CVPixelBufferRelease(pixelBuffer);
}
NSDictionary *opt = #{ (id)kCVPixelBufferPixelFormatTypeKey :
#(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(#"CURRENT CIImage -> %p\n", cimage);
UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(#"CURRENT UIImage -> %p\n", image);
Here lpData is the YUV data, an array of unsigned char.
This also looks interesting: vImageMatrixMultiply, but I can't find any example of it. Can anyone help me with this?
I have also faced this problem. I was trying to display YUV (NV12) formatted data on the screen. This solution is working in my project...
// YUV(NV12) --> CIImage --> UIImage conversion
NSDictionary *pixelAttributes = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      640,
                                      480,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    // check the result before touching the buffer
    NSLog(@"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// Here y_ch0 is the Y-Plane of the YUV(NV12) data.
memcpy(yDestPlane, y_ch0, 640 * 480);
unsigned char *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// Here y_ch1 is the UV-Plane of the YUV(NV12) data.
memcpy(uvDestPlane, y_ch1, 640 * 480 / 2);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// CIImage conversion
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *MytemporaryContext = [CIContext contextWithOptions:nil];
CGImageRef MyvideoImage = [MytemporaryContext createCGImage:coreImage
                                                   fromRect:CGRectMake(0, 0, 640, 480)];
// UIImage conversion
UIImage *Mynnnimage = [[UIImage alloc] initWithCGImage:MyvideoImage
                                                 scale:1.0
                                           orientation:UIImageOrientationRight];
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(MyvideoImage);
Here I am showing the data structure of YUV (NV12) data and how we can get the Y-Plane (y_ch0) and UV-Plane (y_ch1) that are used to create the CVPixelBufferRef. Let's look at the YUV (NV12) data structure.
If we look at the picture we can get the following information about YUV (NV12):
Total Frame Size = Width * Height * 3/2,
Y-Plane Size = Frame Size * 2/3,
UV-Plane Size = Frame Size * 1/3,
Data stored in Y-Plane --> {Y1, Y2, Y3, Y4, Y5, ...},
Data stored in UV-Plane --> {U1, V1, U2, V2, U3, V3, ...}.
I hope it will be helpful to all. :) Have fun with iOS development! :D
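A small sketch of the plane arithmetic described above (the helper name and parameters are hypothetical; width and height are assumed to be even):

    // Split a contiguous NV12 buffer into its Y plane and interleaved UV plane.
    static void NV12GetPlanes(unsigned char *frameData, int width, int height,
                              unsigned char **yPlane, size_t *ySize,
                              unsigned char **uvPlane, size_t *uvSize) {
        *ySize  = (size_t)width * height;   // Y-Plane: one byte per pixel -> {Y1, Y2, Y3, ...}
        *uvSize = *ySize / 2;               // UV-Plane: interleaved {U1, V1, U2, V2, ...}
        *yPlane  = frameData;               // Y-Plane starts at the beginning of the frame
        *uvPlane = frameData + *ySize;      // UV-Plane follows; total = width * height * 3/2
    }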
If you have a video frame object that looks like this:
int width,
int height,
unsigned long long time_stamp,
unsigned char *yData,
unsigned char *uData,
unsigned char *vData,
int yStride,
int uStride,
int vStride
You can use the following to fill up a pixelBuffer:
NSDictionary *pixelAttributes = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, // NV12
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        yDestPlane[k++] = yData[j + i * yStride];
    }
}
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int i = 0, k = 0; i < height / 2; i++) {
    for (int j = 0; j < width / 2; j++) {
        uvDestPlane[k++] = uData[j + i * uStride];
        uvDestPlane[k++] = vData[j + i * vStride];
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // unlock once the planes have been filled
Now you can convert it to CIImage:
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *tempContext = [CIContext contextWithOptions:nil];
CGImageRef coreImageRef = [tempContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, width, height)];
And a UIImage if you need that (the image orientation can vary depending on your input):
UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
scale:1.0
orientation:UIImageOrientationUp];
Don't forget to release the variables:
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(coreImageRef);

How to save modified UIImage with original meta data?

In my iOS app, I have to make some modifications to a UIImage, for example adding a watermark to it. After that, I want to save the image with its original metadata, but when I tried using writeImageDataToSavedPhotosAlbum:metadata:completionBlock:, I got nothing in the produced image.
Since I have to modify the image, I have to generate the NSData I pass in using UIImageJPEGRepresentation(UIImage *, float).
And right before saving the image, the metadata looks like this:
Meta: {
ApertureValue = "2.526068811667587";
BrightnessValue = "-1.807443054797895";
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.05882352941176471";
FNumber = "2.4";
Flash = 16;
FocalLenIn35mmFilm = 33;
FocalLength = "4.12";
ISOSpeedRatings = (
800
);
LensMake = Apple;
LensModel = "iPhone 5c back camera 4.12mm f/2.4";
LensSpecification = (
"4.12",
"4.12",
"2.4",
"2.4"
);
MeteringMode = 3;
PixelXDimension = 3264;
PixelYDimension = 2448;
SceneType = 1;
SensingMethod = 2;
ShutterSpeedValue = "4.058917734171294";
WhiteBalance = 0;
"{GPS}" = {
Altitude = "147.9932";
DOP = "76.42908";
Latitude = "45.7398";
LatitudeRef = N;
Longitude = "126.6266";
LongitudeRef = E;
TimeStamp = "2013:12:23 08:45:30";
};
}
So, what is wrong here?
Try this,
UIImage *newImage = self.yourOriginalImage;
UIImage *anotherImage = #"waterMark.png";
CGImageRef imageRef = anotherImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(imageRef),
CGImageGetHeight(imageRef),
CGImageGetBitsPerComponent(imageRef),
CGImageGetBitsPerPixel(imageRef),
CGImageGetBytesPerRow(imageRef),
CGImageGetDataProvider(imageRef), NULL, false);
CGImageRef maskedImage = CGImageCreateWithMask([newImage CGImage], mask);
self.imageView.image = [UIImage imageWithCGImage: maskedImage];
Assuming you have a UIImage called myEditedImage and the metadata in an NSDictionary (or an NSMutableDictionary...) called "metadata":
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *assetURL, NSError *error) {};
float compressionLevel = 0.75; // Or whatever you want
NSData *dataIn = UIImageJPEGRepresentation(myEditedImage, compressionLevel);
[library writeImageDataToSavedPhotosAlbum: dataIn
metadata: metadata
completionBlock:completionBlock];
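A different route (not from the original answers) is to re-embed the metadata with ImageIO when encoding the JPEG data, instead of relying on UIImageJPEGRepresentation alone; a sketch, assuming myEditedImage and metadata are as above:

    #import <ImageIO/ImageIO.h>
    #import <MobileCoreServices/MobileCoreServices.h>

    NSMutableData *outData = [NSMutableData data];
    CGImageDestinationRef dest = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)outData,
                                                                  kUTTypeJPEG, 1, NULL);
    if (dest) {
        // attach the original EXIF/GPS dictionary to the re-encoded image
        CGImageDestinationAddImage(dest, myEditedImage.CGImage, (__bridge CFDictionaryRef)metadata);
        CGImageDestinationFinalize(dest);
        CFRelease(dest);
    }
    // outData can then be passed to writeImageDataToSavedPhotosAlbum:metadata:completionBlock: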

Convert UIImage to 8 bits

I wish to convert a UIImage to 8 bits. I have attempted to do this, but I am not sure if I have done it right, because later, when I try to use the image-processing library Leptonica, I get a message stating that the image is not 8 bits. Can anyone tell me if I am doing this correctly, or show me the code for how to do it?
Thanks!
CODE
CGImageRef myCGImage = image.CGImage;
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(myCGImage));
const UInt8 *imageData = CFDataGetBytePtr(data);
The following code will work for images without an alpha channel:
CGImageRef c = [[UIImage imageNamed:#"100_3077"] CGImage];
size_t bitsPerPixel = CGImageGetBitsPerPixel(c);
size_t bitsPerComponent = CGImageGetBitsPerComponent(c);
size_t width = CGImageGetWidth(c);
size_t height = CGImageGetHeight(c);
CGImageAlphaInfo a = CGImageGetAlphaInfo(c);
NSAssert(bitsPerPixel == 32 && bitsPerComponent == 8 && a == kCGImageAlphaNoneSkipLast, #"unsupported image type supplied");
CGContextRef targetImage = CGBitmapContextCreate(NULL, width, height, 8, 1 * CGImageGetWidth(c), CGColorSpaceCreateDeviceGray(), kCGImageAlphaNone);
UInt32 *sourceData = (UInt32*)[((__bridge_transfer NSData*) CGDataProviderCopyData(CGImageGetDataProvider(c))) bytes];
UInt32 *sourceDataPtr;
UInt8 *targetData = CGBitmapContextGetData(targetImage);
UInt8 r,g,b;
uint offset;
for (uint y = 0; y < height; y++)
{
for (uint x = 0; x < width; x++)
{
offset = y * width + x;
if (offset+2 < width * height)
{
sourceDataPtr = &sourceData[y * width + x];
r = sourceDataPtr[0+0];
g = sourceDataPtr[0+1];
b = sourceDataPtr[0+2];
targetData[y * width + x] = (r+g+b) / 3;
}
}
}
CGImageRef newImageRef = CGBitmapContextCreateImage(targetImage);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGContextRelease(targetImage);
CGImageRelease(newImageRef);
With this code I converted an RGB image to a grayscale image.
Hope this helps!
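As a small variation (not in the original answer), a weighted luminance usually looks better than a plain average; it would mean replacing the (r + g + b) / 3 line in the loop above with something like:

    targetData[offset] = (UInt8)(0.299f * r + 0.587f * g + 0.114f * b); // Rec. 601 luma weights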
