let image_data = UIImageJPEGRepresentation(self.imagetoadd.image!, 0.0)
The image in iOS (I am using Swift 3 to do this) is being uploaded rotated. How can I solve this?
JPEG images usually contain an EXIF dictionary, which stores a lot of information about how the image was taken; image rotation is one piece of it.
UIImage instances keep this information (if the original image has it) in a specific property called imageOrientation.
As far as I remember, this information is stripped off by the method UIImageJPEGRepresentation.
To create a correct data instance with the above information you must use Core Graphics methods, or normalize the rotation before sending the image.
To normalize the image, something like this should be enough:
CGImageRef cgRef = imageToSave.CGImage;
UIImage * fixImage = [[UIImage alloc] initWithCGImage:cgRef scale:imageToSave.scale orientation:UIImageOrientationUp];
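Note that this only re-tags the orientation; the pixel data itself is unchanged. If the upload target needs the pixels themselves to be upright, a sketch like the following should work (untested here), since UIKit applies imageOrientation when drawing:
// Redraw the image so the pixel data itself is upright; the result
// is tagged UIImageOrientationUp and needs no EXIF rotation flag.
UIImage *normalized = imageToSave;
if (imageToSave.imageOrientation != UIImageOrientationUp) {
    UIGraphicsBeginImageContextWithOptions(imageToSave.size, NO, imageToSave.scale);
    [imageToSave drawInRect:CGRectMake(0, 0, imageToSave.size.width, imageToSave.size.height)];
    normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}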
To keep the rotation information:
CFURLRef url = (__bridge_retained CFURLRef)[NSURL fileURLWithPath:path]; // save data path
NSDictionary *metadataDictionary = [self imageMetadataForPath:pathToOriginalImage];
CFDictionaryRef metadataImage = (__bridge_retained CFDictionaryRef)metadataDictionary;
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, image, metadataImage);
if (!CGImageDestinationFinalize(destination)) {
    DLog(@"Failed to write image to %@", path);
}
// balance the Create call and the two __bridge_retained casts
CFRelease(destination);
CFRelease(metadataImage);
CFRelease(url);
Where -imageMetadataForPath: is:
- (NSDictionary *)imageMetadataForPath:(NSString *)imagePath {
    NSURL *imageURL = [NSURL fileURLWithPath:imagePath];
    CGImageSourceRef mySourceRef = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
    NSDictionary *dict = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(mySourceRef, 0, NULL));
    CFRelease(mySourceRef);
    return dict;
}
This is copied and pasted from a project of mine, so you will probably need to do a fair amount of refactoring, not least because it uses manual memory management in Core Foundation while you are using Swift. Of course, if you use this last set of instructions, the backend code must be prepared to deal with image orientation too.
If you want to know more about rotation, here is a link.
I am using the following code to extract a depth map (following Apple's own example):
- (nullable AVDepthData *)depthDataFromImageData:(nonnull NSData *)imageData orientation:(CGImagePropertyOrientation)orientation {
    AVDepthData *depthData = nil;
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    if (imageSource) {
        NSDictionary *auxDataDictionary = (__bridge NSDictionary *)CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity);
        if (auxDataDictionary) {
            depthData = [[AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL] depthDataByApplyingExifOrientation:orientation];
        }
        CFRelease(imageSource);
    }
    return depthData;
}
And I call this from:
[[PHAssetResourceManager defaultManager] requestDataForAssetResource:[PHAssetResource assetResourcesForAsset:asset].firstObject options:nil dataReceivedHandler:^(NSData * _Nonnull data) {
    AVDepthData *depthData = [self depthDataFromImageData:data orientation:[self CGImagePropertyOrientationForUIImageOrientation:pickedUiImageOrientation]];
    CIImage *image = [CIImage imageWithDepthData:depthData];
    UIImage *uiImage = [UIImage imageWithCIImage:image];
    UIGraphicsBeginImageContext(uiImage.size);
    [uiImage drawInRect:CGRectMake(0, 0, uiImage.size.width, uiImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *pngData = UIImagePNGRepresentation(newImage);
    UIImage *pngImage = [UIImage imageWithData:pngData]; // rewrap
    UIImageWriteToSavedPhotosAlbum(pngImage, nil, nil, nil);
} completionHandler:^(NSError * _Nullable error) {
}];
Here is the result: it's a low-quality (and rotated, but let's put orientation aside for now) image:
Then I transferred the original HEIC file, opened it in Photoshop, went to Channels, and selected the depth map as below:
Here is the result:
It's a higher-resolution, higher-quality, correctly oriented depth map. Why is the code (actually Apple's own code at https://developer.apple.com/documentation/avfoundation/avdepthdata/2881221-depthdatafromdictionaryrepresent?language=objc) producing a lower-quality result?
I've found the issue; actually, it was hiding in plain sight. The +[AVDepthData depthDataFromDictionaryRepresentation:error:] method returns disparity data. I converted it to depth using the following code:
if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
    depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
}
(I haven't tried it, but 16-bit depth, kCVPixelFormatType_DepthFloat16, should also work.)
After converting disparity to depth, the image is exactly the same as in Photoshop. I should have noticed: I was using CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity); (note the "Disparity" at the end), while Photoshop was clearly saying "depth map", so it was converting disparity to depth on the fly (or just somehow reading it as depth; I honestly don't know the physical encoding, and maybe iOS was converting depth to disparity when I copied the aux data in the first place).
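As an aside, ImageIO also defines kCGImageAuxiliaryDataTypeDepth, so a variant of the extraction above (untested on my side) could request depth directly and skip the conversion whenever the photo actually stores a depth map:
// Request the depth (rather than disparity) auxiliary image, if present.
NSDictionary *auxDepth = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDepth));
if (auxDepth) {
    depthData = [AVDepthData depthDataFromDictionaryRepresentation:auxDepth error:NULL];
}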
Side note: I also solved the orientation issue by creating the image source directly from the [PHAsset requestContentEditingInputWithOptions:completionHandler:] method and passing contentEditingInput.fullSizeImageURL into the CGImageSourceCreateWithURL method. That took care of the orientation.
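In code, that side note amounts to something like this (a sketch with error handling omitted):
[asset requestContentEditingInputWithOptions:nil completionHandler:^(PHContentEditingInput *input, NSDictionary *info) {
    // fullSizeImageURL points at the original, correctly oriented file on disk.
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)input.fullSizeImageURL, NULL);
    if (source) {
        // ...read the auxiliary disparity/depth data from source as above...
        CFRelease(source);
    }
}];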
I'm working on creating and storing OpenGL ES1 3D models, and want to include image files to be used as textures within the same file as the 3D model data. I am having trouble loading the image data in a usable format. I'm using UIImageJPEGRepresentation to convert the image data and store it into an NSData. I then append it to an NSMutableData object, along with all the 3D data, and write it out to a file.
The data seems to write and read without error, but I encounter problems when trying to use the image data to create the CGImageRef that I use to generate the texture data for the 3D model. The image data seems to be in an unrecognized format after it is loaded from the file, because it generates the error "CGContextDrawImage: invalid context 0x0." when I attempt to create the CGImageRef. I suspect that the image data is getting misaligned somehow, causing it to be rejected when attempting to create the CGImageRef.
I appreciate any help; I'm stumped at this point. All of the data sizes and offsets add up and look fine, and saves and loads happen without error. The image data just seems off a bit, but I don't know why.
Here's my code:
//======================================================
- (BOOL)save3DFile:(NSString *)filePath {
    // load TEST IMAGE into UIImage
    UIImage *image = [UIImage imageNamed:@"testImage.jpg"];
    // convert image to JPEG-encoded NSData
    NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
    // Save length of imageData to global "imDataLen" to use later in "load3DFile"
    imDataLen = [imageData length];
    // TEST: this works fine for CGImageRef creation in "loadTexture"
    // traceView.image = [UIImage imageWithData:[imageData subdataWithRange:NSMakeRange(0, imDataLen)]];
    // [self loadTexture];
    // TEST: this also works fine for CGImageRef creation in "loadTexture"
    // traceView.image = [UIImage imageWithData:txImData];
    // [self loadTexture];
    fvoh.fileVersion = FVO_VERSION;
    fvoh.obVertDatLen = obVertDatLen;
    fvoh.obFaceDatLen = obFaceDatLen;
    fvoh.obNormDatLen = obNormDatLen;
    fvoh.obTextDatLen = obTextDatLen;
    fvoh.obCompCount = obCompCount;
    fvoh.obVertCount = obVertCount;
    fvoh.obElemCount = obElemCount;
    fvoh.obElemSize = obElemSize;
    fvoh.obElemType = obElemType;
    NSMutableData *obSvData;
    obSvData = [NSMutableData dataWithBytes:&fvoh length:(sizeof(fvoh))];
    [obSvData appendBytes:obElem length:obFaceDatLen];
    [obSvData appendBytes:mvElem length:obVertDatLen];
    [obSvData appendBytes:mvNorm length:obNormDatLen];
    [obSvData appendBytes:obText length:obTextDatLen];
    [obSvData appendBytes:&ds length:(sizeof(ds))];
    // next, we append image data, and write all data to a file
    // seems to work fine, no errors, at this point
    [obSvData appendBytes: imageData length:[imageData length]];
    BOOL success = [obSvData writeToFile:filePath atomically:YES];
    return success;
}
//======================================================
- (void)load3DFile:(NSString *)filePath {
    NSData *fvoData;
    NSUInteger offSet, fiLen, fhLen, dsLen;
    [[FileList sharedFileList] setCurrFile:(NSString *)filePath];
    fvoData = [NSData dataWithContentsOfFile:filePath];
    fiLen = [fvoData length];
    fhLen = sizeof(fvoh);
    dsLen = sizeof(ds);
    memcpy(&fvoh, [fvoData bytes], fhLen); offSet = fhLen;
    //+++++++++++++++++++++++++++++++
    obVertDatLen = fvoh.obVertDatLen;
    obFaceDatLen = fvoh.obFaceDatLen;
    obNormDatLen = fvoh.obNormDatLen;
    obTextDatLen = fvoh.obTextDatLen;
    obCompCount = fvoh.obCompCount;
    obVertCount = fvoh.obVertCount;
    obElemCount = fvoh.obElemCount;
    obElemSize = fvoh.obElemSize;
    obElemType = fvoh.obElemType;
    //+++++++++++++++++++++++++++++++
    memcpy(obElem, [fvoData bytes]+offSet, obFaceDatLen); offSet += obFaceDatLen;
    memcpy(mvElem, [fvoData bytes]+offSet, obVertDatLen); offSet += obVertDatLen;
    memcpy(mvNorm, [fvoData bytes]+offSet, obNormDatLen); offSet += obNormDatLen;
    memcpy(obText, [fvoData bytes]+offSet, obTextDatLen); offSet += obTextDatLen;
    memcpy(&ds, [fvoData bytes]+offSet, dsLen); offSet += dsLen;
    // the following seem to read the data into "imageData" just fine, no errors
    // NSData *imageData = [fvoData subdataWithRange:NSMakeRange(offSet, imDataLen)];
    // NSData *imageData = [fvoData subdataWithRange:NSMakeRange((fiLen-imDataLen), imDataLen)];
    // NSData *imageData = [NSData dataWithBytes:[fvoData bytes]+offSet length:imDataLen];
    NSData *imageData = [NSData dataWithBytes:[fvoData bytes]+(fiLen-imDataLen) length:imDataLen];
    // but the contents of imageData seem to end up in an unexpected format, causing error:
    // "CGContextDrawImage: invalid context 0x0." during CGImageRef creation in "loadTexture"
    traceView.image = [UIImage imageWithData:imageData];
    [self loadTexture];
}
//======================================================
- (void)loadTexture {
    CGImageRef image = traceView.image.CGImage;
    CGContextRef texContext;
    GLubyte *bytes = nil;
    GLsizei width, height;
    if (image) {
        width = (GLsizei)CGImageGetWidth(image);
        height = (GLsizei)CGImageGetHeight(image);
        bytes = (GLubyte *)calloc(width*height*4, sizeof(GLubyte));
        // draw the CGImage into an RGBA byte buffer for glTexImage2D
        texContext = CGBitmapContextCreate(bytes, width, height, 8, width*4, CGImageGetColorSpace(image),
                                           kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(texContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), image);
        CGContextRelease(texContext);
    }
    if (bytes) {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, bytes);
        free(bytes);
    }
}
//======================================================
I failed to receive any answers to this question, but I finally stumbled across the answer myself. When I execute the save3DFile code, instead of adding the image data to NSMutableData *obSvData using 'appendBytes' as illustrated below:
[obSvData appendBytes: imageData length:[imageData length]];
I instead use 'appendData' as shown here:
[obSvData appendData: imageData];
where imageData was previously filled with the contents of a UIImage and converted to JPEG format in the process as follows:
NSData *imageData = UIImageJPEGRepresentation(image,1.0);
See the complete code listing above for context. Anyway, using 'appendData' instead of 'appendBytes' made all the difference, allowing me to store the image data in the same file along with all the other 3D model data (vertices, indices, normals, et cetera), reload all that data without problems, and successfully create 3D models with textures from a single file.
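The reason, as far as I can tell, is that appendBytes:length: expects a raw byte pointer (const void *), so passing the NSData object pointer itself copies the object's own memory rather than the JPEG payload it wraps. Either of these forms appends the actual bytes:
// Hand NSMutableData the NSData object directly...
[obSvData appendData:imageData];
// ...or hand appendBytes: the raw byte pointer inside it.
[obSvData appendBytes:[imageData bytes] length:[imageData length]];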
CGImageRef thumbnailImage = NULL;
CGImageSourceRef imageSource = NULL;
CFDictionaryRef createOptions = NULL;
CFStringRef createKeys[3];
CFTypeRef createValues[3];
CFNumberRef thumbnailSize = NULL;
UIImage *thumbnail;
NSData *squareData = UIImagePNGRepresentation(sourceImage);
NSData *thumbnailData = nil;
imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)squareData, NULL);
if (imageSource)
{
    thumbnailSize = CFNumberCreate(NULL, kCFNumberIntType, &imageSize);
    if (thumbnailSize)
    {
        createKeys[0] = kCGImageSourceCreateThumbnailWithTransform;
        createValues[0] = (CFTypeRef)kCFBooleanTrue;
        createKeys[1] = kCGImageSourceCreateThumbnailFromImageIfAbsent;
        createValues[1] = (CFTypeRef)kCFBooleanTrue;
        createKeys[2] = kCGImageSourceThumbnailMaxPixelSize;
        createValues[2] = (CFTypeRef)thumbnailSize;
        createOptions = CFDictionaryCreate(NULL, (const void **)createKeys,
                                           createValues, sizeof(createValues) / sizeof(createValues[0]),
                                           &kCFTypeDictionaryKeyCallBacks,
                                           &kCFTypeDictionaryValueCallBacks);
        if (createOptions)
        {
            thumbnailImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, createOptions);
            if (thumbnailImage)
            {
                thumbnail = [UIImage imageWithCGImage:thumbnailImage];
                if (thumbnail)
                {
                    thumbnailData = UIImagePNGRepresentation(thumbnail);
                }
            }
        }
    }
}
I am getting a different thumbnailData.length value for the same image in iOS 12. I am trying to create a thumbnail image using CGImageSourceCreateThumbnailAtIndex() and passing sourceImage as a parameter. Is it an iOS 12 bug? Is there a workaround for it? I'm using iOS 12 beta 4.
The data size is different, but the resulting image is fine. They’ve clearly made some very modest changes to the algorithm. But there is no bug here.
Personally, I notice two changes:
In non-square images, the algorithm for determining the size of the thumbnail has obviously changed. E.g., with my sample 3500×2335px image, when I created a 100px thumbnail, it resulted in a 100×67px image in iOS 12.2, but was 100×66px in iOS 11.0.1.
In square images, the two iOS versions both generated suitably square thumbnails. Regarding the image itself, I could not see much of any observable difference with the naked eye. In fact, when I dropped this into Photoshop and analyzed the differences (where black == no difference), it at first seemed to suggest no change at all:
Only when I started to really pixel-peep could I detect the very modest changes. The individual channels rarely differed by more than 1 or 2 (in these UInt8 values). Here is the same delta image, this time with the levels blown out so you can see the differences:
Bottom line, there clearly is some change to the algorithm, but I wouldn’t characterize it as a bug. It is just different, but it works fine.
In an unrelated observation, your code has some leaks. If a Core Foundation method has Create or Copy in the name, you are responsible for releasing it (or, in bridged types, transferring ownership to ARC, which isn’t an option here). The static analyzer, shift+command+B, is excellent at identifying these issues.
FWIW, here’s my rendition:
- (UIImage * _Nullable)resizedImage:(UIImage *)sourceImage to:(NSInteger)imageSize {
    NSData *squareData = UIImagePNGRepresentation(sourceImage);
    UIImage *thumbnail = nil;
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)squareData, NULL);
    if (imageSource) {
        NSDictionary *createOptions = @{
            (id)kCGImageSourceCreateThumbnailWithTransform: @YES,
            (id)kCGImageSourceCreateThumbnailFromImageIfAbsent: @YES,
            (id)kCGImageSourceThumbnailMaxPixelSize: @(imageSize)
        };
        CGImageRef thumbnailImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, (__bridge CFDictionaryRef)createOptions);
        if (thumbnailImage) {
            thumbnail = [UIImage imageWithCGImage:thumbnailImage];
            if (thumbnail) {
                NSData *data = UIImagePNGRepresentation(thumbnail);
                // do something with `data` if you want
            }
            CFRelease(thumbnailImage);
        }
        CFRelease(imageSource);
    }
    return thumbnail;
}
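For example, calling it with 100 as an arbitrary maximum pixel size:
UIImage *thumb = [self resizedImage:sourceImage to:100];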
In the app I'm working on, we're capturing photos which need to have a 4:3 aspect ratio in order to maximize the field of view we capture. Up until now we were using the AVCaptureSessionPreset640x480 preset, but now we need a larger resolution.
As far as I've figured, the only other two 4:3 formats are 2592x1936 and 3264x2448. Since these are too large for our use case, I need a way to downsize them. I looked into a bunch of options but did not find a way (preferably without copying the data) to do this efficiently without losing the EXIF data.
vImage was one of the things I looked into, but as far as I've figured the data would need to be copied and the EXIF data would be lost. Another option was creating a UIImage from the data provided by jpegStillImageNSDataRepresentation, scaling it, and getting the data back. That approach also seems to strip the EXIF data.
The ideal approach here would be resizing the buffer contents directly and resizing the photo. Does anyone have an idea how I would go about doing this?
I ended up using ImageIO for resizing purposes. Leaving this piece of code here in case someone runs into the same problem, as I've spent way too much time on this.
This code will preserve the EXIF data, but it does create a copy of the image data. I ran some benchmarks: the execution time for this method is ~0.05 sec on an iPhone 6, using AVCaptureSessionPresetPhoto as the preset for the original photo.
If someone does have a more optimal solution, please leave a comment.
- (NSData *)resizeJpgData:(NSData *)jpgData
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)jpgData, NULL);
    // Create a copy of the metadata that we'll attach to the resized image
    NSDictionary *metadata = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, 0, NULL));
    NSMutableDictionary *metadataAsMutable = [metadata mutableCopy];
    // Type of the image (e.g. public.jpeg)
    CFStringRef UTI = CGImageSourceGetType(source);
    NSDictionary *options = @{ (id)kCGImageSourceCreateThumbnailFromImageIfAbsent: (id)kCFBooleanTrue,
                               (id)kCGImageSourceThumbnailMaxPixelSize: @(MAX(FORMAT_WIDTH, FORMAT_HEIGHT)),
                               (id)kCGImageSourceTypeIdentifierHint: (__bridge NSString *)UTI };
    CGImageRef resizedImage = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    NSMutableData *destData = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)destData, UTI, 1, NULL);
    if (!destination) {
        NSLog(@"Could not create image destination");
    }
    CGImageDestinationAddImage(destination, resizedImage, (__bridge CFDictionaryRef)metadataAsMutable);
    // Tell the destination to write the image data and metadata into our data object
    BOOL success = CGImageDestinationFinalize(destination);
    if (!success) {
        NSLog(@"Could not create data from image destination");
    }
    if (destination) {
        CFRelease(destination);
    }
    CGImageRelease(resizedImage);
    CFRelease(source);
    return destData;
}
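A hypothetical call site, where jpgData stands in for the buffer returned by jpegStillImageNSDataRepresentation:
NSData *resizedJpgData = [self resizeJpgData:jpgData];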
In my project I need to show images of different sizes in a zig-zag fashion, so I convert the image URLs coming from the service to NSData and then get the UIImage. My code is:
NSURL *url = [NSURL URLWithString:[[_result objectAtIndex:i] valueForKey:@"PImage"]];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
So I can get the image size (width and height). But my problem is that I need to create a UIView sized according to the image, and while this code works fine for me, it takes too much time (almost 25 sec) to load 8 images. I figured out that the conversion to NSData is what takes the time. Is there any way to get the image size (width and height) without converting it into NSData?
Thanks for spending time on this.
You can get image properties without loading the whole image data from disk by using the ImageIO framework:
@import ImageIO;
...
NSURL *imageURL = … // Init URL somehow
CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
NSDictionary *imageProps = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
NSLog(@"%@", imageProps);
CFRelease(imgSource);
The image width and height will be stored in the dictionary under the PixelHeight and PixelWidth keys (tested with a PNG image; other image formats may use different keys).
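For instance, using the ImageIO constants behind those keys:
// Read the dimensions without decoding any pixel data.
NSNumber *width = imageProps[(__bridge NSString *)kCGImagePropertyPixelWidth];
NSNumber *height = imageProps[(__bridge NSString *)kCGImagePropertyPixelHeight];
CGSize imageSize = CGSizeMake(width.floatValue, height.floatValue);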
Instead of converting the URL to data and then to a UIImage, use EGOImageView or AsyncImageView; you can simply pass the URL to them. Then set the frame based on the size of the image.