I'm creating a Theos tweak for my jailbroken iPhone and I'm stuck on a feature. So far I can block screenshot detection and replay detection, add infinite text, and have the app open directly to the feed. The feature I'm stuck on is saving incoming and outgoing snaps (and stories, but one step at a time). I used the FLEXible tweak from Cydia to assist me and I noticed that the snaps have something to do with "UIImage." From what I know I have to somehow convert this UIImage to *.jpg or *.png and save it somewhere (I'm looking to save it to the app's Documents directory).
I searched around the site and found the following that may be helpful:
// Convert the UIImage to JPEG data
NSData *imgData = UIImageJPEGRepresentation(image, 1.0); // 1.0 = best quality (scale is 0.0 to 1.0)
// Build the destination path inside the app's Documents directory
NSString *jpgPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Test.jpg"];
// Write the file. atomically:YES enforces an all-or-nothing write; pass NO only if a partially written file (e.g. after a crash) is acceptable.
[imgData writeToFile:jpgPath atomically:YES];
and
// Create paths to the output images
NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Test.png"];
NSString *jpgPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Test.jpg"];
// Write a UIImage to JPEG with minimum compression (best quality)
// The value 'image' must be a UIImage object
// The value '1.0' represents image compression quality as a value from 0.0 to 1.0
[UIImageJPEGRepresentation(image, 1.0) writeToFile:jpgPath atomically:YES];
// Write the image to PNG
[UIImagePNGRepresentation(image) writeToFile:pngPath atomically:YES];
(like I said I found this; it is not my code).
Bah, I'm not sure how I would incorporate this. Any help?
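To show where I'm at, here is a rough Logos sketch of how I imagine incorporating that snippet; the hooked class and selector below are made-up placeholders (the real ones still have to be found with FLEX), so treat it as an outline rather than working code:
// Hypothetical hook: the class and selector are placeholders, not the app's real symbols.
%hook SnapImageViewController
- (void)displaySnapImage:(UIImage *)image {
    if (image) {
        // Build a unique file name inside the app's Documents directory
        NSString *name = [NSString stringWithFormat:@"snap-%.0f.jpg", [[NSDate date] timeIntervalSince1970]];
        NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:
                             [@"Documents" stringByAppendingPathComponent:name]];
        // 1.0 = best-quality JPEG; swap in UIImagePNGRepresentation(image) for a PNG instead
        [UIImageJPEGRepresentation(image, 1.0) writeToFile:path atomically:YES];
    }
    %orig; // let the app show the snap as it normally would
}
%end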
Related
I've been checking different solutions that I've found on StackOverflow, but sadly I couldn't find the solution to this problem. I basically need to find the size (in bytes, KB or whatever) of an image that is stored in the iPhone image gallery. Actually, I need to validate the size before uploading it to a server.
So, I took a JPEG image from the internet:
http://static1.uk.businessinsider.com/image/596c82d34af3fa51058b4978-707/david%20slater.jpeg
Once I download this to my computer, when I get the file information it says:
Size 80865 bytes (82 KB on disk). So, I save this file to the iPhone's image library. Once I get the image back from the gallery and inspect it through Xcode, the byte count is different.
So, basically the file has 80865 bytes when it's on my computer's disk, but it has 289371 bytes when I inspect it through Xcode (the length method of the NSData object returns 'the number of bytes contained by the data object' according to the documentation). Am I missing something? Why is the image on the iPhone almost 4 times the size of the image stored on my computer?
The code I am using is:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerEditedImage];
    NSUInteger length = [UIImageJPEGRepresentation(image, 1.0f) length];
}
EDIT: OK, so I've changed the code to save the image to disk first and then check the size. I use:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerEditedImage];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"Image.png"];
    [UIImageJPEGRepresentation(image, 1.0f) writeToFile:filePath atomically:YES];
    NSError *attributesError;
    NSDictionary *fileAttributes = [[NSFileManager defaultManager] attributesOfItemAtPath:filePath error:&attributesError];
    NSNumber *fileSizeNumber = [fileAttributes objectForKey:NSFileSize];
}
So, if I print the value of the variable fileSizeNumber I get 369055 (which, according to the documentation, indicates the file's size in bytes). But this number doesn't match the original 80865 either; it is actually more than four times the size of the original image!
[UIImagePNGRepresentation(image) writeToFile:filePath atomically:YES];
You're writing uncompressed data (or at least differently compressed data) to a file and then checking its length. But the point of the JPEG file format is that it compresses the data. It's not surprising that reading a JPEG image, writing it to a file as PNG data, and then looking at the length of that file doesn't tell you what you want.
I'm not sure why you don't believe what Xcode is telling you, but if you feel driven to see the actual size of the JPEG image in its compressed state, you'll need to find the JPEG file itself. You'd do something like:
[[NSBundle mainBundle] pathForResource:@"selfie" ofType:@"jpeg"];
to get a path to the file, and then use NSFileManager to look at the attributes of the file at that path.
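For example, something like this (a sketch, reusing the selfie.jpeg name from the path above):
NSString *jpegPath = [[NSBundle mainBundle] pathForResource:@"selfie" ofType:@"jpeg"];
NSError *attrError = nil;
NSDictionary *attrs = [[NSFileManager defaultManager] attributesOfItemAtPath:jpegPath error:&attrError];
if (attrs) {
    // NSFileSize is the on-disk size of the JPEG as stored, with no re-encoding involved
    NSLog(@"selfie.jpeg is %llu bytes", [attrs[NSFileSize] unsignedLongLongValue]);
} else {
    NSLog(@"Could not read file attributes: %@", attrError);
}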
I have to build an iPhone app that checks for network connectivity, and whenever a connection is available it syncs with a web service, downloads X images, and saves them onto the device.
Then, in offline mode, I have to load those images into a collection view, so I have to store the images somewhere. The same goes for the data.
I am using a .NET web service with a JSON response.
I was thinking about Core Data storage; is that possible? Would storing images in a Core Data database slow down the app?
You have to download and save each received image in the application's directory, and then save the path to those images in Core Data.
// Save image to disk
NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *filePath = [NSString stringWithFormat:@"%@/Image.png", documentsPath];
NSData *data = [NSData dataWithData:UIImagePNGRepresentation(YOUR_IMAGE)];
[data writeToFile:filePath atomically:YES];
// Retrieve the image
- (NSData *)imageData {
    NSString *docDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *pngFilePath = [NSString stringWithFormat:@"%@/Image.png", docDir];
    NSData *dataImage = [NSData dataWithContentsOfFile:pngFilePath];
    return dataImage;
}
And use it like below:
UIImage *image = [UIImage imageWithData:[self imageData]];
Maybe it will help you.
Yes, it is possible, but not worthwhile.
Usually, we cache an image with the following steps:
1. Check whether we have already downloaded the image at the specific URL; if yes, go to step 2, otherwise go to step 3.
2. Load the image from the file.
3. Download the image and save it to a file. Give the image file a name derived from its URL, so you can find it by the URL next time.
There is a library that handles this work well; you can try SDWebImage.
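If you'd rather not add a dependency, a minimal sketch of that check/load/download flow could look like the following (the cache file name is just the URL percent-encoded into the Caches directory; adjust to taste):
// Rough sketch of the three-step cache described above; not production code.
- (void)imageForURL:(NSURL *)url completion:(void (^)(UIImage *image))completion {
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    // Derive a file name from the URL so the same URL maps to the same file next time
    NSString *fileName = [[url absoluteString] stringByAddingPercentEncodingWithAllowedCharacters:[NSCharacterSet alphanumericCharacterSet]];
    NSString *filePath = [cachesDir stringByAppendingPathComponent:fileName];
    // Steps 1 and 2: if it is already cached, load it from the file
    if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
        completion([UIImage imageWithContentsOfFile:filePath]);
        return;
    }
    // Step 3: download it, save it to a file, then hand it back
    [[[NSURLSession sharedSession] dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (data) {
            [data writeToFile:filePath atomically:YES];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(data ? [UIImage imageWithData:data] : nil);
        });
    }] resume];
}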
Yes, you can store images in Core Data in the form of Binary Data; here is an example of how to do it: link.
As far as speed is concerned, Core Data will not slow down your app's performance, but it usually takes more space than SQLite; here's a link that shows a comparison between Core Data and SQLite performance.
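If you do go the Binary Data route, it's just an NSData attribute on the entity; a rough sketch (the "Picture" entity and "imageData" attribute names here are made up, and ticking "Allows External Storage" on the attribute lets Core Data keep large blobs outside the SQLite file):
// Sketch only: assumes a managed object context and an entity named "Picture" with a Binary Data attribute "imageData".
NSManagedObject *picture = [NSEntityDescription insertNewObjectForEntityForName:@"Picture" inManagedObjectContext:context];
[picture setValue:UIImagePNGRepresentation(image) forKey:@"imageData"];
NSError *saveError = nil;
if (![context save:&saveError]) {
    NSLog(@"Core Data save failed: %@", saveError);
}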
I suddenly found out that with the iOS 8.3 SDK (I didn't test other versions), I can use [UIImage imageWithData:] to load the PVRTC format directly?
It isn't supposed to be like that, right? I can't find any documentation or discussion about it, I don't even know if I can rely on it...
But it does work in both simulators and real devices.
Here is the code from my test; the test project does not include OpenGLES.framework.
NSData *data = THE_CONTENT_OF_A_PVR_FILE;
NSString *tempDir = NSTemporaryDirectory();
[data writeToFile:[tempDir stringByAppendingPathComponent:@"temp.pvr"] atomically:YES];
UIImage *tempImage = [UIImage imageWithData:data];
[UIImagePNGRepresentation(tempImage) writeToFile:[tempDir stringByAppendingPathComponent:@"temp.png"] atomically:YES];
The two saved files are shared here: http://d.pr/f/17ugQ
You can check the pvr file in a hex editor; it corresponds to the specification of the PVRTC format version 2, with the pixel format PVRTC4.
Any idea?
I'm new to iOS development and I'm trying to write an image as a JPEG file to the file system. From the logs I know that those files are indeed written to the file system, but when I try to save them to the camera roll they all appear black. I'm using the following code to write them as JPEG files:
[UIImageJPEGRepresentation(image, 1.0) writeToFile:jpegPath atomically:YES];
And the following code to write to camera roll:
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
Anybody know how to verify that those jpeg files are indeed written to the file system? And what I might be doing wrong in the second line of code?
EDIT: So here is the entire method:
- (BOOL)createImagesForSlides:(NSString *)documentsPath destinationURL:(NSURL *)destinationURL
{
    NSFileManager *manager = [NSFileManager defaultManager];
    CPDFDocument *document = [[CPDFDocument alloc] initWithURL:destinationURL];
    NSString *folderName = [NSString stringWithFormat:@"/%@", document.title];
    // create new folder
    NSString *newFolderPath = [documentsPath stringByAppendingString:folderName];
    BOOL result = [manager createDirectoryAtPath:newFolderPath withIntermediateDirectories:NO attributes:nil error:nil];
    // create a jpeg file for each page of the pdf file
    for (int i = 1; i <= document.numberOfPages; ++i) {
        NSString *jpegPath = [NSString stringWithFormat:@"%@/%d.jpg", newFolderPath, i];
        [UIImageJPEGRepresentation([[document pageForPageNumber:i] image], 1.0) writeToFile:jpegPath atomically:YES];
        UIImageWriteToSavedPhotosAlbum([[document pageForPageNumber:i] image], nil, nil, nil);
    }
    return result;
}
document is a pointer to a CPDFDocument instance, from some open-source reader code available on GitHub (iOS-PDF-Reader). What I basically do here is grab each page of the PDF document, generate an image, and then save it to the file system as a JPEG file. Weirdly enough, even though there are more than 10 pages in the document, UIImageWriteToSavedPhotosAlbum only writes 5 files to the camera roll. Any idea why?
It would probably be useful to see how you construct jpegPath here.
First, writeToFile:atomically: returns a BOOL, so check that for your first indication of success or failure.
There are a couple of ways you can verify that the image is written to the file system. If you are running on a device, use something like iExplorer to access the file system and look at the file that was written. Since it is NSData*, you can check the file size to make sure it looks reasonable. On the simulator, dig into the folder structure under ~/Library/Application Support/iPhone Simulator/ and examine the file. Without looking into the file system itself, try reading the image back into another UIImage (imageWithData: in your case, since you are writing an NSData* object).
There doesn't appear to be anything wrong with your UIImageWriteToSavedPhotosAlbum call according to the docs. It is OK for the last 3 arguments to be nil (all are marked as optional); you just have to be sure the UIImage is valid. Have you set a breakpoint to be sure you have a valid image (Xcode's Quick Look feature)?
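Putting those two suggestions together, a quick sanity check might look something like this (a sketch; image and jpegPath are whatever you already have in your method):
// Check the write result, then read the file back and make sure UIImage can actually decode it
NSData *jpegData = UIImageJPEGRepresentation(image, 1.0);
BOOL wrote = [jpegData writeToFile:jpegPath atomically:YES];
NSLog(@"Wrote %lu bytes to %@: %@", (unsigned long)jpegData.length, jpegPath, wrote ? @"YES" : @"NO");
UIImage *roundTripped = [UIImage imageWithData:[NSData dataWithContentsOfFile:jpegPath]];
if (!roundTripped) {
    NSLog(@"The file exists but could not be decoded as an image");
}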
In my application I have to send images of different formats to the server (it must support all file formats that can be read by the UIImage class): https://developer.apple.com/library/ios/#documentation/uikit/reference/UIImage_Class/Reference/Reference.html
And the problem is: I don't know when I should use each of these methods. Of course it's obvious that for .png images I need to use UIImagePNGRepresentation and for .jpg/.jpeg UIImageJPEGRepresentation. But what about other formats (.tiff, .gif, etc.)? There are only two methods for image manipulation and so many formats.
You say:
Of course it's obvious that for .png images I need to use UIImagePNGRepresentation and for .jpg/.jpeg UIImageJPEGRepresentation.
No, that's not necessarily the case. If you have some original "digital asset", rather than creating a UIImage and then using one of those two functions to create the NSData that you'll upload, you will often just load the NSData from the original asset and bypass the round-trip through a UIImage entirely. If you do this, you don't risk any of the data loss that converting to a UIImage, and then back again, can cause.
There are some additional considerations, though:
Meta data:
These UIImageXXXRepresentation functions strip the image of its meta data. Sometimes that's a good thing (e.g. you don't want to upload photos of your children or expensive gadgets that include the GPS location, from which malcontents could identify where the shot was taken). In other cases, you don't want the meta data to be thrown away (e.g. the date of the original shot, which camera, etc.).
You should make an explicit decision as to whether you want meta data stripped or not. If not, don't round-trip your image through a UIImage, but rather use the original asset.
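For example, with the Photos framework (iOS 8 and later) you can ask for the asset's original bytes rather than going through UIImage at all; a sketch, assuming you already have a PHAsset in hand:
// Sketch: fetch the untouched file data for a PHAsset so its meta data survives
#import <Photos/Photos.h>
PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
options.version = PHImageRequestOptionsVersionOriginal; // the original bytes, not an edited render
[[PHImageManager defaultManager] requestImageDataForAsset:asset options:options resultHandler:^(NSData *imageData, NSString *dataUTI, UIImageOrientation orientation, NSDictionary *info) {
    // imageData is the original JPEG/PNG/etc. with its meta data intact; upload this instead of a UIImageXXXRepresentation of it
    NSLog(@"Original asset is %lu bytes (%@)", (unsigned long)imageData.length, dataUTI);
}];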
Image quality loss and/or file size considerations:
I'm particularly not crazy about UIImageJPEGRepresentation because it uses lossy compression. Thus, if you use a compressionQuality value smaller than 1.0, you can lose some image quality (modest quality loss for values close to 1.0, more significant quality loss with lower compressionQuality values). And if you use a compressionQuality of 1.0, you mitigate much of the JPEG image quality loss, but the resulting NSData can often be bigger than the original asset (at least if the original was, itself, a compressed JPEG or PNG), resulting in slower uploads.
UIImagePNGRepresentation doesn't introduce compression-based data loss, but depending upon the image, you may still lose data (e.g. if the original file was a 48-bit TIFF or used a colorspace other than sRGB).
It's a question of whether you are ok with some image quality loss and/or larger file size during the upload process.
Image size:
Sometimes you don't want to upload the full resolution image. For example, you might be using a web service that wants images no bigger than 800px per side. Or if you're uploading a thumbnail, they might want something even smaller (e.g. 32px x 32px). By resizing images, you can make the upload much smaller and thus much faster (though with obvious quality loss). But if you use an image resizing algorithm, then creating a PNG or JPEG using these UIImageXXXRepresentation functions would be quite common.
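A typical downscale before upload looks something like the following sketch (maxSide is whatever limit the service imposes; the helper name is mine):
// Sketch: scale an image down so its longest side is at most maxSide points, then encode the result for upload
UIImage *ResizedImageForUpload(UIImage *image, CGFloat maxSide) {
    CGSize size = image.size;
    CGFloat scale = MIN(1.0, maxSide / MAX(size.width, size.height));
    CGSize newSize = CGSizeMake(size.width * scale, size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}
// e.g. NSData *upload = UIImageJPEGRepresentation(ResizedImageForUpload(image, 800.0), 0.8);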
In short, if I'm trying to minimize the data/quality loss, I would upload the original asset if it's in a format that the server accepts, and I'd use UIImagePNGRepresentation (or UIImageJPEGRepresentation with a quality setting of 1.0) if the original asset was not in a format accepted by the server. But the choice of which of these UIImageXXXRepresentation functions to use is a question of your business requirements and what the server accepts.
Rob points out a lot of very good things to consider when working with images (+1); however, here is an example of how to create TIFFs and GIFs as you asked:
First, you need to link against the ImageIO framework (under the Build Phases of your app).
Next you need to #import <ImageIO/ImageIO.h> at the top of your file.
Then, the following code will convert the image for you:
// Get a reference to the image that you already have stored on disk somehow.
// If it isn't stored on disk, you can use CGImageSourceCreateWithData() to create it from an NSData representation of your image.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"01" withExtension:@"jpg"];
CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
// Create a URL referencing the Application Support directory. We will save the new image there.
NSFileManager *fm = [NSFileManager defaultManager];
NSURL *suppurl = [fm URLForDirectory:NSApplicationSupportDirectory
                            inDomain:NSUserDomainMask
                   appropriateForURL:nil
                              create:YES
                               error:NULL];
// Append the name of the output file to the Application Support directory.
// For TIFF, change the extension in the next line to .tiff.
NSURL *gifURL = [suppurl URLByAppendingPathComponent:@"mytiff.gif"];
// Create the destination for the new image.
// For TIFF, use @"public.tiff" as the second argument of the next call (instead of @"com.compuserve.gif").
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)gifURL,
                                                             (__bridge CFStringRef)@"com.compuserve.gif",
                                                             1,
                                                             NULL);
CGImageDestinationAddImageFromSource(dest, src, 0, NULL);
// Write the image data to the URL.
bool ok = CGImageDestinationFinalize(dest);
if (!ok)
    NSLog(@"Unable to create gif file.");
// Cleanup
CFRelease(src);
CFRelease(dest);
This was adapted from the code in this book.