Data hiding in an image - iOS

I am hiding a text file in an image using http://github.com/anirudhsama. It works fine, and I am able to extract the text file back with my program.
But when I programmatically share the image on Facebook, Twitter, or email, the shared image does not decode properly, so I'm not getting the file back.
I retrieve the image as follows:
UIImage *finalImageWithStegno = [UIImage imageWithContentsOfFile:fileName];

What I suspect is image compression when it is uploaded to the site. A simple way to check this is to hide a message in a cover image (obtaining the stego image), upload the image to a website, and download it again. Compare the original stego image to the downloaded image. If they differ (byte by byte), there's your problem.
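For example (a sketch; stegoPath and downloadedPath are hypothetical paths to local copies of the image before upload and after download):

NSData *original = [NSData dataWithContentsOfFile:stegoPath];        // stego image before upload
NSData *downloaded = [NSData dataWithContentsOfFile:downloadedPath]; // same image after download
if (![original isEqualToData:downloaded]) {
    NSLog(@"The site re-encoded the image; the hidden payload is likely gone.");
}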
From a quick look at the code, it seems the app hides the data in the spatial domain, which is not robust. Your message is hidden directly in the image pixels, and if they change (due to lossy compression, blurring, etc.), your message will be lost. A solution would be to hide the data in the frequency domain instead. Another solution could be to upload the images in a file type which doesn't get compressed; I don't know much about how sites deal with images, so the second suggestion may be impossible.
In any case, if uploading to a site distorts the image, look around for another app which may serve you, unless you can code it yourself. Then we can get into the specifics. :)

Your algorithm is not robust. Use transform-domain steganography to retain the information when the image is re-encoded. You may choose to embed in DCT coefficients or DWT coefficients for better robustness.
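To make the DCT option concrete, here is a toy sketch (plain C, callable from Objective-C) that hides one bit in an 8x8 block by forcing the parity of a quantized mid-frequency coefficient. This illustrates the technique only; it is not the linked project's algorithm, and the function names, the chosen (2,1) coefficient, and the quantization step q are my own assumptions.

#include <math.h>
#include <stdlib.h>

// Forward 8x8 DCT-II (the JPEG convention).
static void dct8x8(const double in[64], double out[64]) {
    for (int u = 0; u < 8; u++)
        for (int v = 0; v < 8; v++) {
            double sum = 0;
            for (int x = 0; x < 8; x++)
                for (int y = 0; y < 8; y++)
                    sum += in[8*x + y]
                         * cos((2*x + 1) * u * M_PI / 16.0)
                         * cos((2*y + 1) * v * M_PI / 16.0);
            double cu = (u == 0) ? M_SQRT1_2 : 1.0;
            double cv = (v == 0) ? M_SQRT1_2 : 1.0;
            out[8*u + v] = 0.25 * cu * cv * sum;
        }
}

// Inverse 8x8 DCT.
static void idct8x8(const double in[64], double out[64]) {
    for (int x = 0; x < 8; x++)
        for (int y = 0; y < 8; y++) {
            double sum = 0;
            for (int u = 0; u < 8; u++)
                for (int v = 0; v < 8; v++) {
                    double cu = (u == 0) ? M_SQRT1_2 : 1.0;
                    double cv = (v == 0) ? M_SQRT1_2 : 1.0;
                    sum += cu * cv * in[8*u + v]
                         * cos((2*x + 1) * u * M_PI / 16.0)
                         * cos((2*y + 1) * v * M_PI / 16.0);
                }
            out[8*x + y] = 0.25 * sum;
        }
}

// Embed one bit (0 or 1) into the block's (2,1) coefficient with step q.
static void embedBitInBlock(double block[64], int bit, double q) {
    double coeffs[64], restored[64];
    dct8x8(block, coeffs);
    long level = lround(coeffs[8*2 + 1] / q);  // quantize the coefficient
    if ((labs(level) & 1) != bit) level += 1;  // force its parity to the bit
    coeffs[8*2 + 1] = level * q;               // dequantize
    idct8x8(coeffs, restored);
    for (int i = 0; i < 64; i++) block[i] = restored[i];
}

To extract, run the same DCT on the block, quantize the same coefficient with the same q, and read off the parity. The embedding survives re-encoding as long as q is at least as coarse as the quantization the re-encoder applies to that coefficient.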

Related

Saving UIImages to files without using NSData

I want to store a bunch of images that are taken while the user uses the app, while making sure that I can view them with decently high resolution later on. And by "store", for now I don't need to store them past the closure of the app. Simply having them available after some point while the app is still alive is all I need.
I first tried simply storing the UIImages in their original size on the app, but then the app would crash after 7 or 8 pics were taken because of memory usage.
I then tried shrinking them (since my app has a grid display wherein I can see all the pictures, shrunk to fit on a 3x3 grid of images), and my app stopped crashing. But when I wanted the pictures to be viewed individually on full screen, the quality was terrible because I was enlarging a shrunk photo.
So I figured why not find a way to store the original images through another object in a way that wouldn't eat up too much memory. Searching online led me to decide to store them in files, by converting the images into NSData and then writing this to a file. BUT, when I would then load the NSData back into a UIImage, the orientation of the photos taken through the camera was all sideways! And after hours of looking (and failing) for a way to transform them back into the proper orientation, I've decided to give up on trying to fix this orientation bug.
Instead, I just want to know if there is any other way for me to store large/high-res UIImages (without hogging up memory) besides using NSData. What ideas do you guys have?
And pardon me for having to write so much for a one-liner question. I just didn't want to get suggestions on doing something I've already tried.
Save it as a JPEG instead of a PNG; that way the image will be rotated for you. See https://stackoverflow.com/a/4868914/96683
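A minimal sketch of the save/reload round trip (the file name and the 0.9 compression quality are arbitrary choices):

// JPEG carries the EXIF orientation tag (PNG does not), so the photo
// displays upright when you load it back.
NSData *jpegData = UIImageJPEGRepresentation(photo, 0.9);
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"photo.jpg"];
[jpegData writeToFile:path atomically:YES];

// Later, reload on demand instead of keeping the full-size image in memory:
UIImage *restored = [UIImage imageWithContentsOfFile:path];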

Does an NSData image weigh as much in memory as the opened image?

I am writing an upload function from my iOS app to my server. I am working out the best way to do this and suspect that my current approach may run into trouble.
I have over 20 pictures that I would like to send in one request using AFMultipartFormData in AFNetworking, along with text information that is relevant to each picture.
Now... I know how to create an NSDictionary with lots of information inside, but I have never created an NSDictionary holding 20 images in NSData form or attempted to upload many pictures at once.
The issue is that I don't know whether the NSData images would take as much memory as the opened images. If they do, I could only do 2 or 3 at a time (if they were full-screen images, which they are not, but that's just an arbitrary point of measurement).
Given what I said and the pseudocode below, could I do that, or would I run out of memory? Otherwise I would be left doing them one by one.
// for loop that does this 20 times
UIImage* image = [UIImage imageNamed:myExercise.exercisePictureThumb];
NSData *exercisePictureThumb = UIImagePNGRepresentation(image);
// add to dictionary recursively
You should not put all of your image data into an NSDictionary, as that's likely to cause memory problems.
Instead, create a dictionary or array of the URLs to your files, then make use of AFNetworking's multipartFormRequestWithMethod:path:parameters:constructingBodyWithBlock: and the AFMultipartFormData protocol, particularly appendPartWithFileURL:name:fileName:mimeType:error:. This way your image data for all of the images is not all in memory at one time, but is instead streamed from disk. Huge performance increase.
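A sketch of what that looks like, written against AFNetworking 1.x's AFHTTPClient (where the method named above lives); the client, the path, the field names, and the fileURLs array are placeholders, not anything from the question:

NSMutableURLRequest *request =
    [client multipartFormRequestWithMethod:@"POST"
                                      path:@"/upload"
                                parameters:textInfo // the per-picture text fields
                 constructingBodyWithBlock:^(id<AFMultipartFormData> formData) {
        for (NSURL *fileURL in fileURLs) {
            NSError *error = nil;
            // Streamed from disk, so the 20 images are never all in memory at once.
            [formData appendPartWithFileURL:fileURL
                                       name:@"images[]"
                                   fileName:[fileURL lastPathComponent]
                                   mimeType:@"image/png"
                                      error:&error];
        }
    }];
AFHTTPRequestOperation *operation =
    [client HTTPRequestOperationWithRequest:request
                                    success:^(AFHTTPRequestOperation *op, id responseObject) { /* done */ }
                                    failure:^(AFHTTPRequestOperation *op, NSError *error) { /* handle */ }];
[client enqueueHTTPRequestOperation:operation];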
MishieMoo is absolutely right regarding the memory/streaming considerations and the wise counsel to avoid loading all of the images into memory at one time.
I wanted to get back to your UIImage vs NSData question, though. In my mind, there are two considerations:
File size: Unfortunately, there is no simple answer as to whether the original image will be smaller than the UIImage/UIImagePNGRepresentation output. Sometimes, if the original file was a JPEG, round-tripping the image through a UIImage can actually make the file larger (e.g. I just grabbed three images and the PNG round trip took them from 4.7 MB to 38.9 MB). But if the originals were uncompressed images, then the PNG round trip could make them considerably smaller.
Data loss: For me, the more significant issue with round-tripping the image through UIImage is that you'll suffer data loss. You lose metadata. Depending upon the image, you can even lose image data in the translation (e.g. if the original wasn't in the sRGB color space, or if the bit depth is lowered, etc.).
Now, sometimes it makes perfect sense (e.g., I use a variation of this UIImage round trip technique when creating image thumbnails). But if my intent is to upload images onto a server, I'd be very wary about going through the UIImage conversion, possibly losing data, without a compelling business case.
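If you want to see the file-size effect for yourself, a quick check looks like this (jpegPath is a placeholder for one of your originals):

NSData *originalData = [NSData dataWithContentsOfFile:jpegPath];
UIImage *image = [UIImage imageWithData:originalData];
NSData *roundTripped = UIImagePNGRepresentation(image);
NSLog(@"original: %lu bytes, PNG round trip: %lu bytes",
      (unsigned long)originalData.length, (unsigned long)roundTripped.length);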

Handle large images in iOS

I want to allow the user to select a photo, without limiting the size, and then edit it.
My idea is to create a thumbnail of the large photo with the same size as the screen for editing, and then, when the editing is finished, use the large photo to make the same edit that was performed on the thumbnail.
When I use UIGraphicsBeginImageContext to create a thumbnail image, it will cause a memory issue.
I know it's hard to edit the whole large image directly due to hardware limits, so I want to know if there is a way I can downsample the large image to less than 2048x2048 without memory issues.
I found that the Android platform has a BitmapFactory class with an inSampleSize option which can downsample a photo. How can this be done on iOS?
You need to load the image with UIImage, which doesn't actually decode the image data into memory right away, and then create a bitmap context at the size of the resulting image that you want (this determines the amount of memory used). Then iterate a number of times, drawing tiles taken from the original image with CGImageCreateWithImageInRect (this is where parts of the image data are loaded into memory) into the destination context using CGContextDrawImage.
See this sample code from Apple.
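A rough sketch of that loop (the function name and the 512-pixel strip height are my own choices; production code needs error checks and orientation handling):

#import <UIKit/UIKit.h>

static UIImage *DownsampledImage(NSString *path, CGFloat maxDimension) {
    UIImage *lazyImage = [UIImage imageWithContentsOfFile:path]; // lazy; not decoded yet
    CGImageRef fullImage = lazyImage.CGImage;
    size_t width  = CGImageGetWidth(fullImage);
    size_t height = CGImageGetHeight(fullImage);
    CGFloat scale = MIN(maxDimension / width, maxDimension / height);
    size_t destW = (size_t)(width * scale), destH = (size_t)(height * scale);

    // The destination bitmap is the only full-size buffer we allocate.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, destW, destH, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    size_t tileHeight = 512; // decode the source in horizontal strips
    for (size_t y = 0; y < height; y += tileHeight) {
        size_t h = MIN(tileHeight, height - y);
        CGImageRef tile = CGImageCreateWithImageInRect(fullImage,
                                                       CGRectMake(0, y, width, h));
        // CGContext's origin is bottom-left, so flip the strip's vertical position.
        CGContextDrawImage(ctx,
                           CGRectMake(0, destH - (y + h) * scale, destW, h * scale),
                           tile);
        CGImageRelease(tile);
    }
    CGImageRef scaledRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
    CGImageRelease(scaledRef);
    return scaled;
}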
Large images don't fit in memory. So loading them into memory to then resize them doesn't work.
To work with very large images you have to tile them. Lots of solutions are out there already; for example, see if this one can solve your problem:
https://github.com/dhoerl/PhotoScrollerNetwork
I implemented my own custom solution, but that was specific to our environment, where we had an image tiler running server-side already and I could just request specific tiles of large images (made a server; it's really cool).
The reason tiling works is that basically you only ever keep the visible pixels in memory, and there aren't that many of those. All tiles not currently visible are factored out to the disk cache, or flash memory cache as it were.
Take a look at this work by Trevor Harmon. It improved my app's performance. I believe it will work for you too.
https://github.com/coryalder/UIImage_Resize

Besides standard/progressive, the 3rd kind of JPEG compression: load by channel?

This question might be an "Open Question" and many of you might be eager to close it, but please don't. Let me explain.
As we all know, JPEG has two kinds of compression (at least in the Photoshop save dialog):
optimized, where the image loads roughly line by line
progressive, where the image first loads mosaic-like, then progressively sharpens up to the original resolution
I have read a lot of PNG/JPEG optimization articles before, but now I've encountered this awesome third kind of compression, from a random Google Image search. The JPEG in question is this:
http://storage.googleapis.com/marc-pres/boston-event-1012/images/google-data-center.jpg
Try loading the link in Chrome/Firefox (in IE/Safari the image is only displayed once it has fully loaded).
You can observe:
the image loads first in black & white
then the Red channel appears to load
next the Green channel loads
last the Blue channel loads
I tried loading it again over an emulated very slow connection, and observed that the JPEG not only loads channel by channel but progressively as well. So the first loaded image is a black-and-white mosaic, then a green-ish mosaic, then gradually a full-color mosaic, and finally the full-resolution, full-color image.
This is amazing technology. Suppose you are building an e-magazine where each page has a lot of pictures and you want the user to flip quickly through the pages; this kind of image is exactly what works best: for a fast preview, load the black-and-white thumbnail, and if the user stays, fully load the original image.
So my question is: How could I generate such image using Python Pillow or ImageMagick, or any kind of open source software?
If you think this question is inappropriate, please comment; don't just close it.
Update 1:
It turns out Google uses this technique in all of its JPEG pictures.
Update 2: I found another clue
The image data in a JPEG file can be sliced up in many different ways, and the slices (or "scans" as they're usually called) can be stored in the file in many different orders.
In most JPEG files, the first scan in the file contains all of the image's color components, interleaved together if it is a color image. In a non-progressive JPEG, the file will contain just that one scan. In a progressive JPEG, other scans will follow, each of which may contain one component or multiple components.
But there's nothing that requires it to be done that way. If the first scan in the file does not contain all the color components, we might call such a file "non-interleaved".
Your example files are non-interleaved, and they are also progressive. Progressive non-interleaved JPEGs seem to be more widely supported than non-progressive non-interleaved ones.
The standard IJG libjpeg software is capable of creating non-interleaved files. Though it's not exactly easy, you can use its cjpeg utility with the -scans option documented in the wizard.txt file.
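For illustration, here is a scan script in the spirit of the examples in wizard.txt; the exact scan breakdown is my own choice, not something from the original answer:

# scans.txt -- one scan per line: components [: Ss-Se, Ah, Al] ;
0: 0-0, 0, 0 ;   # DC of Y (luminance): the grayscale image appears first
1: 0-0, 0, 0 ;   # DC of Cb
2: 0-0, 0, 0 ;   # DC of Cr
0: 1-63, 0, 1 ;  # first approximation of the Y AC coefficients
1: 1-63, 0, 1 ;  # Cb AC
2: 1-63, 0, 1 ;  # Cr AC
0: 1-63, 1, 0 ;  # refinement passes
1: 1-63, 1, 0 ;
2: 1-63, 1, 0 ;

Then: cjpeg -scans scans.txt input.ppm > output.jpg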

Image Processing in iOS

I am new to iOS, and I am trying to display an image in a view. The image in question is large and fetched from a URL, so loading the view takes quite some time (almost 5-10 seconds).
I don't want to display a placeholder and make some asynchronous call to update the image;
instead, I want image loading similar to how it is implemented in the Facebook iOS app: the big image displays hazy at first, slowly becoming the original image as it loads. Does anyone know how this can be achieved in iOS?
The 'hazy' image you talk about is a 'progressive' JPEG. You can re-format any arbitrary image as a progressive JPEG (on the server side), then use the Image I/O methods to display partial versions. There are a variety of techniques you could use; see this prior post for more pointers.
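On the display side, Image I/O's incremental decoding does this. A minimal sketch, assuming the server delivers a progressive JPEG and you feed chunks in from your download delegate (the class and method names here are mine):

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

@interface ProgressiveLoader : NSObject
@property (nonatomic) CGImageSourceRef source;     // incremental image source
@property (nonatomic, strong) NSMutableData *data; // all bytes received so far
@end

@implementation ProgressiveLoader
- (instancetype)init {
    if ((self = [super init])) {
        _source = CGImageSourceCreateIncremental(NULL);
        _data = [NSMutableData data];
    }
    return self;
}

// Call this from your connection/session's didReceiveData callback.
- (UIImage *)appendData:(NSData *)chunk isFinal:(BOOL)final {
    [self.data appendData:chunk];
    // Image I/O expects the full accumulated data each time, not just the chunk.
    CGImageSourceUpdateData(self.source, (__bridge CFDataRef)self.data, final);
    CGImageRef partial = CGImageSourceCreateImageAtIndex(self.source, 0, NULL);
    if (!partial) return nil; // not enough data yet for even a partial decode
    UIImage *image = [UIImage imageWithCGImage:partial];
    CGImageRelease(partial);
    return image; // assign to your UIImageView for the hazy-to-sharp effect
}

- (void)dealloc {
    if (_source) CFRelease(_source);
}
@end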
You can download and display a thumbnail image first, or just display a temporary loading image while you download the big image. Take a look at ASIHTTP for asynchronous image downloading. There are several other frameworks available as well, and tons of sample code if you take time to google.
