ObjCInstance to PIL image - iOS

I am designing a face recognition system for a course.
Currently I use Image.writeToFile_atomically_(filename, True), which saves the image to disk so that I can open it again, but this is very slow.
I need a way to convert an ObjCInstance into a PIL or ui.Image without saving the image first.
Edit:
My problem is that my current way of converting an ObjCInstance is slow, because I save the image to disk and then open it with PIL. I would like a faster way to convert an ObjCInstance into a PIL image, preferably entirely within the script.
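One common way to avoid the disk round trip is to ask UIKit for PNG data in memory and hand the bytes straight to PIL through io.BytesIO. The iOS-specific part below (objc_util, UIImagePNGRepresentation) is an untested sketch and only works in an environment like Pythonista; the BytesIO decoding is the portable core of the trick, simulated here with a generated image:

```python
import io
from PIL import Image

def bytes_to_pil(png_bytes):
    """Decode in-memory PNG bytes into a PIL image -- no temp file needed."""
    return Image.open(io.BytesIO(png_bytes))

# On iOS (Pythonista) you would obtain png_bytes from the ObjCInstance
# without saving, roughly like this (hypothetical sketch, objc_util assumed):
#
#   import ctypes
#   from objc_util import ObjCInstance, c
#   c.UIImagePNGRepresentation.argtypes = [ctypes.c_void_p]
#   c.UIImagePNGRepresentation.restype = ctypes.c_void_p
#   data = ObjCInstance(c.UIImagePNGRepresentation(ui_image.ptr))
#   png_bytes = ctypes.string_at(data.bytes(), data.length())
#
# Simulated here with a PIL-generated image so the round trip is verifiable:
buf = io.BytesIO()
Image.new("RGB", (4, 4), (255, 0, 0)).save(buf, format="PNG")
img = bytes_to_pil(buf.getvalue())
```

Since everything stays in memory, this avoids the slow save-then-reopen cycle entirely.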

Related

How to create a large jpeg image from svg image using macaw?

I want to create a high-resolution image from an SVG file, but I don't know how. Can I create it using Macaw, or is there another way?
SVG is a vector format; it is supported natively on Android but not on iOS, so you can look for SVG pods, like https://cocoapods.org/pods/SwiftSVG, and add one to your code directly.
If you only need to do this once, search for an "svg online converter" and convert the file to PNG/PDF/JPG. For example: https://svgtopng.com
Macaw is currently in beta and has limitations. It can create high-resolution images, but it cannot render every SVG.
Still, Macaw is so far the best SVG renderer for iOS, and they have done some really impressive things.

Displaying BufferedImage directly in JavaFX

I'm making a raster image editor in JavaFX, and all the imaging libraries I've found use BufferedImage. The problem is that I can't display a BufferedImage in a JavaFX app without converting it to javafx.scene.image.Image first. I'm concerned about speed here; if I'm constantly converting between BufferedImage and JavaFX's Image, the UI will lag.
For example, if the user wants to draw a line with the pencil tool, I'm constantly updating the image in the UI. Every time they move the cursor, I have to convert between BufferedImage and JavaFX's Image, which I'm sure will make the line appear laggy as it's being drawn.
The libraries I've found are ImgLib2, Marvin, Apache Commons Imaging Library, and ImageJ. All of them use BufferedImage. I haven't found an imaging library yet that uses JavaFX's Image.
So my question is: how do I display a BufferedImage directly in JavaFX? I'm reluctant to go back to AWT. Alternatively, is there an imaging library that uses JavaFX's Image instead of BufferedImage?
Thanks for your replies!
Use a SwingNode to display a BufferedImage in JavaFX without first converting the image to a JavaFX Image.
I do not know whether this would be any more performant than doing an image conversion; you would have to benchmark it in your application to see.
In general, I'd probably recommend doing the conversion rather than putting a BufferedImage in a SwingNode. With a SwingNode you need to be careful about how you manage threading, and I suspect it will not yield the performance increase you are looking for.

How to attach metadata to UIImage without saving to library

I have a function that captures a few images and adds them to an NSMutableArray, rawImages, of UIImage objects. I then pass rawImages to another object that applies some image processing to the images in the array. During that processing, I need to retrieve the exposure time from the EXIF metadata.
I have been able to get the EXIF data from the CMSampleBuffer while capturing with AVFoundation. Is there a way to attach this EXIF data to the images in rawImages?
(PS: I know that one can save the images to the photo library and then read them back along with their metadata, but I don't want to do that.)
Any help is much appreciated! THANKS!

ALAsset of an image taken from the camera without saving it

Hi, I was wondering if there's a way to extract the ALAsset of an image taken from the camera, but without saving it.
I've come across various examples that use writeImageToSavedPhotosAlbum and then fetch the ALAsset, but I don't think it should be necessary to save the image to the camera roll, so I was wondering if this could be done another way.
No ALAsset exists until the image has been successfully saved to the photo library. Until then you just have a UIImage that came from the picker. There is no mandate that the image must be saved into the library; any decision about whether to save it should be based on what the app tells the user, and on whether the user would naturally expect to find the image in the library after taking or saving it in the app.

Data hiding in Image

I am hiding a text file in an image using http://github.com/anirudhsama. It works fine, and I am able to extract the text file back with my program.
But when I programmatically share the image on Facebook, Twitter, or by email, the shared image does not decode properly, so I don't get the file back.
I retrieve the image as follows:
UIImage *finalImageWithStegno = [UIImage imageWithContentsOfFile:fileName];
What I suspect is image compression when the image is uploaded to the site. A simple way to check this is to hide a message in a cover image (obtaining the stego image), upload that image to the website, then download it again and compare the original stego image to the downloaded one byte by byte. If they differ, there's your problem.
From a quick look at the code, it seems the app hides the data in the spatial domain, which is not robust: your message is embedded directly in the image pixels, and if they change (due to lossy compression, blurring, etc.), your message will be lost. A solution would be to hide the data in the frequency domain instead. Another option might be uploading the images in a file type that doesn't get compressed, though I don't know much about how these sites handle images, so that may be impossible.
In any case, if uploading to a site distorts the image, look around for another app which may serve you unless you can code yourself. Then we can get into the specifics. :)
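To see why spatial-domain hiding is fragile, here is a minimal LSB sketch in Python with PIL (not the linked app's actual scheme, just an illustration of the same class of technique). A message hidden in pixel least-significant bits survives a lossless PNG re-encode, but a lossy JPEG re-encode, as a sharing site might apply, typically destroys it:

```python
import io
from PIL import Image

def embed_lsb(img, message):
    """Hide message bytes in the least significant bit of each pixel channel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    flat = list(img.convert("RGB").tobytes())
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit
    return Image.frombytes("RGB", img.size, bytes(flat))

def extract_lsb(img, length):
    """Read `length` bytes back out of the pixel LSBs."""
    flat = img.convert("RGB").tobytes()
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (flat[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = Image.new("RGB", (32, 32), (120, 130, 140))
stego = embed_lsb(cover, b"secret")

# Lossless re-encode (PNG): pixels are preserved exactly, message survives.
buf = io.BytesIO()
stego.save(buf, format="PNG")
png_msg = extract_lsb(Image.open(io.BytesIO(buf.getvalue())), 6)

# Lossy re-encode (JPEG): pixel values shift, so the LSBs are usually mangled.
buf = io.BytesIO()
stego.save(buf, format="JPEG", quality=75)
jpg_msg = extract_lsb(Image.open(io.BytesIO(buf.getvalue())), 6)
```

This is exactly the byte-by-byte comparison test described above, done in memory: if the re-encoding step changes the pixels, the extracted message changes with them.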
Your algorithm is not robust. Use transform-domain steganography to retain the information when the image is re-encoded. You can choose to embed in DCT or DWT coefficients for better robustness.
