How to get a thumbnail from MFi card reader accessories? - iOS

I use an SDK that wraps the ExternalAccessory framework to read the image source file, with pseudo-code like the following:
// Read one chunk from the accessory into `buffer`, then append it to the local file.
let read = buffer.withUnsafeMutableBytes { bufferPointer in
    self.sessionController.readFile(handle,
                                    data: bufferPointer,
                                    len: UInt32(min(remainingToRead, UInt64(bufferSize))))
}
if read == 0 {
    // The accessory returned no bytes: report a read error and stop.
    completion(fileURL!, FileError.readFile)
    return
}
remainingToRead -= UInt64(read)
fileHandle?.write(buffer)
What I want to achieve is to read the file stream of the image's thumbnail directly from the external device, rather than the file stream of the original image file. Does anyone have a good solution for this?
I tried reading the raw image file and then compressing it, but that slows down displaying the image.
Also, do I need to drop down to C to process the file stream to implement this feature?
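One common fallback, if the accessory's protocol offers no dedicated thumbnail command, is to transfer the file (or at least its first chunk) and pull the embedded EXIF preview out with ImageIO, which avoids decoding the full image. A minimal sketch, assuming fileURL points at the image copied from the accessory:
import ImageIO
import UIKit

func embeddedThumbnail(at fileURL: URL, maxPixelSize: Int = 320) -> UIImage? {
    guard let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil) else { return nil }
    let options: [CFString: Any] = [
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
        // Fall back to decoding the full image only if no embedded preview exists.
        kCGImageSourceCreateThumbnailFromImageIfAbsent: true,
        // Apply the EXIF orientation to the thumbnail.
        kCGImageSourceCreateThumbnailWithTransform: true
    ]
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
No drop to C is required for this approach; byte-level work like the read loop above can stay in Swift.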

Related

Crop Captured RAW Photo and save to file iOS [duplicate]

I want to build an iOS 10 app that lets you shoot a RAW (.dng) image, edit it, and then save the edited .dng file to the camera roll. By combining code from Apple's 2016 "AVCamManual" and "RawExpose" sample apps, I've gotten to the point where I have a CIFilter containing the RAW image along with the edits.
However, I can't figure out how to save the resulting CIImage to the camera roll as a .dng file. Is this possible?
A RAW file is "raw" output direct from a camera sensor, so the only way to get it is directly from a camera. Once you've processed a RAW file, what you have is by definition no longer "raw", so you can't go back to RAW.
To extend the metaphor presented at WWDC where they introduced RAW photography... a RAW file is like the ingredients for a cake. When you use Core Image to create a viewable image from the RAW file, you're baking the cake. (And as noted, there are many different ways to bake a cake from the same ingredients, corresponding to the possible options for processing RAW.) But you can't un-bake a cake — there's no going back to original ingredients, much less a way that somehow preserves the result of your processing.
Thus, the only way to store an image processed from a RAW original is to save the processed image in a bitmap image format. (Use JPEG if you don't mind lossy compression, PNG or TIFF if you need lossless, etc.)
If you're writing the results of an edit to PHPhotoLibrary, use JPEG (high quality / less compressed if you prefer), and Photos will store your edit as a derived result, allowing the user to revert to the RAW original. You can also describe the set of filters you applied in PHAdjustmentData saved with your edit — with adjustment data, another instance of your app (or Photos app extension) can reconstruct the edit using the original RAW data plus the filter settings you save, then allow a user to alter the filter parameters to create a different processed image.
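For illustration, a minimal sketch of attaching such adjustment data to an edit; the format identifier and the filterParameters dictionary are placeholders, not anything from the original answer:
import Photos

// Hypothetical sketch: archive the filter settings and attach them to the
// PHContentEditingOutput so another app instance can reconstruct the edit.
func attachAdjustmentData(_ filterParameters: [String: Any],
                          to output: PHContentEditingOutput) throws {
    let settings = try JSONSerialization.data(withJSONObject: filterParameters)
    output.adjustmentData = PHAdjustmentData(formatIdentifier: "com.example.rawedit", // placeholder
                                             formatVersion: "1.0",
                                             data: settings)
}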
Note: There is a version of the DNG format called Linear DNG that supports non-RAW (or "not quite RAW") images, but it's rather rare in practice, and Apple's imaging stack doesn't support it.
Unfortunately DNG isn't supported as an output format in Apple's ImageIO framework. See the output of CGImageDestinationCopyTypeIdentifiers() for a list of supported output types:
(
"public.jpeg",
"public.png",
"com.compuserve.gif",
"public.tiff",
"public.jpeg-2000",
"com.microsoft.ico",
"com.microsoft.bmp",
"com.apple.icns",
"com.adobe.photoshop-image",
"com.adobe.pdf",
"com.truevision.tga-image",
"com.sgi.sgi-image",
"com.ilm.openexr-image",
"public.pbm",
"public.pvr",
"org.khronos.astc",
"org.khronos.ktx",
"com.microsoft.dds",
"com.apple.rjpeg"
)
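You can re-check this list on whatever OS version you're targeting; a quick sketch, not part of the original answer:
import Foundation
import ImageIO

// List the UTIs ImageIO can write on this OS and confirm DNG is absent.
let supported = CGImageDestinationCopyTypeIdentifiers() as NSArray as? [String] ?? []
print(supported.contains("com.adobe.raw-image")) // false: ImageIO reads DNG but does not write it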
This answer comes late, but it may help others with the same problem. This is how I saved a RAW photo to the camera roll as a .dng file.
Just to note, I captured the photo using the camera with AVFoundation.
import Photos
import AVFoundation

// Read the RAW photo data in as a Data object.
let photoData = AVCapturePhotoOutput.dngPhotoDataRepresentation(forRawSampleBuffer: sampleBuffer,
                                                                previewPhotoSampleBuffer: previewPhotoSampleBuffer)

// Put it into a temporary file.
let temporaryDNGFileURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("temp.dng")
try! photoData?.write(to: temporaryDNGFileURL)

// Get access to the photo library.
PHPhotoLibrary.requestAuthorization { status in
    if status == .authorized {
        // Perform changes to the library.
        PHPhotoLibrary.shared().performChanges({
            let options = PHAssetResourceCreationOptions()
            options.shouldMoveFile = true
            // Write the RAW file:
            PHAssetCreationRequest.forAsset().addResource(with: .photo, fileURL: temporaryDNGFileURL, options: options)
        }, completionHandler: { success, error in
            if let error = error { print(error) }
        })
    } else {
        print("can't access photo album")
    }
}
Hope it helps.
The only way to get DNG data as of the writing of this response (iOS 10.1) is:
AVCapturePhotoOutput.dngPhotoDataRepresentation(forRawSampleBuffer: CMSampleBuffer, previewPhotoSampleBuffer: CMSampleBuffer?)
Note that the OP refers to Core Image. As rickster mentioned, CI works on processed image data and therefore only offers processed results (JPEG, TIFF):
CIContext.writeJPEGRepresentation(of:to:colorSpace:options:)
CIContext.writeTIFFRepresentation(of:to:format:colorSpace:options:)
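For example, a minimal sketch of the JPEG variant (the filteredImage and outputURL names are assumptions, not from the original answer):
import CoreImage

// Write a processed CIImage straight to a JPEG file (iOS 10+).
func writeJPEG(_ filteredImage: CIImage, to outputURL: URL) throws {
    let context = CIContext()
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return }
    try context.writeJPEGRepresentation(of: filteredImage, to: outputURL, colorSpace: colorSpace)
}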

Saving DNG (raw photos) taken on other camera to iOS Photo Library

I'm writing an app that reads DNG files from my Ricoh GR II and saves them to the iOS 10 Photo Library, with code like this:
let photoLibrary = PHPhotoLibrary.shared()
photoLibrary.performChanges({
    PHAssetChangeRequest.creationRequestForAsset(from: image)
}) { (success: Bool, error: Error?) -> Void in
    if success {
        print("Saving photo ok")
    } else {
        print("Error writing to photo library: \(error!.localizedDescription)")
    }
}
And I got the error below:
ImageIO: PluginForUTType:316: file format 'com.adobe.raw-image' doesn't support writing
Error writing to photo library: The operation couldn’t be completed. (Cocoa error -1.)
I guess maybe iOS only supports DNG files taken on an iPhone?
PHAssetChangeRequest.creationRequestForAsset(from: image) starts from a UIImage, so it wouldn't get the original file into your library anyway. A UIImage is the displayable result of reading and decoding an image file; by the time you have a UIImage it doesn't know whether it came from a JPEG or a DNG or a GIF or was rendered at run time via CGBitmapContext or whatever. When you try to save via creationRequestForAssetFromImage:, you're taking that end result and turning it back into a file — whatever kind of file that method wants. (Probably JPEG.)
If you want to put an actual DNG file into the photo library, you'll need to use a Photos framework method that takes original files, not decoded images. Furthermore, since not every Photos client can deal with RAW DNGs, Photos requires that every DNG file you put in the library be accompanied by a JPEG representation that non-RAW-supporting apps (sadly, including the Photos app itself) can see.
Luckily, there's an API for that.
PHPhotoLibrary.shared().performChanges({
    let creationRequest = PHAssetCreationRequest.forAsset()
    let creationOptions = PHAssetResourceCreationOptions()
    creationOptions.shouldMoveFile = true
    creationRequest.addResource(with: .photo, data: jpegData, options: nil)
    creationRequest.addResource(with: .alternatePhoto, fileURL: dngFileURL, options: creationOptions)
}, completionHandler: completionHandler)
PHAssetCreationRequest is for creating assets from the underlying resources - one or more image files, video files, some combination thereof (for Live Photos), etc. The photo and alternatePhoto resources are how you provide a DNG file and its accompanying JPEG preview. And the shouldMoveFile option is good for if you don't want to blow your device storage from copying the file from your app sandbox into the Photos library storage — good for big resources like DNGs and 4K video.
(The code snippet is from Apple's Photo Capture guide.)
That said, while Apple's RAW processing supports images from all sorts of third-party cameras, it doesn't look like their list includes any Ricoh models.
That doesn't prevent you from storing Ricoh DNGs in the Photos library, though — it just means the only apps that will be able to usefully read them from the library will need their own Ricoh RAW processing support to see anything but the preview JPEG.

How to read data in rich content files using NSFileHandle

This is my Swift code, simplified:
func readContent(url: NSURL!) {
    do {
        let file: NSFileHandle? = try NSFileHandle(forReadingFromURL: url)
        if file != nil {
            // Skip the first 10 bytes, then read 5 bytes from that offset.
            file?.seekToFileOffset(10)
            let content = file!.readDataOfLength(5)
            print(String(data: content, encoding: NSASCIIStringEncoding))
            file!.closeFile()
        }
    } catch {
        // Swallowing errors silently makes debugging hard; at least log them.
        print("Could not open file: \(error)")
    }
}
I am building a file reader that can read text files and PDFs. I am using NSFileHandle because I need to keep track of the position where the reading is happening.
I am able to read text files without a problem. However, with PDFs I am having two problems:
How do I get the type of encoding used to encode the PDF? NSASCIIStringEncoding, for instance, works fine with the text file but not with the PDF file; I am getting strange characters. I imagine there is a way to detect the encoding. I have been following https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/Introduction/Introduction.html and I haven't found anything addressing this issue on Stack Overflow.
Given that PDFs may contain text, images, and videos, how do I identify these contents while reading? I read that magic numbers might do it (https://en.wikipedia.org/wiki/Magic_number_(programming)#Magic_numbers_in_files), but also that they are not advisable (http://www.techrepublic.com/article/avoid-using-magic-numbers-and-string-literals-in-your-code/). In addition, I have not yet found a guide on how to use them.
Please note that it is important for me to keep track of the offset while reading.
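On the second point: the two linked articles are about different things. The TechRepublic piece warns against magic constants in source code; file-format magic numbers are a standard and perfectly reasonable way to identify a file's type. A small sketch of checking the PDF signature (written with current Swift naming, i.e. FileHandle rather than NSFileHandle):
import Foundation

// Every PDF begins with the ASCII signature "%PDF-", so a 5-byte read
// at offset 0 is enough to identify the format.
func isPDF(url: URL) -> Bool {
    guard let file = try? FileHandle(forReadingFrom: url) else { return false }
    defer { file.closeFile() }
    let header = file.readData(ofLength: 5)
    return String(data: header, encoding: .ascii) == "%PDF-"
}
On the first point, there is no encoding to detect: a PDF body is a binary container, not text in a single encoding, which is why decoding its raw bytes as ASCII produces strange characters. Extracting its text means parsing it as a PDF (for example via CGPDFDocument) rather than decoding raw bytes.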

Saving and masking webcam still AS3 AIR iOS

I am aiming to create an application where the user can take a picture of their face, which includes an overlay of a face cutout. I need the user to be able to tap the screen and for the application to save the picture, mask it with the same face cutout, and then save it to the application's storage.
This is my first time using AIR on iOS with ActionScript 3. I know there is a proper directory that you are supposed to save to on iOS, but I am not aware of it. I have been saving other variables using SharedObjects...
E.g:
var so:SharedObject = SharedObject.getLocal("applicationID");
and then writing to it
so.data['variableID'] = aVariable;
This is how I access the front camera and display it. For some reason, to display the whole video and not a narrow section of it, I add the video from the camera to a movie clip on the stage at 50% of the stage's size.
import flash.media.Camera;
import flash.media.Video;
import flash.display.BitmapData;
import flash.utils.ByteArray;
import com.adobe.images.JPGEncoder;

var camera:Camera = Camera.getCamera("1");
camera.setQuality(0, 100);
camera.setMode(1024, 768, 30, false);

var video:Video = new Video();
video.attachCamera(camera);
videoArea.addChild(video);

Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;
Capture_Picture_BTN.addEventListener(TouchEvent.TOUCH_TAP, savePicture);

function savePicture(event:TouchEvent):void
{
    trace("Saving Picture");
    // Capture the current video frame into a BitmapData object.
    var bitmapData:BitmapData = new BitmapData(1024, 768);
    bitmapData.draw(video);
}
I apologize if this is the wrong way of going about this; I am still fairly new to ActionScript as it is. If you need any more information, I will be happy to provide it.
You can only save ~100 KB of data via SharedObject, so you can't use that. It is meant solely for saving application settings and, in my experience, is ignored by AIR devs because we have better control over the file system.
We have the File and FileStream classes. These classes allow you to read and write directly to and from the device's disk, something not quite possible on the web (where the user has to save/open; it can't be done automatically).
Before my example, I must stress that you should read the documentation. Adobe's LiveDocs are among the best language/SDK docs available, and they point out many things that my quick usage example will not (such as an in-depth discussion of each directory, how to write various types, etc.).
So, here's an example:
// create the File and resolve it to the applicationStorageDirectory, which is where you should save files
var f:File = File.applicationStorageDirectory.resolvePath("name.png");
// this prevents iCloud backup. false by default. Apple will reject anything using this directory for large file saving that doesn't prevent iCloud backup. Can also use cacheDirectory, though certain aspects of AIR cannot access that directory
f.preventBackup = true;
// set up the filestream
var fs:FileStream = new FileStream();
fs.open(f, FileMode.WRITE); //open file to write
fs.writeBytes( BYTE ARRAY HERE ); // writes a byte array to file
fs.close(); // close connection
So that will save to disk. To read, you open the FileStream in READ mode.
var fs:FileStream = new FileStream();
var output:ByteArray = new ByteArray();
fs.open(f, FileMode.READ); // open file to read
fs.readBytes( output ); // reads the file into a byte array
fs.close(); // close connection
Again, please read the documentation. FileStream supports dozens of read and write methods of various types. You need to select the correct one for your situation (readBytes() and writeBytes() should work in all cases, though there are instances where you should use a more specific method).
Hope that helps.

Get YUV planar format image from camera - iOS

I am using AVFoundation to capture still images from the camera (still images, not video frames) using captureStillImageAsynchronouslyFromConnection. This gives me a buffer of type CMSampleBuffer, which I am calling imageDataSampleBuffer.
As far as I understand, this buffer can contain any type of media data, and the type of data is determined when I configure the output settings.
For the output settings, I make a dictionary with the value AVVideoCodecJPEG for the key AVVideoCodecKey.
There is no other codec option. But when I read the AVFoundation Programming Guide's Media Capture chapter, I can see that 420f, 420v, BGRA, and jpeg are the encoded formats supported for the iPhone 3GS (which I am using).
I want to get the yuv420 (i.e. 420v) formatted image data into the imageSampleBuffer. Is that possible?
If I print availableImageDataCodecTypes, I get only JPEG.
If I print availableImageDataCVPixelFormatTypes, I get three numbers: 875704422, 875704438, 1111970369.
Is it possible that these three numbers map to 420f, 420v, and BGRA?
If yes, which key should I modify in my output settings?
I tried putting the value: [NSNumber numberWithInt:875704438] for key: (id)kCVPixelBufferPixelFormatTypeKey.
Would it work?
If yes, how do I extract this data from the imageSampleBuffer?
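For what it's worth, those numbers are FourCC codes, four ASCII characters packed into a UInt32, and they decode to exactly the formats guessed above. A small sketch (not from the original post) that checks the mapping:
import Foundation

// Decode a Core Video pixel-format code back to its four-character name.
func fourCCString(_ code: UInt32) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((code >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

print(fourCCString(875704422))  // "420f" (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
print(fourCCString(875704438))  // "420v" (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
print(fourCCString(1111970369)) // "BGRA" (kCVPixelFormatType_32BGRA)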
Also, in which format is a UIImage stored? Can it be any format? Is it just NSData with some extra info that makes it interpretable as an image?
I have been trying to use the method from this question:
Raw image data from camera like "645 PRO"
I am saving the data using writeToFile, and I have been trying to open the file with IrfanView.
But I am unable to verify whether or not the saved file is in YUV format, because IrfanView gives an error saying it is unable to read the headers.
