Capturing a still image from the camera using GPUImage2 - iOS

I have a photo/GIF app that I previously built using AVFoundation as the base for the camera and photo capture, but I wanted to upgrade it to add some live filtering and post-capture filtering too.
After some digging I found GPUImage/GPUImage2, and since my project is in Swift 3 I started replacing my previous camera module with GPUImage2.
I got the camera working again, but I'm having trouble capturing a photo from the camera and storing it as a UIImage until it is uploaded to a server.
do {
    self.videoCamera = try Camera(sessionPreset: AVCaptureSessionPresetPhoto, location: .frontFacing)
} catch {
    self.videoCamera = nil
    print("Couldn't initialize camera with error: \(error)")
}
This is my init, and this is where I place the camera feed in the view:
self.filterView!.frame = self.view.frame
self.filterView!.orientation = .portraitUpsideDown
self.filterView!.fillMode = .preserveAspectRatioAndFill
self.videoCamera! --> self.filterView!
self.videoCamera!.startCapture()
As you can see, for the moment I don't want to use any filters; I'm trying to first get the basic functionality back (i.e. showing a camera feed, then taking 1-5 images in a row).
I noticed there is a saveNextFrameToURL, but it saves the file to the device and I only want the UIImage, so this is what I used to replace the content of my takePhoto method (images is nil on the first run):
func takePhoto() {
    if self.images == nil {
        self.images = []
    }
    let pictureOutput = PictureOutput()
    pictureOutput.encodedImageFormat = .jpeg
    pictureOutput.imageAvailableCallback = { image in
        self.images!.append(image)
    }
    self.videoCamera! --> pictureOutput
}
My issue is that imageAvailableCallback is simply never called (I tried placing a breakpoint in it, but nothing), whereas the rest of the method runs through without raising any errors or warnings.
What am I doing wrong? Is it even possible to capture a still image from a non-filtered view? If so, how can I add a filter that doesn't change the image, so I can still do some unedited photo capture in my app?
I've been going at it for over two weeks now, and every time I search for anyone with the same issue I only find questions about editing a still image or a filtered image. When I tried filtering the image like this:
self.filterView!.frame = self.view.frame
self.filterView!.orientation = .portraitUpsideDown
self.filterView!.fillMode = .preserveAspectRatioAndFill
self.baseFilter = BrightnessAdjustment()
self.videoCamera! --> self.baseFilter --> self.filterView!
self.videoCamera!.startCapture()
and the takePhoto method:
func takePhoto() {
    if self.images == nil {
        self.images = []
    }
    let pictureOutput = PictureOutput()
    pictureOutput.encodedImageFormat = .jpeg
    pictureOutput.imageAvailableCallback = { image in
        self.images!.append(image)
    }
    self.baseFilter! --> pictureOutput
}
I get a white screen instead of my camera feed and still no image.
Any help would be appreciated, thank you.

I found where my problem was by looking at similar issues and their comments (I found one where Brad Larson commented here).
Basically, it has to do with the lifespan of my pictureOutput variable: since it was declared inside a method, it didn't live long enough for the callback to fire and the image to be saved. By making pictureOutput a class-level property, I solved my issue.
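For reference, a minimal sketch of that fix. The class and property names are my own, and onlyCaptureNextFrame is used on the assumption that your GPUImage2 version exposes it:
import GPUImage

class CameraViewController: UIViewController {
    var videoCamera: Camera?
    var images: [UIImage]? = []
    // Held as a property so it outlives takePhoto() and the callback can still fire.
    var pictureOutput = PictureOutput()

    func takePhoto() {
        pictureOutput = PictureOutput()
        pictureOutput.encodedImageFormat = .jpeg
        pictureOutput.onlyCaptureNextFrame = true // grab a single frame per tap, if supported
        pictureOutput.imageAvailableCallback = { [weak self] image in
            self?.images?.append(image)
        }
        videoCamera! --> pictureOutput
    }
}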

Related

Why is the Vision framework unable to align two images?

I'm trying to take two images using the camera, and align them using the iOS Vision framework:
func align(firstImage: CIImage, secondImage: CIImage) {
    let request = VNTranslationalImageRegistrationRequest(
        targetedCIImage: firstImage) { request, error in
            if error != nil {
                fatalError()
            }
            let observation = request.results!.first as! VNImageTranslationAlignmentObservation
            let alignedSecondImage = secondImage.transformed(by: observation.alignmentTransform)
            let compositedImage = firstImage.applyingFilter(
                "CIAdditionCompositing",
                parameters: ["inputBackgroundImage": alignedSecondImage])
            // Save the compositedImage to the photo library.
    }
    try! visionHandler.perform([request], on: secondImage)
}

let visionHandler = VNSequenceRequestHandler()
But this produces grossly mis-aligned images:
You can see that I've tried three different types of scenes — a close-up subject, an indoor scene, and an outdoor scene. I tried more outdoor scenes, and the result is the same in almost every one of them.
I was expecting a slight misalignment at worst, but not such a complete misalignment. What is going wrong?
I'm not passing the orientation of the images into the Vision framework, but that shouldn't be a problem for aligning images. It's a problem only for things like face detection, where a rotated face isn't detected as a face. In any case, the output images have the correct orientation, so orientation is not the problem.
My compositing code is working correctly. It's only the Vision framework that's a problem. If I remove the calls to the Vision framework and put the phone on a tripod, the composition works perfectly. There's no misalignment. So the problem is the Vision framework.
This is on iPhone X.
How do I get Vision framework to work correctly? Can I tell it to use gyroscope, accelerometer and compass data to improve the alignment?
You should set secondImage as the target image, and perform the handler with firstImage.
I used your compositing approach.
Check out this example from MLBoy:
let request = VNTranslationalImageRegistrationRequest(targetedCIImage: image2, options: [:])
let handler = VNImageRequestHandler(ciImage: image1, options: [:])
do {
    try handler.perform([request])
} catch let error {
    print(error)
}
guard let observation = request.results?.first as? VNImageTranslationAlignmentObservation else { return }
let alignmentTransform = observation.alignmentTransform
image2 = image2.transformed(by: alignmentTransform)
let compositedImage = image1.applyingFilter("CIAdditionCompositing", parameters: ["inputBackgroundImage": image2])

GPUImageView stops responding to "Filter Change" after two times

I'm probably missing something. I'm trying to change the filter on my GPUImageView. It actually works the first two times (sometimes only one time), and then stops responding to changes. I couldn't find a way to remove the target from my GPUImageView.
Code
for x in filterOperations {
    x.filter.removeAllTargets()
}
let f = filterOperations[randomIntInRange].filter
let media = GPUImagePicture(image: self.largeImage)
media?.addTarget(f as! GPUImageInput)
f.addTarget(g_View)
media?.processImage()
Any suggestions? (I'm processing a still image from my library.)
UPDATE
Updated Code
// Global
var g_View: GPUImageView!
var media = GPUImagePicture()

override func viewDidLoad() {
    super.viewDidLoad()
    media = GPUImagePicture(image: largeImage)
}

func changeFilter(filterIndex: Int) {
    media.removeAllTargets()
    let f = returnFilter(filterIndex) // i.e. GPUImageSepiaFilter()
    media.addTarget(f as! GPUImageInput)
    f.addTarget(g_View)

    // second part
    f.useNextFrameForImageCapture()
    let sema = dispatch_semaphore_create(0)
    media.processImageWithCompletionHandler({
        dispatch_semaphore_signal(sema)
        return
    })
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER)
    let img = f.imageFromCurrentFramebufferWithOrientation(largeImage.imageOrientation)
    if img != nil {
        // Usable - update UI
    } else {
        // Something went wrong
    }
}
My primary suggestion would be to not create a new GPUImagePicture every time you want to change the filter or its options that you're applying to an image. This is an expensive operation, because it requires a pass through Core Graphics and a texture upload to the GPU.
Also, since you're not maintaining a reference to your GPUImagePicture beyond the above code, it is being deallocated as soon as you pass out of scope. That tears down the render chain and will lead to a black image or even crashes. processImage() is an asynchronous operation, so it may still be in action at the time you exit your above scope.
Instead, create and maintain a reference to a single GPUImagePicture for your image, swap out filters (or change the options for existing filters) on that, and target the result to your GPUImageView. This will be much faster, churn less memory, and won't leave you open to premature deallocation.
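To illustrate, a minimal sketch of that structure, reusing the names from the question (returnFilter, g_View, largeImage); treat it as a rough outline under those assumptions rather than drop-in code:
// One long-lived GPUImagePicture; only the filter is swapped when the user picks a new one.
var media: GPUImagePicture!
var currentFilter: GPUImageFilter?

override func viewDidLoad() {
    super.viewDidLoad()
    media = GPUImagePicture(image: largeImage) // expensive: do this only once
}

func changeFilter(filterIndex: Int) {
    // Detach the previous filter, keep the picture alive.
    media.removeAllTargets()
    currentFilter?.removeAllTargets()

    // Assumes returnFilter(_:) hands back something castable to GPUImageFilter,
    // e.g. GPUImageSepiaFilter().
    let f = returnFilter(filterIndex) as! GPUImageFilter
    media.addTarget(f)
    f.addTarget(g_View)
    currentFilter = f

    media.processImage()
}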

iOS: making video file from many images generated by code

I need to make a video file from thousands of images generated by code.
I cannot put those images into an array because there are too many of them: a 30-second-long video would consist of 1800 image frames. And I don't get all the images at once.
To get each image, the app first triggers a JavaScript function in the WebView asking whether a video frame (a UIImage) should be generated. If yes, my code makes a UIImage for one video frame at a time; then the app does other things, and at some point it asks the web view again for permission to generate another image. It does this a thousand times or more.
If the delegate gets a message that says no, the last image was the final video frame, so a complete video file should be assembled at this point and saved to the Documents directory.
How should I do this? Objective-C solutions would be acceptable too. Thank you in advance.
// ask webview if another video frame should be made
func askWebView() {
    webView?.evaluateJavaScript("Ask JS function", completionHandler: nil)
}
Delegate
// Delegate method
func userContentController(userContentController: WKUserContentController, didReceiveScriptMessage message: WKScriptMessage) {
    let body: String = message.body as! String
    if body == "makeAFrame" {
        let videoFrame = self.makeImage()
        // should assemble video frames
    } else {
        // No more video frames; write a complete video file to the Documents directory
    }
}
Generating a UIImage for a video frame:
func makeImage() -> UIImage {
//make an image and return it
}
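A common way to do this is with AVAssetWriter and an AVAssetWriterInputPixelBufferAdaptor: start a writer once, append one pixel buffer per frame as the delegate receives each makeAFrame message, and finish writing when the final message arrives. Below is a rough sketch; the class, its method names, the frame rate, and the pixel-buffer conversion are my own assumptions, not code from the question.
import AVFoundation
import UIKit

final class FrameVideoWriter {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var frameCount: Int64 = 0
    private let fps: Int32 = 60 // 1800 frames / 30 s in the question

    init(outputURL: URL, size: CGSize) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: size.width,
            AVVideoHeightKey: size.height
        ]
        input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
                kCVPixelBufferWidthKey as String: size.width,
                kCVPixelBufferHeightKey as String: size.height
            ])
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)
    }

    /// Call once per "makeAFrame" message.
    func append(_ image: UIImage) {
        guard let buffer = FrameVideoWriter.pixelBuffer(from: image) else { return }
        // Simplistic wait; production code would use requestMediaDataWhenReady(on:using:).
        while !input.isReadyForMoreMediaData { }
        let time = CMTime(value: frameCount, timescale: fps)
        adaptor.append(buffer, withPresentationTime: time)
        frameCount += 1
    }

    /// Call when the web view says there are no more frames.
    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }

    // Draw the UIImage into a CVPixelBuffer the adaptor can accept.
    private static func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
        guard let cgImage = image.cgImage else { return nil }
        var buffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, cgImage.width, cgImage.height,
                            kCVPixelFormatType_32ARGB, nil, &buffer)
        guard let pixelBuffer = buffer else { return nil }
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
        let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                width: cgImage.width, height: cgImage.height,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        return pixelBuffer
    }
}
With something like this, the delegate above would create one FrameVideoWriter when the first frame arrives, call append(_:) for each makeAFrame message, and call finish(completion:) in the else branch.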

Save image with the correct orientation - Swift & Core Image

I'm using Core Image in Swift to edit photos, and I have a problem when I save the photo: I'm not saving it with the correct orientation.
When I get the picture from the photo library, I save its orientation in a variable as a UIImageOrientation, but I don't know how to set it back before saving the edited photo to the photo library. Any ideas how?
Saving the orientation:
var orientation: UIImageOrientation = .Up
orientation = gotImage.imageOrientation
Saving the edited photo to the Photo Library:
@IBAction func savePhoto(sender: UIBarButtonItem) {
    let originalImageSize = CIImage(image: gotImage)
    filter.setValue(originalImageSize, forKey: kCIInputImageKey)
    // 1
    let imageToSave = filter.outputImage
    // 2
    let softwareContext = CIContext(options: [kCIContextUseSoftwareRenderer: true])
    // 3
    let cgimg = softwareContext.createCGImage(imageToSave, fromRect: imageToSave.extent())
    // 4
    let library = ALAssetsLibrary()
    library.writeImageToSavedPhotosAlbum(cgimg,
        metadata: imageToSave.properties(),
        completionBlock: nil)
}
Instead of using the metadata version of writeImageToSavedPhotosAlbum, you can use:
library.writeImageToSavedPhotosAlbum(
    cgimg,
    orientation: orientation,
    completionBlock: nil)
then you can pass in the orientation directly.
To satisfy Swift, you may need to typecast it first:
var orientation: ALAssetOrientation = ALAssetOrientation(rawValue: gotImage.imageOrientation.rawValue)!
As per my somewhat inconclusive answer here.
(As you have confirmed, this solution does indeed work in Swift - I derived it, untested, from working Objective-C code)
If you are interested in manipulating other information from image metadata, here are a few related answers I provided to other questions...
Updating UIImage orientation metaData?
Force UIImagePickerController to take photo in portrait orientation/dimensions iOS
How to get author of image in cocoa
Getting a URL from (to) a "picked" image, iOS
And a small test project on github that digs out image metadata from various sources (it's objective-C, but the principles are the same).
You are calling writeImageToSavedPhotosAlbum:metadata:completionBlock:... The docs on that method say:
You must specify the orientation key in the metadata dictionary to preserve the orientation of the image.
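If you do want to keep the metadata variant, here is a sketch of adding the orientation key yourself. It is written in current Swift syntax (the question's code is older Swift), and the UIImage-to-EXIF mapping is my own, so verify it against your images:
import ImageIO
import UIKit

// Map UIImage.Orientation to the EXIF/TIFF orientation value (1...8) expected
// under kCGImagePropertyOrientation in an image metadata dictionary.
func exifOrientation(for orientation: UIImage.Orientation) -> Int {
    switch orientation {
    case .up: return 1
    case .upMirrored: return 2
    case .down: return 3
    case .downMirrored: return 4
    case .leftMirrored: return 5
    case .right: return 6
    case .rightMirrored: return 7
    case .left: return 8
    @unknown default: return 1
    }
}

// Merge the orientation into whatever metadata you already pass when saving.
var metadata: [String: Any] = [:] // e.g. the filtered CIImage's properties
metadata[kCGImagePropertyOrientation as String] = exifOrientation(for: gotImage.imageOrientation)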

Knowing resolution of AVCaptureSession's session presets

I'm accessing the camera in iOS and using session presets as so:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting due to this preset (especially because depending on the device it'll be different). I know there are tables online you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ). But I'd like to be able to get this programmatically so that I'm not just relying on magic numbers.
So, something like this (theoretically):
[captureSession resolutionForPreset:AVCaptureSessionPresetMedium];
which might return a CGSize of { width: 360, height: 480}. I have not been able to find any such API, so far I've had to resort to waiting to get my first captured image and querying it then (which for other reasons in my program flow is not good).
I am no AVFoundation pro, but I think the way to go is:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureInput *input = [captureSession.inputs objectAtIndex:0]; // maybe search the input in array
AVCaptureInputPort *port = [input.ports objectAtIndex:0];
CMFormatDescriptionRef formatDescription = port.formatDescription;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
I'm not sure about the last step and I didn't try it myself. Just found that in the documentation and think it should work.
Searching for CMVideoDimensions in Xcode you'll find the RosyWriter example project. Have a look at that code (I don't have time to do that now).
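For what it's worth, a rough Swift sketch of the same idea, untested here (and, as the reply from Apple further down explains, the format description may not be populated until the session has actually been running for a moment):
import AVFoundation

// Read the format description from the first video input port once the session is configured.
captureSession.sessionPreset = .medium
if let input = captureSession.inputs.first,
   let port = input.ports.first,
   let formatDescription = port.formatDescription {
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
    print("Capture dimensions: \(dimensions.width) x \(dimensions.height)")
}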
You can programmatically get the resolution from activeFormat before capture begins, though not before adding inputs and outputs: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat
private func getCaptureResolution() -> CGSize {
    // Define default resolution
    var resolution = CGSize(width: 0, height: 0)

    // Get cur video device
    let curVideoDevice = useBackCamera ? backCameraDevice : frontCameraDevice

    // Set if video portrait orientation
    let portraitOrientation = orientation == .Portrait || orientation == .PortraitUpsideDown

    // Get video dimensions
    if let formatDescription = curVideoDevice?.activeFormat.formatDescription {
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        resolution = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
        if portraitOrientation {
            resolution = CGSize(width: resolution.height, height: resolution.width)
        }
    }

    // Return resolution
    return resolution
}
FYI, I attach here an official reply from Apple.
This is a follow-up to Bug ID# 13201137.
Engineering has determined that this issue behaves as intended based on the following information:
There are several problems with the included code:
1) The AVCaptureSession has no inputs.
2) The AVCaptureSession has no outputs.
Without at least one input (added to the session using [AVCaptureSession addInput:]) and a compatible output (added using [AVCaptureSession addOutput:]), there will be no active connections, therefore, the session won't actually run in the input device. It doesn't need to -- there are no outputs to which to deliver any camera data.
3) The JAViewController class assumes that the video port's -formatDescription property will be non nil as soon as [AVCaptureSession startRunning] returns.
There is no guarantee that the format description will be updated with the new camera format as soon as startRunning returns. -startRunning starts up the camera and returns when it is completely up and running, but doesn't wait for video frames to be actively flowing through the capture pipeline, which is when the format description would be updated.
You're just querying too fast. If you waited a few milliseconds more, it would be there. But the right way to do this is to listen for the AVCaptureInputPortFormatDescriptionDidChangeNotification.
4) Your JAViewController class creates a PVCameraInfo object in retrieveCameraInfo: and asks it a question, then lets it fall out of scope, where it is released and dealloc'ed.
Therefore, the session doesn't have long enough to run to satisfy your dimensions request. You stop the camera too quickly.
We consider this issue closed. If you have any questions or concern regarding this issue, please update your report directly (http://bugreport.apple.com).
Thank you for taking the time to notify us of this issue.
Best Regards,
Developer Bug Reporting Team
Apple Worldwide Developer Relations
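As a small illustration of point 3 above, a sketch of listening for that notification in Swift (my own code, not from the report; it assumes the notification's object is the AVCaptureInput.Port whose format description changed, as documented):
import AVFoundation

NotificationCenter.default.addObserver(
    forName: .AVCaptureInputPortFormatDescriptionDidChange,
    object: nil,
    queue: .main) { notification in
        guard let port = notification.object as? AVCaptureInput.Port,
              port.mediaType == .video,
              let formatDescription = port.formatDescription else { return }
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        print("Preset resolution: \(dimensions.width) x \(dimensions.height)")
}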
According to Apple, there's no API for that. It stinks, I've had the same problem.
Maybe you can provide a list of all possible preset resolutions for every iPhone model and check which device model the app is running on, using something like this...
[[UIDevice currentDevice] platformType] // ex: UIDevice4GiPhone
[[UIDevice currentDevice] platformString] // ex: #"iPhone 4G"
However, you have to update the list for each newer device model. Hope this helps :)
If the preset is .photo, the returned size is the still photo size, not the preview video size.
If the preset is not .photo, the returned size is the video size, not the captured photo size.
if self.session.sessionPreset != .photo {
    // return video size, not captured photo size
    let format = videoDevice.activeFormat
    let formatDescription = format.formatDescription
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
} else {
    // other way to get video size
}
The answer from @Christian Beer is a good way for a specified preset.
My approach works for the active preset.
The best way to do what you want (get a known video or image format) is to set the format of the capture device.
First find the capture device you want to use:
if #available(iOS 10.0, *) {
    captureDevice = defaultCamera()
} else {
    let devices = AVCaptureDevice.devices()
    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device as AnyObject).hasMediaType(AVMediaType.video) {
            // Finally check the position and confirm we've got the back camera
            if (device as AnyObject).position == AVCaptureDevice.Position.back {
                captureDevice = device as AVCaptureDevice
            }
        }
    }
}
self.autoLevelWindowCenter = ALCWindow.frame
if captureDevice != nil && currentUser != nil {
    beginSession()
}
func defaultCamera() -> AVCaptureDevice? {
    if #available(iOS 10.0, *) { // only use the wide angle camera, never the dual camera
        if let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                                for: AVMediaType.video,
                                                position: .back) {
            return device
        } else {
            return nil
        }
    } else {
        return nil
    }
}
Then find the formats that that device can use:
let options = captureDevice!.formats
var supportable = options.first as! AVCaptureDevice.Format
for format in options {
    let testFormat = format
    let description = testFormat.description
    if description.contains("60 fps") && description.contains("1280x 720") {
        supportable = testFormat
    }
}
You can do more complex parsing of the formats, but you might not care.
Then just set the device to that format:
do {
    try captureDevice?.lockForConfiguration()
    captureDevice!.activeFormat = supportable
    // setup other capture device stuff like autofocus, frame rate, ISO, shutter speed, etc.
    captureDevice!.unlockForConfiguration()
    // add the device to an active CaptureSession
    try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))
} catch {
    print("Could not configure the capture device: \(error)")
}
You may want to look at the AVFoundation docs and tutorial on AVCaptureSession as there are lots of things you can do with the output as well. For example, you can convert the result to .mp4 using AVAssetExportSession so that you can post it on YouTube, etc.
Hope this helps
Apple uses a 4:3 ratio for the iPhone camera.
You can use this ratio to get the frame size of the captured video by fixing either the width or the height constraint of the AVCaptureVideoPreviewLayer and setting the aspect ratio constraint to 4:3.
In the left image, the width was fixed to 300px and the height was retrieved by setting the 4:3 ratio, and it was 400px.
In the right image, the height was fixed to 300px and width was retrieved by setting the 3:4 ratio, and it was 225px.
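A minimal sketch of that idea (previewContainer and previewLayer are hypothetical names for a container view and its AVCaptureVideoPreviewLayer):
// Fix the width and derive the height from the 4:3 capture ratio
// (300 * 4 / 3 = 400, matching the left-hand example above).
previewContainer.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    previewContainer.widthAnchor.constraint(equalToConstant: 300),
    previewContainer.heightAnchor.constraint(equalTo: previewContainer.widthAnchor, multiplier: 4.0 / 3.0)
])

// Then size the preview layer to match, e.g. in viewDidLayoutSubviews():
previewLayer.frame = previewContainer.bounds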
