Apple's AVCamera Night Mode - iOS

I've been building a camera app using AVFoundation and wanted to add Night Mode support to it.
Apple features a stunning implementation for this, more about it can be read here: https://www.macrumors.com/guide/night-mode/
Now, the only "property" I can find related to "night mode" is low-light boost, which seems to be an iPhone 5-only feature:
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624602-islowlightboostenabled
https://forums.developer.apple.com/thread/52574
I'd like to take advantage of Apple's native Night Mode and implement it in my camera app. Is there any way to do so? Is this a feature that might be added to the SDK in an upcoming release? Did I miss something in the SDK?

Well, some time ago I tried to replicate what Apple calls Night Mode.
Of course, there is no reference to it inside AVFoundation, just as there isn't for HDR or Smart HDR.
The point is: what does Night Mode do? How does Apple achieve this "effect"?
Answering these questions explains why there is nothing for it inside AVFoundation.
They basically blend multiple exposures, replicating what in photography is called a "long exposure". The only way to do something similar would be to extract frames one by one from didOutputSampleBuffer and then find a (good) way to blend them.
Of course there is a lot more involved, since each frame in the buffer has already been pre-processed by the ISP.
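For illustration, here is a rough Swift sketch of that extract-and-blend idea: just a naive average of a few frames taken from the video data output. This is not Apple's Night Mode pipeline (which also involves frame alignment, tone mapping and ISP work); the class name `NightBlender` and the frame count are made up for the example.

```swift
import AVFoundation
import CoreImage

// Naive "long exposure": accumulate N frames from the video data output
// and average them with Core Image. Purely illustrative.
final class NightBlender: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()
    private var frames: [CIImage] = []
    private let frameCount = 8   // how many exposures to accumulate (arbitrary)

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard frames.count < frameCount,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        frames.append(CIImage(cvPixelBuffer: pixelBuffer))
        if frames.count == frameCount {
            let blended = blend(frames)
            // ... render `blended` with `context` or hand it to your photo pipeline
            _ = blended
        }
    }

    private func blend(_ images: [CIImage]) -> CIImage {
        // Weight each frame by 1/N and sum them: a simple average of the exposures.
        let weight = CGFloat(1.0 / Double(images.count))
        let weighted = images.map { image -> CIImage in
            image.applyingFilter("CIColorMatrix", parameters: [
                "inputRVector": CIVector(x: weight, y: 0, z: 0, w: 0),
                "inputGVector": CIVector(x: 0, y: weight, z: 0, w: 0),
                "inputBVector": CIVector(x: 0, y: 0, z: weight, w: 0)
            ])
        }
        return weighted.dropFirst().reduce(weighted[0]) { acc, img in
            img.applyingFilter("CIAdditionCompositing",
                               parameters: [kCIInputBackgroundImageKey: acc])
        }
    }
}
```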

Related

How to reproduce watchOS closing activity ring animation with sparks on iOS?

How to best reproduce the closing activity ring animation from watchOS 4 on iOS? I'm particularly interested in the rotating sparkling effect.
Here is a still frame of the animation I'm talking about:
and here is a video of it.
Is it possible to implement something like this with Core Animation?
Here at the University of Science in Zürich, in the usability lab, we use:
Sketch, Illustrator, or designer.gravit.io for designing the SVG sketches;
then we import them into After Effects or Haiku.ai for animating,
and export the result as .json for Airbnb's animation library Bodymovin, also known as Lottie. Libraries for web, Android, and iOS are available for it.
The advantage of this solution over @bryanjclark's "exported it as a series of images" approach is that the animation is sharp at every resolution (SVG), it is only one .json file, and you have full control over its speed and frames.
Otherwise, if you really want to do it with code only, take a look at this article, done with OpenGL ES 2.0.
Or see the AnimationCore example in this SO answer.
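For completeness, playing the exported .json on iOS with lottie-ios might look like this minimal sketch (the API shown is from lottie-ios 4.x; the asset name "activity_ring" is just a placeholder):

```swift
import Lottie
import UIKit

// Load a bundled Lottie .json (exported via Bodymovin) and play it once.
let animationView = LottieAnimationView(name: "activity_ring")
animationView.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
animationView.contentMode = .scaleAspectFit
animationView.loopMode = .playOnce
animationView.animationSpeed = 1.0      // full control over playback speed
animationView.play()
```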
I’m nearly certain that is a pre-rendered animation, not something generated on-device. (If it is generated on-device, it’s not something you’d have API access to.)
I’d bet that:
a designer worked it up in a tool like After Effects,
exported it as a series of images,
and then the developers implemented it using something like WKImageAnimatable.
You can see other developers using WKImageAnimatable to build gorgeous animations in their WatchKit apps - for example, Cultured Code’s app Things (watch the video there!) has some really terrific little animation flourishes that (almost definitely) use WKImageAnimatable under the hood!
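On the iOS side (rather than WatchKit), the counterpart of that image-sequence approach would be UIImageView's built-in frame animation. A minimal sketch, with entirely hypothetical frame names:

```swift
import UIKit

// Pre-rendered image-sequence playback on iOS.
// Frame names ("ring_0" ... "ring_59") are placeholders for whatever the designer exported.
let frames: [UIImage] = (0..<60).compactMap { UIImage(named: "ring_\($0)") }

let ringView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
ringView.animationImages = frames
ringView.animationDuration = 2.0   // seconds for one pass through the sequence
ringView.animationRepeatCount = 1  // play once, like the closing-ring flourish
ringView.startAnimating()
```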

Creating a 360 photo experience on iOS mobile device

I am interested in VR and am trying to get a bit more information. I want to create a similar experience on iOS where I can take a 360 image and view it on an iOS device by tilting the phone around, using the device's gyroscope; as I tilt the phone, it pans around the 360 image (like the tilt gesture in Google Street View).
And something similar to this app: http://bubb.li/
Can anybody give a brief overview of how this would be doable, and point me to any sources that could help me achieve this (APIs, etc.)?
Much appreciated.
Two options here: You can use a dedicated device to capture the image for you, or you can write some code to stitch together multiple images taken from the iOS device as you move it around a standing point.
I've used the Ricoh Theta for this (no affiliation). They have a 360 viewer in the SDK for mapping 360 images to a sphere that works exactly as you've asked.
Assuming you've figured out how to create 360 photo spheres, you can use Unity, Unreal, or probably other development platforms to create navigation between the locations you captured.
Here is a tutorial that looks pretty detailed for doing this in Unity:
https://tutorialsforvr.com/creating-virtual-tour-app-in-vr-using-unity/
One pro of doing this in something like Unity or Unreal is that once you have navigation between multiple photo spheres working, it's fairly easy to add animation or other interactive elements. I've seen interactive stories done with 360 video using this method.
(I see that the question is from a fairly long time ago, but it was one of the top results when I looked for this same topic)
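If you'd rather stay native instead of using Unity or Unreal, a minimal sketch of the viewing side with SceneKit and Core Motion could look like this (the equirectangular asset name "pano.jpg" is hypothetical, and the quaternion mapping may need adjusting for your device orientation):

```swift
import SceneKit
import CoreMotion
import UIKit

final class PanoramaViewController: UIViewController {
    private let sceneView = SCNView()
    private let cameraNode = SCNNode()
    private let motionManager = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        let scene = SCNScene()
        sceneView.scene = scene

        // Sphere with the equirectangular panorama mapped onto it; the camera sits inside.
        let sphere = SCNSphere(radius: 10)
        sphere.firstMaterial?.diffuse.contents = UIImage(named: "pano.jpg") // placeholder asset
        sphere.firstMaterial?.cullMode = .front  // render the inside of the sphere
        // You may also need to flip the texture horizontally (diffuse.contentsTransform)
        // to avoid a mirrored panorama.
        scene.rootNode.addChildNode(SCNNode(geometry: sphere))

        // Camera at the centre of the sphere.
        cameraNode.camera = SCNCamera()
        scene.rootNode.addChildNode(cameraNode)

        // Drive the camera orientation from device motion (gyroscope + accelerometer).
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let q = motion?.attitude.quaternion else { return }
            // Map Core Motion's attitude into SceneKit's coordinate space.
            self?.cameraNode.orientation = SCNQuaternion(Float(q.x), Float(q.y),
                                                         Float(q.z), Float(q.w))
        }
    }
}
```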

Reading QR codes on iOS with AVCaptureSession -- alignment issues?

We have implemented a QRCode reading function in iOS using the AVCaptureSession class, as described nicely here:
https://github.com/HEmobile/ScanBarCode/tree/master/ScanBarCodes
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureSession_Class/
But one thing we notice... the QR code has to be aligned exactly vertically or horizontally. Oblique angles such as 45 degrees do not trigger a scan. This issue doesn't really turn up on Google, which is surprising.
Our experiments with other QR code reading apps indicate that this limitation does not exist there. Presumably (since the built-in function is new) these apps don't use AVCaptureSession.
Our question is: is this a sign that Apple's version of this function is not mature yet? Or is there some option to enable or improve this capability?
Thanks for any thoughts.
It seems you've introduced some sort of limitation in your own code. Check out my GitHub repo: https://github.com/alexekoren/qr-3d
It was built specifically for reading QR codes at angles in a pretty way. I'm testing it now and it reads codes at 30-45 degrees easily.
Here's the direct link for everything you'll need to make the scanner object that can present on a UIView: https://github.com/AlexEKoren/QR-3D/blob/master/Code%20Scanner/Scanner/CSScanner.m
It should work out of the box!
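For reference, a bare-bones AVCaptureSession QR setup in Swift looks roughly like this. In our experience AVCaptureMetadataOutput itself recognises codes at arbitrary rotations, so if oblique codes are missed, the culprit is usually something in your own configuration (for example a narrow rectOfInterest):

```swift
import AVFoundation
import UIKit

// Minimal QR scanner: AVCaptureMetadataOutput handles the detection.
final class QRScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device) else { return }
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        session.addOutput(output)
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.qr]   // restrict detection to QR codes
        // Note: rectOfInterest defaults to the full frame; shrinking it can cause missed reads.

        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = view.bounds
        preview.videoGravity = .resizeAspectFill
        view.layer.addSublayer(preview)

        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for object in metadataObjects {
            if let code = object as? AVMetadataMachineReadableCodeObject, code.type == .qr {
                print("Scanned: \(code.stringValue ?? "")")
            }
        }
    }
}
```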

Is it possible to use Vuforia without a camera?

Is it possible to use Vuforia without a camera for image tracking?
Basically I would like a function I could call with an image as an input parameter and the coordinates of an image target as the result. Does that exist?
It is unfortunately not possible. I've been looking for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested in it), but the Vuforia SDK prevents the use of any other source than the camera.
I guess the main reason for this is that camera management is fully handled internally by the Vuforia SDK, probably in order to make it easier to use, as managing the camera ourselves is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain in the ass (especially on Android, where there are sometimes devices that don't behave as expected).
By the way, it looks to me like the Vuforia SDK is not the best solution for your use case: it is mainly an augmented-reality SDK focused on real-time tracking, which implies working with a camera stream... so using it to do "simple" image recognition really looks like overkill!

How to take a screenshot of OpenGL ES on top of the live camera preview in iOS (augmented reality app)?

I am a complete beginner in Objective-C and iOS programming. I spent a month finding out how to show a 3D model using OpenGL ES (version 1.1) on top of the live camera preview using AVFoundation. I am building a kind of augmented reality application on iPad: I process the input frames and show a 3D object overlaid on the camera preview in real time. That part was fine, because there are so many sites and tutorials about these things (thanks to this website as well).
Now, I want to capture the whole screen (the model with the camera preview as the background) as an image and show it on the next screen. I found a really good demonstration here: http://cocoacoderblog.com/2011/03/30/screenshots-a-legal-way-to-get-screenshots/. He does everything I want to do, but, as I said before, I am such a beginner that I don't understand the whole project without a detailed explanation.
Does anybody know of a good tutorial or any other kind of source on this topic, or have any suggestions for what I should learn in order to do this screen capture? It would help me a lot in moving on.
Thank you in advance.
I'm currently attempting to solve this same problem, to allow a user to take a screenshot of an augmented reality app. (We use Qualcomm's AR SDK plugged into Unity 3D to make our AR apps, which saved me from ever having to learn how to programmatically render OpenGL models.)
For my solution I am first looking at implementing the second answer found here: How to take a screenshot programmatically
Barring that I will have to re-engineer the "Combined Screenshots" method found in CocoaCoder's Screenshots app.
I'll check back in when I figure out which one works better.
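For what it's worth, the compositing step of that "Combined Screenshots" idea reduces to drawing the two captured images on top of each other. A minimal sketch, assuming you have already obtained the camera frame and the GL overlay snapshot as UIImages (how you capture them depends on your own rendering setup):

```swift
import UIKit

// Composite a camera frame (e.g. grabbed from the video data output) with a
// snapshot of the transparent GL overlay into one screenshot image.
func combineScreenshot(cameraFrame: UIImage, glSnapshot: UIImage, size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        cameraFrame.draw(in: CGRect(origin: .zero, size: size)) // background first
        glSnapshot.draw(in: CGRect(origin: .zero, size: size))  // then the 3D overlay
    }
}
```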
Here are three very helpful links for capturing a screenshot:
OpenGL ES View Snapshot
How to capture video frames from the camera as images using AV Foundation
How do I take a screenshot of my app that contains both UIKit and Camera elements
Enjoy
