I have a camera that outputs a RAW8 stream with the following Bayer pattern:
B Gr B Gr...
Gr R Gr R...
Can I capture the stream with the DirectShow Sample Grabber filter?
If not, is there another API I can use?
You can, provided the camera is DirectShow compatible. In that case you will see it listed in GraphEdit under video capture devices, and you will also be able to inspect the media types the camera advertises.
I'm trying to create a camera capture session that has the following features:
Preview
Photo capture
Video capture
Realtime frame processing (AI)
While the first two are not a problem, I haven't found a way to do the last two separately.
Currently, I use a single AVCaptureVideoDataOutput and run the video recording first, then the frame processing in the same function, in the same queue. (see code here and here)
The only problem with this is that the video capture records 4k video, and I don't really want the frame processor to receive 4k buffers, as that is very slow and blocks the video recording (frame drops).
Ideally I want to create one AVCaptureVideoDataOutput for 4k video recording and another one that receives frames in a lower (preview?) resolution - but you cannot use two AVCaptureVideoDataOutputs in the same capture session.
I thought maybe I could "hook into" the preview layer to receive the CMSampleBuffers from there, just like the captureOutput(...) func, since those are in preview-sized resolution. Does anyone know if that is somehow possible?
For this I recommend implementing a custom renderer flow.
You need just one AVCaptureVideoDataOutput, without the system preview layer provided by iOS:
Set the output's pixel format to YUV (it is more compact than BGRA).
Get each CMSampleBuffer from the AVCaptureVideoDataOutput.
Send the CMSampleBuffer to a Metal texture.
Create a resized low-resolution texture in Metal.
Send the high-resolution texture to the renderer and draw it in an MTKView.
Send the low-resolution texture to a CVPixelBuffer; from there you can convert it to a CGImage or Data.
Send the low-resolution image to the neural network.
I have an article on Medium: link. You can use it as an example.
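A minimal sketch of the first few steps, assuming Swift/AVFoundation and Metal; the class name FrameProcessor and the queue label are placeholders, not names from the Medium article:

import AVFoundation
import CoreVideo
import Metal

// Sketch only: configure one YUV AVCaptureVideoDataOutput and wrap the luma
// plane of each CMSampleBuffer in a Metal texture via a CVMetalTextureCache.
final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let device = MTLCreateSystemDefaultDevice()!
    private var textureCache: CVMetalTextureCache?

    override init() {
        super.init()
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }

    // One AVCaptureVideoDataOutput configured for YUV (no system preview layer needed).
    func makeOutput() -> AVCaptureVideoDataOutput {
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String:
                kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
        ]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        return output
    }

    // Wrap the luma plane of the incoming CMSampleBuffer in a Metal texture (zero copy).
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              let cache = textureCache else { return }

        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer,
                                                  nil, .r8Unorm, width, height, 0, &cvTexture)
        guard let cvTexture = cvTexture,
              let lumaTexture = CVMetalTextureGetTexture(cvTexture) else { return }

        // From here: draw lumaTexture (high resolution) in an MTKView, and
        // downscale it (e.g. with MPSImageBilinearScale) before feeding the network.
        _ = lumaTexture
    }
}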
I'm trying to get both channels from the supposedly stereo rear microphone of the iPhone XS, but I can only ever see a single channel at various points in the AVAudioSession and the AVAudioSessionPortDescriptions associated with the rear camera.
I have tried using AVAudioSession APIs like setPreferredInputNumberOfChannels:
do {
    try session.setPreferredInputNumberOfChannels(2)
} catch let error {
    debugPrint("\(error)")
}
But I get an error:
Error Domain=NSOSStatusErrorDomain Code=-50
Has anyone had experience getting a 2-channel built-in mic route working?
The stereo microphone on the iPhone XS is placed at the bottom, on the left side of the Lightning connector. The one on the right side of the Lightning connector is a noise-cancelling microphone.
The two other microphones are placed next to the front camera and the back camera, but those are mono only.
But there is a big BUT when it comes to the bottom stereo microphone.
It only records in stereo when using the built-in Camera app.
The Voice Memos app records in mono only.
And the stereo microphone is not available to any third-party app; it only works when shooting video in the native Camera app.
Since iOS 14 and iPadOS 14, you can capture stereo audio with the built-in microphones.
To determine whether a device supports stereo recording, query the audio session’s selected data source to see if its supportedPolarPatterns array contains the stereo polar pattern.
Read more from the documentation.
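A minimal sketch of that check on iOS 14+, assuming an already-configured AVAudioSession; the function name is a placeholder:

import AVFoundation

// Hedged sketch: pick the built-in mic, find a data source whose
// supportedPolarPatterns contains .stereo, and select it.
@available(iOS 14.0, *)
func enableStereoIfAvailable(on session: AVAudioSession) throws {
    guard let builtInMic = session.availableInputs?
        .first(where: { $0.portType == .builtInMic }) else { return }
    try session.setPreferredInput(builtInMic)

    // Query the data sources for stereo support.
    guard let stereoSource = builtInMic.dataSources?.first(where: {
        $0.supportedPolarPatterns?.contains(.stereo) == true
    }) else {
        return // No stereo support on this device / OS version.
    }
    try stereoSource.setPreferredPolarPattern(.stereo)
    try builtInMic.setPreferredDataSource(stereoSource)
}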
I did some searching and found various examples and documentation on iPhone X Face ID and how it can be used for things like authentication and animated emoji.
I wanted to check whether there is an API/SDK to get the raw depth map from the iPhone X sensor into an app.
From my understanding, the depth calculation is done based on the projected pattern, which can be used to get the depth profile of any object in front of the sensor. (It might depend on the texture of the object.)
You'll need at least the iOS 11.1 SDK in Xcode 9.1 (both in beta as of this writing). With that, builtInTrueDepthCamera becomes one of the camera types you use to select a capture device:
let device = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)
Then you can go on to set up an AVCaptureSession with the TrueDepth camera device, and can use that capture session to capture depth information much like you can with the back dual camera on iPhone 7 Plus and 8 Plus:
Turn on depth capture for photos with AVCapturePhotoOutput.isDepthDataDeliveryEnabled, then snap a picture with AVCapturePhotoSettings.isDepthDataDeliveryEnabled. You can read the depthData from the AVCapturePhoto object you receive after the capture, or turn on embedsDepthDataInPhoto if you just want to fire and forget (and read the data from the captured image file later).
Get a live feed of depth maps with AVCaptureDepthDataOutput. That one is like the video data output; instead of recording directly to a movie file, it gives your delegate a timed sequence of image (or in this case, depth) buffers. If you're also capturing video at the same time, AVCaptureDataOutputSynchronizer might be handy for making sure you get coordinated depth maps and color frames together.
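A rough sketch of both options, assuming the rest of the session setup happens elsewhere; the function name and the delegate placeholders are illustrative:

import AVFoundation

// Hedged sketch: add the TrueDepth camera to a session and enable both
// photo depth delivery and the streaming depth data output.
func configureDepthCapture(into session: AVCaptureSession) {
    guard let camera = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                               for: .video, position: .front),
          let input = try? AVCaptureDeviceInput(device: camera) else { return }

    session.beginConfiguration()
    if session.canAddInput(input) { session.addInput(input) }

    // Option 1: depth alongside still photos.
    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    // Per shot: AVCapturePhotoSettings with isDepthDataDeliveryEnabled = true,
    // then read depthData from the resulting AVCapturePhoto.

    // Option 2: a live stream of depth maps.
    let depthOutput = AVCaptureDepthDataOutput()
    if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
    // depthOutput.setDelegate(yourDelegate, callbackQueue: yourQueue)

    session.commitConfiguration()
}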
As Apple's Device Compatibility documentation notes, you need to select the builtInTrueDepthCamera device to get any of these depth capture options. If you select the front-facing builtInWideAngleCamera, it becomes like any other selfie camera, capturing only photo and video.
Just to emphasize: from an API point of view, capturing depth with the front-facing TrueDepth camera on iPhone X is a lot like capturing depth with the back-facing dual cameras on iPhone 7 Plus and 8 Plus. So if you want a deep dive on how all this depth capture business works in general, and what you can do with captured depth information, check out the WWDC17 Session 507: Capturing Depth in iPhone Photography talk.
I need to create an application that captures a video from a webcam. Later on I need to load the video and be able to navigate around in the video freely.
When I use the videocap demo that comes with the sources of DSPack, the captured video is encoded in H264. When I navigate around in that video, the picture becomes blurry with wrong colors, and you can't recognize it. (I tried with VLC and Windows Media Player.)
...playing the video works, but after jumping to any position in the video, it looks like this...
How can I tell DSPack to capture the video in, e.g., the older MJPEG format?
(I tried with old videos from my camera; with MJPEG the navigation seems to work flawlessly.)
Thanks in advance, R.
Is there any solution for a video encoder on WP8? I have custom camera frames and need to record video from them, but I can't find any video encoder for WP8.
Currently there is no API for this. If you want to do it, the only way is to capture each frame at an interval and build your own encoder to encode them. Even with that approach, there is no sound.