I am working with the ZigBee Light Link (ZLL) profile. I would like to capture ZLL frames using a USRP B200. ZLL has frames called inter-PAN frames, and these frames are transmitted at 0 dB (zero gain) only. I tried to capture them by placing the receiver in close proximity to the bridge, but I was unable to capture the inter-PAN frames; the other frames I captured without problems.
How can I capture these 0 dB frames using the USRP B200?
Related
I'm trying to create a camera capture session that has the following features:
Preview
Photo capture
Video capture
Realtime frame processing (AI)
While the first two are not a problem, I haven't found a way to run the last two separately.
Currently, I use a single AVCaptureVideoDataOutput and run the video recording first, then the frame processing in the same function, in the same queue. (see code here and here)
The only problem with this is that the video capture captures 4k video, and I don't really want the frame processor to receive 4k buffers as that is going to be very slow and blocks the video recording (frame drops).
Ideally I want to create one AVCaptureVideoDataOutput for 4k video recording, and another one that receives frames in a lower (preview?) resolution - but you cannot use two AVCaptureVideoDataOutputs in the same capture session.
I thought maybe I could "hook into" the preview layer to receive the CMSampleBuffers from there, just like in the captureOutput(...) func, since those are in preview-sized resolutions. Does anyone know if that is somehow possible?
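For context, here is a rough Swift sketch of the single-output setup described above; the CameraPipeline class, the assetWriterInput property and the runFrameProcessor closure are placeholders I made up, not the actual project code:

import AVFoundation

final class CameraPipeline: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    let videoOutput = AVCaptureVideoDataOutput()
    let queue = DispatchQueue(label: "camera.video.queue")

    // Placeholders for the two consumers described above.
    var assetWriterInput: AVAssetWriterInput?            // 4k video recording
    var runFrameProcessor: ((CMSampleBuffer) -> Void)?   // realtime AI processing

    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .hd4K3840x2160
        if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
           let input = try? AVCaptureDeviceInput(device: device),
           session.canAddInput(input) {
            session.addInput(input)
        }
        videoOutput.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
        }
        session.commitConfiguration()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Both consumers get the same 4k buffer on the same queue, which is why
        // a slow frame processor can stall the recording and cause frame drops.
        if let writerInput = assetWriterInput, writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
        runFrameProcessor?(sampleBuffer)
    }
}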
For this kind of setup I would recommend implementing a custom renderer flow.
You need just one AVCaptureVideoDataOutput, without the system-provided preview layer.
Set the pixel format to YUV (it's more compact than BGRA).
Get the CMSampleBuffer in the AVCaptureVideoDataOutput delegate callback.
Convert the CMSampleBuffer into a Metal texture.
Create a resized, low-resolution texture in Metal.
Send the high-resolution texture to the renderer and draw it in an MTKView.
Send the low-resolution texture to a CVPixelBuffer; from there you can convert it to a CGImage or Data.
Send the low-resolution image to the neural network.
I have an article on Medium: link. You can use it as an example.
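A rough Swift sketch of the CMSampleBuffer-to-Metal part of this flow, assuming a BGRA-configured output for brevity (a YUV buffer needs one texture per plane); CVMetalTextureCache and MPSImageBilinearScale are one possible way to do the conversion and the resize, not necessarily what the article uses:

import AVFoundation
import CoreVideo
import Metal
import MetalPerformanceShaders

final class RendererFlow {
    let device: MTLDevice
    let commandQueue: MTLCommandQueue
    private var textureCache: CVMetalTextureCache?

    init?() {
        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue() else { return nil }
        self.device = device
        self.commandQueue = queue
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }

    // Wraps the pixel buffer of a sample buffer in a Metal texture (no copy).
    func makeTexture(from sampleBuffer: CMSampleBuffer) -> MTLTexture? {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                                  .bgra8Unorm, width, height, 0, &cvTexture)
        return cvTexture.flatMap(CVMetalTextureGetTexture)
    }

    // Produces the low-resolution copy that goes to the neural network.
    func downscale(_ source: MTLTexture, scale: Double) -> MTLTexture? {
        let descriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: source.pixelFormat,
            width: Int(Double(source.width) * scale),
            height: Int(Double(source.height) * scale),
            mipmapped: false)
        descriptor.usage = [.shaderRead, .shaderWrite]
        guard let target = device.makeTexture(descriptor: descriptor),
              let commandBuffer = commandQueue.makeCommandBuffer() else { return nil }
        let scaler = MPSImageBilinearScale(device: device)
        scaler.encode(commandBuffer: commandBuffer, sourceTexture: source, destinationTexture: target)
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
        return target
    }
}

The high-resolution texture returned by makeTexture goes to the MTKView renderer, and the output of downscale goes on to the CVPixelBuffer / image conversion step.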
I'm developing an app that supports both still image and video capture with AVFoundation. Capturing them requires different AVCaptureSession presets. I check with canSetSessionPreset(), begin the change with beginConfiguration(), set the required preset with sessionPreset, and finish with commitConfiguration().
I found that if I capture a still image with AVCaptureStillImageOutput immediately after changing the preset, it returns no errors, but the resulting image is sometimes black or very dark.
If I start capturing video with AVCaptureMovieFileOutput immediately after changing the preset, the first several frames in the resulting file are also black or very dark at times.
Right after changing the preset, the screen flickers, likely because the camera is adjusting the exposure. So it looks like, immediately after the preset change, the camera starts measuring exposure from a very fast shutter speed, which results in black/dark frames.
Both problems go away if I insert a 0.1-second delay between changing the preset and starting capture, but that's ugly, and no one can guarantee it will work all the time on all devices.
Is there a clean solution to this problem?
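For reference, the preset switch described above boils down to something like this (a sketch; the example preset values are mine):

import AVFoundation

func switchPreset(of session: AVCaptureSession, to preset: AVCaptureSession.Preset) {
    guard session.canSetSessionPreset(preset) else { return }
    session.beginConfiguration()
    session.sessionPreset = preset   // e.g. .photo for stills, .high for video
    session.commitConfiguration()
    // Capturing immediately after this point is what produces the black/dark frames,
    // presumably because auto exposure has not converged yet.
}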
This is for future users...
It was happening for me because I was setting the sessionPreset to high and then, as soon as I started recording, making changes to the video output connection and setting the focus. I moved those changes into the camera setup instead, and it worked.
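In other words, do the connection and focus configuration once while setting up the camera, before recording ever starts. A rough sketch of that ordering (the specific orientation and focus values here are just illustrative):

import AVFoundation

func setUpCamera(session: AVCaptureSession,
                 device: AVCaptureDevice,
                 movieOutput: AVCaptureMovieFileOutput) throws {
    session.beginConfiguration()
    session.sessionPreset = .high

    // Configure the video connection and focus here, once, during setup...
    if let connection = movieOutput.connection(with: .video) {
        connection.videoOrientation = .portrait
    }
    try device.lockForConfiguration()
    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
    device.unlockForConfiguration()

    session.commitConfiguration()
    session.startRunning()
    // ...and later start recording without touching the connection or focus again.
}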
I currently have a video camera set up with an AVCaptureVideoDataOutput whose sample buffer delegate is implemented as follows:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSArray *detectedFaces = [self detectFacesFromSampleBuffer:sampleBuffer];
    [self animateViewsForFaces:detectedFaces];
}
The sample buffer is processed, and if any faces are detected, their bounds are shown as views over an AVCaptureVideoPreviewLayer that's displaying the live video output (rectangles over the faces). The views are animated so that they move smoothly between face detections. Is it possible to somehow record what's shown in the preview layer and merge it with the animated UIViews that are overlaying it, the end result being a video file?
Generally, you can use a low-level approach to build a video stream and then write it to a file. I'm not an expert on video formats, codecs and so on, but the approach is:
- Set up a CADisplayLink to get a callback every time the screen redraws. A reasonable choice is a frame interval of 2, which reduces the target video frame rate to ~30 fps.
- Each time the screen redraws, take a snapshot of the preview layer and the overlay.
- Process the collected images: composite the two images of each frame, then build a video stream from the sequence of merged frames. I assume iOS has built-in tools to do this in a more or less simple way.
Of course, the resolution and quality are constrained by the layers' parameters. If you need the raw video stream from the camera, you should capture that stream and then draw your overlay data directly onto the video frames you captured.
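A rough Swift sketch of the snapshot loop from the steps above; the 30 fps cap and the UIGraphicsImageRenderer snapshot are my choices, and feeding the collected images into AVAssetWriter is left as a comment:

import UIKit
import AVFoundation

final class OverlayRecorder {
    private var displayLink: CADisplayLink?
    private var snapshots: [UIImage] = []
    weak var containerView: UIView?   // view holding the preview layer plus the overlay views

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.preferredFramesPerSecond = 30   // roughly the "frame interval of 2" mentioned above
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func tick() {
        guard let view = containerView else { return }
        // One snapshot captures the preview layer and the animated overlays together.
        let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
        let image = renderer.image { _ in
            _ = view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
        }
        snapshots.append(image)
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
        // The collected snapshots can then be fed to an AVAssetWriter /
        // AVAssetWriterInputPixelBufferAdaptor to produce the video file.
    }
}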
I am using GPUImage's GPUImageVideoCamera initWithSessionPreset:cameraPosition: to display video from the rear-facing camera on an iOS device (targeting iOS 7). The video is filtered and displayed on a GPUImageView, and the preset will not exceed AVCaptureSessionPreset640x480.
At any given moment in the app, I need to recall the past 5 seconds of unfiltered video captured from the rear-facing camera and instantly play it back on another (or the same) GPUImageView.
I can access the CMSampleBufferRef via GPUImageVideoCamera's willOutputSampleBuffer:, which is passed through from the underlying capture output, but I'm not sure how one goes about keeping the most recent frames in memory in an efficient way such that they can be instantly and seamlessly played back.
I believe the solution is a circular buffer, using something like TPCircularBuffer, but I'm not sure that will work with a video stream. I also wanted to reference the unanswered questions Buffering CMSampleBufferRef into a CFArray and Hold multiple Frames in Memory before sending them to AVAssetWriter, as they closely resemble my original plan of attack up until I started researching this.
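One possible shape for that circular buffer, keeping the last ~5 seconds of frames by count and fed from willOutputSampleBuffer:; the class and property names here are hypothetical:

import AVFoundation

// Keeps the most recent `capacity` sample buffers; the oldest is dropped as new ones arrive.
final class SampleBufferRingBuffer {
    private var buffers: [CMSampleBuffer] = []
    private let capacity: Int
    private let lock = NSLock()

    // e.g. 5 seconds at 30 fps -> capacity of 150 buffers
    init(capacity: Int) {
        self.capacity = capacity
    }

    func append(_ sampleBuffer: CMSampleBuffer) {
        lock.lock(); defer { lock.unlock() }
        buffers.append(sampleBuffer)
        if buffers.count > capacity {
            buffers.removeFirst()
        }
    }

    // Oldest-to-newest snapshot of the buffered frames, ready to hand to a player or writer.
    func drain() -> [CMSampleBuffer] {
        lock.lock(); defer { lock.unlock() }
        let frames = buffers
        buffers.removeAll()
        return frames
    }
}

One caveat: AVFoundation recycles capture sample buffers from a small fixed pool, so retaining several seconds' worth of them can starve the capture pipeline; in practice you may need to copy the pixel data out rather than hold the sample buffers themselves.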
I have used the method from iOS4: how do I use video file as an OpenGL texture? to get video frames rendering in OpenGL successfully.
However, this method seems to fall down when you want to scrub (jump to a certain point in the playback), as it only supplies video frames sequentially.
Does anyone know a way this behaviour can successfully be achieved?
One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream, at the cost of decoding the entire video first and holding all the data on disk. It also involves decoding each frame as it is accessed, which eats up CPU, but modern iPhones and iPads can handle it as long as you are not doing too much else.
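A rough Swift sketch of that export-then-seek idea using AVAssetImageGenerator; the sampling interval and the frame_N.png naming are arbitrary choices of mine:

import AVFoundation
import UIKit

// Dumps one PNG per sampled frame into `directory`, named frame_0.png, frame_1.png, ...
func exportFrames(of asset: AVAsset, every interval: Double, to directory: URL) throws {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    let duration = CMTimeGetSeconds(asset.duration)
    var index = 0
    var time = 0.0
    while time < duration {
        let cmTime = CMTime(seconds: time, preferredTimescale: 600)
        let cgImage = try generator.copyCGImage(at: cmTime, actualTime: nil)
        let data = UIImage(cgImage: cgImage).pngData()
        try data?.write(to: directory.appendingPathComponent("frame_\(index).png"))
        index += 1
        time += interval
    }
}

// "Scrubbing" is then just loading the PNG nearest the requested timestamp.
func frame(at seconds: Double, interval: Double, in directory: URL) -> UIImage? {
    let index = Int(seconds / interval)
    return UIImage(contentsOfFile: directory.appendingPathComponent("frame_\(index).png").path)
}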