How can I get frame dimensions from a live stream using libav and C++?

I am trying to extract the frame dimensions from an H.264-encoded live stream using libav and C++. I have access to the raw stream; I don't need to re-encode the frames, I just need to parse the width and height. I think I have to read the SPS to get this information.
I found this post, but it doesn't seem to help with my problem:
How to get width and height from the H264 SPS using ffmpeg
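For what it's worth, here is a minimal sketch (not from the question or the linked post) of one way to do this with libav's parser, assuming the raw stream is Annex-B H.264 and a reasonably recent FFmpeg; error handling is simplified:

// Minimal sketch: probe width/height from raw Annex-B H.264 using libav's parser.
// Assumes a reasonably recent FFmpeg; older builds also need av_register_all() first.
// The AVCodecParserContext width/height fields are filled once an SPS has been parsed.
extern "C" {
#include <libavcodec/avcodec.h>
}

bool probeDimensions(const uint8_t *data, int size, int &width, int &height) {
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264);
    if (!codec || !ctx || !parser)
        return false;

    bool found = false;
    while (size > 0 && !found) {
        uint8_t *out = nullptr;
        int outSize = 0;
        int consumed = av_parser_parse2(parser, ctx, &out, &outSize,
                                        data, size,
                                        AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (consumed <= 0)
            break;                                      // error or no progress
        data += consumed;
        size -= consumed;
        if (parser->width > 0 && parser->height > 0) {  // set once the SPS is parsed
            width = parser->width;
            height = parser->height;
            found = true;
        }
    }
    av_parser_close(parser);
    avcodec_free_context(&ctx);
    return found;
}

Feed it the first few kilobytes of the stream (or keep calling it as data arrives); it returns true once the SPS has been parsed and the dimensions are known.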

Related

In MPEG-4 Part 10, is the video resolution stored in a header of the video stream or within each frame?

In MPEG-4 Part 10 (AVC), is the video resolution stored in a header for the whole stream, or within each frame, so that each frame can have its own resolution?
If both, which of the two do mainstream video players such as VLC take into account during playback?

AVFoundation max render size

I've searched quite a lot and it seems I couldn't find a definite answer to what the maximum render size of a video on iOS using AVFoundation is.
I need to stitch two or more videos side by side or one above the other and render them into one new video with a final size larger than 1920 x 1080. So for example, if I have two Full HD videos (1920 x 1080) side by side, the final composition would be 3840 x 1080.
I've tried with AVAssetExportSession and it always shrinks the final video proportionally to a maximum of 1920 in width or 1080 in height, which is quite understandable given the available AVAssetExportSession settings like preset, file type, etc.
I also tried using AVAssetReader and AVAssetWriter, but the results are the same; I only get more control over the quality, bitrate, etc.
So... is there a way this can be achieved on iOS, or do we have to stick to a maximum of Full HD?
Thanks
Well... actually the answer should be YES and also NO, at least from what I've found so far.
H.264 allows higher resolutions only with a higher profile level, which is fine. However, on iOS the maximum profile that can be used is AVVideoProfileLevelH264High41, which according to the specs permits a maximum resolution of 1,920×1,080@30.1 fps or 2,048×1,024@30.0 fps.
So encoding with H.264 won't do the job and the answer should be NO.
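To see why the numbers rule it out, here is a rough check that is not part of the original answer; it assumes the Level 4.1 limits from the H.264 spec (MaxFS = 8,192 macroblocks per frame, MaxMBPS = 245,760 macroblocks per second, with a macroblock being 16x16 pixels):

// Rough check of the H.264 Level 4.1 limits (assumed values from the spec:
// MaxFS = 8192 macroblocks/frame, MaxMBPS = 245760 macroblocks/second).
#include <cstdio>

static int macroblocks(int w, int h) { return ((w + 15) / 16) * ((h + 15) / 16); }

int main() {
    printf("1920x1080: %d MBs -> max %.1f fps\n",
           macroblocks(1920, 1080), 245760.0 / macroblocks(1920, 1080));  // 8160 -> 30.1 fps
    printf("2048x1024: %d MBs -> max %.1f fps\n",
           macroblocks(2048, 1024), 245760.0 / macroblocks(2048, 1024));  // 8192 -> 30.0 fps
    printf("3840x1080: %d MBs -> exceeds the 8192 per-frame limit\n",
           macroblocks(3840, 1080));                                      // 16320
    return 0;
}

A 3840x1080 frame needs roughly twice the macroblocks that Level 4.1 allows per frame, so the encoder cannot produce it regardless of frame rate.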
The other option is to use a different codec. I've tried AVVideoCodecJPEG and was able to render such a video, so the answer should be YES.
But... the problem is that this video is not playable on iOS, which again changes the answer to NO.
To summarise, I'd say it is possible if the video is meant to be used outside of the device; otherwise it will simply not be usable.
Hope this helps other people as well, and if someone gives a better or different answer, I'll be glad.

How to get frame-by-frame images from a movie file on iPhone

In iOS, I would like to get frame-by-frame images from a movie file.
I tried using AVAssetImageGenerator, but it gets one image per second for a 30 fps movie. It should be 30 images!
I heard that there is a way to use FFmpeg.
But in newer OSs like iOS 7, is there a new API to do this without using external libraries like FFmpeg?
You can also try OpenCV to capture the frames, but the speed will depend on the device and processor. Hopefully on an iPhone 5 the speed is good enough to capture all the frames. You can have a look at this link if it helps.
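As a rough illustration of that suggestion, here is a minimal OpenCV sketch of frame-by-frame reading (not from the original answer); the file name is a placeholder, and it assumes OpenCV's VideoCapture can open the movie file on your platform:

// Minimal sketch: read every decoded frame of a movie with OpenCV's VideoCapture.
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture cap("movie.mov");   // hypothetical path to the movie file
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    int count = 0;
    while (cap.read(frame)) {            // returns every frame, not one per second
        // process or save the frame here, e.g.
        // cv::imwrite("frame_" + std::to_string(count) + ".png", frame);
        ++count;
    }
    return 0;
}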

CVPixelBufferGetBytesPerRow() for CVImageBufferRef returns an unexpected value for a 1080p video frame

I am developing an application that does real-time video processing using AVFoundation with the back cameras of iDevices. The AVCaptureSession is configured with sessionPreset AVCaptureSessionPreset1920x1080 (Full HD), video settings kCVPixelBufferPixelFormatTypeKey = kCVPixelFormatType_32BGRA, and it outputs sample buffers of type CMSampleBufferRef to an AVCaptureVideoDataOutput delegate. Video/interface orientation portrait is used (meaning frames of size 1080x1920 are expected).
On the arrival of each frame, a CVImageBufferRef is retrieved from the sample buffer for further access to its raw bytes. When I call CVPixelBufferGetBytesPerRow() on this CVImageBufferRef instance, I get the value 4352, which is totally unexpected in my opinion. My expectation is that bytes per row reflects 4 bytes (BGRA) per pixel for the entire frame width (1080), resulting in the value 4320 (= 1080 * 4 bytes). With bytes per row = 4352, dividing by 4 bytes would give a frame width of 1088. Does anyone have a clue why this is actually happening?
I can't work with the expected width of 1080 when analysing pixel-wise, as it leads to distorted images (checked by converting to UIImage and saving to disk); I definitely need to work with 1088 as the width so the image is straight and the analyses give proper results - but this is weird.
As I am using the raw frame bytes for real-time analysis and expect to use a width of 1080, this is very essential for me, so I really appreciate help on this issue.
Devices used:
- iPod touch 5G with iOS 6.0.1
- iPhone 5S with iOS 7.0.2
Code excerpt:
- (uint8_t *)convertSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); // = 4352 (= 1088 * 4)
    .....
}
Apple released "Technical Q&A QA1829: Understanding the bytes per row value returned by CVPixelBufferGetBytesPerRow" on 5/1/2014, which states that this is due to hardware alignment requirements: each row may be padded so that it starts on a hardware-friendly byte boundary. See the tech note for details.
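In practice that means treating bytesPerRow as the row stride rather than assuming width * 4. Here is a minimal sketch (not from the original post) of copying the frame into a tightly packed buffer using only Core Video's C API; it is C++ and would live in an Objective-C++ (.mm) file:

// Minimal sketch: copy a padded BGRA CVImageBufferRef into a tightly packed buffer
// by honouring bytesPerRow (4352 here) instead of assuming width * 4 (4320).
#include <CoreVideo/CoreVideo.h>
#include <cstring>
#include <vector>

std::vector<uint8_t> copyTightlyPacked(CVImageBufferRef imageBuffer) {
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    const size_t width       = CVPixelBufferGetWidth(imageBuffer);        // 1080
    const size_t height      = CVPixelBufferGetHeight(imageBuffer);       // 1920
    const size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);  // 4352 (padded)
    const uint8_t *src = static_cast<const uint8_t *>(CVPixelBufferGetBaseAddress(imageBuffer));

    std::vector<uint8_t> packed(width * height * 4);                      // BGRA, 4 bytes/pixel
    for (size_t row = 0; row < height; ++row) {
        // Copy only width * 4 bytes per row; skip the padding at the end of each source row.
        std::memcpy(packed.data() + row * width * 4, src + row * bytesPerRow, width * 4);
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    return packed;
}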

Can't get Logitech C170 webcam working using OpenCV (unable to change pixel format from MJPG to YUYV)

I tried to run the tutorial code for object detection. I have a Logitech C170 webcam, and I am not able to run any of the tutorial code with this cam connected:
HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP --(!) No captured frame -- Break!
But the same program runs fine with my built-in webcam. Since I am a newbie to OpenCV, I am not able to figure out what is wrong.
When I looked at the details of both cams (built-in and Logitech C170) using
v4l2-ctl --device=/dev/video0 --all
Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : Webcam C170
Bus info : usb-0000:00:1d.0-1.2
Driver version: 3.2.50
Capabilities : 0x04000001
Video Capture
Streaming
Format Video Capture:
Width/Height : 640/480
Pixel Format : 'MJPG'
Field : None
Bytes per Line: 0
Size Image : 921600
Colorspace : SRGB
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 640, Height 480
Default : Left 0, Top 0, Width 640, Height 480
Pixel Aspect: 1/1
Video input : 0 (Camera 1: ok)
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 30.000 (30/1)
Read buffers : 0
I noticed that the pixel formats of the two are different: the built-in cam is YUYV while the Logitech is MJPG. Moreover, I am completely unfamiliar with the MJPG format. I tried to change the format to YUYV using
v4l2-ctl --device=/dev/video1 --set-fmt-video=width=640,height=480,pixelformat=0
I could change the format, but when I run the program again the error repeats and the format is changed back to MJPG by the system.
Consider me a complete beginner in OpenCV.
I've been trying to use my Logitech C170 with the V4L2 API for a couple of days, and capturing uncompressed YUYV images doesn't seem to work at 640x480 or 640x360 resolutions. Waiting for the frame buffer to fill makes V4L2 stall.
But that works well with all the other resolutions reported in the UVC capabilities, e.g. 352x288, 320x240, 176x144, 160x120, 544x288, 432x240 and 320x176.
For 640x480 resolution, I've only managed to dump JFIF compressed frames from the camera.
Maybe you could make your code work with this camera if you try image widths lower than 640 pixels.
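For example, here is a minimal OpenCV sketch (not from the original answer) that requests one of the lower resolutions the C170 reports; the device index is an assumption, and on OpenCV 2.x the property constants are spelled CV_CAP_PROP_FRAME_WIDTH / CV_CAP_PROP_FRAME_HEIGHT:

// Minimal sketch: open the C170 and request a resolution below 640 pixels wide.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(1);                    // assumed: /dev/video1 is the Logitech C170
    if (!cap.isOpened())
        return 1;

    cap.set(cv::CAP_PROP_FRAME_WIDTH, 320);     // request a width lower than 640
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 240);

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("C170", frame);
        if (cv::waitKey(1) == 27)               // Esc to quit
            break;
    }
    return 0;
}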
