Is it possible to retrieve video decoder output in Flash? - actionscript

Assuming I use OSMF, FlowPlayer or FWPlayer (all of these are open source), do you know if it is possible to retrieve the output from the decoder (for a streaming source), so that I can detect image freezes and artifacts, and get the bitrate, the current quality level, etc.?
Thank you!

Related

Can't get stats while using sout with libvlc

I want to display a stream, and monitor the video stats of the displayed stream while recording it using libvlc. When I use sout + duplicate to record the stream while displaying it, I can only get the demux_bitrate stat from the displayed stream via the libvlc_media_get_stats function. I am looking to get decoded_video, displayed_pictures, etc., as well.
I've tried using the duplicate module to make this happen, but I can't seem to get it to work; I'm not sure whether what I want to do is supported. The code below is tweaked from the example at https://wiki.videolan.org/Documentation:Modules/display/ for transcoding a stream while displaying the original version.
:sout=#duplicate{dst='transcode{vcodec=h264}:std{access=file,mux=ts,dst=c:\junk\test.mp4}',dst=display}
The stream displays and the file is generated, but the only valid stat is demux_bitrate, which seems like the stat that would be accessible from the non-display stream rather than the displayed version.
Display and save with Transcoding
:sout=#duplicate{dst=display,dst="transcode{vcodec=h264}:standard{access=file,mux=mp4,dst=c:\junk\test.mp4}"}
Display and save without Transcoding
:sout=#duplicate{dst=display,dst=standard{access=file,mux=mp4,dst=c:\junk\test.mp4}}
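For reference, a minimal C sketch of how the stats polling might look against one of the chains above (the stream URL is a placeholder, error handling is omitted, and the field names come from libvlc's libvlc_media_stats_t):
#include <stdio.h>
#include <vlc/vlc.h>

int main(void) {
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    /* Placeholder stream URL */
    libvlc_media_t *media = libvlc_media_new_location(vlc, "http://example.com/stream");
    /* Attach the duplicate chain as a media option */
    libvlc_media_add_option(media, ":sout=#duplicate{dst=display,dst=standard{access=file,mux=mp4,dst=c:\\junk\\test.mp4}}");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_player_play(player);
    /* ... later, while the stream is playing, poll the stats ... */
    libvlc_media_stats_t stats;
    if (libvlc_media_get_stats(media, &stats)) {
        printf("demux bitrate: %f\n", stats.f_demux_bitrate);
        printf("decoded video: %d\n", stats.i_decoded_video);
        printf("displayed pictures: %d\n", stats.i_displayed_pictures);
    }
    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_media_release(media);
    libvlc_release(vlc);
    return 0;
}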

WebRTC iOS: Filtering camera stream from RTCCameraVideoCapturer. Conversion from RTCFrame to CVPixelBuffer

I found the gist below, which is simple and efficient: it uses func capturer(_ capturer: RTCVideoCapturer, didCapture frame: RTCVideoFrame) of RTCVideoCapturerDelegate. You get an RTCVideoFrame and then convert it to a CVPixelBuffer to modify.
https://gist.github.com/lyokato/d041f16b94c84753b5e877211874c6fc
However, I found that Chromium says the nativeHandle used to get the PixelBuffer is no longer available (link below). I tried frame.buffer.pixelBuffer..., but looking at framework > Headers > RTCVideoFrameBuffer.h, I found that CVPixelBuffer is gone from there as well!
https://codereview.webrtc.org/2990253002
Is there any good way to convert an RTCVideoFrame to a CVPixelBuffer?
Or is there a better way to modify the captured video from RTCCameraVideoCapturer?
The link below suggests modifying the SDK directly, but hopefully we can achieve this in Xcode.
How to modify (add filters to) the camera stream that WebRTC is sending to other peers/server
Can you specify what your expectation is? You can get the pixel buffer from an RTCVideoFrame easily, but I feel there may be a better solution: if you want to filter the video buffer before it is sent to WebRTC, you should work with RTCVideoSource.
You can get the buffer as follows:
RTCCVPixelBuffer *buffer = (RTCCVPixelBuffer *)frame.buffer;
CVPixelBufferRef imageBuffer = buffer.pixelBuffer;
(with the latest SDK, and for the local camera's video buffer only)
But in the sample I can see that the filter will not work for remote streams.
I have attached a screenshot showing how you can check the preview as well.
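Putting the pieces together, a minimal Objective-C sketch of the delegate method (the method name is the SDK's Objective-C form of the Swift signature quoted in the question; the filtering step itself is left as a placeholder):
// In a class adopting RTCVideoCapturerDelegate:
- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame {
    // The cast only succeeds for local camera frames, where the buffer
    // really is backed by a CVPixelBuffer.
    if ([frame.buffer isKindOfClass:[RTCCVPixelBuffer class]]) {
        RTCCVPixelBuffer *rtcBuffer = (RTCCVPixelBuffer *)frame.buffer;
        CVPixelBufferRef pixelBuffer = rtcBuffer.pixelBuffer;
        // ... filter/modify pixelBuffer here, then forward the frame to the
        // RTCVideoSource so the modified video is what gets sent to peers.
    }
}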

How to change url from m3u8 to .ts

I'm trying to make an IPTV link work on my receiver.
This is the original link that I want to convert:
http://s7.iapi.com:8000/re-NBA/index.m3u8?token=BzyIVQOtO77MTw
and this is the format that I want to reach in the end:
http://pro-vision.dyndns.pro:12580/live/laurent/laurent/2791.ts
An m3u8 file is just a text file that acts as an index for media streams - it contains 'pointers' to the locations of the video and audio streams themselves.
A TS file is a 'container' that holds the video and audio streams themselves - i.e. the actual video and audio data.
You can't simply convert any m3u8 to a ts file or stream, but you can extract a ts file URL from the m3u8 file, which may be what you want.
If you look at the overview section of the m3u8 definition, there is a very simple example which is perhaps the best way of understanding this:
https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-19
The m3u8 file includes the ts references, as can be seen in this extract from the above document:
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:9.009,
http://media.example.com/first.ts
#EXTINF:9.009,
http://media.example.com/second.ts
#EXTINF:3.003,
http://media.example.com/third.ts
The numbers here are the durations of the individual segments in seconds. More complex examples let you have multiple variants of a particular stream, e.g. different bitrate versions of a video for Adaptive Bit Rate (ABR) streaming.
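As a rough illustration, an Objective-C sketch that fetches a playlist and collects the segment URIs, i.e. every non-empty line that is not a #-tag (synchronous loading for brevity; relative URIs would still need resolving against the playlist URL):
// Placeholder URL; use the real playlist address here.
NSURL *playlistURL = [NSURL URLWithString:@"http://media.example.com/playlist.m3u8"];
NSString *playlist = [NSString stringWithContentsOfURL:playlistURL
                                              encoding:NSUTF8StringEncoding
                                                 error:NULL];
NSMutableArray<NSString *> *segmentURIs = [NSMutableArray array];
for (NSString *line in [playlist componentsSeparatedByCharactersInSet:
                        [NSCharacterSet newlineCharacterSet]]) {
    // Tag lines start with '#'; everything else is a media URI.
    if (line.length > 0 && ![line hasPrefix:@"#"]) {
        [segmentURIs addObject:line];
    }
}
For the example playlist above, segmentURIs would end up holding the first.ts, second.ts and third.ts URLs.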

Why am I receiving only a few audio samples per second when using AVAssetReader on iOS?

I'm coding something that:
records video+audio with the built-in camera and mic (AVCaptureSession),
does some stuff with the video and audio sample buffers in real time,
saves the result into a local .mp4 file using AVAssetWriter,
then (later) reads the file (video+audio) using AVAssetReader,
does some other stuff with the sample buffers (for now I do nothing),
and writes the result into a final video file using AVAssetWriter.
Everything works well but I have an issue with the audio format:
When I capture the audio samples from the capture session, I can log about 44 samples/sec, which seems to be normal.
When I read the .mp4 file, I only log about 3-5 audio samples/sec!
But the 2 files look and sound exactly the same (in QuickTime).
I didn't set any audio settings for the Capture Session (as Apple doesn't allow it).
I configured the outputSettings of the 2 audio AVAssetWriterInputs as follows:
NSDictionary *settings = @{
    AVFormatIDKey: @(kAudioFormatLinearPCM),
    AVNumberOfChannelsKey: @(2),
    AVSampleRateKey: @(44100.),
    AVLinearPCMBitDepthKey: @(16),
    AVLinearPCMIsNonInterleaved: @(NO),
    AVLinearPCMIsFloatKey: @(NO),
    AVLinearPCMIsBigEndianKey: @(NO)
};
I pass nil to the outputSettings of the audio AVAssetReaderTrackOutput in order to receive samples as stored in the track (according to the doc).
So, the sample rate should be 44100 Hz from the capture session through to the final file. Why am I reading only a few audio samples? And why does it work anyway? I have the intuition that it will not work well when I have to work with the samples (I need to update their timestamps, for example).
I tried several other settings (such as kAudioFormatMPEG4AAC), but AVAssetReader can't read compressed audio formats.
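For what it's worth, here is a sketch of the reader setup described above, assuming asset and audioTrack already exist; CMSampleBufferGetNumSamples is a useful check in this situation, because a single CMSampleBuffer from AVAssetReader can carry a large batch of PCM samples, so counting buffers per second is not the same as counting samples per second:
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
// nil outputSettings = receive the samples as stored in the track.
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                               outputSettings:nil];
[reader addOutput:output];
[reader startReading];
CMSampleBufferRef buffer;
while ((buffer = [output copyNextSampleBuffer])) {
    // Logs how many PCM samples this one buffer actually packs.
    NSLog(@"%ld samples in this buffer",
          (long)CMSampleBufferGetNumSamples(buffer));
    CFRelease(buffer);
}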
Thanks for your help :)

get yuv planar format image from camera - iOS

I am using AVFoundation to capture still images from the camera (still images, not video frames) using captureStillImageAsynchronouslyFromConnection. This gives me a buffer of type CMSampleBuffer, which I am calling imageDataSampleBuffer.
As far as I have understood, this buffer can contain any type of data related to media, and the type of data is determined when I am configuring the output settings.
For the output settings, I make a dictionary with the value AVVideoCodecJPEG for the key AVVideoCodecKey.
There is no other codec option. But when I read the AVFoundation Programming Guide > Media Capture, I can see that 420f, 420v, BGRA and jpeg are the available encoded formats supported for the iPhone 3GS (which I am using).
I want to get the yuv420 (i.e. 420v) formatted image data into the imageSampleBuffer. Is that possible?
If I print availableImageDataCodecTypes, I get only JPEG.
If I print availableImageDataCVPixelFormatTypes, I get three numbers: 875704422, 875704438, 1111970369.
Is it possible that these three numbers map to 420f, 420v, BGRA?
If yes, which key should I modify in my output settings?
I tried putting the value [NSNumber numberWithInt:875704438] for the key (id)kCVPixelBufferPixelFormatTypeKey.
Would it work?
If yes, how do I extract this data from the imageSampleBuffer?
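For what it's worth, those numbers are FourCC codes packed into 32-bit integers, so a few lines of Objective-C can decode them (875704422 comes out as '420f', 875704438 as '420v', 1111970369 as 'BGRA'), and kCVPixelBufferPixelFormatTypeKey matches the key the question already tried. A short sketch:
// Unpack a 32-bit pixel format code into its four-character name.
static NSString *FourCCString(OSType code) {
    char chars[5] = { (char)(code >> 24), (char)(code >> 16),
                      (char)(code >> 8), (char)code, 0 };
    return [NSString stringWithUTF8String:chars];
}
// FourCCString(875704438) -> @"420v", etc.

// Requesting 420v (bi-planar video-range YUV) output instead of JPEG:
NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) // == 875704438
};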
Also, in which format is a UIImage stored? Can it be any format? Is it just NSData with some extra info that makes it be interpreted as an image?
I have been trying to use this method:
Raw image data from camera like "645 PRO"
I am saving the data using writeToFile: and have been trying to open it with IrfanView.
But I am unable to verify whether or not the saved file is in YUV format, because IrfanView gives an error that it is unable to read the headers.
