How to specify a duration for a live m3u8 stream using ffmpeg?

I want to specify a duration, for example 4 minutes, when using ffmpeg, but I keep getting an error:
ffmpeg -i "./test.m3u8" -t 04:00 "output.mp4"
The error I get is this:
Invalid duration specification for t: 04:00
I also get these warnings in yellow:
max_analyze_duration 5000000 reached at 5014800
Could not find codec parameters (Video: h264 (
Could not find codec parameters (Audio: aac (
Could not find codec parameters (Video: h264 (
Could not find codec parameters (Audio: aac (
I hope you can help me figure out what I am doing wrong. Thanks in advance.

From the documentation:
duration may be a number in seconds, or in hh:mm:ss[.xxx] form.
Your form is mm:ss, so it's simply not valid, which is also what the error message says.
Use -t 00:04:00 or -t 240 instead.
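With that fix, the command from the question becomes:
ffmpeg -i "./test.m3u8" -t 00:04:00 "output.mp4"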

Related

OpenCV VideoCapture read() does not read specific video frames (fps 59.9)

I have code to read video frames from a file:
import cv2

video_capture = cv2.VideoCapture(file_path)
fps = int(video_capture.get(cv2.CAP_PROP_FPS))
frame_count = int(starting_time * fps)
video_capture.set(cv2.CAP_PROP_POS_FRAMES, frame_count)

while video_capture.isOpened():
    success, np_image = video_capture.read()
    if not success:
        break
    # ... process np_image ...
The files are MP4 and the fps is 59.9.
However, it cannot successfully read some frames, from 53-59; that is, video_capture.read() returns False.
Converting the file to AVI format resolves the issue. However, I am trying to find out whether there is a way to learn why it could not read a frame and returned False.
Any help is appreciated!
This has been a problem in highly compressed videos (with P- and B-frames enabled). The issue seems to be fixed here, so updating your OpenCV version might help. It won't occur in least-compressed, I-frame-only video, like AVI.
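If updating OpenCV isn't an option, one possible workaround (a sketch of my own, with hypothetical names, not something taken from the linked fix) is to skip the frame-number seek entirely and advance to the starting point by decoding sequentially, which sidesteps inaccurate seeking around P/B frames:

import cv2

def open_at(file_path, starting_time):
    # Open a video and advance to starting_time by decoding frames sequentially
    cap = cv2.VideoCapture(file_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames_to_skip = int(starting_time * fps)
    for _ in range(frames_to_skip):
        # grab() advances one frame without returning the image data
        if not cap.grab():
            break
    return cap

cap = open_at("input.mp4", starting_time=53)
success, np_image = cap.read()

This is slower than seeking, but every read() after it should succeed up to the true end of the file.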

How to hardware-decode an MP4 stream file with iOS VideoToolbox and FFmpeg?

I have found a demo on GitHub: VideoToolboxDemo. I also found a Stack Overflow question, how-to-use-videotoolbox-to-decompress-h-264-video-stream, which someone has implemented on GitHub: https://github.com/lileilei1119/VTDemo
But there is a difference between them in how they find the SPS and PPS.
The VideoToolboxDemo does this:
uint8_t *data = pCodecCtx->extradata;
int size = pCodecCtx->extradata_size;
It uses the extradata of pCodecCtx from FFmpeg to find a start code like 0x00 00 01 (or 0x00 00 00 01).
But the approach from the Stack Overflow answer is:
[_h264Decoder decodeFrame:packet.data withSize:packet.size];
That is, it uses the data of the packet?
I have tried both ways, but I still can't find the start codes for the SPS and PPS. Does anyone know why? Is there something wrong with my file?
My mp4 file is http://7u2m53.com1.z0.glb.clouddn.com/201601131107187320.mp4
VideoToolbox does not use Annex B, hence there is no start code. Read more here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
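In other words, in an MP4 the SPS and PPS are not in-band with start codes; they sit length-prefixed inside the avcC extradata (which is what pCodecCtx->extradata holds). A rough sketch of that layout, written in Python purely to illustrate the byte structure (illustrative only, no error handling):

def parse_avcc(extradata):
    # bytes 0-3: configurationVersion, profile, compatibility, level
    # byte 4: low 2 bits = NAL length field size minus one
    nal_length_size = (extradata[4] & 0x03) + 1
    num_sps = extradata[5] & 0x1F
    pos, sps_list, pps_list = 6, [], []
    for _ in range(num_sps):
        size = int.from_bytes(extradata[pos:pos + 2], "big")
        sps_list.append(extradata[pos + 2:pos + 2 + size])
        pos += 2 + size
    num_pps = extradata[pos]
    pos += 1
    for _ in range(num_pps):
        size = int.from_bytes(extradata[pos:pos + 2], "big")
        pps_list.append(extradata[pos + 2:pos + 2 + size])
        pos += 2 + size
    return nal_length_size, sps_list, pps_list

Each NAL unit in the packets is likewise prefixed by nal_length_size bytes of length rather than a 0x00 00 01 start code, which is why scanning packet.data for start codes also finds nothing.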

Do I Need to Set the ASBD of a Core Audio File Player Audio Unit?

I've specified and instantiated two Audio Units: a multichannel mixer unit and a generator of subtype AudioFilePlayer.
I would have thought I needed to set the ASBD of the filePlayer's output to match the ASBD I set for the mixer input. However, when I attempt to set the filePlayer's output I get a kAudioUnitErr_FormatNotSupported (-10868) error.
Here's the stream format I set on the mixer input (successfully) and am also trying to set on the filePlayer (it's the mono stream format copied from Apple's MixerHost sample project):
Sample Rate: 44100
Format ID: lpcm
Format Flags: C
Bytes per Packet: 2
Frames per Packet: 1
Bytes per Frame: 2
Channels per Frame: 1
Bits per Channel: 16
In the course of troubleshooting this I queried the filePlayer AU for the format it is 'natively' set to. This is what's returned:
Sample Rate: 44100
Format ID: lpcm
Format Flags: 29
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 32
All the example code I've found sends the output of the filePlayer unit to an effect unit and sets the filePlayer's output to match the ASBD set for the effect unit. Given that I have no effect unit, it seems like setting the filePlayer's output to the mixer input's ASBD would be the correct, and required, thing to do.
How have you configured the AUGraph? I might need to see some code to help you out.
Setting the output scope ASBD of the AUMultiChannelMixer once only (as in MixerHost) works. However, if you have any kind of effect at all, you will need to think about where their ASBDs are defined and how you arrange your code so Core Audio does not jump in and mess with your effect AudioUnits' ASBDs. By messing with I mean overriding your ASBD with the default: kAudioFormatFlagIsFloat, kAudioFormatFlagIsPacked, 2 channels, non-interleaved. This was a big pain for me at first.
I would set the effect AudioUnits to their default ASBD. Assuming you have connected the AUFilePlayer node, you can then pull the unit out later in the program like this:
// Retrieve the AudioUnit owned by the AUFilePlayer node
result = AUGraphNodeInfo(processingGraph,
                         filePlayerNode,
                         NULL,
                         &filePlayerUnit);
And then proceed to set the format on its output scope:
// Apply the mono stream format to the file player's output (element 0)
AudioUnitSetProperty(filePlayerUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output,
                     0,
                     &monoStreamFormat,
                     sizeof(monoStreamFormat));
Hopefully this helps.
Basically I didn't bother setting the filePlayer ASBD but rather retrieved the 'native' ASBD it was set to and updated only the sample rate and channel count.
Likewise, I didn't set the input format on the mixer and let the mixer figure its format out.

ffmpeg output for flac and wav differs, why?

I need to parse ffmpeg's metadata output, but for some reason it differs between a WAV and a FLAC file.
Flac:
(int) 14 => ' Duration: 00:03:18.93, bitrate: 1045 kb/s',
(int) 15 => ' Stream #0:0: Audio: flac, 44100 Hz, stereo, s16',
Wav:
(int) 13 => ' Duration: 00:00:15.00, bitrate: 1411 kb/s',
(int) 14 => ' Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s',
I could get the bitrate from the duration line too, I think, but why is it different? And will there be more differences in future releases? It really sucks that there is no better way to get the information from ffmpeg than to parse its output. Any better ideas?
Here is my whole ffmpeg output and my parsed result:
http://pastebin.com/4qJfzZNL
I finally solved it by using ffprobe, which comes with ffmpeg.
ffprobe -v quiet -show_streams -show_format -show_error -print_format <format> <file>
See the writers section of the documentation for the formats it supports; I've used json, but xml, csv and ini are also supported.
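For example, here is a minimal sketch of calling ffprobe from Python and reading the JSON (the file name is a placeholder; the exact keys can vary a little between ffprobe versions):

import json
import subprocess

def probe(path):
    # Run ffprobe and return its JSON output as a dict
    cmd = ["ffprobe", "-v", "quiet", "-show_streams", "-show_format",
           "-show_error", "-print_format", "json", path]
    return json.loads(subprocess.check_output(cmd))

info = probe("input.flac")
for stream in info.get("streams", []):
    print(stream.get("codec_name"), stream.get("sample_rate"), stream.get("channels"))
print(info.get("format", {}).get("duration"), info.get("format", {}).get("bit_rate"))

This gives you the same duration, codec and bitrate information as the banner output, but in a stable, structured form.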
The stream line provides different information because each codec has different parameters. You will need to parse the line and, depending on the audio type, interpret the parameters that come after it.
You could just use the bitrate from the duration line, but that may be misleading without knowing which codec is in use.

How do I set the interlaced flag on an MKV file so that VLC can automatically play it back deinterlaced?

I've got an MKV file whose source is interlaced NTSC MPEG-2 video. I converted it to H.264 MKV with HandBrake, but this process did not set the "interlaced" flag in the MKV file. The content is interlaced—and I do want it to stay interlaced because it looks much better playing back as 60 fields-per-second content with on-the-fly deinterlacing than it does as 30 frames-per-second content that's been deinterlaced at encode-time.
I tried this...
mkvpropedit -e track:v1 -a interlaced=1 foo.mkv
which did indeed set the interlaced bit...
|+ Segment tracks
| + A track
| + Video track
| + Pixel width: 704
| + Pixel height: 480
| + Display width: 625
| + Display height: 480
| + Interlaced: 1
But when I play the video with VLC with Deinterlace set to Automatic, it doesn't think the video is interlaced and therefore doesn't do the deinterlacing.
What am I doing wrong?
Software versions:
HandBrake 0.9.5
mkvpropedit v5.0.1
Mac OS X 10.7.3
To make HandBrake set the interlaced flag:
- use the H.264 (x264) video codec
- at the bottom of the Advanced tab add :tff or :bff (depending on whether the source is top field first or bottom field first)
I would recommend trying FFMPEG.
Documentation: http://ffmpeg.org/ffmpeg.html
‘-ilme’
Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
Use this option if your input file is interlaced and you want to keep
the interlaced format for minimum losses. The alternative is to
deinterlace the input stream with ‘-deinterlace’, but deinterlacing
introduces losses.
Since you mentioned you are on OS X 10.7, you can use MacPorts to install all the dependencies plus ffmpeg for you (once the deps are installed you can also build the latest from git).
http://www.macports.org/
(You must be comfortable with the command line for all these tools.)
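As a rough sketch of that ffmpeg route (file names are placeholders, and note that -ilme/-ildct apply to the MPEG-2/MPEG-4 encoders, not x264, so this keeps the video as interlaced MPEG-2 rather than H.264):
ffmpeg -i source.mpg -c:v mpeg2video -flags +ilme+ildct -top 1 -c:a copy output.mkv
Here -top 1 marks the stream as top field first; use -top 0 for bottom field first.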
