I am using MobileVLCKit to play AVI files over an HTTP stream. The stream is fine when the video file size is small, but when the file is large the stream shows a grey screen, jitter, and lag. Can anyone tell me how to solve this? The log says:
[0000000154e74628] core input error: ES_OUT_SET_(GROUP_)PCR is called too late (pts_delay increased to 2193 ms)
[0000000154e74628] core input error: ES_OUT_RESET_PCR called
[0000000154e74628] core input error: ES_OUT_SET_(GROUP_)PCR is called too late (jitter of 24516 ms ignored)
Please forgive my writing, as I am not used to writing.
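The pts_delay and jitter messages indicate that data is arriving too late for VLC's internal clock, which points at buffering rather than decoding. One commonly tried mitigation (an assumption here, not a confirmed fix for this stream) is to give the media a larger network cache before playing it. A minimal MobileVLCKit sketch, where the URL, the cache values, and someVideoView are placeholders:

```swift
import MobileVLCKit

// Sketch: enlarge VLC's network/file cache so late-arriving data has more headroom.
// The 3000 ms values are examples only, not verified against this particular stream.
let player = VLCMediaPlayer()
let media = VLCMedia(url: URL(string: "http://example.com/large-file.avi")!)
media.addOption(":network-caching=3000")   // buffer roughly 3 s of network data
media.addOption(":file-caching=3000")
player.media = media
player.drawable = someVideoView            // hypothetical UIView hosting the video
player.play()
```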
I have put together a sample project, https://github.com/liuxuan30/TestH264.git, that uses VideoToolbox to run a sample H.264 decoder and display a stream file captured from a camera.
The VideoToolbox H.264 decoder is copied from the internet; I didn't write it. When I try to play my H.264 stream file, it plays too fast compared to ffmpeg or ffplay, both of which play it back at normal speed.
I wanted to ask how to fix this behaviour. Thanks.
This happens because of the constant kCMSampleAttachmentKey_DisplayImmediately:
If this key is present, the sample should be displayed as soon as possible rather than
according to its presentation timestamp. Use this attachment at run time to request this
behavior from a display pipeline such as the AVSampleBufferDisplayLayer class.
This attachment is not written to media files.
from the Apple documentation
So you have two options for displaying:
Display immediately, which is probably good for a real-time stream, when you need to show each frame as soon as possible
Display frames at their presentation timestamps (see the sketch after this list)
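If the decoder you copied sets kCMSampleAttachmentKey_DisplayImmediately on every sample, one hedged way to switch to the second option (assuming the frames go through an AVSampleBufferDisplayLayer and the sample buffers carry valid presentation timestamps) is to not set that attachment, or to clear it before enqueueing. A minimal sketch:

```swift
import CoreMedia

// Sketch: clear the DisplayImmediately attachment on a decoded sample buffer so the
// display pipeline honors the buffer's presentation timestamp instead of showing it
// right away (assumes the buffer was created with valid timing info).
func clearDisplayImmediately(on sampleBuffer: CMSampleBuffer) {
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer,
                                                                    createIfNecessary: true),
          CFArrayGetCount(attachments) > 0 else { return }
    let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0),
                             to: CFMutableDictionary.self)
    CFDictionarySetValue(dict,
                         Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                         Unmanaged.passUnretained(kCFBooleanFalse).toOpaque())
}
```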
*compared to ffmpeg or ffplay, both of which play it back at normal speed.
ffplay and ffmpeg probably use the timestamps at this point.
I get the same result as you from your test H.264 file, but it happens because you get all the decoded frames at once, so the decoder displays them immediately.
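Even with the attachment cleared, an AVSampleBufferDisplayLayer only paces frames when its control timebase is running; otherwise enqueued frames still drain as fast as they arrive. A minimal sketch, assuming you know the first frame's PTS (startTimebase is my name, not something from the linked project):

```swift
import AVFoundation
import CoreMedia

// Sketch: attach a timebase that starts at the first frame's PTS and advances in
// real time, so enqueued frames are shown at their presentation timestamps rather
// than as fast as they are decoded.
func startTimebase(on layer: AVSampleBufferDisplayLayer, firstPTS: CMTime) {
    var timebase: CMTimebase?
    CMTimebaseCreateWithSourceClock(allocator: kCFAllocatorDefault,
                                    sourceClock: CMClockGetHostTimeClock(),
                                    timebaseOut: &timebase)
    guard let tb = timebase else { return }
    CMTimebaseSetTime(tb, time: firstPTS)   // position the clock at the first frame
    CMTimebaseSetRate(tb, rate: 1.0)        // rate 1.0 = real-time playback
    layer.controlTimebase = tb
}
```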
You can watch this video for more information about the VideoToolbox framework:
Direct Access to Video Encoding and Decoding
Hello, I am trying to capture a stream with OpenCV in Python. I am getting a delay of 6 seconds when opening the stream, which I feel is very high. In VLC I am able to open the stream in 0.5 seconds at most. Is there any way to speed up opening the stream?
I was able to solve the issue by using OpenCV compiled with GStreamer support and making OpenCV open the RTSP stream through GStreamer (with cv2.CAP_GSTREAMER) instead of FFmpeg. With this I was able to open the stream in 1.5 seconds.
TL;DR
I want to use FFmpeg on an iOS device to convert fMP4 fragments to TS segments (for HLS) as the fragments are being written.
Why?
I'm trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I've tried
Attempt 1: Rolling AVAssetWriters where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio written by the AAC encoder creating gaps.
2) Since video frames are 33.33 ms long and audio samples are roughly 0.02 ms long, it's possible for them to not line up at the end of a file.
3) The lack of frame-accurate encoding, which is present on macOS but not available on iOS (Details Here).
Attempt 2: FFmpeg muxing a large video-only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK.
What went wrong - Every once in a while an audio-only file would get uploaded, with no video whatsoever. We were never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking in the final segments, almost as if the TS segments were incorrectly timestamped.
What I'm thinking now
Apple was pushing fMP4 at WWDC this year (2016) and I hadn't looked into it much at all before that. Since an fMP4 file can be read and played while it's being written, I thought it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I only used it briefly in attempt #2.
What I need from you
Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
How can I take data from a file on disk, chunk at a time, pass it into FFmpeg and have it spit out TS segments?
Strictly speaking, you don't need to transcode the fMP4 if it contains H.264 + AAC; you just need to repackage the sample data as TS (using ffmpeg -codec copy or GPAC).
Regarding alignment (1.2), I suppose this all depends on your encoder settings (frame rate, sample rate and GOP size). It is certainly possible to make sure that audio and video align exactly at fragment boundaries (see, for example, this table). If you're targeting iOS, I would recommend using HLS protocol version 3 (or 4), which allows timing to be represented more accurately. This also allows you to stream audio and video separately (non-multiplexed).
I believe ffmpeg should be capable of pushing a live fMP4 stream (i.e. using a long-running HTTP POST), but playout requires the origin software to do something meaningful with it (i.e. stream it as HLS).
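As a side note on the "how will I know that AVFoundation has finished writing a fragment" question: newer iOS releases added a delegate-based writing mode in which AVAssetWriter hands back each finished segment as a Data blob, so nothing has to watch the file on disk. A minimal sketch, assuming such an OS version is acceptable and using uploadOrRemux(_:) as a hypothetical stand-in for whatever pipes the bytes to FFmpeg or the uploader:

```swift
import AVFoundation
import UniformTypeIdentifiers

final class FragmentWriter: NSObject, AVAssetWriterDelegate {
    let writer: AVAssetWriter

    override init() {
        // Delegate-based writing: no output URL; segments are delivered as Data.
        writer = AVAssetWriter(contentType: UTType(AVFileType.mp4.rawValue)!)
        super.init()
        writer.outputFileTypeProfile = .mpeg4AppleHLS          // fragmented MP4 for HLS
        writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
        writer.initialSegmentStartTime = .zero
        writer.delegate = self
        // Adding inputs, startWriting() and startSession() are omitted in this sketch.
    }

    func assetWriter(_ writer: AVAssetWriter,
                     didOutputSegmentData segmentData: Data,
                     segmentType: AVAssetSegmentType) {
        // segmentType distinguishes the initialization segment (moov) from the media
        // segments (moof + mdat); each call delivers one complete, finished chunk.
        uploadOrRemux(segmentData)   // hypothetical: hand the bytes to FFmpeg / uploader
    }

    func uploadOrRemux(_ data: Data) { /* placeholder */ }
}
```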
Is it possible to access the raw PCM audio data that is being played when using XAudio2 to play a file?
I've been searching for several ways to access a decoded version of audio files being played in SL4/Windows Phone, without success.
According to this post, someone had success writing a custom XAPO that just grabs samples and is enabled on a submix voice: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/05593fad-dfd8-4c77-983b-8c84cd4a324b/xaudio2-saving-output-custom-xapos-slow-down-audio-play-backwards
Please note that if you just want to do this for audio processing, this approach is not optimal because you are limited to the speed of audio playback.
In my audio app I need to be able to change the format of an audio file (AIFF), more specifically its sample rate. The audio session is running at 22050 Hz, and the audio file itself is created in libpd/Pure Data, also running at the same sample rate. The problem is that the file appears to be a 44100 Hz audio file, which means that when played back on the device it plays twice as fast.
Is it possible to change the header of the file, or something like that, so that its sample rate becomes 22050 Hz without resampling the audio?
I have seen other related topics where one suggestion is to play the file at half speed. However, this will not solve my problem, as the file will be further compressed to AAC for uploading to a server, and it needs to play back at the correct speed on other devices.
Thanks!
I discovered that the problem was caused by a bug in a file-creating object in Pure Data. No matter what sample rate I set for the file, it ended up being 44100 Hz. So I simply switched to using WAV files; the files ended up having the correct sample rate of 22050 Hz and now play back at the correct speed.
All good now!
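For reference, since the original question asked about rewriting the header rather than resampling: an AIFF stores its sample rate as an 80-bit extended float inside the COMM chunk, so patching that one field changes the declared rate without touching the sample data. A minimal sketch (patchAIFFSampleRate is a hypothetical helper; it assumes a well-formed, uncompressed AIFF and a 22050 Hz target):

```swift
import Foundation

// Sketch: locate the COMM chunk and overwrite its 10-byte sample-rate field with the
// 80-bit extended-float encoding of 22050 Hz. The audio data is left untouched, so
// playback is simply reinterpreted at the new rate (no resampling).
func patchAIFFSampleRate(at url: URL) throws {
    var data = try Data(contentsOf: url)
    let rate22050: [UInt8] = [0x40, 0x0D, 0xAC, 0x44, 0, 0, 0, 0, 0, 0]

    // IFF layout: 12-byte FORM header, then chunks of (4-byte id, 4-byte big-endian size, payload).
    var offset = 12
    while offset + 8 <= data.count {
        let id = String(bytes: data[offset..<offset + 4], encoding: .ascii) ?? ""
        let size = data[offset + 4..<offset + 8].reduce(0) { ($0 << 8) | Int($1) }
        if id == "COMM" {
            // COMM payload: channels (2) + sample frames (4) + bits (2) + sample rate (10).
            let rateOffset = offset + 8 + 2 + 4 + 2
            data.replaceSubrange(rateOffset..<rateOffset + 10, with: rate22050)
            try data.write(to: url)
            return
        }
        offset += 8 + size + (size % 2)   // chunk payloads are padded to an even length
    }
}
```

After the fix in Pure Data (or the switch to WAV) this is not needed, but it answers the original header question for anyone in the same situation.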