I am using AVPlayer to play an HLS video. The video has no sound. I also have a URL for an audio track in the same m3u8 format. Can I somehow change the AVPlayerItem's asset (or something similar) while my soundless video is playing, to add the other audio track so that the two are played together?
Disappointingly, you can't create an AVComposition from non-local video and audio tracks and play that.
But HLS is at its heart a sort of textual playlist consisting of media pieces that can be played either serially or concurrently. If you inspect both your video and audio m3u8 streams, you should be able to concoct a new single m3u8 stream that includes both video and audio.
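Untested, but the combined master playlist might look something like this, using HLS's EXT-X-MEDIA tag to declare the audio rendition as an alternate for the video stream (the URLs and codec strings here are placeholders, not anything from your setup):

#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="Main audio",DEFAULT=YES,AUTOSELECT=YES,URI="https://example.com/audio.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=1500000,CODECS="avc1.4d401f,mp4a.40.2",AUDIO="aud"
https://example.com/video.m3u8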
HOWEVER, disappointingly, it seems you can't play this resulting stream as a local file (why!?!), so you'd set up an HTTP server to serve it to you, either locally or from afar, or maybe (!?) you could avoid all that with a clever use of AVAssetResourceLoaderDelegate.
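The resource loader idea, as a rough sketch: give AVURLAsset a made-up URL scheme so the system has to ask your delegate for the playlist, then answer with the bytes you concocted. Everything here (the "combined" scheme, PlaylistLoader, combinedPlaylist) is hypothetical and untested:

import AVFoundation

// Serves an in-memory playlist for our made-up "combined" scheme and
// lets every other request (e.g. the real https segment URLs) load normally.
class PlaylistLoader: NSObject, AVAssetResourceLoaderDelegate {
    let playlistData: Data   // the combined m3u8, built elsewhere

    init(playlistData: Data) {
        self.playlistData = playlistData
    }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        // Only intercept our custom scheme.
        guard loadingRequest.request.url?.scheme == "combined" else { return false }
        loadingRequest.contentInformationRequest?.contentType = "public.m3u-playlist" // UTI for m3u playlists
        loadingRequest.contentInformationRequest?.contentLength = Int64(playlistData.count)
        loadingRequest.dataRequest?.respond(with: playlistData)
        loadingRequest.finishLoading()
        return true
    }
}

// The unrecognized scheme forces AVFoundation to consult the delegate.
let asset = AVURLAsset(url: URL(string: "combined://whatever/master.m3u8")!)
let loader = PlaylistLoader(playlistData: combinedPlaylist)
asset.resourceLoader.setDelegate(loader, queue: DispatchQueue(label: "playlist.loader"))
let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))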
It also seems that synchronising two AVPlayers is unsupported, although perhaps that situation has improved.
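If you do try the two-player route anyway, newer SDKs at least let you start both players against a shared host clock. A minimal sketch, assuming both items have already buffered (the URLs are placeholders):

import AVFoundation
import CoreMedia

// One player for the silent video, one for the audio.
let videoPlayer = AVPlayer(url: URL(string: "https://example.com/video.m3u8")!)
let audioPlayer = AVPlayer(url: URL(string: "https://example.com/audio.m3u8")!)

// Stalling must be off before setRate(_:time:atHostTime:) is honoured.
videoPlayer.automaticallyWaitsToMinimizeStalling = false
audioPlayer.automaticallyWaitsToMinimizeStalling = false

// Start both half a second from now against the shared host clock.
let start = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                      CMTime(value: 1, timescale: 2))
videoPlayer.setRate(1.0, time: .invalid, atHostTime: start)
audioPlayer.setRate(1.0, time: .invalid, atHostTime: start)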
Related
I have an app which can play HLS video streams.
The HLS master playlist contains redundant streams to provide a backup service.
It looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000,RESOLUTION=638x480
https://example.com/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=1
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=2
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8?redundant=1
....
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8?redundant=5
So, I decided to test how this setup would fly in a bad-network scenario. For this, I used Network Link Conditioner's 3G preset, which provides 750 kbps of download bandwidth. Naturally I expected relatively smooth playback of the 400 kbps video, but alas, it took 60 seconds to fully load the test clip (800 KB total size).
What I noticed is that AVPlayer sends requests for all the listed redundant playlists (and I have 5 for each bandwidth). If I remove them and keep only one media playlist per bandwidth, the video loads in 10 seconds and plays without hiccups.
It looks like AVPlayer tries to process all the redundant links in parallel with the main video load and chokes hard.
Is there any way to restrict this behaviour of AVPlayer and force it to go for the redundant streams only in case of an actual load error?
Any idea why it tries to load all of them? Maybe some HLS tags can help?
It also sometimes displays errors like this in the console:
{OptimizedCabacDecoder::UpdateBitStreamPtr} bitstream parsing error!!!!!!!!!!!!!!
And I can't find much info about it.
The problem was an incorrectly set BANDWIDTH value: AVPlayer has some obscure logic for switching between redundant streams when the current one doesn't match the bandwidth values declared in the m3u8.
I'm creating an app where I want the possibility to record a video (and sound), and then play it back while recording audio. After this, I'd be left with a video file (containing audio) and a separate audio file (completely different from the video's audio track).
Is it possible to use AVMutableCompositionTrack to compose a new video file containing one video track and two separate audio tracks, and then use AVAssetExportSession to export this to one single standalone video file which keeps these audio tracks separated? What I hope to achieve is that the user can later watch this video file and choose whether one or both audio tracks should be playing. I know of the possibility of using multiple AVAssets to synchronize playback from different audio/video files, but I'm wondering if I can create one file containing separable audio tracks, and then later define each audio track as an AVAsset to control the syncing.
I know some video formats/codecs have the ability to change audio language, even when it's only one single file. I also know AVFoundation has great support for handling tracks. What I don't know is whether these are compatible with each other: can AVFoundation handle separate tracks from within one single file? And are there any containers with support for such tracks on iOS (e.g. .mp4, .mov, ...)?
I imagine "problems" would occur if a standard video player tried to watch this resulting movie-file (possibly only playing the video with the first audio track), but I'm thinking since I can already assume there are two (or more) audio tracks, it could be done?
Is this possible in any way?
Yes, it is possible to create a video file with multiple audio tracks by using AVAssetWriterInputGroup. The reference documentation says:
Use this class to associate tracks corresponding to multiple AVAssetWriterInput instances as mutually exclusive to each other for playback or other processing.
For example, if you are creating an asset with multiple audio tracks using different spoken languages—and only one track should be played at a time—group the inputs corresponding to those tracks into a single instance of AVAssetWriterInputGroup and add the group to the AVAssetWriter instance using the AVAssetWriter method add(_:).
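Putting that together, the writing side might look roughly like this; a minimal sketch, assuming you already have outputURL, videoSettings and audioSettings, and are appending sample buffers to the inputs elsewhere:

import AVFoundation

// One video input plus two alternate audio inputs, written to a single file.
let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
let audioA = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
let audioB = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)

writer.add(videoInput)
writer.add(audioA)
writer.add(audioB)

// Mark the two audio tracks as mutually exclusive alternates;
// audioA is the default track players should enable.
let group = AVAssetWriterInputGroup(inputs: [audioA, audioB], defaultInput: audioA)
if writer.canAdd(group) {
    writer.add(group)
}

writer.startWriting()
writer.startSession(atSourceTime: .zero)
// ... append sample buffers to the three inputs, then call finishWriting ...

I used .mov here since track alternate groups are a QuickTime concept; whether a stock player respects the grouping (rather than just playing the first enabled audio track) is another matter, as you suspected.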
I am using MPMoviePlayerController to stream and play a video. It's in m3u8 format and plays with no problems. However, let's say it has buffered and played 50% of the total video; now if I seek backward, it starts buffering from that point. Shouldn't it just play the video and not re-buffer from the seeked-to point, since it has already buffered that part?
This behaviour is observed only in the case of an m3u8 file; if I play an mp4 file, it doesn't do that. I mean, it won't buffer again.
So, is this expected behaviour, or am I just missing something?
Thanks in advance.
An m3u8 is a playlist file. It has links either to other playlist files or to video files in TS (transport stream) format. The TS files are most commonly 10-second chunks of video, so every N (e.g. 10) seconds the player fetches a new segment. When you seek, it goes and fetches the segment that holds that chunk of the video, so you see buffering again.
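To illustrate (this is a made-up media playlist, not yours), the structure the player walks looks like this, and a seek simply maps back to one of these entries:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST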
I'm building a video player that should handle both streaming and non-streaming content and I want it to be playable with AirPlay.
I'm currently using multiple AVPlayer instances (one for each clip), and it works okay, but the problem is that it doesn't give a very smooth experience when using AirPlay. The interface jumps back and forth between clips when switching AVPlayers, so I would like to migrate to using a single AVPlayer. This seems like a trivial task, but I haven't yet found a way to do it.
This is what I've tried so far:
Using a single AVPlayer with multiple AVPlayerItems and switching between them using replaceCurrentItemWithPlayerItem (see the sketch after this list). This works fine when switching between streaming->streaming clips or non-streaming->non-streaming, but AVPlayer doesn't seem to accept replacements between streaming->non-streaming or vice versa. Basically, nothing happens when I try to switch.
Using an AVQueuePlayer with multiple AVPlayerItems fails for the same reason as above.
Using a single AVPlayer with a single AVPlayerItem based on an AVMutableComposition asset. This doesn't work because streaming content is not allowed in an AVMutableComposition (and an AVURLAsset created from a streaming URL doesn't have any AVAssetTracks, which are required).
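For reference, the item-swapping approach from the first point is just this (URLs and file names are placeholders); it behaves as described, fine within one kind of content, a silent no-op across kinds:

import AVFoundation

let player = AVPlayer()
let streamingItem = AVPlayerItem(url: URL(string: "https://example.com/clip1.m3u8")!)
let fileItem = AVPlayerItem(url: Bundle.main.url(forResource: "clip2", withExtension: "mp4")!)

player.replaceCurrentItem(with: streamingItem)
player.play()

// Works for streaming -> streaming or file -> file,
// but nothing happens for streaming <-> file.
player.replaceCurrentItem(with: fileItem)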
So is there anything I am missing? Any other suggestion on how to accomplish this?
I put this question to Apple's Technical Support and got the answer that it's currently not possible to avoid the short jump back to the menu interface, and that no version of AVPlayer supports mixing streaming and non-streaming content.
Full response:
This is in response to your question about how to avoid the short jump back to the main interface when switching AVPlayers or AVPlayerItems for different media items while playing over AirPlay.
The issue here is the same with AVPlayer and AVQueuePlayer: no instance of AVPlayer (regardless of which particular class) can currently play both streaming and non-streaming content; that is, you can't mix HTTP Live Streaming media (e.g. .m3u8) with non-streaming media (a file-based resource such as an .mp4 file).
And with regard to AVMutableComposition, it does not allow streaming content.
Currently, there is no support for "seamless" video playback across multiple items. I encourage you to file an enhancement request for this feature using the Apple Bug Reporter (http://developer.apple.com/bugreporter/).
AVComposition is probably the best option at present for "seamless" playback. However, it has the limitation just described where streaming content is not allowed.
Long story short, I am trying to implement a naive solution for streaming video from the iOS camera/microphone to a server.
I am using AVCaptureSession with audio and video AVCaptureOutputs, and then using AVAssetWriter/AVAssetWriterInput to capture video and audio in the captureOutput:didOutputSampleBuffer:fromConnection: method and write the resulting video to a file.
To make this a stream, I am using an NSTimer to break the video into 1-second chunks (by hot-swapping in a different AVAssetWriter that has a different outputURL) and uploading these to a server over HTTP.
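In case it helps to see it, here is the hot-swap reduced to a sketch (class and file names made up, input setup elided):

import AVFoundation

// On each timer tick, finish the current writer and start a fresh one
// with a new outputURL. Input setup and session timing are elided;
// real code must also handle in-flight sample buffers.
final class ChunkedWriter {
    private var writer: AVAssetWriter?
    private var chunkIndex = 0

    func rollToNextChunk() throws {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("chunk\(chunkIndex).mp4")
        chunkIndex += 1

        let finished = writer
        writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
        // ... add AVAssetWriterInputs, startWriting(), startSession(atSourceTime:) ...

        finished?.finishWriting {
            // upload the finished chunk to the server here
        }
    }
}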
This is working, but the issue I'm running into is this: the beginnings of the .mp4 files always appear to be missing audio in the first frame, so when the video files are concatenated on the server (running ffmpeg) there is a noticeable audio skip at the joins between these files. The video is just fine; no skipping.
I tried many ways of making sure there were no CMSampleBuffers dropped and checked their timestamps to make sure they were going to the right AVAssetWriter, but to no avail.
Checking the AVCam example (with AVCaptureMovieFileOutput) and the AVCaptureLocation example (with AVAssetWriter), it appears the files they generate do the same thing.
Maybe there is something fundamental I am misunderstanding here about the nature of audio/video files, as I'm new to video/audio capture, but I thought I'd check before trying to work around this by learning to use ffmpeg, as some seem to do, to fragment the stream (if you have any tips on this, too, let me know!). Thanks in advance!
I had the same problem and solved it by recording the audio with a different API, Audio Queue. That seems to solve it; you just need to take care of the timing in order to avoid sound delay.