I have an iOS app that downloads videos from Firebase (Cloud Firestore) with a feed similar to Instagram/TikTok. However, I can't get the videos to be readily available before a user scrolls to the video. Any tips would be super helpful. How does TikTok do it? Do they save a whole bunch of videos to file in the background on load?
My current VideoDownloadManager:
Checks if the video is already loaded in the temp cache or a local file, and if not:
1. Downloads the video URL (via Firebase's downloadURL)
2. Returns the video URL for immediate use (before playing there is still some delay for buffering)
3. Stores the video URL in a temp cache (in case the user scrolls away and scrolls back)
4. Begins writing the video to file and removes the video from the temp cache on completion
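For reference, a simplified sketch of that flow in Swift (class and method names here are illustrative, not my exact implementation; Firebase's downloadURL result is passed in as remoteURL, and access to the cache is not synchronized):

import Foundation

// Illustrative sketch of the flow above, not the real manager.
final class VideoDownloadManager {
    private var tempCache = [String: URL]()        // videoID -> remote URL
    private let fileManager = FileManager.default

    private func localFileURL(for videoID: String) -> URL {
        fileManager.temporaryDirectory.appendingPathComponent("\(videoID).mp4")
    }

    /// `remoteURL` stands in for the URL returned by Firebase's downloadURL call.
    func playableURL(videoID: String, remoteURL: URL) -> URL {
        // Prefer a file that has already been written to disk.
        let localURL = localFileURL(for: videoID)
        if fileManager.fileExists(atPath: localURL.path) { return localURL }

        // Fall back to the temp cache, or to the freshly resolved remote URL.
        if let cached = tempCache[videoID] { return cached }
        tempCache[videoID] = remoteURL             // kept around in case the user scrolls back

        // Write the file in the background, then drop the temp cache entry.
        URLSession.shared.downloadTask(with: remoteURL) { [weak self] tmpFile, _, _ in
            guard let self = self else { return }
            if let tmpFile = tmpFile {
                try? self.fileManager.moveItem(at: tmpFile, to: localURL)
            }
            self.tempCache[videoID] = nil
        }.resume()

        return remoteURL
    }
}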
With the current setup, videos play efficiently after they are downloaded. But if a video is not downloaded yet and a user scrolls to it, it takes too long for steps 1 and 2 above to complete and buffer enough to play. I am already using OperationQueues and prioritizing the current video over any background videos, but this still isn't fast enough.
TikTok videos are almost always readily available as a user scrolls. What's the secret?
Thanks for your help!
I have a few tips for you:
1- when you are loading your stream, start preheating video URLs on a background thread (see the sketch after this list).
2- try not to download complete files; just cache or buffer a small amount of the file, like 1 MB.
3- with .mp4 files you can start playing a video even before it has downloaded completely.
4- start the full download once the video starts playing, based on the buffer rate or video length.
5- use videos with the smallest file size and bitrate you can. When creating them, convert them to a convenient format. My suggestion would be:
Video:
video bit rate -> 12.5
video size -> 960x540
conversion format -> h264
Sound:
rate -> 44100
encoding bit rate -> 96000
6- check whether the video has a buffered range of more than 25% before you start playback.
7- don't forget to do the downloads in a temp folder and clean that folder regularly. This helps avoid a huge app size; otherwise users may end up deleting your app!
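A rough sketch of points 1 and 6 (plus the small buffer from point 2), assuming AVFoundation; this is illustrative, not production code:

import AVFoundation

// Point 1: preheat a video URL in the background by creating the asset early
// and starting to load the keys the player will need.
func preheat(url: URL) -> AVPlayerItem {
    let asset = AVURLAsset(url: url)
    asset.loadValuesAsynchronously(forKeys: ["playable", "duration"]) { }
    let item = AVPlayerItem(asset: asset)
    item.preferredForwardBufferDuration = 5   // point 2: buffer a few seconds, not the whole file
    return item
}

// Point 6: check whether at least ~25% of the video is buffered before starting playback.
func isReadyToStart(_ item: AVPlayerItem) -> Bool {
    let duration = CMTimeGetSeconds(item.duration)
    guard duration.isFinite, duration > 0 else { return false }
    let buffered = item.loadedTimeRanges
        .map { CMTimeGetSeconds($0.timeRangeValue.duration) }
        .reduce(0, +)
    return buffered / duration >= 0.25
}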
For iOS developers: this is my videoConverter.
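(The converter itself isn't reproduced here; as a rough illustration only, AVAssetExportSession with the built-in 960x540 H.264 preset gets close to the settings above, though exact bitrates would need AVAssetWriter or a third-party exporter.)

import AVFoundation

// Rough conversion sketch: export to 960x540 H.264 in an .mp4 container.
func convertVideo(at inputURL: URL, to outputURL: URL, completion: @escaping (Bool) -> Void) {
    let asset = AVURLAsset(url: inputURL)
    guard let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset960x540) else {
        completion(false)
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.shouldOptimizeForNetworkUse = true   // moves metadata to the front so playback can start sooner
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}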
Also, you can use this caching video player: GSPlayer.
Related
I am building an app that streams video content, something like TikTok, so you can swipe through videos in a table, and when a new cell becomes visible its video starts playing. And it works great, except when you compare it to TikTok or Instagram etc. My video starts streaming pretty fast, but not always; it is very sensitive to network quality, and sometimes even when the network is great it still buffers too long. Comparing to TikTok, Instagram, ... in the same conditions, they don't seem to have that problem. I am using JWPlayer as the video hosting service and AVPlayer as the player. I am also doing async preload of assets before assigning them to the player item. So my question is: what else can I do to speed up video start? Do I need to do some special video preparation before uploading it to the streaming service? (I also stream .m3u8 files.) Is there some set of presets that enables optimum streaming quality and start speed? Thanks in advance.
So there are a few things you can do.
HLS is Apple's preferred method of streaming to an Apple device, so try to use it as much as possible for iOS devices.
The best practice when it comes to mobile streaming is offering multiple resolutions. The trick is to start with the lowest resolution available to get the video started, then switch to a higher resolution once the connection is determined to be capable of it. Generally this happens quickly enough that the user doesn't really notice. YouTube is the best example of this tactic. HLS does this automatically; not sure about m3u8.
Assuming you are using a UICollectionView or UITableView, try to start low-resolution streams of every video on the screen in the background every time the scrolling stops (see the sketch below). Not only does this allow you to do some cool preview stuff based off the buffer, but when the user taps a video the stream is already established. If that's too slow, try just the middle video.
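A hedged sketch of that idea (the controller, data source, and bitrate cap here are assumptions, not part of the original answer):

import UIKit
import AVFoundation

// Hypothetical feed controller that preloads low-bitrate streams when scrolling stops.
final class FeedViewController: UIViewController, UICollectionViewDelegate {
    var collectionView: UICollectionView!
    var videoURLs: [URL] = []                                // assumed data source
    private var preloadedPlayers: [IndexPath: AVPlayer] = [:]

    func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
        for indexPath in collectionView.indexPathsForVisibleItems {
            guard preloadedPlayers[indexPath] == nil else { continue }
            let item = AVPlayerItem(url: videoURLs[indexPath.item])
            item.preferredPeakBitRate = 500_000              // prefer a low-bitrate HLS variant to start
            preloadedPlayers[indexPath] = AVPlayer(playerItem: item)  // attaching starts buffering even while paused
        }
    }
}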
Edit the video in the background before upload so it is only at the max resolution you expect it to be played at. There are no 4K screen resolutions on any iOS device and probably never will be, so cut down the amount of data.
Without getting more specifics this is all I got for now. Hope I understood your question correctly. Good luck!
The main source of my worries comes from the m3u8 manifests I use to play video in my app.
They contain variants with audio & video, and some containing only audio.
The problem occurs when I load this type of manifest in AVPlayerViewController/AVPlayer while on a bad network or with low bandwidth.
In every case AVPlayer switches to the audio-only tracks to save some bandwidth, resulting in no image appearing in the player.
I would like to know if there is a way to force AVPlayer or AVPlayerViewController to only use the manifest variants containing both audio and video, and to forbid audio-only variants from being selected, loaded, and played.
Thank you in advance for any tips or pointers you can give me on this subject.
I am using AVPlayer to play a video from an https URL with a setup like this:
player = AVPlayer(url: URL(string: urlString)!)
player?.automaticallyWaitsToMinimizeStalling = false
But since the video is a little long, there is a short blank screen delay before the video actually starts to play. I think this is because it is being loaded from https.
Is there any way to remove that delay by making AVPlayer start playing the video right away, without loading the whole thing?
I added .automaticallyWaitsToMinimizeStalling but that does not seem to make a difference.
If anyone has any other suggestions please let me know.
I don't think this has anything to do with loading from HTTPS. What is your video file format? I think what you are looking for is adaptive bitrate streaming behavior.
https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming#Apple_HTTP_Live_Streaming
HTTP Live Streaming (HLS) is an HTTP-based media streaming communications protocol implemented by Apple Inc. as part of QuickTime X and iOS. HLS supports both live and Video on demand content. It works by breaking down streams or video assets into several small MPEG2-TS files (video chunks) of varying bit rates and set duration using a stream or file segmenter. One such segmenter implementation is provided by Apple.[29] The segmenter is also responsible for producing a set of index files in the M3U8 format which acts as a playlist file for the video chunks. Each playlist pertains to a given bitrate level, and contains the relative or absolute URLs to the chunks with the relevant bitrate. The client is then responsible for requesting the appropriate playlist depending on the available bandwidth.
For more information about HTTP Live Streaming
https://developer.apple.com/documentation/http_live_streaming
This tutorial includes some experiments with both an HTTP Live Streaming version and a non-HTTP Live Streaming version:
https://www.raywenderlich.com/5191-video-streaming-tutorial-for-ios-getting-started
Have you tried using AVPlayerItem's preferredForwardBufferDuration? You can manage how long AVPlayer continues to buffer using this property.
player.currentItem?.preferredForwardBufferDuration = 1
From Apple's own documentation:
The duration the player should buffer media from the network ahead of the playhead to guard against playback disruption.
This property defines the preferred forward buffer duration in seconds. If set to 0, the player will choose an appropriate level of buffering for most use cases. Setting this property to a low value will increase the chance that playback will stall and re-buffer, while setting it to a high value will increase demand on system resources.
Suggestion:
Since the video is streamed, we also rely on the network connection, so with a poor connection there is always a chance of showing a blank screen.
What we can do is fetch a thumbnail of the streaming video from the server on the previous screen, or generate a thumbnail on the application side from the streaming URL. When the streaming screen opens, display the thumbnail to the user with a loading indicator, and hide the thumbnail once the video starts streaming.
In that particular case you can place a UIImageView above the AVPlayerLayer view.
That image works as a cover image for your video, matching its first frame, along with a UIActivityIndicatorView as a subview.
Then hide that image when the video is about to play.
This hides the black frame at the start of your video, since the initial buffering state of the video is unavoidable.
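A minimal sketch of generating such a cover image client-side with AVAssetImageGenerator (this works for progressive files like .mp4; for HLS streams a server-provided thumbnail, as suggested above, is more reliable):

import UIKit
import AVFoundation

// Generate a cover image from near the start of a remote video, off the main thread.
func loadThumbnail(from videoURL: URL, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let asset = AVURLAsset(url: videoURL)
        let generator = AVAssetImageGenerator(asset: asset)
        generator.appliesPreferredTrackTransform = true      // respect the video's orientation
        let time = CMTime(seconds: 0.1, preferredTimescale: 600)
        var image: UIImage?
        if let cgImage = try? generator.copyCGImage(at: time, actualTime: nil) {
            image = UIImage(cgImage: cgImage)
        }
        DispatchQueue.main.async { completion(image) }
    }
}

Show the UIImageView (with the activity indicator) over the player layer, then hide it when the player item's status becomes .readyToPlay or playback actually starts.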
My app uses the VideoCore project for live streaming to a Wowza server and storing the video. It also uses AVCaptureMovieFileDataOutput to record offline video.
I want to embed the video capture timestamp at the top-left of the video, and it is not a static time; that is, it is not just a static watermark but a live display of the capture time.
For the streaming case, I have no idea for now. For the offline case, I tried to use AVCaptureAudioDataOutput to get every frame and add a time text overlay, but this causes the preview screen to freeze.
Any tips are helpful.
Thank you.
My platform is Xcode 7.3 + Swift 2.
I did something similar using transcoding on Wowza. The transcoding menu enables an image overlay, and this image can be refreshed every second (or less), so if you create an image with the timestamp every second, Wowza picks it up and puts it on the stream every second. You can define where to put the image, its size, and its transparency.
To create the image I use PHP, but you could use any other tool that can generate images.
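Purely as an illustration of the idea (the answer itself does this with PHP on the server, and the image has to live where the Wowza transcoder can read it), rendering a timestamp image once per second might look like this in Swift:

import UIKit

// Illustrative only: re-render a small timestamp PNG once per second.
final class TimestampImageWriter {
    private let outputURL: URL
    private var timer: Timer?
    private let formatter: DateFormatter = {
        let f = DateFormatter()
        f.dateFormat = "yyyy-MM-dd HH:mm:ss"
        return f
    }()

    init(outputURL: URL) { self.outputURL = outputURL }

    func start() {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            self?.writeImage()
        }
    }

    func stop() { timer?.invalidate() }

    private func writeImage() {
        let text = formatter.string(from: Date()) as NSString
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: 240, height: 32))
        let image = renderer.image { _ in
            text.draw(at: .zero, withAttributes: [.font: UIFont.systemFont(ofSize: 18),
                                                  .foregroundColor: UIColor.white])
        }
        try? image.pngData()?.write(to: outputURL)
    }
}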
The app I’m working on loops a video a specified # of times by adding the same AVAssetTrack (created from the original video url) multiple times to the same AVComposition at successive intervals. The app similarly inserts a new video clip into an existing composition by 'removing' the time range from the composition's AVMutableCompositionTrack (for AVMediaTypeVideo) and inserting the new clip's AVAssetTrack into the previously removed time range.
However, occasionally and somewhat rarely, after inserting a new clip as described above into a time range within a repeat of the original looping video, there are resulting blank frames which only appear at the video loop’s transition points (within the composition), but only during playback - the video exports correctly without gaps.
This leads me to believe the issue is with the AVPlayer or AVPlayerItem and how the frames are currently buffered for playback, rather than how I'm inserting/ looping the clips or choosing the correct CMTime stamps to do so. The app is doing a bunch of things at once (loop visualization in the UI via an NSTimer, audio playback via Amazing Audio Engine) - could my issue be a result of competition for resources?
One more note: I understand that discrepancies between audio and video in an asset can cause glitches (i.e. the underlying audio is a little bit longer than the video length), but as I'm not adding an audioEncodingTarget to the GPUImageWriter that I'm using to record and save the video, the videos have no audio components.
Any thoughts or directions you can point me in would be greatly appreciated! Many thanks in advance.
Update: the flashes coincide with the "Had to drop a video frame" error logged by the GPUImage library, which according to the creator has to do with the phone not being able to process video fast enough. Could multi-threading solve this?
Update 2: So the flashes actually don't always correspond to the "Had to drop a video frame" error. I have also disabled all of the AVRecorder/Amazing Audio Engine code and the issue still persists, so it is not a problem of resource competition between those engines. I have been logging properties of the AVPlayerItem and notice that 'isPlaybackLikelyToKeepUp' is always NO and 'isPlaybackBufferFull' is always YES.
So the problem is solved; it's sort of frustrating how brutally simple the fix is. I just used a time range one frame shorter when adding the videos to the composition, rather than the AVAssetTrack's full time range. No more flashes. Hopefully the users won't miss that 30th of a second :)
CMTime shortened_duration = CMTimeSubtract(originalVideoAssetTrack.timeRange.duration, CMTimeMake(1, 30));
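A hedged sketch of that fix in Swift, assuming a single video track being looped; the one-frame trim on each inserted range is the key point:

import AVFoundation

// Build a composition that loops the asset's video track `count` times,
// trimming one frame (at 30 fps) from each inserted range to avoid the
// blank-frame flashes at loop transitions during playback.
func makeLoopingComposition(asset: AVAsset, count: Int) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard
        let sourceTrack = asset.tracks(withMediaType: .video).first,
        let compositionTrack = composition.addMutableTrack(withMediaType: .video,
                                                           preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return composition }

    let shortenedDuration = CMTimeSubtract(sourceTrack.timeRange.duration,
                                           CMTimeMake(value: 1, timescale: 30))
    let range = CMTimeRange(start: sourceTrack.timeRange.start, duration: shortenedDuration)

    var insertionTime = CMTime.zero
    for _ in 0..<count {
        try compositionTrack.insertTimeRange(range, of: sourceTrack, at: insertionTime)
        insertionTime = CMTimeAdd(insertionTime, shortenedDuration)
    }
    return composition
}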