I'm looping my streamed videos (not live streams) via an .m3u8 playlist, and each time the video restarts, it goes through the same bitrate adaptation that occurs the first time you watch it (bad quality -> good quality). Is there a way to refresh the stream quality each time the video loops, so that the beginning is replaced seamlessly with the higher-bitrate rendition instead of just re-playing what was initially loaded?
Apple's AVPlayer attempts to load the first stream listed in the HLS playlist. So if you want the highest quality stream to be loaded first by default, you need to specify it as the first stream in the playlist file.
With that in mind, one way of achieving what you need to achieve is to have a different m3u8 file for each of your streams.
For example, if you have a three variant stream playlist, you would have three .m3u8 playlists.
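Each of those master playlists lists the same variants, just in a different order; the one meant to start at the highest quality might look roughly like this (the bitrates, resolutions and URIs below are illustrative):
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
high/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=960x540
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=500000,RESOLUTION=640x360
low/index.m3u8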
Then in your view controller where you are using your AVPlayer, you need to keep a reference to the last observed bitrate and the most recent bitrate:
var lastObservedBitrate: Double = 0
var mostRecentBitrate: Double = 0
You would then need to register a notification observer on your player with notification name: AVPlayerItemNewAccessLogEntryNotification
NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(MyViewController.accessLogEvent(_:)), name: AVPlayerItemNewAccessLogEntryNotification, object: nil)
Whenever the access log is updated, you can then inspect the bitrate and stream used using the following code:
func accessLogEvent(notification: NSNotification) {
    guard let item = notification.object as? AVPlayerItem,
        accessLog = item.accessLog() else {
            return
    }

    accessLog.events.forEach { lastEvent in
        let bitrate = lastEvent.indicatedBitrate
        lastObservedBitrate = lastEvent.observedBitrate

        if bitrate != self.mostRecentBitrate {
            self.mostRecentBitrate = bitrate
        }
    }
}
Whenever your player loops, you can load the appropriate m3u8 file based on your lastObservedBitrate. So if your lastObservedBitrate is 2500 kbps, you would load your m3u8 file that has the 2500kbps stream at the top of the file.
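A rough sketch of that loop handling, in the same Swift 2 style as the code above; the playlist URL map, the player property and the bitrate thresholds are all hypothetical placeholders:
// Hypothetical mapping from "top-listed" bitrate (bits per second) to the
// master playlist that lists that rendition first.
let playlistURLs: [Double: NSURL] = [
    2_500_000: NSURL(string: "https://example.com/video_2500_first.m3u8")!,
    1_000_000: NSURL(string: "https://example.com/video_1000_first.m3u8")!,
    500_000: NSURL(string: "https://example.com/video_500_first.m3u8")!
]

// Observe the end of playback so the next loop can start on a better playlist.
NSNotificationCenter.defaultCenter().addObserver(self,
    selector: #selector(MyViewController.playerItemDidReachEnd(_:)),
    name: AVPlayerItemDidPlayToEndTimeNotification,
    object: nil)

func playerItemDidReachEnd(notification: NSNotification) {
    // Pick the playlist whose top entry the last observed bandwidth can sustain.
    let affordable = playlistURLs.keys.filter { $0 <= lastObservedBitrate }
    guard let best = affordable.maxElement(), url = playlistURLs[best] else { return }

    // Replace the item instead of just seeking to zero, so AVPlayer starts with
    // the first (now highest sustainable) stream listed in the new playlist.
    player.replaceCurrentItemWithPlayerItem(AVPlayerItem(URL: url))
    player.play()
}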
Shameless plug: We've designed something similar in our video api. All you need to do is request the m3u8 file with your connection type: wifi or cellular and lastObservedBitrate and our API will vend you the best possible stream for that bitrate, but still have the ability to downgrade/upgrade the stream if network conditions change.
If you are interested in checking it out visit: https://api.storie.com or https://github.com/Storie/StorieCloudSDK
Given an HLS manifest with multiple variants/renditions:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1612430,CODECS="avc1.4d0020,mp4a.40.5",RESOLUTION=640x360
a.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3541136,CODECS="avc1.4d0020,mp4a.40.5",RESOLUTION=960x540
b.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=5086455,CODECS="avc1.640029,mp4a.40.5",RESOLUTION=1280x720
c.m3u8
Is it possible to get an array of the three variants (with the attributes such as bandwidth and resolution) from either the AVAsset or AVPlayerItem?
I am able to get the currently playing AVPlayerItemTrack by using KVO on the AVPlayerItem, but again, that's only the track that's actively being played, not the full list of variants.
I'm interested in knowing whether the asset is being played at its highest possible quality, so that I can decide whether the user has enough bandwidth to start a simultaneous secondary video stream.
To know which variant you are currently playing, you can observe AVPlayerItemNewAccessLogEntryNotification, and by looking at the AVPlayerItemAccessLogEvent entries in the access log you can tell the current bitrate and any change in bitrate.
AVPlayerItemAccessLog *accessLog = [((AVPlayerItem *)notif.object) accessLog];
AVPlayerItemAccessLogEvent *lastEvent = accessLog.events.lastObject;
if (lastEvent.indicatedBitrate != self.previousBitrate) {
    self.bitrate = lastEvent.indicatedBitrate;
}
As far as knowing the entire list of available bitrates, you can simply make a GET request for the master m3u8 playlist and parse it. You will only need to do it once so not much of an overhead.
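A minimal sketch of that approach in Swift (the master playlist URL is a placeholder, and a production parser would also need to handle quoted attribute values that themselves contain commas, such as CODECS):
import Foundation

struct Variant {
    let bandwidth: Int?
    let resolution: String?
    let uri: String
}

// Download the master playlist and pull BANDWIDTH/RESOLUTION out of each
// #EXT-X-STREAM-INF entry, pairing it with the URI on the following line.
func fetchVariants(from masterURL: URL, completion: @escaping ([Variant]) -> Void) {
    URLSession.shared.dataTask(with: masterURL) { data, _, _ in
        guard let data = data, let text = String(data: data, encoding: .utf8) else {
            completion([])
            return
        }
        var variants: [Variant] = []
        var pendingAttributes: [String: String]?
        for rawLine in text.split(separator: "\n") {
            let line = String(rawLine).trimmingCharacters(in: .whitespaces)
            if line.hasPrefix("#EXT-X-STREAM-INF:") {
                var attrs: [String: String] = [:]
                let attributeText = String(line.dropFirst("#EXT-X-STREAM-INF:".count))
                for pair in attributeText.split(separator: ",") {
                    let parts = pair.split(separator: "=", maxSplits: 1).map(String.init)
                    if parts.count == 2 { attrs[parts[0]] = parts[1] }
                }
                pendingAttributes = attrs
            } else if let attrs = pendingAttributes, !line.isEmpty, !line.hasPrefix("#") {
                variants.append(Variant(bandwidth: attrs["BANDWIDTH"].flatMap { Int($0) },
                                        resolution: attrs["RESOLUTION"],
                                        uri: line))
                pendingAttributes = nil
            }
        }
        completion(variants)
    }.resume()
}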
New in iOS 15, there’s AVAssetVariant
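With it you can read the declared variants straight from the asset. A rough sketch (the property names below are from memory, so verify them against the AVAssetVariant documentation):
import AVFoundation

// iOS 15+ only. peakBitRate and videoAttributes are assumed property names;
// double-check them against the current AVAssetVariant docs before relying on them.
func logVariants(of asset: AVURLAsset) async throws {
    let variants = try await asset.load(.variants)
    for variant in variants {
        print("variant: \(String(describing: variant.peakBitRate)) bps, " +
              "size: \(String(describing: variant.videoAttributes?.presentationSize))")
    }
}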
I am in the process of developing a game for iOS 9+ using Sprite Kit and preferably using Swift libraries.
Currently I'm using a Singleton where I preload my audio files, each connected to a separate instance of AVAudioPlayer.
Here's a short code snippet to get the idea:
import SpriteKit
import AudioToolbox
import AVFoundation
class AudioEngine {

    static let sharedInstance = AudioEngine()

    internal var sfxPing: AVAudioPlayer

    private init() {
        self.sfxPing = AVAudioPlayer()

        if let path = NSBundle.mainBundle().pathForResource("ping", ofType: "m4a") {
            do {
                let url = NSURL(fileURLWithPath: path)
                sfxPing = try AVAudioPlayer(contentsOfURL: url)
                sfxPing.prepareToPlay()
            } catch {
                print("ERROR: Can't load ping.m4a audio file.")
            }
        }
    }
}
This Singleton is initialised during app start-up. In the game-loop I then just call the following line to play a specific audio file:
AudioEngine.sharedInstance.sfxPing.play()
This basically works, but I always get glitches when a file is played and the frame rate drops from 60.0 to 56.0 on my iPad Air.
Does anyone have an idea how to fix this performance issue with AVAudioPlayer?
I also looked at 3rd-party libraries, namely:
AudioKit [Looks very heavyweight]
ObjectAL [Last update 2013 ...]
AVAudioEngine [Based on AVAudioPlayer, same problems?]
Requirements:
Play a lot of very short samples (like shots, hits, etc..)
Play some motor effects (thus pitching would be nice)
Play some background / ambient sound in a loop
NO nasty glitches / frame rate drops !
Could you recommend any of the above mentioned libraries for my requirements or point out the problems using the above code ?
UPDATE:
Playing short sounds with:
self.runAction(SKAction.playSoundFileNamed("sfx.caf", waitForCompletion: false))
does indeed improve the frame rate. I exported the audio files with Audacity to the .caf format (Apple's Core Audio Format). But in the tutorial, they export with "Signed 32-bit PCM" encoding, which led to distorted audio playback in my case. Using any of the other encoding options (32-bit float, U-Law, A-Law, etc.) worked fine for me.
Why use the caf format? Because it's uncompressed and thus loads into memory faster with less CPU overhead than compressed formats like m4a. For short sound effects played frequently at short intervals this makes sense, and disk usage is barely affected since short audio files only take up a few kilobytes. For bigger audio files, like ambient and background music, compressed formats (mp3, m4a) are obviously the better choice.
According to your question, since you are developing a game for iOS 9+, you can use the new iOS 9 class SKAudioNode (official Apple doc):
var backgroundMusic: SKAudioNode!
For example you can add this to didMoveToView():
if let musicURL = NSBundle.mainBundle().URLForResource("music", withExtension: "m4a") {
    backgroundMusic = SKAudioNode(URL: musicURL)
    addChild(backgroundMusic)
}
You can also use it to play a simple effect:
let beep = SKAudioNode(fileNamed: "beep.wav")
beep.autoplayLooped = false
self.addChild(beep)
Finally, if you want to change the volume:
beep.runAction(SKAction.changeVolumeTo(0.4, duration: 0))
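To actually trigger the non-looping effect later, run the play action on the node:
beep.runAction(SKAction.play())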
Update:
I see you have updated your question to talk about AVAudioPlayer and SKAction. I've tested both of them for my iOS 8+ compatible games.
About AVAudioPlayer, I personally use a custom library I made, based on the old SKTAudio.
Looking at your AVAudioPlayer init code, mine is different because I use:
@available(iOS 7.0, *)
public init(contentsOfURL url: NSURL, fileTypeHint utiString: String?)
I don't know if fileTypeHint makes the difference, so give it a try and let me know how your test goes.
Advantages of your code:
With a shared-instance audio manager based on AVAudioPlayer you can control the volume, use your manager wherever you want, and ensure compatibility with iOS 8.
Disadvantages of your code:
Every time you play a sound and then want to play another one, the previous sound gets cut off, especially if you have background music running.
How to solve it? According to this SO post, to work well without issues AVFoundation seems to be limited to 4 instantiated AVAudioPlayer properties, so you can do this:
1) backgroundMusicPlayer: AVAudioPlayer!
2) soundEffectPlayer1: AVAudioPlayer!
3) soundEffectPlayer2: AVAudioPlayer!
4) soundEffectPlayer3: AVAudioPlayer!
You could build a method that cycles through the 3 sound-effect players to see whether one is occupied:
if player.playing
and uses the next free player (as sketched below). With this workaround your sounds always play correctly, even alongside your background music.
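A rough sketch of that rotation, using an array of three optional slots instead of three named properties (all names here are illustrative):
// Keep three reusable slots and load each new effect into whichever slot
// is not currently playing (Swift 2 style, matching the code above).
var effectPlayers: [AVAudioPlayer?] = [nil, nil, nil]

func playEffect(url: NSURL) {
    for index in 0..<effectPlayers.count {
        let player = effectPlayers[index]
        if player == nil || player?.playing == false {
            if let newPlayer = try? AVAudioPlayer(contentsOfURL: url) {
                effectPlayers[index] = newPlayer
                newPlayer.prepareToPlay()
                newPlayer.play()
            }
            return
        }
    }
    // All three slots are busy: skip the sound or steal a slot.
}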
I'm working with AVAudioPlayer and AVAudioSession. I have an iPad and an audio interface (sound card).
This audio interface has 4 outputs (2 stereo pairs) and a lightning cable, and it receives power from the iDevice; it works excellently.
I've coded a simple play()/stop() AVAudioPlayer that works fine, BUT I need to assign a specific channel pair of the audio interface (1-2 & 3-4). My idea is to send two audio files (A & B), each to its own output/channel pair (1-2 or 3-4).
I've read the AVAudioPlayer documentation and it says: channelAssignments is for assigning channels to an audio player.
The problem is: I've created an AVAudioSession that gets the data of the USB device plugged in (the sound card). And I got:
let route = AVAudioSession.sharedInstance().currentRoute

for port in route.outputs {
    if port.portType == AVAudioSessionPortUSBAudio {
        let portChannels = port.channels
        let sessionOutputs = route.outputs
        let dataSource = port.dataSources
        dataText.text = String(portChannels) + "\n" + String(sessionOutputs) + "\n" + String(dataSource)
    }
}
Log:
outputs
Which data must I take and use to send the audio files with play()?
Wow - I had no idea that AVAudioPlayer had been developed at all since AVPlayer came out in iOS 4. Yet here we are in 2016, and AVAudioPlayer has channelAssignments while the fancy streaming, video playing with subtitles AVPlayer does not.
But I don't think you will be able to play two files A and B through one AVAudioPlayer as each player can only open one file. That leaves
creating two players (player(A) and player(B)) and setting the channelAssignments of each to one half of the audio device's output channels, dealing with the joys of synchronising the players (see the sketch after this list), or
creating a four-channel file, AB, and playing it through one player, assigning channelAssignments all four channels you found above, and dealing with the joys of slightly non-standard audio files.
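A rough, untested sketch of the first option in Swift 2 style, assuming the four USB output channels you found above (the file names are placeholders):
let urlA = NSBundle.mainBundle().URLForResource("audioA", withExtension: "m4a")!  // placeholder
let urlB = NSBundle.mainBundle().URLForResource("audioB", withExtension: "m4a")!  // placeholder

let route = AVAudioSession.sharedInstance().currentRoute
let usbPorts = route.outputs.filter { $0.portType == AVAudioSessionPortUSBAudio }

if let channels = usbPorts.first?.channels where channels.count >= 4 {
    do {
        let playerA = try AVAudioPlayer(contentsOfURL: urlA)
        let playerB = try AVAudioPlayer(contentsOfURL: urlB)

        // Route file A to output channels 1-2 and file B to channels 3-4.
        playerA.channelAssignments = Array(channels[0..<2])
        playerB.channelAssignments = Array(channels[2..<4])

        playerA.play()
        playerB.play()
    } catch {
        print("Could not create the players: \(error)")
    }
}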
Sanity check: is your session.outputNumberOfChannels returning 4?
Personally, when I do this kind of thing I create a 4-channel remote IO audio unit, as I've found the higher-level APIs cause too much heartache once you do anything a little unusual. I also use AVAudioSessionCategoryMultiRoute because I don't have any > 2 channel sound cards, so I have to cobble together the headphone jack plus a USB sound card to get 4 channels, but you shouldn't need this.
Despite not having procedural output (like remoteIO audio units), you may also be able to use AVAudioEngine to do what you want.
I'm coding something that:
record video+audio with the built-in camera and mic (AVCaptureSession),
do some stuff with the video and audio samplebuffer in realtime,
save the result into a local .mp4 file using AVAssetWriter,
then (later) read the file (video+audio) using AVAssetReader,
do some other stuff with the samplebuffer (for now I do nothing),
and write the result into a final video file using AVAssetWriter.
Everything works well but I have an issue with the audio format:
When I capture the audio samples from the capture session, I can log about 44 samples/sec, which seems to be normal.
When I read the .mp4 file, I only log about 3-5 audio samples/sec!
But the 2 files look and sound exactly the same (in QuickTime).
I didn't set any audio settings for the Capture Session (as Apple doesn't allow it).
I configured the outputSettings of the 2 audio AVAssetWriterInputs as follows:
NSDictionary *settings = @{
    AVFormatIDKey: @(kAudioFormatLinearPCM),
    AVNumberOfChannelsKey: @(2),
    AVSampleRateKey: @(44100.),
    AVLinearPCMBitDepthKey: @(16),
    AVLinearPCMIsNonInterleaved: @(NO),
    AVLinearPCMIsFloatKey: @(NO),
    AVLinearPCMIsBigEndianKey: @(NO)
};
I pass nil to the outputSettings of the audio AVAssetReaderTrackOutput in order to receive samples as stored in the track (according to the doc).
So, the sample rate should be 44100 Hz from the capture session to the final file. Why am I reading only a few audio samples? And why does it work anyway? I have the intuition that it will not work well when I have to work with the samples (I need to update their timestamps, for example).
I tried several other settings (such as kAudioFormatMPEG4AAC), but AVAssetReader can't read compressed audio formats.
Thanks for your help :)
Ok, I have been trying to wrap my head around this HTTP Live Streaming. I just do not understand it, and yes, I have read all the Apple docs and watched the WWDC videos, but I'm still super confused, so please help a wannabe programmer out!!!
Does the code you write go on the server, not in Xcode?
If I am right, how do I set this up?
Do I need to set up something special on my server, like PHP or something?
How do I use the tools that are supplied by Apple... the segmenter and such?
Please help me,
Thanks
HTTP Live Streaming
HTTP Live Streaming is a streaming standard proposed by Apple. See the latest draft standard.
Files involved are
.m4a for audio (if you want a stream of audio only).
.ts for video. This is an MPEG-2 transport stream, usually with an H.264/AAC payload. It contains 10 seconds of video, and it is created by splitting your original video file or by converting live video.
.m3u8 for the playlist. This is a UTF-8 version of the WinAmp playlist format.
Even though it's called live streaming, there is usually a delay of one minute or so during which the video is converted, the ts and m3u8 files are written, and your client refreshes the m3u8 file.
All these files are static files on your server. But in live events, more .ts files are added, and the m3u8 file is updated.
Since you tagged this question iOS it is relevant to mention related App Store rules:
You can only use progressive download for videos shorter than 10 minutes or that transfer less than 5 MB every 5 minutes. Otherwise you must use HTTP Live Streaming.
If you use HTTP Live Streaming you must provide at least one stream at 64 Kbps or lower bandwidth (the low-bandwidth stream may be audio-only or audio with a still image).
Example
Get the streaming tools
To download the HTTP Live Streaming Tools do this:
Get a Mac or iPhone developer account.
Go to https://developer.apple.com and search for "HTTP Live Streaming Tools", or look around at https://developer.apple.com/streaming/.
Command line tools installed:
/usr/bin/mediastreamsegmenter
/usr/bin/mediafilesegmenter
/usr/bin/variantplaylistcreator
/usr/bin/mediastreamvalidator
/usr/bin/id3taggenerator
Descriptions from the man page:
Media Stream Segmenter: Create segments from MPEG-2 Transport streams for HTTP Live Streaming.
Media File Segmenter: Create segments for HTTP Live Streaming from media files.
Variant Playlist Creator: Create playlist for stream switching from HTTP Live streaming segments created by mediafilesegmenter.
Media Stream Validator: Validates HTTP Live Streaming streams and servers.
ID3 Tag Generator: Create ID3 tags.
Create the video
Install MacPorts, go to the terminal, and run sudo port install ffmpeg. Then convert the video to a transport stream (.ts) using this FFmpeg script:
# bitrate, width, and height, you may want to change this
BR=512k
WIDTH=432
HEIGHT=240
input=${1}
# strip off the file extension
output=$(echo ${input} | sed 's/\..*//' )
# works for most videos
ffmpeg -y -i ${input} -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s ${WIDTH}x${HEIGHT} -vcodec libx264 -b ${BR} -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 7 -trellis 0 -refs 0 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate ${BR} -bufsize ${BR} -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 30 -qmax 51 -qdiff 4 -level 30 -aspect ${WIDTH}:${HEIGHT} -g 30 -async 2 ${output}-iphone.ts
This will generate one .ts file. Now we need to split the file into segments and create a playlist containing all those segments. We can use Apple's mediafilesegmenter for this:
mediafilesegmenter -t 10 myvideo-iphone.ts
This will generate one .ts file for each 10 seconds of the video plus a .m3u8 file pointing to all of them.
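The generated playlist looks roughly like this (segment file names and exact tags depend on the tool version and the options you pass):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
myvideo-iphone-0.ts
#EXTINF:10.0,
myvideo-iphone-1.ts
#EXTINF:10.0,
myvideo-iphone-2.ts
#EXT-X-ENDLIST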
Setup a web server
To play an .m3u8 on iOS we point Mobile Safari to the file.
Of course, first we need to put the files on a web server. For Safari (or another player) to recognize the ts files, we need to add their MIME types. In Apache:
AddType application/x-mpegURL m3u8
AddType video/MP2T ts
In lighttpd:
mimetype.assign = ( ".m3u8" => "application/x-mpegURL", ".ts" => "video/MP2T" )
To link this from a web page:
<html><head>
<meta name="viewport" content="width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;"/>
</head><body>
<video width="320" height="240" src="stream.m3u8" />
</body></html>
To detect the device orientation see Detect and Set the iPhone & iPad's Viewport Orientation Using JavaScript, CSS and Meta Tags.
Other things you can do include creating different bitrate versions of the video, embedding metadata to read while playing as notifications, and of course having fun programming with MPMoviePlayerController and AVPlayer.
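For the metadata part, a rough sketch of the older timedMetadata KVO approach (playerItem stands for whatever AVPlayerItem you are playing; newer code would use AVPlayerItemMetadataOutput instead):
// Observe timed metadata (e.g. ID3 tags embedded in the stream with id3taggenerator).
playerItem.addObserver(self, forKeyPath: "timedMetadata", options: .New, context: nil)

override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?,
    change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {
    guard let item = object as? AVPlayerItem where keyPath == "timedMetadata" else { return }
    for metadataItem in item.timedMetadata ?? [] {
        print("metadata: \(metadataItem.commonKey) = \(metadataItem.value)")
    }
}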
This might help in Swift:
import UIKit
import MediaPlayer
class ViewController: UIViewController {

    var streamPlayer: MPMoviePlayerController = MPMoviePlayerController(contentURL: NSURL(string: "http://qthttp.apple.com.edgesuite.net/1010qwoeiuryfg/sl.m3u8"))

    override func viewDidLoad() {
        super.viewDidLoad()
        streamPlayer.view.frame = self.view.bounds
        self.view.addSubview(streamPlayer.view)
        streamPlayer.fullscreen = true
        // Play the movie!
        streamPlayer.play()
    }
}
MPMoviePlayerController is deprecated from iOS 9 onwards. We can use AVPlayerViewController() or AVPlayer for the purpose. Have a look:
import AVKit
import AVFoundation
import UIKit
AVPlayerViewController:
override func viewDidAppear(animated: Bool) {
    let videoURL = NSURL(string: "https://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4")
    let player = AVPlayer(URL: videoURL!)
    let playerViewController = AVPlayerViewController()
    playerViewController.player = player
    self.presentViewController(playerViewController, animated: true) {
        playerViewController.player!.play()
    }
}
AVPlayer:
override func viewDidAppear(animated: Bool) {
    let videoURL = NSURL(string: "https://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4")
    let player = AVPlayer(URL: videoURL!)
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = self.view.bounds
    self.view.layer.addSublayer(playerLayer)
    player.play()
}
Another explanation from Cloudinary http://cloudinary.com/documentation/video_manipulation_and_delivery#http_live_streaming_hls
HTTP Live Streaming (also known as HLS) is an HTTP-based media streaming communications protocol that provides mechanisms that are scalable and adaptable to different networks. HLS works by breaking down a video file into a sequence of small HTTP-based file downloads, with each download loading one short chunk of a video file.
As the video stream is played, the client player can select from a number of different alternate video streams containing the same material encoded at a variety of data rates, allowing the streaming session to adapt to the available data rate with high quality playback on networks with high bandwidth and low quality playback on networks where the bandwidth is reduced.
At the start of the streaming session, the client software downloads a master M3U8 playlist file containing the metadata for the various sub-streams which are available. The client software then decides what to download from the media files available, based on predefined factors such as device type, resolution, data rate, size, etc.