Video.js Recommended Encoder Settings for Cellular Streaming - iOS

I'm in the final stages of finishing a video.js-heavy website and am running into one problem that I really don't want to leave to experimentation. I was wondering if there are any recommendations for streaming over 4G/3G.
The problem occurs specifically when streaming HLS using the videojs-contrib-hls tech over 4G/3G. Both Android (Lollipop) and iOS 9 phones immediately pick up audio but never get video, or only get it several minutes later (a normal user would have stopped watching by that point). When I plug in an Android or iOS device for console debugging there are no console errors (the symptoms persist), and turning on WiFi gets me both audio and video, which leads me to believe I'm just dealing with an encoder settings problem. The signal chain is the encoder (settings below) into a Wowza server, and out to ScaleEngine as a CDN; the CDN's HLS and RTMP links are of course what's used for public playback.
Encoder Settings:
Video: H.264 @ 1,250 Kbps CBR, 720x480 @ 30i
Audio: AAC @ 96 Kbps Stereo
Profile: Main
Level: 3.1
B-Frames: 0
Don't see any other pertinent info.
Appreciate the help/opinions/general information. Thanks.

Related

Video No Longer Recording Audio (PBJVision/AVFoundation)

I have an app (enterprise, distributed OTA) that, among other things, records video clips. All of a sudden we started getting video uploads that were missing audio, and the issue now seems to be totally reproducible. I've been using the PBJVision library, which had seemed to work great, but I have also tested this with SwiftyCam (another AVFoundation-based library) with the same results. It's unclear exactly when this was introduced, but I've checked the following:
Ensure that an NSMicrophoneUsageDescription is set in the target .plist
Ensure that camera and microphone permissions are showing as granted in system settings
Try disabling microphone permissions in settings (app correctly prompts the user to re-enable permissions)
Try earlier releases of the video capture library in case of regression
Try different video capture library
Explicitly set audio enabled and the bitrate for PBJVision/SwiftyCam, and ensure that the session is at least reporting that it has audio in the logs (that is, the library and AVFoundation think there's an input set up, with an input stream that's being handled)
Take a video with the system camera, and upload through the app — in this case, audio does work (it's not a problem with the hardware)
Reset all content and permissions on a device, to make sure there isn't some kind of cached permission hanging out
Make sure volume is not muted
The copy that gets saved to the camera roll is also silent, so it's not happening when the video gets uploaded. I also started to implement recording using just AVFoundation, but don't want to waste time if that will produce the same results. What could be causing a particular app not to record audio with video? I have looked at related questions, and none of the solutions provided address the problem I'm having here.
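For anyone wanting to rule the wrapper libraries out entirely, here is a minimal sketch of what a bare AVFoundation recording path with an explicit audio input can look like. The class name, threading, and error handling are simplified and are my own, not code from the app in question:

import AVFoundation

// Minimal capture pipeline with an explicit audio input, useful for isolating
// the "video but no audio" symptom outside of PBJVision/SwiftyCam.
final class MinimalRecorder: NSObject, AVCaptureFileOutputRecordingDelegate {
    private let session = AVCaptureSession()
    private let movieOutput = AVCaptureMovieFileOutput()

    func configure() throws {
        session.beginConfiguration()

        // Video input (default camera).
        if let camera = AVCaptureDevice.default(for: .video) {
            let videoInput = try AVCaptureDeviceInput(device: camera)
            if session.canAddInput(videoInput) { session.addInput(videoInput) }
        }

        // Audio input; if this is missing, the output file will be silent.
        if let mic = AVCaptureDevice.default(for: .audio) {
            let audioInput = try AVCaptureDeviceInput(device: mic)
            if session.canAddInput(audioInput) { session.addInput(audioInput) }
        }

        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }
        session.commitConfiguration()
    }

    func start(recordingTo url: URL) {
        session.startRunning()
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }

    func stop() {
        movieOutput.stopRecording()
        session.stopRunning()
    }

    // AVCaptureFileOutputRecordingDelegate
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        print("Finished recording:", outputFileURL, error ?? "no error")
    }
}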
EDIT:
Here are the logs that appear when starting, recording, and stopping a PBJVision session:
[5411:1305718] VISION: camera setup
[5411:1305718] VISION: switchDevice 1 switchMode 1
[5411:1305718] VISION: capture session setup
[5411:1305291] VISION: session was started
[5411:1305718] VISION: capture session running
[5411:1305291] VISION: starting video capture
[5411:1305718] VISION: ready for video (1)
[5411:1305718] VISION: audio stream setup, channels (1) sampleRate (44100.000000)
[5411:1305718] VISION: ready for audio (1)
[5411:1305291] VISION: ending video capture
[5411:1305963] VISION: capture session stopped
[5411:1305963] VISION: session was stopped
[5411:1305291] CMTimeMakeWithSeconds(8.396 seconds, timescale 24): warning: error of -0.021 introduced due to very low timescale
It turns out that this was actually due to using another library to play a sound around the time video recording starts. This apparently preempts the audio channel for the recording, which ends up being empty (see Record Audio/Video with AVCaptureSession and Playback Audio simultaneously?). It does not appear to matter whether the other sound playback is started before or after the video recording. This is a good cautionary example of using multiple libraries that all touch the same system APIs: in some cases, like this one, they interact in undesirable ways.
In this case, the solution is to make sure that the two sources aren't using the same AVAudioSessionCategory, so they don't conflict.
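The exact fix depends on which libraries are touching the audio session, but as a minimal sketch (assuming you want recording and sound playback to coexist rather than preempt each other), explicitly configuring the shared AVAudioSession once, up front, looks something like the following. The specific category, mode, and options are my assumptions, not something from the original post:

import AVFoundation

// Sketch: configure the shared audio session once, before starting the video
// recorder and before playing any UI sounds, so the capture library and the
// sound-playback library aren't reconfiguring it out from under each other.
// Category/mode/options are assumptions; adjust to your app's needs.
func configureSharedAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers, .defaultToSpeaker])
        try session.setActive(true)
    } catch {
        print("Failed to configure AVAudioSession:", error)
    }
}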

Playing only audio from an HLS video stream when not connected to WiFi using Daily Motion's hls.js

This question is in regard to Dailymotion's hls.js API.
My goal is to save on data usage when not connected to WiFi by playing only the audio portion of a HLS video stream.
I have looked at similar questions for other APIs but have not found anything relevant to the hls.js API.
Details:
I tested my live stream HLS file on your demo page. It identified 1 audio track and displayed it in the Audio Track Controls. At the bottom of this post I am including the format of my HLS file with identifying info changed.
Question:
Will the hls.js API allow me to force playback of only the audio once I have identified the lack of a WiFi connection? What setting or command would I use to do that? Alternatively, can I force playback at the lowest resolution?
Thanks,
RKern
HLS File Format:
#EXTM3U
#EXT-X-VERSION:5
#EXT-UPLYNK-LIVE
#EXT-X-START:TIME-OFFSET=0.00
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",NAME="unspecified",LANGUAGE="en",AUTOSELECT=YES,DEFAULT=YES
#UPLYNK-MEDIA0:416x234x30,baseline-13,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=416x234,BANDWIDTH=471244,CODECS="mp4a.40.5,avc1.42000d",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=411975
http://content-ause1.uplynk.com/channel/test/d.m3u8?pbs=test
#UPLYNK-MEDIA0:704x396x30,main-30,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=704x396,BANDWIDTH=873267,CODECS="mp4a.40.5,avc1.4d001e",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=688830
http://content-ause1.uplynk.com/channel/test/e.m3u8?pbs=test
#UPLYNK-MEDIA0:896x504x30,main-31,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=896x504,BANDWIDTH=1554841,CODECS="mp4a.40.5,avc1.4d001f",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=1171051
http://content-ause1.uplynk.com/channel/test/f.m3u8?pbs=test
#UPLYNK-MEDIA0:1280x720x30,main-31,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=1280x720,BANDWIDTH=3328000,CODECS="mp4a.40.5,avc1.4d001f",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=2414865
http://content-ause1.uplynk.com/channel/test/g.m3u8?pbs=test
#UPLYNK-MEDIA0:192x108x15,baseline-11,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=192x108,BANDWIDTH=136226,CODECS="mp4a.40.5,avc1.42000b",FRAME-RATE=15.000,AUDIO="aac",AVERAGE-BANDWIDTH=120009
http://content-ause1.uplynk.com/channel/test/b.m3u8?pbs=test
#UPLYNK-MEDIA0:256x144x30,baseline-12,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=256x144,BANDWIDTH=259601,CODECS="mp4a.40.5,avc1.42000c",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=232565
http://content-ause1.uplynk.com/channel/test/c.m3u8?pbs=test

iOS MobileVLCKit and VideoCore conflict

I'm using MobileVLCKit to stream video and audio from a Wowza RTMP server. At the same time I'm using VideoCore to stream audio to the Wowza RTMP server (I closed off the video channel in VideoCore). I'm attempting to turn this into a sort of teleconferencing solution. I'm limited to RTMP or RTSP rather than a proper teleconferencing stack (WebRTC, SIP, or the like; I'm not familiar with these at the moment) because of limitations on the other end of the line.
The above setup doesn't work. Running the two functions (video and audio streaming down, audio streaming up) individually is fine, but not when they run simultaneously: no audio can be heard on the other end. In fact, when the app starts with VideoCore streaming audio upstream, as soon as I start streaming down via MobileVLCKit, the audio can no longer be heard on the other end, even though the stream is open. It appears the microphone is somehow wrested away from VideoCore, even though MobileVLCKit should not need the microphone.
However, when I split the two functions into two apps and allowed them to run in the background (audio & AirPlay background mode), they ran fine, with one app streaming down video and audio and the other picking up the microphone and streaming it to the other end.
Is there any reason why the two functions conflict within the same app, and any ideas on how to resolve the conflict?
I encountered the same problem. Say I have two objects: a VLC player and an audio processor that listens to the microphone. Both work at the same time in the simulator, but they conflict on an actual iPhone. I think the root cause is that only one party can hold the right to listen to the microphone, and VLC takes that right, so my audio processor cannot work. For various reasons I cannot modify the VLC code, so I had to figure out a workaround, and I found one.
The problem is that VLC claims the microphone but doesn't actually use it, while my audio processor does need it. So the way forward is clear: let the VLC player start first, and only then create the other object (the audio processor in my case) that needs to listen to the microphone. Since the audio processor is set up after the VLC player, it takes back the right to the microphone, and both work properly.
For your reference; I hope it helps you.
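A rough Swift sketch of that ordering, for illustration only: VLCMediaPlayer comes from MobileVLCKit, while AudioUploader is a hypothetical stand-in for the VideoCore-based microphone upstream described above.

import UIKit
import MobileVLCKit

// Hypothetical stand-in for the VideoCore-based audio upstream.
protocol AudioUploader {
    func startStreamingUpstream()
}

final class ConferenceController {
    private let vlcPlayer = VLCMediaPlayer()
    private var audioUploader: AudioUploader?

    // Start the downstream playback first, then the microphone capture,
    // so the capture object is the last one to claim the audio input.
    func start(downstreamURL: URL,
               videoView: UIView,
               makeAudioUploader: () -> AudioUploader) {
        // 1. Pull the remote audio/video stream with VLC.
        vlcPlayer.media = VLCMedia(url: downstreamURL)
        vlcPlayer.drawable = videoView
        vlcPlayer.play()

        // 2. Only now create and start the object that listens to the microphone.
        let uploader = makeAudioUploader()
        uploader.startStreamingUpstream()
        audioUploader = uploader
    }
}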

Widevine video streaming on iOS and AirPlay

Could you please help us with the following problem related to playback of a DRM (Widevine) encrypted video stream and the use of AirPlay?
When we tried to play the video from an iPhone to an Apple TV over AirPlay, a "failed to load content" error was shown on the TV screen. We are not sure if that is the correct behaviour. We think it is, because for encrypted video playback we cannot use AirPlay, as it would transport the raw unencrypted stream, right?
So far we have found that the only possible solution is showing the video on the iPhone while playing the audio on the Apple TV; it seems that the DRM restriction does not apply to audio.
Could you confirm the above description? Could you give us some advice?
We also found the following information on the Internet (note that we are not using Brightcove, but the principle should be the same): http://support.brightcove.com/en/video-cloud/docs/widevine-plugin-brightcove-video-cloud-player-sdk-ios
Try WVUseEncryptedLoopback (set it to 1). It enables AirPlay support by securing the AirPlay stream; 1 enables encrypted loopback, and 0 is the default.
Also, try WVPlayerDrivenAdaptationKey (set it to 1), which switches between Apple native player adaptation (1) and Widevine adaptation (0).
Widevine version: 6.0.0.12792
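For reference, a sketch of how those two flags might be collected before handing them to the Widevine SDK. The key names are taken verbatim from the documentation quoted above; the exact constant names and the initialization call they get passed to depend on the SDK version you ship, so treat this purely as an illustration.

// Key names copied from the Brightcove/Widevine documentation quoted above;
// exact constants may differ per SDK version, so verify against your headers.
let widevineSettings: [String: Any] = [
    "WVUseEncryptedLoopback": 1,       // 1 = secure the AirPlay stream (encrypted loopback); default is 0
    "WVPlayerDrivenAdaptationKey": 1   // 1 = Apple native player adaptation, 0 = Widevine adaptation
]

// Merge `widevineSettings` with your DRM server/portal settings and pass the
// combined dictionary to the Widevine SDK's initialization call.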

Streaming HLS to iOS devices using Windows Azure Media Services

I have been exploring Azure Media Services, specifically with media converted to HLS. I created the HLS content using a process similar to the one outlined in this HLS Walkthrough.
Now that I have my HLS content in Azure, I am hoping to stream it just as you would any m3u8 stream. I have tried the following:
WebView on iPad – works OK, but playback is jumpy and not very smooth
Safari on OS X – does not work at all
VLC Player – does not work at all.
Granted, this is not exhaustive or thorough (yet), but before I continue I wanted to get feedback if anyone has any. I also stumbled across the WAMS Media Player for iOS, which covers the Smooth Streaming player for iOS. Is the expectation here that the Smooth Streaming player developed for iOS is the best way to consume HLS media generated by WAMS?
As I understand it, Safari's support (or lack thereof) for HLS on OS X depends on QuickTime: there are versions of QuickTime that do support HLS (QuickTime Pro), but by default the support is not there.
I suggest you transcode to both Smooth Streaming and HLS. Serve Smooth Streaming via Flash or Silverlight to Windows/Mac clients and HLS via HTML5 to iOS clients.
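Side note: if you want to take the web-view layer out of the equation when testing on iOS, handing the m3u8 directly to AVPlayer on a device is a quick sanity check. A minimal sketch, with the URL as a placeholder for your own streaming locator:

import AVKit
import AVFoundation
import UIKit

// Plays an HLS playlist natively, bypassing any web view.
// Replace the placeholder URL with your own streaming locator.
func presentHLSPlayer(from presenter: UIViewController) {
    guard let url = URL(string: "https://your-account.streaming.mediaservices.windows.net/locator/manifest(format=m3u8-aapl)") else {
        return
    }
    let player = AVPlayer(url: url)
    let playerController = AVPlayerViewController()
    playerController.player = player
    presenter.present(playerController, animated: true) {
        player.play()
    }
}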
