Unable to play RTMP live video stream in iOS

I have downloaded the sample code from this link:
https://github.com/yixia/Vitamio-iOS
I tried to play an RTMP video stream, but it does not play; it gives this error:
NAL 1RRE &&&& VMediaPlayer Error: (null)
I used this option:
keys[0] = @"-rtmp_live";
vals[0] = @"-1";
[mMPlayer setOptionsWithKeys:keys withValues:vals];
The video still does not play.
Does anybody have an idea why?
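For reference, a minimal sketch of how that option call is usually wired up, assuming mMPlayer is the VMediaPlayer instance from the Vitamio-iOS sample and that keys/vals are NSMutableArray instances as in that sample (both assumptions, since only the three lines above are given):
// Sketch only: the option name/value and the setOptionsWithKeys: call are taken
// from the snippet above; the surrounding declarations are assumed.
NSMutableArray *keys = [NSMutableArray array];
NSMutableArray *vals = [NSMutableArray array];
keys[0] = @"-rtmp_live";   // hint that the RTMP source is a live stream
vals[0] = @"-1";           // value used in the sample for live playback
[mMPlayer setOptionsWithKeys:keys withValues:vals];  // set options before preparing/playing the URL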

Related

Why won't this encrypted HLS video play on iOS (but works on Windows Chrome via hls.js library)?

The original video is "Sample Video 5" from https://www.appsloveworld.com/download-sample-mp4-video-mp4-test-videos/.
My /home/vagrant/Code/example/public/hls_hls.keyInfo is:
https://example.com/hls.key
/home/vagrant/Code/example/public/hls_hls.key
467216aae8a26fb699080812628031955e304a66e9e4480f9b70d31d8fe94e9a
My /home/vagrant/Code/example/public/hls_hls.key was generated using PHP: hex2bin('467216aae8a26fb699080812628031955e304a66e9e4480f9b70d31d8fe94e9a')
The ffmpeg command for encrypting the video as an HLS playlist with "ts" segment files:
'/usr/bin/ffmpeg' '-y' '-i' 'storage/app/sample_media2/2020-02-27/Sample_Videos_5.mp4'
'-c:v' 'libx264' '-s:v' '1920x1080' '-crf' '20' '-sc_threshold' '0' '-g' '48'
'-keyint_min' '48' '-hls_list_size' '0'
'-hls_time' '10' '-hls_allow_cache' '0' '-b:v' '4889k' '-maxrate' '5866k'
'-hls_segment_type' 'mpegts' '-hls_fmp4_init_filename' 'output_init.mp4'
'-hls_segment_filename' 'storage/app/public/test/output_1080p_%04d.ts'
'-hls_key_info_file' '/home/vagrant/Code/example/public/hls_hls.keyInfo'
'-strict' '-2' '-threads' '12' 'storage/app/public/test/output_1080p.m3u8'
Then, I know from https://caniuse.com/#search=hls that Windows Chrome won't be able to play the HLS video without a library, so I use https://github.com/video-dev/hls.js/, and Windows Chrome successfully plays the encrypted video!
However, iOS Safari is unable to play it (with or without the hls.js library).
On iOS Safari, when I try to play the video, I see just a quick glimpse (less than a second) where the screen shows 0:15, so it must be reading and decrypting enough to know the correct duration of the video.
So, to debug, I log events:
const nativeHlsEvents = ['play', 'playing', 'abort', 'error', 'canplaythrough', 'waiting', 'loadeddata', 'loadstart', 'progress', 'timeupdate', 'volumechange'];
$.each(nativeHlsEvents, function (i, eventType) {
    video.addEventListener(eventType, (event) => { // https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
        console.log(eventType, event);
        if (eventType === 'error') {
            console.error(video.error); // https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/error
        }
    });
});
I see in the console log:
loadstart, {"isTrusted":true}
progress, {"isTrusted":true}
play, {"isTrusted":true}
waiting, {"isTrusted":true}
error, {"isTrusted":true}
video.error, {}
I don't know how to find more details about the error.
Note that even though Windows Chrome successfully plays the video, it too shows warnings in the console log:
{"type":"mediaError","details":"fragParsingError","fatal":false,"reason":"TS packet did not start with 0x47","frag":{"...
{"type":"mediaError","details":"fragParsingError","fatal":false,"reason":"no audio/video samples found","frag":{...
Where is my problem?
I would need to buy a newer iPhone.
I see at https://en.wikipedia.org/wiki/IPhone_6#Software and https://support.apple.com/guide/iphone/supported-iphone-models-iphe3fa5df43/ios that “6s” is the oldest hardware that iOS 13 supports, and https://caniuse.com/#search=hls says HLS needs >=13.2.
iPhone devices do not support Media Source Extensions.
You should check the user agent to detect whether the client is an iPhone.
If it is, you should use the iPhone's native video player to play the HLS video (and other video types).

Convert RTMP to RTP in iOS application for sending it to Kurento Media Server

I am working on implementing screen sharing (ReplayKit) in an iOS app using Kurento Media Server. I get a CMSampleBuffer, which follows the RTMP protocol, but Kurento doesn't support RTMP; it does support RTP. Is there a way to convert from RTMP to RTP? I read about ffmpeg, but it seems it would need to be implemented on the server side, which would require a lot of change to the current flow, something like below:
[Browser] -> RTMP -> [Node_Media_Server(srs)] -> RTMP ->
[FFmpeg] -> RtpEndpoint -> [Kurento] -> WebrtcEndpoint -> [Browser]
Will this flow be efficient enough?
Is there a way to convert it on the client side, i.e. in the iOS application?
Using WebRTC to send an iOS device's screen capture using ReplayKit: the answer to that question may point you in the right direction. Kurento supports WebRTC. You can take the pixel buffer from the CMSampleBuffer, turn it into an RTCVideoFrame, pipe that into a local video source, and stream it up using WebRTC.
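A minimal sketch of that conversion, assuming the WebRTC.framework Objective-C classes (RTCCVPixelBuffer, RTCVideoFrame, RTCVideoSource) and assuming self.videoSource and self.videoCapturer were created once when the broadcast started:
// Sketch only: runs in the ReplayKit broadcast extension's sample handler.
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer
                   withType:(RPSampleBufferType)sampleBufferType {
    if (sampleBufferType != RPSampleBufferTypeVideo) {
        return;  // audio handling omitted in this sketch
    }
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) {
        return;
    }
    // Wrap the pixel buffer and hand it to the (pre-created) RTCVideoSource.
    RTCCVPixelBuffer *rtcBuffer = [[RTCCVPixelBuffer alloc] initWithPixelBuffer:pixelBuffer];
    int64_t timeStampNs = (int64_t)(CMTimeGetSeconds(
        CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * NSEC_PER_SEC);
    RTCVideoFrame *frame = [[RTCVideoFrame alloc] initWithBuffer:rtcBuffer
                                                        rotation:RTCVideoRotation_0
                                                     timeStampNs:timeStampNs];
    [self.videoSource capturer:self.videoCapturer didCaptureVideoFrame:frame];
}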
The problem was that the videoCapturer, VideoSource, and video track were being initialized again and again in processSampleBuffer.
We need to create the VideoCapturer, VideoSource, MediaStream, and VideoTrack only once, in broadcastStartedWithSetupInfo.
Now I am able to see video packets in Wireshark, but a green screen appears on the receiver side. I think the issue is with the media configuration, which is below.
NBMMediaConfiguration *config = [[NBMMediaConfiguration alloc] init];
config.rendererType = NBMRendererTypeOpenGLES;
config.audioBandwidth = 0;
config.videoBandwidth = 0;
config.audioCodec = NBMAudioCodecOpus;
config.videoCodec = NBMVideoCodecVP8;
NBMVideoFormat format;
format.dimensions = (CMVideoDimensions){720, 480};
format.frameRate = 30;
format.pixelFormat = NBMPixelFormat420f;
config.receiverVideoFormat = format;
config.cameraPosition = NBMCameraPositionAny;
Please suggest whether this seems correct.
SRS 4.0 supports converting RTMP to WebRTC, and vice versa.
For usage, please follow SRS #307 and RTMP to RTC.
The stream flow is as below; it's quite simple:
FFmpeg/OBS/CMSampleBuffer --RTMP--> SRS --WebRTC--> Browser
Note that RTMP does not support Opus, while WebRTC uses Opus as its default audio codec, so SRS transcodes AAC to Opus, which hurts performance.
However, it seems there is no audio in your use scenario, so that should be OK.

iOS GoogleVR - 360 degree video playback issue

My app has 360-degree video playback, and I am using GoogleVR's GVRRendererView class for it. I am trying to play a high-quality 360-degree video from a server, but video streaming is very slow and I get the error message below in the Xcode console.
<AppName> [Symptoms] {
"transportType" : "HTTP Progressive Download",
"mediaType" : "HTTP Progressive Download",
"BundleID" : "AppID",
"name" : "MEDIA_PLAYBACK_STALL",
"interfaceType" : "WiredEthernet"
}
How can I resolve it?

Having trouble transferring audio data from iOS to another device which uses PyAudio to play it

I am using Novocaine to get mic audio data and PyAudio on another computer to play the audio.
I am using WAMP to send over the data; so far the data sends just fine.
But I am having trouble getting the mic data received from the iOS device to play on the Python device.
Novocaine returns an array of floats when capturing audio, so I figured I would do this in order to transfer the data properly:
NSMutableData *outBuffer = [NSMutableData dataWithCapacity:0];
[outBuffer appendBytes:&newAudio length:sizeof(float)];
When playing with the data on the Python device I get a length of 4; however, when I try to print it out there is always a problem: a UTF formatting issue.
Basic of playing audio with PyAudio
I am unsure how I should be translating this float audio data.
Edit:
Novocaine *audioManager = [Novocaine audioManager];
[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
// Now you're getting audio from the microphone every 20 milliseconds or so. How's that for easy?
// Audio comes in interleaved, so,
// if numChannels = 2, newAudio[0] is channel 1, newAudio[1] is channel 2, newAudio[2] is channel 1, etc.
}];
[audioManager play];
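For what it's worth, a minimal sketch of packing the whole interleaved buffer rather than a single float. newAudio, numSamples, and numChannels are the arguments of the Novocaine input block above, and the raw little-endian float32 wire format is an assumption the Python side would have to match:
// Sketch only: copy every interleaved float32 sample into the outgoing buffer,
// instead of sizeof(float) bytes taken from the address of the pointer.
NSUInteger byteCount = numSamples * numChannels * sizeof(float);
NSMutableData *outBuffer = [NSMutableData dataWithCapacity:byteCount];
[outBuffer appendBytes:newAudio length:byteCount];
// On the Python side the payload would then be byteCount / 4 float32 values
// (e.g. numpy.frombuffer(data, dtype='<f4')) handed to the PyAudio stream.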

Why am I receiving only a few audio samples per second when using AVAssetReader on iOS?

I'm coding something that:
records video+audio with the built-in camera and mic (AVCaptureSession),
does some stuff with the video and audio sample buffers in real time,
saves the result into a local .mp4 file using AVAssetWriter,
then (later) reads the file (video+audio) using AVAssetReader,
does some other stuff with the sample buffers (for now I do nothing),
and writes the result into a final video file using AVAssetWriter.
Everything works well but I have an issue with the audio format:
When I capture the audio samples from the capture session, I can log about 44 samples/sec, which seems to be normal.
When I read the .mp4 file, I only log about 3-5 audio samples/sec!
But the 2 files look and sound exactly the same (in QuickTime).
I didn't set any audio settings for the Capture Session (as Apple doesn't allow it).
I configured the outputSettings of the two audio AVAssetWriterInputs as follows:
NSDictionary *settings = @{
    AVFormatIDKey: @(kAudioFormatLinearPCM),
    AVNumberOfChannelsKey: @(2),
    AVSampleRateKey: @(44100.),
    AVLinearPCMBitDepthKey: @(16),
    AVLinearPCMIsNonInterleaved: @(NO),
    AVLinearPCMIsFloatKey: @(NO),
    AVLinearPCMIsBigEndianKey: @(NO)
};
I pass nil to the outputSettings of the audio AVAssetReaderTrackOutput in order to receive samples as stored in the track (according to the doc).
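Concretely, those two pieces would look something like the sketch below; audioTrack, audioInput, and audioOutput are hypothetical names for objects created elsewhere in the pipeline:
// Sketch, assuming "audioTrack" is the source AVAssetTrack and "settings" is the
// dictionary shown above.
AVAssetWriterInput *audioInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                       outputSettings:settings];
AVAssetReaderTrackOutput *audioOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                               outputSettings:nil];  // nil = samples as stored in the track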
So, the sample rate should be 44100 Hz from the capture session to the final file. Why am I reading only a few audio samples? And why does it work anyway? I have the intuition that it will not work well when I have to work with the samples (I need to update their timestamps, for example).
I tried several other settings (such as kAudioFormatMPEG4AAC), but AVAssetReader can't read compressed audio formats.
Thanks for your help :)
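As a debugging aid, a hedged sketch that counts the audio frames inside each CMSampleBuffer (using the hypothetical audioOutput from the sketch above). Each buffer returned by AVAssetReader typically carries several thousand frames, so a handful of buffers per second can still add up to 44100 frames per second:
// Sketch: count the audio frames inside each CMSampleBuffer instead of counting
// the buffers themselves.
CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [audioOutput copyNextSampleBuffer]) != NULL) {
    CMItemCount frames = CMSampleBufferGetNumSamples(sampleBuffer);
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    // At 44100 Hz, a buffer holding e.g. 8192 frames means only ~5 buffers per second.
    NSLog(@"buffer @ %.3f s holds %ld audio frames", CMTimeGetSeconds(pts), (long)frames);
    CFRelease(sampleBuffer);
}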
