I am trying to use the LFLiveKit SDK to send RTMP streams to a server. I tried to stream the pixel buffer like so:
var Lsession: LFLiveSession = {
    let audioConfiguration = LFLiveAudioConfiguration.defaultConfiguration(for: LFLiveAudioQuality.high)
    let videoConfiguration = LFLiveVideoConfiguration.defaultConfiguration(for: LFLiveVideoQuality.low3)
    let session = LFLiveSession(audioConfiguration: audioConfiguration, videoConfiguration: videoConfiguration)
    return session!
}()

let stream = LFLiveStreamInfo()
stream.url = "rtmp://domain.com:1935/show/testS"
Lsession.pushVideo(frame.capturedImage)
How can I initialize the session with screen capture? Any pointers?
I had to set the captureType in the session initialization, like so:
let session = LFLiveSession(audioConfiguration: audioConfiguration, videoConfiguration: videoConfiguration, captureType: LFLiveCaptureTypeMask.inputMaskVideo)
I need to detect the number of channels and the audio format (interleaved or non-interleaved) from an AVAssetTrack. I tried the following code to detect the number of channels. As can be seen in the code, there are two ways to detect the channel count. I want to know which one is more reliable and correct, or whether neither of them is (irrespective of audio format)?
if let formatDescriptions = track.formatDescriptions as? [CMAudioFormatDescription],
   let audioFormatDesc = formatDescriptions.first,
   let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(audioFormatDesc) {

    // First way to detect number of channels
    numChannels = asbd.pointee.mChannelsPerFrame

    var aclSize: size_t = 0
    var currentChannelLayout: UnsafePointer<AudioChannelLayout>? = nil
    currentChannelLayout = CMAudioFormatDescriptionGetChannelLayout(audioFormatDesc, sizeOut: &aclSize)
    if let currentChannelLayout = currentChannelLayout, aclSize > 0 {
        let channelLayout = currentChannelLayout.pointee
        // Second way of detecting number of channels
        numChannels = AudioChannelLayoutTag_GetNumberOfChannels(channelLayout.mChannelLayoutTag)
    }
}
I also don't know how to get the audio format details (interleaved or non-interleaved). Looking for help with this.
Use the AudioStreamBasicDescription. All audio CMAudioFormatDescriptions have one, while the AudioChannelLayout is optional:
https://developer.apple.com/documentation/coremedia/1489137-cmaudioformatdescriptiongetchann?language=objc
AudioChannelLayouts are optional; this API returns NULL if one doesn’t exist.
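For example, a minimal sketch that reads both pieces of information from the ASBD (track is the AVAssetTrack from the question; note the interleaving flag is only meaningful for linear PCM, so for compressed formats the question of interleaving doesn't really apply):

import AVFoundation
import CoreAudio

if let formatDescriptions = track.formatDescriptions as? [CMAudioFormatDescription],
   let audioFormatDesc = formatDescriptions.first,
   let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(audioFormatDesc)?.pointee {

    // Channel count straight from the ASBD; this field is always present.
    let channelCount = asbd.mChannelsPerFrame

    // Interleaving is only defined for linear PCM.
    if asbd.mFormatID == kAudioFormatLinearPCM {
        let isNonInterleaved = (asbd.mFormatFlags & kAudioFormatFlagIsNonInterleaved) != 0
        print("channels: \(channelCount), non-interleaved: \(isNonInterleaved)")
    } else {
        print("channels: \(channelCount), compressed format ID: \(asbd.mFormatID)")
    }
}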
I need to use ReplayKit (Broadcast Extension UI) to be able to cast content from an iPhone to a TV (Chromecast).
Currently I am using the HaishinKit library. I write the content (CMSampleBuffer) to an HTTPStream and use that URL to cast to the TV, but it doesn't work.
let url = URL.init(string: "abc.m38u")!
let mediaInfoBuilder = GCKMediaInformationBuilder.init(contentURL: url)
mediaInfoBuilder.streamType = GCKMediaStreamType.buffered
mediaInfoBuilder.contentID = mediaURL.absoluteString
mediaInfoBuilder.contentType = mediaURL.mimeType()
mediaInfoBuilder.hlsSegmentFormat = .TS
mediaInfoBuilder.hlsVideoSegmentFormat = .MPEG2_TS
mediaInfoBuilder.streamDuration = .infinity
Where am I going wrong?
Is there any other way to stream content to the Chromecast? Using HTTPStream, the content is delayed by about 5 to 10 seconds.
Thanks.
I am trying to make an iOS app that does some pre-processing on video from the camera and then sends it out over WebRTC. I am doing the pre-processing on each individual frame using the AVCaptureVideoDataOutputSampleBufferDelegate protocol and capturing each frame in the captureOutput method.
Now I need to figure out how to send it out over WebRTC. I am using the Google WebRTC library: https://webrtc.googlesource.com/src/.
There is a class called RTCCameraVideoCapturer [(link)][1] that most iOS example apps using this library seem to use. This class accesses the camera itself, so I won't be able to use it. It uses AVCaptureVideoDataOutputSampleBufferDelegate, and in captureOutput it does this:
RTC_OBJC_TYPE(RTCCVPixelBuffer) *rtcPixelBuffer =
    [[RTC_OBJC_TYPE(RTCCVPixelBuffer) alloc] initWithPixelBuffer:pixelBuffer];
int64_t timeStampNs =
    CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * kNanosecondsPerSecond;
RTC_OBJC_TYPE(RTCVideoFrame) *videoFrame =
    [[RTC_OBJC_TYPE(RTCVideoFrame) alloc] initWithBuffer:rtcPixelBuffer
                                                rotation:_rotation
                                             timeStampNs:timeStampNs];
[self.delegate capturer:self didCaptureVideoFrame:videoFrame];
`[self.delegate capturer:self didCaptureVideoFrame:videoFrame]` seems to be the call that feeds a single frame into WebRTC.
How can I write Swift code that will allow me to feed frames into WebRTC one at a time, similar to how it is done in the `RTCCameraVideoCapturer` class?
[1]: https://webrtc.googlesource.com/src/+/refs/heads/master/sdk/objc/components/capturer/RTCCameraVideoCapturer.m
You just need to create an instance of RTCVideoCapturer (which is just a holder for the delegate, localVideoTrack.source) and call the delegate's capturer(_:didCapture:) method with a frame whenever you have a pixel buffer you want to push.
Here is some sample code.
var capturer: RTCVideoCapturer?
let rtcQueue = DispatchQueue(label: "WebRTC")

func appClient(_ client: ARDAppClient!, didReceiveLocalVideoTrack localVideoTrack: RTCVideoTrack!) {
    // The capturer is just a holder; the track's source acts as its delegate.
    capturer = RTCVideoCapturer(delegate: localVideoTrack.source)
}

func render(pixelBuffer: CVPixelBuffer, timesample: CMTime) {
    let buffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
    self.rtcQueue.async {
        let frame = RTCVideoFrame(buffer: buffer,
                                  rotation: ._0,
                                  timeStampNs: Int64(CMTimeGetSeconds(timesample) * Double(NSEC_PER_SEC)))
        // Feed the frame into WebRTC through the track source.
        self.capturer?.delegate?.capturer(self.capturer!, didCapture: frame)
    }
}
I am using the Red5 iOS SDK and their CustomVideoSource class. I can successfully publish the stream to the server, but it shows as black and white, not the actual colored stream.
If anyone has faced this issue, please help me find a solution for it.
Please find the code sample below:
let contextImage = McamImage.shared.image
let image: CGImage? = contextImage.cgImage
let dataProvider: CGDataProvider? = image?.dataProvider
let data: CFData? = dataProvider?.data

if data != nil {
    let baseAddress = CFDataGetBytePtr(data!)
    //contextImage = nil
    /*
     * We own the copied CFData which will back the CVPixelBuffer, thus the data's lifetime is bound to the buffer.
     * We will use a CVPixelBufferReleaseBytesCallback callback in order to release the CFData when the buffer dies.
     */
    let unmanagedData = Unmanaged<CFData>.passRetained(data!)
    var pixelBuffer: CVPixelBuffer?
    var result = CVPixelBufferCreateWithBytes(nil,
                                              (image?.width)!,
                                              (image?.height)!,
                                              kCVPixelFormatType_24RGB,
                                              UnsafeMutableRawPointer(mutating: baseAddress!),
                                              (image?.bytesPerRow)!,
                                              { releaseContext, baseAddress in
                                                  let contextData = Unmanaged<CFData>.fromOpaque(releaseContext!)
                                                  contextData.release()
                                              },
                                              unmanagedData.toOpaque(),
                                              nil,
                                              &pixelBuffer)
Thanks!
Is it possible to send a cookie with the AVPlayer URL? I have a livestream which is AES encrypted and needs a key to decrypt. It hits the server, and the server returns the key only if a session exists. So I want to send the PHP session ID along with the URL to AVPlayer.
Is it possible? I saw AVURLAssetHTTPHeaderFieldsKey. I don't know if it is what I have to set. If so, how do I do it?
This is how you can set signed cookies (headers) in an AVPlayer URL request:
fileprivate func setPlayRemoteUrl() {
    if playUrl.isEmpty { return }

    let cookiesArray = HTTPCookieStorage.shared.cookies!
    let values = HTTPCookie.requestHeaderFields(with: cookiesArray)
    let cookieArrayOptions = ["AVURLAssetHTTPHeaderFieldsKey": values]
    let assets = AVURLAsset(url: videoURL! as URL, options: cookieArrayOptions)
    let item = AVPlayerItem(asset: assets)

    player = AVPlayer(playerItem: item)
    playerLayer = AVPlayerLayer(player: player)
    playerLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    playerLayer?.contentsScale = UIScreen.main.scale
    layer.insertSublayer(playerLayer!, at: 0)
}
In your case, FPS (FairPlay Streaming) by Apple will work. FairPlay Streaming is DRM (Digital Rights Management) support where you receive the content key along with your content data, and you need to pass it through a delegate which supports AES-128 encryption. Please refer to the link I shared below:
https://developer.apple.com/streaming/fps/
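For what it's worth, both FairPlay and a custom AES-128 key exchange are commonly hung off the same hook, AVAssetResourceLoaderDelegate. A minimal sketch of that mechanism follows; the "skd" scheme check and the fetchKey(for:completion:) helper are assumptions for illustration, not part of the question:

import AVFoundation

final class KeyLoader: NSObject, AVAssetResourceLoaderDelegate {

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        // Only intercept key requests; "skd" is the custom scheme assumed for this example.
        guard let url = loadingRequest.request.url, url.scheme == "skd" else { return false }

        // Hypothetical helper: fetch the key from your server, sending whatever session credential it requires.
        fetchKey(for: url) { keyData in
            loadingRequest.dataRequest?.respond(with: keyData)
            loadingRequest.finishLoading()
        }
        return true
    }

    // Placeholder; in a real app this would be a URLSession call to your key server.
    private func fetchKey(for url: URL, completion: @escaping (Data) -> Void) {
        completion(Data())
    }
}

// Attach the delegate to the asset before creating the AVPlayerItem:
// asset.resourceLoader.setDelegate(keyLoader, queue: DispatchQueue(label: "key.loader"))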
I haven't really tried it myself, but it seems there's an API that lets you create an AVURLAsset with options. One of the possible option keys is AVURLAssetHTTPCookiesKey. You might want to look into that.
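A rough sketch of what that could look like, assuming the PHP session ID value is already known; streamURL, the domain, and the cookie value are placeholders:

import AVFoundation

// Hypothetical session cookie; domain, path, and value are placeholders.
let sessionCookie = HTTPCookie(properties: [
    .domain: "domain.com",
    .path: "/",
    .name: "PHPSESSID",
    .value: "your-session-id"
])!

let asset = AVURLAsset(url: streamURL,
                       options: [AVURLAssetHTTPCookiesKey: [sessionCookie]])
let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))

Unlike the "AVURLAssetHTTPHeaderFieldsKey" string used in the earlier answer, AVURLAssetHTTPCookiesKey is a documented AVFoundation constant.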