Broadcasting video from an iPhone to a Wowza server using AVCaptureSession - iOS

I am developing a live broadcasting feature. I have built a custom camera that shoots video using AVCaptureSession, and we have a Wowza server for broadcasting.
My question is how to encode the video coming from AVCaptureFileOutputRecordingDelegate / AVCaptureVideoDataOutputSampleBufferDelegate and send it to the server. I found many libraries, but they are not suitable for our application because they provide their own UI. Can anyone suggest another library, or a step-by-step integration?

Are you using the AVAssetWriterInput
init(mediaType:outputSettings:sourceFormatHint:) initializer? It takes a dictionary with the desired settings. From the docs: "Specify a dictionary containing the settings used for encoding the media appended to the output. You may pass nil for this parameter if you do not want the appended samples to be re-encoded."
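As a rough sketch (Swift, assuming iOS 11+ and that your AVCaptureVideoDataOutputSampleBufferDelegate is already delivering CMSampleBuffers), an AVAssetWriterInput configured to re-encode appended video as H.264 might look like this; the dimensions and bitrate are purely illustrative:

```swift
import AVFoundation

// Illustrative H.264 settings; tune width/height/bitrate for your use case.
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 720,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 2_000_000
    ]
]

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
videoInput.expectsMediaDataInRealTime = true  // important for live capture sources

// In captureOutput(_:didOutput:from:), append frames while the input is ready:
// if videoInput.isReadyForMoreMediaData { videoInput.append(sampleBuffer) }
```

Keep in mind that AVAssetWriter itself writes to a file (or, on recent iOS versions, can hand you fragmented segments via its delegate), so getting the encoded data to Wowza still needs a separate RTMP or HLS publishing layer on top.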

Related

Architecture for a web app to add overlays to users' YouTube live stream video?

I am trying to build a web app that lets users easily add text (as open captions) and other assets from my app as real-time overlays on their YouTube live stream video.
They will use their camera to record their video and select in my app which text should be added to the video.
Then the video will be sent to YouTube Live through their API.
Here are my questions:
First of all, I was wondering whether mixing the video and subtitles and sending the result to YouTube's RTMP URL can be done from the client side, so that it stays simple and lightweight.
Second, should I encode the output being sent to YouTube? Can this be done from the browser too?
I'm only seeing a few Node.js frameworks, and even those aren't very mature (or is WebCodecs meant for this purpose?). Is a web app a poor choice for this task?
Lastly, if I do need a server to process the video, where should the encoding happen (on the user's machine, on the server, or both)? Is my server likely to be the bottleneck compared with YouTube's infrastructure, since video files are huge and my server is limited?
I am new to video streaming, so please excuse my lack of understanding of the subject. Also, if there are any good resources for my problem, please share them with me.
First of all, I was wondering whether mixing the video and subtitles and sending the result to YouTube's RTMP URL can be done from the client side, so that it stays simple and lightweight.
You can do the video compositing, audio mixing, and whatnot, but browsers don't support RTMP. To get the data to an RTMP server, you need to send it to a server of your own, where it is proxied off to the final URL.
They will use their camera to record their video and select in my app which text should be added to the video.
Yeah, that's no problem at all. Draw everything to a canvas every frame.
Second, should I encode the output being sent to YouTube?
Yes, you must. Check out the MediaRecorder API.
Lastly, if I do need a server to process the video, where should the encoding happen (on the user's machine, on the server, or both)?
The video has to be encoded client-side to get to the server in the first place. The server can then hopefully just repackage it into FLV and send it along. If the browser doesn't support H.264 in its MediaRecorder API, then you'll end up with an intermediate codec like VP8 and you'll have to transcode server-side.
A few years ago, I wrote a tutorial on how to do all of these steps: https://github.com/fbsamples/Canvas-Streaming-Example. Note that the tutorial is in the context of Facebook, but it should teach you the concepts.

Vuforia: display a video that was downloaded from the internet

I need to develop an app that uses Vuforia cloud recognition of an object and then displays a video on top of that object. The video file needs to be downloaded from the internet from a separate web service, using the recognized object's identifier. I was looking at the Vuforia samples and was able to configure Cloud Recognition to use the correct target manager database; objects are recognized correctly. But I don't know how to make it so that, after the object is recognized, I display a loader view and then, when the video is ready to play, display the video. I don't know where and what to update in the code. I only found that a local dataset can be used, but I can't use a local dataset, because the videos I want to display are supposed to be downloaded from the internet after detection. Can someone point me to where in the Vuforia examples I can change what is shown on the target?
Vuforia Cloud Recognition is only for the markers.
As per your requirements:
1. You need to use iOS code to download the video.
2. After that, you have to show the video using AVPlayer (see the sketch below).
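A minimal sketch of those two steps in Swift, assuming you already have the video URL resolved from your web service after the cloud target is recognized; downloadAndPlay(from:on:) is a hypothetical helper, not a Vuforia or AVFoundation API, and presenting the video full screen is just for illustration (rendering it onto the recognized target itself is a separate step in the Vuforia sample code):

```swift
import UIKit
import AVKit
import AVFoundation

// Hypothetical helper: download the video resolved from your web service
// after the cloud target is recognized, then play it with AVPlayer.
func downloadAndPlay(from remoteURL: URL, on presenter: UIViewController) {
    let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else { return }

        // Move the file out of the temporary download location before it is cleaned up.
        let destination = FileManager.default.temporaryDirectory
            .appendingPathComponent("target-video.mp4")
        try? FileManager.default.removeItem(at: destination)
        try? FileManager.default.moveItem(at: tempURL, to: destination)

        DispatchQueue.main.async {
            // Replace your loader view with the player once the file is ready.
            let controller = AVPlayerViewController()
            controller.player = AVPlayer(url: destination)
            presenter.present(controller, animated: true) {
                controller.player?.play()
            }
        }
    }
    task.resume()
}
```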

What is the major role of a streaming media server?

I am new to live streaming of data. I have been exploring the web to learn how to live stream video. I am an iOS developer and I want to develop an app that streams video.
I am clear about the fundamentals of live video streaming. I have learned that I will need a streaming media server which will feed the stream to the viewer. I have also learned that the viewer has to have a player which decodes the data and synchronizes the audio/video streams.
Now, Wowza is a kind of streaming media server which is recommended. But I have the following questions:
(1) Why a media server? Why can't we have our own media server? What does a media server actually do that makes its role necessary?
(2) In my app, I will have to integrate a library for encoding and feed the stream to a streaming server like Wowza. But how would it be fed to the streaming server?
(3) How will my server communicate with a streaming server like Wowza?
(4) How will Wowza feed the stream to the receiving side, i.e. a user with an iPhone who needs to see the live stream?
(5) What should be on the receiving side? What will decode the stream and play it in AVPlayer?
I need to develop a streaming app with good quality, so I would rather first understand the flow of data and then start.
It would be great if someone could give a graphical representation of the data flow.
Thanks a lot in advance!
Let me quickly add my understanding to your questions:
1a. Why a media server?
You could write your own software for distributing the stream data to all the players as well. But in that case you would need to implement various transport protocols, and you would end up implementing a fairly big piece of software: your home-grown media server.
1b. What does a media server actually do that makes its role necessary?
One way to see the role of the media server is that it receives the live stream from a source and handles the distribution of that stream to potentially very many players. This usually involves taking the data out of the source transport protocol and repackaging it into one or more other container formats or transport protocols that the clients favour. Optionally, the media server can change the way the video or the audio is encoded (transcoding), or produce streams at different resolutions and qualities and provide the players with a list of the available qualities in the form of a manifest file (e.g. an m3u8 or SMIL file) so they can do so-called adaptive streaming.
Another typical use case of media servers is serving non-live video files to players from disk, as well as recording live streams, and so on. If you look at the feature list of popular media servers, you'll see that they really do many things, so practically this is something you probably want to get out of the box rather than implement yourself.
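As an illustration of such a manifest, a tiny HLS master playlist might look like this (the bitrates, resolutions, and paths are made up for the example; each entry points the player at one quality variant):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/playlist.m3u8
```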
In my app, I will have to integrate a library for encoding and feed the stream to a streaming server like Wowza. But how would it be fed to the streaming server?
You need to encode the video and audio with a particular codec (such as H.264 for video and AAC for audio), then choose a suitable container format to put these streams into (e.g. MPEG-TS), and then choose a transport protocol to push the stream to the server (e.g. RTMP). It's best to google for tutorials to see what this looks like in code.
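As a rough sketch of the capture side in Swift, assuming the actual H.264/AAC encoding and RTMP publishing is done by a separate library (StreamCapture is a hypothetical class name, and the delegate callback is where that library's encoder would be fed):

```swift
import AVFoundation

// Hypothetical capture wrapper: camera in, raw CMSampleBuffers out.
// Audio input is omitted for brevity.
final class StreamCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let outputQueue = DispatchQueue(label: "capture.output")

    func start() throws {
        session.beginConfiguration()
        session.sessionPreset = .hd1280x720

        // Camera input.
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        // Raw frames out; these CMSampleBuffers are what you hand to the encoder.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: outputQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Feed sampleBuffer to your H.264/AAC encoder and RTMP publisher here.
    }
}
```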
How will my server communicate with a streaming server like Wowza?
The contract is basically the transport protocol; one example is using the RTMP protocol to connect to Wowza and publish the stream to it. These protocols cover all the technical details.
How will Wowza feed the stream to the receiving side, i.e. a user with an iPhone who needs to see the live stream?
The player software will initiate the communication with Wowza. This is again protocol dependent, but in case you are using HLS, the player will use the HTTP protocol to find out the URLs of the consecutive video chunks that it will progressively download and display to the user.
What should be on the receiving side? What will decode the stream and play it in AVPlayer?
It's not clear whether the app you are developing is the broadcaster side or the player side. But generally, on the player side you need to find a library that is able to pull the stream from the media server with the protocol/transport/codec you are using. I am not familiar with this part on iOS; I only have experience with players embedded in websites.
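For what it's worth, if the media server publishes HLS, the player side on iOS can be as simple as handing the playlist URL to AVPlayer, which decodes and displays the stream natively; the URL below is a placeholder:

```swift
import UIKit
import AVKit
import AVFoundation

// Placeholder HLS URL; point this at the playlist your media server exposes.
func playLiveStream(on presenter: UIViewController) {
    guard let hlsURL = URL(string: "https://example.com/live/stream/playlist.m3u8") else { return }

    let controller = AVPlayerViewController()
    controller.player = AVPlayer(url: hlsURL)  // AVPlayer handles HLS natively
    presenter.present(controller, animated: true) {
        controller.player?.play()
    }
}
```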
I am not going to draw this, but imagine three boxes connected with arrows, and that's the data flow: from the encoder to the streaming server and finally to the player. That's it, I guess :-)

How to display RTSP from IP Camera/CCTV in iOS

There is obviously a way to do this, because so many applications are already doing it: NetCamViewer and iCamviewer, to name just two.
I have searched and searched, but I'm not finding anything of value that gives a hint as to how this is done. I'm reaching out hoping that someone will give me a clue.
I'm trying to connect to a video security camera (Y-CAM), which supports the RTSP protocol, and display the video from my iPhone/iPad application. The camera has an IP address and I can view the video from a web browser and from QuickTime running on my Mac. The problem is that RTSP is not supported on iOS, so even trying to connect using Safari on an iPad doesn't work.
I've read that some are trying to use live555, but I haven't seen an article that describes whether it has been done successfully and how.
An alternative is to capture the RTSP stream on a server, convert it to an HTTP Live stream and then connect to the HTTP Live stream from iOS. Unfortunately, this hasn't proved as easy as it sounds.
I'd prefer to go directly to the camera like the other applications I've seen do; the RTSP-to-HTTP-Live-Streaming conversion is a fallback if I have to use it.
Any hints are greatly appreciated. Thanks!
That alternative (capturing the RTSP stream on a server, converting it to an HTTP Live stream, and then connecting to the HTTP Live stream from iOS) is wrong :) or at least not necessary.
You should use the FFmpeg library, as it can connect to almost any streaming server (supporting RTSP, MMS, TCP, UDP, RTMP, ...) and then draw the pictures to the screen (for drawing you can use OpenGL ES; UIImage also works).
First of all, use avformat_open_input to connect to your IP address.
Then use avcodec_find_decoder and avcodec_open2 to find the codecs and open them (you should call them for both audio and video).
Then, in a while loop, read packets from the server using the av_read_frame method.
When you get a frame, if it is audio, send it to an AudioUnit or AudioQueue;
if it is video, convert it from YUV to RGB format using the sws_scale method and draw the picture to the screen.
That's all.
Also look at this wrapper (http://www.videostreamsdk.com); it's written on top of the FFmpeg library and supports iOS.
You really need to search Stack Overflow before posting; this question has been asked many times. Yes, live555 sort of works, and some of us have gotten it to work.
There are other players too, including ours: http://www.streammore.tv/
You can find an open-source FFmpeg decoder for iOS (and some samples) on GitHub: https://github.com/mooncatventures-group
A sample use of this library: http://sol3.typepad.com/exotic_particles/
There are two general technologies for displaying RTSP video in iOS Safari:
RTSP / HLS (H.264 + AAC)
RTSP / WebSocket (H.264 + AAC ==> MPEG + G.711, or H.264 + ?)
For HLS you can consider the Wowza server.
For WebSocket playback in iOS Safari you can use the WCS4 server.
The main idea behind WebSocket playback is direct HTML5 rendering to an HTML page Canvas element and an audio context. In the case of MPEG playback, video decoding is done on the iOS Safari side using plain JavaScript.
Another option is to install a WebRTC plugin with getUserMedia support and play the stream via WebRTC. Either way, you will need a server-side RTSP-to-WebRTC transcoder in that case.

Capture video on iOS device and live stream it to a server (or another mobile)

I want to be able to record footage using my iOS device and stream it directly to a server.
There are quite a few articles on S.O. that talk about this, but I'm not sure any of them have answered the question very well.
Should I be using HTTP Live Streaming, or is this just for sending data to an iPhone?
Should I be using AVCaptureSession to grab the video (a segment at a time?), sending each segment to the server?
Should I be using AVCaptureVideoDataOutput and ffmpeg for streaming?
I'm a little lost with all this, so any sample code or docs or links would be really appreciated.
Thanks for your help guys.
Duncan
You have to choose a network protocol for that purpose and find an appropriate media server to receive and process the stream. If the RTMP format is OK for your project, check out the angl library, which supports RTMP streaming from iOS. Currently it's compatible with iOS 6 and 7.

Resources