I need to be able to record an incoming video call to a file. The recording must be done in the desktop application, built with Electron. I'm using OpenVidu as the streaming platform. Is there any way to do that?
@Vasniktel Technically it could be possible to record the video client-side, as there are a number of WebRTC examples that record locally on the client; however, this is not native to OpenVidu. Recording in Electron, on the other hand, is covered here:
github.com/hokein/electron-screen-recorder
tutorialspoint.com/electron/… You could integrate recording separately, alongside your OpenVidu app.
The main difference here is that you want to record an incoming call. While you likely won't be able to just write the incoming WebRTC data to disk, you should be able to record the area of the app (the canvas) where the video player is rendered. You will be re-encoding the already-decoded, rendered video stream, but it shouldn't be too much of a hit performance-wise.
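For illustration, a minimal sketch of that approach in an Electron renderer, assuming a `<video id="remote-video">` element that the OpenVidu subscriber renders into, and that Node integration is enabled so `fs` is available (the element ID and output path are placeholders):

```typescript
// Sketch only: record the decoded remote video by re-capturing the element
// it is rendered into; the re-encoding happens inside MediaRecorder.
import * as fs from 'fs';

const video = document.getElementById('remote-video') as HTMLVideoElement;

// captureStream() re-captures the decoded frames (and audio tracks) as a
// new MediaStream. It isn't in the TS DOM typings yet, hence the cast.
const stream = (video as any).captureStream() as MediaStream;

const chunks: Blob[] = [];
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8,opus' });

recorder.ondataavailable = (e) => {
  if (e.data.size > 0) chunks.push(e.data);
};

recorder.onstop = async () => {
  // Concatenate the chunks and write them out as a single .webm file.
  const buffer = Buffer.from(await new Blob(chunks).arrayBuffer());
  fs.writeFileSync('incoming-call.webm', buffer);
};

recorder.start(1000); // emit a chunk roughly every second
// ...call recorder.stop() when the call ends.
```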
I am trying to build a web app that lets users easily add text (as open captions) and other assets as overlays, in real time, to their YouTube live stream video.
They will use their camera to record their video, and select from my app which text should be added to the video.
Then, the video will be sent to YouTube Live through their API.
Here are my questions:
First of all, I was wondering if mixing video + subtitles and sending the result to YouTube's RTMP URL can be done from the client side, so it stays simple and lightweight.
Second, should I encode the output being sent to Youtube? Can this be done from the browser too?
I'm only seeing a few Node.js frameworks, and even those aren't very mature (or is WebCodecs meant for this purpose?). Is a web app a poor choice for this task?
Lastly, if I do need a server to process the video, where should the encoding happen: on the user's machine, on the server, or both? Is my server most likely going to be the bottleneck compared with YouTube's infrastructure, given that video files are huge and my server is limited?
I am new to video streaming, so please excuse my lack of understanding of the subject. Also, if there's any good resource for my problem, please share them with me.
First of all, I was wondering if mixing video + subtitles and sending the result to YouTube's RTMP URL can be done from the client side, so it stays simple and lightweight.
You can do the video compositing, audio mixing, and whatnot, but browsers don't support RTMP. To get the data to an RTMP server, you need to send it to your own server, where it is proxied off to the final RTMP URL.
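To make the proxying concrete, here is a hedged sketch of a minimal relay server (Node with the `ws` package), assuming the browser sends MediaRecorder WebM chunks over a WebSocket; the YouTube ingest URL and stream key are placeholders:

```typescript
// Sketch: receive WebM chunks over a WebSocket and pipe them into a local
// ffmpeg process that repackages/transcodes and pushes RTMP to YouTube.
import { WebSocketServer } from 'ws';
import { spawn } from 'child_process';

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  const ffmpeg = spawn('ffmpeg', [
    '-i', 'pipe:0',            // WebM chunks arriving from the browser
    '-c:v', 'libx264',         // transcode if the source codec is VP8
    '-c:a', 'aac',
    '-f', 'flv',               // RTMP expects an FLV container
    'rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY', // placeholder
  ]);

  socket.on('message', (chunk: Buffer) => ffmpeg.stdin.write(chunk));
  socket.on('close', () => ffmpeg.stdin.end());
});
```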
They will use their camera to record their video, and select from my app which text should be added to the video.
Yeah, that's no problem at all. Draw everything to a canvas every frame, for example:
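A minimal sketch of that per-frame compositing loop (the element selectors and caption text are placeholders):

```typescript
// Sketch: draw the camera frame plus a caption onto a canvas on every
// animation frame. `video` is a <video> element playing getUserMedia output.
const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;
const video = document.querySelector('video')!;
let captionText = 'Hello, world'; // whatever the user selected in your UI

function drawFrame(): void {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  ctx.font = '32px sans-serif';
  ctx.fillStyle = 'white';
  ctx.fillText(captionText, 20, canvas.height - 30);
  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);
```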
Second, should I encode the output being sent to Youtube?
Yes, you must. Check out the MediaRecorder API.
Lastly, if I do need a server to process the video, where should the encoding happen (from the user's machine, or in the server, or both?)?
The video has to be encoded client-side to get to the server in the first place. The server can then hopefully just repackage it into FLV and send it along. If the browser doesn't support H.264 in its MediaRecorder API, you'll end up with an intermediate codec like VP8, and you'll have to transcode server-side.
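As a hedged sketch, you can probe for H.264 support up front and fall back to VP8, which is the case where the server-side transcode becomes necessary (the WebSocket URL points at whatever relay server you run, and the bitrate is an arbitrary choice):

```typescript
// Sketch: prefer H.264 in MediaRecorder when available so the server can
// repackage instead of transcode; otherwise fall back to VP8.
const canvas = document.querySelector('canvas')!;
const ws = new WebSocket('ws://localhost:8080'); // your relay server (placeholder)

const stream = canvas.captureStream(30); // 30 fps from the composited canvas
const mimeType = MediaRecorder.isTypeSupported('video/webm;codecs=h264')
  ? 'video/webm;codecs=h264'
  : 'video/webm;codecs=vp8';

const recorder = new MediaRecorder(stream, { mimeType, videoBitsPerSecond: 2_500_000 });
recorder.ondataavailable = (e) => ws.send(e.data); // ship each chunk to the relay
ws.onopen = () => recorder.start(1000);            // start once the socket is ready
```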
A few years ago, I wrote a tutorial on how to do all of these steps here: https://github.com/fbsamples/Canvas-Streaming-Example Note that the tutorial is in the context of Facebook, but this should teach you the concepts.
I want to broadcast existing videos to multiple users through Wowza...
Suppose I want to broadcast one uploaded video (on the Wowza server) to multiple users. How can I do that? Can Wowza call an API to start streaming on other users' devices? That is, when I start streaming a video from my application, it should also start playing on the other devices through the Wowza API.
Are you talking about broadcasting (streaming) an MP4 file as a simulated Live stream (playout) or as Video On Demand (VOD)?
Obviously you cannot force devices to start playing a stream. That'd only work if you develop an App that can listen for commands and trigger playback accordingly. Wowza doesn't have such an App, nor any built-in features that can do this.
If you want devices to access a stream on-demand you can simply upload the file to Wowza's content folder. If you want to have a programmed playout, like a TV channel, then you can check out this article: https://www.wowza.com/docs/how-to-schedule-streaming-with-wowza-streaming-engine-streampublisher
(the source code of the plug-in that is used in that article is available from https://github.com/WowzaMediaSystems/wse-plugin-streampublisher)
From your question, it sounds like you can already broadcast a stream successfully, presumably using the Wowza GoCoder SDK.
When you broadcast a live stream, the recorded videos are stored in the CONTENT directory inside the Streaming Engine installation directory; you can find all the streamed videos there.
Now, if you want to broadcast a particular stored video, you can do so by loading the specific URL for that video. Pushing the broadcast out to devices is not possible, but you can play that particular video, accessible to all your application users, at the URLs below:
HLS (iOS): http://[Host Address]:[PORT]/vod/mp4:sample.mp4/playlist.m3u8
RTSP (Android): rtsp://[Host Address]:[PORT]/vod/sample.mp4
Here, sample is the name of your stream. Broadcast each live stream with a different stream name so that all the videos remain accessible.
In this way, you can play back stored live-stream videos.
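If it helps, here is a small hedged helper that builds those playback URLs; the host, port, and the `vod` application name are assumptions about your Streaming Engine setup:

```typescript
// Sketch: build Wowza VOD playback URLs for a recorded stream name.
function playbackUrls(host: string, streamName: string, port = 1935) {
  return {
    hls: `http://${host}:${port}/vod/mp4:${streamName}.mp4/playlist.m3u8`, // iOS
    rtsp: `rtsp://${host}:${port}/vod/${streamName}.mp4`,                  // Android
  };
}

console.log(playbackUrls('media.example.com', 'sample')); // placeholder host
```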
I'm building a voice-only iOS (Swift) app and TokBox is my VoIP provider.
My app is simple: user1 talks to user2. However, I would like access to the audio stream in real time. Either option works for me: 1. the audio stream goes through my code and I stream it back to TokBox, or 2. the audio stream is forked to TokBox and to my code in parallel.
The only way I was able to get my hands on the audio stream is by using their archiving capabilities, but that is too late (only available after the session ends).
Any ideas? or maybe other providers that give me that option?
Option 1 can be done using the external/custom audio driver; take a look at this example for how to use and implement it: https://github.com/opentok/opentok-ios-sdk-samples/tree/master/Custom-Audio-Driver
I am new to live streaming. I have been exploring the web to learn how to live stream video. I am an iOS developer and I want to develop an app that streams video.
I am clear on the fundamentals of live video streaming. I have learned that I will need a streaming media server to feed the stream to viewers, and that each viewer needs a player that decodes the data and synchronizes the audio/video streams.
Now, Wowza is one streaming media server that is commonly recommended, but I have the following questions:
(1) Why a media server at all? Why can't we run our own media server? What does a media server actually do that makes its role necessary?
(2) In my app, I will have to integrate a library for encoding and feeding the stream to a streaming server like Wowza. But how is it fed to the streaming server?
(3) How will my server communicate with a streaming server like Wowza?
(4) How will Wowza feed the stream to the receiving side, i.e. a user who has an iPhone and wants to watch the live stream?
(5) What should be on the receiving side? What will decode the stream and play it in AVPlayer?
I need to develop a streaming app with good quality, so I would rather first understand the flow of data and then start.
It would be great if someone could give a graphical representation of the data flow.
Thanks a lot in advance!
Let me quickly add my understanding to your questions:
1a. Why a media server?
You could write your own software for distributing the stream data to all the players as well, but in that case you would need to implement various transport protocols, and you would end up building a fairly big piece of software: your own home-grown media server.
1b. What does the media server actually do that makes its role necessary?
One way to see the role of the media server is that it receives the live stream from a source and handles the distribution of this stream to potentially very many players. This usually involves taking the data out of the source transport protocol and repackaging it into one or more other container formats or transport protocols that the clients favour. Optionally, the media server can change the way the video or the audio is encoded (transcoding), or produce streams in several resolutions and qualities and provide the players with a list of the available qualities in the form of a manifest file (e.g. an m3u8 or SMIL file) so that they can do so-called adaptive streaming.
Another typical use case for media servers is serving non-live video files to players from disk, as well as recording live streams, and so on. If you look at the feature list of any popular media server, you'll see that they really do many things, so practically this is something you want to get out of the box rather than implement yourself.
In my app, I will have to integrate a library for encoding and feeding the stream to a streaming server like Wowza. But how is it fed to the streaming server?
You need to encode the video and audio with particular codecs (such as H.264 for video and AAC for audio), then choose a suitable container format to put these streams into (e.g. MPEG-TS), and then choose a transport protocol to push the stream to the server (e.g. RTMP). It's best to search for tutorials to see what this looks like in code; a rough sketch of the idea follows.
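As a conceptual sketch, the three choices map directly onto an encoder invocation. Here ffmpeg (spawned from Node) stands in for whatever encoding library you end up integrating on iOS, and the input file and Wowza host/application/stream names are placeholders:

```typescript
// Sketch: codec, container, and transport choices as encoder arguments.
import { spawn } from 'child_process';

spawn('ffmpeg', [
  '-re', '-i', 'camera-input.mp4',          // source (stand-in for the camera)
  '-c:v', 'libx264',                         // codec choice: H.264 video
  '-c:a', 'aac',                             //               AAC audio
  '-f', 'flv',                               // container choice suitable for RTMP
  'rtmp://wowza.example.com/live/myStream',  // transport choice: RTMP push
], { stdio: 'inherit' });
```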
How will my server communicate with a streaming server like Wowza?
The contract is basically the transport protocol; one example is using the RTMP protocol to connect to Wowza and publish the stream to it. These protocols cover all the technical details.
How will Wowza feed the stream to the receiving side, i.e. a user who has an iPhone and wants to watch the live stream?
The player software initiates the communication with Wowza. This is again protocol-dependent, but in case you are using HLS, the player uses the HTTP protocol to find the URLs of the consecutive video chunks, which it progressively downloads and displays to the user.
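A hedged sketch of what the player does under the hood (the URL is a placeholder, and a real player also handles master playlists, refresh intervals, and decoding):

```typescript
// Sketch: fetch an HLS manifest over plain HTTP and progressively download
// the entries it lists (media segments, or variant playlists for a master).
async function pollPlaylist(manifestUrl: string): Promise<void> {
  const manifest = await (await fetch(manifestUrl)).text();

  // Non-comment lines in an .m3u8 are the (possibly relative) URLs.
  const entries = manifest.split('\n').filter((l) => l && !l.startsWith('#'));

  for (const entry of entries) {
    const chunk = await fetch(new URL(entry, manifestUrl).toString());
    // ...append the downloaded bytes to the playback buffer.
    console.log('fetched', entry, (await chunk.arrayBuffer()).byteLength, 'bytes');
  }
}

pollPlaylist('http://wowza.example.com:1935/live/myStream/playlist.m3u8');
```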
What should be on the receiving side? What will decode the stream and play it in AVPlayer?
It's not clear whether the app you're developing is the broadcaster side or the player side, but generally on the player side you need to find a library that can pull the stream from the media server with the protocol/transport/codec you are using. I am not familiar with this part on iOS; I only have experience with players embedded in websites.
I am not going to draw this, but imagine three boxes connected with arrows, and that's the data flow: from the encoder to the streaming server and finally to the player. That's it, I guess. :-)
Is there a solution to the problem below?
I have the video/audio URLs.
My requirement is: is it possible to fetch the video/audio from the server and, at the same time, open the player to play it (i.e. show the live video directly in the browser)?
In other words: stream the data into a buffer on the back end while simultaneously showing it in the player.
If the above is possible, I also want to save that particular video/audio stream data to a file.
This BlackBerry KB link explains streaming video from a server; it may help you.
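For what it's worth, in a modern browser the same idea looks roughly like this hedged sketch: let the `<video>` element stream the URL progressively for playback, while a parallel fetch keeps a copy of the same bytes for saving (the media URL is a placeholder, and a production version might share one download via MediaSource):

```typescript
// Sketch: play the URL progressively in a <video> element while a parallel
// fetch of the same resource accumulates the bytes for saving.
const mediaUrl = 'http://example.com/media/clip.mp4'; // placeholder

const video = document.querySelector('video')!;
video.src = mediaUrl; // the element buffers and plays as it downloads
void video.play();

(async () => {
  // Save: fetch the same resource and hand the bytes to a download link.
  const blob = await (await fetch(mediaUrl)).blob();
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'clip.mp4';
  link.click();
})();
```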