Is it possible, on a composed archive, to show not only the stream but also the stream name above or below it?
I have read that the maximum number of streams in a composed archive is 9, and that the 10th will be ignored. But will it record, for example, 20 individual streams for the same session?
TokBox Developer Evangelist here.
At this time, OpenTok Archiving does not support recording the name of the stream.
Updated (11/27/18):
You can record up to 50 streams when in individual archiving mode.
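For context, the output mode is chosen when the archive is started from your server. A rough sketch of the REST call, as I recall it from the OpenTok archiving docs (endpoint, auth header, and field names are worth verifying against the current documentation):

    POST https://api.opentok.com/v2/project/<API_KEY>/archive
    X-OPENTOK-AUTH: <JWT signed with your API secret>
    Content-Type: application/json

    {
      "sessionId": "<SESSION_ID>",
      "outputMode": "individual"
    }

In individual mode each stream is recorded to its own file (up to the 50-stream limit above) instead of being mixed into one composed video.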
I'm creating an iOS app where I want a user to be able to live stream a video; however, users who join the live stream after it starts should start watching the stream from the beginning instead of live (I will also add functionality that allows the viewer to skip ahead and then watch live).
I have looked at many third-party streaming options such as Agora, Twilio, Vimeo, etc.; however, I don't believe they meet my needs, as I need users who join the live stream to start watching from the beginning rather than live.
I have explored continuously uploading small video chunks to something like Firebase Storage, and then continuously reading those chunks for users watching the stream. However, as explained here: https://stackoverflow.com/a/37870706/13731318 , this is not very efficient and leads to a substantial lag.
Does anyone have any idea how to go about doing this in a way that leverages third parties?
I think you can use the HLS protocol to implement this.
HLS lets a viewer start watching from the beginning or from the live edge; that is controlled by the playlist settings (with an event-style playlist, earlier segments stay in the playlist, so clients can seek back to the start).
I am not sure about the uploading part, because I think most of that has to be implemented on the server side.
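For what it's worth, the "late joiners can start from the beginning while the broadcast is still running" behaviour maps to an HLS playlist of type EVENT: segments are only ever appended, never removed, so players can seek back to the start. A minimal sketch of the playlist the server would keep appending to (segment names and durations are made up):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-PLAYLIST-TYPE:EVENT
    #EXT-X-TARGETDURATION:6
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:6.0,
    segment0.ts
    #EXTINF:6.0,
    segment1.ts
    # ...new #EXTINF/segment pairs are appended as they are encoded...

When the broadcast ends, the server appends #EXT-X-ENDLIST and the playlist becomes a normal VOD asset.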
I am looking for a way to record sound on the iPhone from all microphones at the same time, or at least from two, to record stereo. I looked through several topics on the English version of this resource, but all of them were created before 2017, and they say that such a recording is not possible due to the single-channel incoming audio stream.
Further searching turned up information that the iPhone XS can record video with stereo sound.
Dear colleagues, kindly tell me how to solve this problem or at least where to look.
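For anyone reading this later: starting with iOS 14, Apple exposes stereo capture from the built-in microphones through AVAudioSession, which is one place to look. A minimal sketch, with the API names as I recall them from Apple's stereo-capture sample (worth verifying against the current documentation):

    import AVFoundation

    @available(iOS 14.0, *)
    func configureStereoCapture() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])

        // Find the built-in microphone input.
        guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else { return }
        try session.setPreferredInput(builtInMic)

        // Pick a data source that supports the stereo polar pattern.
        if let stereoSource = builtInMic.dataSources?.first(where: {
            $0.supportedPolarPatterns?.contains(.stereo) == true
        }) {
            try stereoSource.setPreferredPolarPattern(.stereo)
            try builtInMic.setPreferredDataSource(stereoSource)
            // Tell the session how the device is held so left/right channels are mapped correctly.
            try session.setPreferredInputOrientation(.portrait)
        }

        try session.setActive(true)
        // From here, record with AVAudioRecorder or AVAudioEngine using 2 input channels.
    }

Older devices and older iOS versions still deliver a single-channel input, which matches what the pre-2017 threads describe.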
I'm building a voice-only iOS (Swift) app and TokBox is my VoIP provider.
My app is simple: user1 is talking to user2. However, I would like to get access to the audio stream in real time. I'm OK with either option:
1. The audio stream goes to my piece of code, then I stream it back to TokBox.
2. The audio stream is forked to TokBox and to my code in parallel.
The only way I was able to get my hands on the audio stream is by using their archiving capabilities, but that is too late (only available after the session ends).
Any ideas? Or maybe other providers that give me that option?
Option 1 can be done using the external/custom audio driver; take a look at this example of how to use/implement it: https://github.com/opentok/opentok-ios-sdk-samples/tree/master/Custom-Audio-Driver
The goal is to optimize viewing quality as fast as possible (always the goal, isn't it?).
Project notes:
Using HTTP Live Streaming (HLS) to allow the iOS device to choose the best stream for viewing.
The stream is not live.
The video duration is ~1 minute.
Targeting iPhone 3GS and beyond
Three questions:
What should the target encoder settings be for the initial cellular stream? Encoder settings tables: Preparing Media for Delivery to iOS-Based Devices
Apple suggests (reproduced below) the target duration should be 10 seconds. If the initial stream quality is lower than current capability, you'll be stuck viewing that same stream for 10 seconds before the switch is made. I'm considering moving it to 3-5 seconds. Are there recommendations around a lower limit? I believe Apple's advice comes from a live streaming perspective, and may not apply.
How can I debug HLS on the device, to view stream switches and timings? I ran into a link at one point...
Use 10 second Target Durations
The value you specify in the EXT-X-TARGETDURATION tag for the maximum media segment duration will have an effect on startup. We strongly recommend a 10 second target duration. If you use a smaller target duration, you increase the likelihood of a stall. Here's why: if you've got live content being delivered through a CDN, there will be propagation delays, and for this content to make it all the way out to the edge nodes on the CDN it will be variable. In addition, if the client is fetching the data over the cellular network there will be higher latencies. Both of these factors make it much more likely you'll encounter a stall if you use a small target duration.
Thanks SO
1) This will probably be trial and error with your consumers. I would go with a very low bitrate for the initial stream given a low target duration and assume the quality change will happen quickly (see 2)
2) This really does depend on your CDN. It is easier for VOD, however, because there is only one HTTP request per segment, unlike live (2 requests per segment). That being said, Microsoft Silverlight's default is 2 seconds, and it was good enough for Netflix (a short-segment playlist sketch follows below).
3) No idea.
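To make point 2 concrete: for a ~1 minute VOD asset, a shorter target duration just means more, smaller segments in each media playlist. A sketch with 4-second segments (all values illustrative):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-PLAYLIST-TYPE:VOD
    #EXT-X-TARGETDURATION:4
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:4.0,
    low_000.ts
    #EXTINF:4.0,
    low_001.ts
    # ...and so on for the rest of the minute...
    #EXT-X-ENDLIST

Smaller segments let the client re-evaluate bandwidth and switch variants sooner, at the cost of more HTTP requests.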
For #1, Apple mentions having more than one master index file to control the quality of the first stream played.
If you are an iOS app developer, you can query the user’s device to determine whether the initial connection is cellular or WiFi and choose an appropriate master index file.
To ensure the user has a good experience when the stream is first played, regardless of the initial network connection, you should have more than one master index file consisting of the same alternate index files but with a different first stream.
A 150k stream for the cellular variant playlist is recommended.
A 240k or 440k stream for the Wi-Fi variant playlist is recommended.
Note: For details on how to query an iOS-based device for its network connection type, see the following sample code: Reachability.
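To illustrate the "different first stream" idea, you would serve two separate master index files that list the same variants in a different order, roughly like this (bitrates from the quote above, URIs made up):

    # cellular.m3u8 – 150k variant listed first
    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=150000
    low/prog_index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=440000
    high/prog_index.m3u8

    # wifi.m3u8 – 440k variant listed first
    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=440000
    high/prog_index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=150000
    low/prog_index.m3u8

The player starts with whichever variant is listed first and then adapts, so the app only has to pick cellular.m3u8 or wifi.m3u8 based on the connection type (Reachability at the time of that note; NWPathMonitor on current iOS).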
When I try to stream TS chunks generated by 3rd-party multiplexers (Mainconcept/Elecard) from the Safari browser on iPad 2.0/1.0, I always see an audio/video synchronization issue over a period of time.
But the same clips play fine in a standard media player on a Windows PC or a MacBook.
I also observe that there is no issue on the iPad when I try to stream TS chunks generated by the Media File Segmenter tool on a MacBook.
What is it that the iPad expects from 3rd-party multiplexers?
For example: when I try to stream a set of TS chunks on the iPad where the overall duration is 5 min 35 sec (including all TS chunks), I observe the audio going out of sync after 2 min 40 sec.
The following is the media pipeline used to generate the TS chunks:
Video.mp4 (Source) -> Mainconcept MPEG4 DeMultiplexer -> Mainconcept MPEG Multiplexer -> Mainconcept Sink Filter (generates TS chunks based on time)
Can someone share some pointers on iPad HLS behaviour? Does the iPad expect some additional parameters for synchronization?
Thanks.
In the Mainconcept Multiplexer settings, enable "optimized packing". This will resolve the AV sync issue.
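If you need to see exactly what differs between the working and failing segments, one approach is to segment the same source with Apple's mediafilesegmenter and then run mediastreamvalidator against both playlists, comparing what it reports about segment timing and any errors. Basic usage, from memory, is just pointing it at the playlist URL:

    mediastreamvalidator http://yourserver/path/prog_index.m3u8

Whatever it flags on the Mainconcept-generated segments (and not on the Apple-generated ones) is a good hint at what the iPad player is objecting to.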