I'm building a tool with Twilio where I record parts of the call on request. I'm using Audacity to check the encoding of the file, and it appears to be Linear PCM, but part of Twilio's documentation states that a call uses 8-bit PCM mono u-law at a sampling rate of 8 kHz.
Can this be configured? I need to be able to choose the recording's encoding myself, either when starting the recording or when downloading it.
When fetching the recording I can only choose MP3 or WAV.
Thank you.
The recording format is not a configurable option.
Note that calls traveling across the Public Switched Telephone Network (PSTN) have their audio quality constrained by that transport. See the article below for more details.
You could download the recordings from Twilio, transcode them to your required format, and store them in your own cloud storage.
Best Practices for Audio Recordings
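If it helps, here is a rough Node/TypeScript sketch of that download-and-transcode step. The recording URL, the Account SID / Auth Token values and the target encoding (8 kHz mono u-law here) are placeholders for whatever you actually need, and it assumes ffmpeg is installed on the machine doing the work:

// transcode-recording.ts -- illustrative sketch only
import { execFile } from "node:child_process";
import { writeFile } from "node:fs/promises";

// Placeholder: the .wav media URL of the recording fetched from Twilio.
const RECORDING_URL =
  "https://api.twilio.com/2010-04-01/Accounts/ACXXXXXXXX/Recordings/REXXXXXXXX.wav";

async function main() {
  // Download the WAV that Twilio exposes (HTTP basic auth with Account SID : Auth Token).
  const auth = Buffer.from("ACXXXXXXXX:your_auth_token").toString("base64");
  const res = await fetch(RECORDING_URL, { headers: { Authorization: `Basic ${auth}` } });
  if (!res.ok) throw new Error(`Download failed: ${res.status}`);
  await writeFile("recording.wav", Buffer.from(await res.arrayBuffer()));

  // Transcode to the encoding you need (here: 8 kHz, mono, u-law), then upload to your own storage.
  execFile(
    "ffmpeg",
    ["-y", "-i", "recording.wav", "-ar", "8000", "-ac", "1", "-c:a", "pcm_mulaw", "recording-ulaw.wav"],
    (err) => { if (err) throw err; }
  );
}

main();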
I read this document and I'm not sure whether I need to add an audio-only stream.
https://developer.apple.com/library/content/qa/qa1767/_index.html
The App Store review guideline was changed as follows.
Video streaming content over a cellular network longer than 10 minutes must use HTTP Live Streaming and include a baseline 64 kbps audio-only HTTP Live stream
=> Video streaming content over a cellular network longer than 10 minutes must use HTTP Live Streaming and include a baseline 192 kbps or lower HTTP Live stream
When I have an audio-only stream and the user lands on that stream, it looks like a bug, because only a still image is shown, and it takes a little while to recover from that stream back to video.
So I just want to provide a minimum bitrate stream that includes video (video 100 kbps, audio 92 kbps).
Is it possible to use a 192 kbps stream with video as the minimum bitrate, without an audio-only stream?
Thanks.
Apple basically moved the answer to that into the linked Technical Note TN2224.
If your app uses HTTP Live Streaming over cellular networks, you are required to provide at least one stream at 192 kbps or lower bandwidth. The low-bandwidth stream may be audio-only, or audio with a still image, but you should strive to have video in your 192 kbps stream.
You should be fine as long as your lowest stream complies with that limit; it no longer has to be audio-only.
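Purely as an illustration (the bandwidth, resolution and codec values below are made up, not Apple's numbers), a low-bandwidth variant that still carries video could be listed in the master playlist like this:

#EXT-X-STREAM-INF:BANDWIDTH=184000,RESOLUTION=320x180,CODECS="avc1.42c00c,mp4a.40.2"
low/index.m3u8

The point is simply that the lowest variant stays at or under the 192 kbps limit while still containing a video rendition.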
This question is in regards to DailyMotion's hls.js API
My goal is to save on data usage when not connected to WiFi by playing only the audio portion of a HLS video stream.
I have looked at similar questions for other APIs but have not found anything relevant to the hls.js API.
Details:
I tested my live stream HLS file on your demo page. It identified 1 audio track and displayed it in the Audio Track Controls. At the bottom of this post I am including the format of my HLS file with identifying info changed.
Question:
Will the hls.js API allow me to force playback to audio only once I have determined there is no WiFi connection? What setting or command would I use to do that? Alternatively, can I force playback to the lowest resolution?
Thanks,
RKern
HLS File Format:
#EXTM3U
#EXT-X-VERSION:5
#EXT-UPLYNK-LIVE
#EXT-X-START:TIME-OFFSET=0.00
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",NAME="unspecified",LANGUAGE="en",AUTOSELECT=YES,DEFAULT=YES
#UPLYNK-MEDIA0:416x234x30,baseline-13,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=416x234,BANDWIDTH=471244,CODECS="mp4a.40.5,avc1.42000d",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=411975
http://content-ause1.uplynk.com/channel/test/d.m3u8?pbs=test
#UPLYNK-MEDIA0:704x396x30,main-30,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=704x396,BANDWIDTH=873267,CODECS="mp4a.40.5,avc1.4d001e",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=688830
http://content-ause1.uplynk.com/channel/test/e.m3u8?pbs=test
#UPLYNK-MEDIA0:896x504x30,main-31,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=896x504,BANDWIDTH=1554841,CODECS="mp4a.40.5,avc1.4d001f",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=1171051
http://content-ause1.uplynk.com/channel/test/f.m3u8?pbs=test
#UPLYNK-MEDIA0:1280x720x30,main-31,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=1280x720,BANDWIDTH=3328000,CODECS="mp4a.40.5,avc1.4d001f",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=2414865
http://content-ause1.uplynk.com/channel/test/g.m3u8?pbs=test
#UPLYNK-MEDIA0:192x108x15,baseline-11,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=192x108,BANDWIDTH=136226,CODECS="mp4a.40.5,avc1.42000b",FRAME-RATE=15.000,AUDIO="aac",AVERAGE-BANDWIDTH=120009
http://content-ause1.uplynk.com/channel/test/b.m3u8?pbs=test
#UPLYNK-MEDIA0:256x144x30,baseline-12,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=256x144,BANDWIDTH=259601,CODECS="mp4a.40.5,avc1.42000c",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=232565
http://content-ause1.uplynk.com/channel/test/c.m3u8?pbs=test
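For reference, here is a rough hls.js sketch of pinning playback to the lowest variant when you detect you are not on WiFi. The URL, the element id and the connectivity check are placeholders, and it assumes a reasonably recent hls.js build (levels are normally sorted from lowest to highest bitrate, so index 0 is the cheapest one):

import Hls from "hls.js";

const video = document.getElementById("video") as HTMLVideoElement;

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource("https://content-ause1.uplynk.com/channel/test/master.m3u8"); // placeholder URL
  hls.attachMedia(video);

  hls.on(Hls.Events.MANIFEST_PARSED, () => {
    const onWifi = false; // placeholder for your own connectivity check
    if (!onWifi) {
      // Cap adaptive bitrate selection to the lowest level (index 0)...
      hls.autoLevelCapping = 0;
      // ...or force it outright:
      // hls.currentLevel = 0;
    }
  });
}

True audio-only playback would still require an audio-only variant (an #EXT-X-STREAM-INF entry with no video codec) in the master playlist; with the playlist above, capping to the lowest video rendition is the closest you can get.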
I am new to live streaming. I have been exploring on the web how to live stream video. I am an iOS developer and I want to develop an app that streams video.
I am clear about the fundamentals of live video streaming. I understand that I will need a streaming media server which feeds the stream to the viewer, and that the viewer has to have a player which decodes the data and synchronizes the audio/video streams.
Wowza is one streaming media server that is often recommended, but I have the following questions:
(1) Why a media server? Why can't we have our own media server? What does a media server actually do that makes its role necessary?
(2) In my app, I will have to integrate a library for encoding and feed the output to a streaming server like Wowza. But how would it be fed to the streaming server?
(3) How will my server communicate with a streaming server like Wowza?
(4) How will Wowza feed the stream to the receiving side, i.e. a user who has an iPhone and needs to see the live stream?
(5) What should be on the receiving side? What will decode the stream and play it in AVPlayer?
I need to develop a streaming app with good quality, so I'd rather understand the flow of data first and then start.
It would be great if someone gives a graphical representation of the data flow.
Thanks a lot in Advance !!!
Let me quickly add my understanding to your questions:
1a. Why Media Server? ..
You could write your own software for distributing the stream data to all the players as well. But in that case you would need to implement various transport protocols and you would end up implementing a fairly big piece of software, your home grown media server.
1b. What does a Media Server actually do to make its role necessary?
One way to see the role of the media server is that it receives the live stream from a stream source and handles the distribution of that stream to potentially very many players. This usually involves taking the data out of the source transport protocol and repackaging it into one or more other container formats or transport protocols that the clients favour. Optionally the media server can change the way the video or the audio is encoded (transcoding), or produce streams in different resolutions and qualities and provide the players with the list of available qualities in the form of a manifest file (e.g. an m3u8 or SMIL file) so they can do so-called adaptive streaming.
Another typical use case of media servers is serving non-live video files to players from disk, as well as recording live streams, and so on. If you look at the feature list of popular media servers, you'll see that they really do many things, so practically this is something you probably want to get out of the box rather than implement yourself.
In my app, I will have to integrate a library for encoding and feed the output to a streaming server like Wowza. But how would it be fed to the streaming server?
You need to encode the video and audio with particular codecs (such as H.264 for video and AAC for audio), then choose a suitable container format to put these streams into (e.g. MPEG-TS, or FLV in the case of RTMP), and then choose a transport protocol to push the stream to the server (e.g. RTMP). It's best to google for tutorials to see what this looks like in code.
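As a rough illustration only (the host, application and stream names are placeholders, and an iOS app would typically use a native encoding/RTMP library rather than a command-line tool), this is roughly what pushing an H.264/AAC stream to an RTMP ingest looks like with ffmpeg:

ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -c:a aac -f flv rtmp://your-wowza-host:1935/live/myStream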
How will my server communicate with a streaming server like Wowza?
The contract is basically the transport protocol; one example is using the RTMP protocol to connect to Wowza and publish the stream to it. These protocols cover all the technical details.
How will Wowza feed the stream to the receiving side, i.e. the user who has an iPhone and needs to see a live stream?
The player software will initiate the communication with Wowza. This is again protocol dependent, but if you are using HLS, the player will use the HTTP protocol to find out the URLs of the consecutive video chunks that it will progressively download and display to the user.
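To make that concrete, here is a tiny, simplified TypeScript sketch of that HTTP workflow (the playlist URL is a placeholder, and real players do far more: buffering, decryption, periodic playlist refreshes for live streams): fetch the media playlist and resolve the segment URLs it lists.

const playlistUrl = "https://example.com/live/stream_720p.m3u8"; // placeholder

async function listSegments(): Promise<string[]> {
  const text = await (await fetch(playlistUrl)).text();
  // In a media playlist, the non-comment lines are the segment URIs, in playback order.
  return text
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith("#"))
    .map((uri) => new URL(uri, playlistUrl).toString());
}

listSegments().then((segments) => console.log("segments to download, in order:", segments));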
What should be on the receiving side? What will decode the stream and play it in AVPlayer?
It's not clear whether your app under development is the broadcaster side or the player side. But generally on the player side you need to find a library that is able to pull the stream from the media server with the protocol/transport/codec you are using. I am not familiar with this part in iOS, I only have experience with players embedded in websites.
I am not going to draw this, but imagine 3 boxes connected with arrows and that's the data flow. From encoder to streaming server and finally to player. That's it I guess.. :-)
I'm working with Apple's HTTP Live Streaming protocol which, for apps submitted to the App Store, requires that there is an audio-only stream as part of the multiplex. As a result, the first segment (10 seconds) is always audio-only and a still image is shown instead of the beginning of the video stream, regardless of the amount of bandwidth that's actually available.
I know I can show a static image instead, but I'm wondering if there's a way to delay the stream from starting until it's determined whether there's enough bandwidth to go straight to the video stream.
The order of your bitrates in the manifest file is key, as the device tries the variants in the order they are listed. We recommend listing the bitrates from highest to lowest, so the video doesn't start off on the worst bitrate and then switch up only once iOS has detected sufficient bandwidth.
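Purely as an illustration (the bandwidth and URI values are made up), that ordering would look something like this, with the highest-quality variant listed first and the audio-only variant last:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2200000,RESOLUTION=1280x720,CODECS="avc1.4d001f,mp4a.40.2"
high/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,CODECS="avc1.4d001e,mp4a.40.2"
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
audio/index.m3u8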
I have been unable to find a radio station streaming with HLS that uses timed metadata. I have found many HLS streams, but none with actual metadata in the stream. I need to find an existing stream for testing.
Any suggestions for a station, or how to find one?
It's not a radio station, but Apple's developer resources for HTTP Live Streaming include two example streams. The "advanced" stream is described as including timed metadata with a timecode every five seconds.
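If you end up testing in a browser, one way to confirm that a candidate stream actually carries timed metadata is to listen for hls.js's ID3 metadata event. This is only a rough sketch: the stream URL is a placeholder and the exact payload shape can vary between hls.js versions.

import Hls from "hls.js";

const video = document.createElement("video");
const hls = new Hls();
hls.loadSource("https://example.com/radio/master.m3u8"); // placeholder: stream under test
hls.attachMedia(video);

// Fired when ID3 timed metadata is found in a parsed fragment.
hls.on(Hls.Events.FRAG_PARSING_METADATA, (_event, data) => {
  for (const sample of data.samples) {
    console.log("timed metadata at", sample.pts, "-", sample.data.byteLength, "bytes of ID3");
  }
});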