How to load my local mpd (MPEG-DASH) file into online players?

I am trying to use some online players to test my local mpd file, but it couldn't be loaded using a 'file:///path-to-file' URL the way local files usually are. What's the correct format to load the file? Or should I upload it somewhere online?

DASH is designed to be streamed from a server, so the player expects to send requests for the mpd and for each media chunk to a server, which responds with the mpd or the corresponding chunk.
The mpd is the 'index file' or manifest for the individual video and audio streams.
If you want to test locally then this is definitely possible, and the easiest way is to set up a local test server and stream from there. You will need to create the mpd and the chunked media streams and make them available on your server, but it sounds like you have already created these.
You can then point your test player to your local server. Remember to serve the streams over HTTPS, as most players and browsers now require it.
There is a good set of step-by-step instructions available from Mozilla: https://developer.mozilla.org/en-US/docs/Web/Media/DASH_Adaptive_Streaming_for_HTML_5_Video
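For example, a quick way to get an HTTPS test server going is Python's built-in http.server module. This is a minimal sketch under stated assumptions: the certificate and key paths, the port, and the wide-open CORS header (needed if an online player hosted on another origin is to fetch from your machine) are all illustrative.

# serve_dash.py - minimal HTTPS server for local DASH testing (sketch).
# Run it from the directory containing your .mpd and segment files.
# Assumes a self-signed certificate generated beforehand, e.g.:
#   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem \
#       -out cert.pem -days 365 -subj "/CN=localhost"
import http.server
import ssl

class CORSHandler(http.server.SimpleHTTPRequestHandler):
    # Online players run on a different origin, so the browser will only
    # let them fetch the mpd and segments if CORS headers allow it.
    def end_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

server = http.server.HTTPServer(("0.0.0.0", 8443), CORSHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("cert.pem", "key.pem")  # hypothetical paths
server.socket = context.wrap_socket(server.socket, server_side=True)
print("Serving on https://localhost:8443/ ...")
server.serve_forever()

You would then point the player at https://localhost:8443/your-manifest.mpd; the browser will warn about the self-signed certificate, which you need to accept once.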

Related

Twilio: How to get the call record mp3 file?

I am using this API for getting Twilio call logs. Some of the calls are recorded, so I need to get the corresponding mp3 file. Under the subresource_uris section I found recordings, but it points to a .json file.
How can I get the mp3 file link for call recordings? I am using C#.
The .json URL retrieves the metadata.
If you want to retrieve the actual recording, leave out the .json (which delivers a .wav) or replace it with .mp3, e.g. https://api.twilio.com/2010-04-01/Accounts/ACXXXXX.../Recordings/REXXXXX....mp3.
See also the Twilio documentation "Fetch a recording media file".
The Twilio media URLs should be public by default, see "Fetch a Recording resource":
Because the URLs that host individual recordings are useful for many external applications, they are public and do not require HTTP Basic Auth to access.
However, you can turn on HTTP Basic Auth via the console settings.
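The question mentions C#, but fetching the recording is just an HTTP GET against the URL above; here is a hedged sketch of the same call in Python with the requests library (all SIDs and the token are placeholders):

# Download a Twilio call recording as an mp3 (sketch; SIDs are placeholders).
import requests

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"    # placeholder
AUTH_TOKEN = "your_auth_token"                         # placeholder
RECORDING_SID = "REXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"   # placeholder

url = (f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}"
       f"/Recordings/{RECORDING_SID}.mp3")

# Recording media URLs are public by default, but send Basic Auth
# anyway in case it has been enabled in the console settings.
response = requests.get(url, auth=(ACCOUNT_SID, AUTH_TOKEN))
response.raise_for_status()

with open(f"{RECORDING_SID}.mp3", "wb") as f:
    f.write(response.content)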

How to find out what my live stream URL is?

I am trying to stream from OBS (or similar) to a service like Restream, but I also want to send the feed to a custom RTMP player on my site, which asks for a live streaming URL. How can I find that?
I believe OBS asks you to connect to a streaming platform (I think this is what you meant by "restream"). Depending on the platform you are streaming to (Twitch, YouTube, etc.), you would need to get the URL from that site. Connect to one of these first and it should give you a URL. I think YouTube requires 24 hours to enable live streaming.
I am not aware of a way to stream directly to your website. In most cases you have a dynamic IP, or a port that is not open, so your website cannot reach the original stream coming from your OBS software/computer.
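For reference, the details OBS asks for typically look something like this (the values here are purely illustrative; the real ones come from your platform's dashboard):

Server URL:  rtmp://live.example.com/app
Stream key:  abcd-1234-efgh-5678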

On Heroku, can I use Rails to serve a generated audio file back to my React Native front-end?

I'm generating temporary audio files using Rails and storing them in the tmp directory on Heroku. I then want to send the audio back to Expo/React Native to be played. How can I use Rails to serve that data as a POST response?
In my Rails API backend, I connect to a Text-to-Speech API (Google's) and generate an mp3 from text. I can verify that these files are being correctly created and stored in tmp as per my POST request. I am missing the conceptual piece of how to return them.
It's ideal for me that these files are temporary, as I only need them during a single session. I can see a couple of options:
1. Save the files as .mp3s in tmp, as I'm currently doing, and find a way to attach them to my POST response.
2. Since I get the audio back from the API as binary, skip saving the files as .mp3s and respond to the POST with the binary encoded into JSON, or something (see the sketch below).
3. ???
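To make option 2 concrete, the server-side pattern would be something like the following. This is a rough, framework-neutral sketch in Python (my actual app is Rails, where the equivalent would be Base64.strict_encode64 plus render json:); the names are made up.

# Sketch of option 2: return the TTS audio bytes as base64 inside JSON.
# (Python stand-in for the Rails controller action; names are made up.)
import base64
import json

def tts_response(audio_bytes: bytes) -> str:
    # Base64 makes the raw mp3 bytes safe to carry inside JSON.
    payload = {
        "audio_content": base64.b64encode(audio_bytes).decode("ascii"),
        "mime_type": "audio/mpeg",
    }
    return json.dumps(payload)

# The Expo client would decode the base64 string and hand it to the
# audio player, e.g. via a data: URI or by writing a temporary file.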
(Related: Play sound in Expo / React Native.)
I'm trying to avoid setting up an AWS bucket for this if possible, but if that's the best way to do it, I'm happy to hear that as well.
Thanks!

Generate URLs for mpd videos

I hope I am posting with the right tags.
I added the dash.js player to a page on my website, and I created some mpd files from mp4 ones.
I want to know how I can generate URLs for these files so that my app can access them.
In case it helps, I am using Apache2 to serve my application.
The mpd file provides an index with pointers to the individual streams for your video - e.g. the different bitrate video files, the audio stream, subtitles, etc.
The pointers in the mpd file are relative or absolute URLs which the client, e.g. the browser, can access.
To allow the browser to access the mpd itself, you just have to put it somewhere in your server's file structure that clients can access, or that the server will redirect client requests for video to.
The online Apache documentation provides an overview of how you can map URL requests to file locations:
https://httpd.apache.org/docs/trunk/urlmapping.html
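For instance, if the mpd files and their segments live outside your DocumentRoot, an Alias directive can map a URL prefix onto that directory. A sketch with hypothetical paths (adjust to your own layout):

# Map https://your-server/media/... onto /var/www/dash/...
# (both the URL prefix and the directory are hypothetical)
Alias "/media/" "/var/www/dash/"
<Directory "/var/www/dash">
    Require all granted
</Directory>

With that in place, a manifest at /var/www/dash/video.mpd is reachable at https://your-server/media/video.mpd, and that is the URL you hand to dash.js.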

Monitor and navigate S3 bucket for new files added by users

I have a Rails app that catalogues recorded music products with metadata & wav files.
Previously, my users had the option to send me files via FTP, which I'd monitor with a cron task for new .complete files; I'd then pick up the associated .xml file and perform a metadata import and audio file transfer to S3.
I regularly hit capacity limits on the prior FTP server, so I decided to move the user 'dropbox' to S3, with an FTP gateway to allow users to send me their files. Now that it's on S3, and because S3 does not store objects in folders, I'm struggling to get my head around how to navigate the bucket, find the .complete files, and then perform my imports as usual.
Can anyone recommend how to 'scan' a bucket for new .complete files, read the filename, and pass it back to my app so that I can then pick up its xml, wav, and jpg files?
The structure of the files in my bucket is like this. As you can see, there are two products here; I would need to find both and import their associated xml data and wavs/jpg:
42093156-5060156655634/
42093156-5060156655634/5060156655634.complete
42093156-5060156655634/5060156655634.jpg
42093156-5060156655634/5060156655634.xml
42093156-5060156655634/5060156655634_1_01_wav.wav
42093156-5060156655634/5060156655634_1_02_wav.wav
42093156-5060156655634/5060156655634_1_03_wav.wav
42093156-5060156655634/5060156655634_1_04_wav.wav
42093156-5060156655634/5060156655634_1_05_wav.wav
42093156-5060156655634/5060156655634_1_06_wav.wav
42093156-5060156655634/5060156655634_1_07_wav.wav
42093156-5060156655634/5060156655634_1_08_wav.wav
42093156-5060156655634/5060156655634_1_09_wav.wav
42093156-5060156655634/5060156655634_1_10_wav.wav
42093156-5060156655634/5060156655634_1_11_wav.wav
42093163-5060243322593/
42093163-5060243322593/5060243322593.complete
42093163-5060243322593/5060243322593.jpg
42093163-5060243322593/5060243322593.xml
42093163-5060243322593/5060243322593_1_01_wav.wav
Though Amazon S3 does not formally have the concept of folders, you can actually simulate folders through the GET Bucket API, using the delimiter and prefix parameters. You'd get a result similar to what you see in the AWS Management Console interface.
Using this, you could list the top-level directories, and scan through them. After finding the names of the top-level directories, you could change the parameters and issue a new GET Bucket request, to list the "files" inside the "directory", and check for the existence of the .complete file as well as your .xml and other relevant files.
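As a concrete sketch, the two-step listing could look roughly like this with boto3 (the bucket name is a placeholder, and pagination is glossed over):

# Sketch: find .complete files by simulating folders with Delimiter/Prefix.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-dropbox-bucket"  # placeholder

# Step 1: list the top-level "directories" via CommonPrefixes.
# (For buckets with more than 1000 entries, use a paginator.)
top = s3.list_objects_v2(Bucket=BUCKET, Delimiter="/")
for cp in top.get("CommonPrefixes", []):
    folder = cp["Prefix"]  # e.g. "42093156-5060156655634/"

    # Step 2: list the keys inside that "directory".
    listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=folder, Delimiter="/")
    keys = [obj["Key"] for obj in listing.get("Contents", [])]

    if any(k.endswith(".complete") for k in keys):
        # Hand the folder name back to the app so it can pick up the
        # associated .xml, .wav and .jpg keys from the same listing.
        print("ready to import:", folder)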
However, there might be a different approach to your problem: did you consider using SQS? You could make the process that receives the uploads post a message to a queue in SQS, say, completed-uploads, with the name of the folder of the upload that just completed. Another process would then consume the queue and process the finished uploads. No need to scan through the directories in S3.
Just note that, if you try the SQS approach, you need to be prepared for the possibility of being notified more than once about a finished upload: SQS guarantees that it will eventually deliver posted messages at least once, so you might receive duplicate messages. (You can identify a duplicate by saving the id of each received message in, say, a consistent database, and checking newly received messages against it.)
Also, remember that if you use the US Standard region for S3, you don't have read-after-write consistency, only eventual consistency, which means the process receiving messages from SQS might try to GET the object from S3 and get nothing back -- just try again until it sees the object.
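If you do try SQS, the consumer loop could look roughly like this with boto3 (the queue URL is a placeholder, and the dedup store is only sketched; in production you'd persist it):

# Sketch of a consumer for a hypothetical "completed-uploads" SQS queue.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/completed-uploads"

seen_ids = set()  # sketch only; persist message ids in a database instead

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for msg in resp.get("Messages", []):
        if msg["MessageId"] not in seen_ids:  # guard against redelivery
            seen_ids.add(msg["MessageId"])
            folder = msg["Body"]  # e.g. "42093156-5060156655634/"
            # Kick off the import here; retry the S3 GET if eventual
            # consistency means the object isn't visible yet.
            print("import product from", folder)
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])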
