Syncing multiple videos over network efficiently on iOS

I am trying to play multiple videos together in synchronisation (they are all the same length) on iOS. To make this efficient over the internet I transcoded these videos to HLS, but when I try to synchronise them using AVPlayer it turns out that I cannot control the source clock they use (that only works for local files) and I cannot control the rate (so I can't make minor corrections to bring the videos back into sync).
To work around this I currently download the flat mpg files and play them in sync using the source clock, but this is very bandwidth-intensive. Does anyone know how I can achieve better network efficiency while still keeping these files in sync?
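For the local-file approach, a common way to lock several players to one clock is to anchor them all to the same host time with AVPlayer's setRate(_:time:atHostTime:). Below is a minimal sketch under those assumptions; the clip names are hypothetical, and automaticallyWaitsToMinimizeStalling has to be disabled for the host-time anchor to take effect:

```swift
import AVFoundation

// Sketch: start several local (non-HLS) players on the same host-time anchor.
// "clipA.mp4" / "clipB.mp4" are hypothetical bundled or downloaded files.
let urls = [
    Bundle.main.url(forResource: "clipA", withExtension: "mp4")!,
    Bundle.main.url(forResource: "clipB", withExtension: "mp4")!
]

let players = urls.map { url -> AVPlayer in
    let player = AVPlayer(url: url)
    // Required before setRate(_:time:atHostTime:) can pin playback to a host time.
    player.automaticallyWaitsToMinimizeStalling = false
    return player
}

// Preroll so each player has media buffered, then start them all
// at the same host time roughly half a second in the future.
let group = DispatchGroup()
for player in players {
    group.enter()
    player.preroll(atRate: 1.0) { _ in group.leave() }
}

group.notify(queue: .main) {
    let startTime = CMClockGetTime(CMClockGetHostTimeClock()) + CMTime(value: 1, timescale: 2)
    for player in players {
        player.setRate(1.0, time: .zero, atHostTime: startTime)
    }
}
```

If drift still creeps in over a long clip, one option is to reissue the same call with a slightly adjusted item time rather than nudging the rate.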

Related

ASP.NET Core Web App: Play and Download Video from Disk

I have a scenario here. I have to build an ASP.NET Core web application from which I can browse drives and directories (hosted on Windows) and list the available video files. This portion is working perfectly.
Now I have to do two things with those video files:
Need to download the files (this works only when the file size is under 2 GB)
Need to play the video on the page, whatever the format and size are.
Asking for your help. Thanks in advance.
If each video on your website is very small, say the total video size on a single page does not exceed 100 MB, or your internet speed is extremely fast, then I think your requirement is feasible.
What I said above applies only to the content contained in a single page. When there are not many video resources, page loading speed is mainly a question of bandwidth.
It is very simple to read files under a certain path, but I think network speed and the performance of IIS are the bottlenecks for reading and downloading.
Reasons and suggestions
1. The IIS web server just hosts the web application, and it can also store some small static resources such as website pictures, CSS and JS files.
IIS itself runs on Windows. When there are not many users, it can serve a 1 GB video normally, though loading may be slow. But what if there are many users, say 100 or 1000+ people visiting at the same time? The pressure on the web server would be very high, so your idea is only suitable for a small number of users, and I recommend that the video resources not be too large.
2. I suggest you use a third-party media service to store and serve the videos, for example Azure Media Services.
The reason is that if video resources under a path are served through the web server, video loading will be very slow, and the problem becomes especially obvious when files are large and many users access them.
With Azure Media Services, the startup and download speed of video playback will improve, and IIS, acting purely as a web server, will no longer bear the load of serving media.

Reducing bandwidth usage of VLC audio playback from SMB share

I'm looking for a way to reduce a Java-based media player's network bandwidth usage. During my research I found out that quality can be traded for lower data rates on streams with the transcode options. In my case the audio source is on a Samba network share (file based, WAV only), and I'm not sure whether the transcode settings apply to it.
The source of my problem is that our customer's work site has only a 50 Mbit connection to their datacenter, and many clients (~10) have to be able to play back these audio files simultaneously. There is no QoS, I guess, and the network is used for other purposes too. Caching is not an option (it's a long story that I can't tell).
I would be really grateful if someone could clarify this for me. Can I lower the bandwidth requirements in this scenario by lowering the quality with transcoding?
I'm open to other suggestions too, if you have an idea.

Streaming audio distorted when played in mobile Safari on iOS

We are hosting MP3 files on AWS S3. We have built a web app (in React) that plays back the MP3s. However, the audio sometimes becomes distorted when played in Safari on iOS. The strange thing is that this does not happen all the time.
Here is the original file (sometimes distorted): https://sayyit-prod-static-assets.s3.amazonaws.com/static/audio/Darrin+M.+McMahon.original.mp3
Here is how the file sounds when distorted: https://sayyit-prod-static-assets.s3.amazonaws.com/static/audio/WhatsApp+Video+2019-09-26+at+11.06.49+AM.mp4
Now, this distortion only happens when playing through our app. When we provide a direct link to S3 (like I did above), it works. The distortion also happens when linking directly to S3 from within our app.
Here are some ideas:
The mp3 file is broken
When going directly to the S3 link, it downloads entirely, which seems to allow the mp3 file to play perfectly
Any help would be greatly appreciated.
The sample rate on this MP3 file is 16 kHz. That's very low (not abnormal for voice), but also uncharacteristically low for a 128 kbps MP3. I suspect there's a bug in the resampler (the iPhone hardware is locked to 48 kHz anyway), or that you're hitting an edge-case bug in the decoder.
I'd recommend you stop using MP3 and solve a few things at once. While MP3 is of acceptable quality, its quality for a given bitrate isn't as good as the alternatives. These days you should consider using Opus. It's supported on iOS if muxed into a CAF file, and it's extremely efficient. You could drop the bitrate down to 48 kbps for voice and still have excellent quality. And you'll bypass whatever resampling or decoding issue you're having now, all in one go.
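If you ever need to play the same asset natively on iOS (outside Safari), here is a minimal sketch of the Opus-in-CAF playback path, assuming the CAF was produced offline from the original MP3 at roughly 48 kbps and that the Opus-in-CAF support described above holds for your deployment target; the file name is hypothetical:

```swift
import AVFoundation

// Sketch: play a hypothetical Opus-in-CAF file ("voice_48k.caf") bundled with the app.
final class ClipPlayer {
    private var player: AVAudioPlayer?   // keep a strong reference while playing

    func playVoiceClip() throws {
        guard let url = Bundle.main.url(forResource: "voice_48k", withExtension: "caf") else {
            throw CocoaError(.fileNoSuchFile)
        }
        let audioPlayer = try AVAudioPlayer(contentsOf: url)
        audioPlayer.prepareToPlay()
        audioPlayer.play()
        player = audioPlayer
    }
}
```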

Bundling mp4 files with the app, letting users play videos straight from the device instead of downloading them from the cloud

I want my app to have a bunch of 30-second mp4 clips. I want to ship these clips with the app and not have users download them from the cloud.
Each of my clips is around 5 MB and I expect to have a lot of them.
Is there a way to compress them to reduce the app download size? (The 5 MB size is already after all the codecs etc.) I need an iOS solution for this.
MP4 is already very compressed, so there isn't a way to compress it further. That's why zipping mp4s barely changes their size.
You have two options:
1) Include whichever ones the user needs first and download the rest, hopefully before they're needed.
2) If you absolutely have to have them all in the app you could reduce the resolution and/or encode at a lower bitrate.
If you go with option 2, you could still download higher quality ones from the cloud in the background and use those if available, but default to the lower quality ones if not.
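A minimal sketch of that fallback, assuming hypothetical clip names and a hypothetical remote URL for the higher-quality versions:

```swift
import Foundation

// Sketch: prefer a higher-quality clip downloaded in the background,
// otherwise fall back to the low-bitrate copy bundled with the app.
struct ClipStore {
    // Hypothetical base URL for the higher-quality versions.
    let remoteBase = URL(string: "https://example.com/clips-hq")!

    private func downloadedURL(for name: String) -> URL {
        let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
        return caches.appendingPathComponent("\(name).mp4")
    }

    /// Returns the best locally available copy of a clip.
    func url(for name: String) -> URL? {
        let highQuality = downloadedURL(for: name)
        if FileManager.default.fileExists(atPath: highQuality.path) {
            return highQuality                                            // downloaded high-quality copy
        }
        return Bundle.main.url(forResource: name, withExtension: "mp4")   // bundled low-bitrate copy
    }

    /// Fetches the higher-quality version in the background for next time.
    func prefetch(_ name: String) {
        let destination = downloadedURL(for: name)
        let remote = remoteBase.appendingPathComponent("\(name).mp4")
        URLSession.shared.downloadTask(with: remote) { tempURL, _, _ in
            guard let tempURL = tempURL else { return }
            try? FileManager.default.removeItem(at: destination)   // replace any stale copy
            try? FileManager.default.moveItem(at: tempURL, to: destination)
        }.resume()
    }
}
```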

Creating synchronized stereo videos using webcams

I am using OpenCV to capture video streams from two USB webcams (Microsoft LifeCam Studio) in Ubuntu 14.04. I am using very simple VideoCapture code (source here) and am trying to at least view two videos that are synchronized against each other.
I used Android stopwatch apps (UltraChron Stopwatch Lite and Stopwatch Timer) on my Samsung Galaxy S3 mini and found that my viewed images are out of sync (they show different times on the stopwatch).
The frames are in sync maybe 50% of the time. The frame time differences I get range from 0 to about 300 ms, with an average of about 120 ms. The amount of timeout used seems to have very little effect on sync (it's the same for 1000 ms or 2000 ms). I tried minimizing the timeout (waitKey(1), needed for the OpenCV loop to work at all) and reading only every Xth iteration of the loop; this gave worse results than waitKey(1000). I run in Full HD, but lowering the resolution to 640x480 had no effect.
An ideal result would be a 100% synchronized stereo video stream at X FPS. As I said, so far I use OpenCV to view the video frames, but I don't mind using anything else to get the desired result (it can be on Windows too).
Thanks for help in advance!
EDIT: In my search for low-cost hardware I found that it is probably possible to do some commodity hardware hacking (link here) and inject a single clock signal into multiple camera modules simultaneously to get the desired sync. The guy who did that seems to have developed his GENLOCKed camera board (called NerdCam1) and even a synced stereo camera board that he now sells for about €200.
However, I have almost zero hardware-hacking ability. Also, I am not sure whether such clock injection is possible for resolutions above the NTSC/PAL standard (as it seems to be an "analog" solution?). Also, I would prefer a variable-baseline option where both cameras are not soldered onto a single board.
It is not possible to stereo-sync two common webcams, because webcams lack the external trigger feature that lets one precisely sync multiple cameras using a common trigger signal. Such triggering can be done in either software or hardware, but the latter gives better precision. Webcams only support a "free-running" mode and let you stream at whatever FPS they support, but you cannot influence when exactly the frame integration/exposure happens.
There are USB cameras with a dedicated external trigger feature (usually scientific cameras like Point Grey); they are more expensive than webcams (starting at about $300 apiece) but can be synced. If you really are on a low budget, you can try hacking the PS3 Eye camera to get the external trigger feature.
