Prevent video quality from decreasing in Twilio Video

Twilio video quality (resolution, i.e. the height/width of the video) decreases while connected and never improves.
Test conditions
Devices: a PC and a mobile app
The mobile device sends video and the PC receives it.
They stay connected for several minutes without going idle.
The mobile device is moved occasionally.
Network condition: stable office network
Result
Video quality gets worse: the height and width of the video decrease.
You can see that the frame width and frame height have become smaller while the frame rate stays the same.
In our conditions the network is stable, so there is no reason for the resolution to drop.
If the quality does degrade, I would like it to recover once conditions improve.
Also, if conditions get worse, I would prefer to reduce the frame rate rather than the resolution of the video.
How can I configure it that way?
Current implementation with Flutter
final connectOptions = ConnectOptions(
  token,
  roomName: _channelName,
  preferredAudioCodecs: [OpusCodec()],
  audioTracks: [LocalAudioTrack(true)],
  dataTracks: [LocalDataTrack()],
  videoTracks: [LocalVideoTrack(true, _cameraCapturer)],
  enableNetworkQuality: true,
  networkQualityConfiguration: NetworkQualityConfiguration(
    remote: NetworkQualityVerbosity.NETWORK_QUALITY_VERBOSITY_MINIMAL,
  ),
  enableDominantSpeaker: true,
);
_room = await TwilioProgrammableVideo.connect(connectOptions);

Twilio developer evangelist here.
Also, if conditions get worse, I would prefer to reduce the frame rate rather than the resolution of the video.
This is not necessarily true. From the Twilio docs on developing high quality video applications:
Frame-rate and resolution are the two main capture constraints that affect video fidelity. When the video source is a camera showing people or moving objects, typically the perceptual quality is better at higher frame-rate. However, for screen-sharing, the resolution is typically more relevant.
To my knowledge, if a network connection is able to support higher resolution video, then WebRTC and Twilio Video will send the resolution that can be supported. However, the network connection is not the only variable when it comes to resolution; CPU usage is also a factor, and it matters particularly on mobile devices. Continuing from the docs:
You should try to set resolution and frame-rate to the minimum value required by your use-case. Over-dimensioning resolution and frame-rate will have a negative impact on the CPU and network consumption and may increase latency. In addition, remember that the resolution and frame-rate you specify as capture constraints are just hints for the client video engine. The actual resolution and frame-rate may decrease if CPU overuse is detected or if the network capacity is not enough for the required traffic.
I see you have included the network quality API in your code. I'd recommend taking readings from that, as well as investigating CPU profiling in your application to see if you can get the resolution stable on your device.
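To make that concrete, here is a minimal sketch of what logging those readings could look like with the twilio_programmable_video Flutter plugin used above. It assumes the plugin exposes the SDK's network quality callbacks as onNetworkQualityLevelChanged streams on the participants; the stream and field names below are assumptions, so check the plugin's API docs for the exact surface. Remote levels are only reported because the NetworkQualityConfiguration above sets the remote verbosity to at least MINIMAL.
// Sketch only: the stream/event names are assumptions based on how the
// plugin mirrors the native SDK callbacks; verify them against the docs.
void _logNetworkQuality(Room room) {
  room.onParticipantConnected.listen((event) {
    final participant = event.remoteParticipant;
    participant.onNetworkQualityLevelChanged.listen((nqEvent) {
      // Correlate these 0-5 quality levels with the frame width/height
      // values you are observing on the receiving side.
      print('Network quality of ${participant.identity}: '
          '${nqEvent.networkQualityLevel}');
    });
  });
}
Calling something like _logNetworkQuality(_room) right after connect(), together with CPU profiling on the sending device, should tell you whether the resolution drops line up with network quality changes or with CPU pressure.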

Related

How to reduce VLCJ memory and CPU consumption when playing HLS?

My JavaFX application has 6 small windows and one large one. Each plays HLS using VLCJ.
From time to time the picture freezes in some of the windows, so I want to somehow reduce the players' resource consumption on the PC.
How can I do this?
In the 6 small windows I don't need sound. If I can turn it off with a parameter, will that affect memory or CPU consumption?
At the moment I remove the sound there with --aout=directsound and the mute() function, but perhaps the audio is still processed by the players and the consumption is not reduced.
Since these are small windows, high-quality content does not need to be displayed there. Is it possible to reduce the quality of the content in the player? Would this help, and how can it be done?
I tried the :adaptive-logic=highest playback parameter, but it didn't help, most likely because the content has only one (high) quality level.
The parameters for the player are listed here: https://wiki.videolan.org/VLC_command-line_help/.
But there are a lot of them and I don't understand how they work, so I'm asking for help.
Maybe I can skip some frames, which would not be very noticeable but could help?
Update:
Now I'm trying these options, but I don't notice much change...
--no-audio
--postproc-q=1
--ffmpeg-hw
--avcodec-skip-frame=1
--avcodec-skip-idct=1
--avcodec-skiploopfilter=1
--avcodec-hw=any
--sout-avcodec-hurry-up
--no-sout-avcodec-interlace-me

The YouTube iframe API serves streams at resolutions that will not play back over cellular connections

I hope that the YouTube API team will address the following issue.
YouTube has disabled the ability to request a specific size using the setPlaybackQuality() method.
If I am correct, the YouTube iframe API automatically determines the appropriate resolution / size to serve up (small, medium, large, hd720 etc) depending upon the pixel dimensions of the embedded player.
This is a huge problem over cellular networks.
AT&T, Verizon, TMobile and others have all begun to throttle video streams and / or disable playback all together in some cases for streams above 480p.
In our case, we are seeing 1.5 - 2 minutes of buffering before playback in the embedded YouTube player at widths above 360px.
In portrait mode this limit would at least be somewhat acceptable, but in full-screen landscape on mobile, the preferred method for watching video, YouTube changes the quality automatically and in most cases serves up HD720p which almost immediately becomes stuck in buffering mode over cellular connections.
We need the ability to request a specific resolution, and/or we need YouTube to serve up video at 480p over cellular connections.
The suggestedQuality parameter of player.setPlaybackQuality(suggestedQuality:String):Void determines the appropriate playback quality not only from the pixel dimensions of the embedded player; it actually varies for different users, videos, systems and other playback conditions.
Setting the parameter value to default instructs YouTube to select the most appropriate playback quality.
YouTube selects the appropriate playback quality. This setting effectively reverts the quality level to the default state and nullifies any previous efforts to set playback quality using the cueVideoById, loadVideoById or setPlaybackQuality functions.
I assume this also takes mobile connections into consideration, but if you believe there is an issue on this API feature, you can contact YouTube here.

Does AVPlayer auto-adjust quality when playing an m3u8 playlist?

After spending some time setting up a transcoding process on AWS, I am finding that the loading times for videos have not been lowered as expected with HLS (m3u8).
It seems that if I am using AVPlayer directly, without AVPlayerViewController, I may need to manage the video stream quality myself? My understanding was that with an m3u8, this would be handled automatically and the best quality would be used depending on network conditions / device / etc.
So far it seems that the loading times are the same, if not slightly worse, than without the m3u8 when AVPlayer is used as is.
To better understand what's going on I've been trying out a few things.
1) While doing this has worked to reduce loading times, I would prefer to do a bit more than just lower it all the way when not on Wi-Fi:
self.player?.currentItem?.preferredPeakBitRate = 1
This seems to give me a pretty low quality video but it loads pretty quickly. I have yet to figure out how to detect the actual bitrate being used though (since setting this value has improved loading times dramatically, I am going to assume AVPlayer does not handle the adjustments on its own?).
2) Also, I haven't had any luck with the following (it causes an infinite spinner, even with preferredPeakBitRate set to 1):
self.player.automaticallyWaitsToMinimizeStalling = false
3) I am open to using a third-party library that might handle this; I found something called VKVideoPlayer that might do some of it?
Thanks
It's possible in iOS 8 and onwards.
The following is copied from Apple's documentation:
The desired limit, in bits per second, of network bandwidth consumption for this item.
Swift: var preferredPeakBitRate: Double
Objective-C: @property(nonatomic) double preferredPeakBitRate
Set preferredPeakBitRate to non-zero to indicate that the player should attempt to limit item playback to that bit rate, expressed in bits per second.
If network bandwidth consumption cannot be lowered to meet the preferredPeakBitRate, it will be reduced as much as possible while continuing to play the item.

iOS - how can I programmatically calculate the time limit for recording audio/video given a known file size limit?

I have googled a lot, but it seems like no one has done this before on iOS.
My issue is that my server only allows the client to upload video / audio / image files of limited size (e.g. 30 MB for video, 1 MB for audio). Given that limit, I want to figure out how much time users are allowed to record audio / video. The calculation must take different devices into account; for example, the iPad 3 has a better camera than the iPad 2, so it will have less recording time available.
I am wondering if we can programmatically calculate the time limit based on the known file size limit.
Thanks,
Luan.
When working with large amounts of data such as video and audio, compression should play a part in your calculation.
Compression results can vary greatly depending on what you are recording and as a result it would be unrealistic to try to forecast a certain maximum duration.
I can think of two options:
Predetermine very restrictive recording times per device (I believe it is possible in iOS to tell an iPad 3 from an iPad 2)
Figure out a way to re-encode a smaller part of the video until it is within limits.
Best of luck!
Cantgetright has described perfectly why this is hard.
What you really care about are the camera's megapixels (resolution), the worst-case storage size of one second of video, and how many free megabytes are on the phone as well.
If you know most of these elements, time can be the constraint by which you determine the last one.
Always overestimate size to guarantee it will work no matter what. People don't know how big 5 seconds of video is on their iDevices anyway, so you can be stingy with the allotted time.
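For the arithmetic itself, once you have (or conservatively assume) a bitrate for the device's capture settings, the time limit is roughly: seconds = allowed bytes × 8 / (video bitrate + audio bitrate), minus some headroom for container overhead. A small sketch of that calculation, written in Dart to match the code style elsewhere on this page; the bitrate numbers are made-up examples, and in a real iOS app you would take them from your actual capture/export preset (or cap the encoder bitrate explicitly so the estimate holds):
// Rough estimate only: real encoders are variable-bitrate, so keep a
// safety margin and overestimate the per-second size, as advised above.
double maxRecordingSeconds({
  required int maxFileBytes,    // e.g. the server's 30 MB limit
  required int videoBitsPerSec, // from the device's capture/export preset
  required int audioBitsPerSec,
  double safetyFactor = 0.9,    // headroom for container overhead / VBR spikes
}) {
  final totalBitsPerSec = videoBitsPerSec + audioBitsPerSec;
  return (maxFileBytes * 8 * safetyFactor) / totalBitsPerSec;
}

void main() {
  // Hypothetical numbers: 30 MB cap, ~4 Mbit/s video, 64 kbit/s audio.
  final seconds = maxRecordingSeconds(
    maxFileBytes: 30 * 1024 * 1024,
    videoBitsPerSec: 4000000,
    audioBitsPerSec: 64000,
  );
  print('Allow roughly ${seconds.toStringAsFixed(0)} s of recording'); // ~56 s
}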

In image processing, what is real time?

In image processing applications, what is considered real time? Is 33 fps real time? Is 20 fps real time? If 33 and 20 fps are considered real time, then are 1 or 2 fps also real time?
Can anyone shed some light on this?
In my experience, it's a pretty vague term. Often, what is meant is that the algorithm will run at the rate of the source (e.g. a camera) supplying the images; however, I would prefer to state this explicitly ("the algorithm can process images at the frame rate of the camera").
Real time image processing = produce output simultaneously with the input.
The input may be 25 fps, but you may choose to process 1 of every 5 frames (that makes 5 fps processing) and your application is still real time.
TV streaming software: all the frames are processed.
Security application and the input is CCTV security cams: you may choose to skip some frames to fit the performance.
3d game or simulation: fps changes depending on the current scene.
And they are all real time.
Strictly speaking, I would say real-time means that the application is generating images based on user input as it occurs, e.g. a mouse movement which changes the facing of an avatar.
How successful it is at this task - 1 fps, 10 fps, 100 fps, etc - is actually another question.
Real-time describes an approach, not a performance metric.
If however you ask what is the slowest fps which passes as usable by a human, the answer is about 15, I think.
I think it depends on what the real-time application is. If the app is showing a slideshow with 1 picture every 3 seconds, and the app can process each picture within those 3 seconds and show it, then it is real-time processing.
If the movie is 29.97 frames per second, and the app can process all 29.97 frames within the second, then it is also real time.
For example, if an app can take the movie from a VCR or a cable box's analog output, compress it into 29.97-frames-per-second video, and send all of that to a remote location for another person to watch, then it is real-time processing.
(Hard) Real time is when an outcome has no value when delivered too early or too late.
Any FPS is real time provided that displayed frames represent what should be displayed at the very instant they are displayed.
The notion of real-time display is not really tied to a specific frame rate - it could be defined as the minimum frame rate at which movement is perceived as being continuous. So for slow moving objects in a visual frame (e.g. ships in a harbour, or stars in the night sky) a relatively slow frame rate might suffice, whereas for rapid movement (e.g. a racing car simulator) a much higher frame rate would be needed.
There is also a secondary consideration of latency. A real-time display must have sufficiently low latency in relation to other events (e.g. behaviour of a real-time simulation) that there is no perceptible lag in display updates.
That's not actually an easy question (even without taking into account differences between individuals).
Wikipedia has a good article explaining why. For what it's worth, I think cinema films run at 24fps so, if you're happy with that, that's what I'd consider realtime.
It depends on what exactly you are trying to do. For some purposes 1 fps or even 2 spf (seconds per frame) could be considered real time. For others that's way too slow...
That said, real-time means that it takes as long (or less) to process x frames as it would take to just present those x frames.
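That criterion is easy to turn into a quick check: per frame, the average processing time must not exceed the source's frame interval, and if it does you can drop frames (as in the CCTV example above) until the output still keeps up with the wall clock. A rough sketch, in Dart, with made-up numbers:
import 'dart:math';

/// Real time by the definition above: average processing time per frame
/// is no longer than the source's frame interval.
bool isRealTime({
  required double sourceFps,
  required Duration avgProcessingTimePerFrame,
}) {
  final frameIntervalUs = 1e6 / sourceFps;
  return avgProcessingTimePerFrame.inMicroseconds <= frameIntervalUs;
}

/// If processing is too slow, process only every Nth frame so the output
/// still tracks the wall clock (frame skipping / decimation).
int decimationFactor({
  required double sourceFps,
  required Duration avgProcessingTimePerFrame,
}) {
  final perFrameSeconds = avgProcessingTimePerFrame.inMicroseconds / 1e6;
  return max(1, (perFrameSeconds * sourceFps).ceil());
}

void main() {
  // Example: a 25 fps camera and 150 ms of processing per frame.
  const processing = Duration(milliseconds: 150);
  print(isRealTime(sourceFps: 25, avgProcessingTimePerFrame: processing));       // false
  print(decimationFactor(sourceFps: 25, avgProcessingTimePerFrame: processing)); // 4 -> process every 4th frame
}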
It depends.
automatic aircraft cannon - 1000 fps
monitoring - 10 - 15 fps
authentication - 1 fps
medical devices - 1 fph
I guess the term is used with different meanings in different contexts. In industrial image processing, real-time processing is usually the opposite of offline processing. In offline processing applications, you record images (many of them) and process them at a later time. In real-time processing, the system that acquires the images also processes them, at the same time, so the processing frame rate must not be lower than the acquisition frame rate.
Real-time means your implementation is fast enough to meet some deadline. The deadline is part of your system's specification. If it's an interactive UI and the users are not too picky, 15Hz update can be OK, although it can feel laggy. If you're using it to drive a car along the motorway 30Hz is about right. If it's a missile, well, maybe 100Hz?
