CPU load of streaming vs file downloading when routing data - youtube

I'm using a Raspberry Pi 2 to route wifi-eth connections. So from the eth side I have a computer that connects to the internet through the Pi's wifi connection. On the Raspberry I started htop to monitor the CPU load, then on the computer I started Chrome and played a 20-minute 1080p video. The CPU load never seemed to go beyond 5%. After that I closed the YouTube tab and started downloading a 5 GB binary file from the first row here (https://testdebit.info/). Well, I noticed that the CPU load was much higher, around 10%!
Any explanation of such a difference?

It has to do with compression and how video is encoded. A normal file can be compressed, but nothing like a video stream can.
A video stream can achieve very high compression ratios due to the predictable characteristics of video: the picture usually changes very little from one frame to the next. As such, an encoder sends a whole frame (I-frame) and then updates it with just the changes (P-frame). It's even possible to do backward prediction (B-frame). See the Wikipedia article on video compression picture types.
Yes, I hear your next unspoken question: doesn't more compression mean more CPU time to decompress? That's true for a lot of types of compression, such as that used by zip files. But since raw video is not very information dense over time, video codecs can reduce the amount of data sent with very little CPU cost. And since the Pi is only routing the traffic, not decoding it, its CPU load scales with the bytes it forwards per second: the compressed stream simply needs far less bandwidth than a download that saturates the link.
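To make the I-frame/P-frame idea concrete, here is a deliberately simplified C++ sketch. It is a hypothetical toy, nothing like a real codec (which adds motion compensation, transforms, and entropy coding on top): it stores one full frame and then only the pixels that changed.

#include <cstdint>
#include <cstdio>
#include <vector>

// A toy "P-frame": just the pixels that differ from the previous frame,
// stored as (index, new value) pairs.
struct Delta { uint32_t index; uint8_t value; };

std::vector<Delta> encode_delta(const std::vector<uint8_t>& prev,
                                const std::vector<uint8_t>& cur) {
    std::vector<Delta> changes;
    for (uint32_t i = 0; i < cur.size(); ++i)
        if (cur[i] != prev[i])
            changes.push_back({ i, cur[i] });
    return changes;
}

int main() {
    // Two mostly identical 1000-pixel "frames": only 10 pixels change.
    std::vector<uint8_t> frame1(1000, 128), frame2 = frame1;
    for (int i = 0; i < 10; ++i) frame2[i * 100] = 200;

    std::vector<Delta> delta = encode_delta(frame1, frame2);
    std::printf("full frame: %zu bytes, delta: %zu changed pixels\n",
                frame1.size(), delta.size());   // 1000 bytes vs 10 entries
    return 0;
}

The "I-frame" costs the full 1000 bytes, while the "P-frame" costs only the 10 changed pixels; that asymmetry is what lets a video stream move far fewer bytes than a plain file transfer.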
I hope this helps.

Related

Reducing bandwidth usage of vlc audio playback from smb share

I'm looking for a way to reduce a Java-based media player's network bandwidth usage. During my research I found out that quality can be traded for lower data rates on streams with the transcode options. In my case the audio source is on a Samba network share (file based, WAV only), and I'm not sure whether the transcode settings apply to it.
The source of my problem is that our customer's work site has only a 50 Mbit connection to their datacenter, and many clients (~10) have to be able to play back these audio files simultaneously. There is no QoS, I guess, and the network is used for other purposes too. Caching is not an option (it's a long story that I can't tell).
I would be really grateful if someone could clarify this for me: can I lower the bandwidth requirements in this scenario by lowering the quality with transcode?
I'm open to other suggestions too, if you have an idea.
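For scale, here is a quick back-of-the-envelope sketch in C++ (the CD-quality 44.1 kHz, 16-bit stereo format is an assumption; the question only says WAV). It estimates the raw, untranscoded bandwidth that ~10 simultaneous clients would draw from the 50 Mbit link:

#include <cstdio>

int main() {
    const double sample_rate = 44100.0;   // Hz, assumed CD quality
    const int channels = 2;               // assumed stereo
    const int bytes_per_sample = 2;       // 16-bit PCM
    const int clients = 10;

    // Bits per second for one uncompressed WAV stream.
    double bps = sample_rate * channels * bytes_per_sample * 8;
    std::printf("per client: %.2f Mbit/s\n", bps / 1e6);                     // ~1.41
    std::printf("%d clients: %.1f Mbit/s\n", clients, clients * bps / 1e6);  // ~14.1
    return 0;
}

Under those assumptions the raw WAV traffic is roughly 14 Mbit/s in total, so any transcoding scheme would need to beat that figure to be worth the added complexity.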

Web Audio API on iOS Memory crash

We are using Web Audio API to play and manipulate audio in a web app.
When trying to decode large MP3 files (around 5 MB) the memory usage spikes in Safari on the iPad, and if we load another file of similar size it simply crashes.
It seems like the Web Audio API is not really usable on the iPad unless we use small files.
Note that the same code works well in desktop Chrome; the Safari version does complain about high memory usage.
Does anybody know how to get around this issue, or what the memory limit is for playing audio files using Web Audio on an iPad?
Thanks!
Decoded audio files weigh a lot more in RAM than on disk. A single sample uses 4 bytes (32-bit float) per channel. That translates to about 230 MB of RAM for 10 minutes of stereo audio at a 48,000 Hz sample rate. One hour of audio at the same sample rate, in stereo, takes roughly 1.3 GB of RAM!
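The arithmetic behind those figures, as a minimal C++ sketch you can run:

#include <cstdio>

int main() {
    const double sample_rate = 48000.0;  // samples per second, per channel
    const int channels = 2;              // stereo
    const int bytes_per_sample = 4;      // 32-bit float

    double ten_min  = sample_rate * channels * bytes_per_sample * 600;
    double one_hour = sample_rate * channels * bytes_per_sample * 3600;

    std::printf("10 minutes: %.0f MB\n", ten_min  / 1e6);  // ~230 MB
    std::printf("1 hour:     %.2f GB\n", one_hour / 1e9);  // ~1.38 GB
    return 0;
}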
So, if you decode a lot of files, you can consume large amounts of RAM. My suggestion is to "undecode" files that you don't need: just drop the references to unneeded audio buffers so the garbage collector can free the memory.
You can also use mono audio files instead of stereo; that should cut memory usage in half.
Note that decoded audio is always resampled to the device's sample rate. This means that using source audio with a low sample rate won't help with memory usage.

Possible to stream video over 115kbps?

I need some advice from people experienced with streaming video.
I have a task to put together a system that takes video from RS-170 (composite) video cameras and displays it on an iPad. The catch is that no wireless (no Wi-Fi, no Bluetooth) is allowed; only a wired interface.
The physical I/O options on an iPad are apparently extremely limited, but I did manage to come across a company named Redpark that makes an RS232-to-Lightning cable. So my proposed solution is to have the video feeds go into a box with software that digitizes and encodes the video, and then sends it over RS232 to the iPad using that cable. The catch here is that the maximum bandwidth on that cable is 115 kbps.
My preliminary testing of this setup on a prototype system has been less than stellar so far. I set up two PCs, each with serial ports, and hooked them together with a null modem. I then set the baud rate of the ports to 115 kbps and attempted to stream a webcam video feed over the serial connection in real time using ffmpeg. The results weren't very encouraging, but I did at least manage to get some sort of image to show up.
I guess I need to play around with the ffmpeg encoding options some more. But I have to ask: am I wasting my time with this idea, or is what I'm asking here actually possible?
For the SDA LQ ("low quality") standard we encode H.264 MP4 (using x264) with a 128 kbps video track. The hardware decoding on the iPad can play it. It is at most 320x240, 30 fps video. The quality depends heavily on the material: for mostly static material it is watchable; if there is a lot of movement or lighting changes, you may not be able to make out much. You can check out some examples at the link. It is video game footage, but some of it may be comparable to your application.
Without knowing more about your requirements (resolution, framerate, type of material), it is difficult to say more. However, given the right material, it is definitely possible to do it and have it be watchable (for some definitions of watchable).
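To put those numbers in perspective, here is a small C++ sketch of the arithmetic. The raw bitrate follows from the answer's own settings (320x240, 30 fps, assuming YUV 4:2:0 at 12 bits per pixel); the 8N1 serial framing is an assumption about the link:

#include <cstdio>

int main() {
    // Raw video at 320x240, 30 fps, YUV 4:2:0 (12 bits per pixel),
    // versus the 128 kbps encoded track the answer describes.
    double raw_bps = 320.0 * 240 * 30 * 12;     // ~27.6 Mbit/s
    double ratio   = raw_bps / 128e3;           // ~216:1 compression

    // A 115,200 baud serial link with 8N1 framing spends 10 bits per
    // payload byte, so usable throughput is below the nominal rate.
    double usable_bps = 115200.0 / 10 * 8;      // ~92 kbit/s

    std::printf("raw video: %.1f Mbit/s (compression needed: %.0f:1)\n",
                raw_bps / 1e6, ratio);
    std::printf("usable serial throughput: %.0f kbit/s\n", usable_bps / 1e3);
    return 0;
}

If the cable really tops out around 115 kbps, the encoder would have to stay noticeably below the 128 kbps figure, closer to ~90 kbps, to leave room for framing overhead.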

VLC Plugin Memory consumption

I have used the VLC plugin (VLC web plugin 2.1.3.0) in Firefox to display the incoming live streams from my server in the browser, and I need to display 16 channels on one web page. But when I play more than 10 channels at the same time, the processor hits 100% and the video starts to break up. I checked the plugin's memory use in the task manager and saw that around 45 MB of memory is dedicated to each video (so for 10 channels: 10 * 45 = 450 MB).
Kindly, do you have any method to reduce the consumption of the VLC plugin, to allow displaying 16 channels at the same time?
Best regards,
There is no way to do that, really. You could probably save a few megabytes by disabling audio decoding if some of your 16 streams carry audio tracks you don't need. Apart from that, 45 MB per stream is quite reasonable for VLC playback, and you won't be able to go much below that unless you reduce the video dimensions.
Additionally, your problem is probably not the use of half a gigabyte of memory (Chrome and Firefox easily manage to use that much by themselves if you open a few tabs), but that VLC exceeds your CPU capacity. Make sure not to use windowless playback, since this is less efficient than the normal windowed mode.
VLC 2.2 will improve the performance of the web plugins on Windows by adding the hardware acceleration known from the standalone application.

Fastest way to get frames from webcam

I have a wee bit of a problem developing one of my programs in C++ (Visual Studio). Right now I'm struggling with connecting multiple webcams (via USB cables), creating a separate thread for each of them to capture frames, and a separate thread for processing the images.
I use OpenCV to process frames, but the problem is that I don't reach the webcam's full capability (it supports 25 fps, I get only 18). Is there some library I could use to grab frames, then process them with OpenCV, so that frames would be captured faster?
I researched a bit, and the most popular way is to use DirectShow to get frames and OpenCV to process them.
Do you agree? Or do you have another solution?
I wouldn't be offended by some links :)
DirectShow is only used if you open your capture using the CV_CAP_DSHOW flag, like:
VideoCapture capture( CV_CAP_DSHOW + 0 ); // 0, 1, 2: your cam id there
(without it, it defaults to VFW).
The capture already runs in a separate thread, so wrapping it with more threads won't give you any gain.
Another obstacle with multiple cams is USB bandwidth: if you have ports on the back and the front of your machine, don't plug all your cams into the same port/controller, or you will just saturate it.
OpenCV uses DirectShow. Using DirectShow (the primary video capture API on Windows) directly will obviously get you equal or better performance (and even more likely so if OpenCV is set to use Video for Windows). USB cams typically hit the USB bandwidth limit and hence a frame rate limit; using DirectShow to capture in compressed formats, or in formats with fewer bits per pixel, is the way to reach higher frame rates within the same USB bandwidth.
Another typical problem causing low frame rates is slow synchronous processing delaying the capture. You can typically identify this by putting trivial processing into the same capture loop and seeing a higher FPS compared to processing-enabled operation.
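A minimal sketch of that diagnostic, assuming OpenCV 2.x as in the snippet above (the per-frame work is a stand-in; flip do_processing and compare the measured FPS):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture capture( CV_CAP_DSHOW + 0 );  // cam id 0 via DirectShow
    if (!capture.isOpened()) return 1;

    const bool do_processing = true;   // flip to false and compare the FPS
    const int frames = 200;
    cv::Mat frame, gray;

    double start = (double)cv::getTickCount();
    for (int i = 0; i < frames; ++i) {
        capture >> frame;              // blocking grab from the camera
        if (frame.empty()) break;
        if (do_processing) {
            // Stand-in for your real per-frame work.
            cv::cvtColor(frame, gray, CV_BGR2GRAY);
            cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2.0);
        }
    }
    double secs = ((double)cv::getTickCount() - start) / cv::getTickFrequency();
    std::printf("%.1f fps %s processing\n", frames / secs,
                do_processing ? "with" : "without");
    return 0;
}

If the FPS without processing is close to the camera's rated 25 fps but drops when processing is enabled, the capture is being delayed by your processing, not by the capture API.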
