What is the optimal usage of Bandwidth for a Signaling server? - licode

We are using Licode for an audio/video conferencing solution. For a conference room with 15 members (audio only, without video), the bandwidth usage of the signaling server is in the range of 4 MB/sec (out) and 750 KB/sec (in). Is this expected, regular usage, or should it be less?
Also, is there a baseline we can compare against?
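One way to sanity-check those numbers is a back-of-envelope estimate of what an audio relay would use, assuming the reported traffic is dominated by forwarded audio rather than pure signaling messages. The sketch below is not taken from Licode; the ~40 kbps per-stream bitrate and the 20% packet-overhead allowance are assumptions for illustration.

```typescript
// Back-of-envelope bandwidth estimate for a server that forwards every
// participant's audio stream to every other participant (SFU-style relay).
// The bitrate and overhead figures are illustrative assumptions, not values
// taken from Licode or from the measurements above.
function estimateAudioRelayBandwidth(
  members: number,
  bitrateKbps: number,   // per-stream audio bitrate (Opus is often ~32-64 kbps)
  overheadFactor = 1.2   // rough allowance for RTP/UDP/IP and RTCP overhead
): { inKBps: number; outKBps: number } {
  const perStreamKBps = (bitrateKbps * overheadFactor) / 8;
  return {
    inKBps: members * perStreamKBps,                  // one incoming stream per member
    outKBps: members * (members - 1) * perStreamKBps, // each member receives all others
  };
}

console.log(estimateAudioRelayBandwidth(15, 40));
// ≈ { inKBps: 90, outKBps: 1260 }; compare with the observed 750 KB/s in and 4 MB/s out
```

If the measured traffic is several times such an estimate, the difference may come from higher negotiated bitrates, retransmissions, or signaling and keep-alive traffic on top of the media.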

Related

Web Audio API on iOS Memory crash

We are using Web Audio API to play and manipulate audio in a web app.
When trying to decode large MP3 files (around 5 MB), the memory usage spikes upwards in Safari on iPad, and if we load another file of a similar size it will simply crash.
It seems like Web Audio API is not really usable when running on the iPad unless we use small files.
Note that the same code works well in the desktop version of Chrome; the desktop version of Safari does complain about high memory usage.
Does anybody know how to get around this issue, or what the memory limit is for playing audio files using Web Audio on an iPad?
Thanks!
Decoded audio files weigh a lot more in RAM than on disk. A single sample uses 4 bytes (a 32-bit float). This translates to 230 MB of RAM for 10 minutes of audio at a 48,000 Hz sample rate in stereo. One hour of audio at the same sample rate in stereo will take ~1.3 GB of RAM!
So, if you decode a lot of files, you can consume a large amount of RAM. My suggestion is to "undecode" files that you don't need (just "forget" unneeded audio buffers, so the garbage collector can free the memory).
You can also use mono audio files instead of stereo, that should reduce memory usage by half.
Note that decoded audio files are always resampled to the device's sample rate. This means that using audio with a low sample rate won't help with memory usage.
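As a rough illustration of the arithmetic above, here is a minimal sketch that estimates the decoded size of a buffer and drops references to buffers that are no longer needed so they can be garbage-collected (the buffer map and its keys are hypothetical, not part of any particular app):

```typescript
// Rough size of a decoded AudioBuffer: one 32-bit float (4 bytes) per sample per channel.
function decodedSizeBytes(durationSec: number, sampleRate: number, channels: number): number {
  return durationSec * sampleRate * channels * 4;
}

console.log(decodedSizeBytes(600, 48000, 2) / 1e6); // ≈ 230 MB for 10 minutes of stereo

// Keep decoded buffers in a map and "forget" the ones you no longer need,
// so the garbage collector can reclaim the memory.
const buffers = new Map<string, AudioBuffer>();

function releaseBuffer(name: string): void {
  buffers.delete(name); // once no other references remain, the memory can be freed
}
```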

CPU load of streaming vs file downloading when routing data

I'm using a Raspberry Pi 2 to route Wi-Fi-to-Ethernet connections. So on the Ethernet side I have a computer that connects to the internet through the Pi's Wi-Fi connection. On the Raspberry Pi I started htop to monitor the CPU load, then on the computer I started Chrome and played a 20-minute 1080p video. The CPU load didn't seem to go beyond 5%. After that I closed the YouTube tab and started downloading a 5 GB binary file from the first row here (https://testdebit.info/). Well, I noticed that the CPU load was much higher, around 10%!
Any explanation of such a difference?
It has to do with compression and how video is encoded. A normal file can be compressed, but nothing like a video stream can.
A video stream can achieve very high compression due to the predictable characteristics of video, e.g. the picture doesn't change much from one frame to the next. As such, video will send a whole frame (I-frame) and then update it with just the changes (P-frames). It's even possible to do backward prediction (B-frames). Here's a Wikipedia reference.
Yes, I hear your next unspoken question: doesn't more compression mean more CPU time to decompress? That's true for a lot of types of compression, such as that used by zip files. But since raw video is not very information-dense over time, you have compression techniques that, in essence, reduce the amount of data you send with very little CPU usage.
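To put rough numbers on the routing side (a sketch with assumed bitrates, not measurements): a compressed 1080p stream might average only a few Mbps, while a large file download tries to fill the link, so the Pi has to forward several times as many packets per second.

```typescript
// Packets per second the router must forward at a given throughput.
// The bitrates used below are assumptions for illustration: a compressed 1080p
// stream of ~5 Mbps vs. a download filling a ~40 Mbps Wi-Fi link.
function packetsPerSecond(throughputMbps: number, packetBytes = 1500): number {
  return (throughputMbps * 1e6) / 8 / packetBytes;
}

console.log(packetsPerSecond(5));  // ≈ 417 packets/s for the video stream
console.log(packetsPerSecond(40)); // ≈ 3333 packets/s for the saturating download
```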
I hope this helps.

How can I estimate the power consumption of a BLE module?

I'm writing an iOS app for a device with a BLE module that advertises a few bytes of data on a consistent basis while it's connected. We are trying to estimate the power consumption of the BLE module so we can estimate the battery life for the device. I've scoured SO and Google looking for the appropriate way to estimate this, but I'm coming up empty. Is there a way to take the number of bytes being sent, multiply it by the frequency with which the data is sent, and come up with a rough approximation of power consumption?
A typical BLE SoC (i.e. an all-in-one application + radio chip) consumes:
A few hundred nA while in deep sleep,
2 to 10 µA while an RTC tracks time (needed between radio events while advertising or connected),
10 to 30 mA while the CPU or radio runs (computing data, TX, RX). RX and TX power consumption is roughly the same.
The life of a BLE peripheral basically consists of 3 main states:
Be idle (not advertising, not connected). Most people will say your device is off. Unless it has a physical power switch, it still consumes a few hundred nanoamps, though.
Advertise (before a connection takes place). The peripheral needs to be running approximately 5 ms every 50 ms. This is when your device actually uses the most power, because advertising requires sending many packets, frequently. Average consumption is in the 1-10 mA range.
Be connected. Here, consumption is application-dependent. If the application is mostly idle, the peripheral is required to wake up periodically and must send a packet each time in order to keep the connection alive. Even if the peripheral has nothing useful to send, an empty packet is still sent. Side effect: that means low-duty-cycle applications basically transmit their packets for free.
So, to actually answer your question:
The length of your payload is not a problem (as long as you keep your packets short): we're talking about transmitting for 1 µs more per bit, while the rest of the handling (waking up, receiving the master's packet, etc.) keeps us awake for at least 200 µs;
What you actually call "consistent" is the key point. Is it 5 Hz? 200 Hz? 3 kHz?
Let's say we send data at a 5 Hz rate. The estimate will be around 5 connection events every second, roughly 2 ms of CPU + radio per connection event, so 10 ms of running time every second. Average consumption: roughly 200 µA (0.01 * 20 mA + 0.99 * 5 µA); see the worked sketch after the list below.
This calculation does not take some parameters into account, though:
You should add the consumption of your sensors (a gyro/accelerometer can eat a few mA),
You should consider on-board communication (I2C, SPI, etc.),
If your design actually uses two chips (one for the application talking to a radio module), consumption will roughly double.
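Here is that 5 Hz estimate written out as a small duty-cycle calculation. It is only a sketch: the 20 mA active current and 5 µA sleep current are the rough figures from the answer, and the 225 mAh coin-cell capacity is an assumption for illustration.

```typescript
// Duty-cycle average: active for (eventsPerSecond * activeMsPerEvent) each second
// at the active current, asleep at the sleep current the rest of the time.
function averageCurrentUA(
  eventsPerSecond: number,
  activeMsPerEvent: number,
  activeCurrentMA: number,
  sleepCurrentUA: number
): number {
  const dutyCycle = (eventsPerSecond * activeMsPerEvent) / 1000; // fraction of time active
  return dutyCycle * activeCurrentMA * 1000 + (1 - dutyCycle) * sleepCurrentUA;
}

const avgUA = averageCurrentUA(5, 2, 20, 5); // ≈ 205 µA, matching the ~200 µA estimate above
const hours = 225_000 / avgUA;               // 225 mAh coin cell (assumed capacity), in µAh
console.log(avgUA.toFixed(0), hours.toFixed(0)); // ≈ 205 µA, ≈ 1098 hours (~46 days)
```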

Flurry / Google Analytics / Localytics bandwidth consumption on iOS

I'm choosing an analytics service for my iOS app. I want to track quite a lot of events, and the app I'm developing is going to be used outdoors, so there will be no Wi-Fi connection available, and even the cellular connectivity can be of poor quality.
Analytics is the only thing that requires network connectivity in my app. Recently I checked how much traffic it consumes, and it consumes much more than I expected: about 500 KB for Google Analytics and about 2 MB for Flurry, and that's just for a 2-minute session with a few hundred events. It seems very inefficient to me. (Flurry logs slightly more parameters, but definitely not 4 times more.)
I wonder: has anybody compared other popular analytics solutions for their bandwidth consumption? Which one is the slimmest?
Thank you
If you don't need real-time data (and you probably don't with an outdoor app), you can get the best network compression for Analytics by dispatching more hits at once to benefit from batching and compression. To do that, set the dispatch interval to 30 minutes. The maximum size of an uncompressed hit that Analytics will accept is about 8 KB, so you should be sending less than that. With compression, that would bring it down to ~25% of the original size for an individual hit, assuming mostly ASCII data. To generate 500 KB of data you would have to be sending a few hundred hits individually. With batching and compression the hits will shrink down more efficiently: usually a batch of 20 hits will compress to less than 10% of the uncompressed size, or about 800 bytes per hit at most. For further network savings, just send less data per event or fewer events.
By the way, Analytics has a rate limit of 60 tokens that are replenished at a rate of 1 hit every 2 seconds. If you are sending a few hundred events in a short period of time, your data is likely getting rate-limited.
https://developers.google.com/analytics/devguides/collection/ios/limits-quotas#ios_sdk
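A rough illustration of those numbers (a sketch, not a measurement: the ~2 KB uncompressed per-hit size is an assumption; the ~25% single-hit and ~10% batched compression ratios are the approximate figures from the answer):

```typescript
// Rough uplink estimate for N hits, sent one-by-one or batched, using the
// approximate ratios from the answer (~25% of original size for a single
// compressed hit, ~10% for a batch of ~20 hits). The ~2 KB uncompressed
// per-hit size is an assumption for illustration.
function estimateUploadKB(hits: number, uncompressedBytesPerHit: number, batched: boolean): number {
  const compressedFraction = batched ? 0.10 : 0.25;
  return (hits * uncompressedBytesPerHit * compressedFraction) / 1000;
}

console.log(estimateUploadKB(300, 2000, false)); // ≈ 150 KB sent hit-by-hit
console.log(estimateUploadKB(300, 2000, true));  // ≈ 60 KB with 30-minute batched dispatch
```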

VLC Plugin Memory consumption

I have used the VLC plugin (VLC web plugin 2.1.3.0) in Firefox to display live streams received from my server in the browser. I need to display 16 channels on one web page, but when I play more than 10 channels at the same time, I see that the processor is at 100% and some breaks appear in the video. I have checked the plugin's memory usage in the running tasks and found that around 45 MB of memory is dedicated to each video (so for 10 channels: 10 * 45 = 450 MB).
Kindly, do you have any method to reduce the consumption of the VLC plugin to allow displaying 16 channels at the same time?
best regards,
There is no way to do that correctly. You could probably save a few megabytes by disabling audio decoding if there are audio tracks in any of your 16 streams that you don't need. Apart from that, 45 MB per stream is quite reasonable for VLC playback, and you won't be able to go much below that unless you reduce the video dimensions.
Additionally, your problem is probably not the use of half a gigabyte of memory (Chrome and Firefox easily manage to use that much memory by themselves if you open a few tabs), but that VLC exceeds your CPU capacity. Make sure not to use windowless playback, since this is less efficient than the normal windowed mode.
VLC 2.2 will improve the performance of the web plugins on Windows by adding the hardware acceleration known from the standalone application.
