GStreamer-based RTSP server gradually allocates a lot of memory if left without requests

I am using a simple RTSP server based on Ullaakut/RTSPAllTheThings, built on top of GStreamer. It is configured to read video from a file and serve it over RTSP.
The server works in general, but if the viewer (that consumes the RTSP stream) is late to connect, the server gradually allocates some 3 GB of RAM, using about 50% of the CPU the whole time. As soon as this limit is reached, CPU usage drops to zero and there is no further memory growth. However, I must stop this growth at 1 GB or even earlier; 3 GB is way too much.
The pipeline that the server reports on startup is:
( appsrc name=mysrc ! decodebin ! timeoverlay halignment=left valignment=top shaded-background=true font-desc="Sans 10" ! clockoverlay halignment=right valignment=top shaded-background=true font-desc="Sans 10" ! videorate ! video/x-raw,framerate=12/1 ! capsfilter ! queue ! x264enc speed-preset=superfast ! rtph264pay name=pay0 pt=96 )
I tried adding max-size-bytes=512000000 to the queue, which I believe should limit the spike to 512 MB, but it has no effect: 3.3 GB are allocated anyway. My file input is set up as here. I have set the frame rate with the RTSP_FRAMERATE property. There are no other alterations that I think should affect anything.
I need to serve 32 streams from this server. With the current setup that would need around 128 GB of RAM!

I have switched to VLC as a streamer, which does not have these issues. Maybe GStreamer is great elsewhere; it looks much more configurable. Anyway, VLC works for me and does not have the issues described above.

Related

Why is LuaJIT's memory limited to 1-2 GB on 64-bit platforms?

On 64-bit platforms, LuaJIT allows only up to 1-2GB of data (not counting objects allocated with malloc). Where does this limitation come from, and why is this even less than on 32-bit platforms?
LuaJIT is designed to use 32-bit pointers. On x64 platforms the limit comes from the use of mmap and the MAP_32BIT flag.
MAP_32BIT (since Linux 2.4.20, 2.6):
Put the mapping into the first 2 Gigabytes of the process address space. This flag is supported only on x86-64, for 64-bit programs. It was added to allow thread stacks to be allocated somewhere in the first 2GB of memory, so as to improve context-switch performance on some early 64-bit processors.
Essentially, using this flag limits mappings to the first 31 bits of the address space, not the first 32 bits as the name suggests. Have a look here for a nice overview of the 1GB limit using MAP_32BIT in the Linux kernel.
Even if you could have more than 1GB, the LuaJIT author explains why this would be bad for performance:
A full GC takes 50% more time than the allocations themselves.
If the GC is enabled, it doubles the allocation time.
To simulate a real application, the links between objects are randomized in the third run. This doubles the GC time!
And that was just for 1GB! Now imagine using 8GB -- a full GC cycle would keep the CPU busy for a whopping 24 seconds!
Ok, so the normal mode is to use the incremental GC. But this just means the overhead is ~30% higher, it's mixed in between the allocations and it will evict the CPU cache every time. Basically your application will be dominated by the GC overhead and you'll begin to wonder why it's slow ....
tl;dr version: Don't try this at home. And the GC needs a rewrite (postponed to LuaJIT 2.1).
To summarize, the 1GB limit is a limitation of the Linux kernel and the LuaJIT garbage collector. This only applies to objects within the LuaJIT state and can be overcome by using malloc, which will allocate outside the lower 32-bit address space. Also, it's possible to use the x86 build on x64 in 32-bit mode and have access to the full 4GB.
Check out these links for more information:
How to get past 1gb memory limit of 64 bit LuaJIT on Linux?
LuaJIT x64 limited to 31 bit address space, even without MAP_32BIT restrictions?
LuaJIT strange memory limit
Digging out the craziest bug you never heard about from 2008: a linux threading regression
Due to a recent patch, LuaJIT's 2GB memory limit can be lifted.
To test, clone this repo and build with the LUAJIT_ENABLE_GC64 symbol defined:
msvcbuild.bat gc64
or add XCFLAGS+= -DLUAJIT_ENABLE_GC64 to the Makefile.
I used this code to test memory allocation:
local ffi = require("ffi")
local CHUNK_SIZE = 1 * 1024 * 1024 * 1024  -- 1 GB per chunk
local fraction_of_gb = CHUNK_SIZE / (1024 * 1024 * 1024)
local allocations = {}
for index = 1, 64 do
    -- keep a reference so the GC cannot reclaim the chunk
    local huge_memory_chunk = ffi.new("char[?]", CHUNK_SIZE)
    table.insert(allocations, huge_memory_chunk)
    print(string.format("allocated %d GB", index * fraction_of_gb))
    local pause = io.read(1)  -- press Enter to allocate the next chunk
end
print("Test complete")
local pause = io.read(1)
It allocated 48GB before hitting a not-enough-memory error on my machine.

How can I swap Windows 7 paged memory back into RAM after memory is freed?

I typically use about 5GB of RAM on my workstation at work. I normally have to run a few instances of MATLAB at a time, each running Simulink simulations. These use a total of about 4-6GB of RAM. When they are active, Windows dumps memory from RAM to the page file to free space for MATLAB.
The problem is that when the simulations are over, 2-3GB stays in the page file and slows the system DRAMATICALLY. This computer has AWFUL disk read and write performance.
Is there a way I can move the paged memory back into RAM to avoid this performance hit?
Right now I have to restart my computer when I am done running the simulations to speed it up again.
I have 8GB of RAM with a 12GB page file.
Check out
Is it possible to unpage all memory in Windows?
The answer given by @KerrekSB seems to include some code for doing it. But the long and short of it is that you need to walk the list of processes, then walk the list of memory allocations for each process, reading the memory as you go.

ejabberd: Memory difference between erlang and Linux process

I am running ejabberd 2.1.10 server on Linux (Erlang R14B 03).
I am creating XMPP connections using a tool in batches and sending messages randomly.
ejabberd is accepting most of the connections.
Even though the number of connections is increasing continuously,
the value of erlang:memory(total) is observed to stay within a fixed range.
But if I check the memory usage of the ejabberd process with the top command, I can see that its memory usage is increasing continuously.
I can see that the difference between the value of erlang:memory(total) and the memory usage shown by top keeps growing.
Please let me know the reason for this difference.
Is it because of a memory leak? Is there any way I can debug this issue?
What is the additional memory (the difference between the Erlang and top values) used for, if it is not a memory leak?
A memory leak in either the Erlang VM itself or in the non-Erlang parts of ejabberd would have the effect you describe.
ejabberd contains some NIFs - there are 10 ".c" files in ejabberd-2.1.10.
Was your ejabberd configured with "--enable-nif"?
If so, try comparing with a version built using "--disable-nif", to see if it has different memory usage behaviour.
Other possibilities for debugging include using Valgrind for detecting and locating the leak. (I haven't tried using it on the Erlang VM; there may be a number of false positives, but with a bit of luck the leak will stand out, either by size or by source.)
A final note: the Erlang process's heap may have become fragmented. The gaps among allocations would count towards the OS process's size; it doesn't look like they are included in erlang:memory(total).

Are there constraints on a serial port with more than 6 devices connected?

I have a project that uses Rocketport Infinity 16 ports to receive data from 6 different anemometers (wind speed measurement devices) (RS422, 50Hz, 38.4k baud, 47 bytes per record). When I use 32Hz and 9600 baud, everything is alright, however, when I change to 50Hz, some of the data isn't received. I tried to use USB instead of the Rocketport Infinity with no luck.
So, apart from the anemometer failing, I suspect the following explanations for the data loss:
For the Rocketport Infinity, I opened all 16 ports but only connected 6 of them; I suspect the maximum data throughput is too high when I switch to 50Hz.
The IRQ switch speed is too high for the com port to operate properly.
Is there any other possible reason? Please correct me if I'm mistaken.
Development environment of Receiver : Delphi 6 in Windows XP Professional 32-bit version, with CPort 3.1
The IRQ rate isn't that high and modern machines should have no trouble keeping up with it. I suspect the real problem is your app not processing the received bytes fast enough. Especially when your code also updates a UI in the same thread that receives the data.
Hard to give specific troubleshooting hints because you neither specify a language nor an operating system. But be sure to get your error handling correct. Distinguish between a buffer overflow (app not reading fast enough) and a character buffer overrun (driver not reading fast enough). On Windows that's CE_RXOVER and CE_OVERRUN.
Are there constraints on a serial port with more than 6 devices
connected?
Yes, there are constraints. I assume that you have differential outputs and an I/O receiver with differential inputs. Please see Balanced differential signals. It is possible that the maximum voltage ratings of the receiver circuits are exceeded.
Each port's speed must match the corresponding device's speed. Please see the other criteria which must be matched.
The IRQ switch speed is too high for the com port to operate properly.
Why do you assume that the problem would be your IRQ switch speed? I would say that you merely have scarce IRQ resources.

How to change the "eheap_alloc" size on a Windows system to run an Erlang server?

How can I change the "eheap_alloc" size on Windows? This is for load testing an Erlang server with a number of clients. My server runs successfully with up to 100 clients, but with 200 clients it works for two minutes with good results and then crashes, terminating abnormally with
eheap_alloc: Cannot allocate 8414160 bytes of memory (of type "heap").
But on Linux it works for all the clients successfully. How can I overcome this problem?
Can someone help me?
Thank you.
Have you tried [1] ?
erl +hms Size
Sets the default heap size of processes to the size Size.
erl +hmbs Size
Sets the default binary virtual heap size of processes to the size Size.
with different sizes?
[1] http://www.erlang.org/doc/man/erl.html
When you get this message, there is probably a memory leak in your server, even if it works well on Linux. It can be some sort of "live lock" which prevents you from releasing memory in some circumstances. The best thing you can do is take a closer look at what eats memory in your server.
