CMA Allocation Fault on Petalinux 2020.2 (Zynq-7000) - xilinx

I want to use 1920x1080 (or more) on my custom Zynq-7000 board. The 1024x768 mode works fine, but I get a CMA allocation error when I try to use Full HD.
I added some debug output to the source code (the output below is for 2560x1600; it is the same for 1920x1080, except for the buffer size):
[12:09:34:466] xlnx-pl-disp amba_pl:xlnx_pl_disp: surface width(2560), height(1600) and bpp(24)
[12:09:34:474] xlnx-pl-disp amba_pl:xlnx_pl_disp: bytes per line after alignment: 12288000
[12:09:34:480] xlnx-pl-disp amba_pl:xlnx_pl_disp: allocating 12288000 bytes with kzalloc()...
[12:09:34:488] xlnx-pl-disp amba_pl:xlnx_pl_disp: OK
[12:09:34:491] xlnx-pl-disp amba_pl:xlnx_pl_disp: init gem object...
[12:09:34:497] xlnx-pl-disp amba_pl:xlnx_pl_disp: OK
[12:09:34:500] xlnx-pl-disp amba_pl:xlnx_pl_disp: creating mmap offset...
[12:09:34:505] xlnx-pl-disp amba_pl:xlnx_pl_disp: OK
[12:09:34:508] xlnx-pl-disp amba_pl:xlnx_pl_disp: gem cma created with size 12288000
[12:09:34:514] xlnx-pl-disp amba_pl:xlnx_pl_disp: failed to allocate buffer with size 12288000
[12:09:34:522] xlnx-pl-disp amba_pl:xlnx_pl_disp: Failed to create cma gem object (12288000 bytes)
[12:09:34:527] xlnx-pl-disp amba_pl:xlnx_pl_disp: drm_fb_helper_single_fb_probe() returns -12
[12:09:34:536] xlnx-pl-disp amba_pl:xlnx_pl_disp: Failed to set initial hw configuration.
[12:09:34:541] xlnx-pl-disp amba_pl:xlnx_pl_disp: failed to initialize drm fb
As far as I can see, the failure comes from this line in drm_gem_cma_helper.c:
cma_obj->vaddr = dma_alloc_wc(drm->dev, size, &cma_obj->paddr, GFP_KERNEL | __GFP_NOWARN);
I have tried changing some settings:
increased the CMA size in the kernel config (from 128 MB to 256 MB)
increased the number of CMA areas in the kernel config (from 7 to 20)
added reserved memory to the device tree (see the sketch below)
added the coherent_pool option to bootargs
I still get the same fault.
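For reference, the reserved-memory node I tried looks roughly like this (a minimal sketch; the size, alignment, and node name are illustrative and must fit the board's DDR):
/ {
    reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        /* shared-dma-pool + reusable marks this region as a CMA pool */
        linux,cma {
            compatible = "shared-dma-pool";
            reusable;
            size = <0x10000000>;    /* 256 MB */
            alignment = <0x100000>; /* 1 MB */
            linux,cma-default;      /* use as the default CMA area */
        };
    };
};
The quick equivalent from the command line is the cma= kernel parameter, e.g. adding cma=256M to bootargs.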
Please help me find the reason and solve this issue.
Many thanks!
With regards,
Maksim

The only solution I've found is to change the kernel base address from 0x18000000 to 0x11000000 (as shown on the screenshot).
Unfortunately, I don't have a complete explanation of why this helps.
With regards,
Maksim

Related

Failed to allocate bytes in slab allocator for memtx_tuple Tarantool

What could be the reason for the error "Failed to allocate 153 bytes in slab allocator for memtx_tuple" on the client when writing to Tarantool memtx?
This means the memtx engine has run out of memory. The memory available for data and indexes stored in memtx is limited by the memtx_memory option of box.cfg, which defaults to 256 MB. It is possible to increase this limit at runtime:
-- add 512Mb
box.cfg({memtx_memory = box.cfg.memtx_memory + 512 * 2^20})
Here is the documentation section about the function for monitoring memory usage:
https://www.tarantool.io/en/doc/latest/reference/reference_lua/box_slab/#lua-function.box.slab.info
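For example, a quick usage check might look like this (a sketch; the field names follow the box.slab.info() reference above):
-- report how much of the memtx arena is currently in use
local s = box.slab.info()
print(('memtx arena: %s of %s bytes used (%s)'):format(
    tostring(s.arena_used), tostring(s.arena_size), s.arena_used_ratio))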

PrestaShop Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 8320 bytes) in smarty_internal_templatelexer.php

Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 7680 bytes) in /home/admin/web/original-peptide.com/public_html/tools/smarty/sysplugins/smarty_internal_templatelexer.php on line 473
Try this:
ini_set('memory_limit', '512M');
Look here for more info.
A support customer modified a product from a "normal" product to a "pack" and added the same product as a variant of the new "pack", resulting in an endless loop that used up as much RAM as the system would allow.
Since the product page no longer opened, either in the back end or on the catalog page, we had to undo the "packing" via the database:
delete from ps_pack where id_product_item=1;
delete from ps_pack where id_product_item=2;
update ps_product set cache_is_pack=0 where id_product=1;
update ps_product set cache_is_pack=0 where id_product=2;

Out of memory (allocated 2097152) (tried to allocate 11318820 bytes) in D:\xampp\htdocs\file_manager\php\elFinderVolumeFTP.class.php

Out of memory (allocated 2097152) (tried to allocate 11318820 bytes) in D:\xampp\htdocs\file_manager\php\elFinderVolumeFTP.class.php on line 640.
Out of memory (allocated 41156608) (tried to allocate 2097233 bytes) in D:\xampp\htdocs\modules\member\data_member.php on line 87
I set memory_limit = 10000M in php.ini, but the error still occurs.
(I am using XAMPP 1.8.2, PHP 5.4.16.)
Please help. Thanks

Delphi Shared Module for Apache Runs Out of Memory

I hope someone can shed some light on an issue I am having.
I am writing an Apache shared object module that acts as a server for my app. One or more clients make SOAP requests to this module; the module talks to a database and returns SOAP responses to the client(s).
Task Manager shows the httpd processes (handling incoming client requests) using about 35000K each, but every once in a while one of the httpd processes grows in memory/CPU usage, over time reaches the 2 GB cap, and then crashes. The server reports an "Internal Server Error 500" in this case. (Screenshot added.)
I use FastMM to check for memory leaks, and it does produce a log, but no memory leaks are reported. To make sure I am using FastMM properly, I introduced deliberate memory leaks, and FastMM logged them. So my assumption is that I do not have a memory leak, but that memory somehow gets consumed until the 2 GB threshold is reached and is not released until I manually restart Apache.
Then I started taking snapshots using FastMM's LogMemoryManagerStateToFile call, which captures the memory state at the time of the call. It creates a file with the following information, but I don't understand how it helps me beyond telling me that, for example, 8320 calls to UnicodeString allocated 676512 bytes.
Which of these are not released properly?
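For reference, I take the snapshots with calls along these lines (a minimal sketch assuming FastMM4; the file names and description strings are placeholders):
// dump the current memory manager state, tagged with a description
LogMemoryManagerStateToFile('C:\logs\before.log', 'Before MyMethodName');
// ... execute the SOAP method ...
LogMemoryManagerStateToFile('C:\logs\after.log', 'Finalizing MyMethodName');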
Note that this information is a shortened version of the created file, and it covers just one single method call. There could be many calls to different methods:
FastMM State Capture:
---------------------
1793K Allocated
7165K Overhead
20% Efficiency
Usage Detail:
676512 bytes: UnicodeString x 8320
152576 bytes: TTMQuery x 256
144312 bytes: TParam x 1718
134100 bytes: TDBQuery x 225
107444 bytes: Unknown x 1439
88368 bytes: TSDStringList x 1052
82320 bytes: TStringList x 980
80000 bytes: TList x 4000
53640 bytes: TTMStoredProc x 90
47964 bytes: TFieldList x 571
47964 bytes: TFieldDefList x 571
41112 bytes: TFields x 1142
38828 bytes: TFieldDefs x 571
20664 bytes: TParams x 574
...
...
...
4 bytes: TChunktIME x 1
4 bytes: TChunkpHYs x 1
4 bytes: TChunkgAMA x 1
Additional Information:
-----------------------
Finalizing MyMethodName
How can this data help me figure out where so much memory is being consumed, in terms of locating the problem in code and fixing it?
Much appreciated,

Error in file.read() return above 2 GB on 64 bit python

I have several ~50 GB text files that I need to parse for specific contents. The file contents are organized in 4-line blocks. To perform this analysis, I read subsections of the file using file.read(chunk_size), split them into blocks of 4 lines, and analyze them.
Because I run this script often, I've been optimizing it and have tried varying the chunk size. I run 64-bit Python 2.7.1 on OS X Lion on a computer with 16 GB of RAM, and I noticed that when I load chunks >= 2^31 bytes, instead of the expected text I get large amounts of \x00 repeated. As far as my testing has shown, this continues all the way up to, and including, 2^32, after which I get text again. However, it seems to return only as many real characters as the number of bytes requested beyond 4 GB.
My test code:
for i in range((2**31)-3, (2**31)+3) + range((2**32)-3, (2**32)+10):
    with open('mybigtextfile.txt', 'rU') as inf:
        print '%s\t%r' % (i, inf.read(i)[0:10])
My output:
2147483645 '#HWI-ST550'
2147483646 '#HWI-ST550'
2147483647 '#HWI-ST550'
2147483648 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483649 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483650 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967293 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967294 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967295 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967296 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967297 '#\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967298 '#H\x00\x00\x00\x00\x00\x00\x00\x00'
4294967299 '#HW\x00\x00\x00\x00\x00\x00\x00'
4294967300 '#HWI\x00\x00\x00\x00\x00\x00'
4294967301 '#HWI-\x00\x00\x00\x00\x00'
4294967302 '#HWI-S\x00\x00\x00\x00'
4294967303 '#HWI-ST\x00\x00\x00'
4294967304 '#HWI-ST5\x00\x00'
4294967305 '#HWI-ST55\x00'
What exactly is going on?
Yes, this is a known issue, according to a comment in CPython's source code; you can check it in Modules/_io/fileio.c. The code adds a workaround on 64-bit Microsoft Windows only.
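On other platforms, a practical workaround is to cap each read below 2**31 bytes and join the pieces yourself (a sketch in the question's Python 2 style; the chunk size and file name are illustrative):
# read `size` bytes in sub-2 GB pieces to stay below the problematic limit
CHUNK = 2**30  # 1 GiB per call, safely under 2**31

def read_big(inf, size):
    parts = []
    remaining = size
    while remaining > 0:
        data = inf.read(min(CHUNK, remaining))
        if not data:  # end of file
            break
        parts.append(data)
        remaining -= len(data)
    return ''.join(parts)

with open('mybigtextfile.txt', 'rU') as inf:
    print '%r' % read_big(inf, 2**32)[0:10]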
