PrestaShop Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 8320 bytes) in smarty_internal_templatelexer.php - prestashop-1.6

Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 7680 bytes) in /home/admin/web/original-peptide.com/public_html/tools/smarty/sysplugins/smarty_internal_templatelexer.php on line 473

Try raising PHP's memory limit (e.g. via ini_set early in the bootstrap, or by setting memory_limit in php.ini):
ini_set('memory_limit', '512M');

A support customer changed a product from a "normal" product to a "pack", and then added that same product as an item of the new pack, creating an endless loop that used up as much RAM as the system would allow.
Since the product page no longer opened either in the back office or on the catalog page, we had to undo the "packing" directly in the database:
delete from ps_pack where id_product_item=1;
delete from ps_pack where id_product_item=2;
update ps_product set cache_is_pack=0 where id_product=1;
update ps_product set cache_is_pack=0 where id_product=2;
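The loop could have been caught up front by refusing pack contents that reference the pack itself, directly or transitively. A minimal sketch of such a check (plain Python over (pack, item) rows like those in ps_pack; this is illustrative, not PrestaShop code):

```python
from collections import defaultdict

def pack_contains_cycle(pack_rows):
    """pack_rows: iterable of (id_product_pack, id_product_item) pairs,
    as stored in ps_pack. Returns True if following pack -> item links
    ever leads back to a product already on the current path."""
    children = defaultdict(list)
    for pack_id, item_id in pack_rows:
        children[pack_id].append(item_id)

    def visit(node, path):
        if node in path:
            return True          # a pack (transitively) contains itself
        return any(visit(child, path | {node}) for child in children[node])

    return any(visit(pack, frozenset()) for pack in list(children))

# The broken state from the question: product 1 packed inside itself
print(pack_contains_cycle([(1, 1)]))          # True
print(pack_contains_cycle([(1, 2), (2, 3)]))  # False
```

Running such a check before saving pack contents would have rejected the change instead of letting the shop recurse until memory ran out.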

Related

Failed to allocate bytes in slab allocator for memtx_tuple Tarantool

What could be the reason for the error "Failed to allocate 153 bytes in slab allocator for memtx_tuple" on the client when writing to Tarantool memtx?
This means the memtx engine has run out of memory. The memory available for data and indexes stored in memtx is limited by the memtx_memory option of box.cfg, which defaults to 256 MB. It is possible to raise this limit at runtime:
-- add 512Mb
box.cfg({memtx_memory = box.cfg.memtx_memory + 512 * 2^20})
This documentation section describes the functions for monitoring memory usage:
https://www.tarantool.io/en/doc/latest/reference/reference_lua/box_slab/#lua-function.box.slab.info

The Keil RTX RTOS thread stack size

In the Keil RTX RTOS configuration file, the user can configure the default thread stack size.
In general, the stack holds auto/local variables, while the "ZI data" section holds uninitialized (zero-initialized) global variables.
So I expected that changing the user thread stack size in the RTX configuration file would increase the stack size without increasing the "ZI data" section size.
I tested it, and the result shows that when I increase the user thread stack size, the "ZI data" section size grows by exactly the same amount.
In my test program there are 6 threads, each with a 600-byte stack. Building with Keil shows:
Code (inc. data) RO Data RW Data ZI Data Debug
36810 4052 1226 380 6484 518461 Grand Totals
36810 4052 1226 132 6484 518461 ELF Image Totals (compressed)
36810 4052 1226 132 0 0 ROM Totals
==============================================================================
Total RO Size (Code + RO Data) 38036 ( 37.14kB)
Total RW Size (RW Data + ZI Data) 6864 ( 6.70kB)
Total ROM Size (Code + RO Data + RW Data) 38168 ( 37.27kB)
But when I change each thread's stack size to 800 bytes, Keil shows the following:
==============================================================================
Code (inc. data) RO Data RW Data ZI Data Debug
36810 4052 1226 380 7684 518461 Grand Totals
36810 4052 1226 132 7684 518461 ELF Image Totals (compressed)
36810 4052 1226 132 0 0 ROM Totals
==============================================================================
Total RO Size (Code + RO Data) 38036 ( 37.14kB)
Total RW Size (RW Data + ZI Data) 8064 ( 7.88kB)
Total ROM Size (Code + RO Data + RW Data) 38168 ( 37.27kB)
==============================================================================
The "ZI data" section size increases from 6484 to 7684 bytes: 7684 - 6484 = 1200 = 6 * 200, and 800 - 600 = 200.
So the thread stacks are evidently placed in the "ZI Data" section.
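The arithmetic above can be double-checked in a couple of lines (values taken from the two map outputs):

```python
# Figures from the two Keil map outputs above
n_threads = 6
zi_before, zi_after = 6484, 7684  # ZI Data with 600- vs 800-byte stacks

delta = zi_after - zi_before
print(delta)                              # 1200
print(delta == n_threads * (800 - 600))   # True: ZI grew by exactly
                                          # (threads * stack increase)
```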
My question is:
Does this mean that auto/local variables in a thread are placed in the "ZI Data" section, given that the thread stack lives in "ZI data" in RAM?
If so, there would be no dedicated stack section at all, only the "RO/RW/ZI Data" and heap sections.
This article gives a different answer, and I am a little confused now:
https://developer.mbed.org/handbook/RTOS-Memory-Model
The linker determines what memory sections exist. The linker creates some memory sections by default. In your case, three of those default sections are apparently named "RO Data", "RW Data", and "ZI Data". If you don't explicitly specify which section a variable should be located in then the linker will assign it to one of these default sections based on whether the variable is declared const, initialized, or uninitialized.
The linker is not automatically aware that you are using an RTOS. And it has no special knowledge of which variables you intend to use as thread stacks. So the linker does not automatically create independent memory sections for your thread stacks. Rather, the linker will treat the stack variables like any other variable and include them in one of the default memory sections. In your case the thread stacks are apparently being put into the ZI Data section by the linker.
If you want the linker to create special independent memory sections for your thread stacks then you have to explicitly tell the linker to do so via the linker command file. And then you also have to specify that the stack variables should be located in your custom sections. Consult the linker manual for details on how to do this.
Task stacks have to come from somewhere; in RTX they are by default allocated statically and are of fixed size.
os_tsk_create_user() allows the caller to supply a stack that can be allocated in any manner (statically or from the heap; allocating from the caller's stack is possible but unusual, probably pointless, and certainly dangerous) as long as it is 8-byte aligned. I find RTX's automatic stack allocation almost useless and seldom appropriate in all but the most trivial applications.

insufficient memory when using proc assoc in SAS

I'm trying to run the following, and I receive an error saying ERROR: The SAS System stopped processing this step because of insufficient memory.
The dataset has about 1170 rows by 90 columns. What are my alternatives here?
The error output is below:
332 proc assoc data=want1 dmdbcat=dbcat pctsup=0.5 out=frequentItems;
333 id tid;
334 target item_new;
335 run;
----- Potential 1 item sets = 188 -----
Counting items, records read: 19082
Number of customers: 203
Support level for item sets: 1
Maximum count for a set: 136
Sets meeting support level: 188
Megs of memory used: 0.51
----- Potential 2 item sets = 17578 -----
Counting items, records read: 19082
Maximum count for a set: 119
Sets meeting support level: 17484
Megs of memory used: 1.54
----- Potential 3 item sets = 1072352 -----
Counting items, records read: 19082
Maximum count for a set: 111
Sets meeting support level: 1072016
Megs of memory used: 70.14
Error: Out of memory. Memory used=2111.5 meg.
Item Set 4 is null.
ERROR: The SAS System stopped processing this step because of insufficient memory.
WARNING: The data set WORK.FREQUENTITEMS may be incomplete. When this step was stopped there were
1089689 observations and 8 variables.
From the documentation (http://support.sas.com/documentation/onlinedoc/miner/em43/assoc.pdf):
Caution: The theoretical potential number of item sets can grow very
quickly. For example, with 50 different items, you have 1225 potential
2-item sets and 19,600 3-item sets. With 5,000 items, you have over 12
million of the 2-item sets, and a correspondingly large number of
3-item sets.
Processing an extremely large number of sets could cause your system
to run out of disk and/or memory resources. However, by using a higher
support level, you can reduce the item sets to a more manageable
number.
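The counts quoted in the documentation are just binomial coefficients, which is easy to verify (illustration in Python; the proc itself is SAS):

```python
from math import comb

# Potential k-item sets over n distinct items is C(n, k)
print(comb(50, 2))    # 1225   potential 2-item sets
print(comb(50, 3))    # 19600  potential 3-item sets
print(comb(5000, 2))  # 12497500, the "over 12 million" in the docs
```

This cubic-and-worse growth in candidate sets is why raising the support threshold, which prunes candidates early, is the primary lever for memory use.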
So provide a support= option and make sure it is sufficiently high, e.g.:
proc assoc data=want1 dmdbcat=dbcat pctsup=0.5 out=frequentItems support=20;
id tid;
target item_new;
run;
Is there a way to frame the data mining task so that it requires less memory for storage or operations? In other words, do you need all 90 columns or can you eliminate some? Is there some clear division within the data set such that PROC ASSOC wouldn't be expected to use those rows for its findings?
You may very well be up against software memory allocation limits here.

Delphi Shared Module for Apache Runs Out of Memory

I hope someone can shed some light on an issue I am having.
I am writing an Apache shared object module that acts as a server for my app. Client or multiple Clients make SOAP request to this module, module talks to database and returns SOAP responses to client(s).
Task Manager shows the httpd processes (which handle incoming client requests) using about 35,000 K each, but every once in a while one of the httpd processes grows in memory/CPU usage and over time reaches the 2 GB cap, then crashes. The server reports an "Internal Server Error 500" in this case.
I use FastMM to check for memory leaks; it produces a log, but no leaks are reported. To make sure I was using FastMM properly, I introduced deliberate leaks, and FastMM logged them. So my assumption is that I do not have a leak, but that memory is somehow consumed until the 2 GB threshold is reached and not released until I manually restart Apache.
I then started taking snapshots with FastMM's LogMemoryManagerStateToFile call, which captures the memory state at the time of the call. It creates a file with the following information, but I don't understand how it helps me beyond telling me, for example, that 8320 UnicodeString allocations are holding 676512 bytes.
Which of these are not released properly?
Note that this is a shortened version of the file, covering a single method call; many calls to different methods may occur:
FastMM State Capture:
---------------------
1793K Allocated
7165K Overhead
20% Efficiency
Usage Detail:
676512 bytes: UnicodeString x 8320
152576 bytes: TTMQuery x 256
144312 bytes: TParam x 1718
134100 bytes: TDBQuery x 225
107444 bytes: Unknown x 1439
88368 bytes: TSDStringList x 1052
82320 bytes: TStringList x 980
80000 bytes: TList x 4000
53640 bytes: TTMStoredProc x 90
47964 bytes: TFieldList x 571
47964 bytes: TFieldDefList x 571
41112 bytes: TFields x 1142
38828 bytes: TFieldDefs x 571
20664 bytes: TParams x 574
...
...
...
4 bytes: TChunktIME x 1
4 bytes: TChunkpHYs x 1
4 bytes: TChunkgAMA x 1
Additional Information:
-----------------------
Finalizing MyMethodName
How is this data helpful to figure out where memory is being consumed so much in terms of locating it in code and fixing it?
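One way to make such snapshots actionable is to capture one before and one after the suspect workload and diff the per-class byte counts; whatever grows monotonically across requests is where the memory is going. A rough sketch that parses "Usage Detail" lines in the format shown above (Python, purely illustrative of the diff idea, not part of FastMM):

```python
import re

LINE = re.compile(r'^\s*(\d+) bytes: (.+?) x (\d+)\s*$')

def parse_usage(text):
    """Map class name -> allocated bytes from a FastMM state capture."""
    usage = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            usage[m.group(2)] = int(m.group(1))
    return usage

def diff_usage(before, after):
    """Classes whose byte count grew between snapshots, biggest first."""
    grown = {k: after.get(k, 0) - before.get(k, 0)
             for k in set(before) | set(after)}
    return sorted(((k, v) for k, v in grown.items() if v > 0),
                  key=lambda kv: -kv[1])

snap1 = parse_usage("676512 bytes: UnicodeString x 8320\n"
                    "152576 bytes: TTMQuery x 256")
snap2 = parse_usage("976512 bytes: UnicodeString x 12320\n"
                    "152576 bytes: TTMQuery x 256")
print(diff_usage(snap1, snap2))  # [('UnicodeString', 300000)]
```

Classes that stay flat between requests (like TTMQuery above) can be ruled out; steadily growing ones point to objects that are allocated per request but parked somewhere (a cache, a global list) instead of freed, which FastMM does not count as a leak because they are still reachable at shutdown.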
Much appreciated,

Error in file.read() return above 2 GB on 64 bit python

I have several ~50 GB text files that I need to parse for specific contents. The files' contents are organized in 4-line blocks. To perform this analysis I read in subsections of the file using file.read(chunk_size), split them into blocks of 4, then analyze them.
Because I run this script often, I've been optimizing and have tried varying the chunk size. I run 64-bit Python 2.7.1 on OS X Lion on a computer with 16 GB of RAM, and I noticed that when I load chunks >= 2^31 bytes, instead of the expected text I get large amounts of '\x00' repeated. As far as my testing shows, this continues all the way up to and including 2^32, after which I once again get text. However, it seems to return only as many characters as the requested size exceeds 4 GB by.
My test code:
for i in range((2**31)-3, (2**31)+3) + range((2**32)-3, (2**32)+10):
    with open('mybigtextfile.txt', 'rU') as inf:
        print '%s\t%r' % (i, inf.read(i)[0:10])
My output:
2147483645 '#HWI-ST550'
2147483646 '#HWI-ST550'
2147483647 '#HWI-ST550'
2147483648 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483649 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483650 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967293 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967294 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967295 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967296 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967297 '#\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967298 '#H\x00\x00\x00\x00\x00\x00\x00\x00'
4294967299 '#HW\x00\x00\x00\x00\x00\x00\x00'
4294967300 '#HWI\x00\x00\x00\x00\x00\x00'
4294967301 '#HWI-\x00\x00\x00\x00\x00'
4294967302 '#HWI-S\x00\x00\x00\x00'
4294967303 '#HWI-ST\x00\x00\x00'
4294967304 '#HWI-ST5\x00\x00'
4294967305 '#HWI-ST55\x00'
What exactly is going on?
Yes, this is a known issue, according to a comment in CPython's source code; you can check it in Modules/_io/fileio.c. The code adds a workaround on 64-bit Microsoft Windows only.
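Until the interpreter is fixed on your platform, a practical workaround is to never hand read() a size at or above 2**31 and to assemble large chunks from smaller reads. A sketch (the 2**30 cap is an assumption chosen to stay safely below the boundary):

```python
import io

SAFE_CHUNK = 2**30  # stay well below the 2**31 size where reads misbehave

def safe_read(f, n):
    """Read up to n characters/bytes from f without ever issuing a
    single read() call of 2**31 or more."""
    parts = []
    remaining = n
    while remaining > 0:
        chunk = f.read(min(remaining, SAFE_CHUNK))
        if not chunk:
            break  # EOF reached before n was satisfied
        parts.append(chunk)
        remaining -= len(chunk)
    return ''.join(parts)  # use b''.join(parts) for binary-mode files

demo = io.StringIO("abcdef" * 4)
print(repr(safe_read(demo, 10)))  # 'abcdefabcd'
```

Since the script already splits chunks into 4-line blocks, feeding it several sub-2 GB chunks instead of one huge one should not change the rest of the pipeline.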
