Limiting a Lua script's memory usage?

I've seen it said multiple times that there is no way to limit a Lua script's memory usage, including people jumping through hoops to prevent Lua scripts from creating functions and tables. But given that lua_newstate allows you to pass a custom allocator, couldn't one just use that to limit memory consumption? At worst, one could use an arena-based allocator and put a hard limit even on the amount of memory that could be used by fragmentation.
Am I missing something here?

static void *l_alloc_restricted (void *ud, void *ptr, size_t osize, size_t nsize)
{
    const size_t MAX_SIZE = 1024; /* set limit here */
    size_t *used = (size_t *)ud;

    if (ptr == NULL) {
        /*
         * <http://www.lua.org/manual/5.2/manual.html#lua_Alloc>:
         * When ptr is NULL, osize encodes the kind of object that Lua is
         * allocating.
         *
         * Since we don't care about that, just mark it as 0.
         */
        osize = 0;
    }

    if (nsize == 0)
    {
        free(ptr);
        *used -= osize; /* subtract old size from used memory */
        return NULL;
    }
    else
    {
        /* Only a growing (re)allocation can push us over the limit;
         * a shrinking realloc must always be allowed to succeed. */
        if (nsize > osize && *used + (nsize - osize) > MAX_SIZE)
            return NULL; /* too much memory in use */
        ptr = realloc(ptr, nsize);
        if (ptr) /* reallocation successful? */
            *used += nsize - osize; /* unsigned wrap-around handles shrinks too */
        return ptr;
    }
}
To make Lua use your allocator, you can use
size_t *ud = malloc(sizeof(size_t)); *ud = 0;
lua_State *L = lua_newstate(l_alloc_restricted, ud);
Note: I haven't tested the source, but it should work.
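As a quick sanity check (untested, in the same spirit as the allocator above): when the allocator returns NULL, Lua raises an ordinary "not enough memory" error that can be caught from C. Note that in practice MAX_SIZE would have to be considerably larger than 1024, since lua_newstate alone needs a few kilobytes for the initial state.
#include <stdio.h>
#include <stdlib.h>
#include <lua.h>
#include <lauxlib.h>

/* l_alloc_restricted as defined above */

int main(void)
{
    size_t *used = malloc(sizeof(size_t));
    *used = 0;

    lua_State *L = lua_newstate(l_alloc_restricted, used);
    if (L == NULL) {  /* even the initial state exceeded the cap */
        fprintf(stderr, "lua_newstate failed: limit too small\n");
        return 1;
    }

    /* The table keeps growing until the allocator starts returning NULL;
     * Lua then aborts the script with a memory error. */
    if (luaL_dostring(L, "local t = {} for i = 1, 1e9 do t[i] = i end") != LUA_OK)
        printf("script stopped: %s\n", lua_tostring(L, -1));

    lua_close(L);
    free(used);
    return 0;
}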

Related

GRUB memory map gives me weird values

I am trying to use GRUB to get the memory map, instead of going through the BIOS route. The problem is that GRUB seems to be giving me very weird values for some reason. Can anyone help with this?
Relevant code:
This is how I parse the mmap
void mm_init(mmap_entry_t *mmap_addr, uint32_t length)
{
    mmap = mmap_addr;
    /* Loop through mmap */
    printk("-- Scanning memory map --");
    for (size_t i = 0; mmap < (mmap_addr + length); i++) {
        /* RAM is available! */
        if (mmap->type == 1) {
            uint64_t starting_addr = (((uint64_t) mmap->base_addr_high) << 32) | ((uint64_t) mmap->base_addr_low);
            uint64_t length = (((uint64_t) mmap->length_high) << 32) | ((uint64_t) mmap->length_low);
            printk("Found segment starting from 0x%x, with a length of %i", starting_addr, length);
        }
        /* Next entry */
        mmap = (mmap_entry_t *) ((uint32_t) mmap + mmap->size + sizeof(mmap->size));
    }
}
This is my mmap_entry_t struct (not the one in multiboot.h):
struct mmap_entry {
    uint32_t size;
    uint32_t base_addr_low, base_addr_high;
    uint32_t length_low, length_high;
    uint8_t type;
} __attribute__((packed));
typedef struct mmap_entry mmap_entry_t;
And this is how I call mm_init()
/* Kernel main function */
void kmain(multiboot_info_t *info)
{
    /* Check if grub can give us a memory map */
    /* TODO: Detect manually */
    if (!(info->flags & (1<<6))) {
        panic("couldn't get memory map!");
    }
    /* Init mm */
    mm_init((mmap_entry_t *) info->mmap_addr, info->mmap_length);
    for(;;);
}
This is the output I get on qemu:
-- Scanning memory map --
Found segment starting from 0x0, with a length of 0
Found segment starting from 0x100000, with a length of 0
And yes, I am pushing eax and ebx before calling kmain. Any ideas on what is going wrong here?
It turns out that the bit-masking was the problem. If we drop it, we can still have 32-bit addresses, and the memory map works just fine.
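For reference, a minimal sketch of the fixed loop body (assuming, as the answer says, that all regions fit in 32 bits). Printing the low words directly also avoids pushing 64-bit values through printk's varargs, which a 32-bit %x / %i cannot consume correctly:
if (mmap->type == 1) {
    /* Regions are assumed to fit in 32 bits, so print the low
     * words directly instead of assembling uint64_t values. */
    printk("Found segment starting from 0x%x, with a length of %i",
           mmap->base_addr_low, mmap->length_low);
}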

Store the ADC stream on µSD Card without loss on STM32H743ZI

I am working on a project in which I have to store the data of an ADC stream on a µSD card. However, even though I use a 16-bit buffer, I lose data from the ADC stream. My ADC is used with DMA, and I use FatFs (WITHOUT DMA) and the SDMMC1 peripheral to fill a .bin file with the data.
Does anyone have an idea how to avoid this loss?
Here is my project: https://github.com/mathieuchene/STM32H743ZI
I use a Nucleo-H743ZI2 board, CubeIDE, and CubeMX in their latest versions.
EDIT 1
I tried to implement Colin's solution; it's better, but I get strange artifacts in the middle of my acquisition. Moreover, when I increase the maximal count value or try to debug, the HardFault_Handler is triggered. I modified the main.c file by creating 2 blocks (uint16_t blockX[BUFFERLENGTH/2]) and 2 flags for when adcBuffer is half filled or completely filled.
I also changed the while(1) part in main function like this
if (flagHlfCplt){
    //flagCplt=0;
    res = f_write(&SDFile, block1, strlen((char*)block1), (void *)&byteswritten);
    memcpy(block2, adcBuffer, BUFFERLENGTH/2);
    flagHlfCplt = 0;
    count++;
}
if (flagCplt){
    //flagHlfCplt=0;
    res = f_write(&SDFile, block2, strlen((char*)block2), (void *)&byteswritten);
    memcpy(block1, adcBuffer[(BUFFERLENGTH/2)-1], BUFFERLENGTH/2);
    flagCplt = 0;
    count++;
}
if (count == 10){
    f_close(&SDFile);
    HAL_ADC_Stop_DMA(&hadc1);
    while(1){
        HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
        HAL_Delay(1000);
    }
}
}
EDIT 2
I modified my program. I set block1 and block2 to a length of BUFFERLENGTH, and I added a pointer (*idx) to switch which buffer is being filled. I no longer get the HardFault_Handler, but I still lose some data from my ADC's stream.
Here are the modifications I made:
// my pointer and buffers
uint16_t block1[BUFFERLENGTH], block2[BUFFERLENGTH], *idx;

// init of pointer and adc start
idx = block1;
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)idx, BUFFERLENGTH);

// while(1) part
while (1)
{
    if (flagCplt){
        if (flagToChangeBuffer) {
            idx = block1;
            res = f_write(&SDFile, block2, strlen((char*)block2), (void *)&byteswritten);
            flagCplt = 0;
            flagToChangeBuffer = 0;
            count++;
        }
        else {
            idx = block2;
            res = f_write(&SDFile, block1, strlen((char*)block1), (void *)&byteswritten);
            flagCplt = 0;
            flagToChangeBuffer = 1;
            count++;
        }
    }
    if (count == 150){
        f_close(&SDFile);
        HAL_ADC_Stop_DMA(&hadc1);
        while(1){
            HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
            HAL_Delay(1000);
        }
    }
}
Does anyone know how to solve this data-loss problem?
Best Regards
Mathieu
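For what it's worth, the usual shape of this double-buffer scheme is to let the DMA half-transfer and transfer-complete callbacks mark which half of adcBuffer is safe to flush, and to give f_write an explicit byte count (strlen() stops at the first zero-valued sample, so it under-counts binary data). A minimal sketch, assuming the hadc1, SDFile, and BUFFERLENGTH names from the question:
/* Shared with the DMA callbacks; the DMA runs in circular mode
 * over the whole of adcBuffer. */
volatile uint8_t lowerHalfReady = 0, upperHalfReady = 0;
uint16_t adcBuffer[BUFFERLENGTH];

void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    lowerHalfReady = 1;   /* DMA is now filling the upper half */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    upperHalfReady = 1;   /* DMA wrapped around to the lower half */
}

/* in the main loop */
while (1) {
    UINT byteswritten;
    if (lowerHalfReady) {
        lowerHalfReady = 0;
        /* explicit byte count instead of strlen() */
        f_write(&SDFile, adcBuffer,
                (BUFFERLENGTH / 2) * sizeof(uint16_t), &byteswritten);
    }
    if (upperHalfReady) {
        upperHalfReady = 0;
        f_write(&SDFile, &adcBuffer[BUFFERLENGTH / 2],
                (BUFFERLENGTH / 2) * sizeof(uint16_t), &byteswritten);
    }
}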

Debugging Fatal Error - alloc: invalid block: 0000000001F00AEF0: 0 0

I have a GUI written in R that utilizes the Tcl/Tk package, as well as a C .dll that also uses the Tcl library. I have done some research on this issue, and it seems to be memory related. I am an inexperienced programmer, so I am not sure where I should be looking for this memory issue. Each call to malloc() has a matching free(), and the same goes for the analogous Tcl_Alloc() and Tcl_Free(). This error is very hard to reproduce, so I am afraid I cannot provide a reproducible example, as it is seemingly random in nature. One pattern, however, is that it seems to only happen upon closure of the program, though this is very inconsistent.
By making this post, I am hoping to gain a logical process that one should take in an attempt to debug this problem in a general context under Tcl/Tk - C - R applications. I am not looking for a solution specific to my code, but rather what an individual should think about when encountering this problem.
The message comes from the function Ptr2Block() in tclThreadAlloc.c (or something else produces the same error message; possible, but unlikely), which is Tcl's thread-specific memory allocator (used widely inside Tcl to reduce the number of times global locks are hit). Specifically, it's this bit:
if (blockPtr->magicNum1 != MAGIC || blockPtr->magicNum2 != MAGIC) {
    Tcl_Panic("alloc: invalid block: %p: %x %x",
            blockPtr, blockPtr->magicNum1, blockPtr->magicNum2);
}
The problem? Those zeroes should be MAGIC (which is equal to 0xEF). This indicates that something has overwritten the memory block's metadata — which also should include the size of the block, but that is now likely hot garbage — and program memory integrity can no longer be trusted. Alas, at this point we're now dealing with a program in a broken state where the breakage happened some time previously; the place where the panic happened is merely where detection of the bug happened, not the actual location of the bug.
Debugging further is usually done by building a version of everything with fancy memory allocators turned off (in Tcl's code, this is done by defining the PURIFY symbol when building) and then running the resulting code — which hopefully still has the bug — with a tool like electricfence or purify (hence the special symbol name) to see what sort of out-of-bounds errors are found; they're very good at hunting down this sort of issue.
I would advise you to start by having a closer look at the sizeof() values provided to the Tcl_Alloc() calls in this C .dll.
I'm writing a Tcl binding for a C library myself, and I recently faced exactly the same problem, so I'm assuming you may have the same error in your code as I did.
Here below a minimal example that reproduces the problem:
#include <tcl.h>
#include <stdlib.h> // malloc

static unsigned int dataCtr;

struct tDataWrapper {
    const char *str;  // Tcl_GetCommandName(interp, cmd)
    unsigned int n;   // dataCtr value
    void *data;       // pointer to wrapped object
};

static void wrapDelCmd(ClientData clientData)
{
    struct tDataWrapper *wrap = (struct tDataWrapper *) clientData;
    if (wrap != NULL) {
        /* with false sizeof value provided while creating the wrapper
         * (see above), this data pointer would overwrite the
         * overhead section of the allocated tcl memory block
         * from what I understood and this is what can be causing
         * the panic with message like following one when the
         * memory is freed with ckfree (here after calling unload)
         * alloc: invalid block: 0000018F2624E760: 0 0 */
        printf("DEBUG: #%s(%s) &wrap->data #%p\n",
                __func__, wrap->str, &wrap->data);
        if (wrap->data != NULL) {
            // call your wrapped API to deinstantiate the object
        }
        ckfree(wrap);
    }
}

static int wrapCmd(ClientData clientData, Tcl_Interp *interp,
        int objc, Tcl_Obj *const objv[])
{
    struct tDataWrapper *wrap = (struct tDataWrapper *) clientData;
    if (wrap == NULL)
        return TCL_ERROR;
    else if (wrap->data != NULL) {
        // call your wrapped API to do something with instantiated object
        return TCL_OK;
    } else {
        Tcl_Obj *obj = Tcl_ObjPrintf("wrap: {str=\"%s\", n=%u, data=%llx}",
                wrap->str, wrap->n, (unsigned long long) wrap->data);
        if (obj != NULL) {
            Tcl_SetObjResult(interp, obj);
            return TCL_OK;
        } else
            return TCL_ERROR;
    }
}

static int newCmd(ClientData clientData, Tcl_Interp *interp,
        int objc, Tcl_Obj *const objv[])
{
    struct tDataWrapper *wrap;
    Tcl_Obj *obj;
    Tcl_Command cmd;

    // 3) this is correct
    // if ((wrap = attemptckalloc(sizeof(struct tDataWrapper))) == NULL)
    // 2) still incorrect but GCC gives more warning regarding the inconsistent pointer handling
    // if ((wrap = malloc(sizeof(struct tDataWrapper *))) == NULL)
    // 1) this is incorrect
    if ((wrap = attemptckalloc(sizeof(struct tDataWrapper *))) == NULL)
        Tcl_Panic("%s:%u: attemptckalloc failed\n", __func__, __LINE__);
    else if ((obj = Tcl_ObjPrintf("data%u", dataCtr+1)) == NULL)
        Tcl_Panic("%s:%u: Tcl_ObjPrintf failed\n", __func__, __LINE__);
    else if ((cmd = Tcl_CreateObjCommand(interp, Tcl_GetString(obj),
            wrapCmd, (ClientData) wrap, wrapDelCmd)) == NULL)
        Tcl_Panic("%s:%u: Tcl_CreateObjCommand failed\n", __func__, __LINE__);
    else {
        wrap->str = Tcl_GetCommandName(interp, cmd);
        wrap->n = dataCtr;
        wrap->data = NULL; // call your wrapped API to instantiate an object
        dataCtr++;
        Tcl_SetObjResult(interp, obj);
    }
    return TCL_OK;
}

int Allocinvalidblock_Init(Tcl_Interp *interp)
{
    dataCtr = 0;
    return (Tcl_CreateObjCommand(interp, "new",
            newCmd, (ClientData) NULL, NULL)
            == NULL) ? TCL_ERROR : TCL_OK;
}

int Allocinvalidblock_Unload(Tcl_Interp *interp, int flags)
{
    Tcl_Namespace *ns = Tcl_GetGlobalNamespace(interp);
    Tcl_Obj *obj;
    Tcl_Command cmd;
    unsigned int i;

    for (i = 0; i < dataCtr; i++) {
        if ((obj = Tcl_ObjPrintf("data%u", i+1)) != NULL) {
            if ((cmd = Tcl_FindCommand(interp,
                    Tcl_GetString(obj), ns, TCL_GLOBAL_ONLY)) != NULL)
                Tcl_DeleteCommandFromToken(interp, cmd);
            Tcl_DecrRefCount(obj);
        }
    }
    return TCL_OK;
}
Once built (for example, with Code::Blocks as a shared-library project linking against C:/msys64/mingw64/lib/libtcl.dll.a), the error can be triggered when more than one data object is created and the library is immediately unloaded:
load bin/Release/libAllocInvalidBlock.dll
new
new
unload bin/Release/libAllocInvalidBlock.dll
If used otherwise, the crash may not even be triggered... Anyway, such an error in the C code is not particularly obvious to identify (although easy to fix), because compilation runs without any warnings (even with the -Wall compiler flag set).
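Incidentally, a simple habit that makes this particular slip impossible is to take the size from the object the pointer refers to, instead of repeating the type name:
/* sizeof(*wrap) is always the size of the pointee, so it cannot
 * silently become sizeof(struct tDataWrapper *) */
struct tDataWrapper *wrap =
        (struct tDataWrapper *) attemptckalloc(sizeof(*wrap));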

Understanding the use of memset in this example

I'm studying an example from the Linux Device Drivers book (http://lwn.net/Kernel/LDD3/), and I don't understand the use and usefulness of the function memset in this context; I hoped that someone could explain it to me. I understand that we allocate memory for our device structure using kmalloc, and with memset we put 0's in front of the memory address? Here is the example nonetheless:
int scull_p_init(dev_t firstdev)
{
    int i, result;

    result = register_chrdev_region(firstdev, scull_p_nr_devs, "scullp");
    if (result < 0) {
        printk(KERN_NOTICE "Unable to get scullp region, error %d\n", result);
        return 0;
    }
    scull_p_devno = firstdev;
    scull_p_devices = kmalloc(scull_p_nr_devs * sizeof(struct scull_pipe), GFP_KERNEL);
    if (scull_p_devices == NULL) {
        unregister_chrdev_region(firstdev, scull_p_nr_devs);
        return 0;
    }
    memset(scull_p_devices, 0, scull_p_nr_devs * sizeof(struct scull_pipe));
    for (i = 0; i < scull_p_nr_devs; i++) {
        init_waitqueue_head(&(scull_p_devices[i].inq));
        init_waitqueue_head(&(scull_p_devices[i].outq));
        init_MUTEX(&scull_p_devices[i].sem);
        scull_p_setup_cdev(scull_p_devices + i, i);
    }
The memset is not putting 0's "in front of" scull_p_devices. It overwrites the memory from the address in scull_p_devices up to the size of the allocated region with zeros. kmalloc returns uninitialized memory, so without the memset the scull_pipe structures would start out full of garbage.
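As an aside, the kernel also provides kzalloc(), which returns already-zeroed memory; in current kernels the kmalloc() + memset() pair above would typically be written as a single call:
/* equivalent to kmalloc() followed by memset(..., 0, ...) */
scull_p_devices = kzalloc(scull_p_nr_devs * sizeof(struct scull_pipe),
                          GFP_KERNEL);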

Memory not freed after calling free()

I have a short program that generates a linked list by adding nodes to it, then frees the memory allocated by the linked list.
Valgrind does not report any memory leak errors, but the process continues to hold the allocated memory.
I was only able to fix the error after I changed the amount of memory allocated from sizeof(structure_name) to the fixed number 512 (see commented code).
Is this a bug or normal operation?
Here is the code:
#include <execinfo.h>
#include <stdlib.h>
#include <stdio.h>

typedef struct llist_node {
    int ibody;
    struct llist_node * next;
    struct llist_node * previous;
    struct llist * list;
} llist_node;

typedef struct llist {
    struct llist_node * head;
    struct llist_node * tail;
    int id;
    int count;
} llist;

llist_node * new_lnode (void) {
    llist_node * nnode = (llist_node *) malloc ( 512 );
    // llist_node * nnode = (llist_node *) malloc ( sizeof(llist_node) );
    nnode->next = NULL;
    nnode->previous = NULL;
    nnode->list = NULL;
    return nnode;
}

llist * new_llist (void) {
    llist * nlist = (llist *) malloc ( 512 );
    // llist * nlist = (llist *) malloc ( sizeof(llist) );
    nlist->head = NULL;
    nlist->tail = NULL;
    nlist->count = 0;
    return nlist;
}

void add_int_tail ( int ibody, llist * list ) {
    llist_node * nnode = new_lnode();
    nnode->ibody = ibody;
    list->count++;
    nnode->next = NULL;
    if ( list->head == NULL ) {
        list->head = nnode;
        list->tail = nnode;
    }
    else {
        nnode->previous = list->tail;
        list->tail->next = nnode;
        list->tail = nnode;
    }
}

void destroy_list_nodes ( llist_node * nodes ) {
    llist_node * llnp = NULL;
    llist_node * llnpnext = NULL;
    llist_node * llnp2 = NULL;
    if ( nodes == NULL )
        return;
    for ( llnp = nodes; llnp != NULL; llnp = llnpnext ) {
        llnpnext = llnp->next;
        free (llnp);
    }
    return;
}

void destroy_list ( llist * list ) {
    destroy_list_nodes ( list->head );
    free (list);
}

int main () {
    int i = 0;
    int j = 0;
    llist * list = new_llist ();
    for ( i = 0; i < 100; i++ ) {
        for ( j = 0; j < 100; j++ ) {
            add_int_tail ( i+j, list );
        }
    }
    printf("enter to continue and free memory...");
    getchar();
    destroy_list ( list );
    printf("memory freed. enter to exit...");
    getchar();
    printf( "\n");
    return 0;
}
If by "the process continues to hold the allocated memory" you mean that ps doesn't report a decrease in the process's memory usage, that's perfectly normal. Returning memory to your process's heap doesn't necessarily make the process return it to the operating system, for all sorts of reasons. If you create and destroy your list over and over again, in a big loop, and the memory usage of your process doesn't grow without limit, then you probably haven't got a real memory leak.
[EDITED to add: See also Will malloc implementations return free-ed memory back to the system? ]
[EDITED again to add: Incidentally, the most likely reason why allocating 512-byte blocks makes the problem go away is that your malloc implementation treats larger blocks specially in some way that makes it easier for it to notice when there are whole pages that are no longer being used -- which is necessary if it's going to return any memory to the OS.]
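On glibc specifically, you can sometimes observe this directly: malloc_trim(0) (declared in malloc.h) asks the allocator to hand any whole free pages at the top of the heap back to the OS, and returns nonzero if it released anything. A minimal sketch, assuming Linux/glibc:
#include <malloc.h>   /* glibc-specific: malloc_trim */

/* ... after destroy_list(list) ... */
if (malloc_trim(0))   /* nonzero means pages were actually released */
    printf("some memory was returned to the OS\n");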
I discovered the answer to my question here:
http://linuxupc.upc.es/~pep/OLD/man/malloc.html
Memory obtained by expanding the heap can be returned to the kernel if the conditions configured by __noshrink are satisfied; only then will ps notice that the memory has been freed.
It can be important to configure this, particularly when memory usage is small but the heap has grown bigger than the available main memory: the program then thrashes even though the memory it actually needs is less than the available main memory.
