Memory leak in QLocalServer/QLocalSocket

I have a problem using QLocalServer/QLocalSocket.
I'm sending raw pixel data from a server to a client, and there is a huge memory leak while the processes run, but I can't figure out the reason.
Memory usage grows by roughly 20 MB per second (watching the process in the system monitor).
Here is my code.
Server
void qsharedServer::updateImageData(unsigned char* r_data, int r_width, int r_height, int r_step, int r_label_i)
{
    QLocalSocket* connection = clientSocket;
    if (connection)
    {
        if (connection->isOpen())
        {
            QByteArray block;
            QDataStream out(&block, QIODevice::WriteOnly);
            out.setVersion(QDataStream::Qt_5_7);
            const char* rc_data = reinterpret_cast<const char*>(r_data);
            out << r_step * r_height << r_width << r_height << r_step;
            out.writeBytes(rc_data, r_step * r_height);
            connection->write(block);
            connection->flush();
        }
    }
}
Client
void qsharedClient::readSocket()
{
    QByteArray block = connection->readAll();
    QDataStream in(&block, QIODevice::ReadOnly);
    in.setVersion(QDataStream::Qt_5_7);
    /* Read Raw Data */
    char* data;
    uint size;
    int width;
    int height;
    int step;
    while (!in.atEnd())
    {
        in >> size >> width >> height >> step;
        in.readBytes(data, size);
    }
    emit drawData((unsigned char*)data, width, height, step);
}
The two sides communicate fine, but memory usage climbs very sharply and the process is killed once it exceeds a certain level.
I tried things like connection->reset() and QByteArray::clear(), etc., but they didn't help.
Does anyone have an idea about my problem?
Would switching to QTcpServer/QTcpSocket solve it?
Please share your ideas. Thanks!

This is probably happening because the named pipe keeps all written data buffered until you close it.
As a solution, call disconnectFromServer() on the socket (on either the sender or the receiver side, it does not matter) once you have sent/received enough data for one packet.
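For example, on the receiving side it could look roughly like this (a sketch only; it reuses the frame layout and member names from the question, and the delete[] and reconnect handling are assumptions rather than part of the original code):
void qsharedClient::readSocket()
{
    QByteArray block = connection->readAll();
    QDataStream in(&block, QIODevice::ReadOnly);
    in.setVersion(QDataStream::Qt_5_7);
    char* data = nullptr;
    uint size;
    int width, height, step;
    in >> size >> width >> height >> step;
    in.readBytes(data, size);                      // QDataStream allocates data with new[]
    emit drawData((unsigned char*)data, width, height, step);
    delete[] data;                                 // assumption: the connected slot has copied the pixels
    connection->disconnectFromServer();            // let the pipe release what it buffered for this packet
}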

Related

GRUB memory map gives me weird values

I am trying to use GRUB to get the memory map instead of going through the BIOS route. The problem is that GRUB seems to be giving me very weird values for some reason. Can anyone help with this?
Relevant code:
This is how I parse the mmap:
void mm_init(mmap_entry_t *mmap_addr, uint32_t length)
{
    mmap = mmap_addr;
    /* Loop through mmap */
    printk("-- Scanning memory map --");
    for (size_t i = 0; mmap < (mmap_addr + length); i++) {
        /* RAM is available! */
        if (mmap->type == 1) {
            uint64_t starting_addr = (((uint64_t) mmap->base_addr_high) << 32) | ((uint64_t) mmap->base_addr_low);
            uint64_t length = (((uint64_t) mmap->length_high) << 32) | ((uint64_t) mmap->length_low);
            printk("Found segment starting from 0x%x, with a length of %i", starting_addr, length);
        }
        /* Next entry */
        mmap = (mmap_entry_t *) ((uint32_t) mmap + mmap->size + sizeof(mmap->size));
    }
}
This is my mmap_entry_t struct (not the one in multiboot.h):
struct mmap_entry {
    uint32_t size;
    uint32_t base_addr_low, base_addr_high;
    uint32_t length_low, length_high;
    uint8_t type;
} __attribute__((packed));
typedef struct mmap_entry mmap_entry_t;
And this is how I call mm_init()
/* Kernel main function */
void kmain(multiboot_info_t *info)
{
    /* Check if grub can give us a memory map */
    /* TODO: Detect manually */
    if (!(info->flags & (1 << 6))) {
        panic("couldn't get memory map!");
    }
    /* Init mm */
    mm_init((mmap_entry_t *) info->mmap_addr, info->mmap_length);
    for (;;);
}
This is the output I get on qemu:
-- Scanning memory map --
Found segment starting from 0x0, with a length of 0
Found segment starting from 0x100000, with a length of 0
And yes, I am pushing eax and ebx before calling kmain. Any ideas on what is going wrong here?
It turns out that the 64-bit bit-masking code was the problem. If we drop it and keep plain 32-bit addresses, the memory map works just fine.
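In code terms, that fix amounts to printing the 32-bit fields directly instead of assembling 64-bit values first (a sketch based on the loop above; only the body of the if changes):
if (mmap->type == 1) {
    /* below 4 GiB the high halves are zero, so the low halves are enough */
    printk("Found segment starting from 0x%x, with a length of %i",
           mmap->base_addr_low, mmap->length_low);
}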

reading a memory mapped file after it's closed

The mmap man page indicates that closing a file does not unmap its pages. However, I wonder whether the following sequence is valid, given that by the time the read occurs the pages have likely not yet been faulted into memory. In other words, is the file effectively still open after the close? Also, is the behavior the same on both Android and iOS?
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdint.h>

void func()
{
    auto fd = open("test.txt", O_RDONLY);
    void *ptr = mmap(nullptr, 16384, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    uint8_t *p = (uint8_t *)ptr;
    // Read from *p
}

Store the ADC stream on µSD Card without loss on STM32H743ZI

I am working on a project in which I have to store the data from an ADC stream on a µSD card. However, even though I use a 16-bit buffer, I lose data from the ADC stream. The ADC runs with DMA, and I use FatFs (WITHOUT DMA) and the SDMMC1 peripheral to fill a .bin file with the data.
Do you have any ideas on how to avoid this loss?
Here is my project: https://github.com/mathieuchene/STM32H743ZI
I use a Nucleo-H743ZI2 board, CubeIDE, and CubeMX in their latest versions.
EDIT 1
I tried to implement Colin's solution; it is better, but I get strange data in the middle of my acquisition, and when I increase the maximum count value or try to debug, HardFault_Handler is triggered. I modified main.c by creating two blocks (uint16_t blockX[BUFFERLENGTH/2]) and two flags for when adcBuffer is half filled or completely filled.
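For context, the two flags are set from the standard HAL ADC conversion callbacks, along these lines (a sketch: these callback bodies are assumed, with the flag names matching the code below):
volatile uint8_t flagHlfCplt = 0, flagCplt = 0;

/* sketch: assumed callback bodies, not the exact project code */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    flagHlfCplt = 1;   /* first half of adcBuffer has been filled by DMA */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    flagCplt = 1;      /* second half of adcBuffer has been filled by DMA */
}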
I also changed the while(1) part of the main function like this:
if (flagHlfCplt){
    //flagCplt=0;
    res = f_write(&SDFile, block1, strlen((char*)block1), (void *)&byteswritten);
    memcpy(block2, adcBuffer, BUFFERLENGTH/2);
    flagHlfCplt = 0;
    count++;
}
if (flagCplt){
    //flagHlfCplt=0;
    res = f_write(&SDFile, block2, strlen((char*)block2), (void *)&byteswritten);
    memcpy(block1, adcBuffer[(BUFFERLENGTH/2)-1], BUFFERLENGTH/2);
    flagCplt = 0;
    count++;
}
if (count == 10){
    f_close(&SDFile);
    HAL_ADC_Stop_DMA(&hadc1);
    while(1){
        HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
        HAL_Delay(1000);
    }
}
}
EDIT 2
I modified my program. I set block1 and block2 to a length of BUFFERLENGTH and added a pointer (*idx) to switch which buffer is being filled. I no longer get HardFault_Handler, but I still lose some data from my ADC stream.
Here are the modifications I made:
// my pointer and buffers
uint16_t block1[BUFFERLENGTH], block2[BUFFERLENGTH], *idx;
// init of pointer and adc start
idx = block1;
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)idx, BUFFERLENGTH);
// while(1) part
while (1)
{
    if (flagCplt){
        if (flagToChangeBuffer) {
            idx = block1;
            res = f_write(&SDFile, block2, strlen((char*)block2), (void *)&byteswritten);
            flagCplt = 0;
            flagToChangeBuffer = 0;
            count++;
        }
        else {
            idx = block2;
            res = f_write(&SDFile, block1, strlen((char*)block1), (void *)&byteswritten);
            flagCplt = 0;
            flagToChangeBuffer = 1;
            count++;
        }
    }
    if (count == 150){
        f_close(&SDFile);
        HAL_ADC_Stop_DMA(&hadc1);
        while(1){
            HAL_GPIO_TogglePin(LD1_GPIO_Port, LD1_Pin);
            HAL_Delay(1000);
        }
    }
}
Does anyone know how to solve this data-loss problem?
Best Regards
Mathieu

part of gpu memory is not released after asynchronous resize

I'm facing a problem with GPU resize using OpenCV.
Here is my code:
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/cudawarping.hpp>
using namespace std;

#define MX 500
#define ASYNC 0

class job {
public:
    cv::cuda::GpuMat gpuImage;
    cv::cuda::Stream stream;
    cv::Mat cpuImage;
    ~job() {
        printf("job deleted\n");
    }
};

void onComplete(int status, void* uData) {
    job* _job = (job*) uData;
    delete _job;
}

void resize(job* _job, vector<uchar> buffer) {
    _job->cpuImage = cv::imdecode(buffer, cv::IMREAD_COLOR);
    if (ASYNC) {
        _job->gpuImage.upload(_job->cpuImage, _job->stream);
        cv::cuda::resize(_job->gpuImage, _job->gpuImage, cv::Size(100, 100), 0, 0, cv::INTER_NEAREST, _job->stream);
        _job->gpuImage.download(_job->cpuImage, _job->stream);
        _job->stream.enqueueHostCallback(onComplete, _job);
        // _job->stream.waitForCompletion();
    } else {
        _job->gpuImage.upload(_job->cpuImage);
        cv::cuda::resize(_job->gpuImage, _job->gpuImage, cv::Size(100, 100), 0, 0, cv::INTER_NEAREST);
        _job->gpuImage.download(_job->cpuImage);
        delete _job;
    }
}

vector<uchar> readFile(string filename) {
    std::ifstream input(filename, std::ios::binary);
    std::vector<unsigned char> buffer(std::istreambuf_iterator<char>(input), {});
    return buffer;
}

int main() {
    for (int i = 0; i < MX; i++) {
        vector<uchar> buf = readFile("input.jpg");
        job* _job = new job();
        resize(_job, buf);
        printFreeGPUMemory();   // helper from the original project, not shown here
    }
    while (true) {
        // wait
    }
    return 0;
}
When I run the resize synchronously (ASYNC = 0), the code works perfectly fine. But when I run it asynchronously (ASYNC = 1), some GPU memory seems to be lost somewhere, despite the fact that I delete all created GpuMats and Streams. The more loop iterations I run, the less free memory I have. Is there a bug, or is part of my code wrong?
Problem solved.
Here is the note about callbacks from the OpenCV docs:
Callbacks must not make any CUDA API calls. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.
I had read the note, but I hadn't realized that even deleting a cv::cuda::* object counts. So the solution is to avoid "touching" any cv::cuda::* object in the callback, including deleting or releasing it.
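One way to apply that in this program (a sketch only; the hand-off queue and the drain function are assumed names, not part of the original code) is to let the callback merely record that the job is finished and do the actual delete, which destroys the GpuMat and Stream, outside the callback:
#include <mutex>
#include <queue>

std::mutex doneMutex;
std::queue<job*> doneJobs;            // assumed hand-off queue, not in the original code

void onComplete(int status, void* uData) {
    // no CUDA calls here: just remember the finished job
    std::lock_guard<std::mutex> lock(doneMutex);
    doneJobs.push((job*) uData);
}

// call this periodically from main(), e.g. inside the wait loop
void drainFinishedJobs() {
    std::lock_guard<std::mutex> lock(doneMutex);
    while (!doneJobs.empty()) {
        delete doneJobs.front();      // GpuMat/Stream are released outside the callback
        doneJobs.pop();
    }
}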

Limiting a Lua script's memory usage?

I've seen it said multiple times that there is no way to limit a Lua script's memory usage, including people jumping through hoops to prevent Lua scripts from creating functions and tables. But given that lua_newstate allows you to pass a custom allocator, couldn't one just use that to limit memory consumption? At worst, one could use an arena-based allocator and put a hard limit even on the amount of memory that could be used by fragmentation.
Am I missing something here?
static void *l_alloc_restricted (void *ud, void *ptr, size_t osize, size_t nsize)
{
    const int MAX_SIZE = 1024;   /* set limit here */
    int *used = (int *)ud;

    if (ptr == NULL) {
        /*
         * <http://www.lua.org/manual/5.2/manual.html#lua_Alloc>:
         * When ptr is NULL, osize encodes the kind of object that Lua is
         * allocating.
         *
         * Since we don't care about that, just mark it as 0.
         */
        osize = 0;
    }

    if (nsize == 0)
    {
        free(ptr);
        *used -= osize;   /* subtract old size from used memory */
        return NULL;
    }
    else
    {
        if (*used + (nsize - osize) > MAX_SIZE)   /* too much memory in use */
            return NULL;
        ptr = realloc(ptr, nsize);
        if (ptr)   /* reallocation successful? */
            *used += (nsize - osize);
        return ptr;
    }
}
To make Lua use your allocator, you can use:
int *ud = malloc(sizeof(int)); *ud = 0;
lua_State *L = lua_newstate(l_alloc_restricted, ud);
Note: I haven't tested the source, but it should work.
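A quick way to see the limit in action (sketch only; it assumes MAX_SIZE is raised high enough for the interpreter itself to start, and that the standard auxiliary library is available):
#include <stdio.h>
#include <stdlib.h>
#include <lua.h>
#include <lauxlib.h>

int main(void)
{
    int *ud = malloc(sizeof(int)); *ud = 0;
    lua_State *L = lua_newstate(l_alloc_restricted, ud);
    if (L == NULL) return 1;   /* state creation itself can fail under a very small limit */

    /* keeps allocating until the allocator starts returning NULL */
    if (luaL_loadstring(L, "local t = {} for i = 1, 1e7 do t[i] = i end") == LUA_OK
            && lua_pcall(L, 0, 0, 0) == LUA_OK)
        printf("script finished within the limit\n");
    else
        printf("script failed: %s\n", lua_tostring(L, -1));   /* typically "not enough memory" */

    lua_close(L);
    free(ud);
    return 0;
}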
