I am trying to adjust the stack size of threads:
- (void)testStack:(NSInteger)n {
    NSThread *thread = [[NSThread alloc] initWithTarget:self selector:@selector(dummy) object:nil];
    NSUInteger size = 4096 * n;
    [thread setStackSize:size];
    [thread start];
}
- (void)dummy {
    NSUInteger bytes = [[NSThread currentThread] stackSize];
    NSLog(@"%@", @(bytes));
}
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    for (NSInteger i = 126; i <= 130; i++) {
        [self testStack:i];
    }
    return YES;
}
In the output, the size is not changed:
2015-06-19 11:05:06.912 Stack[52982:2082454] 524288
2015-06-19 11:05:06.913 Stack[52982:2082457] 524288
2015-06-19 11:05:06.913 Stack[52982:2082456] 524288
2015-06-19 11:05:06.913 Stack[52982:2082458] 524288
2015-06-19 11:05:06.913 Stack[52982:2082455] 524288
Is the iPhone stack size fixed?
P.S. I am testing the above on an iPhone 6 Plus, in debug mode.
UPDATE: the stack size can be adjusted when running in the Simulator on a MacBook:
2015-06-19 11:25:17.042 Stack[1418:427993] 528384
2015-06-19 11:25:17.042 Stack[1418:427994] 532480
2015-06-19 11:25:17.042 Stack[1418:427992] 524288
2015-06-19 11:25:17.042 Stack[1418:427991] 520192
2015-06-19 11:25:17.042 Stack[1418:427990] 516096
The stack size is bounded on the device; in most cases it cannot exceed 1 MB for the main thread on iPhone OS, nor can it be shrunk.
The minimum allowed stack size for secondary threads is 16 KB, and the stack size must be a multiple of 4 KB. The space for this memory is set aside in your process space at thread creation time, but the actual pages associated with that memory are not created until they are needed.
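Given those constraints, a requested size has to be clamped to the documented 16 KB minimum and rounded up to a 4 KB multiple before being passed to setStackSize:. A small sketch of such a helper (the function name is illustrative, not an API):

#include <stddef.h>

/* Clamp to the documented 16 KB minimum for secondary threads and
   round up to the next 4 KB multiple. */
static size_t valid_stack_size(size_t requested)
{
    const size_t page = 4096;
    size_t size = requested < 16 * 1024 ? (size_t)(16 * 1024) : requested;
    return (size + page - 1) & ~(page - 1);
}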
Actually, you can set it. I am not sure if this changed with iOS 10, but on iOS 10.2.1 this does work. The only limitation is that the stack size has to be a multiple of 4 KB.
#include <pthread.h>
#include <stdio.h>

pthread_attr_t tattr;
size_t size;

int ret = pthread_attr_init(&tattr);
ret = pthread_attr_getstacksize(&tattr, &size);
printf("Get: ret=%d, size=%zu\n", ret, size);

size = 4096 * 512;
ret = pthread_attr_setstacksize(&tattr, size);
int ret2 = pthread_attr_getstacksize(&tattr, &size);
printf("Set & Get: ret=%d ret2=%d, size=%zu\n", ret, ret2, size);
Related
When I jump from FreeRTOS App1 to FreeRTOS App2, the program gets stuck in the default handler.
FreeRTOS App1 boundary: 0x0 to 0x14000
FreeRTOS App2 boundary: 0x15000 to 0x30000
#define FAPP2_ADDRESS ((uint32_t)0x15000)
#define FAPP2_SIZE ((uint32_t)0x15000)
#define MSP_SPMPU (0)
#define PSP_SPMPU (1)
void app_jumpTo(uint32_t jumpLocation)
{
if (jumpLocation != 0xFFFFFFFF)
{
__set_MSP (*(uint32_t*) jumpLocation);
__set_PC (*(uint32_t*) (jumpLocation + 4));
}
}
void jump_to_app2(void)
{
xTimerStop(g_rtos_timer0, 0);
R_IOPORT_Close (&g_ioport_ctrl);
__disable_irq ();
memset ((uint32_t*) NVIC->ICER, 0xFF, sizeof(NVIC->ICER));
memset ((uint32_t*) NVIC->ICPR, 0xFF, sizeof(NVIC->ICPR));
SysTick->CTRL = 0;
SCB->ICSR |= SCB_ICSR_PENDSTCLR_Msk;
SCB->VTOR = (uint32_t) FAPP2_ADDRESS;
/* Disable the HW Stack monitor for MSP and PSP */
R_MPU_SPMON->SP[MSP_SPMPU].PT = 0xA500;
R_MPU_SPMON->SP[PSP_SPMPU].PT = 0xA500;
R_MPU_SPMON->SP[MSP_SPMPU].CTL = 0x0000;
R_MPU_SPMON->SP[PSP_SPMPU].CTL = 0x0000;
//Jump to our application entry point
app_jumpTo ((uint32_t) FAPP2_ADDRESS);
}
The Fault Status window shows:
HFSR 0x40000000
MMFSR 0x0
UFSR 0x2
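For reference, HFSR 0x40000000 is a forced HardFault, and UFSR bit 1 (INVSTATE) typically means execution was attempted with the Thumb bit clear, e.g. after loading the PC with an address whose bit 0 is 0. A common pattern that preserves the Thumb bit is to branch through a function pointer taken from the new vector table; a sketch (not necessarily the full fix here):

typedef void (*app_entry_t)(void);

void app_jumpTo(uint32_t jumpLocation)
{
    if (jumpLocation != 0xFFFFFFFF)
    {
        /* Word 0 of the vector table holds the initial MSP, word 1 the
           reset handler address with bit 0 set for Thumb. */
        uint32_t sp = *(volatile uint32_t *) jumpLocation;
        app_entry_t entry = (app_entry_t) (*(volatile uint32_t *) (jumpLocation + 4U));
        __set_MSP(sp);
        entry(); /* branching through the pointer keeps EPSR.T set */
    }
}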
I am facing an issue while using the cJSON library. I am assuming there is a memory leak that breaks the code after a certain time (40 minutes to 1 hour).
I have copied my code below:
void my_work_handler_5(struct k_work *work)
{
    char *ptr1[6];
    int y = 0;
    static int counterdo = 0;
    char *desc6 = "RSRP";
    char *id6 = "dBm";
    char *type6 = "RSRP";
    char rsrp_str[100];
    snprintf(rsrp_str, sizeof(rsrp_str), "%d", rsrp_current);
    sensor5 = cJSON_CreateObject();
    cJSON_AddItemToObject(sensor5, "description", cJSON_CreateString(desc6));
    cJSON_AddItemToObject(sensor5, "Time", cJSON_CreateString(time_string));
    cJSON_AddItemToObject(sensor5, "value", cJSON_CreateNumber(rsrp_current));
    cJSON_AddItemToObject(sensor5, "unit", cJSON_CreateString(id6));
    cJSON_AddItemToObject(sensor5, "type", cJSON_CreateString(type6));
    /* print everything */
    ptr1[counterdo] = cJSON_Print(sensor5);
    printk("Counterdo value is : %d\n", counterdo);
    cJSON_Delete(sensor5);
    counterdo = counterdo + 1;
    if (counterdo == 6) {
        for (y = 0; y <= counterdo; y++) {
            free(ptr1[y]);
        }
        counterdo = 0;
    }
    return;
}
I read some other threads about freeing the memory and tried to do the same. Can anyone let me know if this is the right approach to free the space allocated to the cJSON object?
Regards,
Adeel.
Since cJSON is a portable library with no dependencies, it is easier to look for a potential issue in your code on a PC: there are specialized tools available in that environment to facilitate the investigation. I am assuming here that you have a Linux system, a Windows system with WSL or WSL2 installed, or a Linux virtual machine available, with gcc and valgrind installed.
A minimal, self-contained, portable version of your code could be:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <cJSON.h>
static int rsrp_current = 1;
static char *time_string = NULL;
void
my_work_handler_5 ()
{
char *ptr1[6];
int y = 0;
static int counterdo = 0;
char *desc6 = "RSRP";
char *id6 = "dBm";
char *type6 = "RSRP";
char rsrp_str[100];
snprintf (rsrp_str, sizeof (rsrp_str), "%d", rsrp_current);
cJSON *sensor5 = cJSON_CreateObject ();
cJSON_AddItemToObject (sensor5, "description", cJSON_CreateString (desc6));
cJSON_AddItemToObject (sensor5, "Time", cJSON_CreateString (time_string));
cJSON_AddItemToObject (sensor5, "value", cJSON_CreateNumber (rsrp_current));
cJSON_AddItemToObject (sensor5, "unit", cJSON_CreateString (id6));
cJSON_AddItemToObject (sensor5, "type", cJSON_CreateString (type6));
/* print everything */
ptr1[counterdo] = cJSON_Print (sensor5);
printf ("Counterdo value is : %d\n", counterdo);
cJSON_Delete (sensor5);
counterdo = counterdo + 1;
if (counterdo == 6)
{
for (y = 0; y <= counterdo; y++)
{
free (ptr1[y]);
}
counterdo = 0;
}
return;
}
int
main (int argc, char **argv)
{
time_t curtime;
time (&curtime);
for (int n = 0; n < 3 * 6; n++)
{
my_work_handler_5 ();
}
}
Build procedure:
wget https://github.com/DaveGamble/cJSON/archive/v1.7.14.tar.gz
tar zxf v1.7.14.tar.gz
gcc -g -O0 -IcJSON-1.7.14 -o cjson cjson.c cJSON-1.7.14/cJSON.c
Running valgrind on the program:
valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes --verbose ./cjson
...indicates that some memory is being freed that was not previously allocated (Invalid free() / delete / delete[] / realloc()):
==6747==
==6747== HEAP SUMMARY:
==6747== in use at exit: 0 bytes in 0 blocks
==6747== total heap usage: 271 allocs, 274 frees, 14,614 bytes allocated
==6747==
==6747== All heap blocks were freed -- no leaks are possible
==6747==
==6747== ERROR SUMMARY: 21 errors from 2 contexts (suppressed: 0 from 0)
==6747==
==6747== 3 errors in context 1 of 2:
==6747== Invalid free() / delete / delete[] / realloc()
==6747== at 0x483CA3F: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==6747== by 0x1094DA: my_work_handler_5 (cjson.c:42)
==6747== by 0x10955A: main (cjson.c:59)
==6747== Address 0x31 is not stack'd, malloc'd or (recently) free'd
==6747==
==6747==
==6747== 18 errors in context 2 of 2:
==6747== Conditional jump or move depends on uninitialised value(s)
==6747== at 0x483C9F5: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==6747== by 0x1094DA: my_work_handler_5 (cjson.c:42)
==6747== by 0x10955A: main (cjson.c:59)
==6747== Uninitialised value was created by a stack allocation
==6747== at 0x109312: my_work_handler_5 (cjson.c:11)
==6747==
==6747== ERROR SUMMARY: 21 errors from 2 contexts (suppressed: 0 from 0)
Replacing:
for (y = 0; y <= counterdo; y++)
{
free (ptr1[y]);
}
by:
for (y = 0; y < counterdo; y++)
{
free (ptr1[y]);
}
and executing valgrind again:
==6834==
==6834== HEAP SUMMARY:
==6834== in use at exit: 1,095 bytes in 15 blocks
==6834== total heap usage: 271 allocs, 256 frees, 14,614 bytes allocated
==6834==
==6834== Searching for pointers to 15 not-freed blocks
==6834== Checked 75,000 bytes
==6834==
==6834== 1,095 bytes in 15 blocks are definitely lost in loss record 1 of 1
==6834== at 0x483DFAF: realloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==6834== by 0x10B161: print (cJSON.c:1209)
==6834== by 0x10B25F: cJSON_Print (cJSON.c:1248)
==6834== by 0x1094AB: my_work_handler_5 (cjson.c:30)
==6834== by 0x10959C: main (cjson.c:59)
==6834==
==6834== LEAK SUMMARY:
==6834== definitely lost: 1,095 bytes in 15 blocks
==6834== indirectly lost: 0 bytes in 0 blocks
==6834== possibly lost: 0 bytes in 0 blocks
==6834== still reachable: 0 bytes in 0 blocks
==6834== suppressed: 0 bytes in 0 blocks
==6834==
==6834== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Some memory is definitely being leaked.
The reason is that char *ptr1[6] is not static, and is therefore created on the stack every time my_work_handler_5() is called. The pointers returned by cJSON_Print() are therefore lost between two calls, and free() is called on arbitrary pointer values, since ptr1[] is not initialized, as it could be:
char *ptr1[6] = { NULL, NULL, NULL, NULL, NULL, NULL };
Since you are freeing memory only every 6 calls, this causes the memory leak you were suspecting.
Replacing:
char *ptr1[6];
by:
static char *ptr1[6];
compiling, running valgrind again:
==6927==
==6927== HEAP SUMMARY:
==6927== in use at exit: 0 bytes in 0 blocks
==6927== total heap usage: 271 allocs, 271 frees, 14,614 bytes allocated
==6927==
==6927== All heap blocks were freed -- no leaks are possible
==6927==
==6927== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
The modified version of the program should now work on your bare-metal system.
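As a side note, keeping the printed strings in an array at all is not required; a variant that releases each string as soon as it has been used needs no state across calls. A sketch using the same cJSON API (cJSON_free() is the deallocator paired with cJSON's allocator), shortened to two fields:

void my_work_handler_5(struct k_work *work)
{
    cJSON *sensor5 = cJSON_CreateObject();
    cJSON_AddItemToObject(sensor5, "description", cJSON_CreateString("RSRP"));
    cJSON_AddItemToObject(sensor5, "value", cJSON_CreateNumber(rsrp_current));

    char *printed = cJSON_Print(sensor5);
    if (printed != NULL) {
        printk("%s\n", printed);
        cJSON_free(printed); /* release the printed string immediately */
    }
    cJSON_Delete(sensor5);   /* release the whole object tree */
}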
I am working with a particular encoder which uses 512 MB of boot-time-allocated memory. I am supposed to allocate the memory through dts file configuration. The physical memory is divided into low and high regions as below in the dts file:
/* 256MB at 0x0 */
memory@0 {
    device_type = "memory";
    reg = <0x0 0x00000000 0x0 0x10000000>;
};

/* 2GB at 0x8010000000 */
memory@8000000000 {
    device_type = "memory";
    reg = <0x80 0x10000000 0x0 0x80000000>;
};
Now I want to allocate the boot-time carved-out memory from the high memory. I can think of creating a dts entry as below:
encoder: encoder@0xxxxxxxxx {
    compatible = "xyz, abc";
    reg = <0x0 0x80000000 0x0 0x20000000>;
};
Here encoder@0xxxxxxxxx is the actual device entry, and it already has a set of register regions as below:
encoder: encoder@0xxxxxxxxx {
    compatible = "xyz, abc";
    #address-cells = <1>;
    #size-cells = <1>;
    reg = <0x0 0xxxxxxxxxx 0x0 0x1234>;
    status = "disabled";
};
So after adding the carved-out memory, the entry would look like this:
encoder: encoder@0xxxxxxxxx {
    compatible = "xyz, abc";
    #address-cells = <1>;
    #size-cells = <1>;
    reg = <0x0 0xxxxxxxxxx 0x0 0x1234>;
    reg = <0x0 0x80000000 0x0 0x20000000>;
    status = "disabled";
};
Would this work? I am not sure how the driver code would know the start address of the carved-out memory and its size.
Can anyone please help?
Thanks.
I achieved this by carving out memory from the lower part of the high memory region for my device. To do this I modified my dtsi file to reflect the new memory map as below:
/* 256MB at 0x0 - this is the low memory region */
memory@0 {
    device_type = "memory";
    reg = <0x0 0x00000000 0x0 0x10000000>;
};

/* 1GB at 0x8010000000 - high memory region; this used to be 2GB. I carved it down to 1GB and gave 1GB to my device */
memory@8000000000 {
    device_type = "memory";
    reg = <0x80 0x10000000 0x0 0x40000000>;
};
/* 1GB carved out for my device */
my_device: mydevice@8050000000 {
    compatible = "xxx,xxx-yyy";
    reg = <0x00 0x20810000 0x0 0x010000>,
          <0x80 0x50000000 0x0 0x40000000>;
    interrupts = <GIC_SHARED xx IRQ_TYPE_LEVEL_HIGH>;
    status = "disabled";
};
Allocating 1GB was a bit too much, and I have since reduced it to 256MB, but that is not important here.
Then in the driver I retrieved the memory details as below:
struct resource *res;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); /* gets hold of the registers of my device */
res = platform_get_resource(pdev, IORESOURCE_MEM, 1); /* the device memory set aside in the dtsi file above */

To use a memory region retrieved as above:

res->start                              /* start address of the memory region */
res->end - res->start + 1               /* size of the memory region (what resource_size(res) computes) */
devm_ioremap_resource(&pdev->dev, res); /* mapped kernel virtual address for the memory bank */
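Note that the two platform_get_resource() calls above overwrite the same pointer; in a real probe() the resources would be kept separate. A minimal sketch under that assumption (function and variable names are illustrative, not from the original driver):

#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/err.h>

static int mydevice_probe(struct platform_device *pdev)
{
    struct resource *regs, *carveout;
    resource_size_t size;
    void __iomem *base;

    regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);     /* register block */
    carveout = platform_get_resource(pdev, IORESOURCE_MEM, 1); /* carved-out RAM */
    if (!regs || !carveout)
        return -EINVAL;

    base = devm_ioremap_resource(&pdev->dev, regs);
    if (IS_ERR(base))
        return PTR_ERR(base);

    size = resource_size(carveout); /* end - start + 1 */
    dev_info(&pdev->dev, "carveout at %pa, size %pa\n", &carveout->start, &size);

    return 0;
}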
I have a function (ANSI C) to retrieve the time from our ntpd server.
This code works properly when compiled for 32-bit, but does not work when compiled for arm64.
It works properly on iPhone 4/4S/5 (32-bit); it does not work properly on iPhone 5s/6/6S (64-bit).
I think the problem is here:
tmit=ntohl((time_t)buf[10]); //# get transmit time
time_t is 8 bytes when compiled for arm64...
Below you can find the source code:
Output (correct) with iPhone 5 Simulator (32-bit):
xxx.xxx.xxx.xxx PORT 123
sendto-->48
prima recv
recv-->48
tmit=-661900093
tmit=1424078403
1424078403-->Time: Mon Feb 16 10:20:03 2015
10:20:03 --> 37203
Output (wrong) with iPhone 6 Simulator (64-bit):
xxx.xxx.xxx.xxx PORT 123
sendto-->48
prima recv
recv-->48
tmit=19612797
tmit=2105591293
2105591293-->Time: Tue Nov 19 00:47:09 38239
00:47:09 --> 2829
//---------------------------------------------------------------------------
long ntpdate(char *hostname) {
//ntp1.inrim.it (193.204.114.232)
//ntp2.inrim.it (193.204.114.233)
int portno=NTP_PORT; //NTP is port 123
int maxlen=1024; //check our buffers
int i=0; // misc var i
unsigned char msg[48]={010,0,0,0,0,0,0,0,0}; // the packet we send
unsigned long buf[maxlen]; // the buffer we get back
//struct in_addr ipaddr; //
struct protoent *proto; //
struct sockaddr_in server_addr;
int s; // socket
int tmit; // the time -- This is a time_t sort of
char ora[20]="";
//
//#we use the system call to open a UDP socket
//socket(SOCKET, PF_INET, SOCK_DGRAM, getprotobyname("udp")) or die "socket: $!";
proto=getprotobyname("udp");
s=socket(PF_INET, SOCK_DGRAM, proto->p_proto);
if(s==-1) {
//printf("ERROR socket=%d\n",s);
return -1;
}
//Set the timeout for receiving --------------------
struct timeval tv;
tv.tv_sec = TIMEOUT_NTP; //sec
tv.tv_usec = 0;
if (setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(struct timeval)) != 0)
{
//printf("Error assigning socket option");
return -1;
}
memset( &server_addr, 0, sizeof( server_addr ));
server_addr.sin_family=AF_INET;
//convert the hostname to an IP address
struct hostent *hp = gethostbyname(hostname);
if (hp == NULL) {
return -1;
} else {
sprintf(hostname_ip, "%s", inet_ntoa( *( struct in_addr*)( hp -> h_addr_list[0])));
}
#ifdef LOG_NTP
printf("%s-->%s PORT %d\n",hostname,hostname_ip,portno);
#endif
server_addr.sin_addr.s_addr = inet_addr(hostname_ip);
server_addr.sin_port=htons(portno);
//printf("ipaddr (in hex): %x\n",server_addr.sin_addr);
/*
* build a message. Our message is all zeros except for a one in the
* protocol version field
* msg[] in binary is 00 001 000 00000000
* it should be a total of 48 bytes long
*/
// send the data
i=sendto(s,msg,sizeof(msg),0,(struct sockaddr *)&server_addr,sizeof(server_addr));
#ifdef LOG_NTP
printf("sendto-->%d\n",i);
#endif
if (i==-1)
return -1;
#ifdef LOG_NTP
printf("prima recv\n");
#endif
// get the data back
i=recv(s,buf,sizeof(buf),0);
#ifdef LOG_NTP
printf("recv-->%d\n",i);
#endif
if (i==-1)
{
#ifdef LOG_NTP
printf("Error: %s (%d)\n", strerror(errno), errno);
#endif
return -1;
}
//printf("recvfr: %d\n",i);
//We get 12 long words back in Network order
//for(i=0;i<12;i++)
//printf("%d\t%-8x\n",i,ntohl(buf[i]));
/*
* The high word of transmit time is the 10th word we get back
* tmit is the time in seconds not accounting for network delays which
* should be way less than a second if this is a local NTP server
*/
tmit=ntohl((time_t)buf[10]); //# get transmit time
#ifdef LOG_NTP
printf("tmit=%d\n",tmit);
#endif
/*
 * Convert the time to Unix standard time. NTP is the number of seconds
 * since 0000 UT on 1 January 1900; Unix time is seconds since 0000 UT
 * on 1 January 1970. There has been a trend to add about 2 leap seconds
 * every 3 years. Leap seconds are only an issue in the last second of
 * June and December; if you don't try to set the clock they can be
 * ignored, but this is important to people who coordinate times with
 * GPS clock sources.
 */
tmit-= 2208988800U;
#ifdef LOG_NTP
printf("tmit=%d\n",tmit);
#endif
/* Use the Unix library function to show the local time (it takes care
 * of timezone issues for both north and south of the equator and places
 * that do Summer time / Daylight saving time).
 */
//#compare to system time
#ifdef LOG_NTP
//printf("%d-->Time: %s\n",tmit,ctime((const time_t)&tmit));
printf("%d-->Time: %s\n",tmit,ctime((const time_t)&tmit));
#endif
//i=time(0);
//printf("%d-%d=%d\n",i,tmit,i-tmit);
//printf("System time is %d seconds off\n",i-tmit);
//Get the time and convert it from HH:MM:SS to seconds
strftime(ora, 20, "%T", localtime((const time_t)&tmit));
#ifdef LOG_NTP
printf("%s --> %ld\n",ora, C2TIME(ora));
#endif
return C2TIME(ora);
}
I solved the problem! Using:
uint32_t buf[maxlen];
uint32_t tmit;
instead of:
unsigned long buf[maxlen];
int tmit;
and defining a variable of type time_t:
time_t tmit_temp = tmit;
printf("%d-->Time: %s\n", tmit, ctime(&tmit_temp));
strftime(ora, 20, "%T", localtime(&tmit_temp));
This works properly! ;-)
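For reference, the underlying issue is the LP64 data model on arm64: unsigned long is 8 bytes there, so indexing an unsigned long buf[] walks the 48-byte NTP packet in 8-byte strides, and buf[10] starts at byte 80 instead of byte 40, past the transmit-time word (and past the datagram). A quick check one could run on both architectures:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 32-bit iOS: 4 and 4. arm64 (LP64): 8 and 4. */
    printf("sizeof(unsigned long)=%zu sizeof(uint32_t)=%zu\n",
           sizeof(unsigned long), sizeof(uint32_t));
    return 0;
}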
I am having a strange problem trying to read and write 9k bytes with open(), read() and write(). When I attempt to write 9k to a file and read it back, the data only goes up to 2250 bytes. After that everything is zeros.
Here is my code (except for the filename, which isn't relevant; I'm just putting it in NSDocumentDirectory):
int fp = open([appFile cStringUsingEncoding:NSASCIIStringEncoding], O_RDWR | O_CREAT, 0644);
[_detailViewController log:@"first open() returns %i (err: %i)", fp, errno];
int data2[10000];
int data3[10000];
for (int i = 0; i < 10000; i++) data2[i] = 1;
[_detailViewController log:@"resetting seek to 0"];
int seekPos = lseek(fp, 0, SEEK_SET);
int result = write(fp, data2, 9000);
[_detailViewController log:@"wrote 9k, result is %i", result];
[_detailViewController log:@"resetting seek to 0"];
seekPos = lseek(fp, 0, SEEK_SET);
result = read(fp, data3, 9000);
[_detailViewController log:@"read 9k, result is %i", result];
[_detailViewController log:@"values of data2[2248-2252] = 0x%x 0x%x 0x%x 0x%x 0x%x", data2[2248], data2[2249], data2[2250], data2[2251], data2[2252]];
[_detailViewController log:@"values of data3[2248-2252] = 0x%x 0x%x 0x%x 0x%x 0x%x", data3[2248], data3[2249], data3[2250], data3[2251], data3[2252]];
close(fp);
And here is the strange output:
2013-02-13 14:08:38.290 FileTester[2800:907] first open() returns 6 (err: 3)
2013-02-13 14:08:38.295 FileTester[2800:907] resetting seek to 0
2013-02-13 14:08:38.301 FileTester[2800:907] wrote 9k, result is 9000
2013-02-13 14:08:38.306 FileTester[2800:907] resetting seek to 0
2013-02-13 14:08:38.311 FileTester[2800:907] read 9k, result is 9000
2013-02-13 14:08:38.319 FileTester[2800:907] values of data2[2248-2252] = 0x1 0x1 0x1 0x1 0x1
2013-02-13 14:08:38.327 FileTester[2800:907] values of data3[2248-2252] = 0x1 0x1 0x0 0x0 0x0
As you can see on the last line, the data suddenly goes to zero.
Any ideas what I might be doing wrong? What really gets me is that both read() and write() return 9000.
As mentioned by ughoavgfhw (thanks!), the problem was that I was mixing up bytes and ints. 9000 bytes is the same thing as 2250 ints, since each int is 4 bytes.
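A minimal standalone sketch of the fix, sizing the I/O in bytes with sizeof so the whole int array round-trips (the file path is illustrative):

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int data2[10000], data3[10000];
    for (int i = 0; i < 10000; i++) data2[i] = 1;

    int fp = open("/tmp/filetester.bin", O_RDWR | O_CREAT, 0644);

    /* sizeof(data2) is 40000 bytes (10000 ints * 4 bytes each), not 10000 */
    ssize_t written = write(fp, data2, sizeof(data2));
    lseek(fp, 0, SEEK_SET);
    ssize_t got = read(fp, data3, sizeof(data3));

    printf("wrote %zd bytes, read %zd bytes\n", written, got);
    close(fp);
    return 0;
}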