I've been having problems with the way Mathematica evaluates my integral. It is a multiple integral with the outer integral being a contour integral. When I press "Shift + Enter" to run Mathematica, it says that it is running, but after some time it stops evaluating and does not give any output.
Here, the post says that on a 64-bit OS Mathematica will freely use all the available memory and grind the system to a halt, while on a 32-bit OS the maximum memory allocated to any one program is limited.
Although I have a 64-bit OS, I tried using MemoryConstrained to place a 1 GB cap, but Mathematica still stops the evaluation without giving any result. And when I check the memory used during evaluation with MaxMemoryUsed[], Mathematica does not even reach the 1 GB cap.
What are the possible sources of this problem?
Thanks in advance.
P.S. For reference, here is a picture of the specs of my computer
Related
I increased the memory limit of DrRacket a week ago, and now I want to reduce it to the same amount as before, so I set it back to 128 MB. But that has no effect... it always consumes much more than 128 MB...
It's really a problem because it causes my computer to overheat.
Does someone know how I can limit DrRacket so that it doesn't exceed 128 MB?
Here's a screenshot of the problem:
There is a difference between the memory used by a program and the memory used in total by DrRacket. When I start up DrRacket, before entering or running any program, I see that DrRacket uses 250 MB. The interactions window states I have limited memory to 128 MB too, so that particular program cannot go beyond those bounds, but there are features of DrRacket that use a lot more memory on your machine than on mine.
I went into the settings and removed some features I don't use (like Algol 60). When restarting after that, DrRacket used 50 MB less memory, which confirms the memory is used by DrRacket itself and not by your programs.
For a particularly complex program, I guess background expansion might use a lot of memory. Perhaps you can turn that off as well to see whether the currently used memory goes down.
About heat
As Óscar mentioned, memory usage has little to do with heat as long as you don't hear the swap being used (heavy disk activity). Heat has to do with CPU usage. When doing calculations, the OS will make available resources available to the process and perhaps increase the CPU frequency, which increases the heat.
If you are making a threaded application that has loops waiting for tasks, make sure you are not busy-waiting in an active loop. Sleeping reduces CPU activity, and perhaps Racket has better approaches (I've never done threaded apps in Racket).
If you are calculating something, the increase in CPU usage is natural: it's so that you get the answer sooner. Computer settings can be changed to favor battery life; check both the OS and the BIOS. (That makes this not a Racket issue.)
The memory shown in the Dr Racket status bar is N/A.
Experiment:
Choose Racket | Limit Memory and specify 8 MB (the minimum).
Choose File | New Tab.
In the Interactions pane, allocate 8 MB of memory. For example, enter (define x (make-bytes (* 8 1024 1024))). (I recommend assigning the result to a variable, like this, because I doubt you want DrRacket to print 8 MB of bytes.)
The result I get:
Welcome to DrRacket, version 6.1.1.6--2014-12-21(aabe9d7/a) [3m].
Language: racket [custom]; memory limit: 8 MB.
> (define x (make-bytes (* 8 1024 1024)))
out of memory
>
Assuming you get the same result, there is some other reason your computer is running hotter.
I don't think that the extra memory being consumed is the cause of your computer overheating. More likely, it's because some function is consuming the CPU. Try to optimize the code instead.
In fact, by limiting the available memory you might end up causing more disk paging, hence slowing things down and potentially consuming more CPU … and causing more overheating.
I have a 32-bit Delphi application running with the /LARGEADDRESSAWARE flag on. This allows it to allocate up to 4 GB on a 64-bit system.
I am using threads (in a pool) to process files, where each task loads a file into memory. When multiple threads are running (multiple files being loaded), at some point EOutOfMemory hits me.
What would be the proper way to get the available address space so I can check if I have enough memory before processing the next file?
Something like:
if TotalMemoryUsed {from GetMemoryManagerState} + FileSize < "AvailableUpToMaxAddressSpace" then NoOutOfMemory
I've tried using
TMemoryStatusEx.ullAvailVirtual for AvailableUpToMaxAddressSpace
but the results are not correct (sometimes 0, sometimes greater than what I actually have).
I don't think that you can reasonably and robustly expect to be able to predict ahead of time whether or not memory allocations will fail. At the very least you would probably need to write your own memory allocator that was dedicated to serving your application, and have a very strong understanding of the heap allocation requirements of your process.
Realistically the tractable way forward for you is to break free from the shackles of 32 bit address space. That is your fundamental problem. The way to escape from 32 bit address space is to compile for 64 bit. That requires XE2 or later.
You may need to continue supporting 32 bit versions of your application because you have users that are still on 32 bit systems. The modern versions of Delphi have 32 bit and 64 bit compilers and it is quite simple to write code that will compile and behave correctly under both scenarios.
For your 32 bit versions you are less likely to run into memory problems anyway because 32 bit systems tend to run on older hardware with fewer processors. In turn this means less demand on memory space because your thread pool tends to be smaller.
If you encounter machines with large enough processor counts to cause out of memory problems then one very simple and pragmatic approach is to give the user a mechanism to limit the number of threads used by your application's thread pool.
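Whatever the UI for that setting looks like, the mechanism is just "read a number from the user and never start more workers than that". Below is a minimal C++ sketch of the idea (it is not the Delphi code in question; processFile, userLimit, and the shared-counter loop are made-up illustrations):

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

// Hypothetical per-file task; stands in for "load one file and process it".
void processFile(const std::string& path) { (void)path; /* ... */ }

void processAll(const std::vector<std::string>& files, unsigned userLimit) {
    // Honour the user's limit, but never start more workers than hardware threads.
    unsigned hw = std::max(1u, std::thread::hardware_concurrency());
    unsigned poolSize = std::max(1u, std::min(userLimit, hw));

    std::atomic<std::size_t> next{0};
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < poolSize; ++i)
        pool.emplace_back([&] {
            // Each worker pulls the next unprocessed file until none remain.
            for (std::size_t j = next++; j < files.size(); j = next++)
                processFile(files[j]);
        });
    for (auto& t : pool) t.join();
}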
Since there are other processes running on the target system, even if the necessary infrastructure were available, it would be of no use.
There is no guarantee that another process does not allocate the memory after you have checked its availability and before you actually allocate it. The right thing to do is to write code that fails gracefully and catches the EOutOfMemory exception when it appears. Use it as a sign to stop creating more threads until some of them have terminated.
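For illustration only, here is the same fail-gracefully pattern sketched in C++, with std::bad_alloc playing the role that EOutOfMemory plays in Delphi; loadFile and tryProcess are hypothetical names, not part of the application being discussed:

#include <cstddef>
#include <fstream>
#include <new>
#include <string>
#include <vector>

// Hypothetical loader: reads a whole file into memory. This is where the big
// allocation, and therefore the possible std::bad_alloc, happens.
std::vector<char> loadFile(const std::string& path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) return {};
    std::vector<char> data(static_cast<std::size_t>(in.tellg()));
    in.seekg(0);
    in.read(data.data(), static_cast<std::streamsize>(data.size()));
    return data;
}

// Returns false instead of crashing when the address space is exhausted, so
// the caller can requeue the file and retry once other tasks have finished.
bool tryProcess(const std::string& path) {
    try {
        std::vector<char> data = loadFile(path);   // may throw std::bad_alloc
        // ... process data ...
        return true;
    } catch (const std::bad_alloc&) {
        return false;
    }
}

The point is only that the allocation failure is treated as an expected signal to back off, not as a fatal error.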
Delphi is 32-bit, so you can't allocate memory addresses larger than that.
Take a look at this:
What is a safe Maximum Stack Size or How to measure use of stack?
Most available desktop (cheap) x86 platforms still have no ECC memory support (Error Checking & Correction). But the rate of memory bit-flip errors is still growing (not the best SO thread; the large-scale CERN 2007 study "Data integrity": "Bit Error Rate of 10^-12 for their memory modules ... observed error rate is 4 orders of magnitude lower than expected"; Google's 2009 "DRAM Errors in the Wild: A Large-Scale Field Study"). For current hardware with a data-intensive load (8 GB/s of reading), this means that a single bit flip may occur every minute (10^-12 vendor BER from CERN07) or once in two days (10^-16 BER from CERN07). Google09 says that there can be up to 25000-75000 one-bit FIT per Mbit (failures in time per billion hours), which is equal to 1-5 bit errors per hour for 8 GB of RAM ("mean correctable error rates of 2000-6000 per GB per year").
So, I want to know: is it possible to add some kind of software error detection in a system-wide manner (checking both user and kernel memory)? For example, create a patch for the Linux kernel and/or for the system compiler to add some checksumming of every memory page, and try to detect silent memory corruptions (bit flips) by regularly recomputing checksums?
For example, can we see all writes to memory (both from user and kernel space), to distinguish intended memory changes from in-memory bit flips? Or can we somehow instrument all code with some helper?
I understand that any kind of software memory ECC may cost a lot of performance and will not catch all errors, but I think it can be useful to detect at least some memory bit flips early, before they are reused in later computations or stored to the hard drive.
I also understand that a better way of protecting data from memory bit flips is to switch to ECC hardware, but most PCs out there are still non-ECC.
The thing is, ECC is dirt cheap compared to "software ECC countermeasures". You can easily detect whether the machine has ECC modules and complain (or print a warning) when it doesn't.
http://www.cyberciti.biz/faq/ecc-memory-modules/
For example, can we see all writes to memory (both from user and kernel space), to distinguish intended memory changes from in-memory bit flips? Or can we somehow instrument all code with some helper?
Er, you will never "see" the bit flips on the bus. They are literally caused by a particle hitting the RAM and flipping a bit. Only much later can you notice that you read out something different than you wrote in. To detect this only via the bus, you would need a duplicate copy of all your RAM (i.e. create a shadow copy of what is in your real RAM, so you can verify that every read returns what was written to that location).
try to detect silent memory corruptions (bit flips) by regularly recomputing checksums?
The Redis guy has a nice write-up on an algorithm for testing RAM for problems. http://antirez.com/news/43 But this is really looking for RAM errors, not random bit-flips.
If "recompute checksums" only works when you are NOT writing to the memory. That might be "good enough" but you'll need to figure out which pages are not being written to.
To catch 100% of the errors, every write must be pre-ceeded by computing the checksum of that block of memory, then comparing it to the recorded checksum (to make sure that block hasn't degraded in RAM). Only then is it safe to do the write and then update the checksum. As you can imagine, the performance of this will be horrible (at least 100x slower) performance.
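As a rough illustration of that verify-before-write discipline, here is a small C++ sketch; the block granularity, the FNV-1a checksum, and the GuardedBlock name are arbitrary choices for the example, not a description of any existing tool:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <vector>

// One checksummed block of memory. Every write first re-verifies the stored
// checksum (to catch a bit flip since the last access), then writes, then
// records the new checksum. No bounds checking; this is only a sketch.
class GuardedBlock {
    std::vector<std::uint8_t> data_;
    std::uint64_t sum_;

    std::uint64_t checksum() const {             // FNV-1a, purely illustrative
        std::uint64_t h = 1469598103934665603ull;
        for (std::uint8_t b : data_) { h ^= b; h *= 1099511628211ull; }
        return h;
    }
public:
    explicit GuardedBlock(std::size_t n) : data_(n, 0), sum_(checksum()) {}

    void verify() const {
        if (checksum() != sum_)
            throw std::runtime_error("silent memory corruption detected");
    }
    void write(std::size_t off, const void* src, std::size_t len) {
        verify();                                 // checksum before the write...
        std::memcpy(data_.data() + off, src, len);
        sum_ = checksum();                        // ...then record the new one
    }
};

Periodically calling verify() on blocks that are not being written is the "regularly recompute checksums" idea from the question, and the write() path shows where the roughly 100x cost comes from: every small write pays for hashing the whole block twice.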
I understand that any kind of software memory ECC may cost a lot of performance and will not catch all errors, but I think it can be useful to detect at least some memory bit flips early, before they are reused in later computations or stored to the hard drive.
Well, there is a simple method to detect 100% of the errors, at a cost of 50% performance: Just run the computation on 2 boxes at once (or on one box at two different times, maybe with a RAM test in between if you are paranoid.) If the results differ, you have detected an error.
See also:
https://www.linuxquestions.org/questions/linux-hardware-18/how-to-detect-ecc-memory-errors-under-linux-886011/
The answer to the question is yes, and a proof of that is the SoftECC software posted in the comments!
Just a note that SoftECC is a kernel-level solution. If a user-land app were used, it would be a third stage of redundancy, which seems unnecessary.
I am developing for the Arduino Due, which has 96 KB of SRAM and 512 KB of flash memory for code. If I have a program that will compile to, say, 50 KB, how much SRAM will I use when I run the code? Will I use 50 KB immediately, or only the memory used by the functions I call? Is there a way to measure this memory usage before I upload the sketch to the Arduino?
You can run
arm-none-eabi-size bin.elf
Where:
bin.elf is the generated binary (look it up in the compile log)
arm-none-eabi-size is a tool included with Arduino for ARM that reports the memory distribution of your binary. This program can be found inside the Arduino directory. On my Mac, this is /Applications/Arduino.app/Contents/Resources/Java/hardware/tools/g++_arm_none_eabi/bin
This command will output:
text data bss dec hex filename
9648 0 1188 10836 2a54 /var/folders/jz/ylfb9j0s76xb57xrkb605djm0000gn/T/build2004175178561973401.tmp/sketch_oct24a.cpp.elf
data + bss is RAM, text is program memory.
Very important: this doesn't account for dynamic memory (allocated on the stack or heap at runtime); it is only the RAM used by static and global variables. There are other techniques to check the RAM usage dynamically, like this one, but it will depend on the linker capabilities of the compiler suite you are using.
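For a rough runtime check on ARM boards like the Due, one common trick is to measure the gap between the end of the heap (as reported by newlib's sbrk) and the current stack position. A minimal sketch, assuming the Due's newlib-based core (the baud rate and one-second delay are arbitrary):

extern "C" char* sbrk(int incr);

// Rough estimate of free SRAM: the distance between the current stack
// position (a local variable lives near the stack pointer) and the end of
// the heap reported by sbrk(0).
int freeRam() {
  char top;
  return (int)(&top - sbrk(0));
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  Serial.println(freeRam());   // watch this shrink as you allocate more
  delay(1000);
}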
Your whole program is loaded onto the Arduino, so at least 50 KB of flash memory will be used. Then, on running the code, you will allocate some variables, some on the stack and some global, which take some memory too, but in SRAM.
I am not sure if there is a way to exactly measure the memory required, but you can get a rough estimate based on the number and types of variables being allocated in the code. Remember, the global variables take up space during the entire time the code is running on the Arduino; the local variables (the ones declared within a pair of {..}) remain in memory only until the closing '}' brace, i.e. the end of the variables' scope. Also remember, the compiled 50 KB of code you are mentioning is just the code portion; it does not include your variables, not even the global ones. The code is stored in flash memory and the variables are stored in SRAM. The variables start taking memory only at runtime.
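To make that concrete, here is a tiny Arduino-style example of where each kind of variable ends up; the array sizes are arbitrary:

#include <stdint.h>

uint8_t lookup[1024] = {1, 2, 3};  // initialized global: counted in .data
                                   // (uses SRAM, plus a copy of the initial
                                   // values in flash)
uint8_t buffer[2048];              // zero-initialized global: counted in .bss
                                   // (uses SRAM only)

void setup() {}

void loop() {
  uint8_t scratch[256];            // local: lives on the stack only while
  scratch[0] = lookup[0];          // loop() runs, and never appears in the
  (void)buffer;                    // size output at all
  (void)scratch;
}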
Also, I am curious to know how you are calculating that your code uses 50 KB of memory.
Here is a little library to output the available RAM memory.
I used it a lot when my program was crashing with no bug in the code. It turned out that I was running out of RAM.
So it's very handy!
Available Memory Library
Hope it helps! :)
I am trying to implement the SLIC superpixel algorithm on an Android tablet (SLIC).
I ported the code, which is in C++, to work in the Android environment using the STL and so on. What the application does is take an image from the camera and send the data to be processed in native code.
I got the app running, but the problem is that it takes 20-30 seconds to process a single frame (640 x 400), while on my notebook the Visual Studio build finishes almost instantly!
I checked for memory leaks; there aren't any... Is there anything that might make the computation far more expensive than with VS2010 on my notebook?
I know this question might be very open and not really specific but I'm really in the dark too. Hope you guys can help.
Thanks
P.S. I checked the running time of each step; I think the execution time of every line of code simply went up. I don't see any specific function that takes much longer than usual.
P.P.S. Do you think any of the following may cause the slowdown?
Memory size: investigated; the GC does not report much pause time while the native code runs.
STL library: not investigated yet. Is it possible that STL facilities like vector, max, and min cause a significant slowdown?
The Android environment itself?
The lower hardware specification of the Android tablet (Acer Iconia Tab: 1 GHz Nvidia Tegra 250 dual-core processor with 1 GB of RAM)?
Would it be better to run it in Java?
P.P.P.S. If you have time, please check out the code.
I've taken a look at your code and can make the following recommendations:
First of all, you need to add the line APP_ABI := armeabi-v7a to your Application.mk file. Otherwise your code is compiled for the old armv5te architecture, where you have no FPU (all floating-point arithmetic is emulated), fewer registers available, and so on.
Your SLIC implementation uses double floating-point values intensively for its computations. You should replace them with float wherever possible, because ARM still lacks full hardware support for the double type.
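For instance, a per-pixel distance computation of the kind SLIC performs can stay entirely in single precision; the f suffixes matter, because a bare literal like 2.0 is a double and silently promotes the whole expression. This is just an illustrative sketch, not the ported code:

#include <cmath>

// Single-precision distance between a pixel and a cluster centre in
// (L, a, b, x, y) space, in the style of SLIC's per-pixel comparison.
float slicDistance(const float px[5], const float centre[5],
                   float spatialWeight) {
    float dc = 0.0f, ds = 0.0f;
    for (int i = 0; i < 3; ++i) {            // colour part
        float d = px[i] - centre[i];
        dc += d * d;
    }
    for (int i = 3; i < 5; ++i) {            // spatial part
        float d = px[i] - centre[i];
        ds += d * d;
    }
    return std::sqrt(dc + spatialWeight * ds);  // float overload, no doubles
}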