Close script in R - memory

I'm trying to do something like this:
"Main.R"
for( i in ...){
...
...
source("file.R")
}
The problem is that when I run Main.R, it uses up all of the RAM, because file.R never seems to finish and more and more of these runs pile up. (Sorry for my English.)
Eventually I get a message on Windows saying that memory could not be read or written...
How can I fix this? Can I close ONLY file.R when it finishes?
PS: file.R calls other scripts...
Thanks a lot.

You can call gc() during or after running your function to release some memory. It can also be useful to call it after rm().
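For example, a minimal sketch of that rm() + gc() pattern (the object name and size are just illustrative):

big <- matrix(rnorm(1e7), ncol = 100)   # some large temporary object (~80 MB)
rm(big)                                 # drop the last reference to it
gc()                                    # ask R to return the freed memory to the OS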

The problem was that I was using parallel, so at the end of file.R I wrote stopCluster(cl), and the task closes after that.
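For reference, the pattern being described is roughly this (a sketch using the parallel package; the cluster size and the work are placeholders):

library(parallel)

cl <- makeCluster(2)                         # created somewhere in file.R
res <- parLapply(cl, 1:10, function(i) i^2)  # the actual work
stopCluster(cl)                              # without this, worker processes pile up on every source()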

Related

AKTubularBells() and AKRhodesPiano() together cause error on the second

I'm using AKRhodesPiano() and AKTubularBells(). Both work alone. When I try to initialize both, I get the following error:
AKRhodesPiano.swift:init(frequency:amplitude:):88:Parameter Tree Failed
Notably, if I change the order of initialization, the error occurs for the last one of the two instantiated.
Adding the following line to the AKTubularBells playground right under the initialization of AKTubularBells is enough to trigger the error.
let tubularBells = AKTubularBells()
let temp = AKRhodesPiano() /// <- Add this line.
I saw in another post, AKRhodesPiano error (crush) on AudioKit v4.2, that there was a recent bug in the STK physical models, so perhaps this is part of that. Any insight appreciated, as always.
Thanks for noticing this. It only occurred when using those two nodes simultaneously, and it was basically just a cut-and-paste job gone bad. I fixed it on the develop branch, so if you can rebuild the framework you'll be fine; otherwise, wait for the next release, which should be out soon.
Here's the fix:
https://github.com/AudioKit/AudioKit/commit/05651ff97a7ea7815a27de6a53eee0b5f7998920

How do I debug a memory issue in Rust?

I hope this question isn't too open-ended. I ran into a memory issue with Rust, where I got an "out of memory" from calling next on an Iterator trait object. I'm unsure how to debug it. Prints have only brought me to the point where the failure occurs. I'm not very familiar with other tools such as ltrace, so although I could create a trace (231MiB, pff), I didn't really know what to do with it. Is a trace like that useful? Would I do better to grab gdb/lldb? Or Valgrind?
In general, I would take the following approach:
Boilerplate reduction: Try to narrow down the problem of the OOM, so that you don't have too much additional code around. In other words: the quicker your program crashes, the better. Sometimes it is also possible to rip out a specific piece of code and put it into an extra binary, just for the investigation.
Problem size reduction: Reduce the problem from an OOM down to a plain "too much memory" case, so that you can tell that some part is wasting memory even though it does not lead to an OOM. If it is too hard to tell whether you are seeing the issue or not, you can lower the memory limit. On Linux, this can be done using ulimit:
ulimit -Sv 500000 # that's 500MB
./path/to/exe --foo
Information gathering: If your problem is small enough, you are ready to collect information with a lower noise level. There are multiple ways you can try. Just remember to compile your program with debug symbols. It can also be an advantage to turn off optimizations, since they usually lead to information loss. Both can be achieved by NOT using the --release flag during compilation.
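In practice that just means a plain debug build (the binary name below is a placeholder matching the other commands):
cargo build
./target/debug/exe --foo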
Heap profiling: One way is to use gperftools:
LD_PRELOAD="/usr/lib/libtcmalloc.so" HEAPPROFILE=/tmp/profile ./path/to/exe --foo
pprof --gv ./path/to/exe /tmp/profile/profile.0100.heap
This shows you a graph that visualizes which parts of your program consume how much memory. See the official docs for more details.
rr: Sometimes it's very hard to figure out what is actually happening, especially after you have created a profile. Assuming you did a good job in step 2, you can use rr:
rr record ./path/to/exe --foo
rr replay
This will spawn a GDB with superpowers. The difference from a normal debug session is that you can not only continue but also reverse-continue. Basically, your program is executed from a recording, and you can jump back and forth in it as you want. This wiki page provides some additional examples. One thing to point out is that rr only seems to work with GDB.
Good old debugging: Sometimes you get traces and recordings that are still way too large. In that case you can (in combination with the ulimit trick) just use GDB and wait until the program crashes:
gdb --args ./path/to/exe --foo
You should now get a normal debugging session where you can examine the current state of the program. GDB can also be launched with coredumps. The general problem with that approach is that you cannot go back in time and you cannot continue execution, so you only see the current state, including all stack frames and variables. You could also use LLDB here if you want.
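For a coredump, the invocation is similar (assuming the kernel wrote the dump to ./core; the exact location and name depend on your system's core_pattern):
gdb ./path/to/exe ./core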
(Potential) fix + repeat: Once you have a clue about what might be going wrong, you can try to change your code. Then try again. If it's still not working, go back to step 3 and repeat.
Valgrind and other tools work fine, and should work out of the box as of Rust 1.32. Earlier versions of Rust require changing the global allocator from jemalloc to the system's allocator so that Valgrind and friends know how to monitor memory allocations.
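On those older toolchains, switching to the system allocator is a couple of lines in the crate root; a minimal sketch (nothing here is specific to the program being debugged):

use std::alloc::System;

// On Rust < 1.32, opt into the system allocator so that Valgrind and
// friends can intercept the allocations.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    let v: Vec<u8> = vec![0; 1024];
    println!("allocated {} bytes via the system allocator", v.len());
}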
In this answer, I use the macOS developer tool Instruments, as I'm on macOS, but Valgrind / Massif / Cachegrind work similarly.
Example: An infinite loop
Here's a program that "leaks" memory by pushing 1MiB Strings into a Vec and never freeing it:
use std::{thread, time::Duration};

fn main() {
    let mut held_forever = Vec::new();
    loop {
        held_forever.push("x".repeat(1024 * 1024));
        println!("Allocated another");
        thread::sleep(Duration::from_secs(3));
    }
}
You can see memory growth over time, as well as the exact stack trace that allocated the memory.
Example: Cycles in reference counts
Here's an example of leaking memory by creating an infinite reference cycle:
use std::{cell::RefCell, rc::Rc};

struct Leaked {
    data: String,
    me: RefCell<Option<Rc<Leaked>>>,
}

fn main() {
    let data = "x".repeat(5 * 1024 * 1024);
    let leaked = Rc::new(Leaked {
        data,
        me: RefCell::new(None),
    });
    let me = leaked.clone();
    *leaked.me.borrow_mut() = Some(me);
}
See also:
Why does Valgrind not detect a memory leak in a Rust program using nightly 1.29.0?
Handling memory leak in cyclic graphs using RefCell and Rc
Minimal `Rc` Dependency Cycle
In general, to debug, you can use either a log-based approach (either by inserting the logs yourself, or by having a tool such as ltrace, ptrace, ... generate the logs for you) or you can use a debugger.
Note that ltrace, ptrace or debugger-based approaches require that you be able to reproduce the problem; I tend to favor manual logs because I work in an industry where bug reports are generally too imprecise to allow immediate reproduction (and thus we use logs to create the reproducer scenario).
Rust supports both approaches, and the standard toolset that one uses for C or C++ programs works well for it.
My personal approach is to have some logging in place to quickly narrow down where the issue occurs and, if logging is insufficient, to fire up a debugger for a more fine-grained inspection. In this case I would recommend going straight for the debugger.
A panic is generated, which means that by breaking on the call to the panic hook, you get to see both the call stack and the memory state at the moment things go awry.
Launch your program with the debugger, set a breakpoint on the panic hook, run the program, profit.
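With GDB, for example, that can look like this (rust_panic is the unmangled symbol the standard library keeps around precisely so debuggers can break on panics; the path and arguments are placeholders):

rust-gdb ./path/to/exe
(gdb) break rust_panic
(gdb) run --foo
(gdb) backtrace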

How to catch Ctrl-C in Lua when Ctrl-C is sent via the command line

I would like to know when the user presses Ctrl-C on the command line so that I can save some stuff.
How do I do this? I've looked but haven't really seen anything.
Note: I'm somewhat familiar with Lua, but I'm no expert. I mostly use Lua to work with the Torch library (http://torch.ch/).
Implementing a SIGINT handler is straightforward using the excellent luaposix library:
local signal = require("posix.signal")
signal.signal(signal.SIGINT, function(signum)
    io.write("\n")
    -- put code to save some stuff here
    os.exit(128 + signum)
end)
Refer to the posix.signal module's API documentation for more information.
There are IO libraries that support this. I know of ZeroMQ and libuv.
Libuv example with the lluv binding: https://github.com/moteus/lua-lluv/blob/master/examples/sig.lua
ZeroMQ returns EINTR from the poll function when the user presses Ctrl-C.
But I do not handle this myself.
Windows: SetConsoleCtrlHandler
Linux: signal
There are two undesirable behaviors of the signal, and they will add some complexity to the code:
Program termination
Broken IO
The first behavior can be caught and remembered in a C program by using SetConsoleCtrlHandler/signal. This allows your handler function to be called, and you can remember that the system needs to shut down. Then at some point the Lua code sees that this has happened (by calling a check function) and performs your tidy-up and shutdown; see the sketch below.
The second behavior is that a blocking operation (read/write) will be interrupted by the signal and left unfinished. That would need to be checked at each IO event, and then restarted or cancelled as appropriate.
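A minimal sketch of the "remember the signal in C, poll it from Lua" idea (the module name ctrlc and the function names are only illustrative; on Windows you would register a SetConsoleCtrlHandler callback instead of calling signal()):

#include <signal.h>
#include <lua.h>

/* Set asynchronously by the signal handler, read synchronously from Lua. */
static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int sig) {
    (void)sig;
    got_sigint = 1;   /* only remember it; do the real work later in Lua */
}

/* Lua-callable: returns true once Ctrl-C has been pressed. */
static int l_interrupted(lua_State *L) {
    lua_pushboolean(L, got_sigint != 0);
    return 1;
}

int luaopen_ctrlc(lua_State *L) {
    signal(SIGINT, on_sigint);
    lua_pushcfunction(L, l_interrupted);
    return 1;   /* require("ctrlc") returns the check function */
}

The Lua side then polls the flag at convenient points and does its tidy-up:

local interrupted = require("ctrlc")
while true do
    -- ... do some work ...
    if interrupted() then
        -- save some stuff here, then exit cleanly
        break
    end
end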
require('sys')
sys.catch_ctrl_c()
I use this to catch the exit from the CLI.

Call to CFReadStreamRead stops execution in thread

NB: The entire code base for this project is so large that posting any meaningful amount would render this question too localised, so I have tried to distil the code down to the bare essentials. I'm not expecting anyone to solve my problems directly, but I will upvote those answers I find helpful or intriguing.
This project uses a modified version of AudioStreamer to play back audio files that are saved locally to the device (iPhone).
The stream is set up and scheduled on the current run loop using this code (unaltered from the standard AudioStreamer project, as far as I know):
CFStreamClientContext context = {0, self, NULL, NULL, NULL};
CFReadStreamSetClient(
    stream,
    kCFStreamEventHasBytesAvailable | kCFStreamEventErrorOccurred | kCFStreamEventEndEncountered,
    ASReadStreamCallBack,
    &context);
CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
The ASReadStreamCallBack calls:
- (void)handleReadFromStream:(CFReadStreamRef)aStream
eventType:(CFStreamEventType)eventType
On the AudioStreamer object, this all works fine until the stream is read using this code:
BOOL hasBytes = NO; //Added for debugging
hasBytes = CFReadStreamHasBytesAvailable(stream);
length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
hasBytes is YES, but when CFReadStreamRead is called, execution stops. The app does not crash; it just stops executing. Any breakpoints below the CFReadStreamRead call are not hit, and ASReadStreamCallBack is not called again.
I am at a loss as to what might cause this; my best guess is that the thread is being terminated? But the hows and whys are why I'm asking SO.
Has anyone seen this behaviour before? How can I track it down? Any ideas on how I might solve it will be very much welcome!
Additional Info Requested via Comments
This is 100% repeatable
CFReadStreamHasBytesAvailable was added by me for debugging but removing it has no effect
First, I assume that CFReadStreamScheduleWithRunLoop() is running on the same thread as CFReadStreamRead()?
Is this thread processing its runloop? Failure to do this is my main suspicion. Do you have a call like CFRunLoopRun() or equivalent on this thread?
Typically there is no reason to spawn a separate thread for reading streams asynchronously, so I'm a little confused about your threading design. Is there really a background thread involved here? Also, CFReadStreamRead() would typically be in your client callback, for when you receive the kCFStreamEventHasBytesAvailable event (which it appears to be in the linked code), but you're suggesting ASReadStreamCallBack is never called. How have you modified AudioStreamer?
It is possible that the stream pointer is just corrupt in some way. CFReadStreamRead should certainly not block if bytes are available (it certainly would never block for more than a few milliseconds for local files). Can you provide the code you use to create the stream?
Alternatively, CFReadStreams send messages asynchronously but it is possible (but not likely) that it's blocking because the runloop isn't being processed.
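For reference, this is roughly what the scheduling thread looks like in the stock AudioStreamer flow (a sketch, not your exact code; stream is assumed to be the already-created CFReadStreamRef):

CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
if (!CFReadStreamOpen(stream)) {
    // inspect CFReadStreamCopyError(stream) here
}
// Without this (or an equivalent run loop) on the same thread, the
// kCFStreamEventHasBytesAvailable callbacks never fire.
CFRunLoopRun();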
If you prefer, I've uploaded my AudioPlayer inspired by Matt's AudioStreamer hosted at https://code.google.com/p/audjustable/. It supports local files (as well as HTTP). I think it does what you wanted (stream files from more than just HTTP).

Debugging Erlang heart timeouts

I use the heart program to restart an Erlang node when it becomes unresponsive. However, I am finding it hard to understand why the node freezes. SASL logs don't show any errors, and my own logs don't seem to show anything remarkable happening at those times. Can anybody give advice on debugging this sort of thing?
By default the heart program issues a SIGKILL to kill off the unresponsive VM so it can quickly start a new one. This makes getting any useful information about the VM pretty much impossible. Something I've tried in the past is to patch the heart program to avoid the hard kill and instead get the VM to create a crash dump and a coredump. I used a patch like this (this one is for Erlang/OTP R14B02):
--- erts/etc/common/heart.c.orig 2011-04-17 12:11:24.000000000 -0400
+++ erts/etc/common/heart.c 2011-04-17 12:12:36.000000000 -0400
@@ -559,10 +559,11 @@
     int res;
     if(heart_beat_kill_pid != 0){
         pid = (pid_t) heart_beat_kill_pid;
-        res = kill(pid,SIGKILL);
+        res = kill(pid,SIGUSR1);
+        sleep(4);
         for(i=0; i < 5 && res == 0; ++i){
             sleep(1);
-            res = kill(pid,SIGKILL);
+            res = kill(pid,i < 2 ? SIGQUIT : SIGKILL);
         }
         if(errno != ESRCH){
             print_error("Unable to kill old process, "
As you can see, with this patch heart will first issue a SIGUSR1 to try to get the VM to create a crash dump. Since this can take a while, heart then sleeps for 4 seconds. You might have to increase this sleep time if you're not getting full crash dumps. After that, heart tries twice to issue a SIGQUIT in the hope of getting a coredump, and if that fails, issues a SIGKILL.
Note that this patch will slow down heart's VM restart due to the time required to wait for the crash dumps and coredumps. If you use it in production, be aware of this limitation.
You could try to call erlang:halt/1 from your HEART_COMMAND, thus creating a crash dump from the unresponsive node.
You can try using the erl_call tool, e.g. with -a erlang halt 123.
If the Erlang node can't respond to this, that is also interesting information.
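A full invocation could look something like this (the node name mynode is a placeholder; erl_call expects the arguments in erlang:apply/3 list format):

erl_call -n mynode -a 'erlang halt [123]'

If the call goes through, the node shuts down; if erl_call itself hangs or errors out, that is useful information in itself, as noted above.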
Did you try increasing HEART_BEAT_TIMEOUT? Maybe the node is just bogged down a bit and misses the timeout, but doesn't actually freeze.
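The timeout is picked up from the environment when the node is started, e.g. (120 seconds here is just an example value; the default is 60):

erl -heart -env HEART_BEAT_TIMEOUT 120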
If you have any idea why it is freezing, you could try tracing the relevant module using dbg.
http://www.erlang.org/doc/man/dbg.html
In short try
dbg:tracer(), dbg:p(all,c), dbg:tpl(Module, Function, x).
If you want to stop this tracing, issue
dbg:ctpl()
See documentation for more info.
Note: Change Module and Function to whatever you want to trace, leave x as it is. You can also skip Function and only give Module, x.
Warning: Running this on a live system can be dangerous as the amount of information that is going to be printed to the shell can be enormous.
