Erlang: Node freezes totally. Now what?

Platform: Windows 7 32-bit, Erlang R15B01.
I have developed an Erlang server that simultaneously listens on 200 different TCP ports (200 gen_servers).
After a few minutes of moderate load (a few clients in parallel), the entire node just freezes completely - even the shell freezes entirely.
How can this problem be diagnosed? Is there a standard Erlang approach to this kind of problem? (Memory consumption was low, so it is not some kind of memory leak.)
Important Edit
It seems that under werl.exe there is no such problem, only under erl.exe. Probably the same issue as in http://erlang.2086793.n4.nabble.com/erl-exe-dies-but-werl-exe-does-not-on-both-Windows-XP-and-2008R2-with-R14B01-td3335030.html

If you kill your process with kill -SIGUSR1 <pid>, the Erlang VM will generate an Erlang crash dump file, erl_crash.dump, in the directory the app was started from.
Then you can analyze it using the Crashdump Viewer.
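A minimal sketch of opening the dump (crashdump_viewer ships with OTP; in releases of this vintage it starts a WebTool-based viewer, so the exact startup steps may differ slightly):

$ erl
1> crashdump_viewer:start().

From there you can browse the processes, ports, and memory as they were at the time of the dump.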

A frozen Erlang shell can be caused by uncaught exit signals. You can try to trap exits in the shell process (assuming it is the parent process of your server), which should give you the exit reason. See the Reference Manual section on Errors.
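A minimal sketch from the shell, assuming your server exposes a start_link/0 (my_server is a placeholder for your own gen_server; the pid and reason shown are illustrative):

1> process_flag(trap_exit, true).
false
2> {ok, Pid} = my_server:start_link().
{ok,<0.42.0>}
3> flush().   % after the server dies, its exit signal arrives as a message
Shell got {'EXIT',<0.42.0>,the_actual_exit_reason}
ok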

Related

How to clear GPU memory without 'kill pid'?

I use my school's server for deep learning experiments. I stopped the Python script, but somehow the GPU memory was not released. I don't have root to kill the processes, so how can I clear the GPU memory?
I tried 'sudo kill -9' and 'nvidia-smi', but it said 'insufficient permissions' (I am using the university's server).
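(For reference, the PIDs holding GPU memory can at least be listed without root; a minimal sketch using nvidia-smi's query mode:)

$ nvidia-smi --query-compute-apps=pid,used_memory --format=csv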

docker build running out of memory, but plenty of memory seems to be available

I'm building an Elixir/Phoenix application using a Docker container.
This has been working for some time now, but recently it stopped working, with the error always being associated with a lack of memory.
For instance, the most frequent point of failure is during the mix compile task of Elixir (the most time-heavy task in the Dockerfile), which crashes with the error:
eheap_alloc: Cannot allocate 147852528 bytes of memory (of type "old_heap").
Crash dump is being written to: erl_crash.dump...done
Sometimes it might get through that step, but will again fail at a later step, like brunch build, which compiles the frontend code. Sometimes it just fails at some other step with no specific error message, just saying:
Killed
While this is happening, I can easily check htop and see that I'm using 3 or 4 GB of RAM out of 16 GB total, so there's no lack of physical RAM at all.
After some digging, I found that sudo sysctl vm.overcommit_memory=1 could help, but no luck there either.
The exact same build runs fine on my other computer, which runs Arch Linux, while this one runs Ubuntu 16.04.
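(One hedged diagnostic for a reader hitting this: the memory the Docker daemon believes it has can be far less than the host's physical RAM, e.g. when the daemon runs inside a VM or with limits configured. A quick check, with the image name and limit as placeholders:)

$ docker info | grep -i memory
$ docker run --rm -m 4g my-build-image mix compile   # rerun the failing step with an explicit limit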

getting process ID of a pub serve to then kill that thread/process

So when I run my Polymer Dart application, I use pub serve, and the server is created and served. It will stay running until I break out of it. I was curious if there is a way to programmatically stop it.
One of the options I was looking at was listing the running processes and then killing the pub serve process.
I was not sure, though, how I would get the process ID to kill it, or whether there was another option.
Maybe someone has an even better approach to shut down pub serve on the machine automatically, as a form of cleanup?
The issue I have noticed is that if I list the running processes, I only see "cmd" as a process, so that isn't the best determining factor.
I was not sure if there was a way via pub serve to get its process ID - if it set a flag or global of sorts, I could leverage that.
This is not a Dart or Pub question really, it's a Windows, MacOS, Linux etc. shell process control question.
The question is better suited to Stack Exchange Super User, https://superuser.com/, I believe. You could look over there for a more detailed answer, but assuming you are using the Windows command prompt:
start /B starts a process in the background.
tasklist can be used to look up running process PIDs.
taskkill /PID kills a running process.
You can use help <command> or search for the documentation.
I have not used these personally, but it looks awkward, as start /B does not give you the PID of the process it ran. Unix shells such as Bash have good facilities for running processes in the background; Windows PowerShell may have better support as well. A rough sketch follows.
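(An untested sketch of how that might look in cmd.exe; the image name for pub serve is an assumption - on many machines it shows up as dart.exe - and 1234 is a placeholder PID:)

REM start pub serve detached from the current console
start /B pub serve
REM look up its PID by image name (assumption: it runs as dart.exe)
tasklist /FI "IMAGENAME eq dart.exe"
REM kill it once you have the PID
taskkill /PID 1234 /F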

What is the best CLI tool to take memory dumps for C++ in Linux

What is the best CLI tool to take memory dumps of C++ processes on Linux? I have a program which monitors the memory usage of different processes running on Linux. For Java-based processes, I am using jstack and jmap to take the thread and heap dumps. But are there any good CLI tools to take similar dumps of C++-based processes? And if yes, how do we use them, and once a dump is taken, how do we analyse it?
Any input will be appreciated.
I would recommend using gcore, an open-source executable that dumps the memory of a running process. To achieve a consistent snapshot, the target process is suspended while its memory is collected, and resumed afterwards.
Usage info can be found at the following link:
gsp.com/cgi-bin/man.cgi?section=1&topic=gcore
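(A minimal usage sketch; 1234 is a placeholder PID, and -o sets the output file prefix:)

$ gcore -o mydump 1234
Saved corefile mydump.1234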
Another option is via gdb: attach to the process from a gdb session, type the 'gcore' command, and then detach.
$ gdb --pid=123
(gdb) gcore
Saved corefile core.123
(gdb) detach
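(To analyse the resulting dump, one common approach is to load it back into gdb together with the binary that produced it and walk the stacks; the paths here are placeholders:)

$ gdb /path/to/your/binary core.123
(gdb) bt                    # backtrace of the current thread
(gdb) thread apply all bt   # backtraces for every thread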

Apache-httpd processes die with segmentation fault on deployment

I'm running a Rails 2.3.3 application which is deployed with Passenger/mod_rails on ruby-enterprise-1.8.6-20090610 and Apache httpd.
The problem is that whenever I deploy our application, hundreds of httpd processes start dying. I'm getting this error:
[notice] child pid NNNNN exit signal Segmentation fault(11)
After a short period of time (10-20 min) those errors die down.
This problem started after migrating our database to a separate, dedicated machine. So I think it could be a problem with the MySQL connection pools and their management; however, I cannot pin it down.
Could anyone help me with this problem, or just give me a clue how to debug it more deeply? Thank you in advance.
Start by enabling core dumps on your server.
Then run it to get a core file, extract a backtrace from it, and get an initial idea of where the server is core dumping. A sketch is below.
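(A minimal sketch of what enabling core dumps typically involves for httpd; the dump directory is an example and must exist and be writable by the apache user:)

# remove the core file size limit in the environment that starts httpd
$ ulimit -c unlimited
# httpd.conf: tell Apache where to write core files
CoreDumpDirectory /tmp/apache-cores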
I'm going through the same problem at the moment, though not with Rails.
HTH
