I am running a PHP script as a process on a Linux shell. With different tools (top, xdebug, ...) I see the dynamic memory consumption (heap memory) of this very complex script continuously rising.
How can I find out exactly which line of code, which variable, or which place is causing this behavior? Where is the memory leak in the PHP script?
Additional information:
Linux version 2.6.30-gentoo-r4
PHP Version 5.2.10-pl0-gentoo
I can modify the script
I can use xdebug
Try this at the suspect areas:
echo memory_get_usage();
// Suspect code here
echo memory_get_usage();
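To narrow it down further, here is a minimal sketch along the same lines; $rows and process_row() are placeholders for your own code, not part of the original answer:

// Placeholder sketch: wrap each suspect block and compare the numbers.
$before = memory_get_usage();

foreach ($rows as $row) {
    process_row($row);   // suspect code here
}

$after = memory_get_usage();
echo 'delta: ' . ($after - $before) . " bytes\n";
echo 'peak:  ' . memory_get_peak_usage() . " bytes\n";

If the delta keeps growing from one iteration or block to the next, that block (or something it still holds a reference to) is a good candidate for the leak.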
The NodeMCU documentation states:
NodeMCU firmware build now automatically generates a luac.cross image
as standard in the firmware root directory; this can be used to
compile and to syntax-check Lua source on the Development machine for
execution under NodeMCU Lua on the ESP8266.
Where do I get luac.cross from and how do I install it?
Do I build NodeMCU firmware from source on Mac and is luac.cross created as part of that process? I have been using the cloud service to create custom firmware. Is luac.cross available via cloud build?
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory issue. I am hoping compiled code will reduce RAM needs.
Where do I get luac.cross from and how do I install it?
You gave the answer in the quote from the documentation you posted. Specifically this:
NodeMCU firmware build now automatically generates a luac.cross image...
So, if you build the NodeMCU firmware manually on your platform, the build process will also create a luac.cross for your platform. That's the reason you cannot simply download or install luac.cross - it has to fit your platform, i.e. OS etc.
The logical next question would then be: how do I manually build NodeMCU on macOS?
I don't know the answer to that, as I build with the Docker image (from yours truly) on macOS. Running the Docker build creates a luac.cross in the firmware root directory. However, as macOS is just the host OS for Docker in this setup, that luac.cross is built for Linux rather than natively for macOS. To use it you would start the Docker container again and run bash in it to get a shell in which to execute the Lua cross compilation: docker run --rm -ti -v $(pwd):/opt/nodemcu-firmware marcelstoer/nodemcu-build bash.
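Once you have that shell inside the container, an illustrative invocation could look like this (assuming luac.cross accepts the standard luac-style flags; the file names are placeholders):

# inside the container, from the firmware root directory
cd /opt/nodemcu-firmware
./luac.cross -p myscript.lua                # syntax-check only
./luac.cross -o myscript.lc myscript.lua    # compile to bytecode for the ESP8266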
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory issue. I am hoping compiled code will reduce RAM needs.
I hate to disillusion you, but if I had to bet I would expect that the savings won't be significant enough to yield the expected results. As you have already started reading the documentation, I'd like to point you to the relevant FAQ entries: "How is NodeMCU Lua different to standard Lua?" and "Techniques for Reducing RAM".
And maybe using LFS will be your lifesaver.
In case you want to use this tool regardless of the platform, you can use my API to build it:
curl -d @yourscript.lua -X POST https://nodemcu-luacross-run-64l7ehzjta-uc.a.run.app/compile > output.luac
When an Erlang system hangs, I want to know what the system is doing during that time. For a C/C++ program I can easily run pstack, but I haven't found a handy tool for this purpose in Erlang.
What is the pstack equivalent in Erlang?
Actually, I want to check the running stack trace of the following process:
"/opt/couchbase/lib/erlang/erts-5.10.4.0.0.1/bin/beam.smp -P 327680 -K true -- -root /opt/couchbase/lib/erlang -progname erl --... "
I started a new Erlang shell, started webtool and checked appmon; however, I can't find the above application there. What may cause this?
Thanks
Concerning a pstack equivalent, have you read Erlang Profiling from the official guide? It gives you lots of examples of how to profile your application and find where your code gets stuck.
Another useful tool is observer; it shows all running processes, CPU usage, process stacks and a lot more information.
If you don't see anything with these tools, you can try the Erlang debugger.
Now concerning couchbase: if your application is currently running, you can connect to it with an Erlang shell and launch the previously quoted commands and applications.
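A hedged sketch of such a connection (the node name and cookie below are placeholders; take the real values from the beam.smp command line or the couchbase configuration, and use -sname instead of -name if the target node uses short names):

# attach a remote shell to the already-running node
erl -name debug@127.0.0.1 -setcookie <cookie> -remsh ns_1@127.0.0.1

% then, inside that shell, run the tools mentioned above, e.g.
observer:start().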
I don't know if you are using couchbase alone or with couchdb, but if you want to use observer or other tools from the command line, you can start couchdb with the -i flag:
# -i use the interactive Erlang shell
couchdb -i
In case your application runs remotely without a GUI, you can use etop; it's a CLI alternative to observer. You can also dump the etop output to a file if you don't want to run it directly from your Erlang shell. IMHO, if you want more information concerning I/O or debugging, use eprof, fprof and other profiling tools with a dump file (see also the eep profiling tool, which is easy to use).
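For example, from a distributed shell that can reach the target node, something like this (the node name is a placeholder and the options are only illustrative; see the etop documentation):

% top-like text output, refreshed every 10 seconds, 20 lines, sorted by memory
etop:start([{node, 'ns_1@127.0.0.1'}, {output, text},
            {interval, 10}, {lines, 20}, {sort, memory}]).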
Another alternative: if you are using SSH and want to see the observer window, you can use X11 forwarding with ssh (ssh -X $yourserver or ssh -Y $yourserver) and simply run observer:start(). in your Erlang shell.
I'm baffled. I have a VM running Ubuntu 14.04. I've followed the procedures here: http://clang.llvm.org/docs/LibASTMatchersTutorial.html and am at the step to run ninja, which builds llvm and clang. Now, my VM is no slouch: I gave it 6GB of RAM, 4 CPUs and a 20GB swap file. The biggest problem comes at link time - it seems to start a large number of ld processes, each using at least 3-4GB of virtual memory and, at some point, a lot of CPU each. The swap file grew to over 12GB and the processes are all IO-bound, but I don't know if they are doing something useful or thrashing. All I know is the disk is getting hammered and the jobs run forever. I've actually just dropped the VM's CPU count to 1 to see if it might be more efficient with less parallelism, as I surmised the issue may be thrashing.
I suppose my disk could be slow... Any ideas? Should I be using make instead of ninja? My expertise is not Linux (although I'm getting there :-) ), so I'm following the tutorial, but perhaps it is not the recommended "best" way to build the clang/llvm programs.
I have been there. It happens with the latest svn release (but not if you get clang 3.8 or older releases). What is happening is that, during development, a lot of debug information is generated for each compilation unit, so the file sizes become big.
The solution is to turn off all the debug info that is attached by default. You are probably not going to debug clang, so you won't need it. So instead of just doing this
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON
What you should do is
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
All the other steps remain the same. Now, I have not tested this with ninja, but I have verified it with make on Ubuntu (same tutorial; I modified the same thing in step 7). This should work as well.
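Putting it together, the rebuild sequence could look like the sketch below; the ninja -j value is only an example, but lowering it limits how many link jobs run at once and therefore the peak memory, which matches your own idea of reducing parallelism:

# from the build directory: reconfigure without debug info, then rebuild
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
# limit parallel jobs so the linkers don't all run at once (value is an example)
ninja -j 2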
What is the best CLI tool to take memory dumps of C++ processes on Linux? I have a program which monitors the memory usage of different processes running on Linux. For Java-based processes, I am using jstack and jmap to take the thread and heap dumps. But are there any good CLI tools to take similar dumps for C++-based processes? And if yes, how do we use them, and once a dump is taken, how do we analyse it?
Any input will be appreciated.
I would recommend using gcore, an open-source utility that dumps the memory image of a running process. To achieve consistency, the target process is suspended while its memory is collected and resumed afterwards.
Usage info can be found at the following link:
gsp.com/cgi-bin/man.cgi?section=1&topic=gcore
Another option is via gdb: attach to the process from a gdb session, type the 'gcore' command, and then detach.
$ gdb --pid=123
(gdb) gcore
Saved corefile core.123
(gdb) detach
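The same can be done non-interactively, and the resulting core file can then be loaded back into gdb for analysis; the PID and paths below are placeholders:

$ gdb --batch --pid=123 -ex "gcore /tmp/core.123" -ex detach

$ gdb /path/to/your/binary /tmp/core.123
(gdb) info threads
(gdb) thread apply all bt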
I am setting ANT_OPTS in the environment to "-Xms256m -Xmx1024m". After setting this, I am not able to run ant files from the command prompt. It throws the following error:
"Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine."
Although I have enough physical memory available (more than 2048m) to allot 1024m for ANT_OPTS, it still throws the above error. Can there be any other reason why I cannot set Xmx to 1024m?
Anyway, here is how to fix it:
Go to Start->Control Panel->System->Advanced(tab)->Environment Variables->System Variables->New:
Variable name: _JAVA_OPTIONS
Variable value: -Xmx512M
or
set _JAVA_OPTIONS=-Xmx512M
or
Change the ant call as shown below.
<exec>
<arg value="-J-Xmx512m" />
</exec>
Then build the files again using ant.
It worked for me.
You don't mention what OS you're running. If you're on Windows (especially 32-bit), I often see problems allocating more than, say, 800MB of heap, regardless of how much actual memory you have available. This isn't really Windows-bashing: the Windows JVM wants to allocate all of its heap in a contiguous chunk, and if it can't, it fails to start.
I think Java maximum memory on Windows XP does a good job of explaining the problem and how you might try to solve it.
Whatever you set as the initial minimum heap, the JVM will try to allocate at start-up. It seems that on your machine (a 32-bit machine, I assume) the JVM is unable to allocate it, so JVM start-up fails. Try setting -Xms to 128m or less. It should work.
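For example, following the suggestion above (the values are only illustrative):

# Linux shell
export ANT_OPTS="-Xms128m -Xmx1024m"

rem Windows command prompt
set ANT_OPTS=-Xms128m -Xmx1024m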