Whenever I use Jupyter Notebook, it uses 99-100% of my CPU (according to Task Manager). I am using Windows 10.
Is there any way I can limit this CPU usage? I want to use my PC for some other task while keeping the Jupyter kernel running.
Thanks in advance.
I've been running Memgraph for a few days now and everything is working as expected. This is the first time that I'm using Docker.
I've noticed that when I shut down the Memgraph Platform my RAM is still used. I need to restart my computer to free up my RAM. Is there some switch that I can use to limit the memory that Memgraph Platform uses? Is there some way to release the memory after I shut it down?
If it is important, my OS is Windows 10 Professional and I have a 6-year-old laptop with 8 GB of RAM.
The issue you are experiencing is not related to Memgraph, but to Docker, or more precisely to WSL2. You say that you use Windows 10, so I presume your Docker is configured to use WSL2.
You didn't write which exact build of Windows 10 you are using, but depending on the build, WSL2 can use up to 80% of your RAM if you don't limit it.
When you run the Docker image you will see a process called vmmem. When you shut down the running Docker image, this process will still occupy your RAM. Restarting your computer frees up the RAM, which is what you are experiencing.
The solution is not to change the configuration of your Memgraph, but to configure Docker. You need to limit the amount of memory that WSL2 can use. But be careful; this is a change that will affect all of your WSL2 instances, not just the Docker ones.
The exact steps you need to follow are:
Shut down all of the WSL instances with wsl --shutdown
Edit the .wslconfig file (it is located in your user profile folder; create it if it doesn't exist)
Add the following lines to it:
[wsl2]
memory=3GB
This will limit the RAM usage of WSL2 to 3 GB. I hope this helps.
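If you want to confirm that the limit took effect, a quick check (just a sketch; the 3 GB value matches the example above) is:
# from PowerShell or cmd: stop all WSL2 VMs so the new .wslconfig is picked up
wsl --shutdown
# later, inside any WSL2 distribution, the reported total memory should be roughly 3 GB
free -h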
How can I make a simultaneous CPU and GPU stress test on a Jetson Xavier machine (Ubuntu 18.04, JetPack 4.6)?
The only code I found is
https://github.com/JTHibbard/Xavier_AGX_Stress_Test, but it has tough package incompatibility issues and it only stresses the CPU.
Can anyone contribute another code sample, or solve the issues with the one mentioned? Python code is preferred.
Solution found. For the CPU stress test, the link above works; it needs the numba package to be installed. For the GPU stress test, the samples in the CUDA folder of NVIDIA Jetson machines can be used simply and efficiently. The samples are in /usr/local/cuda/samples. Choose one and compile it using sudo make. The compiled test binary will be placed in /usr/local/cuda/samples/bin/aarch64/linux/release (aarch64 may differ on other architectures). Run the test and monitor the performance using sudo jtop in another terminal.
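For example, a minimal command sequence (a sketch; matrixMul is just one of the bundled samples, any compute-heavy sample will do) could look like this:
# CPU stress test: the linked Xavier_AGX_Stress_Test script needs numba
pip3 install numba
# GPU stress test: build one of the bundled CUDA samples
cd /usr/local/cuda/samples/0_Simple/matrixMul
sudo make
# run the compiled binary from the shared release directory in a loop for sustained load
while true; do /usr/local/cuda/samples/bin/aarch64/linux/release/matrixMul; done
# in a second terminal, monitor CPU/GPU utilisation
sudo jtop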
I would like to modify an official Linux kernel to test some possibilities for the perf Linux module (I need to modify some files in kernel/events/..., not only tools/perf/...).
Naively, I thought of using a VM or Docker, but I need to test my custom version with hardware performance counters (HPCs), and that is a big problem:
Docker can use HPCs, but as I understand it only through my host kernel, so I can't directly test a custom kernel without installing it on my system (correct me if I am wrong)
A VM can't use HPCs because it can't emulate them
What is the best way to test a custom Linux kernel without installing it directly on my Ubuntu system? And if I have to, what is the most elegant way to do these tests? Thank you.
I found a solution: KVM + the QEMU emulator.
To use the PMU, I changed this parameter in the VM configuration (XML format):
<cpu mode='host-passthrough'/>
Or you can add this option on the QEMU command line:
-cpu host
I followed in part this page for building the kernel on QEMU, and this page for the counters.
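As a rough sketch (assuming an x86_64 host, a bzImage built from the modified tree, and a prebuilt rootfs.img; adjust paths to your setup), booting the custom kernel directly with QEMU/KVM looks like this:
# -cpu host (the command-line equivalent of <cpu mode='host-passthrough'/>) exposes the
# host PMU, so perf inside the guest can read real hardware counters
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -m 4G \
  -smp 4 \
  -kernel arch/x86/boot/bzImage \
  -append "root=/dev/sda console=ttyS0 rw" \
  -drive file=rootfs.img,format=raw \
  -nographic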
I have constructed a machine-learning computer with two RTX 2070 SUPER NVIDIA GPUs connected with SLI Bridge, Windows OS (SLI verified in NVIDIA Control Panel).
I have benchmarked the system using http://ai-benchmark.com/alpha and got impressive results.
In order to take the best advantage of libraries that use the GPU for scientific tasks (cuDF), I have created a TensorFlow Linux container:
https://www.tensorflow.org/install/docker
using the “latest-gpu-py3-jupyter” tag.
I have then connected PyCharm to this container and configured its Python interpreter as the interpreter of the same project (I mounted the host project folder in the container).
When I run the same benchmark on the container, I get the error:
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[50,56,56,144] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[node MobilenetV2/expanded_conv_2/depthwise/BatchNorm/FusedBatchNorm (defined at usr/local/lib/python3.6/dist-packages/ai_benchmark/utils.py:238) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
This error relates to the exhaustion of GPU memory inside the container.
Why does the GPU on the Windows host successfully handle the computation, while the GPU in the Linux container runs out of memory?
What makes this difference? Is it related to memory allocation in the container?
Here is an awesome link from docker.com that explains why your desired workflow won't work. It wouldn't work with RAPIDS cuDF either. Docker Desktop works using Hyper-V, which isolates the hardware and doesn't expose the GPU the way the Linux drivers expect. Also, nvidia-docker is Linux-only.
I can tell you that RAPIDS (cuDF) currently doesn't support this setup either. Windows, however, does work fine on top of a Linux host. For both TensorFlow and cuDF, I strongly recommend that you use (or dual boot) one of the recommended OSes as your host OS, found here: https://rapids.ai/start.html#prerequisites. If you need Windows in your workflow, you can run it on top of your Linux host.
There is a chance that a future WSL version will allow you to run RAPIDS on Windows, letting you craft your own on-Windows solution.
Hope this helps!
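For reference, on a supported Linux host with the NVIDIA Container Toolkit installed (an assumption, not part of the original Windows setup), the same image is typically started with GPU access like this:
# expose all GPUs to the container and start the Jupyter image from the question
docker run --gpus all -it --rm -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
# quick sanity check that the GPUs are visible inside the container
docker run --gpus all --rm tensorflow/tensorflow:latest-gpu-py3-jupyter nvidia-smi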
What is the best CLI tool to take memory dumps of C++ processes on Linux? I have a program which monitors the memory usage of different processes running on Linux. For Java-based processes, I am using jstack and jmap to take the thread and heap dumps. But are there any good CLI tools to take similar dumps for C++-based processes? And if so, how do we use them, and once a dump is taken, how do we analyse it?
Any input will be appreciated.
I would recommend using gcore, which is an open-source executable for dumping the memory of a running process. To achieve consistency, the target process is suspended while its memory is collected and resumed afterwards.
Usage info can be found at the following link:
gsp.com/cgi-bin/man.cgi?section=1&topic=gcore
Another option is via gdb: attach to the process from a gdb session, type the 'gcore' command, and then detach.
$ gdb --pid=123
(gdb) gcore
Saved corefile core.123
(gdb) detach
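Once a dump exists, the usual way to analyse it is to load it back into gdb together with the binary that produced it and inspect the threads and backtraces (the program path and PID below are placeholders):
$ gdb /path/to/your_program core.123
(gdb) info threads
(gdb) thread apply all bt
(gdb) quit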