Out of curiosity, I was playing with overflowing the stack with this code:
fn main() {
    let my_array: [i32; 3000000000] = [3; 3000000000];
    println!("{}", my_array[0]);
}
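For comparison, a minimal heap-based sketch (using vec!, with a much smaller element count so it actually runs) avoids the stack entirely, since only the Vec's pointer, length, and capacity live on the stack:
fn main() {
    // heap allocation: the 3,000,000 i32s live on the heap, not the stack
    let my_vec: Vec<i32> = vec![3; 3_000_000];
    println!("{}", my_vec[0]);
}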
To my surprise, I ended up with three different outcomes:
1) This is what I expected:
thread '<main>' has overflowed its stack
Illegal instruction (core dumped)
2) Surprisingly vague:
Illegal instruction (core dumped)
3) Totally puzzling:
208333333
For the stochastic behavior to show up I had to restart the shell; otherwise the results were deterministic (I would get the same error message over and over).
I compiled with just:
rustc my_file.rs
and executed with:
./my_file
My rustc version:
rustc 1.0.0 (a59de37e9 2015-05-13) (built 2015-05-14)
My Ubuntu version:
Distributor ID: Ubuntu
Description: Ubuntu 14.04 LTS
Release: 14.04
Codename: trusty
Also, the array I am trying to create is 12 GB (3,000,000,000 × 4 bytes), and I am on a tiny laptop that does not have that much RAM.
Any ideas what could be going on here?
Edit:
I was playing with the size of the array (which I think might be the reason for the different errors, but why?) and got one more:
4) Makes perfect sense.
error: the type `[i32; 300000000000000]` is too big for the current architecture
and my system architecture is x86_64.
It seems the randomness above is specific to my machine.
I checked the same code on another machine with the same rustc version, Ubuntu version, and architecture, and the results are much more predictable:
If the array size is 536870871 or greater (without reaching case 4), I get:
Illegal instruction (core dumped)
If the array size is 536870870 or smaller (without being small enough to actually work), I get:
thread '<main>' has overflowed its stack
Illegal instruction (core dumped)
Not a single time have I gotten case 3), where garbage was returned. (For what it's worth, 536870871 elements × 4 bytes is just under 2^31 bytes, so the cutoff appears to sit near the 2 GiB boundary.)
Related
I have an Ubuntu 18.04 system with 4 GB RAM and a 500 GB hard disk. I installed Elasticsearch with:
sudo apt-get install elasticsearch
But when starting Elasticsearch with:
sudo service elasticsearch start
my system gets stuck and I am unable to do anything. How do I fix this?
Maybe I can answer this question, as I encountered this problem a while ago and resolved it after some R&D. My Ruby on Rails application is live, has more than 8000 images, and is CPU-intensive, as there is a lot happening on my server: every request uses lat/lng to serve a geo-calculated response. I was facing a low-memory issue much like yours when I started and reindexed my data in Elasticsearch.
My learnings:
Elasticsearch is a monster that will eat up your RAM.
No matter how optimized the keywords or data you push to Elasticsearch for indexing are, you must keep a buffer of backup memory.
What I did to resolve my issue:
Add swap space, which acts as backup memory in case your RAM is exhausted (see the sketch below).
I also upgraded to 8 GB of RAM to be on the safe side.
Since following the steps above, I am not facing any low-memory errors, despite the size of my indexed data growing every month, and indexing this huge data remains error-free.
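For reference, a minimal sketch of adding a swap file on Ubuntu (the 4G size and the /swapfile path are placeholders; adjust them to your system):
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make the swap file persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab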
One more thing I recently did to improve the memory consumption of my Rails app was building jemalloc into my Ruby 2.4.1. You can read more here, but in simple words, it helps improve the memory consumption of Ruby apps. A copied explanation of jemalloc:
Ruby traditionally uses the C language function malloc to dynamically allocate, release, and re-allocate memory when storing objects. Jemalloc is a malloc(3) implementation developed by Jason Evans (hence the “je” initials at the start of malloc), which appears to be more effective at allocating memory compared to other allocators due to its focus on fragmentation avoidance and scalable concurrency support.
Below are the steps I took to add jemalloc to my existing Ruby 2.4.1 by reinstalling 2.4.1 on my production server (after testing on staging/dev) using RVM.
=========== CHECK RVM
rvm info
============ check ruby version
ruby -v
ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]
========== if ruby is installed, reinstall with Jemalloc and compile
============= REINSTALL RUBY WITH JEMALLOC
rvm reinstall 2.4.1 -C --with-jemalloc --disable-binary
============ VERIFY JEMALLOC IN THE LIST OF LIBRARIES RUBY WAS COMPILED WITH
ruby -r rbconfig -e "puts RbConfig::CONFIG['LIBS']"
-lpthread -ljemalloc -lgmp -ldl -lcrypt -lm
The results of injecting jemalloc into Ruby were outstanding: even after testing with 2000 random requests, my overall app memory was only 364 MB and remained stable throughout, as measured with the testing gems below.
gem "memory_profiler"
gem "derailed_benchmarks"
Hope it helps.
The CPU and memory (RAM) used by Elasticsearch can be very high, so you can tell Elasticsearch to use only a specific amount of memory.
First, go to /etc/elasticsearch/.
Looking at the directory structure, you will find a file named jvm.options.
Limit the heap by adding two lines to the jvm.options file (each option on its own line):
-Xms1g
-Xmx1g
Then just restart Elasticsearch and enjoy :)
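To verify the limits took effect after the restart (assuming Elasticsearch is listening on its default port 9200), you can query the nodes API:
curl -s 'localhost:9200/_nodes/jvm?pretty'
# check heap_init_in_bytes / heap_max_in_bytes under jvm > mem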
I have been trying to use a specific pretrained machine learning model for captioning pictures. I have been using https://github.com/unnonouno/densecap .
It comes with a Dockerfile setting up a whole cuda/torch/cudnn environement.
Predictions on a new picture are made by running the run_model.lua script. It works when running on the CPU by passing -gpu -1, but not when removing the argument and running on the GPU. I get the following error in that case:
THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-8398/cutorch/lib/THC/THCGeneral.c line=70 error=35 : CUDA driver version is insufficient for CUDA runtime version
/root/torch/install/bin/luajit:
/root/torch/install/share/lua/5.1/trepl/init.lua:389: loop or previous error loading module 'cutorch'
stack traceback:
[C]: in function 'error'
/root/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
./densecap/utils.lua:26: in function 'setup_gpus'
run_model.lua:145: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406670
I have tried different things, such as reinstalling cudnn by running luarocks install cudnn, or downgrading from cudnn5 to cudnn4, without any success.
The issue appears to be with your CUDA driver:
CUDA driver version is insufficient for CUDA runtime version
Take a look at similar discussions here.
No need to change your cuDNN version. You just need to rectify your CUDA driver/toolkit compatibility (see the check below).
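A quick way to compare the two versions (assuming nvidia-smi and nvcc are available; note that inside a Docker container the driver comes from the host):
# driver version, reported by the host's driver stack
nvidia-smi
# CUDA toolkit/runtime version the code was built against
nvcc --version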
I am trying to program a Xilinx zc706 board, which involves building a Linux kernel and setting up the bootloader. I am following the workflow given here.
The first step after downloading everything involves making the device tree compiler, which I need to build U-Boot, which I need to start up Linux. I obtained the source for dtc off GitHub, but when I went into the SDK shell, moved to the directory, and entered make, I got an error:
sed: -e expression #1, char 1: unknown command: `''
-x was unexpected at this time.
" " LEX convert-dtsv0-lexer.lex.c
process_begin: CreateProcess(NULL, flex -oconvert-dtsv0-lexer.lex.c convert-dtsv0-lexer.l, ...) failed.
followed by some other output saying files could not be found, presumably because this first step failed.
I have no idea how to read this error; it's gibberish to me. Can someone explain either what's wrong with this build, or how I can get the dtc or U-Boot I would need to run a Zynq chip?
It looks like you're making things much harder on yourself by doing manually what PetaLinux will do automatically for you. Unless you're a die-hard Linux purist and want to build your embedded Linux system from scratch, you should stop reading after the first paragraph of the link you posted and head to the PetaLinux wiki page.
Follow the steps in the PetaLinux Tools Reference Guide to get your project up and running. It will handle building u-boot, the rootfs, the Linux kernel, and the device tree compiler for you (via the petalinux-* commands; see the sketch below), so you can focus on developing your application.
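As a rough sketch of that flow (the project name and hardware-description path are placeholders, and exact flags vary by PetaLinux version):
petalinux-create -t project --template zynq -n zc706_demo
cd zc706_demo
petalinux-config --get-hw-description=<path-to-exported-hardware>
petalinux-build
# package the boot image (the FSBL path shown is the default output location)
petalinux-package --boot --fsbl images/linux/zynq_fsbl.elf --u-boot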
When trying to acquire a heap dump of my Eclipse RCP application with the Java Memory Analyzer, I get the following error message:
Error creating heap dump. jmap exit code = 1
4120: Unable to attach to 32-bit process running under WOW64
The -F option can be used when the target process is not responding
OS: 64-bit Windows 7
Java Memory Analyzer: 64-bit
Application: 32-bit
I tried both the 32-bit and the 64-bit variant and got the same error.
Can someone tell me what the problem is?
This means the jmap you are using is the one bundled with the 64-bit JDK. If you use this jmap to acquire a heap dump from a 32-bit JVM, this error pops up.
Solution: use the version of jmap that is bundled with the 32-bit JDK, for example:
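(The JDK path and PID below are placeholders; the point is to run the jmap binary from the 32-bit JDK's bin directory.)
"C:\Program Files (x86)\Java\jdk1.7.0\bin\jmap" -dump:format=b,file=heap.hprof <pid>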
Let's say you have the 64-bit version of Java (and therefore 64-bit Java tools like jvisualvm and jstack), but the IDE/path from which you run the javac command has 32-bit Java on the path; then you will see such issues.
If you try to analyze this process from:
1) Java VisualVM: it may not load your process properly, meaning you will not be able to take thread/heap dumps of the problematic process.
2) jstack: it will also produce the same problem you mentioned above.
To solve the issue, make sure the versions match everywhere (see the check below).
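A quick way to check the bitness of a given java binary (a 64-bit JVM identifies itself in the version banner):
java -version
# a 64-bit JDK prints e.g. "Java HotSpot(TM) 64-Bit Server VM"
# a 32-bit JVM prints "Client VM" or "Server VM" without "64-Bit"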
I'm working on a Rails 3 project using Elasticsearch and Tire. After installing Elasticsearch, when I try to run it, it gives me the following error:
The stack size specified is too small, Specify at least 160k
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I have Java 7 and my OS is Ubuntu 12.04. How do I resolve this error?
Do I need to install Java 6, and if so, how do I do that?
Had the same problem with an older version of elasticsearch (0.19.0).
Installed 0.19.8 and it works again.
You can get it here: elasticsearch-0.19.2.tar.gz
good luck!
Indeed, there seems to be a problem with Java 1.7 and older versions of ES.
Alternatively, you can pass the stack size option, set to something greater than 160k, when starting Elasticsearch on the console:
sudo ./bin/elasticsearch -Xss194k
Increase the stack size to an amount greater than 160k.
Edit the file elasticsearch-0.xx.x/bin/elasticsearch.in.sh at about line 34 and increase -Xss to something larger, such as -Xss256k:
# reduce the per-thread stack size
JAVA_OPTS="$JAVA_OPTS -Xss256k"
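After restarting, you can confirm the flag reached the JVM by inspecting the running process, for example:
ps aux | grep [e]lasticsearch
# the java command line should now include -Xss256k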