When I try to compile Phalcon, I get an error:
virtual memory exhausted: Cannot allocate memory
I am running the following commands:
git clone --depth=1 git://github.com/phalcon/cphalcon.git
cd cphalcon/build
sudo ./install
I have a VPS with 1GB RAM
Adding a swap file may help. I ran into this problem when trying to compile YouCompleteMe for Vim, and solved it by adding a swap file.
https://www.digitalocean.com/community/articles/how-to-add-swap-on-ubuntu-14-04
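If you want to try that, a minimal sketch of adding a 1 GB swap file on Ubuntu (the size and path here are just examples; the article above walks through it in more detail):
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile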
It seems GCC is allocating a lot of memory; see this article:
https://web.archive.org/web/20141202015428/http://hostingfu.com/article/compiling-with-gcc-on-low-memory-vps
Stopping as many services as possible (Apache, MySQL, etc.) will free up more memory, and Phalcon will compile. Worst case scenario, you will need to increase the memory of your virtual machine.
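For example, on Ubuntu you could temporarily stop the big services while compiling (the service names are assumptions; adjust them to whatever you actually run):
sudo service apache2 stop
sudo service mysql stop
# build Phalcon, then bring them back up:
sudo service apache2 start
sudo service mysql start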
Thanks to @AndrewD for providing the link that works.
Another option that Andres suggested is to build from a different folder:
https://forum.phalconphp.com/discussion/7891/upgrading-from-201-to-205-getting-virtual-memory-exhausted-canno
The steps would be:
git clone --depth=1 git://github.com/phalcon/cphalcon.git
cd cphalcon/ext
sudo ./install
As he explained it, this approach uses less memory but takes more time, and for newer GCC versions (>4.7) the end result is the same.
For reasons pertaining to storage and git, installing Homebrew comes with the following issue:
Error:
homebrew-core is a shallow clone.
homebrew-cask is a shallow clone.
To `brew update`, first run:
git -C /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core fetch --unshallow
git -C /usr/local/Homebrew/Library/Taps/homebrew/homebrew-cask fetch --unshallow
These commands may take a few minutes to run due to the large size of the repositories.
This restriction has been made on GitHub's request because updating shallow
clones is an extremely expensive operation due to the tree layout and traffic of
Homebrew/homebrew-core and Homebrew/homebrew-cask. We don't do this for you
automatically to avoid repeatedly performing an expensive unshallow operation in
CI systems (which should instead be fixed to not use shallow clones). Sorry for
the inconvenience!
It explicitly states what to do next, but I've found that running those git commands is a pretty terrible experience with slow internet (or just in general). The absolute best solution to such an issue would be if there were torrents which contained the entire project folder including git. Barring that, a simple download would be nice. Really, anything but git is in scope of this question.
To reiterate the issues with using git:
requires git
opaque UX by default (tends to hang without any updates for long periods of time)
not great for slow connections
wrong tool for a situation that only involves downloading files
Is there an alternative path to Homebrew that doesn't incorporate git?
As said in the comments, you can use the skip tap cloning feature of Homebrew. Note that this is, for now, a beta feature:
Skip Tap Cloning (beta)
You can instruct Homebrew to skip cloning the
Homebrew/homebrew-core tap during installation by setting the beta
HOMEBREW_INSTALL_FROM_API environment variable with the following:
export HOMEBREW_INSTALL_FROM_API=1
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
This will make Homebrew install formulae and casks from the
homebrew/core and homebrew/cask taps using Homebrew’s API instead of
local checkouts of these repositories.
I agree with OP 100%.
Homebrew should simply provide a downloadable file, especially since GitHub has implemented a very unfriendly interface and the default shell in macOS Ventura is now zsh, which Homebrew specifically states will not work with their provided Terminal install command.
If you make it so difficult to install your software, then don't be surprised that fewer people use it.
Edit: I suppose they just don't really care, but I spent a few hours trying to get Homebrew up to date tonight because of GitHub.
Can anyone help me figure out why it took around 20 GB of my C drive to install QIIME2 through Docker?
Thank you!
Before installing QIIME2, I had 30 GB free on my C drive, but only 8 GB remains after installation.
The short answer to that question is: QIIME2 is pretty big. But I'm sure you knew that already, so let's dig into the details.
First, the QIIME image is roughly 12 GB when uncompressed. (This raises the question of where the other 8 GB went if you lost 20 GB in total. I don't have an answer to that.)
Using a tool called dive, I can explore the QIIME image, and see where that disk space is going. There's one entry that stands out in the log:
5.9 GB |1 QIIME2_RELEASE=2022.8 /bin/sh -c chmod -R a+rwx /opt/conda
For reference, chmod is a command which changes the permissions on files and directories without changing their contents. Yet this command is responsible for half the size of the image. It turns out that, due to the way Docker works internally, if a layer changes the metadata or permissions of a file, then the original file must be re-included into that layer. More information is available in Docker's documentation on image layers.
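As a rough cross-check (the image name and tag here are assumptions based on the QIIME2_RELEASE value in the log entry above), you can see the per-layer sizes without dive by running:
docker pull quay.io/qiime2/core:2022.8
docker history quay.io/qiime2/core:2022.8
docker history prints one row per layer, so the chmod layer shows up with its full ~5.9 GB size even though it only touched permissions.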
The remainder is 6GB, which comes mostly from a step where QIIME installs all of its dependencies. That's fairly reasonable for a project packaged with conda.
To summarize, it's a combination of three factors:
Conda is fairly space-hungry, compared to equivalent pip packages.
QIIME has a lot of features and dependencies.
Every dependency is included twice.
Edit: this is now fixed in version 2022.11.
I'm using Bazel on a computer with 4 GB RAM (to compile the TensorFlow project). However, Bazel does not take into account the amount of memory I have and spawns too many jobs, causing my machine to swap and leading to a longer build time.
I already tried setting the ram_utilization_factor flag through the following lines in my ~/.bazelrc
build --ram_utilization_factor 30
test --ram_utilization_factor 30
but that did not help. How are these factors to be understood anyway? Should I just randomly try out some others?
Some other flags that might help:
--host_jvm_args can be used to set how much memory the JVM should use by setting -Xms and/or -Xmx, e.g., bazel --host_jvm_args=-Xmx4g --host_jvm_args=-Xms512m build //foo:bar (docs).
--local_resources in conjunction with the --ram_utilization_factor flag (docs).
--jobs=10 (or some other low number, it defaults to 200), e.g. bazel build --jobs=2 //foo:bar (docs).
Note that --host_jvm_args is a startup option so it goes before the command (build) and --jobs is a "normal" build option so it goes after the command.
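Putting these together, a sketch of a ~/.bazelrc for a low-memory machine might look like this (the exact numbers are only examples for a 4 GB box):
startup --host_jvm_args=-Xmx2g
build --jobs=2
build --ram_utilization_factor=30
Startup options such as --host_jvm_args go on a startup line, while --jobs and --ram_utilization_factor go on build lines.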
For me, the --jobs argument from @kristina's answer worked:
bazel build --jobs=1 tensorflow:libtensorflow_all.so
Note: --jobs=1 must follow, not precede build, otherwise bazel will not recognize it. If you were to type bazel --jobs=1 build tensorflow:libtensorflow_all.so, you would get this error message:
Unknown Bazel startup option: '--jobs=1'.
Just wanted to second @sashoalm's comment that the --jobs=1 flag was what made bazel build finally work.
For reference, I'm running bazel on Lubuntu 17.04, running as a VirtualBox guest with about 1.5 GB RAM and two cores of an Intel i3 (I'm running a Thinkpad T460). I was following the O'Reilly tutorial on TensorFlow (https://www.oreilly.com/learning/dive-into-tensorflow-with-linux), and ran into trouble at the following step:
$ bazel build tensorflow/examples/label_image:label_image
Changing this to bazel build --jobs=1 tensorflow/... did the trick.
I ran into quite a bit of instability where bazel build failed in my k8s cluster.
Besides --jobs=1, try this:
https://docs.bazel.build/versions/master/command-line-reference.html#flag--local_resources
E.g. --local_resources=4096,2.0,1.0
On my production server (hosted on DigitalOcean, if that helps) running Ubuntu 12.04, I have Rails 4 and rake 10.1.1.
When I deploy, I run rake assets:precompile, and I've noticed a strange issue where if I have a rails console session open when I do this, I get the following output
~# rake assets:precompile
~# Killed
It's mainly annoying, but the reason I want it resolved is that when hiring new developers, there will be a deploy/console conflict nightmare.
Thanks,
Brian
Your precompile process is probably being killed because you are running out of RAM. You can confirm this by running top in another ssh session. To fix this, create a swap file that will be used when RAM is full.
Create SWAP Space on Ubuntu
You will probably end up needing some swap space if you plan on using Rails on a DigitalOcean 512 MB RAM droplet. Specifically, you will run out of RAM when compiling the assets, resulting in the process being quietly killed and preventing successful deployments.
To see if you have a swap file:
sudo swapon -s
No swap file shown? Check how much disk space you have:
df
To create a swap file:
Step 1: Allocate a file for swap
sudo fallocate -l 2048m /mnt/swap_file.swap
Step 2: Change permission
sudo chmod 600 /mnt/swap_file.swap
Step 3: Format the file for swapping device
sudo mkswap /mnt/swap_file.swap
Step 4: Enable the swap
sudo swapon /mnt/swap_file.swap
Step 5: Make sure the swap is mounted when you reboot. First, open fstab:
sudo nano /etc/fstab
Finally, add an entry in fstab (only if it wasn't added automatically):
# /etc/fstab
/mnt/swap_file.swap none swap sw 0 0
Save and exit. You're done adding swap. Now your rake assets:precompile should complete without being killed.
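If you want to double-check, the same commands from earlier confirm the result: swapon -s should now list the file, and free -m should show the extra swap space.
sudo swapon -s
free -m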
rake assets:precompile is a memory-eating process, so make sure you have enough RAM before running that command.
I have an OpsWorks stack on AWS and I decided to change my instance type. I was using t1.micro and just changed it to t1.small.
Thanks a lot.
This uses a lot of RAM. To check how much RAM you have free, use the command
free -m
This will show the available RAM in MB.
A temporary solution would be to create a swap space.
I was going to add this as a comment to Jason R's post above: before you go into his steps, just make sure it is a RAM resource issue.
You could also run, as root,
sync; echo 3 > /proc/sys/vm/drop_caches
to drop the page cache, dentries, and inodes, but it probably will not free up enough.
This might help someone. For me, since I couldn't use the fallocate command, I had to do:
sudo dd if=/dev/zero of=/mnt/4GB.swap bs=4096 count=1048576
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
sudo swapon /mnt/4GB.swap
My goal is to install and fully set up PostgreSQL by following the RailsCasts video.
P.S. I am on Mountain Lion 10.8.
$ brew install postgresql
seems okay.
$ initdb /usr/local/var/postgres
OKs, OKs, then...
FATAL: could not create shared memory segment: Cannot allocate memory
DETAIL: Failed system call was shmget(key=1, size=2072576, 03600).
HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space, or exceeded your kernel's SHMALL parameter. You can either reduce the request size or reconfigure the kernel with larger SHMALL. To reduce the request size (currently 2072576 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
So, like a good young SO grasshopper, I start googling and come to this SO post:
PostgreSQL installation error -- Cannot allocate memory
The suggested answer in that post led me to this answer: http://willbryant.net/software/mac_os_x/postgres_initdb_fatal_shared_memory_error_on_leopard
$ sudo sysctl -w kern.sysv.shmall=65536
Password:
kern.sysv.shmall: 1024 -> 65536
$ sudo sysctl -w kern.sysv.shmmax=16777216
kern.sysv.shmmax: 4194304 -> 16777216
Looks like everything worked so far, but in order to protect my changes across reboots, I need to update my /etc/sysctl.conf file. The problem is that I can't find it!
How do I locate this file? From my peanut-sized understanding of computers, no such file path exists, and if it did, what comes before the /etc? It certainly is not on my desktop. All I get is "no such file exists", but I don't know how to find this file.
Embarrassing. I was trying to cd into my file. Just do $ cd /etc.
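Note that on a fresh OS X install /etc/sysctl.conf may not exist at all; you can simply create it. A minimal sketch that persists the two values from the answer above (using nano, as with fstab earlier, but any editor works):
sudo nano /etc/sysctl.conf
Then add these two lines and save:
kern.sysv.shmall=65536
kern.sysv.shmmax=16777216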