Running Rails very slowly inside VirtualBox Ubuntu 12.04 - ruby-on-rails

I have VirtualBox with Ubuntu 12.04 and use Vagrant to set up my environment. I run Rails 3.2.9 on thin (rails s) and browse to the VirtualBox IP address (10.10.11.xxx:3000) from my host machine. That's where the trouble starts: the page loads very, very slowly, and in the Rails console I can see how slowly it serves the assets (CSS, JS, images): up to 5 seconds each! But if I go to 0.0.0.0:3000 inside Ubuntu, it works perfectly.
Inside the VM there are two network interfaces:
eth0 --> it is set by Vagrant (NAT)
eth1 --> bridge, has an external IP 10.10.11.xxx
Where is the problem, and where should I look for a solution?
People say it is related to a reverse DNS lookup problem. How can I solve that? Does anyone have an idea?

Make sure you don't place your project in a synced folder (by default Vagrant uses vboxsf, which has known performance issues when the number of files/directories is large).
Webrick Reverse DNS Lookup
Looks like you are using WEBrick (thin doesn't seem to have this problem); edit its config.rb to disable reverse DNS lookups and speed it up.
For an rbenv-managed Ruby this is, e.g., ~/.rbenv/versions/1.9.3-p448/lib/ruby/1.9.1/webrick/config.rb
Change :DoNotReverseLookup => nil to :DoNotReverseLookup => true
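A minimal shell sketch for locating and patching that file for whichever Ruby rbenv currently points at (this assumes GNU sed and that the file still contains the stock ":DoNotReverseLookup => nil" line; back up the file first, which -i.bak does for you):
# locate WEBrick's config.rb under the Ruby that rbenv currently uses
CONFIG_RB="$(find "$(rbenv prefix)/lib/ruby" -name config.rb -path '*webrick*' | head -n 1)"
# flip the default; -i.bak keeps a backup copy next to the original
sed -i.bak 's/:DoNotReverseLookup => nil/:DoNotReverseLookup => true/' "$CONFIG_RB"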
NOTE: People have mentioned stopping the avahi-daemon; you can try stopping it if you use it. My understanding is that it is NOT installed by default on Ubuntu Server (or other base installs), only on the desktop edition.
See these similar questions for more details:
Webrick is very slow to respond. How to speed it up?
Rails 3.1 on Ubuntu 11.10 under VirtualBox very slow

Seeing very slow performance when running Ubuntu 12.10 or 13.04 in VirtualBox? It's because Ubuntu can't use the graphics card for acceleration and instead renders graphics on the CPU through LLVMpipe, which makes running Ubuntu in VirtualBox really slow. http://namhuy.net/951/how-to-fix-slow-performance-ubuntu-13-04-running-in-virtualbox.html
To check if your Ubuntu 12.10 or 13.04 guest is using 3D acceleration
/usr/lib/nux/unity_support_test -p
You should see something like this
Not software rendered: no
Not blacklisted: yes
GLX fbconfig: yes
GLX texture from pixmap: yes
GL npot or rect textures: yes
GL vertex program: yes
GL fragment program: yes
GL vertex buffer object: yes
GL framebuffer object: yes
GL version is 1.4+: yes
Unity 3D supported: no
If "Not software rendered" and "Unity 3D supported" both say no, Unity is using the slow LLVMpipe software renderer.
To enable 3D support, first you will need to update the linux headers:
# check the running kernel version and install the matching headers
uname -r
sudo apt-get install linux-headers-$(uname -r)
sudo apt-get autoremove
# build tools needed to compile the Guest Additions kernel modules
sudo apt-get install build-essential
Now insert the VirtualBox Guest Additions ISO from the Devices menu and install it manually:
cd /media
ls
cd username
ls
cd VBOX*
ls
sudo ./VBoxLinuxAdditions.run
Add vboxvideo to /etc/modules:
sudo nano /etc/modules
Add “vboxvideo” at the end of the file
loop
lp
vboxvideo
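If you prefer not to edit the file interactively, appending the line from the shell has the same effect:
# non-interactive equivalent of the nano edit above
echo vboxvideo | sudo tee -a /etc/modules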
Reboot the machine
sudo reboot
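After the reboot you can re-run the earlier check; if the Guest Additions built correctly and the vboxvideo module loaded, "Unity 3D supported" should now report yes:
/usr/lib/nux/unity_support_test -p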

Related

How to install linux-modules-extra?

When I run sudo apt install linux-modules-extra-$(uname -r) in a Docker container based on Ubuntu 20.04, on a single-board computer running Ubuntu 18.04, I get the following errors:
E: Unable to locate package linux-modules-extra-4.15.0-143-generic
E: Couldn't find any package by glob 'linux-modules-extra-4.15.0-143-generic'
E: Couldn't find any package by regex 'linux-modules-extra-4.15.0-143-generic'
This makes me wonder whether it is even possible to install linux-modules-extra-4.15.0-143-generic on Ubuntu 20.04. Maybe it is only compatible with Ubuntu 18.04?
Could anyone clarify this for me please?
In general, if you're building a kernel module, it has to match exactly the kernel that's running on the host system. If you're using a native Debian or Ubuntu system (without Docker), there's a mechanism for rebuilding or reinstalling kernel modules when the host kernel is updated; see for example the Debian wiki KernelDKMS page.
In contrast, a Docker image is generally supposed to be portable across hosts. If you upgrade the host's kernel, or if you run a FROM ubuntu:18.04 image on an Ubuntu 20.04 host, the image isn't really supposed to be aware of this.
In your particular case, you can't get the kernel headers you need, because they're not part of the Ubuntu 18.04 distribution. For this particular case it might be possible to get the headers from the later version of Ubuntu, but it might not be possible in the general case; maybe because the system is actually running plain Debian or RHEL and the kernel build is different, maybe because the operator built their own kernel.
Since a Linux kernel module is so specific to the host it runs on, and since it can bypass any and all security concerns, it's not appropriate to try to install one in a container. Do it directly on the host instead.
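For example, a sketch of doing it on the host (this assumes the Ubuntu 18.04 single-board computer really does carry that kernel flavour in its package archive):
# run on the Ubuntu 18.04 host itself, not inside the Docker container
sudo apt-get update
sudo apt-get install "linux-modules-extra-$(uname -r)"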

Hash Sum mismatch error, and others, while Installing Docker on Ubuntu 18.04 VM (VirtualBox)

As a preface: I'm running these commands on a VirtualBox VM with Ubuntu Server 18.04. Unfortunately I can't get the bidirectional clipboard to work, so I have to post all the output as links. Super sorry about that.
I've been trying to install Docker on an Ubuntu 18.04 VM on VirtualBox running on a Windows 10 Home, Version 10.0.19041 host. I've been encountering issues at every turn.
First, I tried to install with apt-get, following the instructions in the Docker Ubuntu install tutorial, to no avail. I get a "Hash Sum mismatch" error pretty frequently. I also tried running the convenience script (i.e. curl -fsSL https://get.docker.com -o get-docker.sh followed by sudo sh get-docker.sh) on a completely fresh machine and got the same errors.
After I was unable to install with apt-get, I tried downloading the packages and installing manually. When trying to curl the packages with
sudo curl -k -O -L https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce_18.09.9~3-0~ubuntu-bionic_amd64.deb
and the same curl for the docker-ce-cli and containerd.io debs, I'm able to complete the downloads just fine. Then, when I run
sudo dpkg -i ./docker-ce_18.09.9~3-0~ubuntu-bionic_amd64.deb
to install the packages, I get these dpkg Errors, claiming that the deb is corrupted. I get the same errors no matter which deb I specify.
I suppose at the end of the day, Docker isn't strictly necessary for the project I'm working on, but it's very frustrating that I'm at such a loss. I'd be very grateful if anyone can give me some guidance. Please feel free to comment if you need any more system info.
P.S. I've seen a couple of theories but don't know how to address them.
Possibly an issue with WSL2 and the Virtual Machine Platform on Windows, discussed in this thread, but it didn't seem like anyone found a solution.
An issue related to the apt cache and /var/lib/apt/lists/*, which I've already cleared multiple times.
I've also run apt-get update more times than I can count.
Update here. In the end I was unable to install Docker on my VirtualBox VM. The thing that worked for me was booting up an Ubuntu 18.04 VM in VMware. With all the same specifications, I was able to install Docker and get my application running. If anyone comes back to this question and finds a fix for this on VirtualBox, please post it and I'll mark it as the answer!
The libgcrypt shared library has a real issue computing hashes when running VirtualBox or other VM software under Windows 10. I made a video about this.
https://youtu.be/inU8pQLXIkE
Here is the solution:
sudo mkdir /etc/gcrypt
sudo touch /etc/gcrypt/hwf.deny
cd /etc/gcrypt
sudo vi hwf.deny
(edit this file so that line 1 reads "all", without the quotes)
Save.
This will solve the issue.
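A non-interactive equivalent of the steps above, in case you are scripting the fix:
# same effect as the manual edit: tell libgcrypt to skip all hardware feature detection
sudo mkdir -p /etc/gcrypt
echo all | sudo tee /etc/gcrypt/hwf.deny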

(using the WSL Ubuntu app) System has not been booted with systemd as init system (PID 1). Can't operate [duplicate]

This question already has an answer here:
Enable Systemd in WSL 2
(1 answer)
Closed 2 months ago.
I'm a first-time Ubuntu user. I failed to install Ubuntu in VMware, so I installed the Ubuntu application from the Microsoft Store and everything was quite all right.
But when I enter the shutdown or halt command to power off my Ubuntu, I keep getting the error message 'System has not been booted with systemd as init system (PID 1). Can't operate'.
I tried to use Docker following this link (https://blog.jayway.com/2017/04/19/running-docker-on-bash-on-windows/) but failed after getting to step 2 many times. I'm not sure whether my failure is because I installed Docker Toolbox instead of the normal one (my computer is just Windows 10, not Windows Pro).
I think I have to try something else. If you don't mind me asking, how can I solve this problem?
(And one more thing: if I just click the 'X' button at the top right, is that different from shutting down Ubuntu with the 'halt' or 'shutdown' command?)
Thank you
I found this useful: https://linuxhandbook.com/system-has-not-been-booted-with-systemd/
In my case
# start docker using systemctl
sudo systemctl start docker
# returns:
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
The basic advice is:
# check if your system is using `systemd` or `sysvinit`
ps -p 1 -o comm=
If the command doesn't return systemd (in my case, on Ubuntu-20.04 under WSL, it returned init), then use the sysvinit command pattern:
# start services using sysvinit
service docker start
This worked for me.
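Putting the check and the start together (a small sketch, assuming the service is named docker on your distribution):
# pick the right command based on what PID 1 actually is
if [ "$(ps -p 1 -o comm=)" = "systemd" ]; then
  sudo systemctl start docker
else
  sudo service docker start
fi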
Based on this: https://devblogs.microsoft.com/commandline/systemd-support-is-now-available-in-wsl/
systemd is now available in WSL Version 0.67.6 or higher.
To enable:
Open a command prompt:
# CHOOSE option A or B:
# A. check your version and ensure it is 0.67.6 or higher
wsl.exe --version
# B. run WSL update if the version is low
wsl.exe --update
Open a WSL prompt:
sudo nano /etc/wsl.conf
Add this to wsl.conf and save the file:
[boot]
systemd=true
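If you'd rather not open an editor, the same file can be written from the WSL shell (this overwrites /etc/wsl.conf, so only do it if the file doesn't already contain other settings you need to keep):
printf '[boot]\nsystemd=true\n' | sudo tee /etc/wsl.conf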
Go back to the command prompt:
# warning: this will kill any shells/processes you have running!
wsl.exe --shutdown
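After WSL restarts, you can confirm that systemd is now PID 1 with the same check used above:
ps -p 1 -o comm=
# should now print: systemd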
First of all, Ubuntu installed via the MS Store uses WSL (Windows Subsystem for Linux) technology. That simply means there is no virtualization; the Windows and Linux kernels live side by side (the Linux kernel is not fully implemented yet).
So if you were able to "shut down your Ubuntu", you would turn off the whole computer, just like Windows does. But in this case WSL apparently doesn't have the rights to do that.
In other words, you can treat your Ubuntu bash window like any other terminal, e.g. CMD or PowerShell.
When you start a program in WSL (Ubuntu), you can also see it in the Windows Task Manager - that is proof that there is no virtualization.
Regarding docker:
If I'm not mistaken, Windows 10 Home doesn't provide Hyper-V virtualization, which means you'll have to use a different hypervisor, e.g. VirtualBox. To make that work, I can recommend following this tutorial, and for VirtualBox specifically please check this answer here.
Hope it helps :)

Docker and VirtualBox on Fedora 27

I installed VirtualBox 5.2 from their website and ran this command:
docker-machine create -d virtualbox dev
then I got this..
Running pre-create checks...
Error with pre-create check: "We support Virtualbox starting with version 5. Your VirtualBox install is
"WARNING: The vboxdrv kernel module is not loaded. Either there is no module
available for the current kernel (4.14.3-300.fc27.x86_64) or it failed to
load. Please recompile the kernel module and install it by
sudo /sbin/vboxconfig
You will not be able to start VMs until this problem is fixed.\\n5.2.2r119230\".
Please upgrade at https://www.virtualbox.org"
I did as it suggested..
sudo /sbin/vboxconfig
then..
vboxdrv.sh: Stopping VirtualBox services.
vboxdrv.sh: Building VirtualBox kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
kernel-devel kernel-devel-4.14.3-300.fc27.x86_64
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
kernel-devel kernel-devel-4.14.3-300.fc27.x86_64
...and I don't know what to do from there. Please help.
Install the kernel-devel package and it will continue. By the way, this is not a programming question; there are better forums for such questions...
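For example (a sketch; the exact package name to match the running kernel is already spelled out in the error output above):
# Fedora: install headers matching the running kernel, then rebuild the VirtualBox modules
sudo dnf install kernel-devel-$(uname -r)
sudo /sbin/vboxconfig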

How to install a minimal cuda driver file into Alpine linux

I want to install the minimal CUDA runtime files into Alpine Linux and create a much smaller Docker base image with CUDA than the ones provided by Nvidia themselves. The official Nvidia images are enormous, as usual.
How do I obtain these runtime files without pulling the entire cuda 8 toolkit during docker build?
I can't speak to what other files might be needed. However, Nvidia drivers are compiled against glibc, and Alpine uses musl to maintain its small footprint. You would likely need the Nvidia driver's source code so you could recompile it against musl, or an Alpine base image that implements glibc, such as this one. I haven't tried using this yet, but I was able to successfully compile libcudacore with musl and gcc/make on an Alpine 3.8 container. I have not yet been able to compile the entire Nvidia/CUDA toolkit. I will test this more when I have more time.
The reality is that Nvidia/CUDA is not supported in any way with Alpine Linux's musl or its libc port, and you will end up with a flaky image even if you succeed in your alchemist venture.
Nvidia drivers and CUDA toolkits are incredibly complex systems, and honestly I can't see the point of compiling them yourself against an unsupported system library or an unsupported libc port, with all the unexpected things that can happen even if it compiles. Use Debian's slim images or Ubuntu Minimal and install the officially supported files manually, as that is the smallest you can go. Or, even better, use the "huge" Nvidia DockerHub images (Ubuntu LTS based).
Anyway, beyond this question, the Nvidia DockerHub images are the best way to go: they are supported by the creators of the CUDA toolkit itself and they are no-brainers. If you want to be picky, go to their GitLab repository for Dockerfiles; you can build Debian/Ubuntu images by hand pretty easily and quickly.
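For example, pulling one of the official CUDA base images is a one-liner (the tag below is only illustrative; check the nvidia/cuda page on Docker Hub for the current ones):
docker pull nvidia/cuda:9.2-runtime-ubuntu16.04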
Yes, the Nvidia DockerHub images are 1-2 GB, but normally you only have to download them once: since you use the image as a base, when you add your code to it only those layers of your code, which are normally small (up to a few dozen MiB), are recurrently pulled and pushed, not the entire image. So honestly I can't see why people are so concerned about image sizes; smaller is better, no doubt, but only up to a point, and spending your valuable time on your actual needs is far better.
somebody's solution for alpine-cuda:
https://arto.s3.amazonaws.com/notes/cuda
Drivers
https://developer.nvidia.com/vulkan-driver
$ lsmod | fgrep nvidia
$ nvidia-smi
Driver Installation
https://us.download.nvidia.com/XFree86/Linux-x86_64/390.77/README/
https://github.com/NVIDIA/nvidia-installer
Driver Installation on Alpine Linux
https://github.com/sgerrand/alpine-pkg-glibc
https://github.com/sgerrand/alpine-pkg-glibc/releases
https://wiki.alpinelinux.org/wiki/Running_glibc_programs
$ apk add sudo bash ca-certificates wget xz make gcc linux-headers
$ wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://raw.githubusercontent.com/sgerrand/alpine-pkg-glibc/master/sgerrand.rsa.pub
$ wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.27-r0/glibc-2.27-r0.apk
$ wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.27-r0/glibc-bin-2.27-r0.apk
$ wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.27-r0/glibc-dev-2.27-r0.apk
$ wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.27-r0/glibc-i18n-2.27-r0.apk
$ apk add glibc-2.27-r0.apk glibc-bin-2.27-r0.apk glibc-dev-2.27-r0.apk glibc-i18n-2.27-r0.apk
$ /usr/glibc-compat/bin/localedef -i en_US -f UTF-8 en_US.UTF-8
$ bash NVIDIA-Linux-x86_64-390.77.run --check
$ bash NVIDIA-Linux-x86_64-390.77.run --extract-only
$ cd NVIDIA-Linux-x86_64-390.77 && ./nvidia-installer
Driver Uninstallation
$ nvidia-uninstall
Driver Troubleshooting
Uncompressing NVIDIA Accelerated Graphics Driver for Linux-x86_64 390.77
NVIDIA-Linux-x86_64-390.77.run: line 998: /tmp/makeself.XXX/xz: No such file or directory
Extraction failed.
$ apk add xz # Alpine Linux
bash: ./nvidia-installer: No such file or directory
Install the glibc compatibility layer package for Alpine Linux.
ERROR: You do not appear to have libc header files installed on your system. Please install your distribution's libc development package.
$ apk add musl-dev # Alpine Linux
ERROR: Unable to find the kernel source tree for the currently running kernel. Please make sure you have installed the kernel source files for your kernel and that they are properly configured
$ apk add linux-vanilla-dev # Alpine Linux
ERROR: Failed to execute `/sbin/ldconfig`: The installer has encountered the following error during installation: 'Failed to execute `/sbin/ldconfig`'. Would you like to continue installation anyway?
Continue installation.
Toolkit
https://developer.nvidia.com/cuda-toolkit
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/
Toolkit Download
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=runfilelocal
$ wget -c https://developer.nvidia.com/compute/cuda/9.2/Prod2/local_installers/cuda_9.2.148_396.37_linux
Toolkit Installation
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/
Toolkit Installation on Alpine Linux
$ apk add sudo bash
$ sudo bash cuda_9.2.148_396.37_linux
# You are attempting to install on an unsupported configuration. Do you wish to continue? y
# Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 396.37? y
# Do you want to install the OpenGL libraries? y
# Do you want to run nvidia-xconfig? n
# Install the CUDA 9.2 Toolkit? y
# Enter Toolkit Location: /opt/cuda-9.2
# Do you want to install a symbolic link at /usr/local/cuda? y
# Install the CUDA 9.2 Samples? y
# Enter CUDA Samples Location: /opt/cuda-9.2/samples
$ sudo ln -s cuda-9.2 /opt/cuda
$ export PATH="/opt/cuda/bin:$PATH"
Toolkit Uninstallation
$ sudo /opt/cuda-9.2/bin/uninstall_cuda_9.2.pl
Toolkit Troubleshooting
Cannot find termcap: Can't find a valid termcap file at /usr/share/perl5/core_perl/Term/ReadLine.pm line 377.
$ export PERL_RL="Perl o=0"
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
$ apk add g++ # Alpine Linux
cicc: Relink `/usr/lib/libgcc_s.so.1' with `/usr/glibc-compat/lib/libc.so.6' for IFUNC symbol `memset'
https://github.com/sgerrand/alpine-pkg-glibc/issues/58
$ scp /lib/x86_64-linux-gnu/libgcc_s.so.1 root@alpine:/usr/glibc-compat/lib/libgcc_s.so.1
$ sudo /usr/glibc-compat/sbin/ldconfig /usr/glibc-compat/lib /lib /usr/lib
Compiler
https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/
$ nvcc -V
Please define what you actually mean by "into Alpine Linux".
Regardless of whether you're running the workloads directly on the host, in a container, or in a chroot, you need to install the whole Nvidia driver stack (including CUDA libs, kernel drivers, etc.) on the host. Also, kernel and userland drivers are two sides of the same product; both have to be the same version.
This means: whatever the host OS actually is, it has to be exactly one of those directly supported by Nvidia. You have to use exactly the kernel versions (and configurations) that Nvidia built their proprietary, binary-only drivers for. Using a different kernel version, or recompiling it with a different configuration, might possibly work, but it's dangerous. Even with exactly the officially supported distros it's still gambling, depending on the phase of the moon or whether a sack of rice has fallen over somewhere. It often works, but when it doesn't anymore, you're most likely out of luck.
Now, when you put your workloads into a separate OS image, e.g. a chroot or container, you also have to have the same driver package version in that image. One of the primary reasons for using containers or chroots - isolating and decoupling applications from the host OS, so you don't need to fit them to it anymore, can do upgrades independently, and can even keep container images independent from the host OS - is immediately voided. Host and workload need to fit together exactly.
In short: if you want a CUDA workload, both the host OS and the workload image (container, chroot, etc.) need to be supported for it, and they both need to have the same driver version installed. Anything else is just Russian roulette.
Since somebody mentioned nvidia-docker: this breaks the security isolation that Docker was originally meant for (just look at the source, which is available on GitHub). It's nothing but a better chroot, and host and Docker image still need to have the same driver stack version installed.
Finally, I'd like to ask the question, what your actual use case is here.
Be warned: this all might be okay for playing games on a totally unimportant home computer, but it's really not suited for anything professional, where stability and security matter. If you're bound to data security/privacy regulations like the GDPR, keep far away from this - you just cannot comply with those regulations while using these proprietary drivers. Legally dangerous.
--mtx
Addendum: why do proprietary kernel drivers never work reliably ?
Short answer: the Linux kernel was never made for this; it just isn't supported.
Longer answer: kernel modules are NOT external programs executed in some isolated environment (as is done with userland programs) - they are, by definition, integral pieces of the kernel that just happen to be lazily loaded when needed (they are not even like shared libraries/DLLs). This means that they have to fit, on the binary level, exactly the actual build of the kernel you're running. When compiling the kernel, there are lots of config options that influence the actual internal binary layout in subtle ways: enabling or disabling some features can change the layout of certain data structures, and CPU-specific optimizations can change data structures, calling conventions, locking mechanisms, and much more.
And those things also change from one kernel version to another. We do lots of internal refactorings (e.g. in data structures, macros, and inline functions) after which the same piece of source code generates very different binary code.
Therefore, any kernel module always needs to be compiled for exactly one specific kernel image (with the same config options, against the same includes, with the same compiler flags), or you risk horrible failures that could lead to lockups, security flaws, data corruption, or even total data loss.
You have been warned.
To clarify, this is just the driver, not CUDA. That's another story.
In fact this turns out to be much easier than expected. I just didn't quite understand how far the nvidia-docker project had come or quite how it worked.
Basically, download and install the latest nvidia-docker from the nvidia-docker project:
https://github.com/NVIDIA/nvidia-docker/releases
Then create an Alpine Linux Dockerfile:
FROM alpine:3.5
LABEL com.nvidia.volumes.needed="nvidia_driver"
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64
CMD ["/bin/sh"]
Build it.
docker build -t alpine-nvidia .
Run
nvidia-docker run -ti --rm alpine-nvidia
Note the use of the nvidia-docker CLI instead of the normal docker CLI.
nvidia-docker calls the docker CLI with extra parameters.
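For reference, with nvidia-docker 1.x those extra parameters boil down to something roughly like the following (the device paths and the driver volume name are illustrative; the real values come from the nvidia-docker-plugin on your host):
# illustrative only: what the wrapper roughly adds on top of plain docker run
docker run -ti --rm \
  --device /dev/nvidiactl --device /dev/nvidia-uvm --device /dev/nvidia0 \
  --volume-driver nvidia-docker \
  --volume nvidia_driver_390.77:/usr/local/nvidia:ro \
  alpine-nvidia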
