Git push Heroku master takes forever - ruby-on-rails

I'm using a VM based on Ubuntu 12.04 (ruby 1.9.2p290 and rails 3.1.0) and my app works perfectly locally. I'm using Git, and when I try to git push heroku master it doesn't work. I get:
Counting objects: 435, done.
Compressing objects: 100% (215/215), done.
Writing objects: 100% (435/435), 73.35 KiB, done.
Total 435 (delta 171), reused 435 (delta 171)
And it never finishes, so nothing actually gets pushed to Heroku. The terminal just hangs forever.
Operating system information:
jobs
[1]+ Running git push heroku master &
ps -x
PID TTY STAT TIME COMMAND
1078 ? Ssl 0:00 gnome-session --session=ubuntu
1135 ? Sl 0:00 /usr/bin/VBoxClient --clipboard
1147 ? Sl 0:00 /usr/bin/VBoxClient --display
1154 ? Sl 0:00 /usr/bin/VBoxClient --seamless
1162 ? Sl 0:19 /usr/bin/VBoxClient --draganddrop
1167 ? Ss 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-s
1171 ? S 0:00 /usr/bin/dbus-launch --exit-with-session gnome-sessio
1172 ? Ss 0:01 //bin/dbus-daemon --fork --print-pid 5 --print-addres
1246 ? Sl 0:00 /usr/bin/gnome-keyring-daemon --start --components=se
1250 ? Sl 0:02 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
1329 ? S 0:00 /usr/lib/gvfs/gvfsd
1334 ? Sl 0:00 /usr/lib/gvfs//gvfs-fuse-daemon -f /home/ubuntu/.gvfs
1401 ? Sl 0:03 metacity
1417 ? S 0:00 /usr/lib/i386-linux-gnu/gconf/gconfd-2
1421 ? S<l 0:01 /usr/bin/pulseaudio --start --log-target=syslog
1426 ? Sl 0:01 unity-2d-panel
1427 ? Sl 0:07 unity-2d-shell
1430 ? S 0:00 /usr/lib/pulseaudio/pulse/gconf-helper
1447 ? Sl 0:01 /usr/lib/bamf/bamfdaemon
1450 ? Sl 0:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-h
1453 ? Sl 0:02 nautilus -n
1455 ? Sl 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authenticatio
1457 ? Sl 0:00 bluetooth-applet
1468 ? Sl 0:00 nm-applet
1482 ? S 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor
1500 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
1504 ? S 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
1518 ? S 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.9 /org/gtk/gvf
1521 ? Sl 0:01 /usr/lib/unity/unity-panel-service
1523 ? Sl 0:00 /usr/lib/dconf/dconf-service
1539 ? Sl 0:00 /usr/lib/indicator-datetime/indicator-datetime-servic
1541 ? Sl 0:00 /usr/lib/indicator-printers/indicator-printers-servic
1543 ? Sl 0:00 /usr/lib/indicator-messages/indicator-messages-servic
1545 ? Sl 0:00 /usr/lib/indicator-session/indicator-session-service
1547 ? Sl 0:00 /usr/lib/indicator-application/indicator-application-
1549 ? Sl 0:00 /usr/lib/indicator-sound/indicator-sound-service
1574 ? S 0:00 /usr/lib/geoclue/geoclue-master
1591 ? S 0:00 /usr/lib/ubuntu-geoip/ubuntu-geoip-provider
1597 ? Sl 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon
1603 ? S 0:00 /usr/lib/gvfs/gvfsd-metadata
1609 ? Sl 0:00 /usr/lib/indicator-appmenu/hud-service
1620 ? Sl 0:00 /usr/lib/unity-lens-applications/unity-applications-d
1622 ? Sl 0:00 /usr/lib/unity-lens-files/unity-files-daemon
1624 ? Sl 0:00 /usr/lib/unity-lens-music/unity-music-daemon
1626 ? Sl 0:00 /usr/bin/python /usr/lib/unity-lens-video/unity-lens-
1653 ? Sl 0:00 /usr/bin/zeitgeist-daemon
1661 ? Sl 0:00 telepathy-indicator
1668 ? Sl 0:00 /usr/lib/zeitgeist/zeitgeist-fts
1672 ? Sl 0:00 zeitgeist-datahub
1676 ? S 0:00 /bin/cat
1682 ? Sl 0:00 /usr/lib/telepathy/mission-control-5
1701 ? Sl 0:00 gnome-screensaver
1703 ? Sl 0:00 /usr/bin/python /usr/lib/unity-scope-video-remote/uni
1728 ? Sl 0:05 gnome-terminal
1734 ? S 0:00 gnome-pty-helper
1738 pts/2 Ss 0:00 bash
1796 ? Sl 0:00 update-notifier
1954 pts/2 S 0:00 git push heroku master
1955 pts/2 S 0:00 ssh git@heroku.com git-receive-pack 'polar-island-471
1959 pts/2 R+ 0:00 ps -x

There are many reasons why your push to Heroku can time out. In my experience, the most common one is site size, and by site size I mean three things: the size of your Git repo, the size of the site that actually gets pushed to Heroku (the Git repo minus ignored files), and the size of the gems you use. Heroku is not particularly robust when it comes to accommodating big sites (or long-running processes, for that matter), and if you get too big you can cause your push to hang or time out, sometimes only intermittently, which can be perplexing.
.git folder
I once had the site get inexplicably large and saw that over time the .git folder in the root of the project had grown to 600 MB. Thank goodness I noticed a warning in the Heroku deploy chatter that my Git repo was too large. Since that folder is managed by Git behind the scenes, I ended up starting a fresh Git repo and moving my code over to it, which shrank my site by 90%.
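If you are willing to discard the old history, a minimal sketch of that reset looks like this (the app name is illustrative, and the rm -rf step is destructive, so make a backup first):
du -sh .git                      # see how large the repo history has grown
rm -rf .git                      # throw away the bloated history (destructive!)
git init
git add .
git commit -m "fresh start"
git remote add heroku git@heroku.com:your-app-name.git
git push -f heroku master        # force push, since the history is brand new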
.slugignore
Another pitfall that caused my site to become large enough to time out was letting things like logs, my temp directory, and my Solr indexing directory be included in the project. After I excluded all of those folders in my .slugignore file, pushing became very fast. And yes, you could get the same basic effect using .gitignore, but there are some things I like to keep under Git yet ignore when pushing to Heroku; that's where .slugignore comes in handy.
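As a sketch, a .slugignore excluding the kinds of folders mentioned above could be created like this (the entries are illustrative; adjust them to your project):
cat > .slugignore <<'EOF'
log
tmp
solr
spec
EOF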
gems
By completing the steps above I got my site down to a reasonable size, but I still had occasional timeouts. Then I realized that gems contribute to your site size too, so I removed a handful of unused gems from my Gemfile. That cut my slug compile time from 900+ seconds to 250 seconds, and my overall deployment time from 15+ minutes with frequent timeouts to under 10 minutes. What a relief not to have to wait so long for each deploy and risk timing out half the time.
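If you want to see which installed gems are the heaviest before pruning, a rough one-liner like this can help (a sketch; it assumes Bundler is on the PATH and a GNU sort that supports -h):
du -sh $(bundle show --paths) | sort -hr | head -n 10   # the ten largest gems first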
Depending on your particular setup, any or all of these factors can really hurt you. In my case I was doing everything wrong. However, even if you are not timing out, you may still want to prune your site as much as possible to cut down your deployment time.
how to make Heroku not suck
http://www.stormconsultancy.co.uk/blog/development/6-ways-to-get-more-bang-for-your-heroku-buck-while-making-your-rails-site-super-snappy/

In my case, the reason was that my connection was very slow: I was tethering through my smartphone's access point.
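If you suspect the connection rather than the repo, a quick way to check is to run the push with Git's tracing enabled and to test the SSH link to Heroku directly (GIT_TRACE_PACKET needs a reasonably recent Git):
GIT_TRACE=1 GIT_TRACE_PACKET=1 git push heroku master   # show transport activity instead of hanging silently
ssh -v git@heroku.com                                   # verbose test of the underlying SSH connection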

Related

Why does \b exclude a process in grep?

I can grep my processes:
ladislav#cool:~$ ps aux | grep "node example.js"
ladislav 18231 0.1 0.3 11116444 50812 ? Ssl 22:46 0:00 node example.js server m 0 found
ladislav 18257 0.0 0.0 11600 712 pts/0 S+ 22:49 0:00 grep --color=auto node example.js
But I want to get rid of the second line. I tried the following command, which works:
ladislav#cool:~$ ps aux | grep "\bnode example.js"
ladislav 18231 0.0 0.3 11116444 50812 ? Ssl 22:46 0:00 node example.js server m 0 found
Could someone explain why or how this works? I don't understand it; I would have expected it not to work.
Better to do this:
pgrep -fl "node example.js"
It's designed especially for this task.
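As for why the \b trick works: the grep process's own entry in ps contains the literal text \bnode, where "node" is preceded by the word character "b", so there is no word boundary and the pattern cannot match it; in the real process, "node" follows a space, which is a boundary. A quick demonstration (relying on GNU grep's \b extension):
echo 'grep --color=auto \bnode example.js' | grep '\bnode'   # no match: "node" is preceded by "b"
echo 'node example.js server m 0' | grep '\bnode'            # matches: the space before "node" is a boundary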

Docker container increases ram

I have launched several docker containers, and using docker stats I have verified that one of them keeps increasing its RAM consumption from the moment it starts until it is restarted.
My question is whether there is any way to see where that consumption comes from inside the docker container. Is there a way to check consumption from within the container, something in the style of docker stats but for the container's interior?
Thanks for your help.
Not sure if it's what you are asking for, but here's an example:
(Before you start):
Run a test container docker run --rm -it ubuntu
Install stress by typing apt-get update and apt-get install stress
Run stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 (it will start consuming memory)
1. with top
If you go to a new terminal you can type docker container exec -it <your container name> top and you will get something like the following:
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang top
top - 12:46:04 up 22 min, 0 users, load average: 1.48, 1.55, 1.12
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.8 us, 0.8 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102828 total, 150212 free, 5396604 used, 556012 buff/cache
KiB Swap: 1942896 total, 1937508 free, 5388 used. 455368 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
285 root 20 0 4209376 4.007g 212 R 100.0 68.8 6:56.90 stress
1 root 20 0 18500 3148 2916 S 0.0 0.1 0:00.09 bash
274 root 20 0 36596 3072 2640 R 0.0 0.1 0:00.21 top
284 root 20 0 8240 1192 1116 S 0.0 0.0 0:00.00 stress
2. with ps aux
Again, from a new terminal you type docker container exec -it <your container name> ps aux
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18500 3148 pts/0 Ss 12:25 0:00 /bin/bash
root 284 0.0 0.0 8240 1192 pts/0 S+ 12:39 0:00 stress --vm-byt
root 285 99.8 68.8 4209376 4201300 pts/0 R+ 12:39 8:53 stress --vm-byt
root 286 0.0 0.0 34400 2904 pts/1 Rs+ 12:48 0:00 ps aux
My source for this stress thing is from this question: How to fill 90% of the free memory?
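To watch growth over time rather than taking a single snapshot, a small loop like this can help (a sketch; it assumes a procps-style ps inside the image, and reuses the container name from the example above):
while true; do
  docker container exec dreamy_jang ps aux --sort=-rss | head -n 5   # top memory consumers by resident size
  sleep 5
done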

docker on upstart on scaleway

I have a docker container based on Ubuntu 12.04 and wish to start it on Scaleway. This InstantApp runs on Ubuntu 15.04 with systemd, but for my container I need upstart. I turned on upstart following this recommendation:
Install the upstart-sysv package, which will remove ubuntu-standard and systemd-sysv (but should not remove anything else -- if it does, yell!), and run sudo update-initramfs -u. After that, grub's "Advanced options" menu will have a corresponding "Ubuntu, with Linux ... (systemd)" entry where you can do a one-time boot with systemd.
Now my server running with upstart:
# ps aux|grep upstart
root 1447 0.0 0.0 2632 1744 ? S 13:44 0:00 upstart-udev-bridge --daemon
root 1598 0.0 0.0 2044 176 ? S 13:44 0:00 upstart-file-bridge --daemon
root 2571 0.0 0.0 2032 1128 ? S 13:44 0:00 upstart-socket-bridge --daemon
root 32408 0.0 0.0 3156 1472 pts/4 S+ 14:27 0:00 grep --color=auto upstart
but docker is not running:
# service docker status
* Docker is managed via upstart, try using service docker status
# service docker start
* Docker is managed via upstart, try using service docker start
How can I start docker as a daemon?
See the answer to this Ask Ubuntu question - it's a workaround to get things running again until the kernel bug is addressed: https://askubuntu.com/questions/683462/docker-is-managed-via-upstart-try-using-service-docker
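In the meantime, if the init wrappers keep punting to each other, one blunt generic workaround (not necessarily the fix in the linked answer) is to launch the daemon directly; the subcommand depends on your Docker version:
sudo sh -c 'nohup docker daemon >/var/log/docker.log 2>&1 &'   # older releases use "docker -d" instead of "docker daemon"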

passenger + nginx fails when I disconnect from terminal

I use Ubuntu 12.04 LTS, rvm, and Passenger with nginx installed by Passenger.
I connect to my server with PuTTY, start nginx via init.d/nginx, and my Rails application works well.
But when I disconnect from the terminal, I see standard application errors ("Something went wrong", etc.).
nginx error log output:
<internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- rubygems (LoadError)
from <internal:lib/rubygems/custom_require>:29:in `require'
from <internal:gem_prelude>:167:in `load_full_rubygems_library'
from <internal:gem_prelude>:217:in `try_activate'
from <internal:lib/rubygems/custom_require>:32:in `rescue in require'
from <internal:lib/rubygems/custom_require>:29:in `require'
from /var/lib/passenger-standalone/3.0.18-x86_64-ruby1.9.3-linux-gcc4.6.3-1002/support/helper-scripts/passenger-spawn-server:75:in `<main>'
*** Passenger ERROR (ext/common/ApplicationPool/../SpawnManager.h:220):
Could not start the spawn server: /home/torteg/.rvm/wrappers/ruby-1.9.2-p320/ruby: No such file or directory (2)
*** Passenger ERROR (ext/common/ApplicationPool/../SpawnManager.h:220):
Could not start the spawn server: /home/torteg/.rvm/wrappers/ruby-1.9.2-p320/ruby: No such file or directory (2)
ps aux output:
root 5066 0.0 0.0 220928 1936 ? Ssl 15:46 0:00 PassengerWatchdog
root 5069 0.0 0.0 1872956 2340 ? Sl 15:46 0:00 PassengerHelperAgent
root 5071 0.5 0.2 114348 10172 ? Sl 15:46 0:00 Passenger spawn server
nobody 5074 0.0 0.1 169324 4688 ? Sl 15:46 0:00 PassengerLoggingAgent
root 5105 0.0 0.0 39472 1028 ? Ss 15:46 0:00 nginx: master process /opt/nginx/sbin/nginx
torteg 5106 0.0 0.0 39892 2276 ? S 15:46 0:00 nginx: worker process
torteg 5116 13.2 1.5 225720 62432 ? Sl 15:46 0:03 Passenger ApplicationSpawner: /webapps/ngt-storage
torteg 5132 2.4 1.5 230940 64520 ? Sl 15:46 0:00 Rack: /webapps/ngt-storage
root 5141 0.1 0.1 160656 7272 ? Ss 15:47 0:00 sshd: torteg [priv]
torteg 5145 0.0 0.0 164168 1820 ? S 15:47 0:00 sshd: torteg [priv]
torteg 5291 0.0 0.0 160656 2656 ? S 15:47 0:00 sshd: torteg@pts/3
So when you ssh into your production server somewhere else in the world (or in the cloud) and you visit mydomain.com, it works, but as soon as you log out of that ssh connection, nginx and Passenger stop working? How are these two independent events tied to each other?
What service are you using to host this app?
Possible answer (I'll clean this up when we get answers from you):
I see you are using rvm too... perhaps the rvm path isn't set for your deploy user (just thinking out loud).
I created a new user, deploy, with the default bash shell and installed rvm for this user. Then I set user and passenger_user to deploy in nginx.conf and cleaned the precompiled assets. Works well!
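For reference, a hypothetical excerpt of the relevant nginx.conf lines (the directive names follow the answer above; the rvm wrapper path is illustrative):
user deploy;
passenger_ruby /home/deploy/.rvm/wrappers/ruby-1.9.2-p320/ruby;
passenger_user deploy;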

Passenger processes restart even though PassengerPoolIdleTime is 0

I have set PassengerPoolIdleTime to 0, with the expectation that this means I can "warm" up a bunch of passenger processes on my server, and the next time I have a burst of traffic (even if it is days later), they will all be warmed up and ready to accept requests.
What I'm seeing instead is that every morning when I get up, passenger-status shows only a handful of processes and they have all only been up since midnight. The previous day I'd warmed up a bunch of processes and the last time I looked at passenger-status (before midnight) there were 50.
Here's the entire Passenger-related snippet from my httpd.conf (I'm on CentOS):
LoadModule passenger_module /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11
PassengerRuby /usr/local/bin/ruby
PassengerMaxPoolSize 60
PassengerPoolIdleTime 0
I've checked the crontabs for root and apache, to see if there might be something triggering an apache restart, but I don't see it.
Here's a snippet of passenger-status, about 11 hours and 46 minutes after midnight:
----------- General information -----------
max = 60
count = 3
active = 0
inactive = 3
Waiting on global queue: 0
----------- Domains -----------
/var/www/myapp/current:
PID: 20704 Sessions: 0 Processed: 360 Uptime: 11h 44m 16s
PID: 20706 Sessions: 0 Processed: 4249 Uptime: 11h 44m 9s
PID: 20708 Sessions: 0 Processed: 14189 Uptime: 11h 44m 9s
And here's what I see if I do a ps aux | grep apache:
apache 13297 0.0 0.0 546652 5312 ? Sl 14:28 0:00 /usr/sbin/httpd.worker
apache 13332 0.0 0.0 546652 5336 ? Sl 14:28 0:00 /usr/sbin/httpd.worker
apache 13334 0.0 0.0 546652 5328 ? Sl 14:28 0:00 /usr/sbin/httpd.worker
root 16841 0.0 0.0 6004 628 pts/0 S+ 15:48 0:00 grep apache
root 20478 0.0 0.0 88724 3640 ? Sl 04:02 0:01 /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/ApplicationPoolServerExecutable 0 /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/bin/passenger-spawn-server /usr/local/bin/ruby /tmp/passenger.30916
apache 20704 0.0 1.7 251080 135164 ? S 04:02 0:06 Rails: /var/www/apps/myapp/current
apache 20706 0.2 1.7 255188 137704 ? S 04:02 1:52 Rails: /var/www/apps/myapp/current
apache 20708 0.9 1.7 255180 139332 ? S 04:02 6:26 Rails: /var/www/apps/myapp/current
The server is on UTC, so 04:02 corresponds to 12:02am my time (EDT).
Assuming that logrotate is the culprit, I'd suggest using the copytruncate feature instead of reloading in postrotate. copytruncate isn't atomic, meaning you could lose a couple of seconds' worth of logs, and you'll also briefly double the disk space consumed by that log file. Here are some details.
/var/log/apache2/*.log {
    weekly
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 640 root adm
    sharedscripts
    copytruncate
    #postrotate
    #    /etc/init.d/apache2 reload > /dev/null
    #endscript
}
You could also send your logs to a program which writes to a file based on the date, eliminating logrotate entirely...
CustomLog "|/usr/local/bin/my_log_script" combined
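A concrete option is Apache's bundled rotatelogs program; a minimal sketch that starts a new access log every day (the binary path varies by install):
CustomLog "|/usr/sbin/rotatelogs /var/log/httpd/access.%Y-%m-%d.log 86400" combined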
I discovered what was happening. Here is my logrotate conf file for httpd:
/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
It's the postrotate script that is doing it. Reloading apache causes the passenger processes to die off.
Anyone have any good suggestions for how to do this without having to reload apache? Or a way to reload apache without killing off the passenger processes (if that's possible)?
The easiest way to logrotate without restarting/reloading a service is to use the 'copytruncate' option. That way logrotate copies the contents of the log file to another file and then empties the current log file, so the service continues logging to the same file and logrotate does its thing. For example:
/var/log/httpd/*log {
    copytruncate
    missingok
    notifempty
    sharedscripts
}
