Problem:
I am trying to get Sphinx running again after a server reboot. There seems to be no sphinx.conf file when I try to start it:
>searchd
Sphinx 2.0.4-release (r3135)
Copyright (c) 2001-2012, Andrew Aksyonoff
Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com)
FATAL: no readable config file (looked in /etc/sphinxsearch/sphinx.conf, ./sphinx.conf).
I have run:
rake thinking_sphinx:configure
rake thinking_sphinx:index
rake thinking_sphinx:start
The problem is that for some reason no /etc/sphinxsearch/sphinx.conf file is being created... I am new to thinking_sphinx and this might not be the only problem (with the site), but it doesn't seem to be set up fully. For output and more information, read below:
Background info:
I am working on a project I didn't set up initially. We rebooted the server to see some of the changes we made in a constants file, but after the reboot the project no longer displays when you navigate to the site. When you put in the straight IP address, it just says "Welcome to Nginx".
The port is open and working through our hosting server, so I was told I have to restart some services. One of the issues I came upon was with thinking_sphinx. The pages I referenced were the rake tasks for Sphinx, as well as common configuration issues for Sphinx.
I set up the sphinx.yml development paths (we aren't using production). Then I ran
>rake thinking_sphinx:index
which seems to have worked even though it output some warnings:
Generating Configuration to /home/potato/streetpotato/config/development.sphinx.conf
(0.2ms) SELECT @@global.sql_mode, @@session.sql_mode;
Sphinx 2.0.4-release (r3135)
Copyright (c) 2001-2012, Andrew Aksyonoff
Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/home/potato/streetpotato/config/development.sphinx.conf'...
indexing index 'bar_core'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 14080 kb
collected 249 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 249 docs, 32394 bytes
total 0.254 sec, 127298 bytes/sec, 978.49 docs/sec
indexing index 'bar_delta'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 14080 kb
collected 0 docs, 0.0 MB
total 0 docs, 0 bytes
total 0.003 sec, 0 bytes/sec, 0.00 docs/sec
skipping non-plain index 'bar'...
indexing index 'synonym_core'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 13568 kb
collected 3 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 3 docs, 103 bytes
total 0.003 sec, 30356 bytes/sec, 884.17 docs/sec
indexing index 'synonym_delta'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 13568 kb
collected 0 docs, 0.0 MB
total 0 docs, 0 bytes
total 0.002 sec, 0 bytes/sec, 0.00 docs/sec
skipping non-plain index 'synonym'...
indexing index 'user_core'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 13568 kb
collected 100 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 100 docs, 3146 bytes
total 0.013 sec, 239348 bytes/sec, 7608.03 docs/sec
skipping non-plain index 'user'...
total 11 reads, 0.000 sec, 3.8 kb/call avg, 0.0 msec/call avg
total 37 writes, 0.000 sec, 2.5 kb/call avg, 0.0 msec/call avg
Then I ran
>rake thinking_sphinx:configure
Generating Configuration to /home/potato/streetpotato/config/development.sphinx.conf
(0.2ms) SELECT @@global.sql_mode, @@session.sql_mode;
Lastly running:
>rake thinking_sphinx:start
Started successfully (pid 29623).
Now even though my log says:
[Fri Nov 16 19:34:29.820 2012] [29623] accepting connections
There is still no sphinx.conf file being generated, and when I try to use the searchd command it still gives me the error:
>searchd --stop
Sphinx 2.0.4-release (r3135)
Copyright (c) 2001-2012, Andrew Aksyonoff
Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com)
FATAL: no readable config file (looked in /etc/sphinxsearch/sphinx.conf, ./sphinx.conf).
I am at a loss. I know this is super long, but only because I am so lost and trying to give as much information as possible. I got further than I did yesterday with this, but it still doesn't seem to be fully working. I might have to do more setup with unicorn or thin as well. I'm just trying to figure out how to get the site back up and running again... If anyone has run into similar issues with their site going down after a reboot and got it back up (specifically a Rails project on Nginx and unicorn or thin, using Sphinx), any insight would be appreciated.
Thanks,
Alan
Calm down!! :-)
Firstly, you don't need a /etc/sphinxsearch/sphinx.conf file; that is just the default file that searchd tries to use when you don't specify any configuration file.
As your log output shows, your Rails application uses the /home/potato/streetpotato/config/development.sphinx.conf file when it starts the searchd process.
Run ps -fe | grep searchd on your dev machine; you should see something like this as the output:
501 14128 1 0 0:00.00 ttys004 0:00.00 searchd --pidfile --config /home/potato/streetpotato/config/development.sphinx.conf
501 14130 13546 0 0:00.00 ttys004 0:00.01 grep searchd
So the Rails app calls searchd with the --config /home/potato/streetpotato/config/development.sphinx.conf argument, to point it at a different conf file.
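If you want to drive searchd by hand, you can pass the same flag yourself; for example (using the config path from your log output):
searchd --config /home/potato/streetpotato/config/development.sphinx.conf
searchd --config /home/potato/streetpotato/config/development.sphinx.conf --stop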
From your logs, it is clear that Thinking Sphinx is running fine. You can confirm it further by logging into the rails console and running a search method on one of the models that have thinking_sphinx indexes defined on them.
E.g., if your app has an Article model as shown in the above link, the following command will show all articles containing "National Parks":
$ rails console
> Article.search( "National Parks" )
=> [#<Article id: 15,... >, #<Article id: 22,...>,...]
The real problem is the application not showing up after restarting the server. That has nothing to do with Thinking Sphinx, which is running fine.
Try rolling back all the changes made in the constants file that you mention above, and make sure the application is working fine. Then start making the changes one by one and isolate the one change that breaks your application.
So yeah, this is a hole in ThinkingSphinx (IMHO) -- you can start the searchd server using the various rake tasks (which generate the config as needed)... but this doesn't work in production.
On a project I worked on last year (running on a Linux server), we created an /etc/init.d script to start searchd -- it takes options, including a path to the configuration file. We did our deploys with Capistrano, and put generated code in app/shared -- a directory outside of the source tree. I believe there are some predefined Capistrano tasks that will rebuild the Rails-specific config files when models change or otherwise affect what Sphinx does (same as the rake task you mention).
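A minimal sketch of such an init script (the config path is an assumption; point it at wherever your deploy writes the generated Sphinx config, and assume searchd is on the PATH):
#!/bin/sh
# /etc/init.d/searchd -- minimal sketch, no LSB headers
CONFIG=/var/www/app/shared/production.sphinx.conf # assumed location
case "$1" in
  start) searchd --config "$CONFIG" ;;
  stop)  searchd --config "$CONFIG" --stop ;;
  restart)
    searchd --config "$CONFIG" --stop
    searchd --config "$CONFIG"
    ;;
  *) echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac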
This was one of those cases for us where we had been putting off site search for a long time, and one of our developers got it "all set up" in an afternoon. Getting it deployed took a lot more work.
(Just saw the answer from @prakash-murthy -- he provides some details of how to specify the config path when you initialize searchd. But the trick is to have it start when the system starts, pointing to the config that ThinkingSphinx generates.)
OK, so after a day and a half I finally set it all up and got it running (it was more than just Sphinx). I also had to get nginx and unicorn up and running in the background, since we didn't have scripts set up to restart them when the server was rebooted...
When rebooting the server you have to restart some services before the app will be accessible:
1) thinking_sphinx
reference sites
http://pat.github.com/ts/en/rake_tasks.html
http://www.claytonlz.com/2010/09/thinkingsphinx-conf-problems/
a)create/modify app/config/sphinx.yml
development:
  morphology: stem_en
  port: 9312
  bin_path: "/usr/bin" # set up the path to binary for searchd
  searchd_binary_name: searchd
  indexer_binary_name: indexer
  #mem_limit: 128M
test:
  morphology: stem_en
  port: 9312
  mem_limit: 128M
production:
  morphology: stem_en
  port: 9312
  mem_limit: 512M
  # the searchd ip, in case it's not on localhost
  # address: 10.10.0.0
  # this is by default included in db/sphinx
  # searchd_file_path: "/path/to/shared/folder/sphinx"
b)rake thinking_sphinx:index
c)rake thinking_sphinx:configure # creates config/development.sphinx.conf which helps define sphinx's indexing
d)# then you have to start sphinx, there are 2 ways to do this
rake thinking_sphinx:start
rake thinking_sphinx:stop
OR
searchd
searchd --stop
# only the rake commands worked for me; when I tried to run searchd directly
# I got an error FATAL: no readable config file (looked in /etc/sphinxsearch/sphinx.conf, ./sphinx.conf).
# for some reason we don't have a sphinx.conf file, but the rake commands work without it
# (see the note after step e for pointing searchd at the generated config)
e)# once you start thinking_sphinx check log/searchd.log file for the line
[Fri Nov 16 19:34:29.820 2012] [29623] accepting connections
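Note on step d: bare searchd only looks in /etc/sphinxsearch/sphinx.conf and ./sphinx.conf, which is why it failed for me. In theory you should also be able to point it at the config generated in step c (untested on my setup; run from the app root):
searchd --config config/development.sphinx.conf
searchd --config config/development.sphinx.conf --stop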
2) nginx
reference site:
http://wiki.nginx.org/CommandLine
a) check that nginx is up and running
i) start server
# to check where nginx resides type in this into server console
which nginx
# whatever path it gives you is how you start the server; this is my path
/usr/sbin/nginx
ii) stop server
/usr/sbin/nginx -s stop # use the path given by which command
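Optionally (not in my original steps), nginx can check a config file before you start or stop it; the -t flag just tests the configuration:
/usr/sbin/nginx -t # use the path given by the which command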
3) unicorn (starting app server)
reference site:
http://codelevy.com/2010/02/09/getting-started-with-unicorn.html
a) test if unicorn will run after previous changes
unicorn_rails -p 3000
# the site should now be up and running, check that it is
# console should now log the different actions you do on the site
b) create unicorn.rb in config folder (if none is there)
# only start this step if the step above got the site running
# close the console or exit the process you started above
# contents of unicorn.rb
worker_processes 2 # (starts 2 child processes, not completely necessary)
preload_app true
timeout 30
listen 3000
after_fork do |server, worker|
ActiveRecord::Base.establish_connection
end
c) run unicorn in the background
# make sure you exited the process above before running this
unicorn_rails -c config/unicorn.rb -D
# this was giving me an error that it said was logged by stderr
# I got the command to run by adding an exec redirection to the front (see the link below)
http://stackoverflow.com/questions/2325152/check-for-stdout-or-stderr
exec 2> /dev/null unicorn_rails -c config/unicorn.rb -D
d) (optional) check stats from starting unicorn
i) pgrep -lf unicorn_rails
#sample output
5374 unicorn_rails master -c config/unicorn.rb -D
5388 unicorn_rails worker[0] -c config/unicorn.rb -D # not needed currently
5391 unicorn_rails worker[1] -c config/unicorn.rb -D # not needed currently
ii) cat tmp/pids/unicorn.pid # from inside the streetpotato folder
#sample output
5374
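One step I didn't list: stopping the daemonized unicorn later. A minimal sketch, assuming the pid file path shown in step d:
kill -QUIT $(cat tmp/pids/unicorn.pid) # QUIT tells unicorn to shut down gracefully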
Related
We have recently updated our ruby/elasticbeanstalk platform to AWS Linux 2 / Ruby (Ruby 2.7 running on 64bit Amazon Linux 2/3.2.0)
Part of our Ruby deployment is delayed_job (daemon gem).
After many attempts at running a bash script from the .platform/hooks/postdeploy/ folder, I have officially declared I am stuck. Here is the error from eb-engine.log:
2020/12/08 04:18:44.162454 [INFO] Running platform hook: .platform/hooks/postdeploy/restart_delayed_job.sh
2020/12/08 04:18:44.191301 [ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/restart_delayed_job.sh failed with error exit status 127
2020/12/08 04:18:44.191327 [INFO] Executing cleanup logic
2020/12/08 04:18:44.191448 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1607401124,"severity":"ERROR"}]}]}
Here is one of many scripts I have attempted:
#!/bin/bash
#Using similar syntax as the appdeploy pre hooks that is managed by AWS
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>delayed_job_err.out 2>&1
# Loading environment data
# source /etc/profile.d/sh.local #created from other .ebextension file
EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppUser)
EB_APP_CURRENT_DIR=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppDeployDir)
#EB_APP_PIDS_DIR=/home/webapp/pids
/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /tmp/envvars
source /tmp/envvars
cd /var/app
cd $EB_APP_CURRENT_DIR
su -s /bin/bash -c "bin/delayed_job restart" $EB_APP_USER
Here is the delayed_job file:
#!/usr/bin/env ruby
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize
As you can see, I'm doing my best to load up the env variables. delayed_job seems to run just fine as root from within the EB Linux 2 host with the env vars loaded. Here is a listing of the postdeploy hooks folder, followed by the captured error output:
total 12
-rwxrwxr-x 1 webapp webapp 179 Dec 8 04:15 001_load_envs.sh
-rw-r--r-- 1 root root 251 Dec 8 04:46 delayed_job_err.out
-rwxrwxr-x 1 webapp webapp 1144 Dec 8 04:15 restart_delayed_job.sh
[root@ip-172-16-100-178 postdeploy]# cat delayed_job_err.out
/var/app/current/vendor/bundle/ruby/2.7.0/gems/json-1.8.6/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
delayed_job: warning: no instances running. Starting...
delayed_job: process with pid 5292 started.
Any help would be appreciated.
I am also using Elastic Beanstalk on Amazon Linux 2.
I am using Resque, which needs a restart post-deploy. The following is my postdeploy hook, which restarts the Resque workers:
.platform/hooks/postdeploy/0020_restart_resque_workers.sh
#!/usr/bin/env bash
. /opt/elasticbeanstalk/deployment/env
cd /var/app/current/
su -c "RAILS_ENV=production bundle exec rake resque:restart_workers" webapp ||
echo "resque workers restarted."
true
Notice the environment variable setup: it simply sources /opt/elasticbeanstalk/deployment/env, which gives you the environment.
Hopefully you can use the above script by simply replacing the command, restarting delayed_job instead of the Resque workers; a sketch of that follows.
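For example, a minimal adaptation for delayed_job might look like this (untested; it assumes the bin/delayed_job binstub from the question and the standard /var/app/current deploy directory):
#!/usr/bin/env bash
. /opt/elasticbeanstalk/deployment/env
cd /var/app/current/
su -c "RAILS_ENV=production bin/delayed_job restart" webapp ||
  echo "delayed_job restart failed."
true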
I have one Rails app running with Phusion Passenger as a standalone server, started with the command bundle exec passenger start --port 8000 --user ubuntu --daemonize.
The issue is that Passenger launches too many processes for my workload and consumes quite a lot of memory. The server is used for my private work, so there are almost no requests. How can I control the number of processes with Phusion Passenger, and which configuration options minimize memory consumption?
Edit
With --max-pool-size 1, I don't see a dramatic improvement; I still have multiple Ruby app processes and preloaders.
Edit 2 (working with nginx)
From https://www.phusionpassenger.com/documentation/Users%20guide%20Nginx%203.0.html I could learn more about the options that I can add to nginx.conf file.
passenger_max_pool_size 1;
passenger_pool_idle_time 1;
passenger-status shows much less memory usage (only one pool).
ubuntu@ip-172-31-63-19 public> sudo passenger-status
Version : 5.0.21
Date : 2015-11-06 05:50:24 +0000
Instance: aSCyt3IW (nginx/1.8.0 Phusion_Passenger/5.0.21)
----------- General information -----------
Max pool size : 1
App groups : 1
Processes : 1
Requests in top-level queue : 0
----------- Application groups -----------
/home/ubuntu/webapp/rails/passenger-ruby-rails-demo/public (development):
App root: /home/ubuntu/webapp/rails/passenger-ruby-rails-demo
Requests in queue: 0
* PID: 3099 Sessions: 0 Processed: 49 Uptime: 33s
CPU: 1% Memory : 69M Last used: 11s ago
Try this:
passenger start --max-pool-size <NUMBER>
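For the standalone server, something like this may work (hedged: this assumes standalone accepts the same pool options as the nginx config in Edit 2; check passenger start --help for your version):
passenger start --port 8000 --user ubuntu --daemonize --max-pool-size 1 --pool-idle-time 1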
I am attempting to run Celery as a Windows service using Supervisord. I followed the configuration laid out on the Celery site and [here][1]. I have set up a virtual environment to run supervisord through Cygwin. I have highlighted the lines I think are most important (with **). It appears supervisord and RabbitMQ are working; the problem is with Celery.
I setup the service with the commands:
$ cygrunsrv --install supervisord --path /usr/bin/python --args "/usr/bin/supervisord -n -c /usr/etc/supervisord.conf"
$ supervisord
UPDATED: I now have the following in my supervisord.log file:
2014-08-07 12:46:40,676 INFO exited: celery (exit status 1; not expected)
2014-08-07 12:47:07,187 INFO Increased RLIMIT_NOFILE limit to 1024
2014-08-07 12:47:07,238 INFO RPC interface 'supervisor' initialized
2014-08-07 12:47:07,251 INFO daemonizing the supervisord process
2014-08-07 12:47:07,253 INFO supervisord started with pid 7508
2014-08-07 12:47:08,272 INFO spawned: 'celery' with pid 8056
**2014-08-07 12:47:08,833 INFO success: celery entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)**
The config file is:
[inet_http_server] ; inet (TCP) server disabled by default
port=127.0.0.1:8072 ; (ip_address:port specifier, *:port for all iface)
username = user
password = 123
[supervisord]
logfile= /home/HBA/venv/logFiles/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
;user=HBA ; (default is current user, required if root)
childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=http://127.0.0.1:8072 ; use an http:// url to specify an inet socket
[program:celery]
command= celery worker -A runLogProject --loglevel=INFO ; the program (relative uses PATH, can take args)
directory= /home/HBA/venv/runLogProject
environment=PATH="/home/HBA/venv/;/home/HBA/venv/Scripts/"
numprocs=1
stdout_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stdout log path, NONE for none; default AUTO
stderr_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stderr log path, NONE for none; default AUTO
autostart=true ; start at supervisord start (default: true)
autorestart=true ; whether/when to restart (default: unexpected)
startsecs=0
stopwaitsecs=1000
killasgroup=true
My celery log file gives me:
**[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-4' pid:12284 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-3' pid:4432 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-2' pid:9120 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-1' pid:6280 exited with 'signal -1'**
C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2014-08-07 19:47:08,822: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
**[2014-08-07 19:47:08,944: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2014-08-07 19:47:08,954: INFO/MainProcess] mingle: searching for neighbors
[2014-08-07 19:47:09,963: INFO/MainProcess] mingle: all alone**
C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] celery#CORONADO ready.
I solved my issue using the following command: /home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO
My biggest issue was an unfamiliarity with virtual environments. I needed to make sure the files were in the correct folders within the venv.
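In supervisord terms, that presumably means giving the [program:celery] section an absolute path to the venv's celery binary instead of relying on PATH; a sketch of the changed lines (untested):
[program:celery]
command=/home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO ; absolute path into the venv
directory=/home/HBA/venv/runLogProject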
OK, so I have a Rails application on our server and I have Thinking Sphinx installed. Every day at 12:30am I run a cron job that truncates the account and contact tables and reinserts them... here are my crons:
30 0 * * * /bin/bash -l -c 'cd /var/www/active && script/rails runner -e production '\''Account.db_insert'\$
34 0 * * * /bin/bash -l -c 'cd /var/www/active && RAILS_ENV=production bundle exec rake ts:index --silent'
34 0 * * * /bin/bash -l -c 'cd /var/www/active && RAILS_ENV=production bundle exec rake ts:restart --silent'
As you can see, the crons execute Account.db_insert, which reinserts contacts and accounts, but every day I get an email with an error... here are the two that I got recently:
Yesterday's error:
using config file '/var/www/active/config/production.sphinx.conf'...
indexing index 'contact_core'...
FATAL: failed to lock /var/www/active/db/sphinx/production/contact_core.spl: Resource temporarily unavailable, will not index. Try --rotate option.
Today's error:
using config file '/var/www/active/config/production.sphinx.conf'...
indexing index 'contact_core'...
collected 403 docs, 0.0 MB
sorted 0.1 Mhits, 100.0% done
ERROR: index 'contact_core': rename /var/www/active/db/sphinx/production/contact_core.tmp.spl to /var/www/active/db/sphinx/production/contact_core.new.spl failed: No such file or directory.
total 403 docs, 25092 bytes
total 0.069 sec, 361571 bytes/sec, 5807.16 docs/sec
distributed index 'contact' can not be directly indexed; skipping.
total 5 reads, 0.001 sec, 133.7 kb/call avg, 0.2 msec/call avg
total 10 writes, 0.012 sec, 177.8 kb/call avg, 1.2 msec/call avg
Any idea what I am doing wrong?
UPDATE
After the suggestion to run
/usr/local/bin/indexer --config /etc/sphinxsearch/sphinx.conf --all --rotate
I got this error
using config file '/etc/sphinxsearch/sphinx.conf'...
ERROR: unknown key name 'client_timeout' in /etc/sphinxsearch/sphinx.conf line 591 col 16.
FATAL: failed to parse config file '/etc/sphinxsearch/sphinx.conf'.
As the error message indicates, you might want to run the indexer with the --rotate switch.
I, too, use Thinking Sphinx but found it easier to just use the Sphinx indexer directly. I have Thinking Sphinx generate my config for me, then I can just call out to searchd and indexer directly, in my cron:
*/30 * * * * /usr/local/bin/indexer --config /etc/sphinxsearch/sphinx.conf --all --rotate
I have found this to be more helpful than going through the Rake tasks. In addition, it's much faster, as the whole Rails stack doesn't have to be loaded.
I had a problem this morning deploying an application with capistrano.
# git push
# cap deploy:setup
Something strange happened, and then I wasn't able to ssh to my host anymore.
The technical staff says (in Italian): "the commands you have run overwrote the shell binaries, making the system no longer usable". Two options: either I am stupid, or they are wrong.
Here's the shell output from cap deploy:setup, and then the error on ssh. Once the system (a VPS) was rebooted, I wasn't able to ssh in anymore.
Any ideas?
mattia@desktop:/var/www/rails/my_application$ git push
Counting objects: 239, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (191/191), done.
Writing objects: 100% (202/202), 379.77 KiB, done.
Total 202 (delta 44), reused 0 (delta 0)
To ssh://mattia@my_application.it/~/git/my_application.git
96c1f19..3cc9e1c master -> master
mattia@desktop:/var/www/rails/my_application$ cap deploy:setup
* executing `deploy:setup'
* executing "mkdir -p /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids && chmod g+w /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids"
servers: ["beta.my_application.it"]
[beta.my_application.it] executing command
** [out :: beta.my_application.it]
** [out :: beta.my_application.it] malloc: ../bash/parse.y:2823: assertion botched
** [out :: beta.my_application.it] nunits < 30
** [out :: beta.my_application.it] Aborting...
command finished
failed: "env PATH=/usr/local/bin:/usr/bin:/bin GEM_PATH=/var/lib/gems/1.9.1 sh -c 'mkdir -p /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids && chmod g+w /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids'" on beta.my_application.it
mattia@desktop:/var/www/rails/my_application$ ssh beta.my_application.it
Linux my_application 2.6.18-194.26.1.el5.028stab079.2ent #1 SMP Fri Dec 17 19:44:51 MSK 2010 i686
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Feb 7 12:00:53 2011 from dynamic-adsl-xx-xx-xx-xx.------.------.it
malloc: ../bash/subst.c:4494: assertion botched
realloc: called with unallocated block argument
Aborting...Connection to beta.my_application.it closed.
The short answer is no, unless you have other plugins that aren't standard, or someone gave you a messed-up gem. (Almost nobody bothers to validate the gem signatures.) The standard deploy:setup only creates a couple of symlinks and directories.
It does run as root, and in theory, if you were to set your variables to values (untested) such as set :deploy_to, '/bin/bash', it might damage the binary; but unless you did that, I'd say that's a non-issue.
You can debug this without relying on a shell, by using SSH in command mode:
# ssh myuser@myserver 'history'
This will dump out that user's history file (bash), so you can check whether there has been any tampering on the server. You can also check it as root, and/or run commands such as who, last, and other one-liners which give you back logs (you can also cat /var/log/messages and look for suspicious activity).
I'd say that the chance of Capistrano being responsible for this is 0 (source: I'm the maintainer) -- but you can probably get your system back into a working state using the SSH command mode, as I mentioned above (ssh myuser@myserver 'aptitude reinstall bash', for example).
A word to the wise: if you never figure out how this happened, erase the server and change your passwords… just use this as a method to get things back up and running. It's not a very subtle tactic, but if you've been hacked, a hacker could easily throw you out by making a user which uses an alternative shell and corrupting yours.
It would also be a huge help if your admins could give you the contents of /bin/bash, so you can see if it's text, junk, a corrupted binary, or something from your deploy.