Capistrano destination path already exists, not an empty directory

While deploying a Rails app with Capistrano on Ubuntu 14.04, I am getting the following error:
fatal: destination path '/var/www/APP-NAME/repo' already exists and is not an empty directory.
INFO[cf5a389e] Running /usr/bin/env [ -f /var/www/rd/repo/HEAD ] on LINODE-INSTANCE-IP
DEBUG[cf5a389e] Command: [ -f /var/www/rd/repo/HEAD ]
DEBUG[cf5a389e] Finished in 0.005 seconds with exit status 1 (failed).
DEBUG[8899b95c] Running /usr/bin/env if test ! -d /var/www/rd; then echo "Directory does not exist '/var/www/rd'" 1>&2; false; fi on LINODE-INSTANCE-IP
DEBUG[8899b95c] Command: if test ! -d /var/www/rd; then echo "Directory does not exist '/var/www/rd'" 1>&2; false; fi
DEBUG[8899b95c] Finished in 0.005 seconds with exit status 0 (successful).
INFO[fc5f524b] Running /usr/bin/env git clone --mirror GIT_REPO_URL /var/www/APP-NAME/repo on LINODE-INSTANCE-IP
DEBUG[fc5f524b] Command: cd /var/www/APP-NAME && ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/rd/git-ssh.sh /usr/bin/env git clone --mirror GIT-REPO-URL /var/www/APP-NAME/repo )
DEBUG[fc5f524b] fatal: destination path '/var/www/APP-NAME/repo' already exists and is not an empty directory.
Here are the config files:
config/deploy/production.rb
config/deploy.rb
The only reason for this error I can find online is:
same host in more than one role, so that they're racing? For example I mean that you might have the same IP address defined as an :app role host more than once.
Which I guess doesn't fit the above config files.

I had the same problem. The cause is a double definition of the role and/or server.
Try removing either
server 'SERVER-IP', user: 'USERNAME', roles: %w{app}
in production.rb or
role :app, "SERVER-IP"
in deploy.rb. The latter is just the simple syntax while the former is the extended one, so you in fact declare the role more than once (three times, to be precise: two in production.rb and one in deploy.rb). The duplicate hosts then race each other, which is what produces the git clone failure. Hope it helps.
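As a minimal sketch of the deduplicated setup (SERVER-IP and USERNAME are the question's placeholders):
# config/deploy/production.rb -- declare each host exactly once,
# with all of its roles in a single extended-syntax line
server 'SERVER-IP', user: 'USERNAME', roles: %w{app}
# config/deploy.rb -- no "role :app, 'SERVER-IP'" line here, so the
# same host is not registered a second time
With one declaration per host, the git clone into /var/www/APP-NAME/repo runs only once.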

On a related note, I ran into this issue when changing my DB connections file setup on one of my deploys.
In this case, the old structure had one shared file for the DB settings, while the newer one had two. Even though both were declared in the recipe, I was getting errors when the automated deletion of older releases ran.
To resolve it, I deleted the older releases by hand and ran cap [server name] deploy a few times to clear things out and verify the error was no longer occurring. So far, it's been fine.
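For Capistrano 3, the declaration in the recipe would look something like this sketch (the second file name is hypothetical, standing in for whatever your new two-file layout uses):
# config/deploy.rb -- declare every shared DB settings file so each new
# release symlinks it from shared/; remove declarations left over from
# the old one-file layout (hypothetical file names)
append :linked_files, 'config/database.yml', 'config/database_replica.yml'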

Related

8th Wall web app setup: child compilation failed

I am new to 8th Wall. I have cloned 8th Wall Web from git and executed the steps below:
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# ./serve/bin/serve -d <sample_project_location>
But on executing the last step, which in my case is
./serve/bin/serve -n -d gettingstarted/xraframe/ -p 7777
I get the errors below:
Failed to compile.
Error: Child compilation failed: Entry module not found:
Error: Can't resolve 'C:\8thWall_Project\web\serve\bin\gettingstarted\xraframe"\index.html' in 'C:\8thWall_Project\web\serve':
Error: Can't resolve 'C:\8thWall_Project\web\serve\bin\gettingstarted\xraframe"\index.html' in 'C:\8thWall_Project\web\serve'
compiler.js:79 childCompiler.runAsChild
[serve]/[html-webpack-plugin]/lib/compiler.js:79:16
Compiler.js:306 compile
[serve]/[webpack]/lib/Compiler.js:306:11
Compiler.js:631 hooks.afterCompile.callAsync.err
[serve]/[webpack]/lib/Compiler.js:631:15
Hook.js:154 AsyncSeriesHook.lazyCompileHook
[serve]/[tapable]/lib/Hook.js:154:20
Compiler.js:628 compilation.seal.err
[serve]/[webpack]/lib/Compiler.js:628:31
Hook.js:154 AsyncSeriesHook.lazyCompileHook
[serve]/[tapable]/lib/Hook.js:154:20
Compilation.js:1325 hooks.optimizeAssets.callAsync.err
[serve]/[webpack]/lib/Compilation.js:1325:35
Any ideas or pointers on what is missing?
Thanks
I don't know why, but the .bat file doesn't want to be opened by its path. Just go to the serve\bin directory and launch the .bat from there.
The -p 7777 part is unnecessary. The problem was that it couldn't find the path to your xraframe project: since you are in another directory, you have to go two directories up in your path to reach xraframe.
It seems as if you're attempting this on a Windows computer. The serve process on Windows is slightly different from macOS.
Instead of the normal serve script, use the serve.bat executable.
serve\bin\serve.bat -n -d gettingstarted\xraframe -p 7777
https://docs.8thwall.com/web/#locally-from-windows

opensipsctl start gives an error: opensips.pid does not exist

When I run the opensipsctl start command to start OpenSIPS, I get this error:
ERROR: PID file /var/run/opensips.pid does not exist -- OpenSIPS start failed
Please help me solve it.
Open up opensipsctl; it includes the file opensipsctlrc, which defines $PID_FILE as /var/run/opensips.pid.
Then in opensipsctl, when you run start, one of the checks is:
if [ ! -s $PID_FILE ] ; then
  echo
  merr "PID file $PID_FILE does not exist -- OpenSIPS start failed"
  exit 1
fi
This says: if the check "does /var/run/opensips.pid exist and is it bigger than 0 bytes?" fails, then echo out the above error.
This means the file isn't being created.
If you look just above that line, it does:
if [ $SYSLOG = 1 ] ; then
  $OSIPSBIN -P $PID_FILE $STARTOPTIONS 1>/dev/null 2>/dev/null
else
  $OSIPSBIN -P $PID_FILE -E $STARTOPTIONS
fi
This is where opensips actually starts. I would suggest adding the following to your opensips.cfg if you haven't already:
# Logging
debug=6
log_stderror=no
log_facility=LOG_LOCAL0
Now everything will be logged to /var/log/syslog on boot.
Try booting again, then look at that log for info about what happened.
Another thing to check is whether the user you're running opensips as has permission to create the pid file in the directory it's trying to write to.
I had the same error & it was driving me mad as well. I managed to trace it down to one of two things - I had both!
1/ A misconfiguration in the OpenSIPS config file. journalctl -xe should be able to tell you what the error is
2/ Something else is listening on the port that you are trying to listen on
For 2, if you're on Ubuntu, you can use the command below to see if anything is already listening on that port:
lsof -i :5060
I was able to see the logs and fix the issue with the steps below.
Set log_level=4 in opensips.cfg to view debug logs in /var/log/syslog.
debug is deprecated in version 2.4 and higher.
You can refer here for the different log levels.

Capistrano: many failed statuses on linked_files and linked_dirs, what do they mean?

OK, so everything is working perfectly as far as I can see, but I do see a lot of "failed" statuses on most of the linked_files and linked_dirs tasks, and I am wondering if they deserve any attention. Here are a few examples:
DEBUG [423a17e1] Running [ -L /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ] on xxx.xxx.xxx.xxx
DEBUG [423a17e1] Command: [ -L /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ]
DEBUG [423a17e1] Finished in 0.470 seconds with exit status 1 (failed).
DEBUG [541d2f8a] Running [ -d /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ] on xxx.xxx.xxx.xxx
DEBUG [541d2f8a] Command: [ -d /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ]
DEBUG [541d2f8a] Finished in 0.476 seconds with exit status 1 (failed).
I was unable to find any detail in the official Capistrano docs, which send you either here or to the mailing list for questions.
I would appreciate any clarification on the above failures.
Thank you very much.
Don't worry about it!
Whenever cap runs a command that returns a non-zero result, it prints the line in red and says "failed". This can be misleading, because it runs a lot of commands just to see what already exists. For instance, [ -d foo ] means "is there a directory named foo?" It's not actually a failure; it's just cap inspecting the target machine to find out what work it needs to do.
If cap hits a real error, it will quit early and you'll get a stack trace and/or actual error message.
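The same convention applies if you write your own tasks: SSHKit's test helper runs a command purely for its exit status and returns a boolean instead of aborting, which is exactly what produces those red lines. A hypothetical task to illustrate:
# Hypothetical Capistrano task: `test` logs a red "failed" line when
# the command exits non-zero, but simply returns false to the caller
task :ensure_pids_dir do
  on roles(:app) do
    unless test("[ -d #{release_path}/tmp/pids ]")
      execute :mkdir, '-p', "#{release_path}/tmp/pids"
    end
  end
end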

Jenkins plugin for triggering a build whenever any file changes in a given directory

I am looking for the following functionality: we have a directory with some files in it.
Whenever anyone makes a change to any of the files in the directory, Jenkins should trigger a build.
Is there any plugin or method for this? Please advise.
Thanks in advance.
I have not tried it myself, but the FSTrigger plugin seems to do what you want:
FSTrigger provides polling mechanisms to monitor a file system and
trigger a build if a file or a set of files have changed.
If you can monitor the directory with a script, you can trigger the build with an HTTP GET, for example with wget or curl:
wget -O- $JENKINS_URL/job/JOBNAME/build
Although only slightly related: this issue is about monitoring static files on the system, but there are version control systems for exactly this purpose.
I answered this in another post. If you're using git to track changes to the files themselves:
#!/bin/bash
set -e

job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"

# Python 2 helper: print every path touched by the build's changeset
python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print inner
"

# Ask Jenkins for this build's changeset (BUILD_NUMBER is set by Jenkins)
_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`

if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
  echo "[INFO] no changes detected in ${FILTER_PATH}"
  exit 0
else
  echo "[INFO] changed files detected: "
  for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
    echo "  $a_file"
  done;
fi;
You can add this check directly to the top of the job's exec shell, and it will exit 0 if no changes are detected. Hence, you can always poll the top level of the repo for check-ins to trigger a build, and only complete the build if the files in question have changed.

Can "cap deploy:setup" destroy BASH?

I had a problem this morning deploying an application with capistrano.
# git push
# cap deploy:setup
Something strange happened, and then I wasn't able to ssh to my host anymore.
Technical staff says (in Italian): "the commands you ran overwrote the shell binaries, making the system unusable". Two options: either I am stupid, or they are wrong.
Here's the shell output of git push and cap deploy:setup, and then the error on ssh. Once the system (VPS) was rebooted, I wasn't able to ssh anymore.
Any ideas?
mattia@desktop:/var/www/rails/my_application$ git push
Counting objects: 239, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (191/191), done.
Writing objects: 100% (202/202), 379.77 KiB, done.
Total 202 (delta 44), reused 0 (delta 0)
To ssh://mattia@my_application.it/~/git/my_application.git
96c1f19..3cc9e1c master -> master
mattia@desktop:/var/www/rails/my_application$ cap deploy:setup
* executing `deploy:setup'
* executing "mkdir -p /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids && chmod g+w /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids"
servers: ["beta.my_application.it"]
[beta.my_application.it] executing command
** [out :: beta.my_application.it]
** [out :: beta.my_application.it] malloc: ../bash/parse.y:2823: assertion botched
** [out :: beta.my_application.it] nunits < 30
** [out :: beta.my_application.it] Aborting...
command finished
failed: "env PATH=/usr/local/bin:/usr/bin:/bin GEM_PATH=/var/lib/gems/1.9.1 sh -c 'mkdir -p /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids && chmod g+w /var/www/rails/my_application /var/www/rails/my_application/releases /var/www/rails/my_application/shared /var/www/rails/my_application/shared/system /var/www/rails/my_application/shared/log /var/www/rails/my_application/shared/pids'" on beta.my_application.it
mattia@desktop:/var/www/rails/my_application$ ssh beta.my_application.it
Linux my_application 2.6.18-194.26.1.el5.028stab079.2ent #1 SMP Fri Dec 17 19:44:51 MSK 2010 i686
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Feb 7 12:00:53 2011 from dynamic-adsl-xx-xx-xx-xx.------.------.it
malloc: ../bash/subst.c:4494: assertion botched
realloc: called with unallocated block argument
Aborting...Connection to beta.my_application.it closed.
The short answer is no, unless you have other plugins that aren't standard, or someone gave you a messed-up gem. (Almost nobody bothers to validate the gem signatures.) The standard deploy:setup only creates a couple of symlinks and directories.
It does run as root, and in theory, if you were to set your variables to (untested) values such as set :deploy_to, '/bin/bash', it might damage the binary, but unless you did that, I'd say that's a non-issue.
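For reference, the variable in question is just a path in the recipe; a sane Capistrano 2-era value, matching the transcript above, looks like:
# config/deploy.rb (Capistrano 2) -- deploy:setup only mkdirs and
# chmods directories derived from :deploy_to, so point it at an app
# directory, never at a system path like /bin
set :deploy_to, "/var/www/rails/my_application"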
You can debug this, without relying on a shell, by using SSH in command mode:
# ssh myuser@myserver 'history'
This will dump out the (bash) history file of that user, so you can check whether there's been any tampering on the server. You can also check it as root, and/or run commands such as who, last and other one-liners which give you back logs (you can also cat /var/log/messages and look for suspicious activity).
I'd say the chance of Capistrano being responsible for this is zero (source: I'm the maintainer), but you can probably get your system back into a working state using the SSH command mode mentioned above (ssh myuser@myserver 'aptitude reinstall bash', for example).
A word to the wise: if you never figure out how this happened, erase the server and change your passwords; just use this as a method to get things back up and running. It's not a very subtle tactic, but if you've been hacked, a hacker could easily throw you out by creating a user with an alternative shell and corrupting yours.
It would also be a huge help if your admins could give you the contents of /bin/bash, so you can see whether it's text, junk, a corrupted binary, or something from your deploy.
