Which application_controller.rb file should I edit? - ruby-on-rails

On my server I did a search with sudo find . -name "application_controller.rb". There are so many files with that name. Which one should I edit so the changes will be reflected on my server?
./home/myapp/apps/myapp/releases/20140208000704/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140116094931/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140114154804/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140117124202/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140120094758/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140116102758/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140117125636/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140116123905/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140116115403/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140117090645/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140121091622/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140212214841/app/controllers/application_controller.rb
./home/myapp/apps/myapp/releases/20140205001541/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/cached-copy/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/best_in_place-1.1.2/test_app/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/sass-rails-3.2.5/test/fixtures/sass_project/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/sass-rails-3.2.5/test/fixtures/engine_project/app/controllers/engine_project/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/sass-rails-3.2.5/test/fixtures/engine_project/test/dummy/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/sass-rails-3.2.5/test/fixtures/scss_project/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/devise-2.0.6/test/rails_app/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/rest-graph-2.0.1/example/rails3/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/rest-graph-2.0.1/example/rails2/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/railties-3.2.3/lib/rails/generators/rails/app/templates/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/railties-3.2.3/guides/code/getting_started/app/controllers/application_controller.rb
./home/myapp/apps/myapp/shared/bundle/ruby/1.8/gems/gmaps4rails-1.5.6/spec/dummy/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/best_in_place-1.1.2/test_app/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/email_spec-1.2.1/examples/rails3_root/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/sass-rails-3.2.5/test/fixtures/sass_project/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/sass-rails-3.2.5/test/fixtures/engine_project/app/controllers/engine_project/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/sass-rails-3.2.5/test/fixtures/engine_project/test/dummy/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/sass-rails-3.2.5/test/fixtures/scss_project/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/devise-2.0.6/test/rails_app/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/rest-graph-2.0.1/example/rails3/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/rest-graph-2.0.1/example/rails2/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/railties-3.2.3/lib/rails/generators/rails/app/templates/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/railties-3.2.3/guides/code/getting_started/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails3.0/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails3.2/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails2.3/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails4.0/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails3.1/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails_apps/2.3/empty/app/controllers/application_controller.rb
./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails_apps/2.3/mycook/app/controllers/application_controller.rb
myapp@myapp:/$ sufo nano ./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails_apps/2.3/mycook/app/controllers/application_controller.rb
-bash: sufo: command not found
myapp@myapp:/$ sudo nano ./home/myapp/.rvm/gems/ruby-1.8.7-p374/gems/passenger-4.0.14/test/stub/rails_apps/2.3/mycook/app/controllers/application_controller.rb
myapp@myapp:/$

You shouldn't change any of them. Really.
The ./home/myapp/apps/myapp/releases path suggests you are using something like Capistrano to deploy your application. You should update your source code, commit, and deploy it.
If that's not an option, then you should see if ./home/myapp/apps/myapp/current/app/controllers/application_controller.rb exists and edit that. If not, pick the most recent version in ./home/myapp/apps/myapp/releases.
Once edited, you'll need to restart the web server for it to pick up the changes.
Again: I would highly suggest not doing this unless you have no other option.
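If you do go the hotfix route, here is a minimal sketch (assuming a standard Capistrano layout and Passenger, which appears in your gem paths; treat the exact paths as illustrative):

# Find the release that is actually being served: current is a symlink into releases/.
ls -l /home/myapp/apps/myapp/current

# Edit the live copy (a one-off hotfix; prefer a proper deploy).
sudo nano /home/myapp/apps/myapp/current/app/controllers/application_controller.rb

# Ask Passenger to restart the application on the next request.
touch /home/myapp/apps/myapp/current/tmp/restart.txt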

Related

How do I remove Docker completely?

I installed Docker following this post:
https://blog.ssdnodes.com/blog/getting-started-docker-vps/
sudo curl -sS https://get.docker.com/ | sh
Piping a downloaded script straight into a shell like this seemed unsafe, so I wanted to remove everything Docker-related and reinstall it another way.
After removing everything, I checked with:
find / -name '*docker*'
The output:
/proc/sys/net/ipv4/conf/docker0
/proc/sys/net/ipv4/neigh/docker0
/proc/sys/net/ipv6/conf/docker0
/proc/sys/net/ipv6/neigh/docker0
/proc/1/task/1/net/dev_snmp6/docker0
/proc/1/net/dev_snmp6/docker0
/proc/2/task/2/net/dev_snmp6/docker0
/proc/2/net/dev_snmp6/docker0
/proc/3/task/3/net/dev_snmp6/docker0
/proc/3/net/dev_snmp6/docker0
/proc/84/task/84/net/dev_snmp6/docker0
/proc/84/net/dev_snmp6/docker0
/proc/92/task/92/net/dev_snmp6/docker0
/proc/92/net/dev_snmp6/docker0
/proc/96/task/96/net/dev_snmp6/docker0
/proc/96/net/dev_snmp6/docker0
/proc/114/task/114/net/dev_snmp6/docker0
/proc/114/net/dev_snmp6/docker0
/proc/127/task/127/net/dev_snmp6/docker0
/proc/127/net/dev_snmp6/docker0
/proc/133/task/133/net/dev_snmp6/docker0
/proc/133/net/dev_snmp6/docker0
/proc/134/task/134/net/dev_snmp6/docker0
/proc/134/net/dev_snmp6/docker0
/proc/151/task/151/net/dev_snmp6/docker0
/proc/151/net/dev_snmp6/docker0
/proc/368/task/368/net/dev_snmp6/docker0
/proc/368/net/dev_snmp6/docker0
/proc/371/task/371/net/dev_snmp6/docker0
/proc/371/net/dev_snmp6/docker0
/proc/372/task/372/net/dev_snmp6/docker0
/proc/372/net/dev_snmp6/docker0
/proc/373/task/373/net/dev_snmp6/docker0
/proc/373/task/376/net/dev_snmp6/docker0
/proc/373/task/390/net/dev_snmp6/docker0
/proc/373/net/dev_snmp6/docker0
/proc/386/task/386/net/dev_snmp6/docker0
/proc/386/net/dev_snmp6/docker0
/proc/393/task/393/net/dev_snmp6/docker0
/proc/393/net/dev_snmp6/docker0
/proc/404/task/404/net/dev_snmp6/docker0
/proc/404/net/dev_snmp6/docker0
/proc/407/task/407/net/dev_snmp6/docker0
/proc/407/net/dev_snmp6/docker0
/proc/408/task/408/net/dev_snmp6/docker0
/proc/408/net/dev_snmp6/docker0
/proc/416/task/416/net/dev_snmp6/docker0
/proc/416/net/dev_snmp6/docker0
/proc/448/task/448/net/dev_snmp6/docker0
/proc/448/net/dev_snmp6/docker0
/proc/572/task/572/net/dev_snmp6/docker0
/proc/572/net/dev_snmp6/docker0
/proc/574/task/574/net/dev_snmp6/docker0
/proc/574/net/dev_snmp6/docker0
/proc/2523/task/2523/net/dev_snmp6/docker0
/proc/2523/net/dev_snmp6/docker0
/proc/2526/task/2526/net/dev_snmp6/docker0
/proc/2526/net/dev_snmp6/docker0
/proc/3110/task/3110/net/dev_snmp6/docker0
/proc/3110/net/dev_snmp6/docker0
/proc/3111/task/3111/net/dev_snmp6/docker0
/proc/3111/net/dev_snmp6/docker0
/proc/3114/task/3114/net/dev_snmp6/docker0
/proc/3114/net/dev_snmp6/docker0
/usr/bin/pm2-docker
/usr/libexec/docker
/usr/lib/node_modules/pm2/bin/pm2-docker
/usr/lib/node_modules/pm2/node_modules/systeminformation/lib/dockerSocket.js
/usr/lib/node_modules/pm2/node_modules/systeminformation/lib/docker.js
/usr/lib/node_modules/pm2/node_modules/@pm2/io/docker-compose.yml
/usr/lib/firewalld/services/docker-swarm.xml
/usr/lib/firewalld/services/docker-registry.xml
/sys/devices/virtual/net/docker0
/sys/class/net/docker0
/var/cache/yum/x86_64/7/docker-ce-stable
/var/lib/docker-engine
/var/lib/yum/repos/x86_64/7/docker-ce-stable
/etc/yum.repos.d/docker-ce.repo
/etc/systemd/system/docker.service.d
How do I remove Docker completely? And are the files left on my host safe?
Quick summary of my comments so far, so you can accept it as an answer:
You downloaded the script from a trustworthy source (docker.com) and via HTTPS so there is extremely little risk of your system being compromised.
If your system was compromised, uninstalling docker would likely not solve the problem.
With those two caveats out of the way: The script you ran does a lot of magic to determine your operating system, and then delegates the actual installation to the appropriate package manager, meaning you can simply uninstall it through the usual package management tools of your Linux distribution.
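As a concrete sketch, assuming CentOS 7 (your find output shows yum caches and a docker-ce-stable repo) and the package names the convenience script usually installs, the cleanup might look like this:

# Remove the packages installed by the get.docker.com script (assumed names).
sudo yum remove docker-ce docker-ce-cli containerd.io

# Delete images, containers, and volumes (destructive!).
sudo rm -rf /var/lib/docker

# Remove the repo file and systemd drop-in that appeared in your find output.
sudo rm -f /etc/yum.repos.d/docker-ce.repo
sudo rm -rf /etc/systemd/system/docker.service.d

The docker0 entries under /proc and /sys are just the leftover bridge interface; they disappear after a reboot (or ip link delete docker0).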

pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null

I am trying to run Hadoop using the Docker setup provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the services, and I am able to access them and browse the file system using localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
from pyhdfs import HdfsClient

hdfs_client = HdfsClient(hosts = 'localhost:9870')

# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'

# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
    hdfs_client.mkdirs(output_hdfs_path)

hdfs_client.create(output_hdfs_path + 'data.json', data = 'This is test.', overwrite = True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the github repo. The only change I've made is in the hadoop.env file where I change:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen another post on Stack Overflow and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting.
Works fine for me after adding a forward slash to the paths
import pyhdfs

fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'

if not fs.exists(output_hdfs_path):
    fs.mkdirs(output_hdfs_path)

fs.create(output_hdfs_path + '/data.json', data = 'This is test.')

# check that it's present
list(fs.walk(output_hdfs_path))
[('/path/to/test/dir', [], ['data.json'])]
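A likely reason mkdirs succeeds while create fails (my inference, not something the answer above states): creating a directory is a metadata-only namenode operation, while writing file data makes WebHDFS redirect the client to a datanode under the hostname the datanode advertises, typically its Compose service name. You can inspect the redirect with a quick check like this (illustrative; the datanode service name is an assumption):

# Start a WebHDFS file write; the 307 Location header names the datanode host.
curl -i -X PUT "http://localhost:9870/webhdfs/v1/tmp/probe?op=CREATE&overwrite=true"

# If that host is a Compose service name, make it resolvable from the host machine:
echo "127.0.0.1 namenode datanode" | sudo tee -a /etc/hosts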

Can't add custom script in overcommit gem

I'm not sure if this is something I should post in the official repository's issues (I sometimes see a 'question' tag there), but if you think this is the appropriate place to ask, it would be great if someone could help me out.
I've been trying to add a custom script in overcommit gem with no luck.
What it says in the official document is to add lines in .overcommit.yml:
PostCheckout:
  CustomScript:
    enabled: true
    required_executable: './bin/custom-script'
So I did:
PrePush:
  customHook:
    enabled: true
    required_executable: 'custom-hook'
and to put the script in the .git-hooks directory in the project root. So I put this script in .git-hooks for a test:
#!/bin/sh
# custom-hook
echo hey
Here's the sweet error message:
Hook must specify a `required_executable` or `command` that is tracked by git (i.e. is a path relative to the root of the repository) so that it can be signed
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_signer.rb:39:in `hook_path'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_signer.rb:92:in `hook_contents'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_signer.rb:88:in `signature'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_signer.rb:61:in `signature_changed?'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_loader/plugin_hook_loader.rb:51:in `select'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_loader/plugin_hook_loader.rb:51:in `modified_plugins'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_loader/plugin_hook_loader.rb:55:in `check_for_modified_plugins'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_loader/plugin_hook_loader.rb:8:in `load_hooks'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_runner.rb:195:in `load_hooks'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_runner.rb:32:in `block in run'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/interrupt_handler.rb:84:in `isolate_from_interrupts'
/Users/hiroki/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/overcommit-0.34.2/lib/overcommit/hook_runner.rb:28:in `run'
.git/hooks/pre-push:79:in `<main>'
Obviously, it complains that it can't find the executable, so I'm guessing the format isn't right, but there is little information out there and I'm stuck.
From the error message, it seems that the custom-hook must be relative to the root directory of your git repository. Perhaps try putting that one into ./bin/custom-hook?
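A sketch of what that could look like, assuming the script is committed to the repository (the paths and hook name are illustrative):

# Track the hook inside the repo and make it executable.
mkdir -p bin
mv .git-hooks/custom-hook bin/custom-hook
chmod +x bin/custom-hook
git add bin/custom-hook

# .overcommit.yml then points at the repo-relative path:
# PrePush:
#   CustomHook:
#     enabled: true
#     required_executable: './bin/custom-hook'

# Overcommit verifies plugin signatures, so re-sign after changing hooks.
overcommit --sign pre-push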

Homebrew: Can't start Elasticsearch

I'm in big trouble: I can't start Elasticsearch, and I need it to run my Rails app locally. Please tell me what's going on. I installed Elasticsearch in the normal fashion, then I did the following:
elasticsearch --config=/usr/local/opt/elasticsearch/config/elasticsearch.yml
But it shows the following error:
[2015-11-01 20:36:50,574][INFO ][bootstrap] es.config is no longer supported. elasticsearch.yml must be placed in the config directory and cannot be renamed.
I tried several alternative ways of running it, like:
elasticsearch -f -D
But then I get the following error, and I can't find anything useful to solve it. It seems to be related to file permissions, but I'm not sure:
java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: sun.misc.Launcher$AppClassLoader@33909752
at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:266)
at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:968)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:672)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:560)
at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:644)
at org.elasticsearch.Build.<clinit>(Build.java:51)
at org.elasticsearch.node.Node.<init>(Node.java:135)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
[2015-11-01 20:40:57,602][INFO ][node ] [Centurius] version[2.0.0], pid[22063], build[de54438/2015-10-22T08:09:48Z]
[2015-11-01 20:40:57,605][INFO ][node ] [Centurius] initializing ...
Exception in thread "main" java.lang.IllegalStateException: failed to load bundle [] due to jar hell
Likely root cause: java.security.AccessControlException: access denied ("java.io.FilePermission" "/usr/local/Cellar/elasticsearch/2.0.0/libexec/antlr-runtime-3.5.jar" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at java.util.zip.ZipFile.<init>(ZipFile.java:210)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:173)
at org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:340)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:113)
at org.elasticsearch.node.Node.<init>(Node.java:144)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Thanks for your help.
There are some changes to libexec in the Elasticsearch Homebrew formula, and that is why it is failing to start. There is a PR #45644 currently being worked on. Until the PR gets accepted, you can use the formula from that PR to fix the installation of Elasticsearch.
First uninstall the earlier/older version. Then edit the formula of Elasticsearch:
$ brew edit elasticsearch
And use the formula from the PR.
Then run brew install elasticsearch; it should work fine.
To start Elasticsearch, just do:
$ elasticsearch
The config option is no longer valid. For a custom config location, use --path.conf:
$ elasticsearch --path.conf=/usr/local/opt/elasticsearch/config
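Putting the steps together (a sketch; the edited formula body itself comes from the PR mentioned above):

brew uninstall elasticsearch     # remove the broken install
brew edit elasticsearch          # paste in the formula from the PR
brew install elasticsearch       # reinstall from the edited formula
elasticsearch --path.conf=/usr/local/opt/elasticsearch/config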

Deploying a Git subdirectory in Capistrano

My master branch layout is like this:
/ <-- top level
/client <-- desktop client source files
/server <-- Rails app
What I'd like to do is only pull down the /server directory in my deploy.rb, but I can't seem to find any way to do that. The /client directory is huge, so setting up a hook to copy /server to / won't work very well; it needs to only pull down the Rails app.
Without any dirty forking action, but even dirtier!
In my config/deploy.rb:
set :deploy_subdir, "project/subdir"
Then I added this new strategy to my Capfile:
require 'capistrano/recipes/deploy/strategy/remote_cache'

class RemoteCacheSubdir < Capistrano::Deploy::Strategy::RemoteCache

  private

  def repository_cache_subdir
    if configuration[:deploy_subdir] then
      File.join(repository_cache, configuration[:deploy_subdir])
    else
      repository_cache
    end
  end

  def copy_repository_cache
    logger.trace "copying the cached version to #{configuration[:release_path]}"
    if copy_exclude.empty?
      run "cp -RPp #{repository_cache_subdir} #{configuration[:release_path]} && #{mark}"
    else
      exclusions = copy_exclude.map { |e| "--exclude=\"#{e}\"" }.join(' ')
      run "rsync -lrpt #{exclusions} #{repository_cache_subdir}/* #{configuration[:release_path]} && #{mark}"
    end
  end
end

set :strategy, RemoteCacheSubdir.new(self)
For Capistrano 3.0, I use the following:
In my Capfile:
# Define a new SCM strategy, so we can deploy only a subdirectory of our repo.
module RemoteCacheWithProjectRootStrategy
  def test
    test! " [ -f #{repo_path}/HEAD ] "
  end

  def check
    test! :git, :'ls-remote', repo_url
  end

  def clone
    git :clone, '--mirror', repo_url, repo_path
  end

  def update
    git :remote, :update
  end

  def release
    git :archive, fetch(:branch), fetch(:project_root), '| tar -x -C', release_path, "--strip=#{fetch(:project_root).count('/')+1}"
  end
end
And in my deploy.rb:
# Set up a strategy to deploy only a project directory (not the whole repo)
set :git_strategy, RemoteCacheWithProjectRootStrategy
set :project_root, 'relative/path/from/your/repo'
All the important code is in the strategy release method, which uses git archive to archive only a subdirectory of the repo, then uses the --strip argument to tar to extract the archive at the right level.
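For intuition, the shell equivalent of that release step looks like this (illustrative paths; a project_root of server contains no slashes, so the strip count is 1):

# Archive just the server/ subtree of master, extract it into the release
# directory, and drop the leading "server/" path component.
git archive master server | tar -x -C /var/www/app/releases/20140212214841 --strip-components=1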
UPDATE
As of Capistrano 3.3.3, you can now use the :repo_tree configuration variable, which makes this answer obsolete. For example:
set :repo_url, 'https://example.com/your_repo.git'
set :repo_tree, 'relative/path/from/your/repo' # relative path to project root in repo
See http://capistranorb.com/documentation/getting-started/configuration.
We're also doing this with Capistrano by cloning down the full repository, deleting the unused files and folders, and moving the desired folder up the hierarchy.
deploy.rb
set :repository, "git#github.com:name/project.git"
set :branch, "master"
set :subdir, "server"
after "deploy:update_code", "deploy:checkout_subdir"
namespace :deploy do
  desc "Checkout subdirectory and delete all the other stuff"
  task :checkout_subdir do
    run "mv #{current_release}/#{subdir}/ /tmp && rm -rf #{current_release}/* && mv /tmp/#{subdir}/* #{current_release}"
  end
end
As long as the project doesn't get too big this works pretty well for us, but if you can, create a separate repository for each component and group them together with git submodules.
You can have two git repositories (client and server) and add them to a "super-project" (app). In this "super-project" you can add the two repositories as submodules (check this tutorial).
Another possible solution (a bit more dirty) is to have separate branches for client and server, and then you can pull from the 'server' branch.
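A brief sketch of the submodule route (the repository URLs here are placeholders):

# In a new "super-project" repository, add client and server as submodules.
git submodule add git@github.com:name/client.git client
git submodule add git@github.com:name/server.git server
git commit -m "Add client and server submodules"

# Consumers clone the super-project with both submodules populated.
git clone --recursive git@github.com:name/app.git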
There is a solution. Grab crdlo's patch for Capistrano and the Capistrano source from GitHub. Remove your existing capistrano gem, apply the patch, run setup.rb install, and then you can use his very simple configuration line set :project, "mysubdirectory" to set a subdirectory.
The only gotcha is that apparently github doesn't "support the archive command" ... at least when he wrote it. I'm using my own private git repo over svn and it works fine, I haven't tried it with github but I imagine if enough people complain they'll add that feature.
Also see if you can get capistrano authors to add this feature into cap at the relevant bug.
For Capistrano 3, based on @Thomas Fankhauser's answer:
set :repository, "git@github.com:name/project.git"
set :branch, "master"
set :subdir, "relative_path_to_my/subdir"
namespace :deploy do
  desc "Checkout subdirectory and delete all the other stuff"
  task :checkout_subdir do

    subdir = fetch(:subdir)
    subdir_last_folder = File.basename(subdir)
    release_subdir_path = File.join(release_path, subdir)
    tmp_base_folder = File.join("/tmp", "capistrano_subdir_hack")
    tmp_destination = File.join(tmp_base_folder, subdir_last_folder)

    cmd = []
    # Settings for my-zsh
    # cmd << "unsetopt nomatch && setopt rmstarsilent"
    # create temporary folder
    cmd << "mkdir -p #{tmp_base_folder}"
    # delete previous temporary files
    cmd << "rm -rf #{tmp_base_folder}/*"
    # move subdir contents to tmp
    cmd << "mv #{release_subdir_path}/ #{tmp_destination}"
    # delete contents inside release
    cmd << "rm -rf #{release_path}/*"
    # move subdir contents to release
    cmd << "mv #{tmp_destination}/* #{release_path}"
    cmd = cmd.join(" && ")

    on roles(:app) do
      within release_path do
        execute cmd
      end
    end
  end
end

after "deploy:updating", "deploy:checkout_subdir"
Unfortunately, git provides no way to do this. Instead, the 'git way' is to have two repositories -- client and server, and clone the one(s) you need.
I created a snippet that works with Capistrano 3.x, based on previous answers and other information found on GitHub:
# Usage:
# 1. Drop this file into lib/capistrano/remote_cache_with_project_root_strategy.rb
# 2. Add the following to your Capfile:
#      require 'capistrano/git'
#      require './lib/capistrano/remote_cache_with_project_root_strategy'
# 3. Add the following to your config/deploy.rb:
#      set :git_strategy, RemoteCacheWithProjectRootStrategy
#      set :project_root, 'subdir/path'

# Define a new SCM strategy, so we can deploy only a subdirectory of our repo.
module RemoteCacheWithProjectRootStrategy
  include Capistrano::Git::DefaultStrategy

  def test
    test! " [ -f #{repo_path}/HEAD ] "
  end

  def check
    test! :git, :'ls-remote -h', repo_url
  end

  def clone
    git :clone, '--mirror', repo_url, repo_path
  end

  def update
    git :remote, :update
  end

  def release
    git :archive, fetch(:branch), fetch(:project_root), '| tar -x -C', release_path, "--strip=#{fetch(:project_root).count('/')+1}"
  end
end
It's also available as a Gist on Github.
I don't know if anyone is still interested in this, but in case anyone is looking for an answer: we can now use :repo_tree.
https://capistranorb.com/documentation/getting-started/configuration/
Looks like it's also not working with codebasehq.com, so I ended up making Capistrano tasks that clean up the mess :-) Maybe there's actually a less hacky way of doing this by overriding some Capistrano tasks...
This has been working for me for a few hours.
# Capistrano assumes that the repository root is Rails.root
namespace :uploads do
  # We have the Rails application in a subdirectory rails_app.
  # Capistrano doesn't provide an elegant way to deal with that
  # for the git case. (For subversion it is straightforward.)
  task :mv_rails_app_dir, :roles => :app do
    run "mv #{release_path}/rails_app/* #{release_path}/ "
  end
end

before 'deploy:finalize_update', 'uploads:mv_rails_app_dir'
You might declare a variable for the directory (here rails_app).
Let's see how robust it is. Using "before" is pretty weak.
