Linking custom taps to existing bottles in bintray - docker

As it sometimes happens, you get stuck with an older version of some software but still want to be able to install it seamlessly using Homebrew. In my company we are stuck with an old version of Docker, so I created my own tap from the old version of the Docker formula.
class Docker191 < Formula
  desc "Pack, ship and run any application as a lightweight container"
  homepage "https://www.docker.com/"
  url "https://github.com/docker/docker.git",
      :tag => "v1.9.1",
      :revision => "a34a1d598c6096ed8b5ce5219e77d68e5cd85462"
  revision 1
  head "https://github.com/docker/docker.git"

  bottle do
    cellar :any_skip_relocation
    root_url "https://homebrew.bintray.com/bottles"
    sha256 "fe02c921afd6863be441b85ae069e24b7c5b13e97615b47c994ea8064e082bf1" => :el_capitan
    sha256 "8a3137f5d6155491e9c4833d80ca819d8bf6d31f38595be713baed06f5283c92" => :yosemite
    sha256 "282f987005f81d9a82269827b66aa1044dfbd8645c23845d59e21aad93dc99e0" => :mavericks
  end

  option "without-completions", "Disable bash/zsh completions"

  depends_on "go" => :build

  conflicts_with "docker", :because => "Differing version of the same formula"

  def install
    ENV["AUTO_GOPATH"] = "1"
    ENV["DOCKER_CLIENTONLY"] = "1"
    system "hack/make.sh", "dynbinary"

    build_version = build.head? ? File.read("VERSION").chomp : version
    bin.install "bundles/#{build_version}/dynbinary/docker-#{build_version}" => "docker"

    if build.with? "completions"
      bash_completion.install "contrib/completion/bash/docker"
      fish_completion.install "contrib/completion/fish/docker.fish"
      zsh_completion.install "contrib/completion/zsh/_docker"
    end
  end

  test do
    system "#{bin}/docker", "--version"
  end
end
It all builds fine from source, but I was wondering whether it is possible to pull the bottles that are already built and available on Bintray instead of building from source.
I tried to do it by adding
root_url "https://homebrew.bintray.com/bottles"
But because my formula name is Docker191 instead of just Docker, it tries to pull from a nonexistent path:
==> Downloading https://homebrew.bintray.com/bottles/docker191-1.9.1_1.el_capitan.bottle.tar.gz
instead of
==> Downloading https://homebrew.bintray.com/bottles/docker-1.9.1_1.el_capitan.bottle.tar.gz
Is there a way to fix the bottle name as well? I cannot rename the formula to Docker because I need multiple versions to be available.

Related

How can I get dependabot to ignore a docker minor version

I'm trying to stay one minor version behind the latest python version, and I was hoping to use dependabot to help with that.
I'm using the python slim docker image as my base image, and based on that plus the dependabot docs I've added the following to my dependabot.yml:
- package-ecosystem: "docker"
  directory: "/"
  schedule:
    interval: "daily"
  ignore:
    - dependency-name: "python"
      versions: [ "3.10.x" ]
This is not working. When I tell the 3.10 PR to "ignore this minor version", however, it does so successfully and states that it won't bother me about 3.10.x versions anymore, so clearly the logic is in there somewhere.
It is using Gem::Requirement here: https://github.com/dependabot/dependabot-core/blob/c0945b376ef12f3551e22f185dc6f20c56049296/docker/lib/dependabot/docker/requirement.rb#L8
I haven't tested this specific scenario, but I am using something similar with success. I think this will work:
ignore:
  - dependency-name: "python"
    versions: ["~> 3.10", "< 3.11"]
It appears to anyway, when testing against Gem::Requirement:
>> r = Gem::Requirement.new("~> 3.10", "< 3.11")
=> Gem::Requirement.new(["~> 3.10", "< 3.11"])
>> r.satisfied_by?(Gem::Version.new('3.11'))
=> false
>> r.satisfied_by?(Gem::Version.new('3.10'))
=> true
>> r.satisfied_by?(Gem::Version.new('3.10.1'))
=> true
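Putting that together with the updates entry from the question, the full dependabot.yml block would look roughly like this (hedged: only the requirement logic above has been verified, not this exact file):
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "daily"
    ignore:
      - dependency-name: "python"
        versions: ["~> 3.10", "< 3.11"]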

How to create custom brew bottle

I have a Go application that I would like to distribute to a few developers. Is it possible with "bottles & taps"?
I have tried:
brew tap-new <name of tap>
This gives me a local repository:
Initialized empty Git repository in <local path>
I'm not sure what to do then; I can't find any documentation for creating a custom bottle.
It's vaguely documented right over here; your formula would probably define
bottle do
  root_url "https://my-internal-server/"
  sha256 "..." => :sierra
  sha256 "..." => :el_capitan
  sha256 "..." => :yosemite
end
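To get the bottle tarball and the sha256 values, the usual workflow is to build the formula from your tap with bottling enabled and then let brew generate the bottle; roughly (tap and formula names are placeholders):
brew install --build-bottle <your-tap>/<your-formula>
brew bottle <your-tap>/<your-formula>
brew bottle prints a bottle do block you can paste into the formula; upload the generated tarball to the server behind your root_url so brew install can fetch it instead of building from source.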

Not able to find curator.yml (elasticsearch-curator) in linux

The official Elasticsearch site says the default config file exists at /home/username/.curator/curator.yml:
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/command-line.html
But there is no such folder.
Also, I tried creating curator.yml and giving its path with the --config option, but it throws an error:
curator --config ./curator.yml
Error: no such option: --config
Installation was done using apt
sudo apt-get update && sudo apt-get install elasticsearch-curator
Help me create a config file, as I want to delete my log indexes.
Please note that the documentation does not say that the file already exists after installation; it says:
If --config and CONFIG.YML are not provided, Curator will look in ~/.curator/curator.yml for the configuration file.
The file must be created by the end user.
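In other words, you create it yourself at the default location (the editor is up to you):
mkdir -p ~/.curator
vi ~/.curator/curator.yml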
Also, if you installed via:
sudo apt-get update && sudo apt-get install elasticsearch-curator
but did not add the official Elastic repository for Curator, then you installed an older version. Please check which version you are running with:
$ curator --version
curator, version 5.4.1
If you do not see the current version (5.4.1 at the time this answer was added), then you do not have the appropriate repository installed.
The official documentation provides an example client configuration file here.
There are also many examples of action files in the examples.
Yes, one needs to create both the curator.yml and the action.yml files.
Since I am on CentOS 7, I happened to install Curator from RPM, and in its default location /opt/elastic-curator I could follow this good (but badly formatted!) blog: https://anchormen.nl/blog/big-data-services/monitoring-aws-redshift-with-elasticsearch/ to get the files as follows (you may modify them according to your needs):
curator.yml
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - <host1>
    - <host2, likewise upto hostN>
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False
logging:
  loglevel: INFO
  logfile: /var/log/curator.log
  logformat: default
  blacklist: []
and an action.yml as follows:
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: rollover
    description: Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
    options:
      disable_action: False
      name: redshift_metrics_daily
      conditions:
        max_age: 1d
      extra_settings:
        index.number_of_shards: 2
        index.number_of_replicas: 1
  2:
    action: rollover
    description: Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
    options:
      disable_action: False
      name: redshift_query_metadata_daily
      conditions:
        max_age: 1d
      extra_settings:
        index.number_of_shards: 2
        index.number_of_replicas: 1
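With a current Curator release (one that supports --config), the two files are then passed on the command line roughly like this (the paths just reuse the /opt/elastic-curator location mentioned above):
curator --config /opt/elastic-curator/curator.yml /opt/elastic-curator/action.yml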

Could not evaluate: undefined class/module Puppet::Util::TagSet

I'm having an issue with the latest Puppet version and a module called vcsdeploy. Unfortunately I'm not familiar with Ruby and its idiosyncrasies, so I'm hoping someone with a little more experience can point me in the right direction.
The module in question can be found here in all its glory. The particular issue I'm experiencing is an error at line 194 in lib/puppet/provider/vcsdeploy/svn.rb: "Could not evaluate: undefined class/module Puppet::Util::TagSet"
For those who don't want to spelunk the source code, here's the code that's causing the error:
valid_options = [ 'path', 'owner', 'group', 'dirmode', 'filemode', 'source', 'user', 'pass', 'name', 'version', 'selrange', 'selrole', 'seltype', 'seluser', 'templates' ]

#resource_copy = {}
debug "creating resource_copy for #{resource[:name]}"
valid_options.each {|option|
  if (option && resource[option.to_sym])
    #resource_copy[option.to_sym] = resource[option.to_sym]
  end
}
I would assume that Puppet::Util::TagSet is used to some degree elsewhere throughout Puppet and its various modules; however, this is the only one that's causing a problem.
Anyone got any pointers that I could use to start this investigation?
More System Information:
Operating System: CentOS 6.5
Installation Method: RPM packages
Foreman Version: 1.5
Puppet Version: 3.5.1
I have also verified that the file tag_set.rb exists at the location:
/usr/lib/ruby/site_ruby/1.8/puppet/util/tag_set.rb
What the module fails to document is that it requires Puppet 3.3, which introduced this piece of code (see the commit).
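Not part of the original answer, but one way to start that investigation is to check whether the Ruby that Puppet actually runs can resolve the constant (the require path mirrors the tag_set.rb location verified above):
# One-off sanity check; run it with the same Ruby interpreter the agent uses
require 'puppet'
puts Puppet.version
require 'puppet/util/tag_set'
puts defined?(Puppet::Util::TagSet) ? "Puppet::Util::TagSet is available" : "Puppet::Util::TagSet is missing"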

Deploying a Git subdirectory in Capistrano

My master branch layout is like this:
/ <-- top level
/client <-- desktop client source files
/server <-- Rails app
What I'd like to do is only pull down the /server directory in my deploy.rb, but I can't seem to find any way to do that. The /client directory is huge, so setting up a hook to copy /server to / won't work very well; it needs to only pull down the Rails app.
Without any dirty forking action, but even dirtier!
In my config/deploy.rb:
set :deploy_subdir, "project/subdir"
Then I added this new strategy to my Capfile:
require 'capistrano/recipes/deploy/strategy/remote_cache'

class RemoteCacheSubdir < Capistrano::Deploy::Strategy::RemoteCache

  private

  def repository_cache_subdir
    if configuration[:deploy_subdir] then
      File.join(repository_cache, configuration[:deploy_subdir])
    else
      repository_cache
    end
  end

  def copy_repository_cache
    logger.trace "copying the cached version to #{configuration[:release_path]}"
    if copy_exclude.empty?
      run "cp -RPp #{repository_cache_subdir} #{configuration[:release_path]} && #{mark}"
    else
      exclusions = copy_exclude.map { |e| "--exclude=\"#{e}\"" }.join(' ')
      run "rsync -lrpt #{exclusions} #{repository_cache_subdir}/* #{configuration[:release_path]} && #{mark}"
    end
  end

end

set :strategy, RemoteCacheSubdir.new(self)
For Capistrano 3.0, I use the following:
In my Capfile:
# Define a new SCM strategy, so we can deploy only a subdirectory of our repo.
module RemoteCacheWithProjectRootStrategy
  def test
    test! " [ -f #{repo_path}/HEAD ] "
  end

  def check
    test! :git, :'ls-remote', repo_url
  end

  def clone
    git :clone, '--mirror', repo_url, repo_path
  end

  def update
    git :remote, :update
  end

  def release
    git :archive, fetch(:branch), fetch(:project_root), '| tar -x -C', release_path, "--strip=#{fetch(:project_root).count('/')+1}"
  end
end
And in my deploy.rb:
# Set up a strategy to deploy only a project directory (not the whole repo)
set :git_strategy, RemoteCacheWithProjectRootStrategy
set :project_root, 'relative/path/from/your/repo'
All the important code is in the strategy release method, which uses git archive to archive only a subdirectory of the repo, then uses the --strip argument to tar to extract the archive at the right level.
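Concretely, using the layout from the question (project_root set to server) and a hypothetical release directory, the release step expands to something like:
git archive master server | tar -x -C /var/www/myapp/releases/20160101000000 --strip=1
git archive emits the tree with the server/ prefix, and --strip=1 removes that leading component so the Rails app lands directly in the release directory.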
UPDATE
As of Capistrano 3.3.3, you can now use the :repo_tree configuration variable, which makes this answer obsolete. For example:
set :repo_url, 'https://example.com/your_repo.git'
set :repo_tree, 'relative/path/from/your/repo' # relative path to project root in repo
See http://capistranorb.com/documentation/getting-started/configuration.
We're also doing this with Capistrano by cloning down the full repository, deleting the unused files and folders, and moving the desired folder up the hierarchy.
deploy.rb
set :repository, "git@github.com:name/project.git"
set :branch, "master"
set :subdir, "server"

after "deploy:update_code", "deploy:checkout_subdir"

namespace :deploy do
  desc "Checkout subdirectory and delete all the other stuff"
  task :checkout_subdir do
    run "mv #{current_release}/#{subdir}/ /tmp && rm -rf #{current_release}/* && mv /tmp/#{subdir}/* #{current_release}"
  end
end
As long as the project doesn't get too big this works pretty well for us, but if you can, create a separate repository for each component and group them together with git submodules.
You can have two git repositories (client and server) and add them to a "super-project" (app). In this "super-project" you can add the two repositories as submodules (check this tutorial).
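A minimal sketch of that super-project setup (repository URLs are placeholders):
git init app && cd app
git submodule add git@github.com:name/client.git client
git submodule add git@github.com:name/server.git server
git commit -m "Add client and server as submodules"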
Another possible solution (a bit more dirty) is to have separate branches for client and server, and then you can pull from the 'server' branch.
There is a solution. Grab crdlo's patch for Capistrano and the Capistrano source from GitHub. Remove your existing Capistrano gem, apply the patch, run setup.rb install, and then you can use his very simple configuration line set :project, "mysubdirectory" to set a subdirectory.
The only gotcha is that apparently GitHub doesn't "support the archive command" ... at least when he wrote it. I'm using my own private git repo over svn and it works fine; I haven't tried it with GitHub, but I imagine if enough people complain they'll add that feature.
Also see if you can get capistrano authors to add this feature into cap at the relevant bug.
For Capistrano 3, based on Thomas Fankhauser's answer:
set :repository, "git@github.com:name/project.git"
set :branch, "master"
set :subdir, "relative_path_to_my/subdir"

namespace :deploy do
  desc "Checkout subdirectory and delete all the other stuff"
  task :checkout_subdir do
    subdir              = fetch(:subdir)
    subdir_last_folder  = File.basename(subdir)
    release_subdir_path = File.join(release_path, subdir)
    tmp_base_folder     = File.join("/tmp", "capistrano_subdir_hack")
    tmp_destination     = File.join(tmp_base_folder, subdir_last_folder)

    cmd = []
    # Settings for my-zsh
    # cmd << "unsetopt nomatch && setopt rmstarsilent"
    # create temporary folder
    cmd << "mkdir -p #{tmp_base_folder}"
    # delete previous temporary files
    cmd << "rm -rf #{tmp_base_folder}/*"
    # move subdir contents to tmp
    cmd << "mv #{release_subdir_path}/ #{tmp_destination}"
    # delete contents inside release
    cmd << "rm -rf #{release_path}/*"
    # move subdir contents to release
    cmd << "mv #{tmp_destination}/* #{release_path}"
    cmd = cmd.join(" && ")

    on roles(:app) do
      within release_path do
        execute cmd
      end
    end
  end
end

after "deploy:updating", "deploy:checkout_subdir"
Unfortunately, git provides no way to do this. Instead, the 'git way' is to have two repositories -- client and server, and clone the one(s) you need.
I created a snippet that works with Capistrano 3.x, based on previous answers and other information found on GitHub:
# Usage:
# 1. Drop this file into lib/capistrano/remote_cache_with_project_root_strategy.rb
# 2. Add the following to your Capfile:
#      require 'capistrano/git'
#      require './lib/capistrano/remote_cache_with_project_root_strategy'
# 3. Add the following to your config/deploy.rb
#      set :git_strategy, RemoteCacheWithProjectRootStrategy
#      set :project_root, 'subdir/path'

# Define a new SCM strategy, so we can deploy only a subdirectory of our repo.
module RemoteCacheWithProjectRootStrategy
  include Capistrano::Git::DefaultStrategy

  def test
    test! " [ -f #{repo_path}/HEAD ] "
  end

  def check
    test! :git, :'ls-remote -h', repo_url
  end

  def clone
    git :clone, '--mirror', repo_url, repo_path
  end

  def update
    git :remote, :update
  end

  def release
    git :archive, fetch(:branch), fetch(:project_root), '| tar -x -C', release_path, "--strip=#{fetch(:project_root).count('/')+1}"
  end
end
It's also available as a Gist on Github.
Don't know if anyone is still interested in this, but just letting you know in case anyone is looking for an answer.
Now we can use :repo_tree:
https://capistranorb.com/documentation/getting-started/configuration/
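For the layout in the original question that boils down to (assuming Capistrano 3.3.3 or later):
set :repo_tree, 'server'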
Looks like it's also not working with codebasehq.com, so I ended up making Capistrano tasks that clean up the mess :-) Maybe there's actually a less hacky way of doing this by overriding some Capistrano tasks...
This has been working for me for a few hours.
# Capistrano assumes that the repository root is Rails.root
namespace :uploads do
  # We have the Rails application in a subdirectory rails_app
  # Capistrano doesn't provide an elegant way to deal with that
  # for the git case. (For subversion it is straightforward.)
  task :mv_rails_app_dir, :roles => :app do
    run "mv #{release_path}/rails_app/* #{release_path}/ "
  end
end

before 'deploy:finalize_update', 'uploads:mv_rails_app_dir'
You might declare a variable for the directory (here rails_app).
Let's see how robust it is. Using "before" is pretty weak.
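A sketch of what that variable could look like, assuming Capistrano 2-style configuration (the variable name is made up):
set :rails_app_dir, "rails_app"

namespace :uploads do
  task :mv_rails_app_dir, :roles => :app do
    run "mv #{release_path}/#{rails_app_dir}/* #{release_path}/"
  end
end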
