Vagrant Provision fails at installing Ruby Gem chef-vault - ruby-on-rails

As the new intern, I'm supposed to get one of our applications running on my local machine (OS X). The application involves a large set of files and uses tools I'm not familiar with, such as Vagrant and Chef.
I was told it should be as easy as cloning the repo, running vagrant up, and viewing the page in my browser, but I've encountered a few problems. When I go into the directory and run vagrant up, the output shows a few questionable things:
Admins-MacBook-Pro:db_archive_chef ahayden$ VAGRANT_LOG=info vagrant up
INFO global: Vagrant version: 2.1.2
INFO global: Ruby version: 2.4.4
INFO global: RubyGems version: 2.6.14.1
INFO global: VAGRANT_LOG="info"
INFO global: VAGRANT_INSTALLER_VERSION="2"
INFO global: VAGRANT_INSTALLER_EMBEDDED_DIR="/opt/vagrant/embedded"
INFO global: VAGRANT_INSTALLER_ENV="1"
INFO global: VAGRANT_EXECUTABLE="/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant"
WARN global: resolv replacement has not been enabled!
INFO global: Plugins:
INFO global: - vagrant-berkshelf = [installed: 5.1.2 constraint: > 0]
INFO global: - virtualbox = [installed: 0.8.6 constraint: > 0]
INFO global: Loading plugins!
INFO global: Loading plugin `vagrant-berkshelf` with default require: `vagrant-berkshelf`
INFO root: Version requirements from Vagrantfile: [">= 1.5"]
INFO root: - Version requirements satisfied!
INFO manager: Registered plugin: berkshelf
INFO global: Loading plugin `virtualbox` with default require: `virtualbox`
/Users/ahayden/.vagrant.d/gems/2.4.4/gems/virtualbox-0.8.6/lib/virtualbox/com/ffi/util.rb:93: warning: key "io" is duplicated and overwritten on line 107
INFO vagrant: `vagrant` invoked: ["up"]
INFO environment: Environment initialized (#<Vagrant::Environment:0x00000001040deee0>)
INFO environment: - cwd: /Users/ahayden/Development/LSS/db_archive_chef
INFO environment: Home path: /Users/ahayden/.vagrant.d
INFO environment: Local data path: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant
INFO environment: Running hook: environment_plugins_loaded
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO root: Version requirements from Vagrantfile: [">= 1.5.0"]
INFO root: - Version requirements satisfied!
INFO loader: Loading configuration in order: [:home, :root]
INFO command: Active machine found with name default. Using provider: virtualbox
INFO environment: Getting machine: default (virtualbox)
INFO environment: Uncached load of machine.
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "--version"]
INFO subprocess: Command not in installer, restoring original environment...
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO loader: Set "2174531280_machine_default" = []
INFO loader: Loading configuration in order: [:home, :root, "2174531280_machine_default"]
INFO box_collection: Box found: bento/ubuntu-14.04 (virtualbox)
INFO environment: Running hook: authenticate_box_url
INFO host: Autodetecting host type for [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>]
INFO host: Detected: darwin!
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 2 hooks defined.
INFO runner: Running action: authenticate_box_url #<Vagrant::Action::Builder:0x00000001030ab348>
INFO loader: Loading configuration in order: [:"2175328800_bento/ubuntu-14.04_virtualbox", :home, :root, "2174531280_machine_default"]
INFO machine: Initializing machine: default
INFO machine: - Provider: VagrantPlugins::ProviderVirtualBox::Provider
INFO machine: - Box: #<Vagrant::Box:0x00000001034acc08>
INFO machine: - Data dir: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"]
INFO subprocess: Command not in installer, restoring original environment...
INFO machine: New machine ID: nil
INFO base: VBoxManage path: VBoxManage
ERROR loader: Unknown config sources: [:"2175328800_bento/ubuntu-14.04_virtualbox"]
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO environment: Getting machine: default (virtualbox)
INFO environment: Returning cached machine: default (virtualbox)
INFO command: With machine: default (#
INFO interface: info: Bringing machine 'default' up with 'virtualbox' provider...
Bringing machine 'default' up with 'virtualbox' provider...
INFO batch_action: Enabling parallelization by default.
INFO batch_action: Disabling parallelization because provider doesn't support it: virtualbox
INFO batch_action: Batch action will parallelize: false
INFO batch_action: Starting action: #<Vagrant::Machine:0x0000000100a51238> up {:destroy_on_error=>true, :install_provider=>false, :parallel=>true, :provision_ignore_sentinel=>false, :provision_types=>nil}
INFO machine: Calling action: up on provider VirtualBox (new VM)
INFO environment: Acquired process lock: dotlock
INFO environment: Released process lock: dotlock
INFO environment: Acquired process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
<Proc:0x000000010157ff60#/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/lib/vagrant/action/warden.rb:94 (lambda)>
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::HandleBox:0x00000001015fc448>
INFO handle_box: Machine already has box. HandleBox will not run.
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Check:0x000000010135cee0>
INFO subprocess: Starting process: ["/usr/local/bin/berks", "--version", "--format", "json"]
INFO subprocess: Command not in installer, restoring original environment...
default: The Berkshelf shelf is at "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default"
INFO prepare_clone: no clone master, not preparing clone snapshot
INFO warden: Calling IN action: #<VagrantPlugins::ProviderVirtualBox::Action::Import:0x0000000100a5add8>
INFO interface: info: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: ==> default: Importing base box 'bento/ubuntu-14.04'...
==> default: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: Progress: 90%
Progress: 90%
==> default: Checking if box 'bento/ubuntu-14.04' is up to date...
INFO downloader: Downloader starting download:
INFO downloader: -- Source: https://vagrantcloud.com/bento/ubuntu-14.04
INFO downloader: -- Destination: /var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi
INFO subprocess: Starting process: ["/opt/vagrant/embedded/bin/curl", "-q", "--fail", "--location", "--max-redirs", "10", "--verbose", "--user-agent", "Vagrant/2.1.2 (+https://www.vagrantup.com; ruby2.4.4)", "-H", "Accept: application/json", "--output", "/var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi", "https://vagrantcloud.com/bento/ubuntu-14.04"]
INFO subprocess: Command in the installer. Specifying DYLD_LIBRARY_PATH...
==> default: Updating Vagrant's Berkshelf...
INFO subprocess: Starting process: ["/usr/local/bin/berks", "vendor", "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default", "--berksfile", "/Users/ahayden/Development/LSS/db_archive_chef/Berksfile"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Resolving cookbook dependencies...
Fetching 'db_archive' from source at .
Using chef-vault (3.1.0)
Using db_archive (0.3.14) from source at .
Using hostsfile (3.0.1)
INFO interface: output: ==> default: Resolving cookbook dependencies...
==> default: Fetching 'db_archive' from source at .
==> default: Using chef-vault (3.1.0)
==> default: Using db_archive (0.3.14) from source at .
==> default: Using hostsfile (3.0.1)
==> default: Vendoring chef-vault (3.1.0) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/chef-vault
==> default: Vendoring db_archive (0.3.14) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/db_archive
==> default: Vendoring hostsfile (3.0.1) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/hostsfile
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Upload:0x000000010171f3e8>
INFO upload: Provisioner does need to upload
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::Provision:0x00000001016de3c0>
INFO provision: Checking provisioner sentinel file...
INFO interface: warn: The cookbook path '/Users/ahayden/Development/LSS/db_archive_chef/cookbooks' doesn't exist. Ignoring...
==> default: Clearing any previously set network interfaces...
INFO network: Searching for matching hostonly network: 172.28.128.1
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: info: ==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: detail: SSH address: 127.0.0.1:2222
INFO interface: detail: default: SSH address: 127.0.0.1:2222
default: SSH address: 127.0.0.1:2222
INFO ssh: Attempting SSH connection...
INFO ssh: Attempting to connect to SSH...
INFO ssh: - Host: 127.0.0.1
INFO ssh: - Port: 2222
INFO ssh: - Username: vagrant
INFO ssh: - Password? false
INFO ssh: - Key Path: ["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH not ready: #<Vagrant::Errors::NetSSHException: An error occurred in the underlying SSH library that Vagrant uses.
The error message is shown below. In many cases, errors from this
library are caused by ssh-agent issues. Try disabling your SSH
agent or removing some keys and try again.
If the problem persists, please report a bug to the net-ssh project.
timeout during server version negotiating>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO guest: Autodetecting host type for [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>]
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xLinux Mint' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'Linux Mint' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'Linux Mint' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep 'ostree=' /proc/cmdline (sudo=false)
INFO ssh: Execute: [ -x /usr/bin/lsb_release ] && /usr/bin/lsb_release -i 2>/dev/null | grep Trisquel (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xelementary' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'elementary' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'elementary' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: uname -s | grep -i 'DragonFly' (sudo=false)
INFO ssh: Execute: cat /etc/pld-release (sudo=false)
INFO ssh: Execute: grep 'Amazon Linux' /etc/os-release (sudo=false)
INFO ssh: Execute: grep 'Fedora release' /etc/redhat-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xkali' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'kali' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'kali' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep Funtoo /etc/gentoo-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xubuntu' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'ubuntu' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'ubuntu' && exit
fi
exit 1
(sudo=false)
INFO guest: Detected: ubuntu!
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO ssh: Inserting key to avoid password: ssh-rsa AAAA/ vagrant
INFO interface: detail:
Inserting generated public key within guest...
INFO interface: detail: default:
default: Inserting generated public key within guest...
default:
default: Inserting generated public key within guest...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: insert_public_key [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "ssh-rsa AAAA/ vagrant"] (ubuntu)
INFO ssh: Execute: mkdir -p ~/.ssh
chmod 0700 ~/.ssh
cat '/tmp/vagrant-insert-pubkey-1532971970' >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
rm -f '/tmp/vagrant-insert-pubkey-1532971970'
exit $result
(sudo=false)
INFO host: Execute capability: set_ssh_key_permissions [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>, #<Pathname:/Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox/private_key>] (darwin)
INFO interface: detail: Removing insecure key from the guest if it's present...
INFO ssh: Execute: if test -f ~/.ssh/authorized_keys; then
grep -v -x -f '/tmp/vagrant-remove-pubkey-1532971970' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.tmp
mv ~/.ssh/authorized_keys.tmp ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
fi
rm -f '/tmp/vagrant-remove-pubkey-1532971970'
exit $result
(sudo=false)
INFO interface: detail: Key inserted! Disconnecting and reconnecting using new SSH key...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO interface: output: Machine booted and ready!
INFO warden: Calling OUT action: #<VagrantPlugins::ProviderVirtualBox::Action::SaneDefaults:0x00000001014560a8>
INFO interface: info: Setting hostname..
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: change_host_name [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "db-archive"] (ubuntu)
INFO ssh: Execute: hostname -f | grep '^db-archive$' (sudo=false)
INFO ssh: Execute: # Set the hostname
echo 'db-archive' > /etc/hostname
hostname -F /etc/hostname
if command -v hostnamectl; then
hostnamectl set-hostname 'db-archive'
fi
# Prepend ourselves to /etc/hosts
grep -w 'db-archive' /etc/hosts || {
if grep -w '^127\.0\.1\.1' /etc/hosts ; then
sed -i'' 's/^127\.0\.1\.1\s.*$/127.0.1.1\tdb-archive\tdb-archive/' /etc/hosts
else
sed -i'' '1i 127.0.1.1\tdb-archive\tdb-archive' /etc/hosts
fi
}
# Update mailname
echo 'db-archive' > /etc/mailname
# Restart hostname services
if test -f /etc/init.d/hostname; then
/etc/init.d/hostname start || true
fi
if test -f /etc/init.d/hostname.sh; then
/etc/init.d/hostname.sh start || true
fi
if test -x /sbin/dhclient ; then
/sbin/dhclient -r
/sbin/dhclient -nw
fi
(sudo=true)
INFO warden: Calling OUT action: #<Vagrant::Action::Builtin::SetHostname:0x00000001014560d0>
INFO synced_folders: Invoking synced folder enable: virtualbox
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "guestproperty", "get", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "/VirtualBox/GuestInfo/OS/Product"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Mounting shared folders...
INFO interface: detail: /vagrant =>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: mount_virtualbox_shared_folder [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "vagrant", "/vagrant", {:guestpath=>"/vagrant", :hostpath=>"/Users/ahayden/Development/LSS/db_archive_chef", :disabled=>false, :__vagrantfile=>true, :owner=>"vagrant", :group=>"vagrant"}] (ubuntu)
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant
fi
(sudo=true)
INFO ssh: Execute: id -u vagrant (sudo=false)
INFO ssh: Execute: getent group vagrant (sudo=false)
INFO ssh: Execute: mkdir -p /etc/chef (sudo=true)
INFO ssh: Execute: mount -t vboxsf -o uid=1000,gid=1000 etc_chef /etc/chef (sudo=true)
INFO ssh: Execute: chown 1000:1000 /etc/chef (sudo=true)
INFO ssh: Execute: if command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/etc/chef
fi
(sudo=true)
INFO provision: Writing provisioning sentinel so we don't provision again
INFO interface: info: Running provisioner: chef_solo...
INFO guest: Execute capability: chef_installed [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24"] (ubuntu)
INFO ssh: Execute: test -x /opt/chef/bin/knife&& /opt/chef/bin/knife --version | grep 'Chef: 12.10.24' (sudo=true)
INFO interface: detail: Installing Chef (12.10.24)...
INFO interface: detail: default: Installing Chef (12.10.24)...
default: Installing Chef (12.10.24)...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: chef_install [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24", "stable", "https://omnitruck.chef.io", {:product=>"chef", :channel=>"stable", :version=>:"12.10.24", :omnibus_url=>"https://omnitruck.chef.io", :force=>false, :download_path=>nil}] (ubuntu)
INFO ssh: Execute: apt-get update -y -qq (sudo=true)
INFO ssh: Execute: apt-get install -y -qq curl (sudo=true)
INFO ssh: Execute: curl -sL https://omnitruck.chef.io/install.sh | bash -s -- -P "chef" -c "stable" -v "12.10.24" (sudo=true)
==> default: Running chef-solo...
==> default: [2018-07-30T17:33:12+00:00] INFO: Forking chef instance to converge...
INFO interface: info: Starting Chef Client, version 12.10.24
==> default: [2018-07-30T17:33:12+00:00] INFO: *** Chef 12.10.24 ***
INFO interface: info: [2018-07-30T17:33:12+00:00] INFO: Platform: x86_64-linux
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Setting the run_list to ["recipe[chef-vault]", "recipe[db_archive::update]", "recipe[db_archive::install_packages]", "recipe[db_archive::install_hostsfile]", "recipe[db_archive::install_nginx]"] from CLI options
==> default: [2018-07-30T17:33:14+00:00] INFO: Starting Chef Run for ahayden
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Running start handlers
INFO interface: info: ==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
INFO interface: info: Installing Cookbook Gems:
INFO interface: info: Running handlers:
[2018-07-30T17:33:15+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 03 seconds
[2018-07-30T17:33:15+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2018-07-30T17:33:15+00:00] ERROR: Expected process to exit with [0], but received '5'
---- Begin output of bundle install ----
STDOUT: Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching gem metadata from https://rubygems.org/..........
Fetching version metadata from https://rubygems.org/..
Resolving dependencies...
Installing chef-vault 3.3.0
Gem::InstallError: chef-vault requires Ruby version >= 2.2.0.
Using bundler 1.11.2
An error occurred while installing chef-vault (3.3.0), and Bundler cannot
continue.
Make sure that `gem install chef-vault -v '3.3.0'` succeeds before bundling.
STDERR:
Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO environment: Released process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO environment: Running hook: environment_unload
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO runner: Running action: environment_unload #<Vagrant::Action::Builder:0x0000000101164c50>
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #<VagrantPlugins::Chef::Provisioner::Base::ChefError: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.>
ERROR vagrant: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
ERROR vagrant: /plugins/provisioners/chef/provisioner/chef_solo.rb:220:in `run_chef_solo'
/plugins/provisioners/chef/provisioner/chef_solo.rb:65:in `provision'
/lib/vagrant/action/builtin/provision.rb:138:in `run_provisioner'
/lib/vagrant/action/warden.rb:95:in `call'
/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant/action/builder.rb:116:in `call'
/lib/vagrant/action/runner.rb:66:in `block in run'
/lib/vagrant/util/busy.rb:19:in `busy'
/lib/vagrant/action/runner.rb:66:in `run'
/lib/vagrant/environment.rb:510:in `hook'
/lib/vagrant/action/builtin/provision.rb:126:in `call'
/lib/vagrant/action/builtin/provision.rb:126:in `block in call'
/lib/vagrant/action/builtin/provision.rb:103:in `each'
/lib/vagrant/action/builtin/provision.rb:103:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/upload.rb:23:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/install.rb:19:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/save.rb:21:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/plugins/providers/virtualbox/action/clear_forwarded_ports.rb:15:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `action
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/batch_action.rb:82:in `block (2 levels) in run'
INFO interface: error: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
I had to omit some things from the backtrace in order to post it...
The first sign of trouble is towards the top: WARN global: resolv replacement has not been enabled!
The next area of concern is util.rb:93: warning: key "io" is duplicated and overwritten on line 107.
Then there are many cases of Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"] followed by INFO subprocess: Command not in installer, restoring original environment... . This happens very many times with VBoxManage and a couple of other times with curl and berks. I think this is the problem.
At the end, it seems to finally fail with a gem install error for chef-vault. It says the gem requires Ruby version >= 2.2.0, which I do have.
Vagrantfile:
VAGRANTFILE_API_VERSION = '2'
Vagrant.require_version '>= 1.5.0'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.hostname = 'db-archive'

  if Vagrant.has_plugin?("vagrant-omnibus")
    config.omnibus.chef_version = 'latest'
  end

  config.vm.box = 'bento/ubuntu-14.04'
  config.vm.network :private_network, type: 'dhcp'
  config.vm.network 'forwarded_port', guest: 80, host: 8080
  config.vm.network 'forwarded_port', guest: 443, host: 8443
  config.vm.synced_folder "#{ENV['HOME']}/Documents/src/db_archive", '/var/www/db_archive'
  config.vm.synced_folder "#{ENV['HOME']}/.chef", '/etc/chef'
  config.berkshelf.enabled = true

  config.vm.provision :chef_solo do |chef|
    chef.channel = 'stable'
    chef.version = '12.10.24'
    chef.environment = 'vagrant'
    chef.environments_path = 'environments'
    chef.run_list = [
      "recipe[chef-vault]",
      "recipe[db_archive::update]",
      "recipe[db_archive::install_packages]",
      "recipe[db_archive::install_hostsfile]",
      "recipe[db_archive::install_nginx]"
    ]
    chef.data_bags_path = 'data_bags'
    chef.node_name = 'ahayden'
  end
end

You are using Chef 12.10.24, which is no longer supported by the latest chef-vault: the chef-vault 3.3.0 gem requires Ruby >= 2.2.0, and the Ruby embedded in that Chef release is older. You'll need to upgrade your version of Chef.
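If you want to confirm the mismatch, you can check the Ruby that ships inside the Chef omnibus package on the guest. A minimal sketch, assuming the box got far enough for Chef 12.10.24 to be installed:
vagrant ssh -c '/opt/chef/embedded/bin/ruby -v'
If that prints a Ruby older than 2.2.0, the chef-vault 3.3.0 gem cannot be installed inside the VM regardless of the Ruby 2.4.4 on your Mac; raising chef.version in the Vagrantfile is the corresponding change.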

In my metadata.rb file, I changed the line depends 'chef-vault' to depends 'chef-vault', '=2.1.1'. Then when I ran vagrant destroy && vagrant up it worked fine.
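For reference, this is roughly what the change and rebuild look like from the shell, assuming metadata.rb sits at the repo root as the Berksfile's "source at ." suggests:
grep chef-vault metadata.rb
# depends 'chef-vault', '=2.1.1'
vagrant destroy && vagrant up
The destroy/up cycle makes Berkshelf re-resolve and vendor the pinned, Chef 12-compatible cookbook before the next Chef run.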

Related

molecule test seems to ignore ansible.cfg's remote_tmp setting

I am trying to use molecule to test a very basic role.
(venv) [red@jumphost docker-ops]$ cat roles/fake_role/tasks/main.yml
---
# tasks file for fake_role
- name: fake_role | debug remote_tmp
  debug:
    msg: "remote_tmp is {{ remote_tmp | default('not_set') }}"
- name: who am i
  shell:
    cmd: whoami
  register: whoami_output
- name: debug who am i
  debug:
    msg: "{{ whoami_output }}"
This is my molecule.yml:
(venv) [red@jumphost docker-ops]$ cat roles/fake_role/molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
# platforms:
#   - name: instance
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    privileged: true
    volume mounts:
      - "sys/fs/cgroup:/sys/fs/cgroup:rw"
    command: "/usr/sbin/init"
provisioner:
  name: ansible
verifier:
  name: ansible
And when I run ansible --version I can see that my ansible.cfg is /etc/ansible/ansible.cfg, and I have set remote_tmp in it.
(venv) [red@jumphost fake_role]$ ansible --version
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
ansible [core 2.11.12]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/red/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/red/GIT/venv/lib64/python3.6/site-packages/ansible
ansible collection location = /home/red/.ansible/collections:/usr/share/ansible/collections
executable location = /home/russell.cecala/GIT/venv/bin/ansible
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.0.3
libyaml = True
(venv) [red@ajumphost fake_role]$ grep remote_tmp /etc/ansible/ansible.cfg
#remote_tmp = ~/.ansible/tmp
remote_tmp = /tmp
When I run ...
(venv) [red@jumphost docker-ops]$ cd roles/fake_role/
(venv) [russell.cecala@jumphost fake_role]$ molecule test
... I get this output ...
... lots of output ...
PLAY [Converge] ****************************************************************
TASK [Include red.fake_role] *****************************************
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
TASK [brightpattern.fake_role : fake_role | debug remote_tmp] ******************
ok: [instance] => {
"msg": "remote_tmp is not_set"
}
TASK [red.fake_role : who am i] **************************************
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.
In some cases, you may have been able to authenticate and did not have permissions on the
target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted
in \"/tmp\", for more error information use -vvv. Failed command was:
( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1668100608.7567627-2234645-21513917172593 `\" && echo ansible-tmp-1668100608.7567627-2234645-21513917172593=\"` echo ~/.ansible/tmp/ansible-tmp-1668100608.7567627-2234645-21513917172593 `\" ), exited with result 1",
"unreachable": true}
PLAY RECAP *********************************************************************
instance : ok=1 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
... a lot more output ...
Why wasn't remote_tmp set to /tmp?
UPDATE:
Here is my new molecule.yml:
(venv) [red@ap-jumphost fake_role]$ cat molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    privileged: true
    volume mounts:
      - "sys/fs/cgroup:/sys/fs/cgroup:rw"
    command: "/usr/sbin/init"
provisioner:
  name: ansible
  config_options:
    defaults:
      remote_tmp: /tmp
verifier:
  name: ansible
But I am still getting the same error:
(venv) [red@ap-jumphost fake_role]$ molecule test
...
INFO Running default > prepare
WARNING Skipping, prepare playbook not configured.
INFO Running default > converge
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the
controller starting with Ansible 2.12. Current version: 3.6.8 (default, Oct 19
2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]. This feature will be
removed from ansible-core in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
PLAY [Converge] ****************************************************************
TASK [Include red.fake_role] *****************************************
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
TASK [red.fake_role : fake_role | debug remote_tmp] ******************
ok: [instance] => {
"msg": "remote_tmp is not_set"
}
TASK [red.fake_role : fake_role | debug ansible_remote_tmp] **********
ok: [instance] => {
"msg": "ansible_remote_tmp is not_set"
}
TASK [red.fake_role : who am i] **************************************
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" && echo ansible-tmp-1668192366.5684752-2515263-14400147623756=\"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" ), exited with result 1", "unreachable": true}
PLAY RECAP *********************************************************************
instance : ok=2 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
WARNING Retrying execution failure 4 of: ansible-playbook --inventory /home/red/.cache/molecule/fake_role/default/inventory --skip-tags molecule-notest,notest /home/red/GIT/docker-ops/roles/fake_role/molecule/default/converge.yml
CRITICAL Ansible return code was 4, command was: ['ansible-playbook', '--inventory', '/home/red/.cache/molecule/fake_role/default/inventory', '--skip-tags', 'molecule-notest,notest', '/home/red/GIT/docker-ops/roles/fake_role/molecule/default/converge.yml']
Easier to read error message:
fatal: [instance]: UNREACHABLE! =>
{"changed": false,
"msg": "Failed to create temporary directory.In some cases, you may have been able to
authenticate and did not have permissions on the target directory. Consider
changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\",
for more error information use -vvv.
Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" && echo ansible-tmp-1668192366.5684752-2515263-14400147623756=\"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" ), exited with result 1", "unreachable": true}
I did happen to notice that the
~/.cache/molecule/fake_role/default/ansible.cfg file does have remote_tmp set.
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto_silent
remote_tmp = /tmp
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
Molecule produces its own ansible.cfg for its own test use, which does not take into account any existing global or local config file.
Depending on your version/configuration, this file is created in either:
molecule/<scenario>/.molecule/ansible.cfg
/home/<user>/.cache/molecule/<role>/<scenario>/ansible.cfg
The easiest way to see where that file is generated and used on your own platform is to run molecule in --debug mode and inspect the output for the ANSIBLE_CONFIG variable in current use.
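For example, something along these lines (a rough sketch; the exact debug output format varies by Molecule version):
molecule --debug test 2>&1 | grep ANSIBLE_CONFIG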
Now don't try to modify that file, as it will be overwritten at some point anyway. Instead, you have to modify your provisioner environment in molecule.yml.
Below is an example adapted from the documentation for your particular case.
provisioner:
  name: ansible
  config_options:
    defaults:
      remote_tmp: /tmp
You can force regeneration of the ansible.cfg cache file (and other Molecule cached/temporary resources) for your scenario by running molecule reset.
Please also pay attention to the note in the linked documentation warning you that some ansible.cfg variables are blacklisted to guarantee Molecule's functioning and will not be taken into account.

Neo4j cannot be started because the database files require upgrading and upgrades are disabled in the configuration

Getting the error "Neo4j cannot be started because the database files require upgrading and upgrades are disabled in the configuration. Please set 'dbms.allow_upgrade' to 'true' in your configuration file" when I try to connect to neo4j through its Java driver.
Despite having set the property dbms.allow_upgrade to true in the neo4j.conf file, nothing changed.
This worked for me: upgrade to the 3.3.1 neo4j Docker image! I tried the following first, but it didn't work for 3.0 either:
docker run \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
--env=NEO4J_dbms_allow_upgrade=true \
--env=NEO4J_dbms_allow_format_migration=true \
neo4j:3.0
===============================================================
fflintstone@OPTIPLEX790 MINGW64 /c/Users/fflintstone
$ docker run \
> --publish=7474:7474 --publish=7687:7687 \
> --volume=$HOME/neo4j/data:/data \
> --volume=$HOME/neo4j/logs:/logs \
> --env=NEO4J_dbms_allow_upgrade=true \
> --env=NEO4J_dbms_allow_format_migration=true \
> neo4j:3.0
Starting Neo4j.
2017-12-26 06:47:03.172+0000 INFO ======== Neo4j 3.0.12 ========
2017-12-26 06:47:03.228+0000 INFO No SSL certificate found, generating a self-signed certificate..
2017-12-26 06:47:04.204+0000 INFO Starting...
2017-12-26 06:47:05.140+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-12-26 06:47:05.560+0000 ERROR Neo4j cannot be started, because the database files require upgrading and upgrades are disabled in configuration. Please set 'dbms.allow_format_migration' to 'true' in your configuration file and try again.
fflintstone@OPTIPLEX790 MINGW64 /c/Users/fflintstone
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
neo4j 3.0 39739226e15b 13 days ago 606MB
fflintstone@OPTIPLEX790 MINGW64 /c/Users/fflintstone
$ docker run --publish=7474:7474 --publish=7687:7687 --volume=$HOME/neo4j/data:/data --volume=$HOME/neo4j/logs:/logs neo4j:3.3.1
Unable to find image 'neo4j:3.3.1' locally
3.3.1: Pulling from library/neo4j
2fdfe1cd78c2: Pull complete
82630fd6e5ba: Pull complete
119d364c885d: Pull complete
46f8fad107ee: Pull complete
fe7f5c604f04: Pull complete
6fd4ca7c99ff: Pull complete
d242a75fec77: Pull complete
Digest: sha256:baeb76f0d4785817c2a3608796ff0104a8f87ed89fe3391ef467eb6f0a1fc40e
Status: Downloaded newer image for neo4j:3.3.1
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2017-12-26 06:51:42.867+0000 WARN Unknown config option: causal_clustering.discovery_listen_address
2017-12-26 06:51:42.872+0000 WARN Unknown config option: causal_clustering.raft_advertised_address
2017-12-26 06:51:42.872+0000 WARN Unknown config option: causal_clustering.raft_listen_address
2017-12-26 06:51:42.872+0000 WARN Unknown config option: ha.host.coordination
2017-12-26 06:51:42.872+0000 WARN Unknown config option: causal_clustering.transaction_advertised_address
2017-12-26 06:51:42.873+0000 WARN Unknown config option: causal_clustering.discovery_advertised_address
2017-12-26 06:51:42.873+0000 WARN Unknown config option: ha.host.data
2017-12-26 06:51:42.874+0000 WARN Unknown config option: causal_clustering.transaction_listen_address
2017-12-26 06:51:42.917+0000 INFO ======== Neo4j 3.3.1 ========
2017-12-26 06:51:42.995+0000 INFO Starting...
2017-12-26 06:51:45.790+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-12-26 06:51:52.936+0000 INFO Started.
2017-12-26 06:51:55.374+0000 INFO Remote interface available at http://localhost:7474/
In my case, the problem was that the configuration file was not the one I thought.
According to this documentation, it should have been under <neo4j-home>/conf/neo4j.conf.
However, running neo4j stop and then neo4j start printed out a couple of key paths.
Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
So, editing the config found under /etc/neo4j worked.
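As a minimal sketch, assuming the same package layout with the config under /etc/neo4j, the change and restart amount to:
echo 'dbms.allow_upgrade=true' | sudo tee -a /etc/neo4j/neo4j.conf
sudo neo4j restart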

Strange behavior using docker

I'm using neo4j with Docker.
By and large, the product is running, but I cannot configure it.
I added some settings in neo4j/conf/neo4j.conf (which are ignored).
This is my launch command:
docker run -e NEO4J_dbms_security_procedures_unrestricted=apoc.\\\* --publish=7474:7474 --publish=7687:7687 --volume=$HOME/arianne/2017/neo4j/data:/data --volume=$HOME/arianne/2017/neo4j/logs:/logs --volume=$HOME/arianne/2017/neo4j/conf:/conf --rm -e NEO4J_AUTH=none --volume=$HOME/arianne/2017/neo4j/plugins:/plugins neo4j:3.2
and this is the loading log ...
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2017-07-05 10:18:27.679+0000 WARN Unknown config option: causal_clustering.discovery_listen_address
2017-07-05 10:18:27.681+0000 WARN Unknown config option: causal_clustering.raft_advertised_address
2017-07-05 10:18:27.681+0000 WARN Unknown config option: causal_clustering.raft_listen_address
2017-07-05 10:18:27.681+0000 WARN Unknown config option: ha.host.coordination
2017-07-05 10:18:27.681+0000 WARN Unknown config option: causal_clustering.transaction_advertised_address
2017-07-05 10:18:27.682+0000 WARN Unknown config option: causal_clustering.discovery_advertised_address
2017-07-05 10:18:27.682+0000 WARN Unknown config option: ha.host.data
2017-07-05 10:18:27.682+0000 WARN Unknown config option: causal_clustering.transaction_listen_address
2017-07-05 10:18:27.717+0000 INFO ======== Neo4j 3.2.1 ========
2017-07-05 10:18:27.918+0000 INFO No SSL certificate found, generating a self-signed certificate..
2017-07-05 10:18:28.759+0000 INFO Starting...
2017-07-05 10:18:30.433+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-07-05 10:18:40.255+0000 INFO Started.
2017-07-05 10:18:42.441+0000 INFO Remote interface available at http://192.168.10.196:7474/
The directories are totally different from those I put on the command line, and the directory /var/lib/neo4j doesn't exist!
Any idea?
Paolo
This works for me:
sudo docker run \
-p 7474:7474 \
-p 7687:7687 \
-p 7473:7473 \
-v $HOME/dockerneo4j/data:/data \
-v $HOME/dockerneo4j/logs:/logs \
-v $HOME/dockerneo4j/import:/import \
-v $HOME/dockerneo4j/conf:/conf \
-v $HOME/dockerneo4j/plugins:/plugins \
neo4j:3.2.1
The config file in dockerneo4j/conf is used (easily tested by changing the database name in there and checking in dockerneo4j/data whether a new database folder is created). The apoc plugins are picked up too.
It is correct (although, granted, confusing) that the log shows /var/lib/neo4j directories; those are the internal ones in your Docker image.
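A quick way to run the check described above, where test.db is just an arbitrary example name:
echo 'dbms.active_database=test.db' >> $HOME/dockerneo4j/conf/neo4j.conf
# restart the container, then look for a new database folder:
ls $HOME/dockerneo4j/data/databases/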
Hope this helps,
Tom
P.S. http://kvangundy.com/wp/set-up-neo4j-and-docker/ has all the information.

GitLab CE docker container keeps crashing at startup

I'm trying to run the GitLab CE edition as a Docker container (gitlab/gitlab-ce) via docker-compose, following the instructions at http://doc.gitlab.com/omnibus/docker.
The problem is that every time I start it with docker-compose up -d, the container crashes/exits after about a minute. I collected all the information that could be useful; there are some Chef-related error messages that I'm not able to decipher. The environment runs inside an Ubuntu Vagrant virtual machine.
I tried to use a different tagged version of the image instead of :latest, but got similar results.
docker-compose.yml relevant snippet:
gitlab:
  image: gitlab/gitlab-ce
  container_name: my_gitlab
  volumes:
    - ./runtime/gitlab/config:/etc/gitlab
    - ./runtime/gitlab/data:/var/opt/gitlab
    - ./runtime/gitlab/logs:/var/log/gitlab
  ports:
    - 443:443
    - 22:22
    - 8082:80
The following is the log file saved in ./runtime/gitlab/logs (the volume for /var/log/gitlab):
# Logfile created on 2016-04-28 08:07:43 +0000 by logger.rb/44203
[2016-04-28T08:07:44+00:00] INFO: Started chef-zero at chefzero://localhost:8889 with repository at /opt/gitlab/embedded
One version per cookbook
[2016-04-28T08:07:44+00:00] INFO: Forking chef instance to converge...
[2016-04-28T08:07:44+00:00] INFO: *** Chef 12.6.0 ***
[2016-04-28T08:07:44+00:00] INFO: Chef-client pid: 36
[2016-04-28T08:07:47+00:00] INFO: HTTP Request Returned 404 Not Found: Object not found: chefzero://localhost:8889/nodes/bcfc5b569532
[2016-04-28T08:07:48+00:00] INFO: Setting the run_list to ["recipe[gitlab]"] from CLI options
[2016-04-28T08:07:48+00:00] INFO: Run List is [recipe[gitlab]]
[2016-04-28T08:07:48+00:00] INFO: Run List expands to [gitlab]
[2016-04-28T08:07:48+00:00] INFO: Starting Chef Run for bcfc5b569532
[2016-04-28T08:07:48+00:00] INFO: Running start handlers
[2016-04-28T08:07:48+00:00] INFO: Start handlers complete.
[2016-04-28T08:07:48+00:00] INFO: HTTP Request Returned 404 Not Found: Object not found:
[2016-04-28T08:07:52+00:00] INFO: Loading cookbooks [gitlab#0.0.1, runit#0.14.2, package#0.0.0]
[2016-04-28T08:07:54+00:00] INFO: directory[/etc/gitlab] owner changed to 0
[2016-04-28T08:07:54+00:00] INFO: directory[/etc/gitlab] group changed to 0
[2016-04-28T08:07:54+00:00] INFO: directory[/etc/gitlab] mode changed to 775
[2016-04-28T08:07:54+00:00] WARN: Cloning resource attributes for directory[/var/opt/gitlab] from prior resource (CHEF-3694)
[2016-04-28T08:07:54+00:00] WARN: Previous directory[/var/opt/gitlab]: /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/recipes/default.rb:43:in `from_file'
[2016-04-28T08:07:54+00:00] WARN: Current directory[/var/opt/gitlab]: /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/recipes/users.rb:24:in `from_file'
[2016-04-28T08:07:54+00:00] WARN: Selected upstart because /sbin/init --version is showing upstart.
[2016-04-28T08:07:54+00:00] WARN: Cloning resource attributes for directory[/etc/sysctl.d] from prior resource (CHEF-3694)
[2016-04-28T08:07:54+00:00] WARN: Previous directory[/etc/sysctl.d]: /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/sysctl.rb:22:in `block in from_file'
[2016-04-28T08:07:54+00:00] WARN: Current directory[/etc/sysctl.d]: /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/sysctl.rb:22:in `block in from_file'
[2016-04-28T08:07:54+00:00] WARN: Cloning resource attributes for file[/etc/sysctl.d/90-postgresql.conf] from prior resource (CHEF-3694)
.
. several similar WARN: log entries
.
[2016-04-28T08:07:55+00:00] INFO: directory[/var/opt/gitlab] owner changed to 0
[2016-04-28T08:07:55+00:00] INFO: directory[/var/opt/gitlab] group changed to 0
[2016-04-28T08:07:55+00:00] INFO: directory[/var/opt/gitlab] mode changed to 755
.
.
.
[2016-04-28T08:07:57+00:00] INFO: template[/var/opt/gitlab/gitlab-rails/etc/rack_attack.rb] owner changed to 0
[2016-04-28T08:07:57+00:00] INFO: template[/var/opt/gitlab/gitlab-rails/etc/rack_attack.rb] group changed to 0
[2016-04-28T08:07:57+00:00] INFO: template[/var/opt/gitlab/gitlab-rails/etc/rack_attack.rb] mode changed to 644
[2016-04-28T08:07:58+00:00] INFO: Running queued delayed notifications before re-raising exception
[2016-04-28T08:07:58+00:00] INFO: template[/var/opt/gitlab/gitlab-rails/etc/gitlab.yml] sending run action to execute[clear the gitlab-rails cache] (delayed)
[2016-04-28T08:09:02+00:00] ERROR: Running exception handlers
[2016-04-28T08:09:02+00:00] ERROR: Exception handlers complete
[2016-04-28T08:09:02+00:00] FATAL: Stacktrace dumped to /opt/gitlab/embedded/cookbooks/cache/chef-stacktrace.out
[2016-04-28T08:09:02+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-04-28T08:09:02+00:00] ERROR: Chef::Exceptions::MultipleFailures
[2016-04-28T08:09:02+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
/opt/gitlab/embedded/bin/chef-client:23:in `<main>'
root@bcfc5b569532:/# tail -f /opt/gitlab/embedded/cookbooks/cache/chef-stacktrace.out
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/lib/chef/local_mode.rb:44:in `with_server_connectivity'
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/lib/chef/application.rb:203:in `run_chef_client'
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/lib/chef/application/client.rb:413:in `block in interval_run_chef_client'
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/lib/chef/application/client.rb:403:in `loop'
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/lib/chef/application/client.rb:403:in `interval_run_chef_client'
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/lib/chef/application/client.rb:393:in `run_application'
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/lib/chef/application.rb:58:in `run'
/opt/gitlab/embedded/lib/ruby/gems/2.1.0/gems/chef-12.6.0/bin/chef-client:26:in `<top (required)>'
/opt/gitlab/embedded/bin/chef-client:23:in `load'
/opt/gitlab/embedded/bin/chef-client:23:in `<main>'
<...here the container terminates and my exec bash shell returns...>
Below is the output from docker logs -f for the container. The log is very long (>12K lines), so I tried to look for lines containing useful info, but I'm not sure I found them all:
Thank you for using GitLab Docker Image!
Current version: gitlab-ce=8.7.0-ce.0
Configure GitLab for your system by editing /etc/gitlab/gitlab.rb file
And restart this container to reload settings.
To do it use docker exec:
docker exec -it gitlab vim /etc/gitlab/gitlab.rb
docker restart gitlab
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
If this container fails to start due to permission problems try to fix it by executing:
docker exec -it gitlab update-permissions
docker restart gitlab
Preparing services...
Starting services...
Configuring GitLab package...
Configuring GitLab...
[2016-04-28T08:02:39+00:00] INFO: GET /organizations/chef/nodes/bcfc5b569532
[2016-04-28T08:02:39+00:00] INFO: #<ChefZero::RestErrorResponse: 404: Object not found: chefzero://localhost:8889/nodes/bcfc5b569532>
.
.
.
/opt/gitlab/embedded/bin/chef-client:23:in `load'
/opt/gitlab/embedded/bin/chef-client:23:in `<main>'
[2016-04-28T08:02:39+00:00] INFO:
--- RESPONSE (404) ---
{
"error": [
"Object not found: chefzero://localhost:8889/nodes/bcfc5b569532"
]
}
--- END RESPONSE ---
.
.
.
...a lot of logs (~12K lines), including some errors like the following one:
.
.
.
--- END RESPONSE ---
init (upstart 1.12.1)
================================================================================
Error executing action `create` on resource 'link[/var/log/gitlab/gitlab-rails/sidekiq.log]'
================================================================================
Errno::EPROTO
-------------
Protocol error @ sys_fail2 - (/var/log/gitlab/sidekiq/current, /var/log/gitlab/gitlab-rails/sidekiq.log)
.
.
.
================================================================================
Error executing action `create` on resource 'link[/var/log/gitlab/gitlab-rails/sidekiq.log]'
================================================================================
Errno::EPROTO
-------------
Protocol error @ sys_fail2 - (/var/log/gitlab/sidekiq/current, /var/log/gitlab/gitlab-rails/sidekiq.log)
Resource Declaration:
---------------------
# In /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/recipes/gitlab-rails.rb
281: link legacy_sidekiq_log_file do
282: to File.join(node['gitlab']['sidekiq']['log_directory'], 'current')
283: not_if { File.exists?(legacy_sidekiq_log_file) }
284: end
285:
Compiled Resource:
------------------
# Declared in /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/recipes/gitlab-rails.rb:281:in `from_file'
link("/var/log/gitlab/gitlab-rails/sidekiq.log") do
action [:create]
retries 0
retry_delay 2
default_guard_interpreter :default
to "/var/log/gitlab/sidekiq/current"
link_type :symbolic
target_file "/var/log/gitlab/gitlab-rails/sidekiq.log"
declared_type :link
cookbook_name "gitlab"
recipe_name "gitlab-rails"
not_if { #code block }
end
<output ends>
My GitLab container was crashing on run too, until I noticed that there was a rights issue (GitLab not having rights on its own files because they were externally replaced, especially the config file gitlab.rb).
This fixed my problem:
docker exec -it my-gitlab-container update-permissions
docker exec -it my-gitlab-container gitlab-ctl reconfigure
docker restart my-gitlab-container
I'm not sure my issue is related to yours, but in my case I wanted to migrate the GitLab volumes to another directory because of space availability. There was a permission issue because I ran:
cp -R /my/old/gitlab /my/new/gitlab
instead of:
cp -a /my/old/gitlab /my/new/gitlab
The -a preserves the attributes, including the permissions, which were problematic for our container.
cheers
sudo chmod g+s /opt/gitlab/data/git-data/repositories/
Where /opt/gitlab/ is the linked docker share

Not able to make Jenkins perforce plugin to work with ssh

I am not quite familiar with Jenkins, but for some reason I am not able to make the Perforce plugin work. I will list the problem and what I have tried, to give a better understanding.
Jenkins Version - 1.561
Perforce Plugin Version - 1.3.27 (I have perforce path configured in Jenkins)
System - Ubuntu 10.04
Problem:
In the Source Code Management section's Project Details (when you try to configure a new job) I get an "Unable to check workspace against depot" error.
P4PORT(hostname:port) - rsh:ssh -q -a -x -l p4ssh -q -x xxx.xxx.com /bin/true
Username - ialok
Password - N.A (Connection to SCM is via key authentication so left it blank)
Workspace(client) - ialok_jenkins
I let Jenkins create workspace and manage its view by checking the checkbox for both "Let Jenkins Create Workspace" and "Let Jenkins Manage Workspace View"
Client View Type is a View Map with the following mapping:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
I have the keys loaded prior to starting Jenkins, and the Jenkins process runs as my user (ialok):
~$ ps aux | grep jenkins
ialok 16608 0.0 0.0 14132 552 ? Ss 11:08 0:00 /usr/bin/daemon --name=ialok --inherit --env=JENKINS_HOME=/var/lib/jenkins --output=/var/log/jenkins/jenkins.log --pidfile=/var/run/jenkins/jenkins.pid -- /usr/bin/java -Djava.awt.headless=true -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080 --ajp13Port=-1
ialok 16609 1.0 13.9 1448716 542156 ? Sl 11:08 1:04 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080 --ajp13Port=-1
Additionally, I used the EnvInject plugin and, under "Prepare an environment for the run", I added the SSH_AGENT_PID, SSH_AUTH_SOCK, P4USER, and P4PORT environment parameters. (I did try without EnvInject but faced the same issue.)
It looks like an authentication problem, as I have double-checked the path to the p4 executable along with the project mapping and the addition of keys to my environment.
Here is the log file indicating a failed run:
Started by user anonymous
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content
P4CONFIG=.perforce
P4PORT=rsh:ssh -q -a -x -l p4ssh -q -x xxx.xxx.com /bin/true
P4USER=ialok
SSH_AGENT_PID=25752
SSH_AUTH_SOCK=/tmp/keyring-7GAS75/ssh
[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building in workspace /var/lib/jenkins/jobs/fin/workspace
Using master perforce client: ialok_jenkins
[workspace] $ /usr/bin/p4 workspace -o ialok_jenkins
Changing P4 Client Root to: /var/lib/jenkins/jobs/fin/workspace
Changing P4 Client View from:
Changing P4 Client View to:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
Saving new client ialok_jenkins
[workspace] $ /usr/bin/p4 -s client -i
Caught exception communicating with perforce. TCP receive failed. read: socket: Connection reset by peer
For Command: /usr/bin/p4 -s client -i
With Data:
===================
Client: ialok_jenkins
Description:
Root: /var/lib/jenkins/jobs/fin/workspace
Options: noallwrite clobber nocompress unlocked nomodtime rmdir
LineEnd: local
View:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
===================
com.tek42.perforce.PerforceException: TCP receive failed. read: socket: Connection reset by peer
For Command: /usr/bin/p4 -s client -i
With Data:
===================
Client: ialok_jenkins
Description:
Root: /var/lib/jenkins/jobs/fin/workspace
Options: noallwrite clobber nocompress unlocked nomodtime rmdir
LineEnd: local
View:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
===================
at com.tek42.perforce.parse.AbstractPerforceTemplate.saveToPerforce(AbstractPerforceTemplate.java:270)
at com.tek42.perforce.parse.Workspaces.saveWorkspace(Workspaces.java:77)
at hudson.plugins.perforce.PerforceSCM.saveWorkspaceIfDirty(PerforceSCM.java:1787)
at hudson.plugins.perforce.PerforceSCM.checkout(PerforceSCM.java:895)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1251)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:513)
at hudson.model.Run.execute(Run.java:1709)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
ERROR: Unable to communicate with perforce. TCP receive failed. read: socket: Connection reset by peer
For Command: /usr/bin/p4 -s client -i
With Data:
===================
Client: ialok_jenkins
Description:
Root: /var/lib/jenkins/jobs/fin/workspace
Options: noallwrite clobber nocompress unlocked nomodtime rmdir
LineEnd: local
View:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
===================
Finished: FAILURE
The P4PORT is typically of the form 'hostname:port'. Examples would be:
workshop.perforce.com:1666
myserver.mycompany.net:2500
Here's some docs: http://www.perforce.com/perforce/doc.current/manuals/cmdref/P4PORT.html
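If you stick with the rsh-over-ssh style P4PORT from the question, it may also be worth verifying that it works outside Jenkins with the same user and loaded keys; a rough sketch:
P4PORT='rsh:ssh -q -a -x -l p4ssh -q -x xxx.xxx.com /bin/true' p4 info
If that fails the same way, the problem is the ssh/rsh transport itself rather than the Jenkins plugin.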
