molecule test seems to ignore ansible.cfg's remote_tmp setting - docker

I am trying to use molecule to test a very basic role.
(venv) [red@jumphost docker-ops]$ cat roles/fake_role/tasks/main.yml
---
# tasks file for fake_role
- name: fake_role | debug remote_tmp
  debug:
    msg: "remote_tmp is {{ remote_tmp | default('not_set') }}"

- name: who am i
  shell:
    cmd: whoami
  register: whoami_output

- name: debug who am i
  debug:
    msg: "{{ whoami_output }}"
This is my molecule.yml:
(venv) [red@jumphost docker-ops]$ cat roles/fake_role/molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
# platforms:
#   - name: instance
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    privileged: true
    volume mounts:
      - "sys/fs/cgroup:/sys/fs/cgroup:rw"
    command: "/usr/sbin/init"
provisioner:
  name: ansible
verifier:
  name: ansible
When I run ansible --version I can see that my config file is /etc/ansible/ansible.cfg, and I have set remote_tmp in it.
(venv) [red@jumphost fake_role]$ ansible --version
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
ansible [core 2.11.12]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/red/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/red/GIT/venv/lib64/python3.6/site-packages/ansible
ansible collection location = /home/red/.ansible/collections:/usr/share/ansible/collections
executable location = /home/russell.cecala/GIT/venv/bin/ansible
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.0.3
libyaml = True
(venv) [red@ajumphost fake_role]$ grep remote_tmp /etc/ansible/ansible.cfg
#remote_tmp = ~/.ansible/tmp
remote_tmp = /tmp
When I run ...
(venv) [red@jumphost docker-ops]$ cd roles/fake_role/
(venv) [russell.cecala@jumphost fake_role]$ molecule test
... I get this output ...
... lots of output ...
PLAY [Converge] ****************************************************************
TASK [Include red.fake_role] *****************************************
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
TASK [brightpattern.fake_role : fake_role | debug remote_tmp] ******************
ok: [instance] => {
"msg": "remote_tmp is not_set"
}
TASK [red.fake_role : who am i] **************************************
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.
In some cases, you may have been able to authenticate and did not have permissions on the
target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted
in \"/tmp\", for more error information use -vvv. Failed command was:
( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1668100608.7567627-2234645-21513917172593 `\" && echo ansible-tmp-1668100608.7567627-2234645-21513917172593=\"` echo ~/.ansible/tmp/ansible-tmp-1668100608.7567627-2234645-21513917172593 `\" ), exited with result 1",
"unreachable": true}
PLAY RECAP *********************************************************************
instance : ok=1 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
... a lot more output ...
Why wasn't remote_tmp set to /tmp?
UPDATE:
Here is my new molecule.yml:
(venv) [red@ap-jumphost fake_role]$ cat molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    privileged: true
    volume mounts:
      - "sys/fs/cgroup:/sys/fs/cgroup:rw"
    command: "/usr/sbin/init"
provisioner:
  name: ansible
  config_options:
    defaults:
      remote_tmp: /tmp
verifier:
  name: ansible
But I am still getting the same error:
(venv) [red@ap-jumphost fake_role]$ molecule test
...
INFO Running default > prepare
WARNING Skipping, prepare playbook not configured.
INFO Running default > converge
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the
controller starting with Ansible 2.12. Current version: 3.6.8 (default, Oct 19
2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]. This feature will be
removed from ansible-core in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
PLAY [Converge] ****************************************************************
TASK [Include red.fake_role] *****************************************
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
TASK [red.fake_role : fake_role | debug remote_tmp] ******************
ok: [instance] => {
"msg": "remote_tmp is not_set"
}
TASK [red.fake_role : fake_role | debug ansible_remote_tmp] **********
ok: [instance] => {
"msg": "ansible_remote_tmp is not_set"
}
TASK [red.fake_role : who am i] **************************************
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" && echo ansible-tmp-1668192366.5684752-2515263-14400147623756=\"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" ), exited with result 1", "unreachable": true}
PLAY RECAP *********************************************************************
instance : ok=2 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
WARNING Retrying execution failure 4 of: ansible-playbook --inventory /home/red/.cache/molecule/fake_role/default/inventory --skip-tags molecule-notest,notest /home/red/GIT/docker-ops/roles/fake_role/molecule/default/converge.yml
CRITICAL Ansible return code was 4, command was: ['ansible-playbook', '--inventory', '/home/red/.cache/molecule/fake_role/default/inventory', '--skip-tags', 'molecule-notest,notest', '/home/red/GIT/docker-ops/roles/fake_role/molecule/default/converge.yml']
Easier to read error message:
fatal: [instance]: UNREACHABLE! =>
{"changed": false,
"msg": "Failed to create temporary directory.In some cases, you may have been able to
authenticate and did not have permissions on the target directory. Consider
changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\",
for more error information use -vvv.
Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" && echo ansible-tmp-1668192366.5684752-2515263-14400147623756=\"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" ), exited with result 1", "unreachable": true}
I did happen to notice that the
~/.cache/molecule/fake_role/default/ansible.cfg file does have remote_tmp set.
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto_silent
remote_tmp = /tmp
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r

Molecule generates its own ansible.cfg for its test runs, and that file does not take any existing global or local config file into account.
Depending on your version/configuration, this file is created either in:
molecule/<scenario>/.molecule/ansible.cfg
or in:
/home/<user>/.cache/molecule/<role>/<scenario>/ansible.cfg
The easiest way to see where that file is generated and used on your platform is to run molecule in --debug mode and inspect the output for the ANSIBLE_CONFIG variable currently in use.
Don't try to modify that file: it will be overwritten at some point anyway. Instead, you have to modify the provisioner configuration in molecule.yml.
Below is an example adapted from the documentation for your particular case.
provisioner:
  name: ansible
  config_options:
    defaults:
      remote_tmp: /tmp
You can force regeneration of the cached ansible.cfg (and other Molecule cached/temporary resources) for your scenario by running molecule reset.
Also note the warning in the linked documentation that some ansible.cfg variables are blacklisted to keep Molecule working, and those will not be taken into account.
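If you would rather not touch the generated ansible.cfg contents at all, Molecule's provisioner also accepts an env mapping whose variables are exported whenever Molecule invokes ansible-playbook. A minimal sketch, assuming your Molecule version supports the provisioner env key and using the ANSIBLE_REMOTE_TMP environment variable (which maps to remote_tmp):
provisioner:
  name: ansible
  env:
    # Exported for every ansible-playbook run Molecule makes; equivalent to
    # remote_tmp = /tmp under [defaults] in the generated ansible.cfg.
    ANSIBLE_REMOTE_TMP: /tmp
Also note that in the second run above the failed command already uses /tmp (mkdir -p "/tmp" && ...), so the config_options change was picked up; if the error persists, the cause is more likely permissions or the filesystem inside the container than the Ansible configuration.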

Related

Creating file via ansible directly in container

I want to create a file directly in a container directory.
I created the directory first:
- name: create private in container
  ansible.builtin.file:
    path: playcontainer:/etc/ssl/private/
    state: directory
    mode: 0755
But it doesn't let me create a file in that directory:
- name: openssl key
  openssl_privatekey:
    path: playcontainer:/etc/ssl/private/playkey.key
    size: "{{ key_size }}"
    type: "{{ key_type }}"
What am I missing?
Here is a from-scratch, complete example of interacting with a container from Ansible.
Please note that this is not always what you want to do. In this specific case, unless you are testing an Ansible role for example, the key should be written inside the image at build time in your Dockerfile, or bind mounted from the host when the container starts. You should not mess with a container's filesystem once it is running in production.
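For the bind-mount route just mentioned, a rough sketch using the community.docker collection (the module, container name and host path here are illustrative assumptions, not part of the original setup):
- name: Start a container with a pre-generated key bind mounted from the host
  community.docker.docker_container:
    name: so_example_mounted
    image: nginx:latest
    state: started
    volumes:
      # host path : container path : mode -- the key must already exist on the Docker host
      - /srv/keys/playkey.key:/etc/ssl/private/playkey.key:ro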
First we create a container for our test:
docker run -d --rm --name so_example python:latest sleep infinity
Now we need an inventory to target that container (inventories/default/main.yml)
---
all:
  vars:
    ansible_connection: docker
  hosts:
    so_example:
Finally, a test playbook.yml to achieve your goal:
---
- hosts: all
  gather_facts: false
  vars:
    key_path: /etc/ssl/private
    key_size: 4096
    key_type: RSA
  tasks:
    - name: Make sure package requirements are met
      apt:
        name: python3-pip
        state: present

    - name: Make sure python requirements are met
      pip:
        name: cryptography
        state: present

    - name: Create private directory
      file:
        path: "{{ key_path }}"
        state: directory
        owner: root
        group: root
        mode: 0750

    - name: Create a key
      openssl_privatekey:
        path: "{{ key_path }}/playkey.key"
        size: "{{ key_size }}"
        type: "{{ key_type }}"
Running the playbook gives:
$ ansible-playbook -i inventories/default/ playbook.yml
PLAY [all] *****************************************************************************************************************************************************************************************
TASK [Make sure package requirements are met] ******************************************************************************************************************************************************
changed: [so_example]
TASK [Make sure python requirements are met] *******************************************************************************************************************************************************
changed: [so_example]
TASK [Create private directory] ********************************************************************************************************************************************************************
changed: [so_example]
TASK [Create a key] ********************************************************************************************************************************************************************************
changed: [so_example]
PLAY RECAP *****************************************************************************************************************************************************************************************
so_example : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
We can now check that the file is there
$ docker exec so_example ls -l /etc/ssl/private
total 5
-rw------- 1 root root 3243 Sep 15 13:28 playkey.key
$ docker exec so_example head -2 /etc/ssl/private/playkey.key
-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA6xrz5kQuXbd59Bq0fqnwJ+dhkcHWCMh4sZO6UNCfodve7JP0
Clean-up:
docker stop so_example

How do I fix "KeyError: 'getpwuid(): uid not found: 1000'" when running ansible-playbook from jenkins

I have this Jenkins pipeline that I use to execute ansible-playbook:
pipeline {
    agent {
        docker {
            image 'gableroux/ansible'
            args '-i --entrypoint='
        }
    }
    stages {
        stage('Setup parameters') {
            steps {
                script {
                    properties([
                        parameters([
                            string( defaultValue: 'hosts', name: 'INVENTORY', trim: true ),
                            string( defaultValue: '\'*\'', name: 'LIMIT', trim: true ),
                            string( defaultValue: 'shell', name: 'PLAYBOOK', trim: true ),
                            string( defaultValue: '--list-hosts', name: 'EXTRA_PARAMS', trim: true )
                        ])
                    ])
                }
            }
        }
        stage('Execute Ansible Playbook.') {
            steps {
                script {
                    env.DEFAULT_LOCAL_TMP = env.WORKSPACE_TMP
                    env.HOME = env.WORKSPACE
                    sh """
                    ansible-playbook -i ${INVENTORY} -l "${LIMIT}" ${PLAYBOOK} ${EXTRA_PARAMS}
                    """
                }
            }
        }
    }
}
I pass it these parameters:
INVENTORY -> ",localhost"
LIMIT -> ' '
PLAYBOOK -> 'the_playbook.yml'
EXTRA_PARAMS -> -vvv --connection=local --user root
The contents of the_playbook.yml are:
---
- name: "Playing with Ansible and Git"
  hosts: localhost
  connection: local
  tasks:
    - name: "just execute a ls -lrt command"
      shell: "ls -lrt"
      register: "output"
    - debug: var=output.stdout_lines
When my pipeline runs I get this error message:
+ ansible-playbook -i ,localhost -l ' ' the_playbook.yml -vvv '--connection=local' --user root
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the
controller starting with Ansible 2.12. Current version: 3.7.12 (default, Sep 8
2021, 01:55:52) [GCC 10.3.1 20210424]. This feature will be removed from
ansible-core in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ansible-playbook [core 2.11.5]
config file = /var/jenkins_home/workspace/run_ansibleplaybook/ansible.cfg
configured module search path = ['/var/jenkins_home/workspace/run_ansibleplaybook/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
ansible collection location = /var/jenkins_home/workspace/run_ansibleplaybook/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.12 (default, Sep 8 2021, 01:55:52) [GCC 10.3.1 20210424]
jinja version = 3.0.1
libyaml = False
Using /var/jenkins_home/workspace/run_ansibleplaybook/ansible.cfg as config file
Parsed ,localhost inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: the_playbook.yml *****************************************************
1 plays in the_playbook.yml
PLAY [Playing with Ansible and Git] ********************************************
TASK [Gathering Facts] *********************************************************
task path: /var/jenkins_home/workspace/run_ansibleplaybook/the_playbook.yml:2
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/usr/local/lib/python3.7/site-packages/ansible/executor/task_executor.py", line 532, in _execute
self._connection = self._get_connection(cvars, templar)
File "/usr/local/lib/python3.7/site-packages/ansible/executor/task_executor.py", line 874, in _get_connection
ansible_playbook_pid=to_text(os.getppid())
File "/usr/local/lib/python3.7/site-packages/ansible/plugins/loader.py", line 837, in get_with_context
obj.__init__(instance, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/ansible/plugins/connection/local.py", line 50, in __init__
self.default_user = getpass.getuser()
File "/usr/local/lib/python3.7/getpass.py", line 169, in getuser
return pwd.getpwuid(os.getuid())[0]
KeyError: 'getpwuid(): uid not found: 1000'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
What can I do to fix it?
Try the following.
sh '''
echo "tempuser:x:$(id -u):$(id -g):,,,:${HOME}:/bin/bash" >> /etc/passwd
echo "tempuser:x:$(id -G | cut -d' ' -f 2)" >> /etc/group
'''
sh """
ansible-playbook -i ${INVENTORY} -l "${LIMIT}" ${PLAYBOOK} ${EXTRA_PARAMS}
"""
Alternatively, you can try creating a user with UID 1000 in the base image. Refer to this for more information.

Failing to start nginx container when volumes is used (using ansible and docker-compose)

I am trying to start an nginx container on a remote machine using Ansible with docker-compose.
Whenever I include nginx.conf in the volumes, I get an error I do not understand: the container is created but never starts.
MACHINE-1
Command to run the playbook: ansible-playbook -v nginx-playbook.yml -l ubuntu_node_1 -u root
my playbook:
- name: nginx-docker_compose
  hosts: all
  gather_facts: yes
  become: yes
  tasks:
    - community.general.docker_compose:
        project_name: nginx
        definition:
          version: '2'
          services:
            web:
              image: nginx:latest
              volumes:
                - ./vars/nginx.conf:/etc/nginx/nginx.conf:ro
              ports:
                - "8080:80"
[EDITED]
Here is the error:
Using /etc/ansible/ansible.cfg as config file
PLAY [nginx-docker_compose] ********************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 172.31.15.176 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior
Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. This feature will be removed in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [172.31.15.176]
TASK [community.general.docker_compose] ********************************************************************************************************************************
fatal: [172.31.15.176]: FAILED! => {"changed": false, "errors": [], "module_stderr": "Recreating nginx_web_1 ... \n\u001b[1A\u001b[2K\nRecreating nginx_web_1 ... \n\u001b[1B", "module_stdout": "", "msg": "Error starting project Encountered errors while bringing up the project."}
PLAY RECAP *************************************************************************************************************************************************************
172.31.15.176 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
[root@ip-172-31-12-130 docker_server]# ansible-playbook -v nginx-playbook.yml -l ubuntu_node_1 -u root
Using /etc/ansible/ansible.cfg as config file
PLAY [nginx-docker_compose] ********************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 172.31.15.176 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior
Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. This feature will be removed in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [172.31.15.176]
TASK [community.general.docker_compose] ********************************************************************************************************************************
fatal: [172.31.15.176]: FAILED! => {"changed": false, "errors": [], "module_stderr": "Recreating 9b102bbf98c2_nginx_web_1 ... \n\u001b[1A\u001b[2K\nRecreating 9b102bbf98c2_nginx_web_1 ... \n\u001b[1B", "module_stdout": "", "msg": "Error starting project Encountered errors while bringing up the project."}
PLAY RECAP *************************************************************************************************************************************************************
172.31.15.176 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
NOTE: When I run the nginx container directly with docker-compose, using the same config, on MACHINE-2, it works.
I believe there is some permission issue happening when executing the playbook from MACHINE-1 against MACHINE-2, but I cannot figure it out.
It works now. Thanks to @mdaniel.
Things I changed:
I used the full path in the playbook (/home/some_more_folders/nginx.conf) and copied the same file, with the same directory structure, to the destination machine.
Still open questions:
Any idea why it is necessary to copy the file (such as nginx.conf) to the destination machine?
How can this manual process of copying config files to the destination machine for docker-compose be automated? (See the sketch below.)
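As for the open questions: docker-compose resolves bind-mount source paths on the machine where the Docker daemon runs, not on the Ansible control node, which is why the file has to exist on the destination machine. A sketch of automating that copy (the remote path /opt/nginx/nginx.conf and the local vars/nginx.conf location are assumptions) is to push the file with the copy module before calling docker_compose:
- name: nginx-docker_compose
  hosts: all
  become: yes
  tasks:
    - name: Copy nginx.conf to the target machine
      copy:
        src: vars/nginx.conf          # on the Ansible control node
        dest: /opt/nginx/nginx.conf   # on the remote Docker host
        mode: "0644"

    - community.general.docker_compose:
        project_name: nginx
        definition:
          version: '2'
          services:
            web:
              image: nginx:latest
              volumes:
                - /opt/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
              ports:
                - "8080:80"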

Ansible Skipping Docker Build

I'm setting up Ansible to learn about it, so this could be a very simple mistake, but I can't find the answer anywhere. When I run ansible-playbook it simply skips the task, with the following output:
ansible-playbook -i hosts simple-devops-image.yml --check
PLAY [all] ***********************************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host 127.0.0.1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more
information.
ok: [127.0.0.1]
TASK [build docker image using war file] *****************************************************************************************************************************************************************************************************************************************************
skipping: [127.0.0.1]
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************************************
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
My .yml playbook file:
---
- hosts: all
  become: yes
  tasks:
    - name: build docker image using war file
      command: docker build -t simple-devops-image .
      args:
        chdir: /usr/local/src
My hosts file:
[localhost]
127.0.0.1 ansible_connection=local
The command module is skipped when executing in check mode. Remove --check from the ansible-playbook command to actually build the docker image.
Here is a note from the doc:
Check mode is supported when passing creates or removes. If running in check mode and either of these are specified, the module will check for the existence of the file and report the correct changed status. If these are not supplied, the task will be skipped.
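If you want the task to report a meaningful status under --check instead of being skipped, one option following that note is to pass creates. A minimal sketch with a hypothetical marker file (something else in your play would have to create it after a successful build):
- name: build docker image using war file
  command: docker build -t simple-devops-image .
  args:
    chdir: /usr/local/src
    # Hypothetical marker file: with creates set, check mode reports
    # changed/ok based on whether this file exists, instead of skipping.
    creates: /usr/local/src/.simple-devops-image.built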

Vagrant Provision fails at installing Ruby Gem chef-vault

As the new intern, I'm supposed to get one of our applications running on my local machine (OS X). The application involves a large set of files and uses frameworks I am not familiar with, such as Vagrant and Chef.
I was told it should be as easy as cloning the repo, running vagrant up, and viewing the page in my browser, but I've encountered a few problems. When I go into the directory and run vagrant up, it shows a few questionable things:
Admins-MacBook-Pro:db_archive_chef ahayden$ VAGRANT_LOG=info vagrant up
INFO global: Vagrant version: 2.1.2
INFO global: Ruby version: 2.4.4
INFO global: RubyGems version: 2.6.14.1
INFO global: VAGRANT_LOG="info"
INFO global: VAGRANT_INSTALLER_VERSION="2"
INFO global: VAGRANT_INSTALLER_EMBEDDED_DIR="/opt/vagrant/embedded"
INFO global: VAGRANT_INSTALLER_ENV="1"
INFO global: VAGRANT_EXECUTABLE="/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant"
WARN global: resolv replacement has not been enabled!
INFO global: Plugins:
INFO global: - vagrant-berkshelf = [installed: 5.1.2 constraint: > 0]
INFO global: - virtualbox = [installed: 0.8.6 constraint: > 0]
INFO global: Loading plugins!
INFO global: Loading plugin `vagrant-berkshelf` with default require: `vagrant-berkshelf`
INFO root: Version requirements from Vagrantfile: [">= 1.5"]
INFO root: - Version requirements satisfied!
INFO manager: Registered plugin: berkshelf
INFO global: Loading plugin `virtualbox` with default require: `virtualbox`
/Users/ahayden/.vagrant.d/gems/2.4.4/gems/virtualbox-0.8.6/lib/virtualbox/com/ffi/util.rb:93: warning: key "io" is duplicated and overwritten on line 107
INFO vagrant: `vagrant` invoked: ["up"]
INFO environment: Environment initialized (#<Vagrant::Environment:0x00000001040deee0>)
INFO environment: - cwd: /Users/ahayden/Development/LSS/db_archive_chef
INFO environment: Home path: /Users/ahayden/.vagrant.d
INFO environment: Local data path: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant
INFO environment: Running hook: environment_plugins_loaded
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO root: Version requirements from Vagrantfile: [">= 1.5.0"]
INFO root: - Version requirements satisfied!
INFO loader: Loading configuration in order: [:home, :root]
INFO command: Active machine found with name default. Using provider: virtualbox
INFO environment: Getting machine: default (virtualbox)
INFO environment: Uncached load of machine.
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "--version"]
INFO subprocess: Command not in installer, restoring original environment...
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO loader: Set "2174531280_machine_default" = []
INFO loader: Loading configuration in order: [:home, :root, "2174531280_machine_default"]
INFO box_collection: Box found: bento/ubuntu-14.04 (virtualbox)
INFO environment: Running hook: authenticate_box_url
INFO host: Autodetecting host type for [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>]
INFO host: Detected: darwin!
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 2 hooks defined.
INFO runner: Running action: authenticate_box_url #<Vagrant::Action::Builder:0x00000001030ab348>
INFO loader: Loading configuration in order: [:"2175328800_bento/ubuntu-14.04_virtualbox", :home, :root, "2174531280_machine_default"]
INFO machine: Initializing machine: default
INFO machine: - Provider: VagrantPlugins::ProviderVirtualBox::Provider
INFO machine: - Box: #<Vagrant::Box:0x00000001034acc08>
INFO machine: - Data dir: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"]
INFO subprocess: Command not in installer, restoring original environment...
INFO machine: New machine ID: nil
INFO base: VBoxManage path: VBoxManage
ERROR loader: Unknown config sources: [:"2175328800_bento/ubuntu-14.04_virtualbox"]
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO environment: Getting machine: default (virtualbox)
INFO environment: Returning cached machine: default (virtualbox)
INFO command: With machine: default (#
INFO interface: info: Bringing machine 'default' up with 'virtualbox' provider...
Bringing machine 'default' up with 'virtualbox' provider...
INFO batch_action: Enabling parallelization by default.
INFO batch_action: Disabling parallelization because provider doesn't support it: virtualbox
INFO batch_action: Batch action will parallelize: false
INFO batch_action: Starting action: #<Vagrant::Machine:0x0000000100a51238> up {:destroy_on_error=>true, :install_provider=>false, :parallel=>true, :provision_ignore_sentinel=>false, :provision_types=>nil}
INFO machine: Calling action: up on provider VirtualBox (new VM)
INFO environment: Acquired process lock: dotlock
INFO environment: Released process lock: dotlock
INFO environment: Acquired process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
<Proc:0x000000010157ff60#/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/lib/vagrant/action/warden.rb:94 (lambda)>
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::HandleBox:0x00000001015fc448>
INFO handle_box: Machine already has box. HandleBox will not run.
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Check:0x000000010135cee0>
INFO subprocess: Starting process: ["/usr/local/bin/berks", "--version", "--format", "json"]
INFO subprocess: Command not in installer, restoring original environment...
default: The Berkshelf shelf is at "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default"
INFO prepare_clone: no clone master, not preparing clone snapshot
INFO warden: Calling IN action: #<VagrantPlugins::ProviderVirtualBox::Action::Import:0x0000000100a5add8>
INFO interface: info: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: ==> default: Importing base box 'bento/ubuntu-14.04'...
==> default: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: Progress: 90%
Progress: 90%
==> default: Checking if box 'bento/ubuntu-14.04' is up to date...
INFO downloader: Downloader starting download:
INFO downloader: -- Source: https://vagrantcloud.com/bento/ubuntu-14.04
INFO downloader: -- Destination: /var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi
INFO subprocess: Starting process: ["/opt/vagrant/embedded/bin/curl", "-q", "--fail", "--location", "--max-redirs", "10", "--verbose", "--user-agent", "Vagrant/2.1.2 (+https://www.vagrantup.com; ruby2.4.4)", "-H", "Accept: application/json", "--output", "/var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi", "https://vagrantcloud.com/bento/ubuntu-14.04"]
INFO subprocess: Command in the installer. Specifying DYLD_LIBRARY_PATH...
==> default: Updating Vagrant's Berkshelf...
INFO subprocess: Starting process: ["/usr/local/bin/berks", "vendor", "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default", "--berksfile", "/Users/ahayden/Development/LSS/db_archive_chef/Berksfile"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Resolving cookbook dependencies...
Fetching 'db_archive' from source at .
Using chef-vault (3.1.0)
Using db_archive (0.3.14) from source at .
Using hostsfile (3.0.1)
INFO interface: output: ==> default: Resolving cookbook dependencies...
==> default: Fetching 'db_archive' from source at .
==> default: Using chef-vault (3.1.0)
==> default: Using db_archive (0.3.14) from source at .
==> default: Using hostsfile (3.0.1)
==> default: Vendoring chef-vault (3.1.0) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/chef-vault
==> default: Vendoring db_archive (0.3.14) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/db_archive
==> default: Vendoring hostsfile (3.0.1) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/hostsfile
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Upload:0x000000010171f3e8>
INFO upload: Provisioner does need to upload
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::Provision:0x00000001016de3c0>
INFO provision: Checking provisioner sentinel file...
INFO interface: warn: The cookbook path '/Users/ahayden/Development/LSS/db_archive_chef/cookbooks' doesn't exist. Ignoring...
==> default: Clearing any previously set network interfaces...
INFO network: Searching for matching hostonly network: 172.28.128.1
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: info: ==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: detail: SSH address: 127.0.0.1:2222
INFO interface: detail: default: SSH address: 127.0.0.1:2222
default: SSH address: 127.0.0.1:2222
INFO ssh: Attempting SSH connection...
INFO ssh: Attempting to connect to SSH...
INFO ssh: - Host: 127.0.0.1
INFO ssh: - Port: 2222
INFO ssh: - Username: vagrant
INFO ssh: - Password? false
INFO ssh: - Key Path: ["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH not ready: #<Vagrant::Errors::NetSSHException: An error occurred in the underlying SSH library that Vagrant uses.
The error message is shown below. In many cases, errors from this
library are caused by ssh-agent issues. Try disabling your SSH
agent or removing some keys and try again.
If the problem persists, please report a bug to the net-ssh project.
timeout during server version negotiating>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO guest: Autodetecting host type for [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>]
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xLinux Mint' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'Linux Mint' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'Linux Mint' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep 'ostree=' /proc/cmdline (sudo=false)
INFO ssh: Execute: [ -x /usr/bin/lsb_release ] && /usr/bin/lsb_release -i 2>/dev/null | grep Trisquel (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xelementary' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'elementary' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'elementary' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: uname -s | grep -i 'DragonFly' (sudo=false)
INFO ssh: Execute: cat /etc/pld-release (sudo=false)
INFO ssh: Execute: grep 'Amazon Linux' /etc/os-release (sudo=false)
INFO ssh: Execute: grep 'Fedora release' /etc/redhat-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xkali' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'kali' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'kali' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep Funtoo /etc/gentoo-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xubuntu' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'ubuntu' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'ubuntu' && exit
fi
exit 1
(sudo=false)
INFO guest: Detected: ubuntu!
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO ssh: Inserting key to avoid password: ssh-rsa AAAA/ vagrant
INFO interface: detail:
Inserting generated public key within guest...
INFO interface: detail: default:
default: Inserting generated public key within guest...
default:
default: Inserting generated public key within guest...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: insert_public_key [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "ssh-rsa AAAA/ vagrant"] (ubuntu)
INFO ssh: Execute: mkdir -p ~/.ssh
chmod 0700 ~/.ssh
cat '/tmp/vagrant-insert-pubkey-1532971970' >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
rm -f '/tmp/vagrant-insert-pubkey-1532971970'
exit $result
(sudo=false)
INFO host: Execute capability: set_ssh_key_permissions [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>, #<Pathname:/Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox/private_key>] (darwin)
INFO interface: detail: Removing insecure key from the guest if it's present...
INFO ssh: Execute: if test -f ~/.ssh/authorized_keys; then
grep -v -x -f '/tmp/vagrant-remove-pubkey-1532971970' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.tmp
mv ~/.ssh/authorized_keys.tmp ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
fi
rm -f '/tmp/vagrant-remove-pubkey-1532971970'
exit $result
(sudo=false)
INFO interface: detail: Key inserted! Disconnecting and reconnecting using new SSH key...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO interface: output: Machine booted and ready!
INFO warden: Calling OUT action: #<VagrantPlugins::ProviderVirtualBox::Action::SaneDefaults:0x00000001014560a8>
INFO interface: info: Setting hostname..
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: change_host_name [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "db-archive"] (ubuntu)
INFO ssh: Execute: hostname -f | grep '^db-archive$' (sudo=false)
INFO ssh: Execute: # Set the hostname
echo 'db-archive' > /etc/hostname
hostname -F /etc/hostname
if command -v hostnamectl; then
hostnamectl set-hostname 'db-archive'
fi
# Prepend ourselves to /etc/hosts
grep -w 'db-archive' /etc/hosts || {
if grep -w '^127\.0\.1\.1' /etc/hosts ; then
sed -i'' 's/^127\.0\.1\.1\s.*$/127.0.1.1\tdb-archive\tdb-archive/' /etc/hosts
else
sed -i'' '1i 127.0.1.1\tdb-archive\tdb-archive' /etc/hosts
fi
}
# Update mailname
echo 'db-archive' > /etc/mailname
# Restart hostname services
if test -f /etc/init.d/hostname; then
/etc/init.d/hostname start || true
fi
if test -f /etc/init.d/hostname.sh; then
/etc/init.d/hostname.sh start || true
fi
if test -x /sbin/dhclient ; then
/sbin/dhclient -r
/sbin/dhclient -nw
fi
(sudo=true)
INFO warden: Calling OUT action: #<Vagrant::Action::Builtin::SetHostname:0x00000001014560d0>
INFO synced_folders: Invoking synced folder enable: virtualbox
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "guestproperty", "get", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "/VirtualBox/GuestInfo/OS/Product"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Mounting shared folders...
INFO interface: detail: /vagrant =>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: mount_virtualbox_shared_folder [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "vagrant", "/vagrant", {:guestpath=>"/vagrant", :hostpath=>"/Users/ahayden/Development/LSS/db_archive_chef", :disabled=>false, :__vagrantfile=>true, :owner=>"vagrant", :group=>"vagrant"}] (ubuntu)
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant
fi
(sudo=true)
INFO ssh: Execute: id -u vagrant (sudo=false)
INFO ssh: Execute: getent group vagrant (sudo=false)
INFO ssh: Execute: mkdir -p /etc/chef (sudo=true)
INFO ssh: Execute: mount -t vboxsf -o uid=1000,gid=1000 etc_chef /etc/chef (sudo=true)
INFO ssh: Execute: chown 1000:1000 /etc/chef (sudo=true)
INFO ssh: Execute: if command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/etc/chef
fi
(sudo=true)
INFO provision: Writing provisioning sentinel so we don't provision again
INFO interface: info: Running provisioner: chef_solo...
INFO guest: Execute capability: chef_installed [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24"] (ubuntu)
INFO ssh: Execute: test -x /opt/chef/bin/knife&& /opt/chef/bin/knife --version | grep 'Chef: 12.10.24' (sudo=true)
INFO interface: detail: Installing Chef (12.10.24)...
INFO interface: detail: default: Installing Chef (12.10.24)...
default: Installing Chef (12.10.24)...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: chef_install [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24", "stable", "https://omnitruck.chef.io", {:product=>"chef", :channel=>"stable", :version=>:"12.10.24", :omnibus_url=>"https://omnitruck.chef.io", :force=>false, :download_path=>nil}] (ubuntu)
INFO ssh: Execute: apt-get update -y -qq (sudo=true)
INFO ssh: Execute: apt-get install -y -qq curl (sudo=true)
INFO ssh: Execute: curl -sL https://omnitruck.chef.io/install.sh | bash -s -- -P "chef" -c "stable" -v "12.10.24" (sudo=true)
==> default: Running chef-solo...
==> default: [2018-07-30T17:33:12+00:00] INFO: Forking chef instance to converge...
INFO interface: info: Starting Chef Client, version 12.10.24
==> default: [2018-07-30T17:33:12+00:00] INFO: *** Chef 12.10.24 ***
INFO interface: info: [2018-07-30T17:33:12+00:00] INFO: Platform: x86_64-linux
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Setting the run_list to ["recipe[chef-vault]", "recipe[db_archive::update]", "recipe[db_archive::install_packages]", "recipe[db_archive::install_hostsfile]", "recipe[db_archive::install_nginx]"] from CLI options
==> default: [2018-07-30T17:33:14+00:00] INFO: Starting Chef Run for ahayden
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Running start handlers
INFO interface: info: ==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
INFO interface: info: Installing Cookbook Gems:
INFO interface: info: Running handlers:
[2018-07-30T17:33:15+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 03 seconds
[2018-07-30T17:33:15+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2018-07-30T17:33:15+00:00] ERROR: Expected process to exit with [0], but received '5'
---- Begin output of bundle install ----
STDOUT: Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching gem metadata from https://rubygems.org/..........
Fetching version metadata from https://rubygems.org/..
Resolving dependencies...
Installing chef-vault 3.3.0
Gem::InstallError: chef-vault requires Ruby version >= 2.2.0.
Using bundler 1.11.2
An error occurred while installing chef-vault (3.3.0), and Bundler cannot
continue.
Make sure that `gem install chef-vault -v '3.3.0'` succeeds before bundling.
STDERR:
Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO environment: Released process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO environment: Running hook: environment_unload
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO runner: Running action: environment_unload #<Vagrant::Action::Builder:0x0000000101164c50>
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #<VagrantPlugins::Chef::Provisioner::Base::ChefError: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.>
ERROR vagrant: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
ERROR vagrant: /plugins/provisioners/chef/provisioner/chef_solo.rb:220:in `run_chef_solo'
/plugins/provisioners/chef/provisioner/chef_solo.rb:65:in `provision'
/lib/vagrant/action/builtin/provision.rb:138:in `run_provisioner'
/lib/vagrant/action/warden.rb:95:in `call'
/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant/action/builder.rb:116:in `call'
/lib/vagrant/action/runner.rb:66:in `block in run'
/lib/vagrant/util/busy.rb:19:in `busy'
/lib/vagrant/action/runner.rb:66:in `run'
/lib/vagrant/environment.rb:510:in `hook'
/lib/vagrant/action/builtin/provision.rb:126:in `call'
/lib/vagrant/action/builtin/provision.rb:126:in `block in call'
/lib/vagrant/action/builtin/provision.rb:103:in `each'
/lib/vagrant/action/builtin/provision.rb:103:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/upload.rb:23:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/install.rb:19:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/save.rb:21:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/plugins/providers/virtualbox/action/clear_forwarded_ports.rb:15:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `action
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/batch_action.rb:82:in `block (2 levels) in run'
INFO interface: error: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
I had to omit some things from the backtrace in order to post it...
The first sign is towards the top: WARN global: resolv replacement has not been enabled!
The next area of concern is util.rb:93: warning: key "io" is duplicated and overwritten on line 107.
Then there are many cases of: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"] followed by INFO subprocess: Command not in installer, restoring original environment... This happens very many times with VBoxManage and a couple of times with curl and berks. I think this is the problem.
At the end, it finally fails with a gem install error for chef-vault. It says the gem requires Ruby version >= 2.2.0, which I do have.
Vagrantfile:
VAGRANTFILE_API_VERSION = '2'
Vagrant.require_version '>= 1.5.0'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.hostname = 'db-archive'

  if Vagrant.has_plugin?("vagrant-omnibus")
    config.omnibus.chef_version = 'latest'
  end

  config.vm.box = 'bento/ubuntu-14.04'
  config.vm.network :private_network, type: 'dhcp'
  config.vm.network 'forwarded_port', guest: 80, host: 8080
  config.vm.network 'forwarded_port', guest: 443, host: 8443
  config.vm.synced_folder "#{ENV['HOME']}/Documents/src/db_archive", '/var/www/db_archive'
  config.vm.synced_folder "#{ENV['HOME']}/.chef", '/etc/chef'
  config.berkshelf.enabled = true

  config.vm.provision :chef_solo do |chef|
    chef.channel = 'stable'
    chef.version = '12.10.24'
    chef.environment = 'vagrant'
    chef.environments_path = 'environments'
    chef.run_list = [
      "recipe[chef-vault]",
      "recipe[db_archive::update]",
      "recipe[db_archive::install_packages]",
      "recipe[db_archive::install_hostsfile]",
      "recipe[db_archive::install_nginx]"
    ]
    chef.data_bags_path = 'data_bags'
    chef.node_name = 'ahayden'
  end
end
You are using Chef 12, which is no longer supported by the latest chef-vault. You'll need to upgrade your version of Chef.
In my metadata.rb file, I changed the line depends 'chef-vault' to depends 'chef-vault', '=2.1.1'. Then when I ran vagrant destroy && vagrant up it worked fine.
