gitlab mail configuration with docker-mailserver - docker

I have an issue sending email from GitLab running in a Docker container through another container running docker-mailserver from https://github.com/docker-mailserver/docker-mailserver
I set up everything needed on both of them, and I'm able to send and receive emails between two accounts I created using any email client. But I still can't get GitLab to send an email, and neither container shows any log errors.
Here's my gitlab.rb content:
external_url 'https://gitlab.example.com'
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "mail.example.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "gitlab@example.com"
gitlab_rails['smtp_password'] = "password"
gitlab_rails['smtp_domain'] = "example.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true
gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
gitlab_rails['gitlab_email_from'] = 'gitlab@example.com'
gitlab_rails['gitlab_email_reply_to'] = 'noreply@example.com'
gitlab_rails['gitlab_email_display_name'] = 'Gitlab'
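For reference, a quick way to confirm from inside the GitLab container that the mail container actually accepts connections on the submission port (a minimal sketch; the container name gitlab is an assumption, and mail.example.com is taken from the config above):

# Check basic TCP reachability of the mail host on port 587.
docker exec -it gitlab bash -c 'timeout 5 bash -c "</dev/tcp/mail.example.com/587" && echo reachable || echo unreachable'

# If openssl is available in the image, also verify the STARTTLS handshake
# that smtp_enable_starttls_auto relies on.
docker exec -it gitlab bash -c 'echo QUIT | openssl s_client -connect mail.example.com:587 -starttls smtp'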
Result of gitlab-ctl status
root@gitlab:/# gitlab-ctl status
run: alertmanager: (pid 314) 3675s; run: log: (pid 311) 3675s
down: gitaly: 0s, normally up, want up; run: log: (pid 333) 3674s
run: gitlab-exporter: (pid 332) 3674s; run: log: (pid 328) 3674s
run: gitlab-kas: (pid 327) 3674s; run: log: (pid 325) 3674s
run: gitlab-workhorse: (pid 320) 3674s; run: log: (pid 319) 3674s
run: logrotate: (pid 28399) 74s; run: log: (pid 321) 3674s
run: nginx: (pid 316) 3675s; run: log: (pid 315) 3675s
run: postgres-exporter: (pid 312) 3675s; run: log: (pid 309) 3675s
run: postgresql: (pid 326) 3674s; run: log: (pid 324) 3674s
run: prometheus: (pid 323) 3674s; run: log: (pid 322) 3674s
run: puma: (pid 336) 3674s; run: log: (pid 335) 3674s
run: redis: (pid 331) 3674s; run: log: (pid 330) 3674s
run: redis-exporter: (pid 313) 3675s; run: log: (pid 310) 3675s
run: sidekiq: (pid 318) 3674s; run: log: (pid 317) 3674s
run: sshd: (pid 31) 3691s; run: log: (pid 30) 3691s
result of Notify.test_email inside gitlab-rails console
irb(main):001:0> Notify.test_email('admin@example.com', 'Message Subject', 'Message Body').deliver_now
Delivered mail 634d5dc01207b_7508468c312ce@gitlab.example.com.mail (30092.1ms)
Traceback (most recent call last):
1: from (irb):1
Net::OpenTimeout (Net::OpenTimeout)

A possible cause could be that your cloud provider blocks outgoing traffic on port 25; many ISPs and cloud providers block it.
You could try using an SMTP relay service like SendGrid and test your setup with that.
Reference: https://docker-mailserver.github.io/docker-mailserver/edge/config/advanced/mail-forwarding/relay-hosts/
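A quick way to check from the Docker host whether outgoing port 25 is actually filtered (a sketch; the target hostnames are only examples):

# A timeout here usually means the provider filters outbound SMTP on port 25.
timeout 10 bash -c '</dev/tcp/gmail-smtp-in.l.google.com/25' && echo "port 25 open" || echo "port 25 blocked"

# The submission port used by a relay such as SendGrid is normally not filtered.
timeout 10 bash -c '</dev/tcp/smtp.sendgrid.net/587' && echo "port 587 open" || echo "port 587 blocked"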

Related

Running Concourse on Portainer

I seem to be having trouble starting Concourse on Portainer using the Stacks section. I have attached all of the relevant files below, but I feel like I am missing something. I know there might be a way to start this from the command line, but I am looking for a simple solution that is just a compose file if possible; that way, when I teach this to others later, the setup process is easier.
I have the following compose file that Concourse provided:
version: '3'
services:
  concourse-db:
    image: postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database
  concourse:
    image: concourse/concourse
    restart: unless-stopped
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    expose:
      - "8080"
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      # instead of relying on the default "detect"
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_CLIENT_SECRET: Y29uY291cnNlLXdlYgo=
      CONCOURSE_TSA_CLIENT_SECRET: Y29uY291cnNlLXdvcmtlcgo=
      CONCOURSE_X_FRAME_OPTIONS: allow
      CONCOURSE_CONTENT_SECURITY_POLICY: "*"
      CONCOURSE_CLUSTER_NAME: tutorial
      CONCOURSE_WORKER_CONTAINERD_DNS_SERVER: "8.8.8.8"
      CONCOURSE_WORKER_RUNTIME: "containerd"
I am getting errors on both the web and the database. Here are the outputs:
{"timestamp":"2022-03-23T14:57:06.947153851Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.signal.signalled","data":{"session":"4.1.6"}}
{"timestamp":"2022-03-23T14:57:06.947202860Z","level":"info","source":"worker","message":"worker.beacon-runner.logging-runner-exited","data":{"session":"12"}}
{"timestamp":"2022-03-23T14:57:06.947240334Z","level":"error","source":"quickstart","message":"quickstart.worker-runner.logging-runner-exited","data":{"error":"Exit trace for group:\ngarden exited with error: Exit trace for group:\ncontainerd-garden-backend exited with error: setup host network failed: error appending iptables rule: running [/sbin/iptables -t filter -A INPUT -i concourse0 -j REJECT --reject-with icmp-host-prohibited --wait]: exit status 1: iptables: No chain/target/match by that name.\n\ncontainerd exited with nil\n\ncontainer-sweeper exited with nil\nvolume-sweeper exited with nil\ndebug exited with nil\nbaggageclaim exited with nil\nhealthcheck exited with nil\nbeacon exited with nil\n","session":"2"}}
{"timestamp":"2022-03-23T14:57:06.947348599Z","level":"info","source":"web","message":"web.tsa-runner.logging-runner-exited","data":{"session":"2"}}
{"timestamp":"2022-03-23T14:57:06.947457476Z","level":"info","source":"atc","message":"atc.tracker.drain.start","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.947657430Z","level":"info","source":"atc","message":"atc.tracker.drain.waiting","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.947670921Z","level":"info","source":"atc","message":"atc.tracker.drain.done","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.949573381Z","level":"info","source":"web","message":"web.atc-runner.logging-runner-exited","data":{"session":"1"}}
{"timestamp":"2022-03-23T14:57:06.950178927Z","level":"info","source":"quickstart","message":"quickstart.web-runner.logging-runner-exited","data":{"session":"1"}}
error: Exit trace for group:
worker exited with error: Exit trace for group:
garden exited with error: Exit trace for group:
containerd-garden-backend exited with error: setup host network failed: error appending iptables rule: running [/sbin/iptables -t filter -A INPUT -i concourse0 -j REJECT --reject-with icmp-host-prohibited --wait]: exit status 1: iptables: No chain/target/match by that name.
containerd exited with nil
container-sweeper exited with nil
volume-sweeper exited with nil
debug exited with nil
baggageclaim exited with nil
healthcheck exited with nil
beacon exited with nil
and the output for the db
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /database ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /database -l logfile start
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2022-03-23 14:35:37.062 UTC [50] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-23 14:35:37.144 UTC [50] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-23 14:35:37.539 UTC [51] LOG: database system was shut down at 2022-03-23 14:35:35 UTC
2022-03-23 14:35:37.617 UTC [50] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down...2022-03-23 14:35:40.541 UTC [50] LOG: received fast shutdown request
.2022-03-23 14:35:40.562 UTC [50] LOG: aborting any active transactions
2022-03-23 14:35:40.565 UTC [50] LOG: background worker "logical replication launcher" (PID 57) exited with exit code 1
2022-03-23 14:35:40.566 UTC [52] LOG: shutting down
2022-03-23 14:35:40.829 UTC [50] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2022-03-23 14:35:40.961 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-23 14:35:40.961 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-03-23 14:35:40.961 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-03-23 14:35:41.012 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-23 14:35:41.065 UTC [64] LOG: database system was shut down at 2022-03-23 14:35:40 UTC
2022-03-23 14:35:41.108 UTC [1] LOG: database system is ready to accept connections
2022-03-23 14:57:46.294 UTC [1] LOG: received fast shutdown request
2022-03-23 14:57:46.317 UTC [1] LOG: aborting any active transactions
2022-03-23 14:57:46.319 UTC [1] LOG: background worker "logical replication launcher" (PID 70) exited with exit code 1
2022-03-23 14:57:46.319 UTC [65] LOG: shutting down
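For what it's worth, the worker error above ("iptables: No chain/target/match by that name") usually means the REJECT target the worker tries to use is not available on the host. A diagnostic sketch to run on the Portainer host (the module names are the usual ones on a stock kernel and are an assumption here):

# Try the same kind of rule the Concourse worker adds, in a throwaway chain.
sudo iptables -w -N concourse-reject-test \
  && sudo iptables -w -A concourse-reject-test -j REJECT --reject-with icmp-host-prohibited \
  && sudo iptables -w -F concourse-reject-test \
  && sudo iptables -w -X concourse-reject-test \
  && echo "REJECT target available"

# If that fails, check whether the relevant modules are loaded and load them.
lsmod | grep -E 'ip_tables|ipt_REJECT|nf_reject'
sudo modprobe -a ip_tables ipt_REJECT nf_reject_ipv4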

WDIO docker run: [1643987609.767][SEVERE]: bind() failed: Cannot assign requested address (99)

There is an error when running the wdio tests in Docker using Jenkins, and I have no idea how to solve this problem :(
The same config runs successfully on my local env (Windows + Docker).
This is the wdio config. I used the default dockerOptions.
wdio.conf
import { config as sharedConfig } from './wdio.shared.conf'

export const config: WebdriverIO.Config = {
    ...sharedConfig,
    ...{
        host: 'localhost',
        services: ['docker'],
        dockerLogs: './logs',
        dockerOptions: {
            image: 'selenium/standalone-chrome:4.1.2-20220131',
            healthCheck: {
                url: 'http://localhost:4444',
                maxRetries: 3,
                inspectInterval: 7000,
                startDelay: 15000
            },
            options: {
                p: ['4444:4444'],
                shmSize: '2g'
            }
        },
        capabilities: [{
            acceptInsecureCerts: true,
            browserName: 'chrome',
            browserVersion: 'latest',
            'goog:chromeOptions': {
                args: ['--verbose', '--headless', '--disable-gpu', 'window-size=1920,1800', '--no-sandbox', '--disable-dev-shm-usage', '--disable-extensions'],
            }
        }]
    }
}
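To see whether the problem is the container itself or the health check target, the same container start and health probe can be reproduced by hand on the Jenkins agent. A minimal sketch mirroring the dockerOptions above (Selenium Grid 4 should serve a status page at /status):

# Start the image from dockerOptions with the same port mapping and shm size.
docker run -d --rm --name selenium-check -p 4444:4444 --shm-size 2g selenium/standalone-chrome:4.1.2-20220131

# The health check URL from the config; if this fails on the agent but works
# locally, "localhost:4444" is not reachable from where the tests run.
sleep 15 && curl -sf http://localhost:4444/status && echo "grid reachable" || echo "grid NOT reachable"

docker rm -f selenium-check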
After that, I try to run the UI tests via Jenkins:
19:37:34 Run `npm audit` for details.
19:37:34 + npm run test:ci -- --spec ./test/specs/claim.BNB.spec.ts
19:37:34
19:37:34 > jasmine-boilerplate@1.0.0 test:ci
19:37:34 > wdio run wdio.ci.conf.ts
And I got an error.
Logs attached:
wdio.log
2022-02-04T16:59:20.725Z DEBUG @wdio/utils:initialiseServices: initialise service "docker" as NPM package
2022-02-04T16:59:20.758Z INFO @wdio/cli:launcher: Run onPrepare hook
2022-02-04T16:59:20.760Z DEBUG wdio-docker-service: Docker command: docker run --cidfile /home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/selenium_standalone_chrome_latest.cid --rm -p 4444:4444 -p 5900:5900 --shm-size 2g selenium/standalone-chrome:latest
2022-02-04T16:59:20.769Z WARN wdio-docker-service: Connecting dockerEventsListener: 6283
2022-02-04T16:59:20.772Z INFO wdio-docker-service: Cleaning up CID files
2022-02-04T16:59:20.834Z INFO wdio-docker-service: Launching docker image 'selenium/standalone-chrome:latest'
2022-02-04T16:59:20.841Z INFO wdio-docker-service: Docker container is ready
2022-02-04T16:59:20.841Z DEBUG @wdio/cli:utils: Finished to run "onPrepare" hook in 82ms
2022-02-04T16:59:20.842Z INFO @wdio/cli:launcher: Run onWorkerStart hook
2022-02-04T16:59:20.843Z DEBUG @wdio/cli:utils: Finished to run "onWorkerStart" hook in 0ms
2022-02-04T16:59:20.843Z INFO @wdio/local-runner: Start worker 0-0 with arg: run,wdio.ci.conf.ts,--spec,./test/specs/claim.BNB.spec.ts
2022-02-04T16:59:22.034Z DEBUG @wdio/local-runner: Runner 0-0 finished with exit code 1
2022-02-04T16:59:22.035Z INFO @wdio/cli:launcher: Run onComplete hook
2022-02-04T16:59:22.036Z INFO wdio-docker-service: Shutting down running container
2022-02-04T16:59:32.372Z INFO wdio-docker-service: Cleaning up CID files
2022-02-04T16:59:32.373Z INFO wdio-docker-service: Docker container has stopped
2022-02-04T16:59:32.374Z WARN wdio-docker-service: Disconnecting dockerEventsListener: 6283
2022-02-04T16:59:32.374Z DEBUG @wdio/cli:utils: Finished to run "onComplete" hook in 10339ms
2022-02-04T16:59:32.430Z INFO @wdio/local-runner: Shutting down spawned worker
2022-02-04T16:59:32.681Z INFO @wdio/local-runner: Waiting for 0 to shut down gracefully
wdio-0-0.log
2022-02-04T16:59:21.223Z INFO @wdio/local-runner: Run worker command: run
2022-02-04T16:59:21.513Z DEBUG @wdio/config:utils: Found 'ts-node' package, auto-compiling TypeScript files
2022-02-04T16:59:21.714Z DEBUG @wdio/local-runner:utils: init remote session
2022-02-04T16:59:21.717Z DEBUG @wdio/utils:initialiseServices: initialise service "docker" as NPM package
2022-02-04T16:59:21.828Z DEBUG @wdio/local-runner:utils: init remote session
2022-02-04T16:59:21.840Z INFO devtools:puppeteer: Initiate new session using the DevTools protocol
2022-02-04T16:59:21.841Z INFO devtools: Launch Google Chrome with flags: --enable-automation --disable-popup-blocking --disable-extensions --disable-background-networking --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-sync --metrics-recording-only --disable-default-apps --mute-audio --no-first-run --no-default-browser-check --disable-hang-monitor --disable-prompt-on-repost --disable-client-side-phishing-detection --password-store=basic --use-mock-keychain --disable-component-extensions-with-background-pages --disable-breakpad --disable-dev-shm-usage --disable-ipc-flooding-protection --disable-renderer-backgrounding --force-fieldtrials=*BackgroundTracing/default/ --enable-features=NetworkService,NetworkServiceInProcess --disable-features=site-per-process,TranslateUI,BlinkGenPropertyTrees --window-position=0,0 --window-size=1200,900 --headless --disable-gpu --window-size=1920,1800 --no-sandbox --disable-dev-shm-usage --disable-extensions
2022-02-04T16:59:21.911Z ERROR @wdio/runner: Error:
at new LauncherError (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/utils.ts:31:18)
at new ChromePathNotSetError (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/dist/utils.js:33:9)
at Object.linux (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-finder.ts:153:11)
at Function.getFirstInstallation (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:182:61)
at Launcher.launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:252:37)
at launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:74:18)
at launchChrome (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/launcher.js:80:55)
at launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/launcher.js:179:16)
at Function.newSession (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/index.js:50:54)
at remote (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/webdriverio/build/index.js:67:43)
wdio-chromedriver.log
Starting ChromeDriver 97.0.4692.71 (adefa7837d02a07a604c1e6eff0b3a09422ab88d-refs/branch-heads/4692@{#1247}) on port 9515
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
[1643987609.767][SEVERE]: bind() failed: Cannot assign requested address (99)
docker-log.txt
2022-02-04 16:59:21,482 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
2022-02-04 16:59:21,484 INFO supervisord started with pid 7
Trapped SIGTERM/SIGINT/x so shutting down supervisord...
2022-02-04 16:59:22,487 INFO spawned: 'xvfb' with pid 9
2022-02-04 16:59:22,489 INFO spawned: 'vnc' with pid 10
2022-02-04 16:59:22,491 INFO spawned: 'novnc' with pid 11
2022-02-04 16:59:22,492 INFO spawned: 'selenium-standalone' with pid 12
2022-02-04 16:59:22,493 WARN received SIGTERM indicating exit request
2022-02-04 16:59:22,493 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
Setting up SE_NODE_GRID_URL...
2022-02-04 16:59:22,501 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-02-04 16:59:22,501 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-02-04 16:59:22,501 INFO success: novnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
Selenium Grid Standalone configuration:
[network]
relax-checks = true
[node]
session-timeout = "300"
override-max-sessions = false
detect-drivers = false
max-sessions = 1
[[node.driver-configuration]]
display-name = "chrome"
stereotype = '{"browserName": "chrome", "browserVersion": "97.0", "platformName": "Linux"}'
max-sessions = 1
Starting Selenium Grid Standalone...
16:59:22.930 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
16:59:22.939 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
16:59:23.452 INFO [NodeOptions.getSessionFactories] - Detected 4 available processors
16:59:23.493 INFO [NodeOptions.report] - Adding chrome for {"browserVersion": "97.0","browserName": "chrome","platformName": "Linux","se:vncEnabled": true} 1 times
16:59:23.505 INFO [Node.<init>] - Binding additional locator mechanisms: name, id, relative
16:59:23.526 INFO [LocalDistributor.add] - Added node 150c2c05-2b08-4ba9-929a-45fef66bb193 at http://172.17.0.2:4444. Health check every 120s
16:59:23.540 INFO [GridModel.setAvailability] - Switching node 150c2c05-2b08-4ba9-929a-45fef66bb193 (uri: http://172.17.0.2:4444) from DOWN to UP
16:59:23.645 INFO [Standalone.execute] - Started Selenium Standalone 4.1.2 (revision 9a5a329c5a): http://172.17.0.2:4444
2022-02-04 16:59:26,091 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
2022-02-04 16:59:29,095 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
2022-02-04 16:59:32,097 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
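One thing worth ruling out: if the Jenkins job itself runs inside a container, "localhost" there is not the Docker host where -p 4444:4444 publishes the port. A rough check (the cgroup test is only a heuristic, and selenium-check is the hypothetical container name from the sketch above):

# Heuristic: is the Jenkins agent shell itself running inside a container?
grep -qE 'docker|containerd|kubepods' /proc/1/cgroup && echo "running inside a container"

# The standalone-chrome container is still reachable via its bridge IP
# (the docker log above shows the grid registering on 172.17.0.2:4444).
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' selenium-check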

[HTCONDOR][kubernetes / k8s] : Unable to start minicondor image within k8s - condor_master not working

POST EDIT
The issue is due to:
PSP (Pod Security Policy). By default, privilege escalation is not permitted for my condor user. That is why it is not working: supervisord runs as the root user and tries to write logs and start the condor collector as root rather than as another user (i.e. condor).
Description
The mini-condor base image is not starting as expected in a Kubernetes (Rancher) pod.
I am using:
This image: https://hub.docker.com/r/htcondor/mini in a custom namespace in Rancher (k8s)
PS: the image was working perfectly on:
a local env
a default minikube installation
I am running it as a simple deployment.
When the pod starts, the default Kubernetes log shows:
2021-09-15 09:26:36,908 INFO supervisord started with pid 1
2021-09-15 09:26:37,911 INFO spawned: 'condor_master' with pid 20
2021-09-15 09:26:37,912 INFO spawned: 'condor_restd' with pid 21
2021-09-15 09:26:37,917 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:37,924 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:38,926 INFO spawned: 'condor_master' with pid 22
2021-09-15 09:26:38,928 INFO spawned: 'condor_restd' with pid 23
2021-09-15 09:26:38,932 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:38,936 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:40,939 INFO spawned: 'condor_master' with pid 24
2021-09-15 09:26:40,943 INFO spawned: 'condor_restd' with pid 25
2021-09-15 09:26:40,947 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:40,948 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:43,953 INFO spawned: 'condor_master' with pid 26
2021-09-15 09:26:43,955 INFO spawned: 'condor_restd' with pid 27
2021-09-15 09:26:43,959 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:43,968 INFO gave up: condor_restd entered FATAL state, too many start retries too quickly
2021-09-15 09:26:43,969 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:44,970 INFO gave up: condor_master entered FATAL state, too many start retries too quickly
Here is a brief cmd and output result:

CMD: condor_status
Output: CEDAR:6001:Failed to connect to <127.0.0.1:9618>

CMD: condor_master
Output: ERROR "Cannot open log file '/var/log/condor/MasterLog'" at line 174 in file /var/lib/condor/execute/slot1/dir_17406/userdir/.tmpruBd6F/BUILD/condor-9.0.5/src/condor_utils/dprintf_setup.cpp
1) First attempt to fix the issue
I decided to customize the image, but the error is the same.
Here is the Dockerfile used to try to fix the permission issue:
FROM htcondor/mini:9.2-el7
RUN condor_master
RUN chown condor:root /var/
RUN chown condor:root /var/log
RUN chown -R condor:root /var/log/
RUN chown -R condor:condor /var/log/condor
RUN chown condor:condor /var/log/condor/ProcLog
RUN chown condor:condor /var/log/condor/MasterLog
RUN chmod 775 -R /var/
Kubernetes - Rancher
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: htcondor-mini--all-in-one
  namespace: grafana-exporter
spec:
  containers:
    - image: <custom_image>
      imagePullPolicy: Always
      name: htcondor-mini--all-in-one
      resources: {}
      securityContext:
        capabilities: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      tty: true
  dnsConfig: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
Here is a brief cmd and output result:

CMD: condor_status
Output: CEDAR:6001:Failed to connect to <127.0.0.1:9618>

CMD: condor_master
Output: ERROR "Cannot open log file '/var/log/condor/MasterLog'" at line 174 in file /var/lib/condor/execute/slot1/dir_17406/userdir/.tmpruBd6F/BUILD/condor-9.0.5/src/condor_utils/dprintf_setup.cpp

CMD: ls -ld /var/
Output: drwxrwxr-x 1 condor root 17 Nov 13 2020 /var/

CMD: ls -ld /var/log/
Output: drwxrwxr-x 1 condor root 65 Oct 7 11:54 /var/log/

CMD: ls -ld /var/log/condor
Output: drwxrwxr-x 1 condor condor 240 Oct 7 11:23 /var/log/condor

CMD: ls -ld /var/log/condor/MasterLog
Output: -rwxrwxr-x 1 condor condor 3243 Oct 7 11:23 /var/log/condor/MasterLog
MasterLog content :
10/07/21 11:23:21 ******************************************************
10/07/21 11:23:21 ** condor_master (CONDOR_MASTER) STARTING UP
10/07/21 11:23:21 ** /usr/sbin/condor_master
10/07/21 11:23:21 ** SubsystemInfo: name=MASTER type=MASTER(2) class=DAEMON(1)
10/07/21 11:23:21 ** Configuration: subsystem:MASTER local:<NONE> class:DAEMON
10/07/21 11:23:21 ** $CondorVersion: 9.2.0 Sep 23 2021 BuildID: 557262 PackageID: 9.2.0-1 $
10/07/21 11:23:21 ** $CondorPlatform: x86_64_CentOS7 $
10/07/21 11:23:21 ** PID = 7
10/07/21 11:23:21 ** Log last touched time unavailable (No such file or directory)
10/07/21 11:23:21 ******************************************************
10/07/21 11:23:21 Using config source: /etc/condor/condor_config
10/07/21 11:23:21 Using local config sources:
10/07/21 11:23:21 /etc/condor/config.d/00-htcondor-9.0.config
10/07/21 11:23:21 /etc/condor/config.d/00-minicondor
10/07/21 11:23:21 /etc/condor/config.d/01-misc.conf
10/07/21 11:23:21 /etc/condor/condor_config.local
10/07/21 11:23:21 config Macros = 73, Sorted = 73, StringBytes = 1848, TablesBytes = 2692
10/07/21 11:23:21 CLASSAD_CACHING is OFF
10/07/21 11:23:21 Daemon Log is logging: D_ALWAYS D_ERROR
10/07/21 11:23:21 SharedPortEndpoint: waiting for connections to named socket master_7_43af
10/07/21 11:23:21 SharedPortEndpoint: failed to open /var/lock/condor/shared_port_ad: No such file or directory
10/07/21 11:23:21 SharedPortEndpoint: did not successfully find SharedPortServer address. Will retry in 60s.
10/07/21 11:23:21 Permission denied error during DISCARD_SESSION_KEYRING_ON_STARTUP, continuing anyway
10/07/21 11:23:21 Adding SHARED_PORT to DAEMON_LIST, because USE_SHARED_PORT=true (to disable this, set AUTO_INCLUDE_SHARED_PORT_IN_DAEMON_LIST=False)
10/07/21 11:23:21 SHARED_PORT is in front of a COLLECTOR, so it will use the configured collector port
10/07/21 11:23:21 Master restart (GRACEFUL) is watching /usr/sbin/condor_master (mtime:1632433213)
10/07/21 11:23:21 Cannot remove wait-for-startup file /var/lock/condor/shared_port_ad
10/07/21 11:23:21 WARNING: forward resolution of ip6-localhost doesn't match 127.0.0.1!
10/07/21 11:23:21 WARNING: forward resolution of ip6-loopback doesn't match 127.0.0.1!
10/07/21 11:23:22 Started DaemonCore process "/usr/libexec/condor/condor_shared_port", pid and pgroup = 9
10/07/21 11:23:22 Waiting for /var/lock/condor/shared_port_ad to appear.
10/07/21 11:23:22 Found /var/lock/condor/shared_port_ad.
10/07/21 11:23:22 Cannot remove wait-for-startup file /var/log/condor/.collector_address
10/07/21 11:23:23 Started DaemonCore process "/usr/sbin/condor_collector", pid and pgroup = 10
10/07/21 11:23:23 Waiting for /var/log/condor/.collector_address to appear.
10/07/21 11:23:23 Found /var/log/condor/.collector_address.
10/07/21 11:23:23 Started DaemonCore process "/usr/sbin/condor_negotiator", pid and pgroup = 11
10/07/21 11:23:23 Started DaemonCore process "/usr/sbin/condor_schedd", pid and pgroup = 12
10/07/21 11:23:24 Started DaemonCore process "/usr/sbin/condor_startd", pid and pgroup = 15
10/07/21 11:23:24 Daemons::StartAllDaemons all daemons were started
A huge thanks for reading. I hope it will help many other people.
Cause of the issue
The issue is due to the PSP (Pod Security Policy): by default, privilege escalation is not permitted for my condor user.
SOLUTION
The best solution I have found so far is to run EVERYTHING as the condor user and give the permissions to the condor user. To do so you need:
In supervisord.conf: run supervisord as the condor user
In supervisord.conf: put the log and socket in /tmp
In the Dockerfile: change the owner of most of the folders to condor
In deployment.yaml: set the user ID to 64 (the condor user)
Dockerfile
FROM htcondor/mini:9.2-el7
# SET WORKDIR
WORKDIR /home/condor/
RUN chown condor:condor /home/condor
# COPY SUPERVISOR
COPY supervisord.conf /etc/supervisord.conf
# Need to run the cmd to create all dir
RUN condor_master
# FIX PERMISSION ISSUES FOR RANCHER
RUN chown -R condor:condor /var/log/ /tmp &&\
chown -R restd:restd /home/restd &&\
chmod 755 -R /home/restd
supervisord.conf:
[supervisord]
user=condor
nodaemon=true
logfile = /tmp/supervisord.log
directory = /tmp
pidfile = /tmp/supervisord.pid
childlogdir = /tmp
# the next 3 sections are needed to manage the daemons with supervisorctl
[unix_http_server]
file=/tmp/supervisord.sock
chown=condor:condor
chmod=0777
user=condor
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:condor_master]
user=condor
command=/usr/sbin/condor_master -f
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile = /var/log/condor_master.log
stderr_logfile = /var/log/condor_master.error.log
deployment.yaml
apiVersion: apps/v1
kind: Deployment
spec:
  containers:
    - image: <condor-image>
      imagePullPolicy: Always
      name: htcondor-exporter
      ports:
        - containerPort: 8080
          name: myport
          protocol: TCP
      resources: {}
      securityContext:
        capabilities: {}
        runAsNonRoot: false
        runAsUser: 64
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      tty: true
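To confirm the change took effect, a quick verification sketch against the running pod (container and file names come from the manifests above; the expected results are what this setup aims for, not guaranteed output):

# The pod should now run as the condor user and own its supervisord files in /tmp.
POD=$(kubectl get pods -o name | grep htcondor-exporter | head -n1)
kubectl exec "$POD" -- id                                        # expect uid=64 (condor)
kubectl exec "$POD" -- ls -l /tmp/supervisord.log /tmp/supervisord.pid
kubectl exec "$POD" -- condor_status                             # should now reach the collector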

Docker using Rails setup with errors

So I am creating a minimal rails app with postgresql database. I want to ensure the rails app works and make sure the docker and your working rails app are as similar as possible.
Here's my Dockerfile content:
FROM ruby:latest
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker
WORKDIR /rails_docker
COPY Gemfile /rails_docker/Gemfile
COPY Gemfile.lock /rails_docker/Gemfile.lock
RUN bundle install
COPY . /rails_docker
And here's my docker-compose.yml file content:
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: 'samnorton'
      POSTGRES_PASSWORD: 'grace0512'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - '9999:5432'
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/rails_docker
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  postgres-data:
    driver: local
I have also set up a minimal Gemfile and Gemfile.lock. When I run sudo docker-compose up I run into the following error:
data-K54C:~/Desktop/rails_docker$ sudo docker-compose up
[sudo] password for sam:
railsdocker_db_1 is up-to-date
Starting railsdocker_web_1 ...
Starting railsdocker_web_1 ... done
Attaching to railsdocker_db_1, railsdocker_web_1
db_1 | 2019-12-12 14:20:38.333 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-12 14:20:38.342 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-12 14:20:38.342 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-12 14:20:38.411 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-12 14:20:38.547 UTC [25] LOG: database system was shut down at 2019-12-12 14:20:06 UTC
db_1 | 2019-12-12 14:20:38.609 UTC [1] LOG: database system is ready to accept connections
db_1 | 2019-12-12 14:24:47.800 UTC [1] LOG: received smart shutdown request
db_1 | 2019-12-12 14:24:47.841 UTC [1] LOG: background worker "logical replication launcher" (PID 31) exited with exit code 1
db_1 | 2019-12-12 14:24:47.844 UTC [26] LOG: shutting down
db_1 | 2019-12-12 14:24:48.094 UTC [1] LOG: database system is shut down
db_1 | 2019-12-12 15:54:38.528 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-12 15:54:38.543 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-12 15:54:38.543 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-12 15:54:38.627 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-12 15:54:38.806 UTC [23] LOG: database system was shut down at 2019-12-12 14:24:48 UTC
db_1 | 2019-12-12 15:54:39.053 UTC [1] LOG: database system is ready to accept connections
db_1 | 2019-12-12 16:40:42.473 UTC [1] LOG: received smart shutdown request
db_1 | 2019-12-12 16:40:42.590 UTC [1] LOG: background worker "logical replication launcher" (PID 29) exited with exit code 1
db_1 | 2019-12-12 16:40:42.590 UTC [24] LOG: shutting down
db_1 | 2019-12-12 16:40:43.398 UTC [1] LOG: database system is shut down
db_1 | 2019-12-13 00:02:44.643 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-13 00:02:44.665 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-13 00:02:44.665 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-13 00:02:44.751 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-13 00:02:44.947 UTC [23] LOG: database system was shut down at 2019-12-12 16:40:43 UTC
db_1 | 2019-12-13 00:02:45.179 UTC [1] LOG: database system is ready to accept connections
db_1 | 2019-12-13 00:32:00.742 UTC [1] LOG: received smart shutdown request
db_1 | 2019-12-13 00:32:01.089 UTC [1] LOG: background worker "logical replication launcher" (PID 29) exited with exit code 1
db_1 | 2019-12-13 00:32:01.089 UTC [24] LOG: shutting down
db_1 | 2019-12-13 00:32:02.353 UTC [1] LOG: database system is shut down
db_1 | 2019-12-13 01:01:34.874 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-13 01:01:34.896 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-13 01:01:34.896 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-13 01:01:35.035 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-13 01:01:35.479 UTC [23] LOG: database system was shut down at 2019-12-13 00:32:01 UTC
db_1 | 2019-12-13 01:01:35.873 UTC [1] LOG: database system is ready to accept connections
web_1 | Usage:
web_1 | rails new APP_PATH [options]
web_1 |
web_1 | Options:
web_1 | [--skip-namespace], [--no-skip-namespace] # Skip namespace (affects only isolated applications)
web_1 | -r, [--ruby=PATH] # Path to the Ruby binary of your choice
web_1 | # Default: /usr/local/bin/ruby
web_1 | -m, [--template=TEMPLATE] # Path to some application template (can be a filesystem path or URL)
web_1 | -d, [--database=DATABASE] # Preconfigure for selected database (options: mysql/postgresql/sqlite3/oracle/frontbase/ibm_db/sqlserver/jdbcmysql/jdbcsqlite3/jdbcpostgresql/jdbc)
web_1 | # Default: sqlite3
web_1 | [--skip-yarn], [--no-skip-yarn] # Don't use Yarn for managing JavaScript dependencies
web_1 | [--skip-gemfile], [--no-skip-gemfile] # Don't create a Gemfile
web_1 | -G, [--skip-git], [--no-skip-git] # Skip .gitignore file
web_1 | [--skip-keeps], [--no-skip-keeps] # Skip source control .keep files
web_1 | -M, [--skip-action-mailer], [--no-skip-action-mailer] # Skip Action Mailer files
web_1 | -O, [--skip-active-record], [--no-skip-active-record] # Skip Active Record files
web_1 | [--skip-active-storage], [--no-skip-active-storage] # Skip Active Storage files
web_1 | -P, [--skip-puma], [--no-skip-puma] # Skip Puma related files
web_1 | -C, [--skip-action-cable], [--no-skip-action-cable] # Skip Action Cable files
web_1 | -S, [--skip-sprockets], [--no-skip-sprockets] # Skip Sprockets files
web_1 | [--skip-spring], [--no-skip-spring] # Don't install Spring application preloader
web_1 | [--skip-listen], [--no-skip-listen] # Don't generate configuration that depends on the listen gem
web_1 | [--skip-coffee], [--no-skip-coffee] # Don't use CoffeeScript
web_1 | -J, [--skip-javascript], [--no-skip-javascript] # Skip JavaScript files
web_1 | [--skip-turbolinks], [--no-skip-turbolinks] # Skip turbolinks gem
web_1 | -T, [--skip-test], [--no-skip-test] # Skip test files
web_1 | [--skip-system-test], [--no-skip-system-test] # Skip system test files
web_1 | [--skip-bootsnap], [--no-skip-bootsnap] # Skip bootsnap gem
web_1 | [--dev], [--no-dev] # Setup the application with Gemfile pointing to your Rails checkout
web_1 | [--edge], [--no-edge] # Setup the application with Gemfile pointing to Rails repository
web_1 | [--rc=RC] # Path to file containing extra configuration options for rails command
web_1 | [--no-rc], [--no-no-rc] # Skip loading of extra configuration options from .railsrc file
web_1 | [--api], [--no-api] # Preconfigure smaller stack for API only apps
web_1 | -B, [--skip-bundle], [--no-skip-bundle] # Don't run bundle install
web_1 | [--webpack=WEBPACK] # Preconfigure for app-like JavaScript with Webpack (options: react/vue/angular/elm/stimulus)
web_1 |
web_1 | Runtime options:
web_1 | -f, [--force] # Overwrite files that already exist
web_1 | -p, [--pretend], [--no-pretend] # Run but do not make any changes
web_1 | -q, [--quiet], [--no-quiet] # Suppress status output
web_1 | -s, [--skip], [--no-skip] # Skip files that already exist
web_1 |
web_1 | Rails options:
web_1 | -h, [--help], [--no-help] # Show this help message and quit
web_1 | -v, [--version], [--no-version] # Show Rails version number and quit
web_1 |
web_1 | Description:
web_1 | The 'rails new' command creates a new Rails application with a default
web_1 | directory structure and configuration at the path you specify.
web_1 |
web_1 | You can specify extra command-line arguments to be used every time
web_1 | 'rails new' runs in the .railsrc configuration file in your home directory.
web_1 |
web_1 | Note that the arguments specified in the .railsrc file don't affect the
web_1 | defaults values shown above in this help message.
web_1 |
web_1 | Example:
web_1 | rails new ~/Code/Ruby/weblog
web_1 |
web_1 | This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
railsdocker_web_1 exited with code 0
In the current case you are not using the right Rails directory, I think.
I am not sure if I am doing it right. I wonder if it's the WORKDIR I have set up, or whether something else is running on port 3000. Any idea what is wrong with my setup?
UPDATE:
After running sudo docker-compose build and sudo docker-compose up I got these errors:
data-K54C:~/Desktop/rails_docker$ sudo docker-compose up
railsdocker_db_1 is up-to-date
Recreating railsdocker_web_1 ...
Recreating railsdocker_web_1 ... error
ERROR: for railsdocker_web_1 no such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae: No such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae
ERROR: for web no such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae: No such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae
ERROR: Encountered errors while bringing up the project.
sam@sam-K54C:~/Desktop/rails_docker$ clear
Try replacing the command with the following:
command: bundle exec bin/rails s -p 3000 -b '0.0.0.0'
That should work.
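If no Rails application has been generated inside the project yet, the usage text above is typically what the rails command prints when it cannot find an app in the working directory. A common Compose workflow in that case is to generate the skeleton through the web service and rebuild. A sketch, assuming the minimal Gemfile already includes the rails gem:

# Generate the app into the bind-mounted project directory, preconfigured for Postgres.
sudo docker-compose run --rm web rails new . --force --database=postgresql --skip-bundle

# Rebuild so the generated Gemfile/Gemfile.lock are baked into the image, then
# recreate the containers (this also clears stale "No such image" references).
sudo docker-compose down
sudo docker-compose build web
sudo docker-compose up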

Vagrant Provision fails at installing Ruby Gem chef-vault

As the new intern, I'm supposed to get one of our applications running on my local machine (OS X). It's a large set of files to run the application, and it uses frameworks that I am not familiar with, such as Vagrant and Chef.
I was told that it should be as easy as cloning the repo, running vagrant up, and viewing the page in my browser, but I've encountered a few problems. Now, when I go into the directory and run vagrant up, it shows a few questionable things:
Admins-MacBook-Pro:db_archive_chef ahayden$ VAGRANT_LOG=info vagrant up
INFO global: Vagrant version: 2.1.2
INFO global: Ruby version: 2.4.4
INFO global: RubyGems version: 2.6.14.1
INFO global: VAGRANT_LOG="info"
INFO global: VAGRANT_INSTALLER_VERSION="2"
INFO global: VAGRANT_INSTALLER_EMBEDDED_DIR="/opt/vagrant/embedded"
INFO global: VAGRANT_INSTALLER_ENV="1"
INFO global: VAGRANT_EXECUTABLE="/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant"
WARN global: resolv replacement has not been enabled!
INFO global: Plugins:
INFO global: - vagrant-berkshelf = [installed: 5.1.2 constraint: > 0]
INFO global: - virtualbox = [installed: 0.8.6 constraint: > 0]
INFO global: Loading plugins!
INFO global: Loading plugin `vagrant-berkshelf` with default require: `vagrant-berkshelf`
INFO root: Version requirements from Vagrantfile: [">= 1.5"]
INFO root: - Version requirements satisfied!
INFO manager: Registered plugin: berkshelf
INFO global: Loading plugin `virtualbox` with default require: `virtualbox`
/Users/ahayden/.vagrant.d/gems/2.4.4/gems/virtualbox-0.8.6/lib/virtualbox/com/ffi/util.rb:93: warning: key "io" is duplicated and overwritten on line 107
INFO vagrant: `vagrant` invoked: ["up"]
INFO environment: Environment initialized (#<Vagrant::Environment:0x00000001040deee0>)
INFO environment: - cwd: /Users/ahayden/Development/LSS/db_archive_chef
INFO environment: Home path: /Users/ahayden/.vagrant.d
INFO environment: Local data path: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant
INFO environment: Running hook: environment_plugins_loaded
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO root: Version requirements from Vagrantfile: [">= 1.5.0"]
INFO root: - Version requirements satisfied!
INFO loader: Loading configuration in order: [:home, :root]
INFO command: Active machine found with name default. Using provider: virtualbox
INFO environment: Getting machine: default (virtualbox)
INFO environment: Uncached load of machine.
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "--version"]
INFO subprocess: Command not in installer, restoring original environment...
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO loader: Set "2174531280_machine_default" = []
INFO loader: Loading configuration in order: [:home, :root, "2174531280_machine_default"]
INFO box_collection: Box found: bento/ubuntu-14.04 (virtualbox)
INFO environment: Running hook: authenticate_box_url
INFO host: Autodetecting host type for [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>]
INFO host: Detected: darwin!
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 2 hooks defined.
INFO runner: Running action: authenticate_box_url #<Vagrant::Action::Builder:0x00000001030ab348>
INFO loader: Loading configuration in order: [:"2175328800_bento/ubuntu-14.04_virtualbox", :home, :root, "2174531280_machine_default"]
INFO machine: Initializing machine: default
INFO machine: - Provider: VagrantPlugins::ProviderVirtualBox::Provider
INFO machine: - Box: #<Vagrant::Box:0x00000001034acc08>
INFO machine: - Data dir: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"]
INFO subprocess: Command not in installer, restoring original environment...
INFO machine: New machine ID: nil
INFO base: VBoxManage path: VBoxManage
ERROR loader: Unknown config sources: [:"2175328800_bento/ubuntu-14.04_virtualbox"]
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO environment: Getting machine: default (virtualbox)
INFO environment: Returning cached machine: default (virtualbox)
INFO command: With machine: default (#
INFO interface: info: Bringing machine 'default' up with 'virtualbox' provider...
Bringing machine 'default' up with 'virtualbox' provider...
INFO batch_action: Enabling parallelization by default.
INFO batch_action: Disabling parallelization because provider doesn't support it: virtualbox
INFO batch_action: Batch action will parallelize: false
INFO batch_action: Starting action: #<Vagrant::Machine:0x0000000100a51238> up {:destroy_on_error=>true, :install_provider=>false, :parallel=>true, :provision_ignore_sentinel=>false, :provision_types=>nil}
INFO machine: Calling action: up on provider VirtualBox (new VM)
INFO environment: Acquired process lock: dotlock
INFO environment: Released process lock: dotlock
INFO environment: Acquired process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
#<Proc:0x000000010157ff60@/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/lib/vagrant/action/warden.rb:94 (lambda)>
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::HandleBox:0x00000001015fc448>
INFO handle_box: Machine already has box. HandleBox will not run.
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Check:0x000000010135cee0>
INFO subprocess: Starting process: ["/usr/local/bin/berks", "--version", "--format", "json"]
INFO subprocess: Command not in installer, restoring original environment...
default: The Berkshelf shelf is at "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default"
INFO prepare_clone: no clone master, not preparing clone snapshot
INFO warden: Calling IN action: #<VagrantPlugins::ProviderVirtualBox::Action::Import:0x0000000100a5add8>
INFO interface: info: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: ==> default: Importing base box 'bento/ubuntu-14.04'...
==> default: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: Progress: 90%
Progress: 90%
==> default: Checking if box 'bento/ubuntu-14.04' is up to date...
INFO downloader: Downloader starting download:
INFO downloader: -- Source: https://vagrantcloud.com/bento/ubuntu-14.04
INFO downloader: -- Destination: /var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi
INFO subprocess: Starting process: ["/opt/vagrant/embedded/bin/curl", "-q", "--fail", "--location", "--max-redirs", "10", "--verbose", "--user-agent", "Vagrant/2.1.2 (+https://www.vagrantup.com; ruby2.4.4)", "-H", "Accept: application/json", "--output", "/var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi", "https://vagrantcloud.com/bento/ubuntu-14.04"]
INFO subprocess: Command in the installer. Specifying DYLD_LIBRARY_PATH...
==> default: Updating Vagrant's Berkshelf...
INFO subprocess: Starting process: ["/usr/local/bin/berks", "vendor", "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default", "--berksfile", "/Users/ahayden/Development/LSS/db_archive_chef/Berksfile"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Resolving cookbook dependencies...
Fetching 'db_archive' from source at .
Using chef-vault (3.1.0)
Using db_archive (0.3.14) from source at .
Using hostsfile (3.0.1)
INFO interface: output: ==> default: Resolving cookbook dependencies...
==> default: Fetching 'db_archive' from source at .
==> default: Using chef-vault (3.1.0)
==> default: Using db_archive (0.3.14) from source at .
==> default: Using hostsfile (3.0.1)
==> default: Vendoring chef-vault (3.1.0) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/chef-vault
==> default: Vendoring db_archive (0.3.14) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/db_archive
==> default: Vendoring hostsfile (3.0.1) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/hostsfile
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Upload:0x000000010171f3e8>
INFO upload: Provisioner does need to upload
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::Provision:0x00000001016de3c0>
INFO provision: Checking provisioner sentinel file...
INFO interface: warn: The cookbook path '/Users/ahayden/Development/LSS/db_archive_chef/cookbooks' doesn't exist. Ignoring...
==> default: Clearing any previously set network interfaces...
INFO network: Searching for matching hostonly network: 172.28.128.1
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: info: ==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: detail: SSH address: 127.0.0.1:2222
INFO interface: detail: default: SSH address: 127.0.0.1:2222
default: SSH address: 127.0.0.1:2222
INFO ssh: Attempting SSH connection...
INFO ssh: Attempting to connect to SSH...
INFO ssh: - Host: 127.0.0.1
INFO ssh: - Port: 2222
INFO ssh: - Username: vagrant
INFO ssh: - Password? false
INFO ssh: - Key Path: ["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH not ready: #<Vagrant::Errors::NetSSHException: An error occurred in the underlying SSH library that Vagrant uses.
The error message is shown below. In many cases, errors from this
library are caused by ssh-agent issues. Try disabling your SSH
agent or removing some keys and try again.
If the problem persists, please report a bug to the net-ssh project.
timeout during server version negotiating>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO guest: Autodetecting host type for [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>]
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xLinux Mint' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'Linux Mint' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'Linux Mint' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep 'ostree=' /proc/cmdline (sudo=false)
INFO ssh: Execute: [ -x /usr/bin/lsb_release ] && /usr/bin/lsb_release -i 2>/dev/null | grep Trisquel (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xelementary' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'elementary' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'elementary' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: uname -s | grep -i 'DragonFly' (sudo=false)
INFO ssh: Execute: cat /etc/pld-release (sudo=false)
INFO ssh: Execute: grep 'Amazon Linux' /etc/os-release (sudo=false)
INFO ssh: Execute: grep 'Fedora release' /etc/redhat-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xkali' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'kali' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'kali' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep Funtoo /etc/gentoo-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xubuntu' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'ubuntu' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'ubuntu' && exit
fi
exit 1
(sudo=false)
INFO guest: Detected: ubuntu!
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO ssh: Inserting key to avoid password: ssh-rsa AAAA/ vagrant
INFO interface: detail:
Inserting generated public key within guest...
INFO interface: detail: default:
default: Inserting generated public key within guest...
default:
default: Inserting generated public key within guest...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: insert_public_key [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "ssh-rsa AAAA/ vagrant"] (ubuntu)
INFO ssh: Execute: mkdir -p ~/.ssh
chmod 0700 ~/.ssh
cat '/tmp/vagrant-insert-pubkey-1532971970' >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
rm -f '/tmp/vagrant-insert-pubkey-1532971970'
exit $result
(sudo=false)
INFO host: Execute capability: set_ssh_key_permissions [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>, #<Pathname:/Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox/private_key>] (darwin)
INFO interface: detail: Removing insecure key from the guest if it's present...
INFO ssh: Execute: if test -f ~/.ssh/authorized_keys; then
grep -v -x -f '/tmp/vagrant-remove-pubkey-1532971970' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.tmp
mv ~/.ssh/authorized_keys.tmp ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
fi
rm -f '/tmp/vagrant-remove-pubkey-1532971970'
exit $result
(sudo=false)
INFO interface: detail: Key inserted! Disconnecting and reconnecting using new SSH key...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO interface: output: Machine booted and ready!
INFO warden: Calling OUT action: #<VagrantPlugins::ProviderVirtualBox::Action::SaneDefaults:0x00000001014560a8>
INFO interface: info: Setting hostname..
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: change_host_name [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "db-archive"] (ubuntu)
INFO ssh: Execute: hostname -f | grep '^db-archive$' (sudo=false)
INFO ssh: Execute: # Set the hostname
echo 'db-archive' > /etc/hostname
hostname -F /etc/hostname
if command -v hostnamectl; then
hostnamectl set-hostname 'db-archive'
fi
# Prepend ourselves to /etc/hosts
grep -w 'db-archive' /etc/hosts || {
if grep -w '^127\.0\.1\.1' /etc/hosts ; then
sed -i'' 's/^127\.0\.1\.1\s.*$/127.0.1.1\tdb-archive\tdb-archive/' /etc/hosts
else
sed -i'' '1i 127.0.1.1\tdb-archive\tdb-archive' /etc/hosts
fi
}
# Update mailname
echo 'db-archive' > /etc/mailname
# Restart hostname services
if test -f /etc/init.d/hostname; then
/etc/init.d/hostname start || true
fi
if test -f /etc/init.d/hostname.sh; then
/etc/init.d/hostname.sh start || true
fi
if test -x /sbin/dhclient ; then
/sbin/dhclient -r
/sbin/dhclient -nw
fi
(sudo=true)
INFO warden: Calling OUT action: #<Vagrant::Action::Builtin::SetHostname:0x00000001014560d0>
INFO synced_folders: Invoking synced folder enable: virtualbox
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "guestproperty", "get", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "/VirtualBox/GuestInfo/OS/Product"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Mounting shared folders...
INFO interface: detail: /vagrant =>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: mount_virtualbox_shared_folder [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "vagrant", "/vagrant", {:guestpath=>"/vagrant", :hostpath=>"/Users/ahayden/Development/LSS/db_archive_chef", :disabled=>false, :__vagrantfile=>true, :owner=>"vagrant", :group=>"vagrant"}] (ubuntu)
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant
fi
(sudo=true)
INFO ssh: Execute: id -u vagrant (sudo=false)
INFO ssh: Execute: getent group vagrant (sudo=false)
INFO ssh: Execute: mkdir -p /etc/chef (sudo=true)
INFO ssh: Execute: mount -t vboxsf -o uid=1000,gid=1000 etc_chef /etc/chef (sudo=true)
INFO ssh: Execute: chown 1000:1000 /etc/chef (sudo=true)
INFO ssh: Execute: if command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/etc/chef
fi
(sudo=true)
INFO provision: Writing provisioning sentinel so we don't provision again
INFO interface: info: Running provisioner: chef_solo...
INFO guest: Execute capability: chef_installed [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24"] (ubuntu)
INFO ssh: Execute: test -x /opt/chef/bin/knife&& /opt/chef/bin/knife --version | grep 'Chef: 12.10.24' (sudo=true)
INFO interface: detail: Installing Chef (12.10.24)...
INFO interface: detail: default: Installing Chef (12.10.24)...
default: Installing Chef (12.10.24)...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: chef_install [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24", "stable", "https://omnitruck.chef.io", {:product=>"chef", :channel=>"stable", :version=>:"12.10.24", :omnibus_url=>"https://omnitruck.chef.io", :force=>false, :download_path=>nil}] (ubuntu)
INFO ssh: Execute: apt-get update -y -qq (sudo=true)
INFO ssh: Execute: apt-get install -y -qq curl (sudo=true)
INFO ssh: Execute: curl -sL https://omnitruck.chef.io/install.sh | bash -s -- -P "chef" -c "stable" -v "12.10.24" (sudo=true)
==> default: Running chef-solo...
==> default: [2018-07-30T17:33:12+00:00] INFO: Forking chef instance to converge...
INFO interface: info: Starting Chef Client, version 12.10.24
==> default: [2018-07-30T17:33:12+00:00] INFO: *** Chef 12.10.24 ***
INFO interface: info: [2018-07-30T17:33:12+00:00] INFO: Platform: x86_64-linux
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Setting the run_list to ["recipe[chef-vault]", "recipe[db_archive::update]", "recipe[db_archive::install_packages]", "recipe[db_archive::install_hostsfile]", "recipe[db_archive::install_nginx]"] from CLI options
==> default: [2018-07-30T17:33:14+00:00] INFO: Starting Chef Run for ahayden
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Running start handlers
INFO interface: info: ==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
INFO interface: info: Installing Cookbook Gems:
INFO interface: info: Running handlers:
[2018-07-30T17:33:15+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 03 seconds
[2018-07-30T17:33:15+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2018-07-30T17:33:15+00:00] ERROR: Expected process to exit with [0], but received '5'
---- Begin output of bundle install ----
STDOUT: Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching gem metadata from https://rubygems.org/..........
Fetching version metadata from https://rubygems.org/..
Resolving dependencies...
Installing chef-vault 3.3.0
Gem::InstallError: chef-vault requires Ruby version >= 2.2.0.
Using bundler 1.11.2
An error occurred while installing chef-vault (3.3.0), and Bundler cannot
continue.
Make sure that `gem install chef-vault -v '3.3.0'` succeeds before bundling.
STDERR:
Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO environment: Released process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO environment: Running hook: environment_unload
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO runner: Running action: environment_unload #<Vagrant::Action::Builder:0x0000000101164c50>
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #<VagrantPlugins::Chef::Provisioner::Base::ChefError: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.>
ERROR vagrant: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
ERROR vagrant: /plugins/provisioners/chef/provisioner/chef_solo.rb:220:in `run_chef_solo'
/plugins/provisioners/chef/provisioner/chef_solo.rb:65:in `provision'
/lib/vagrant/action/builtin/provision.rb:138:in `run_provisioner'
/lib/vagrant/action/warden.rb:95:in `call'
/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant/action/builder.rb:116:in `call'
/lib/vagrant/action/runner.rb:66:in `block in run'
/lib/vagrant/util/busy.rb:19:in `busy'
/lib/vagrant/action/runner.rb:66:in `run'
/lib/vagrant/environment.rb:510:in `hook'
/lib/vagrant/action/builtin/provision.rb:126:in `call'
/lib/vagrant/action/builtin/provision.rb:126:in `block in call'
/lib/vagrant/action/builtin/provision.rb:103:in `each'
/lib/vagrant/action/builtin/provision.rb:103:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/upload.rb:23:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/install.rb:19:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/save.rb:21:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/plugins/providers/virtualbox/action/clear_forwarded_ports.rb:15:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `action
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/batch_action.rb:82:in `block (2 levels) in run'
INFO interface: error: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
I had to omit some things from the backtrace in order to post it...
The first sign of trouble, towards the top, is WARN global: resolv replacement has not been enabled!
The next area of concern is util.rb:93: warning: key "io" is duplicated and overwritten on line 107.
Then there are many cases of Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"] followed by INFO subprocess: Command not in installer, restoring original environment.... This happens very many times with VBoxManage and a couple more times with curl and berks. I think this is the problem.
At the end, it finally seems to fail with a gem install error for chef-vault. It says the gem requires Ruby version >= 2.2.0, which I do have.
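One plausible reading of that last error: the "Installing Cookbook Gems" step runs bundler inside the guest against Chef's embedded Ruby, not the host's Ruby, and chef-vault 3.3.0 declares required_ruby_version >= 2.2.0. The snippet below only illustrates that version check; the embedded 2.1.8 and host 2.4.0 values are assumptions, not taken from the log.
# Illustration only: why "requires Ruby version >= 2.2.0" can fail even though
# a newer Ruby is installed on the host. RubyGems checks the Ruby that is
# actually running the install, which here is Chef's embedded Ruby in the VM.
require 'rubygems'

requirement = Gem::Requirement.new('>= 2.2.0')  # chef-vault 3.3.0's constraint
embedded    = Gem::Version.new('2.1.8')         # assumed embedded Ruby of Chef 12.10.x
host        = Gem::Version.new('2.4.0')         # example host Ruby; never consulted
puts requirement.satisfied_by?(embedded)        # => false, hence Gem::InstallError
puts requirement.satisfied_by?(host)            # => true, but irrelevant to chef-solo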
Vagrantfile:
VAGRANTFILE_API_VERSION = '2'
Vagrant.require_version '>= 1.5.0'
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.hostname = 'db-archive'
if Vagrant.has_plugin?("vagrant-omnibus")
config.omnibus.chef_version = 'latest'
end
config.vm.box = 'bento/ubuntu-14.04'
config.vm.network :private_network, type: 'dhcp'
config.vm.network 'forwarded_port', guest: 80, host: 8080
config.vm.network 'forwarded_port', guest: 443, host: 8443
config.vm.synced_folder "#{ENV['HOME']}/Documents/src/db_archive", '/var/www/db_archive'
config.vm.synced_folder "#{ENV['HOME']}/.chef", '/etc/chef'
config.berkshelf.enabled = true
config.vm.provision :chef_solo do |chef|
chef.channel = 'stable'
chef.version = '12.10.24'
chef.environment = 'vagrant'
chef.environments_path = 'environments'
chef.run_list = [
"recipe[chef-vault]",
"recipe[db_archive::update]",
"recipe[db_archive::install_packages]",
"recipe[db_archive::install_hostsfile]",
"recipe[db_archive::install_nginx]"
]
chef.data_bags_path = 'data_bags'
chef.node_name = 'ahayden'
end
end
You are using Chef 12, which is no longer supported by the latest chef-vault. You'll need to upgrade your version of Chef.
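If you take that route, here is a sketch of the corresponding Vagrantfile change, assuming everything else stays as posted in the question; the version string is only an example of a Chef release whose embedded Ruby satisfies chef-vault's >= 2.2.0 requirement:
Vagrant.configure('2') do |config|
  # ... keep the box, network, synced_folder and berkshelf settings from the question ...
  config.vm.provision :chef_solo do |chef|
    chef.channel = 'stable'
    chef.version = '13.12.14'  # example newer release; was '12.10.24'
    # ... keep chef.environment, chef.run_list, chef.data_bags_path, chef.node_name ...
  end
end
Note that the question's Vagrantfile also sets config.omnibus.chef_version = 'latest' when vagrant-omnibus is installed, which may need to agree with whatever version you pin here.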
In my metadata.rb file, I changed the line depends 'chef-vault' to depends 'chef-vault', '=2.1.1'. Then, when I ran vagrant destroy && vagrant up, it worked fine.
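For reference, the same pin in cookbook form; only the depends line comes from the workaround above, while the name and version fields are assumptions, since the question never shows the full metadata.rb:
# metadata.rb of the wrapper cookbook (everything other than the depends pin is assumed)
name     'db_archive'              # assumed from the db_archive::* run_list entries
version  '0.1.0'                   # assumed
depends  'chef-vault', '= 2.1.1'   # was: depends 'chef-vault', which pulled in 3.3.0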
