Strange behavior using docker - neo4j

I'm using Neo4j with Docker.
By and large, the product is running, but I cannot configure it.
I added some settings to neo4j/conf/neo4j.conf, but they are ignored.
This is my launch command:
docker run -e NEO4J_dbms_security_procedures_unrestricted=apoc.\\\* --publish=7474:7474 --publish=7687:7687 --volume=$HOME/arianne/2017/neo4j/data:/data --volume=$HOME/arianne/2017/neo4j/logs:/logs --volume=$HOME/arianne/2017/neo4j/conf:/conf --rm -e NEO4J_AUTH=none --volume=$HOME/arianne/2017/neo4j/plugins:/plugins neo4j:3.2
and this is the loading log ...
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2017-07-05 10:18:27.679+0000 WARN Unknown config option: causal_clustering.discovery_listen_address
2017-07-05 10:18:27.681+0000 WARN Unknown config option: causal_clustering.raft_advertised_address
2017-07-05 10:18:27.681+0000 WARN Unknown config option: causal_clustering.raft_listen_address
2017-07-05 10:18:27.681+0000 WARN Unknown config option: ha.host.coordination
2017-07-05 10:18:27.681+0000 WARN Unknown config option: causal_clustering.transaction_advertised_address
2017-07-05 10:18:27.682+0000 WARN Unknown config option: causal_clustering.discovery_advertised_address
2017-07-05 10:18:27.682+0000 WARN Unknown config option: ha.host.data
2017-07-05 10:18:27.682+0000 WARN Unknown config option: causal_clustering.transaction_listen_address
2017-07-05 10:18:27.717+0000 INFO ======== Neo4j 3.2.1 ========
2017-07-05 10:18:27.918+0000 INFO No SSL certificate found, generating a self-signed certificate..
2017-07-05 10:18:28.759+0000 INFO Starting...
2017-07-05 10:18:30.433+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-07-05 10:18:40.255+0000 INFO Started.
2017-07-05 10:18:42.441+0000 INFO Remote interface available at http://192.168.10.196:7474/
The directories are totally different from those I put on the command line, and the directory /var/lib/neo4j doesn't exist!
Any idea?
Paolo

This works for me:
sudo docker run \
-p 7474:7474 \
-p 7687:7687 \
-p 7473:7473 \
-v $HOME/dockerneo4j/data:/data \
-v $HOME/dockerneo4j/logs:/logs \
-v $HOME/dockerneo4j/import:/import \
-v $HOME/dockerneo4j/conf:/conf \
-v $HOME/dockerneo4j/plugins:/plugins \
neo4j:3.2.1
The config file in dockerneo4j/conf is used (easily tested by changing the database name in there and checking whether a new database folder is created in dockerneo4j/data). The APOC plugins are picked up too.
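For example, a quick test of the mounted config could look like this (a minimal sketch; dbms.active_database is a standard Neo4j 3.x setting and test.db is just an example name):
# $HOME/dockerneo4j/conf/neo4j.conf
dbms.active_database=test.db
After restarting the container, a test.db folder should appear under dockerneo4j/data/databases if the mounted file is really being read.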
It is correct (although, granted, confusing) that the log shows /var/lib/neo4j directories; those are the internal ones in your Docker image.
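As an aside, the NEO4J_dbms_security_procedures_unrestricted variable in the original command is just the environment-variable form of a neo4j.conf setting (dots become underscores), so that particular option is applied even without the mounted config file. The roughly equivalent conf line would be:
# conf/neo4j.conf equivalent of the NEO4J_dbms_security_procedures_unrestricted environment variable
dbms.security.procedures.unrestricted=apoc.*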
Hope this helps,
Tom
P.S. http://kvangundy.com/wp/set-up-neo4j-and-docker/ has all the information.

Related

Error when importing realm config for keycloak within a docker container

I deployed Keycloak on Docker with the command below:
docker run -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8080:8080 quay.io/keycloak/keycloak:17.0.1 start-dev
And I connected to the container using:
docker exec -ti <CONTAINER> bash
Then I created a realm config file at /tmp/realm.json
And finally, I ran the command:
/opt/keycloak/bin/kc.sh import --file /tmp/realm.json
But I encountered the errors below:
2022-09-26 21:40:37,142 ERROR [org.keycloak.services] (main) KC-SERVICES0010: Failed to add user 'admin' to realm 'master': user with username exists
2022-09-26 21:40:37,447 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (import_export) mode
2022-09-26 21:40:37,448 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Unable to start HTTP server
2022-09-26 21:40:37,448 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: io.quarkus.runtime.QuarkusBindException
2022-09-26 21:40:37,449 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
I hope to hear some suggestions or solutions to this problem; thank you in advance!

JFROG Artifactory installation and setup error 404

Dear whoever may read this topic,
I'm looking for help in properly setting up my JFrog OSS instance.
I downloaded and unpacked the package suggested in the installation steps.
I modified the ports in the docker-compose.yaml and gave it a shot.
Below are the outputs of several files and of the execution.
System Diagnostic
└──╼ $sudo ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered compose installer, proceeding with var directory as :[/root/.jfrog/artifactory/var]
******************************** CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed
[ERROR] RESULT: NOT OK
---
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
[ERROR] Artifactory 8184 NOT AVAILABLE used by processName docker-pr processId 30261
[ERROR] Either stop the process or change the port by configuring artifactory.port in system.yaml
[ERROR] RouterExternal 8183 NOT AVAILABLE used by processName docker-pr processId 30274
[ERROR] Either stop the process or change the port by configuring router.entrypoints.externalPort in system.yaml
******************************** CHECKING MAX OPEN FILES
********************************
Running: ulimit -n
[ERROR] RESULT: NOT OK
---
[ERROR] Number found 1024 is less than recommended minimum of 32000 for USER "root"
******************************** CHECKING MAX OPEN PROCESSES
********************************
Running: ulimit -u
RESULT: OK
******************************** CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
******************************** CHECK FIREWALL IPTABLES SETTINGS
********************************
Running: iptables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP"
RESULT: OK
Artifactory 8184 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8183 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
******************************** CHECK FIREWALL IP6TABLES SETTINGS
********************************
Running: ip6tables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP"
RESULT: OK
Artifactory 8184 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8183 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
******************************** CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1
RESULT: OK
******************************** PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY
RESULT: OK
Checking proxy configured in HTTPS_PROXY
RESULT: OK
Checking proxy configured in NO_PROXY
RESULT: OK
Checking proxy configured in ALL_PROXY
RESULT: OK
************** End Artifactory Diagnostics *******************
docker-compose.yaml
version: '3'
services:
  postgres:
    image: ${DOCKER_REGISTRY}/postgres:9.6.11
    container_name: postgresql
    environment:
      - POSTGRES_DB=artifactory
      - POSTGRES_USER=artifactory
      - POSTGRES_PASSWORD=r00t
    ports:
      - 5437:5437
    volumes:
      - ${ROOT_DATA_DIR}/var/data/postgres/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    restart: always
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
  artifactory:
    image: ${DOCKER_REGISTRY}/jfrog/artifactory-oss:${ARTIFACTORY_VERSION}
    container_name: artifactory
    volumes:
      - ${ROOT_DATA_DIR}/var:/var/opt/jfrog/artifactory
      - /etc/localtime:/etc/localtime:ro
    restart: always
    depends_on:
      - postgres
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
    environment:
      - JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}
    ports:
      - ${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT} # for router communication
      - 8185:8185 # for artifactory communication
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
system.yaml
──╼ $sudo cat /root/.jfrog/artifactory/var/etc/system.yaml
shared:
  node:
    ip: 127.0.1.1
    id: parrot
    name: parrot
  database:
    type: postgresql
    driver: org.postgresql.Driver
    password: r00t
    username: artifactory
    url: jdbc:postgresql://127.0.1.1:5437/artifactory
router:
  entrypoints:
    externalPort: 8185
Could someone please help me out with this issue?
EDIT 1.
catalina localhost.log
04-Jun-2020 07:07:55.585 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
04-Jun-2020 07:07:55.624 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/opt/jfrog/artifactory' resolved from: System property
04-Jun-2020 07:07:58.411 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
04-Jun-2020 07:09:59.503 INFO [localhost-startStop-3] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
console.log
2020-06-03T21:16:48.869Z [jfrt ] [INFO ] [ ] [o.j.c.w.FileWatcher:147 ] [Thread-5 ] - Starting watch of folder configurations
2020-06-03T21:16:48.938Z [tomct] [SEVERE] [ ] [org.apache.tomcat.jdbc.pool.ConnectionPool] [org.apache.tomcat.jdbc.pool.ConnectionPool init] - Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to 127.0.1.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
.env
## This file contains environment variables used by the docker-compose yaml files
## IMPORTANT: During installation, this file may be updated based on user choices or existing configuration
## Docker registry to fetch images from
DOCKER_REGISTRY=docker.bintray.io
## Version of artifactory to install
ARTIFACTORY_VERSION=7.5.5
## The Installation directory for Artifactory. IF not entered, the script will prompt you for this input. Default [$HOME/.jfrog/artifactory]
ROOT_DATA_DIR=/root/.jfrog/artifactory
# Router external port mapping. This property may be overridden from the system.yaml (router.entrypoints.externalPort)
JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=8185
EDIT 2.
I'm facing the same issue when trying to follow another tutorial.
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered debian installer, proceeding with var directory as :[/opt/jfrog/artifactory/var]
********************************
CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed
RESULT: OK
Artifactory 8081 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8082 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECKING MAX OPEN FILES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open files check for this user
********************************
CHECKING MAX OPEN PROCESSES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open processes check for this user
********************************
CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
********************************
CHECK FIREWALL IPTABLES SETTINGS
********************************
RESULT: Iptables is not configured
********************************
CHECK FIREWALL IP6TABLES SETTINGS
********************************
RESULT: Ip6tables is not configured
********************************
CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1
[ERROR] RESULT: NOT OK
---
[ERROR] Unable to resolve localhost. check if localhost is well defined in /etc/hosts
********************************
PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY
RESULT: OK
Checking proxy configured in HTTPS_PROXY
RESULT: OK
Checking proxy configured in NO_PROXY
RESULT: OK
Checking proxy configured in ALL_PROXY
RESULT: OK
************** End Artifactory Diagnostics *******************
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/host
host.conf hostname hosts
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/host
host.conf hostname hosts
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 1cdf04cd9ed1
Thanks for your answers to my questions. Your Postgres URL is being resolved from inside the Artifactory container, so it points back at the Artifactory container itself rather than at Postgres.
Please change the URL to jdbc:postgresql://postgres:5437/artifactory and try again, or run both services with network_mode: host.
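A sketch of the first suggestion applied to the system.yaml above (this assumes the Compose service name postgres from the docker-compose.yaml; whether 5437 or the Postgres default 5432 is right depends on which port the postgres container actually listens on):
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    username: artifactory
    password: r00t
    # use the Compose service name instead of 127.0.1.1 so it resolves to the postgres container
    url: jdbc:postgresql://postgres:5437/artifactory
For the hostname postgres to resolve, both services need to be on the same Docker network, which Compose sets up by default for services defined in the same file.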

Vagrant Provision fails at installing Ruby Gem chef-vault

As the new intern, I'm supposed to get one of our applications running on my local machine (OS X). It's a large set of files, and it uses frameworks I'm not familiar with, such as Vagrant and Chef.
I was told it should be as easy as cloning the repo, running vagrant up, and viewing the page in my browser, but I've encountered a few problems. Now, when I go into the directory and run vagrant up, it shows a few questionable things:
Admins-MacBook-Pro:db_archive_chef ahayden$ VAGRANT_LOG=info vagrant up
INFO global: Vagrant version: 2.1.2
INFO global: Ruby version: 2.4.4
INFO global: RubyGems version: 2.6.14.1
INFO global: VAGRANT_LOG="info"
INFO global: VAGRANT_INSTALLER_VERSION="2"
INFO global: VAGRANT_INSTALLER_EMBEDDED_DIR="/opt/vagrant/embedded"
INFO global: VAGRANT_INSTALLER_ENV="1"
INFO global: VAGRANT_EXECUTABLE="/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant"
WARN global: resolv replacement has not been enabled!
INFO global: Plugins:
INFO global: - vagrant-berkshelf = [installed: 5.1.2 constraint: > 0]
INFO global: - virtualbox = [installed: 0.8.6 constraint: > 0]
INFO global: Loading plugins!
INFO global: Loading plugin `vagrant-berkshelf` with default require: `vagrant-berkshelf`
INFO root: Version requirements from Vagrantfile: [">= 1.5"]
INFO root: - Version requirements satisfied!
INFO manager: Registered plugin: berkshelf
INFO global: Loading plugin `virtualbox` with default require: `virtualbox`
/Users/ahayden/.vagrant.d/gems/2.4.4/gems/virtualbox-0.8.6/lib/virtualbox/com/ffi/util.rb:93: warning: key "io" is duplicated and overwritten on line 107
INFO vagrant: `vagrant` invoked: ["up"]
INFO environment: Environment initialized (#<Vagrant::Environment:0x00000001040deee0>)
INFO environment: - cwd: /Users/ahayden/Development/LSS/db_archive_chef
INFO environment: Home path: /Users/ahayden/.vagrant.d
INFO environment: Local data path: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant
INFO environment: Running hook: environment_plugins_loaded
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO root: Version requirements from Vagrantfile: [">= 1.5.0"]
INFO root: - Version requirements satisfied!
INFO loader: Loading configuration in order: [:home, :root]
INFO command: Active machine found with name default. Using provider: virtualbox
INFO environment: Getting machine: default (virtualbox)
INFO environment: Uncached load of machine.
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "--version"]
INFO subprocess: Command not in installer, restoring original environment...
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO loader: Set "2174531280_machine_default" = []
INFO loader: Loading configuration in order: [:home, :root, "2174531280_machine_default"]
INFO box_collection: Box found: bento/ubuntu-14.04 (virtualbox)
INFO environment: Running hook: authenticate_box_url
INFO host: Autodetecting host type for [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>]
INFO host: Detected: darwin!
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 2 hooks defined.
INFO runner: Running action: authenticate_box_url #<Vagrant::Action::Builder:0x00000001030ab348>
INFO loader: Loading configuration in order: [:"2175328800_bento/ubuntu-14.04_virtualbox", :home, :root, "2174531280_machine_default"]
INFO machine: Initializing machine: default
INFO machine: - Provider: VagrantPlugins::ProviderVirtualBox::Provider
INFO machine: - Box: #<Vagrant::Box:0x00000001034acc08>
INFO machine: - Data dir: /Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"]
INFO subprocess: Command not in installer, restoring original environment...
INFO machine: New machine ID: nil
INFO base: VBoxManage path: VBoxManage
ERROR loader: Unknown config sources: [:"2175328800_bento/ubuntu-14.04_virtualbox"]
INFO base: VBoxManage path: VBoxManage
INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_5_2
INFO base: VBoxManage path: VBoxManage
INFO environment: Getting machine: default (virtualbox)
INFO environment: Returning cached machine: default (virtualbox)
INFO command: With machine: default (#
INFO interface: info: Bringing machine 'default' up with 'virtualbox' provider...
Bringing machine 'default' up with 'virtualbox' provider...
INFO batch_action: Enabling parallelization by default.
INFO batch_action: Disabling parallelization because provider doesn't support it: virtualbox
INFO batch_action: Batch action will parallelize: false
INFO batch_action: Starting action: #<Vagrant::Machine:0x0000000100a51238> up {:destroy_on_error=>true, :install_provider=>false, :parallel=>true, :provision_ignore_sentinel=>false, :provision_types=>nil}
INFO machine: Calling action: up on provider VirtualBox (new VM)
INFO environment: Acquired process lock: dotlock
INFO environment: Released process lock: dotlock
INFO environment: Acquired process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
<Proc:0x000000010157ff60#/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/lib/vagrant/action/warden.rb:94 (lambda)>
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::HandleBox:0x00000001015fc448>
INFO handle_box: Machine already has box. HandleBox will not run.
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Check:0x000000010135cee0>
INFO subprocess: Starting process: ["/usr/local/bin/berks", "--version", "--format", "json"]
INFO subprocess: Command not in installer, restoring original environment...
default: The Berkshelf shelf is at "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default"
INFO prepare_clone: no clone master, not preparing clone snapshot
INFO warden: Calling IN action: #<VagrantPlugins::ProviderVirtualBox::Action::Import:0x0000000100a5add8>
INFO interface: info: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: ==> default: Importing base box 'bento/ubuntu-14.04'...
==> default: Importing base box 'bento/ubuntu-14.04'...
INFO interface: info: Progress: 90%
Progress: 90%
==> default: Checking if box 'bento/ubuntu-14.04' is up to date...
INFO downloader: Downloader starting download:
INFO downloader: -- Source: https://vagrantcloud.com/bento/ubuntu-14.04
INFO downloader: -- Destination: /var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi
INFO subprocess: Starting process: ["/opt/vagrant/embedded/bin/curl", "-q", "--fail", "--location", "--max-redirs", "10", "--verbose", "--user-agent", "Vagrant/2.1.2 (+https://www.vagrantup.com; ruby2.4.4)", "-H", "Accept: application/json", "--output", "/var/folders/gf/skrz9ljj2z3b3vm947tt5r680000gp/T/vagrant-load-metadata20180730-4484-lo2vxi", "https://vagrantcloud.com/bento/ubuntu-14.04"]
INFO subprocess: Command in the installer. Specifying DYLD_LIBRARY_PATH...
==> default: Updating Vagrant's Berkshelf...
INFO subprocess: Starting process: ["/usr/local/bin/berks", "vendor", "/Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default", "--berksfile", "/Users/ahayden/Development/LSS/db_archive_chef/Berksfile"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Resolving cookbook dependencies...
Fetching 'db_archive' from source at .
Using chef-vault (3.1.0)
Using db_archive (0.3.14) from source at .
Using hostsfile (3.0.1)
INFO interface: output: ==> default: Resolving cookbook dependencies...
==> default: Fetching 'db_archive' from source at .
==> default: Using chef-vault (3.1.0)
==> default: Using db_archive (0.3.14) from source at .
==> default: Using hostsfile (3.0.1)
==> default: Vendoring chef-vault (3.1.0) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/chef-vault
==> default: Vendoring db_archive (0.3.14) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/db_archive
==> default: Vendoring hostsfile (3.0.1) to /Users/ahayden/.berkshelf/vagrant-berkshelf/shelves/berkshelf20180730-4484-1fezzea-default/hostsfile
INFO warden: Calling IN action: #<VagrantPlugins::Berkshelf::Action::Upload:0x000000010171f3e8>
INFO upload: Provisioner does need to upload
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::Provision:0x00000001016de3c0>
INFO provision: Checking provisioner sentinel file...
INFO interface: warn: The cookbook path '/Users/ahayden/Development/LSS/db_archive_chef/cookbooks' doesn't exist. Ignoring...
==> default: Clearing any previously set network interfaces...
INFO network: Searching for matching hostonly network: 172.28.128.1
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: info: ==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: detail: SSH address: 127.0.0.1:2222
INFO interface: detail: default: SSH address: 127.0.0.1:2222
default: SSH address: 127.0.0.1:2222
INFO ssh: Attempting SSH connection...
INFO ssh: Attempting to connect to SSH...
INFO ssh: - Host: 127.0.0.1
INFO ssh: - Port: 2222
INFO ssh: - Username: vagrant
INFO ssh: - Password? false
INFO ssh: - Key Path: ["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH not ready: #<Vagrant::Errors::NetSSHException: An error occurred in the underlying SSH library that Vagrant uses.
The error message is shown below. In many cases, errors from this
library are caused by ssh-agent issues. Try disabling your SSH
agent or removing some keys and try again.
If the problem persists, please report a bug to the net-ssh project.
timeout during server version negotiating>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
["/Users/ahayden/.vagrant.d/insecure_private_key"]
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO guest: Autodetecting host type for [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>]
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xLinux Mint' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'Linux Mint' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'Linux Mint' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep 'ostree=' /proc/cmdline (sudo=false)
INFO ssh: Execute: [ -x /usr/bin/lsb_release ] && /usr/bin/lsb_release -i 2>/dev/null | grep Trisquel (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xelementary' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'elementary' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'elementary' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: uname -s | grep -i 'DragonFly' (sudo=false)
INFO ssh: Execute: cat /etc/pld-release (sudo=false)
INFO ssh: Execute: grep 'Amazon Linux' /etc/os-release (sudo=false)
INFO ssh: Execute: grep 'Fedora release' /etc/redhat-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xkali' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'kali' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'kali' && exit
fi
exit 1
(sudo=false)
INFO ssh: Execute: grep Funtoo /etc/gentoo-release (sudo=false)
INFO ssh: Execute: if test -r /etc/os-release; then
source /etc/os-release && test 'xubuntu' = "x$ID" && exit
fi
if test -x /usr/bin/lsb_release; then
/usr/bin/lsb_release -i 2>/dev/null | grep -qi 'ubuntu' && exit
fi
if test -r /etc/issue; then
cat /etc/issue | grep -qi 'ubuntu' && exit
fi
exit 1
(sudo=false)
INFO guest: Detected: ubuntu!
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO ssh: Inserting key to avoid password: ssh-rsa AAAA/ vagrant
INFO interface: detail:
Inserting generated public key within guest...
INFO interface: detail: default:
default: Inserting generated public key within guest...
default:
default: Inserting generated public key within guest...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: insert_public_key [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "ssh-rsa AAAA/ vagrant"] (ubuntu)
INFO ssh: Execute: mkdir -p ~/.ssh
chmod 0700 ~/.ssh
cat '/tmp/vagrant-insert-pubkey-1532971970' >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
rm -f '/tmp/vagrant-insert-pubkey-1532971970'
exit $result
(sudo=false)
INFO host: Execute capability: set_ssh_key_permissions [#<Vagrant::Environment: /Users/ahayden/Development/LSS/db_archive_chef>, #<Pathname:/Users/ahayden/Development/LSS/db_archive_chef/.vagrant/machines/default/virtualbox/private_key>] (darwin)
INFO interface: detail: Removing insecure key from the guest if it's present...
INFO ssh: Execute: if test -f ~/.ssh/authorized_keys; then
grep -v -x -f '/tmp/vagrant-remove-pubkey-1532971970' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.tmp
mv ~/.ssh/authorized_keys.tmp ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
result=$?
fi
rm -f '/tmp/vagrant-remove-pubkey-1532971970'
exit $result
(sudo=false)
INFO interface: detail: Key inserted! Disconnecting and reconnecting using new SSH key...
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO interface: output: Machine booted and ready!
INFO warden: Calling OUT action: #<VagrantPlugins::ProviderVirtualBox::Action::SaneDefaults:0x00000001014560a8>
INFO interface: info: Setting hostname..
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: change_host_name [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "db-archive"] (ubuntu)
INFO ssh: Execute: hostname -f | grep '^db-archive$' (sudo=false)
INFO ssh: Execute: # Set the hostname
echo 'db-archive' > /etc/hostname
hostname -F /etc/hostname
if command -v hostnamectl; then
hostnamectl set-hostname 'db-archive'
fi
# Prepend ourselves to /etc/hosts
grep -w 'db-archive' /etc/hosts || {
if grep -w '^127\.0\.1\.1' /etc/hosts ; then
sed -i'' 's/^127\.0\.1\.1\s.*$/127.0.1.1\tdb-archive\tdb-archive/' /etc/hosts
else
sed -i'' '1i 127.0.1.1\tdb-archive\tdb-archive' /etc/hosts
fi
}
# Update mailname
echo 'db-archive' > /etc/mailname
# Restart hostname services
if test -f /etc/init.d/hostname; then
/etc/init.d/hostname start || true
fi
if test -f /etc/init.d/hostname.sh; then
/etc/init.d/hostname.sh start || true
fi
if test -x /sbin/dhclient ; then
/sbin/dhclient -r
/sbin/dhclient -nw
fi
(sudo=true)
INFO warden: Calling OUT action: #<Vagrant::Action::Builtin::SetHostname:0x00000001014560d0>
INFO synced_folders: Invoking synced folder enable: virtualbox
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "guestproperty", "get", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "/VirtualBox/GuestInfo/OS/Product"]
INFO subprocess: Command not in installer, restoring original environment...
INFO interface: output: Mounting shared folders...
INFO interface: detail: /vagrant =>
INFO subprocess: Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "560c0ba3-253c-478d-8cc9-97d8c2fbb1da", "--machinereadable"]
INFO subprocess: Command not in installer, restoring original environment...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: mount_virtualbox_shared_folder [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "vagrant", "/vagrant", {:guestpath=>"/vagrant", :hostpath=>"/Users/ahayden/Development/LSS/db_archive_chef", :disabled=>false, :__vagrantfile=>true, :owner=>"vagrant", :group=>"vagrant"}] (ubuntu)
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant
fi
(sudo=true)
INFO ssh: Execute: id -u vagrant (sudo=false)
INFO ssh: Execute: getent group vagrant (sudo=false)
INFO ssh: Execute: mkdir -p /etc/chef (sudo=true)
INFO ssh: Execute: mount -t vboxsf -o uid=1000,gid=1000 etc_chef /etc/chef (sudo=true)
INFO ssh: Execute: chown 1000:1000 /etc/chef (sudo=true)
INFO ssh: Execute: if command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/etc/chef
fi
(sudo=true)
INFO provision: Writing provisioning sentinel so we don't provision again
INFO interface: info: Running provisioner: chef_solo...
INFO guest: Execute capability: chef_installed [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24"] (ubuntu)
INFO ssh: Execute: test -x /opt/chef/bin/knife&& /opt/chef/bin/knife --version | grep 'Chef: 12.10.24' (sudo=true)
INFO interface: detail: Installing Chef (12.10.24)...
INFO interface: detail: default: Installing Chef (12.10.24)...
default: Installing Chef (12.10.24)...
INFO ssh: SSH is ready!
INFO ssh: Execute: (sudo=false)
INFO guest: Execute capability: chef_install [#<Vagrant::Machine: default (VagrantPlugins::ProviderVirtualBox::Provider)>, "chef", :"12.10.24", "stable", "https://omnitruck.chef.io", {:product=>"chef", :channel=>"stable", :version=>:"12.10.24", :omnibus_url=>"https://omnitruck.chef.io", :force=>false, :download_path=>nil}] (ubuntu)
INFO ssh: Execute: apt-get update -y -qq (sudo=true)
INFO ssh: Execute: apt-get install -y -qq curl (sudo=true)
INFO ssh: Execute: curl -sL https://omnitruck.chef.io/install.sh | bash -s -- -P "chef" -c "stable" -v "12.10.24" (sudo=true)
==> default: Running chef-solo...
==> default: [2018-07-30T17:33:12+00:00] INFO: Forking chef instance to converge...
INFO interface: info: Starting Chef Client, version 12.10.24
==> default: [2018-07-30T17:33:12+00:00] INFO: *** Chef 12.10.24 ***
INFO interface: info: [2018-07-30T17:33:12+00:00] INFO: Platform: x86_64-linux
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Setting the run_list to ["recipe[chef-vault]", "recipe[db_archive::update]", "recipe[db_archive::install_packages]", "recipe[db_archive::install_hostsfile]", "recipe[db_archive::install_nginx]"] from CLI options
==> default: [2018-07-30T17:33:14+00:00] INFO: Starting Chef Run for ahayden
INFO interface: info: [2018-07-30T17:33:14+00:00] INFO: Running start handlers
INFO interface: info: ==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
==> default: [2018-07-30T17:33:14+00:00] INFO: Start handlers complete.
INFO interface: info: Installing Cookbook Gems:
INFO interface: info: Running handlers:
[2018-07-30T17:33:15+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 03 seconds
[2018-07-30T17:33:15+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2018-07-30T17:33:15+00:00] ERROR: Expected process to exit with [0], but received '5'
---- Begin output of bundle install ----
STDOUT: Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching gem metadata from https://rubygems.org/..........
Fetching version metadata from https://rubygems.org/..
Resolving dependencies...
Installing chef-vault 3.3.0
Gem::InstallError: chef-vault requires Ruby version >= 2.2.0.
Using bundler 1.11.2
An error occurred while installing chef-vault (3.3.0), and Bundler cannot
continue.
Make sure that `gem install chef-vault -v '3.3.0'` succeeds before bundling.
STDERR:
Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
INFO warden: Beginning recovery process...
INFO warden: Recovery complete.
ERROR warden: Error occurred: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
INFO environment: Released process lock: machine-action-1c8a0b7102d23451e5804c5357d8a327
INFO environment: Running hook: environment_unload
INFO runner: Preparing hooks for middleware sequence...
INFO runner: 1 hooks defined.
INFO runner: Running action: environment_unload #<Vagrant::Action::Builder:0x0000000101164c50>
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #<VagrantPlugins::Chef::Provisioner::Base::ChefError: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.>
ERROR vagrant: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
ERROR vagrant: /plugins/provisioners/chef/provisioner/chef_solo.rb:220:in `run_chef_solo'
/plugins/provisioners/chef/provisioner/chef_solo.rb:65:in `provision'
/lib/vagrant/action/builtin/provision.rb:138:in `run_provisioner'
/lib/vagrant/action/warden.rb:95:in `call'
/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant/action/builder.rb:116:in `call'
/lib/vagrant/action/runner.rb:66:in `block in run'
/lib/vagrant/util/busy.rb:19:in `busy'
/lib/vagrant/action/runner.rb:66:in `run'
/lib/vagrant/environment.rb:510:in `hook'
/lib/vagrant/action/builtin/provision.rb:126:in `call'
/lib/vagrant/action/builtin/provision.rb:126:in `block in call'
/lib/vagrant/action/builtin/provision.rb:103:in `each'
/lib/vagrant/action/builtin/provision.rb:103:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/upload.rb:23:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/install.rb:19:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/lib/vagrant-berkshelf/action/save.rb:21:in `call'
/lib/vagrant/action/warden.rb:34:in `call'
/plugins/providers/virtualbox/action/clear_forwarded_ports.rb:15:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `call'
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/machine.rb:194:in `action
/opt/vagrant/embedded/gems/2.1.2/gems/vagrant-/batch_action.rb:82:in `block (2 levels) in run'
INFO interface: error: Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
I had to omit some things from the backtrace in order to post it...
The first sign of trouble is towards the top: WARN global: resolv replacement has not been enabled!
The next area of concern is util.rb:93: warning: key "io" is duplicated and overwritten on line 107.
Then there are many cases of Starting process: ["/usr/local/bin/VBoxManage", "showvminfo", "92b0cc90-127e-4e19-8c75-73b5bf0b5506"] followed by INFO subprocess: Command not in installer, restoring original environment... This happens very many times with VBoxManage and a couple of other times with curl and berks. I think this is the problem.
At the end, it seems to finally fail with a gem install error for chef-vault. It says the gem requires Ruby version >= 2.2.0, which I do have.
Vagrantfile:
VAGRANTFILE_API_VERSION = '2'
Vagrant.require_version '>= 1.5.0'
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.hostname = 'db-archive'
if Vagrant.has_plugin?("vagrant-omnibus")
config.omnibus.chef_version = 'latest'
end
config.vm.box = 'bento/ubuntu-14.04'
config.vm.network :private_network, type: 'dhcp'
config.vm.network 'forwarded_port', guest: 80, host: 8080
config.vm.network 'forwarded_port', guest: 443, host: 8443
config.vm.synced_folder "#{ENV['HOME']}/Documents/src/db_archive", '/var/www/db_archive'
config.vm.synced_folder "#{ENV['HOME']}/.chef", '/etc/chef'
config.berkshelf.enabled = true
config.vm.provision :chef_solo do |chef|
chef.channel = 'stable'
chef.version = '12.10.24'
chef.environment = 'vagrant'
chef.environments_path = 'environments'
chef.run_list = [
"recipe[chef-vault]",
"recipe[db_archive::update]",
"recipe[db_archive::install_packages]",
"recipe[db_archive::install_hostsfile]",
"recipe[db_archive::install_nginx]"
]
chef.data_bags_path = 'data_bags'
chef.node_name = 'ahayden'
end
end
You are using Chef 12, which is no longer supported by the latest chef-vault. You'll need to upgrade your version of Chef.
In my metadata.rb file, I changed the line depends 'chef-vault' to depends 'chef-vault', '=2.1.1'. Then when I ran vagrant destroy && vagrant up it worked fine.
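For reference, the resulting pin in the cookbook's metadata.rb would look roughly like this (a sketch; use whatever constraint syntax the rest of your metadata.rb already follows):
# metadata.rb: pin chef-vault to a release that still works with Chef 12
depends 'chef-vault', '= 2.1.1'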

Filebeat not pushing logs to Elasticsearch

I am new to Docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks for helping in advance. I have ELK running in a Docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log
output.logstash:
  enabled: false
  hosts: ["elk-stack:9002"]
  #index: 'audit'
output.elasticsearch:
  enabled: true
  hosts: ["elk-stack:9200"]
  #index: "audit-%{+yyyy.MM.dd}"
path.config: "/etc/filebeat"
#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see, I was trying to use a custom index in Elasticsearch, but for now I'm just trying to get the default working. The Jetty logs all have global read permissions.
The Docker container logs show no errors, and after it runs I check that the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start Filebeat with:
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts. You need to start it manually, or have it start in its own container via an ENTRYPOINT instruction.
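A minimal sketch of the ENTRYPOINT approach, reusing the same package and config path as the Dockerfile lines above (the ubuntu:16.04 base image is only an assumption; adjust to whatever your image is actually built from):
# Dockerfile sketch: run Filebeat as the container's main foreground process
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl \
 && curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb \
 && dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
# -e logs to stderr (visible via docker logs); -c points at the copied config
ENTRYPOINT ["/usr/share/filebeat/bin/filebeat", "-e", "-c", "/etc/filebeat/filebeat.yml"]
With this, the container's lifetime is tied to the Filebeat process instead of a RUN step that ends at build time.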

Neo4j cannot be started because the database files require upgrading and upgrades are disabled in the configuration

Getting the error "Neo4j cannot be started because the database files require upgrading and upgrades are disabled in the configuration. Please set 'dbms.allow_upgrade' to 'true' in your configuration file" when I try to connect neo4j through its Java driver.
Despite that I have set the property dbms.allow_upgrade to true in the neo4j.conf file, nothing is changed.
What worked for me was upgrading to the neo4j:3.3.1 Docker image! I tried the following, but that didn't work for 3.0 either:
docker run \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
--env=NEO4J_dbms_allow_upgrade=true \
--env=NEO4J_dbms_allow_format_migration=true \
neo4j:3.0
===============================================================
fflintstone@OPTIPLEX790 MINGW64 /c/Users/fflintstone
$ docker run \
> --publish=7474:7474 --publish=7687:7687 \
> --volume=$HOME/neo4j/data:/data \
> --volume=$HOME/neo4j/logs:/logs \
> --env=NEO4J_dbms_allow_upgrade=true \
> --env=NEO4J_dbms_allow_format_migration=true \
> neo4j:3.0
Starting Neo4j.
2017-12-26 06:47:03.172+0000 INFO ======== Neo4j 3.0.12 ========
2017-12-26 06:47:03.228+0000 INFO No SSL certificate found, generating a self-signed certificate..
2017-12-26 06:47:04.204+0000 INFO Starting...
2017-12-26 06:47:05.140+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-12-26 06:47:05.560+0000 ERROR Neo4j cannot be started, because the database files require upgrading and upgrades are disabled in configuration. Please set 'dbms.allow_format_migration' to 'true' in your configuration file and try again.
fflintstone@OPTIPLEX790 MINGW64 /c/Users/fflintstone
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
neo4j 3.0 39739226e15b 13 days ago 606MB
fflintstone@OPTIPLEX790 MINGW64 /c/Users/fflintstone
$ docker run --publish=7474:7474 --publish=7687:7687 --volume=$HOME/neo4j/data:/data --volume=$HOME/neo4j/logs:/logs neo4j:3.3.1
Unable to find image 'neo4j:3.3.1' locally
3.3.1: Pulling from library/neo4j
2fdfe1cd78c2: Pull complete
82630fd6e5ba: Pull complete
119d364c885d: Pull complete
46f8fad107ee: Pull complete
fe7f5c604f04: Pull complete
6fd4ca7c99ff: Pull complete
d242a75fec77: Pull complete
Digest: sha256:baeb76f0d4785817c2a3608796ff0104a8f87ed89fe3391ef467eb6f0a1fc40e
Status: Downloaded newer image for neo4j:3.3.1
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2017-12-26 06:51:42.867+0000 WARN Unknown config option: causal_clustering.discovery_listen_address
2017-12-26 06:51:42.872+0000 WARN Unknown config option: causal_clustering.raft_advertised_address
2017-12-26 06:51:42.872+0000 WARN Unknown config option: causal_clustering.raft_listen_address
2017-12-26 06:51:42.872+0000 WARN Unknown config option: ha.host.coordination
2017-12-26 06:51:42.872+0000 WARN Unknown config option: causal_clustering.transaction_advertised_address
2017-12-26 06:51:42.873+0000 WARN Unknown config option: causal_clustering.discovery_advertised_address
2017-12-26 06:51:42.873+0000 WARN Unknown config option: ha.host.data
2017-12-26 06:51:42.874+0000 WARN Unknown config option: causal_clustering.transaction_listen_address
2017-12-26 06:51:42.917+0000 INFO ======== Neo4j 3.3.1 ========
2017-12-26 06:51:42.995+0000 INFO Starting...
2017-12-26 06:51:45.790+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-12-26 06:51:52.936+0000 INFO Started.
2017-12-26 06:51:55.374+0000 INFO Remote interface available at http://localhost:7474/
In my case, the problem was that the configuration file was not the one I thought.
According to this documentation, it should have been under <neo4j-home>/conf/neo4j.conf
However, running neo4j stop and then neo4j start printed out a couple of key paths.
Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
So, editing the config found under /etc/neo4j worked.
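So the fix amounts to putting the setting into the config file Neo4j actually reads, which in this layout is the one under /etc/neo4j (a sketch; the exact path comes from the "config:" line in the listing above):
# /etc/neo4j/neo4j.conf
dbms.allow_upgrade=true
After saving the change, stop and start Neo4j again so the setting is picked up.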
