ELK Docker -- Logstash not working properly

I am trying to run the ELK stack using the popular Docker image on Docker Hub, sebp/elk.
In my project dir, I have the following two files:
logstash.conf (I just want to see if Logstash works, so I'm reading from stdin and writing to stdout rather than Elasticsearch):
input { stdin {} }
output { stdout {} }
docker-compose.yml:
elk:
  image: sebp/elk
  ports:
    - "5605:5601"
    - "9200:9200"
    - "9300:9300"
    - "5044:5044"
  volumes:
    - /path/to/project/dir/logstash.conf:/usr/share/logstash/config/logstash.conf
When I run docker-compose up elk, the following output is displayed:
elk_1 | * Starting periodic command scheduler cron
elk_1 | ...done.
elk_1 | * Starting Elasticsearch Server
elk_1 | ...done.
elk_1 | waiting for Elasticsearch to be up (1/30)
elk_1 | waiting for Elasticsearch to be up (2/30)
elk_1 | waiting for Elasticsearch to be up (3/30)
elk_1 | waiting for Elasticsearch to be up (4/30)
elk_1 | waiting for Elasticsearch to be up (5/30)
elk_1 | waiting for Elasticsearch to be up (6/30)
elk_1 | waiting for Elasticsearch to be up (7/30)
elk_1 | waiting for Elasticsearch to be up (8/30)
elk_1 | waiting for Elasticsearch to be up (9/30)
elk_1 | waiting for Elasticsearch to be up (10/30)
elk_1 | waiting for Elasticsearch to be up (11/30)
elk_1 | Waiting for Elasticsearch cluster to respond (1/30)
elk_1 | logstash started.
elk_1 | * Starting Kibana5
elk_1 | ...done.
elk_1 | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1 | [2018-08-11T17:34:41,530][INFO ][o.e.g.GatewayService ] [pIJHFdO] recovered [0] indices into cluster_state
elk_1 | [2018-08-11T17:34:41,926][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.watches] for index patterns [.watches*]
elk_1 | [2018-08-11T17:34:42,033][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.watch-history-7] for index patterns [.watcher-history-7*]
elk_1 | [2018-08-11T17:34:42,099][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.triggered_watches] for index patterns [.triggered_watches*]
elk_1 | [2018-08-11T17:34:42,205][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
elk_1 | [2018-08-11T17:34:42,288][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
elk_1 | [2018-08-11T17:34:42,338][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
elk_1 | [2018-08-11T17:34:42,374][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
elk_1 | [2018-08-11T17:34:42,431][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pIJHFdO] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
elk_1 | [2018-08-11T17:34:42,523][INFO ][o.e.l.LicenseService ] [pIJHFdO] license [f28743a3-8cc3-46ad-8c75-7c096c7afaa7] mode [basic] - valid
elk_1 |
elk_1 | ==> /var/log/logstash/logstash-plain.log <==
elk_1 |
elk_1 | ==> /var/log/kibana/kibana5.log <==
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:kibana@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:elasticsearch@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:xpack_main@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:searchprofiler@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:ml@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:tilemap@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:watcher@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:license_management@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:index_management@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:timelion@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:graph@6.3.2","info"],"pid":247,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:55Z","tags":["status","plugin:monitoring@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:searchprofiler@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:ml@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:tilemap@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:watcher@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:index_management@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:graph@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:security@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:grokdebugger@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:logstash@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:34:57Z","tags":["status","plugin:reporting@6.3.2","info"],"pid":247,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1 | {"type":"log","#timestamp":"2018-08-11T17:34:57Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":247,"message":"Starting all Kibana monitoring collectors"}
elk_1 | {"type":"log","#timestamp":"2018-08-11T17:34:57Z","tags":["license","info","xpack"],"pid":247,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
elk_1 |
elk_1 | ==> /var/log/logstash/logstash-plain.log <==
elk_1 | [2018-08-11T17:35:08,371][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
elk_1 | [2018-08-11T17:35:08,380][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
elk_1 | [2018-08-11T17:35:08,990][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
elk_1 | [2018-08-11T17:35:09,025][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"aa287931-643e-47ae-bd8e-f982c75b2105", :path=>"/opt/logstash/data/uuid"}
elk_1 | [2018-08-11T17:35:09,779][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
elk_1 | [2018-08-11T17:35:13,753][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost], manage_template=>false, index=>"%{[#metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[#metadata][type]}", id=>"c4ee5abcf701afed0db36d4aa16c4fc10da6a92bbd615d837cccdf2f368b7802", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_21596240-07d7-4d2e-b4e5-bb68516e5a61", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
elk_1 | [2018-08-11T17:35:13,823][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
elk_1 | [2018-08-11T17:35:15,074][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
elk_1 | [2018-08-11T17:35:15,090][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
elk_1 | [2018-08-11T17:35:15,360][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
elk_1 | [2018-08-11T17:35:15,518][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
elk_1 | [2018-08-11T17:35:15,525][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
elk_1 | [2018-08-11T17:35:15,569][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
elk_1 | [2018-08-11T17:35:16,370][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
elk_1 | [2018-08-11T17:35:16,445][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2c697fd4 run>"}
elk_1 | [2018-08-11T17:35:16,602][INFO ][org.logstash.beats.Server] Starting server on port: 5044
elk_1 | [2018-08-11T17:35:16,643][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
elk_1 | [2018-08-11T17:35:17,096][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
elk_1 |
elk_1 | ==> /var/log/kibana/kibana5.log <==
elk_1 | {"type":"log","@timestamp":"2018-08-11T17:35:20Z","tags":["listening","info"],"pid":247,"message":"Server running at http://0.0.0.0:5601"}
Now, Kibana and Elasticsearch seem to be perfectly fine, but Logstash isn't doing anything: when I type something in the terminal, I get no response.
Running ps aux in the container bash terminal, I get the following:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 21332 3592 ? Ss 17:50 0:00 /bin/bash /usr/local/bin/start.sh
root 20 0.0 0.0 29272 2576 ? Ss 17:50 0:00 /usr/sbin/cron
elastic+ 86 18.0 4.4 5910168 1479108 ? Sl 17:50 0:46 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -X
elastic+ 112 0.0 0.0 135668 7328 ? Sl 17:50 0:00 /opt/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller
logstash 226 43.6 2.2 5714032 726940 ? SNl 17:50 1:47 /usr/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djav
kibana 243 20.0 0.4 1315812 155744 ? Sl 17:50 0:49 /opt/kibana/bin/../node/bin/node --max-old-space-size=250 --no-warnings /opt/kibana/bin/../src/cli -l /var/log/kibana/kibana5.log
root 245 0.0 0.0 7612 672 ? S 17:50 0:00 tail -f /var/log/elasticsearch/elasticsearch.log /var/log/logstash/logstash-plain.log /var/log/kibana/kibana5.log
root 323 1.3 0.0 21488 3544 pts/0 Ss 17:54 0:00 bash
root 340 0.0 0.0 37656 3300 pts/0 R+ 17:54 0:00 ps aux
Running ll /var/log/logstash/ in the container bash terminal, I get the following:
total 16
drwxr-xr-x 1 logstash logstash 4096 Aug 11 17:51 ./
drwxr-xr-x 1 root root 4096 Jul 26 14:27 ../
-rw-r--r-- 1 root root 0 Aug 11 17:50 logstash.err
-rw-r--r-- 1 logstash logstash 3873 Aug 11 17:51 logstash-plain.log
-rw-r--r-- 1 logstash logstash 0 Aug 11 17:51 logstash-slowlog-plain.log
-rw-r--r-- 1 root root 3964 Aug 11 17:51 logstash.stdout
Now, I did change logstash.conf to have the following:
input { stdin {} }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Still, when I type something in the terminal, nothing appears in the Discover section of Kibana, nor has any index pattern been created...
Running ps aux in the container bash terminal, I get the following:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 21332 3600 ? Ss 17:40 0:00 /bin/bash /usr/local/bin/start.sh
root 21 0.0 0.0 29272 2568 ? Ss 17:40 0:00 /usr/sbin/cron
elastic+ 87 12.0 4.5 5912216 1484068 ? Sl 17:40 0:52 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -X
elastic+ 113 0.0 0.0 135668 7332 ? Sl 17:40 0:00 /opt/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller
logstash 224 27.8 2.3 5714032 771528 ? SNl 17:40 1:58 /usr/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djav
kibana 241 12.0 0.5 1322444 181228 ? Sl 17:40 0:50 /opt/kibana/bin/../node/bin/node --max-old-space-size=250 --no-warnings /opt/kibana/bin/../src/cli -l /var/log/kibana/kibana5.log
root 246 0.0 0.0 7612 692 ? S 17:40 0:00 tail -f /var/log/elasticsearch/elasticsearch.log /var/log/logstash/logstash-plain.log /var/log/kibana/kibana5.log
root 317 1.0 0.0 21488 3744 pts/0 Ss 17:47 0:00 bash
root 334 0.0 0.0 37656 3356 pts/0 R+ 17:48 0:00 ps aux
Running ll /var/log/logstash/ in the container bash terminal, I get the following:
total 16
drwxr-xr-x 1 logstash logstash 4096 Aug 11 17:41 ./
drwxr-xr-x 1 root root 4096 Jul 26 14:27 ../
-rw-r--r-- 1 root root 0 Aug 11 17:40 logstash.err
-rw-r--r-- 1 logstash logstash 3873 Aug 11 17:41 logstash-plain.log
-rw-r--r-- 1 logstash logstash 0 Aug 11 17:41 logstash-slowlog-plain.log
-rw-r--r-- 1 root root 3964 Aug 11 17:41 logstash.stdout
I have spent a good amount of time on this with no luck, so any help would be highly appreciated!

So, I did find a solution, thanks to the owner of the elk image repo.
I followed the instructions from this page. That is, I entered the container's bash by running docker exec -it <container-name> bash, and then (inside the container terminal) I ran the command /opt/logstash/bin/logstash --path.data /tmp/logstash/data -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'.
The problem was that although the Logstash service had been started, it did not have an interactive terminal. The command above addresses this problem.
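For convenience, the same one-off pipeline can also be started in a single step from the host, without opening a bash shell first. A minimal sketch, assuming the container is named elk (check docker ps for the real name):
# Runs a throwaway Logstash instance with its own data path so it does not
# clash with the Logstash service already running inside the container.
docker exec -it elk /opt/logstash/bin/logstash --path.data /tmp/logstash/data -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'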
The following logs were displayed inside the container terminal:
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-12T06:28:28,941][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-12T06:28:28,948][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-12T06:28:29,592][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-12T06:28:29,656][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"29cb946b-2bed-4390-b0cb-9aad6ef5a2a2", :path=>"/tmp/logstash/data/uuid"}
[2018-08-12T06:28:30,634][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-12T06:28:32,911][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-12T06:28:33,646][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-12T06:28:33,663][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-12T06:28:34,107][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-12T06:28:34,205][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-12T06:28:34,212][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-12T06:28:34,268][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2018-08-12T06:28:34,364][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-12T06:28:34,442][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-12T06:28:34,496][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5dcf75c7 run>"}
[2018-08-12T06:28:34,602][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
The stdin plugin is now waiting for input:
[2018-08-12T06:28:34,727][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-12T06:28:35,607][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
And the following inside my server terminal:
elk_1 | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1 | [2018-08-12T06:28:34,777][INFO ][o.e.c.m.MetaDataIndexTemplateService] [jqTz2zS] adding template [logstash] for index patterns [logstash-*]
elk_1 | [2018-08-12T06:28:35,214][INFO ][o.e.c.m.MetaDataCreateIndexService] [jqTz2zS] [logstash-2018.08.12] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]
elk_1 | [2018-08-12T06:28:36,207][INFO ][o.e.c.m.MetaDataMappingService] [jqTz2zS] [logstash-2018.08.12/hiLssj14TMKd5lzBq6tvrw] create_mapping [doc]
With this, an index pattern was indeed created in Kibana and I started to receive messages in the Discover tab.
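As a follow-up: if the goal is to have the container's own Logstash service pick up a custom pipeline at startup (rather than running a second, interactive instance), the sebp/elk documentation describes /etc/logstash/conf.d as the directory its Logstash reads pipeline files from. A sketch of the compose service under that assumption, with an arbitrary file name of 99-custom.conf:
elk:
  image: sebp/elk
  ports:
    - "5605:5601"
    - "9200:9200"
    - "9300:9300"
    - "5044:5044"
  volumes:
    # Assumption: this image's Logstash loads every *.conf under /etc/logstash/conf.d,
    # so the custom pipeline is mounted there instead of /usr/share/logstash/config.
    - /path/to/project/dir/logstash.conf:/etc/logstash/conf.d/99-custom.conf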

Related

MariaDB on Docker cannot write to named volume

I have been trying to get MariaDB to work with named volumes, but I keep running into the problem that it cannot write to them.
My docker host
Docker Version: 20.10.3 on Synology DSM 7
My docker-compose.yml
---
version: "3.8"
services:
  bitwarden:
    depends_on:
      - database
    env_file:
      - /volume1/docker/bitwardenextvol/settings.env
    image: bitwarden/self-host:beta
    restart: unless-stopped
    ports:
      - "4088:4080"
      - "4449:4443"
    volumes:
      - bitwarden:/etc/bitwarden
  database:
    environment:
      MARIADB_USER: "bitwarden"
      MARIADB_PASSWORD: "****************"
      MARIADB_DATABASE: "bitwarden_vault"
      MARIADB_RANDOM_ROOT_PASSWORD: "true"
    image: mariadb:latest
    user: 1026:100
    restart: unless-stopped
    volumes:
      - data:/var/lib/mysql
volumes:
  bitwarden:
    external: true
  data:
    external: true
My named volumes
[
    {
        "CreatedAt": "2023-01-14T18:32:39+01:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/volume1/@docker/volumes/bitwarden/_data",
        "Name": "bitwarden",
        "Options": {
            "device": "//diskstation.diesveld.lan/docker/bitwardenextvol/bitwarden",
            "o": "addr=diskstation.diesveld.lan,username=harald,password=********,vers=3.0",
            "type": "cifs"
        },
        "Scope": "local"
    }
]
[
    {
        "CreatedAt": "2023-01-14T18:33:24+01:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/volume1/@docker/volumes/data/_data",
        "Name": "data",
        "Options": {
            "device": "//diskstation.diesveld.lan/docker/bitwardenextvol/data",
            "o": "addr=diskstation.diesveld.lan,username=harald,password=********,vers=3.0",
            "type": "cifs"
        },
        "Scope": "local"
    }
]
The host directory
harald@diskstation:/volume1/docker$ ls -al | grep bitwardenextvol
drwxrwxrwx+ 1 harald users 116 Jan 14 18:36 bitwardenextvol
Inside the host directory
harald@diskstation:/volume1/docker/bitwardenextvol$ ls -al
total 16
drwxrwxrwx+ 1 harald users 116 Jan 14 18:36 .
drwxrwxrwx+ 1 root root 294 Jan 14 18:41 ..
drwxrwxrwx+ 1 harald users 94 Jan 14 18:39 bitwarden
drwxrwxrwx+ 1 harald users 30 Jan 14 18:37 data
-rwxrwxrwx+ 1 harald users 698 Jan 14 22:07 docker-compose.yml
-rwxrwxrwx+ 1 harald users 6148 Jan 14 18:11 .DS_Store
drwxrwxrwx+ 1 root users 94 Jan 14 18:12 @eaDir
-rwxrwxrwx+ 1 harald users 1940 Jan 14 18:18 settings.env
My user account
harald@diskstation:/volume1/docker/bitwardenextvol$ id harald
uid=1026(harald) gid=100(users) groups=100(users),101(administrators)
The error message that I get when running sudo docker-compose up
harald@diskstation:/volume1/docker/bitwardenextvol$ sudo docker-compose up
Creating bitwardenextvol_database_1 ... done
Creating bitwardenextvol_bitwarden_1 ... done
Attaching to bitwardenextvol_database_1, bitwardenextvol_bitwarden_1
database_1 | 2023-01-14 23:24:33+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
bitwarden_1 | addgroup: gid '100' in use
database_1 | 2023-01-14 23:24:33+00:00 [Note] [Entrypoint]: Initializing database files
database_1 | 2023-01-14 23:24:33 0 [Warning] Can't create test file /var/lib/mysql/a68bd83e89af.lower-test
database_1 | 2023-01-14 23:24:33 0 [ERROR] mariadbd: Can't create/write to file './ddl_recovery.log' (Errcode: 13 "Permission denied")
database_1 | 2023-01-14 23:24:33 0 [ERROR] DDL_LOG: Failed to create ddl log file: ./ddl_recovery.log
database_1 | 2023-01-14 23:24:33 0 [ERROR] Aborting
database_1 |
database_1 | Installation of system tables failed! Examine the logs in
database_1 | /var/lib/mysql/ for more information.
database_1 |
database_1 | The problem could be conflicting information in an external
database_1 | my.cnf files. You can ignore these by doing:
database_1 |
database_1 | shell> /usr/bin/mariadb-install-db --defaults-file=~/.my.cnf
database_1 |
database_1 | You can also try to start the mysqld daemon with:
database_1 |
database_1 | shell> /usr/sbin/mariadbd --skip-grant-tables --general-log &
database_1 |
database_1 | and use the command line tool /usr/bin/mariadb
database_1 | to connect to the mysql database and look at the grant tables:
database_1 |
database_1 | shell> /usr/bin/mysql -u root mysql
database_1 | mysql> show tables;
database_1 |
database_1 | Try 'mysqld --help' if you have problems with paths. Using
database_1 | --general-log gives you a log in /var/lib/mysql/ that may be helpful.
database_1 |
database_1 | The latest information about mysql_install_db is available at
database_1 | ******/kb/en/installing-system-tables-mysql_install_db
database_1 | You can find the latest source at ******** and
database_1 | the maria-discuss email list at ********/~maria-discuss
database_1 |
database_1 | Please check all of the above before submitting a bug report
database_1 | at *******/jira
database_1 |
bitwardenextvol_database_1 exited with code 1
database_1 | 2023-01-14 23:24:35+00:00 [Note] [Entrypoint]: Initializing database files
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/admin.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/api.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/events.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/icons.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/identity.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/notifications.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,560 INFO Included extra file "/etc/supervisor.d/scim.ini" during parsing
bitwarden_1 | 2023-01-14 23:24:35,561 INFO Included extra file "/etc/supervisor.d/sso.ini" during parsing
database_1 | 2023-01-14 23:24:35 0 [Warning] Can't create test file /var/lib/mysql/a68bd83e89af.lower-test
bitwarden_1 | 2023-01-14 23:24:35,571 INFO RPC interface 'supervisor' initialized
bitwarden_1 | 2023-01-14 23:24:35,571 CRIT Server 'unix_http_server' running without any HTTP authentication checking
bitwarden_1 | 2023-01-14 23:24:35,572 INFO supervisord started with pid 48
database_1 | 2023-01-14 23:24:35 0 [ERROR] mariadbd: Can't create/write to file './ddl_recovery.log' (Errcode: 13 "Permission denied")
database_1 | 2023-01-14 23:24:35 0 [ERROR] DDL_LOG: Failed to create ddl log file: ./ddl_recovery.log
database_1 | 2023-01-14 23:24:35 0 [ERROR] Aborting
database_1 |
database_1 | Installation of system tables failed! Examine the logs in
database_1 | /var/lib/mysql/ for more information.
database_1 |
database_1 | The problem could be conflicting information in an external
database_1 | my.cnf files. You can ignore these by doing:
database_1 |
database_1 | shell> /usr/bin/mariadb-install-db --defaults-file=~/.my.cnf
database_1 |
database_1 | You can also try to start the mysqld daemon with:
database_1 |
database_1 | shell> /usr/sbin/mariadbd --skip-grant-tables --general-log &
database_1 |
database_1 | and use the command line tool /usr/bin/mariadb
database_1 | to connect to the mysql database and look at the grant tables:
database_1 |
database_1 | shell> /usr/bin/mysql -u root mysql
database_1 | mysql> show tables;
database_1 |
database_1 | Try 'mysqld --help' if you have problems with paths. Using
database_1 | --general-log gives you a log in /var/lib/mysql/ that may be helpful.
database_1 |
database_1 | The latest information about mysql_install_db is available at
database_1 | ********/kb/en/installing-system-tables-mysql_install_db
database_1 | You can find the latest source at https://downloads.mariadb.org and
database_1 | the maria-discuss email list at https://launchpad.net/~maria-discuss
database_1 |
database_1 | Please check all of the above before submitting a bug report
database_1 | at ******
database_1 |
bitwarden_1 | 2023-01-14 23:24:36,574 INFO spawned: 'identity' with pid 49
bitwarden_1 | 2023-01-14 23:24:36,576 INFO spawned: 'admin' with pid 50
bitwarden_1 | 2023-01-14 23:24:36,578 INFO spawned: 'api' with pid 51
bitwarden_1 | 2023-01-14 23:24:36,579 INFO spawned: 'icons' with pid 52
bitwarden_1 | 2023-01-14 23:24:36,583 INFO spawned: 'nginx' with pid 53
bitwarden_1 | 2023-01-14 23:24:36,586 INFO spawned: 'notifications' with pid 54
database_1 | 2023-01-14 23:24:36+00:00 [Note] [Entrypoint]: Initializing database files
database_1 | 2023-01-14 23:24:36 0 [Warning] Can't create test file /var/lib/mysql/a68bd83e89af.lower-test
database_1 | 2023-01-14 23:24:36 0 [ERROR] mariadbd: Can't create/write to file './ddl_recovery.log' (Errcode: 13 "Permission denied")
database_1 | 2023-01-14 23:24:36 0 [ERROR] DDL_LOG: Failed to create ddl log file: ./ddl_recovery.log
database_1 | 2023-01-14 23:24:36 0 [ERROR] Aborting
database_1 |
database_1 | Installation of system tables failed! Examine the logs in
database_1 | /var/lib/mysql/ for more information.
database_1 |
database_1 | The problem could be conflicting information in an external
database_1 | my.cnf files. You can ignore these by doing:
database_1 |
database_1 | shell> /usr/bin/mariadb-install-db --defaults-file=~/.my.cnf
database_1 |
database_1 | You can also try to start the mysqld daemon with:
database_1 |
database_1 | shell> /usr/sbin/mariadbd --skip-grant-tables --general-log &
database_1 |
database_1 | and use the command line tool /usr/bin/mariadb
database_1 | to connect to the mysql database and look at the grant tables:
database_1 |
database_1 | shell> /usr/bin/mysql -u root mysql
database_1 | mysql> show tables;
database_1 |
database_1 | Try 'mysqld --help' if you have problems with paths. Using
database_1 | --general-log gives you a log in /var/lib/mysql/ that may be helpful.
database_1 |
database_1 | The latest information about mysql_install_db is available at
database_1 | *******/kb/en/installing-system-tables-mysql_install_db
database_1 | You can find the latest source at ****** and
database_1 | the maria-discuss email list at *******/~maria-discuss
database_1 |
database_1 | Please check all of the above before submitting a bug report
database_1 | at ******
database_1 |
bitwardenextvol_database_1 exited with code 1
The weird thing is that it works perfectly well with bind mounts to the very same location on disk. Also, the bitwarden container works perfectly fine with the described named volume; only the mariadb container throws errors like the one in the log above:
[ERROR] mariadbd: Can't create/write to file './ddl_recovery.log' (Errcode: 13 "Permission denied")
The directory that the named volume points to is not being written to at all.
I cannot view the MariaDB logs, as the container keeps restarting, so I cannot get a shell into it.
What I gather from the official documentation is that MariaDB should work perfectly fine with named volumes, but for some reason it does not in my case. My hunch was that it has to do with permissions. I tried adding user: 1026:100 to the MariaDB service declaration, but that doesn't change things.
I know I could simply work with bind mounts, but I really want to figure out how to do it with named volumes.
Does anyone have tips or know how to debug this further? Your help or insights are much appreciated.
Updates based on answers
I noticed something peculiar though: the user mysql on the host is not 999:999, but rather 66:66:
harald@diskstation:~$ id mysql
uid=66(mysql) gid=66(mysql) groups=66(mysql)
As I am a bit out of my comfort zone here, I cannot exactly estimate the impact of this.
As I can see in the output below, this user is not running any processes:
harald@diskstation:~$ ps -u mysql
PID TTY TIME CMD
But when I try to delete the user (Synology does not support deluser), I cannot get it done:
harald@diskstation:~$ sudo synouser --del mysql
Lastest SynoErr=[user_db_delete.c:38] synouser.c:798 SYNOLocalAccountUserDelete failed. synoerr=[0xB800].
Otherwise I could perhaps get MariaDB to recreate the mysql user with the right credentials.
The result of the docker run command from Dan's answer shows that the named volume can be read:
harald@diskstation:~$ sudo docker run -v data:/var/lib/mysql --rm mariadb ls -laZ /var/lib/mysql
total 8
drwxr-xr-x 2 root root ? 0 Jan 15 14:57 .
drwxr-xr-x 1 root root ? 76 Dec 9 02:27 ..
-rwxr-xr-x 1 root root ? 6148 Jan 14 16:40 .DS_Store
-rwxr-xr-x 1 root root ? 0 Jan 15 14:57 just-to-show-this-file-in-the-volume.yml
The mysql user is indeed 999:999 inside the container, but it is 66:66 outside for some weird reason.
harald@diskstation:~$ sudo docker run --rm mariadb id mysql
uid=999(mysql) gid=999(mysql) groups=999(mysql)
Changing the mount options of CIFS to include cache=none appears to be one solution. The other is adding --innodb_flush_method=fsync to the mariadb container args.
Mounting the volume with the cache=none option and adding
innodb_flush_method: "fsync"
to the docker-compose.yml does not change the behaviour, unfortunately.
This is the latest state, with the above remarks taken into account and without the user: setting. Still the same results.
harald@diskstation:~$ sudo docker volume inspect data
[
    {
        "CreatedAt": "2023-01-15T16:42:40+01:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/volume1/@docker/volumes/data/_data",
        "Name": "data",
        "Options": {
            "device": "//diskstation.diesveld.lan/docker/bitwardenextvol/data",
            "o": "addr=diskstation.diesveld.lan,username=harald,password=***,vers=3.0,cache=none",
            "type": "cifs"
        },
        "Scope": "local"
    }
]
And this is my docker-compose.yml
---
version: "3.8"
services:
  bitwarden:
    depends_on:
      - database
    env_file:
      - /volume1/docker/bitwardenextvol/settings.env
    image: bitwarden/self-host:beta
    restart: unless-stopped
    ports:
      - "4088:4080"
      - "4449:4443"
    volumes:
      - bitwarden:/etc/bitwarden
  database:
    environment:
      MARIADB_USER: "bitwarden"
      MARIADB_PASSWORD: "***"
      MARIADB_DATABASE: "bitwarden_vault"
      MARIADB_RANDOM_ROOT_PASSWORD: "true"
      innodb_flush_method: "fsync"
    image: mariadb:latest
    restart: unless-stopped
    volumes:
      - data:/var/lib/mysql
volumes:
  bitwarden:
    external: true
  data:
    external: true
The logs from this:
harald@diskstation:/volume1/docker/bitwardenextvol$ sudo docker-compose up
Creating bitwardenextvol_database_1 ... done
Creating bitwardenextvol_bitwarden_1 ... done
Attaching to bitwardenextvol_database_1, bitwardenextvol_bitwarden_1
database_1 | 2023-01-16 06:26:00+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
bitwarden_1 | addgroup: gid '100' in use
database_1 | 2023-01-16 06:26:00+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
database_1 | 2023-01-16 06:26:00+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
database_1 | 2023-01-16 06:26:00+00:00 [Note] [Entrypoint]: Initializing database files
database_1 | 2023-01-16 6:26:00 0 [Warning] Can't create test file /var/lib/mysql/4d6c9c2f7405.lower-test
database_1 | 2023-01-16 6:26:00 0 [ERROR] mariadbd: Can't create/write to file './ddl_recovery.log' (Errcode: 13 "Permission denied")
database_1 | 2023-01-16 6:26:00 0 [ERROR] DDL_LOG: Failed to create ddl log file: ./ddl_recovery.log
database_1 | 2023-01-16 6:26:00 0 [ERROR] Aborting
database_1 |
database_1 | Installation of system tables failed! Examine the logs in
database_1 | /var/lib/mysql/ for more information.
database_1 |
database_1 | The problem could be conflicting information in an external
database_1 | my.cnf files. You can ignore these by doing:
database_1 |
database_1 | shell> /usr/bin/mariadb-install-db --defaults-file=~/.my.cnf
database_1 |
database_1 | You can also try to start the mysqld daemon with:
database_1 |
database_1 | shell> /usr/sbin/mariadbd --skip-grant-tables --general-log &
database_1 |
database_1 | and use the command line tool /usr/bin/mariadb
database_1 | to connect to the mysql database and look at the grant tables:
database_1 |
database_1 | shell> /usr/bin/mysql -u root mysql
database_1 | mysql> show tables;
database_1 |
database_1 | Try 'mysqld --help' if you have problems with paths. Using
database_1 | --general-log gives you a log in /var/lib/mysql/ that may be helpful.
database_1 |
database_1 | The latest information about mysql_install_db is available at
database_1 | https://mariadb.com/kb/en/installing-system-tables-mysql_install_db
database_1 | You can find the latest source at https://downloads.mariadb.org and
database_1 | the maria-discuss email list at https://launchpad.net/~maria-discuss
database_1 |
database_1 | Please check all of the above before submitting a bug report
database_1 | at https://mariadb.org/jira
database_1 |
bitwardenextvol_database_1 exited with code 1
bitwarden_1 | 2023-01-16 06:26:02,303 INFO Included extra file "/etc/supervisor.d/admin.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,303 INFO Included extra file "/etc/supervisor.d/api.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,303 INFO Included extra file "/etc/supervisor.d/events.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,303 INFO Included extra file "/etc/supervisor.d/icons.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,303 INFO Included extra file "/etc/supervisor.d/identity.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,303 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,304 INFO Included extra file "/etc/supervisor.d/notifications.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,304 INFO Included extra file "/etc/supervisor.d/scim.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,304 INFO Included extra file "/etc/supervisor.d/sso.ini" during parsing
bitwarden_1 | 2023-01-16 06:26:02,314 INFO RPC interface 'supervisor' initialized
bitwarden_1 | 2023-01-16 06:26:02,314 CRIT Server 'unix_http_server' running without any HTTP authentication checking
bitwarden_1 | 2023-01-16 06:26:02,315 INFO supervisord started with pid 47
database_1 | 2023-01-16 06:26:02+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
database_1 | 2023-01-16 06:26:02+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
database_1 | 2023-01-16 06:26:02+00:00 [Note] [Entrypoint]: Initializing database files
database_1 | 2023-01-16 6:26:02 0 [Warning] Can't create test file /var/lib/mysql/4d6c9c2f7405.lower-test
database_1 | 2023-01-16 6:26:03 0 [ERROR] mariadbd: Can't create/write to file './ddl_recovery.log' (Errcode: 13 "Permission denied")
database_1 | 2023-01-16 6:26:03 0 [ERROR] DDL_LOG: Failed to create ddl log file: ./ddl_recovery.log
database_1 | 2023-01-16 6:26:03 0 [ERROR] Aborting
database_1 |
database_1 | Installation of system tables failed! Examine the logs in
database_1 | /var/lib/mysql/ for more information.
database_1 |
database_1 | The problem could be conflicting information in an external
database_1 | my.cnf files. You can ignore these by doing:
database_1 |
database_1 | shell> /usr/bin/mariadb-install-db --defaults-file=~/.my.cnf
database_1 |
database_1 | You can also try to start the mysqld daemon with:
database_1 |
database_1 | shell> /usr/sbin/mariadbd --skip-grant-tables --general-log &
database_1 |
database_1 | and use the command line tool /usr/bin/mariadb
database_1 | to connect to the mysql database and look at the grant tables:
database_1 |
database_1 | shell> /usr/bin/mysql -u root mysql
database_1 | mysql> show tables;
database_1 |
database_1 | Try 'mysqld --help' if you have problems with paths. Using
database_1 | --general-log gives you a log in /var/lib/mysql/ that may be helpful.
database_1 |
database_1 | The latest information about mysql_install_db is available at
database_1 | https://mariadb.com/kb/en/installing-system-tables-mysql_install_db
database_1 | You can find the latest source at https://downloads.mariadb.org and
database_1 | the maria-discuss email list at https://launchpad.net/~maria-discuss
database_1 |
database_1 | Please check all of the above before submitting a bug report
database_1 | at https://mariadb.org/jira
database_1 |
bitwarden_1 | 2023-01-16 06:26:03,318 INFO spawned: 'identity' with pid 48
bitwarden_1 | 2023-01-16 06:26:03,320 INFO spawned: 'admin' with pid 49
bitwarden_1 | 2023-01-16 06:26:03,322 INFO spawned: 'api' with pid 50
bitwarden_1 | 2023-01-16 06:26:03,324 INFO spawned: 'icons' with pid 51
bitwarden_1 | 2023-01-16 06:26:03,326 INFO spawned: 'nginx' with pid 52
bitwarden_1 | 2023-01-16 06:26:03,328 INFO spawned: 'notifications' with pid 53
It's either the UID mapping or an SELinux label (note the + in the ls -la output). With the SELinux label, it's probably up to an SELinux boolean to allow Docker to access the volume. Specific SELinux labels can also be applied at mount time.
When checking the UID mapping, it's helpful to look inside the container, as the mappings may be different:
docker run -v data:/var/lib/mysql --rm mariadb ls -laZ /var/lib/mysql
Rather than putting user in the compose file, you can get CIFS to map the UID to the 999:999 mysql user that the mariadb container uses by default.
docker run --rm mariadb id mysql
uid=999(mysql) gid=999(mysql) groups=999(mysql)
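A sketch of what that CIFS-level mapping could look like in the volume definition, whether created with docker volume create or (as below) directly in the compose file; the uid/gid options and their values are assumptions based on the 999:999 mysql user shown above:
volumes:
  data:
    driver: local
    driver_opts:
      type: cifs
      device: //diskstation.diesveld.lan/docker/bitwardenextvol/data
      # uid/gid map every file on the share to the container's mysql user (999:999);
      # addr/username/password stay as in the existing volume definition.
      o: "addr=diskstation.diesveld.lan,username=harald,password=********,vers=3.0,uid=999,gid=999"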
Based on MDEV-26970, the conditions under which MariaDB with its default --innodb_flush_method=O_DIRECT works with CIFS are limited.
Changing the mount options of CIFS to include cache=none appears to be one solution.
The other is adding --innodb_flush_method=fsync to the mariadb container command.
command: --innodb_flush_method=fsync
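To make the placement concrete (in the compose file above, innodb_flush_method ended up under environment, which the MariaDB image does not interpret as a server option), the flag belongs in the service's command. A sketch of just the database service:
database:
  image: mariadb:latest
  # Server flags are passed to mariadbd via command, not environment:
  command: --innodb_flush_method=fsync
  environment:
    MARIADB_USER: "bitwarden"
    MARIADB_PASSWORD: "***"
    MARIADB_DATABASE: "bitwarden_vault"
    MARIADB_RANDOM_ROOT_PASSWORD: "true"
  restart: unless-stopped
  volumes:
    - data:/var/lib/mysql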

Error 404 not found after running docker-compose with SpringBoot and MongoDB

My Dockerfile is:
FROM openjdk:8
VOLUME /tmp
ADD target/demo-0.0.1-SNAPSHOT.jar app.jar
#RUN bash -c 'touch /app.jar'
#EXPOSE 8080
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongo/players","-jar","/app.jar"]
And the docker-compose is:
version: "3"
services:
  spring-docker:
    build: .
    restart: always
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27000:27017"
    restart: always
I have the Docker image, and when I run docker-compose up, everything goes well without any error.
But in Postman, when I send a GET request to localhost:8080/player I get no output, so I used the docker-machine IP instead, such as 192.168.99.101:8080, but then I get a 404 Not Found error in Postman.
What is my mistake?
The docker-compose logs:
$ docker-compose logs
Attaching to thesismongoproject_spring-docker_1, thesismongoproject_db_1
spring-docker_1 |
spring-docker_1 | . ____ _ __ _ _
spring-docker_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
spring-docker_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
spring-docker_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
spring-docker_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
spring-docker_1 | =========|_|==============|___/=/_/_/_/
spring-docker_1 | :: Spring Boot :: (v2.2.6.RELEASE)
spring-docker_1 |
spring-docker_1 | 2020-05-31 11:36:39.598 INFO 1 --- [ main] thesisM
ongoProject.Application : Starting Application v0.0.1-SNAPSHOT on e81c
cff8ba0e with PID 1 (/demo-0.0.1-SNAPSHOT.jar started by root in /)
spring-docker_1 | 2020-05-31 11:36:39.620 INFO 1 --- [ main] thesisM
ongoProject.Application : No active profile set, falling back to defau
lt profiles: default
spring-docker_1 | 2020-05-31 11:36:41.971 INFO 1 --- [ main] .s.d.r.
c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositori
es in DEFAULT mode.
spring-docker_1 | 2020-05-31 11:36:42.216 INFO 1 --- [ main] .s.d.r.
c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in
225ms. Found 4 MongoDB repository interfaces.
spring-docker_1 | 2020-05-31 11:36:44.319 INFO 1 --- [ main] o.s.b.w
.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] o.apach
e.catalina.core.StandardService : Starting service [Tomcat]
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] org.apa
che.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.
33]
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.a.c.c
.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationC
ontext
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.s.web
.context.ContextLoader : Root WebApplicationContext: initialization c
ompleted in 4810 ms
spring-docker_1 | 2020-05-31 11:36:46.183 INFO 1 --- [ main] org.mon
godb.driver.cluster : Cluster created with settings {hosts=[db:270
17], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'
, maxWaitQueueSize=500}
spring-docker_1 | 2020-05-31 11:36:46.781 INFO 1 --- [null'}-db:27017] org.mon
godb.driver.connection : Opened connection [connectionId{localValue:1
, serverValue:1}] to db:27017
spring-docker_1 | 2020-05-31 11:36:46.802 INFO 1 --- [null'}-db:27017] org.mon
godb.driver.cluster : Monitor thread successfully connected to ser
ver with description ServerDescription{address=db:27017, type=STANDALONE, state=
CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 7]}, minWireVersion
=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30,
roundTripTimeNanos=5468915}
spring-docker_1 | 2020-05-31 11:36:48.829 INFO 1 --- [ main] o.s.s.c
oncurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTas
kExecutor'
spring-docker_1 | 2020-05-31 11:36:49.546 INFO 1 --- [ main] o.s.b.w
.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with
context path ''
spring-docker_1 | 2020-05-31 11:36:49.581 INFO 1 --- [ main] thesisM
ongoProject.Application : Started Application in 11.264 seconds (JVM r
unning for 13.615)
spring-docker_1 | 2020-05-31 11:40:10.290 INFO 1 --- [extShutdownHook] o.s.s.c
oncurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTa
skExecutor'
db_1 | 2020-05-31T11:36:35.623+0000 I CONTROL [main] Automatically
disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none
'
db_1 | 2020-05-31T11:36:35.639+0000 W ASIO [main] No TransportL
ayer configured during NetworkInterface startup
db_1 | 2020-05-31T11:36:35.645+0000 I CONTROL [initandlisten] Mong
oDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=1a0e5bc0c503
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] db v
ersion v4.2.7
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] git
version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] Open
SSL version: OpenSSL 1.1.1 11 Sep 2018
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] allo
cator: tcmalloc
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] modu
les: none
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten] buil
d environment:
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]
distmod: ubuntu1804
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]
distarch: x86_64
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]
target_arch: x86_64
db_1 | 2020-05-31T11:36:35.648+0000 I CONTROL [initandlisten] opti
ons: { net: { bindIp: "*" } }
db_1 | 2020-05-31T11:36:35.649+0000 I STORAGE [initandlisten] Dete
cted data files in /data/db created by the 'wiredTiger' storage engine, so setti
ng the active storage engine to 'wiredTiger'.
db_1 | 2020-05-31T11:36:35.650+0000 I STORAGE [initandlisten] wire
dtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_
max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(f
ast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager
=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statis
tics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
db_1 | 2020-05-31T11:36:37.046+0000 I STORAGE [initandlisten] Wire
dTiger message [1590924997:46670][1:0x7f393f9a0b00], txn-recover: Recovering log
9 through 10
db_1 | 2020-05-31T11:36:37.231+0000 I STORAGE [initandlisten] Wire
dTiger message [1590924997:231423][1:0x7f393f9a0b00], txn-recover: Recovering lo
g 10 through 10
db_1 | 2020-05-31T11:36:37.294+0000 I STORAGE [initandlisten] Wire
dTiger message [1590924997:294858][1:0x7f393f9a0b00], txn-recover: Main recovery
loop: starting at 9/6016 to 10/256
db_1 | 2020-05-31T11:36:37.447+0000 I STORAGE [initandlisten] Wire
dTiger message [1590924997:447346][1:0x7f393f9a0b00], txn-recover: Recovering lo
g 9 through 10
db_1 | 2020-05-31T11:36:37.564+0000 I STORAGE [initandlisten] Wire
dTiger message [1590924997:564841][1:0x7f393f9a0b00], txn-recover: Recovering lo
g 10 through 10
db_1 | 2020-05-31T11:36:37.645+0000 I STORAGE [initandlisten] Wire
dTiger message [1590924997:645216][1:0x7f393f9a0b00], txn-recover: Set global re
covery timestamp: (0, 0)
db_1 | 2020-05-31T11:36:37.681+0000 I RECOVERY [initandlisten] Wire
dTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1 | 2020-05-31T11:36:37.703+0000 I STORAGE [initandlisten] Time
stamp monitor starting
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] ** W
ARNING: Access control is not enabled for the database.
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] **
Read and write access to data and configuration is unrestricted.
db_1 | 2020-05-31T11:36:37.705+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.712+0000 I SHARDING [initandlisten] Mark
ing collection local.system.replset as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.722+0000 I STORAGE [initandlisten] Flow
Control is enabled on this deployment.
db_1 | 2020-05-31T11:36:37.722+0000 I SHARDING [initandlisten] Mark
ing collection admin.system.roles as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.724+0000 I SHARDING [initandlisten] Mark
ing collection admin.system.version as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.726+0000 I SHARDING [initandlisten] Mark
ing collection local.startup_log as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.729+0000 I FTDC [initandlisten] Init
ializing full-time diagnostic data capture with directory '/data/db/diagnostic.d
ata'
db_1 | 2020-05-31T11:36:37.740+0000 I SHARDING [LogicalSessionCache
Refresh] Marking collection config.system.sessions as collection version: <unsha
rded>
db_1 | 2020-05-31T11:36:37.748+0000 I SHARDING [LogicalSessionCache
Reap] Marking collection config.transactions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening
on /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening
on 0.0.0.0
db_1 | 2020-05-31T11:36:37.749+0000 I NETWORK [listener] waiting f
or connections on port 27017
db_1 | 2020-05-31T11:36:38.001+0000 I SHARDING [ftdc] Marking colle
ction local.oplog.rs as collection version: <unsharded>
db_1 | 2020-05-31T11:36:46.536+0000 I NETWORK [listener] connectio
n accepted from 172.19.0.3:40656 #1 (1 connection now open)
db_1 | 2020-05-31T11:36:46.653+0000 I NETWORK [conn1] received cli
ent metadata from 172.19.0.3:40656 conn1: { driver: { name: "mongo-java-driver|l
egacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "
amd64", version: "4.14.154-boot2docker" }, platform: "Java/Oracle Corporation/1.
8.0_252-b09" }
db_1 | 2020-05-31T11:40:10.302+0000 I NETWORK [conn1] end connecti
on 172.19.0.3:40656 (0 connections now open)
db_1 | 2020-05-31T11:40:10.523+0000 I CONTROL [signalProcessingThr
ead] got signal 15 (Terminated), will terminate after current cmd ends
db_1 | 2020-05-31T11:40:10.730+0000 I NETWORK [signalProcessingThr
ead] shutdown: going to close listening sockets...
db_1 | 2020-05-31T11:40:10.731+0000 I NETWORK [listener] removing
socket file: /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:40:10.731+0000 I - [signalProcessingThr
ead] Stopping further Flow Control ticket acquisitions.
db_1 | 2020-05-31T11:40:10.796+0000 I CONTROL [signalProcessingThr
ead] Shutting down free monitoring
db_1 | 2020-05-31T11:40:10.800+0000 I FTDC [signalProcessingThr
ead] Shutting down full-time diagnostic data capture
db_1 | 2020-05-31T11:40:10.803+0000 I STORAGE [signalProcessingThr
ead] Deregistering all the collections
db_1 | 2020-05-31T11:40:10.811+0000 I STORAGE [signalProcessingThr
ead] Timestamp monitor shutting down
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [TimestampMonitor] T
imestamp monitor is stopping due to: interrupted at shutdown
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [signalProcessingThr
ead] WiredTigerKVEngine shutting down
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThr
ead] Shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThr
ead] Finished shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThr
ead] Shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.916+0000 I STORAGE [signalProcessingThr
ead] Finished shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThr
ead] Shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThr
ead] Finished shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.935+0000 I STORAGE [signalProcessingThr
ead] shutdown: removing fs lock...
db_1 | 2020-05-31T11:40:10.942+0000 I CONTROL [signalProcessingThr
ead] now exiting
db_1 | 2020-05-31T11:40:10.943+0000 I CONTROL [signalProcessingThr
ead] shutting down with code:0
To solve this problem, I had to add the @EnableAutoConfiguration(exclude={MongoAutoConfiguration.class}) annotation.

How to load a specific flow whenever Node-red server is started through docker?

I am running a Node-RED server through docker-compose, and it creates a new flow every time docker-compose up is executed. How do I prevent this from happening? And how do I load/import a specific flow instead of the new one, as it is tedious to copy the nodes onto the new flow every time? Thank you!
nodered_1 | 9 Jan 05:32:55 - [warn] ------------------------------------------------------
nodered_1 | 9 Jan 05:32:55 - [warn] [node-red/rpi-gpio] Info : Ignoring Raspberry Pi specific node
nodered_1 | 9 Jan 05:32:55 - [warn] ------------------------------------------------------
nodered_1 | 9 Jan 05:32:55 - [info] Settings file : /var/node-red/settings.js
nodered_1 | 9 Jan 05:32:55 - [info] User directory : /var/node-red
nodered_1 | 9 Jan 05:32:55 - [info] Flows file : /var/node-red/flows_dc4a44db1d02.json
nodered_1 | 9 Jan 05:32:55 - [info] Creating new flow file
nodered_1 | 9 Jan 05:32:55 - [info] Starting flows
nodered_1 | 9 Jan 05:32:55 - [info] Started flows
nodered_1 | 9 Jan 05:32:55 - [info] Server now running at http://127.0.0.1:1880/
You need to pass in the name of the flow to use; how to do this is described in the README for the Node-RED Docker image:
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
You will also need to mount your own data directory so the flow is persisted across new instances being created; again, this is covered in the README:
$ docker run -it -p 1880:1880 -v ~/.node-red:/data --name mynodered nodered/node-red-docker
Combining the two gives you:
$ docker run -it -p 1880:1880 -e FLOWS=my_flow.json -v ~/.node-red:/data --name mynodered nodered/node-red-docker
You will have to take that command line and map it into the docker-compose file.
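A sketch of that mapping in docker-compose form; the image, paths and my_flow.json mirror the commands above and are assumptions to adapt to your own setup:
version: "3"
services:
  nodered:
    image: nodered/node-red-docker
    ports:
      - "1880:1880"
    environment:
      # Name of the flow file Node-RED should load instead of creating a new one
      - FLOWS=my_flow.json
    volumes:
      # Persist the user directory so flows survive container re-creation
      - ~/.node-red:/data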

Unable to build app using Docker

I have set up my application on DigitalOcean using Docker. It was working fine, but a few days back it stopped working. Whenever I want to build the application and deploy it, it doesn't show any progress.
When I use the following commands:
docker-compose build && docker-compose stop && docker-compose up -d
the system gets stuck at the following output:
db uses an image, skipping
elasticsearch uses an image, skipping
redis uses an image, skipping
Building app
It doesn't show any further progress.
Following are the docker-compose logs:
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2018-01-10
02:25:36 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
redis_1 | 11264:C 26 Mar 15:20:17.028 # Failed opening the RDB
file root (in server root dir /run) for saving: Permission denied
redis_1 | 1:M 26 Mar 15:20:17.127 # Background saving error
redis_1 | 1:M 26 Mar 15:20:23.038 * 1 changes in 3600 seconds.
Saving...
redis_1 | 1:M 26 Mar 15:20:23.038 * Background saving started by pid 11265
elasticsearch | [2018-03-06T01:18:25,729][WARN ][o.e.b.BootstrapChecks ] [_IRIbyW] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch | [2018-03-06T01:18:28,794][INFO ][o.e.c.s.ClusterService ] [_IRIbyW] new_master {_IRIbyW}{_IRIbyWCSoaUaKOLN93Fzg}{TFK38PIgRT6Kl62mTGBORg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elasticsearch | [2018-03-06T01:18:28,835][INFO ][o.e.h.n.Netty4HttpServerTransport] [_IRIbyW] publish_address {172.17.0.4:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch | [2018-03-06T01:18:28,838][INFO ][o.e.n.Node ] [_IRIbyW] started
elasticsearch | [2018-03-06T01:18:29,104][INFO ][o.e.g.GatewayService ] [_IRIbyW] recovered [4] indices into cluster_state
elasticsearch | [2018-03-06T01:18:29,799][INFO ][o.e.c.r.a.AllocationService] [_IRIbyW] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[product_records][2]] ...]).
elasticsearch | [2018-03-07T16:11:18,449][INFO ][o.e.n.Node ] [_IRIbyW] stopping ...
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] stopped
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] closing ...
elasticsearch | [2018-03-07T16:11:18,601][INFO ][o.e.n.Node ] [_IRIbyW] closed
elasticsearch | [2018-03-07T16:11:37,993][INFO ][o.e.n.Node ] [] initializing ...
WARNING: Connection pool is full, discarding connection: 'Ipaddress'
I am using the postgres, redis, elasticsearch and sidekiq images in my Rails application, but I have no clue where things are going wrong.

docker-compose persisting folder empty

I would like to use bitnami-docker-redmine with docker-compose and persistent volumes on Windows.
If I run the first example docker-compose.yml, without persisting the application, Redmine starts and runs perfectly.
But I would like to use the example with application persistence:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - './mariadb:/bitnami/mariadb'
  redmine:
    image: bitnami/redmine:latest
    ports:
      - 80:3000
    volumes:
      - './redmine:/bitnami/redmine'
And only MariaDB runs, with this error message:
$ docker-compose up
Creating bitnamidockerredmine_redmine_1
Creating bitnamidockerredmine_mariadb_1
Attaching to bitnamidockerredmine_mariadb_1, bitnamidockerredmine_redmine_1
mariadb_1 |
mariadb_1 | Welcome to the Bitnami mariadb container
mariadb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb_1 | Send us your feedback at containers@bitnami.com
mariadb_1 |
mariadb_1 | WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb_1 | nami INFO Initializing mariadb
mariadb_1 | mariadb INFO ==> Configuring permissions...
mariadb_1 | mariadb INFO ==> Validating inputs...
mariadb_1 | mariadb WARN Allowing the "rootPassword" input to be empty
redmine_1 |
redmine_1 | Welcome to the Bitnami redmine container
redmine_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redmine
redmine_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redmine/issues
redmine_1 | Send us your feedback at containers@bitnami.com
redmine_1 |
redmine_1 | nami INFO Initializing redmine
redmine_1 | redmine INFO Configuring Redmine database...
mariadb_1 | mariadb INFO ==> Initializing database...
mariadb_1 | mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb_1 | mariadb INFO ==> Enabling remote connections...
mariadb_1 | mariadb INFO
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO Installation parameters for mariadb:
mariadb_1 | mariadb INFO Root User: root
mariadb_1 | mariadb INFO Root Password: Not set during installation
mariadb_1 | mariadb INFO (Passwords are not shown for security reasons)
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO
mariadb_1 | nami INFO mariadb successfully initialized
mariadb_1 | INFO ==> Starting mariadb...
mariadb_1 | nami ERROR Unable to start com.bitnami.mariadb: Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 | Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 |
bitnamidockerredmine_mariadb_1 exited with code 1
redmine_1 | mysqlCo INFO Trying to connect to MySQL server
redmine_1 | Error executing 'postInstallation': Failed to connect to mariadb:3306 after 36 tries
bitnamidockerredmine_redmine_1 exited with code 1
My ./mariadb folder is good, but ./redmine is empty.
Do you have any idea why the persistence does not work completely? Without persistence, it works. :(
Docker version: 1.13.0 (client/server)
Platform: Windows 10 (sorry, not tested on Linux)
Thank you!
