Logstash missing config - docker

I have the following issue: every time I try to set the config for Logstash, it doesn't see my file. I am sure that the path is set correctly.
This is the log output:
[2018-09-14T09:28:44,073][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/jakub/IdeaProjects/test/logstash.conf"}
My docker-compose.yml looks like this:
logstash:
  image: docker.elastic.co/logstash/logstash:6.4.0
  networks: ['stack']
  ports:
    - "9290:9290"
    - "4560:4560"
  command: logstash -f /home/jakub/IdeaProjects/test/logstash.conf
  depends_on: ['elasticsearch']
and logstash.conf:
input {
  redis {
    host => "redis"
    key => "log4j2"
    data_type => "list"
    password => "RedisTest"
  }
}
output {
  elasticsearch {
    host => "elasticsearch"
  }
}
What am I doing wrong? Can you give me some advice or help solve my issue?
Thanks for everything.
Cheers

I guess your logstash.conf is on your host under /home/jakub/IdeaProjects/test/logstash.conf.
Thus, it is not inside your container (unless there is some hidden mount). The command is executed from within the container, so it points to a non-existent file.
So you could use docker cp /home/jakub/IdeaProjects/test/logstash.conf <container>:/home/jakub/IdeaProjects/test/logstash.conf (provided the directory exists in your container)
... or (better!) mount the path from your host into your container, such as:
volumes:
  - /home/jakub/IdeaProjects/test/logstash.conf:/home/jakub/IdeaProjects/test/logstash.conf:ro
... or use a Docker config (in my opinion the best option if you are in swarm mode!). Its usage is close to the "volume" option above, but you also have to pre-create the config (from the command line or from the docker-compose file).
There are other options, but the main point is that you have to make your file available from within your container!
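For reference, here is a minimal sketch of the logstash service with the bind mount applied, reusing the host path from the question so that the -f argument resolves inside the container:

logstash:
  image: docker.elastic.co/logstash/logstash:6.4.0
  networks: ['stack']
  ports:
    - "9290:9290"
    - "4560:4560"
  volumes:
    # Make the host file visible at the same path inside the container (read-only).
    - /home/jakub/IdeaProjects/test/logstash.conf:/home/jakub/IdeaProjects/test/logstash.conf:ro
  command: logstash -f /home/jakub/IdeaProjects/test/logstash.conf
  depends_on: ['elasticsearch']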

Related

How can I create an additional user in influxdb2 with a docker-compose.yml?

I am able to run a docker-compose.yml that starts an influxdb2 instance and configures it with an admin user, org and bucket. My problem is that I am not able to create an additional user (without admin privileges) via the docker-compose.yml.
I would appreciate it if someone could give me a hint.
docker-compose.yml:
version: "3.5"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb2
    volumes:
      - influxdb-storage:/etc/influxdb2:rw
      - influxdb-storage:/var/lib/influxdb2:rw
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=adminuser
      - DOCKER_INFLUXDB_INIT_PASSWORD=adminpassword
      - DOCKER_INFLUXDB_INIT_ORG=myOrg
      - DOCKER_INFLUXDB_INIT_BUCKET=myBucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=randomTokenValue
    ports:
      - "8086:8086"
    restart: unless-stopped
I tried adding an entrypoint to somehow run the following command:
influx user create -n john -p user -o myOrg
but that did not work.
The influxdb:latest image is described in the following repository: influxdata-docker/influxdb/2.6/
If you check these files, you will see that the user is created inside the entrypoint.sh script, in the setup_influxd() function, which is called from main() -> init_influxd().
There is a run_user_scripts() function which runs user-defined scripts on startup, if the directory specified by the ${USER_SCRIPT_DIR} variable exists:
# Allow users to mount arbitrary startup scripts into the container,
# for execution after initial setup/upgrade.
declare -r USER_SCRIPT_DIR=/docker-entrypoint-initdb.d
...
# Execute all shell files mounted into the expected path for user-defined startup scripts.
function run_user_scripts () {
    if [ -d ${USER_SCRIPT_DIR} ]; then
        log info "Executing user-provided scripts" script_dir ${USER_SCRIPT_DIR}
        run-parts --regex ".*sh$" --report --exit-on-error ${USER_SCRIPT_DIR}
    fi
}
I think you could use this functionality to do the additional steps. But probably you'll need to guard against running these steps twice.
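For example, a minimal sketch of such a startup script (untested; the script name, the new user's password, and the reuse of the init token and org variables from the compose file above are assumptions):

#!/bin/bash
# ./scripts/create-user.sh -- mount the ./scripts directory into the container, e.g.
#   volumes:
#     - ./scripts:/docker-entrypoint-initdb.d:ro
set -e

# Guard: skip if the user already exists, so a second run is harmless.
if influx user list --token "${DOCKER_INFLUXDB_INIT_ADMIN_TOKEN}" | grep -q "john"; then
  exit 0
fi

# Create the additional (non-admin) user in the organization set up by the init variables.
influx user create \
  --name john \
  --password userpassword \
  --org "${DOCKER_INFLUXDB_INIT_ORG}" \
  --token "${DOCKER_INFLUXDB_INIT_ADMIN_TOKEN}"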

Keep running container (dev-container) if there is an error?

I'm currently developing in an Odoo container, but every time the main process terminates while I have VS Code attached, I need to reload to make it connect again.
To prevent this, inside .devcontainer I modified the file docker-compose.yml with the following line:
#....more configs
# Overrides default command so things don't shut down after the process ends.
command: /bin/sh -c "while sleep 1000; do :; done"
This is my devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/docker-existing-docker-compose
// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.
{
    "name": "Existing Docker Compose (Extend)",
    // Update the 'dockerComposeFile' list if you have more compose files or use different names.
    // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
    "dockerComposeFile": [
        "../docker-compose.yml",
        "docker-compose.yml"
    ],
    // The 'service' property is the name of the service for the container that VS Code should
    // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
    "service": "web",
    // The optional 'workspaceFolder' property is the path VS Code should open by default when
    // connected. This is typically a file mount in .devcontainer/docker-compose.yml
    "workspaceFolder": "/home",
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Uncomment the next line if you want start specific services in your Docker Compose config.
    // "runServices": [],
    // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
    // "shutdownAction": "none",
    // Uncomment the next line to run commands after the container is created - for example installing curl.
    // "postCreateCommand": "apt-get update && apt-get install -y curl",
    // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
    "remoteUser": "root"
}
Even with all of this, when I get an error I have to bring the container up again with docker-compose up, which causes VS Code to disconnect.
Inside the container I use a config option that restarts the service every time it detects a code change, similar to nodemon in Node.js; but unlike nodemon, if a fatal "uncompilable" error occurs, the service stops completely and exits with an error code.
How can I avoid this behavior? Is there a way to ignore the error code so I don't have to reload vscode?
UPDATE
This is an example of my docker compose file:
version: '3.1'
services:
  web:
    image: odoo:14.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    environment:
      - PASSWORD_FILE=/run/secrets/postgresql_password
    secrets:
      - postgresql_password
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgresql_password
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
    secrets:
      - postgresql_password
volumes:
  odoo-web-data:
  odoo-db-data:
secrets:
  postgresql_password:
    file: odoo_pg_pass
Odoo Image
You need to add tty: true, so that bash can create an interactive session and the container stays up:
# docker-compose.yml
services:
  container-name:
    image: $IMAGE
    ...
    tty: true # docker run -t
    ...
See https://docs.docker.com/engine/reference/run/#foreground
In foreground mode (the default when -d is not specified), docker run
can start the process in the container and attach the console to the
process’s standard input, output, and standard error. It can even
pretend to be a TTY (this is what most command line executables
expect) and pass along signals.
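Applied to the setup in the question, the .devcontainer override could combine the existing command override with the suggested tty option. A sketch (the service name web is taken from the question's compose file):

# .devcontainer/docker-compose.yml (override)
services:
  web:
    # Keep the container alive even when the Odoo process exits with an error.
    command: /bin/sh -c "while sleep 1000; do :; done"
    tty: true # equivalent of `docker run -t`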

Running docker-entrypoint-initdb.d file is not working in docker-compose MongoDB

I have a docker-compose that looks like this:
stock-trading-system-db:
  container_name: stock-trading-system-db
  image: mongo
  restart: always
  ports:
    - "27017:27017"
  environment:
    - MONGO_INITDB_DATABASE=stock-trading-system-db
  volumes:
    - ./mongo-seed/import.js:/docker-entrypoint-initdb.d/import.js
    - ./mongo-seed/mongod.conf:/etc/mongod.conf
    - ./mongo/data:/data/db
    - ./mongo-seed/MOCK_DATA.csv:/mongo-seed/MOCK_DATA.csv
and import.js looks like this:
let exec = require('child_process').exec
let command1 = 'mongod --config /etc/mongod.conf'
let command2 = 'mongo --eval "rs.initiate();"'
let command3 = 'mongoimport --host=mongo1 --db=stock-trading-system-db --collection=stocks --type=csv --headerline --file=/mongo-seed/MOCK_DATA.csv'
exec(command1, (err, stdout, stderr) => {
  // check for errors or if it was successful
  if (!err) {
    exec(command2, (err, stdout, stderr) => {
      // check for errors or if it was successful
      if (!err) {
        exec(command3, (err, stdout, stderr) => {
          // check for errors or if it was successful
          if (!err) {
            console.log('MongoDB seed was successful');
          }
        })
      }
    })
  }
})
But it doesn't seem like import.js even gets recognised by the container.
The mongo docs say the following:
This variable allows you to specify the name of a database to be used for creation scripts in /docker-entrypoint-initdb.d/*.js (see Initializing a fresh instance below). MongoDB is fundamentally designed for "create on first use", so if you do not insert data with your JavaScript files, then no database is created.
This is in relation to the MONGO_INITDB_DATABASE variable in the docker-compose file, which I have included.
Where am I going wrong?
P.S. This is all to try to get a single-node replica set working in a mongo container so that I can use change streams in my Node app, so if you know an easier way to do this whilst also importing a CSV file into the DB, please mention it :)
First, you shouldn't be trying to start mongod in your init script;
the image already does this for you. When you run your first command:
mongod --config /etc/mongod.conf
That's it. That command doesn't exit, and nothing progresses beyond
that point, so your additional commands will not execute nor will the
normal container startup process complete.
So don't do that.
Next, if you want to run rs.initiate() (or any other javascript
command) as part of the startup process, you don't need to wrap it in
a call to mongo --eval. Just drop in a .js file that contains:
rs.initiate()
And if you're just going to run a shell command, it's probably easiest just to drop that in a .sh file.
I would replace your import.js script with two separate files. In rs-initiate.js, I would have the call to rs.initiate(), as above, and in import.sh I would have your call to mongoimport...
#!/bin/sh
mongoimport \
    --db=stock-trading-system-db \
    --collection=stocks \
    --type=csv \
    --headerline \
    --file=/mongo-seed/MOCK_DATA.csv
...so that the docker-compose.yaml would look like:
version: "3"
services:
  mongo:
    image: docker.io/mongo:latest
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_DATABASE=stock-trading-system-db
    volumes:
      - ./mongo-seed/import.sh:/docker-entrypoint-initdb.d/import.sh
      - ./mongo-seed/rs-initiate.js:/docker-entrypoint-initdb.d/rs-initiate.js
      - ./mongo-seed/MOCK_DATA.csv:/mongo-seed/MOCK_DATA.csv
      - ./mongo-seed/mongod.conf:/etc/mongod.conf
      - mongo_data:/data/db
volumes:
  mongo_data:
I'm using a named volume for the mongo data directory here, rather than a bind mount, but that's not necessary; everything will work fine with the bind mount as long as you have permissions configured correctly.
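For completeness, rs-initiate.js is just the one-liner described above. Note that rs.initiate() only succeeds if mongod was started with a replica-set name, which this answer doesn't cover; the --replSet value in the comment below is a hypothetical addition:

// ./mongo-seed/rs-initiate.js -- executed once from /docker-entrypoint-initdb.d
// (assumes the mongo service is started with a replica-set name, e.g. a
//  hypothetical `command: ["--replSet", "rs0"]` entry in docker-compose.yaml)
rs.initiate()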

Volume isn't created in Scylla when using docker-compose in Windows 10

I just started learning Docker and docker-compose, and I want to try out ScyllaDB (a database). I want to start a single instance of ScyllaDB in Docker through docker-compose with persistent storage. The persistent storage should be saved in the folder 'target' relative to my docker-compose file. The problem is that I don't see any folder being created, yet docker-compose seems to persist the data, and I am not sure where I can locate the files that ScyllaDB created. Step-by-step reproduction path:
Create a docker-compose.yml with the following content (/var/lib/scylla should be correct according to https://docs.scylladb.com/operating-scylla/procedures/tips/best_practices_scylla_on_docker/):
docker-compose.yml
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - ./target:/var/lib/scylla
    ports:
      - "127.0.0.1:9042:9042"
      - "127.0.0.1:9160:9160"
This does not give any result: $ docker volume ls
Start up docker-compose and wait a minute for ScyllaDB to start up: $ docker-compose up -d
This still does not give any result: $ docker volume ls. I expected that Docker would have created a volume (./target/).
Persist some data in ScyllaDB to verify that the data is saved somewhere:
Run the following commands:
$ docker exec -it b-scylla cqlsh
$ create keyspace somekeyspace with replication = {
      'class': 'NetworkTopologyStrategy',
      'replication_factor': 2
  };
The created keyspace is saved somewhere, but I don't know where. I would expect it is just in the target folder, but that folder isn't even created. When I restart docker-compose, the keyspace is still present, so the data is saved somewhere, but where?
You are using the "short syntax" for data mounting (https://docs.docker.com/compose/compose-file/compose-file-v3/#short-syntax-3), which creates a bind mount. Bind mounts are not volumes: they can't be listed with docker volume ls. You can find out about your mounts with docker inspect {container}.
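For example, to see the bind mount on the container from the question:

$ docker inspect -f '{{ json .Mounts }}' b-scylla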
However, the Scylla image does not start correctly for me with the bind mount; I saw constant filesystem errors when writing sstables in the mounted directory:
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - ./target:/var/lib/scylla
$ docker compose -f .\test.yaml up
b-scylla | INFO 2021-03-04 07:24:53,132 [shard 0] init - initializing batchlog manager
b-scylla | INFO 2021-03-04 07:24:53,135 [shard 0] legacy_schema_migrator - Moving 0 keyspaces from legacy schema tables to the new schema keyspace (system_schema)
b-scylla | INFO 2021-03-04 07:24:53,136 [shard 0] legacy_schema_migrator - Dropping legacy schema tables
b-scylla | ERROR 2021-03-04 07:24:53,168 [shard 0] table - failed to write sstable /var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/mc-8-big-Data.db: std::system_error (error system:2, No such file or directory)
I did not find out what causes this, but the dir is writable and contains most of the normal initial data - reserved commitlog segments and system ks data folders.
What actually works is using Volumes:
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - type: volume
        source: target
        target: /var/lib/scylla
        volume:
          nocopy: true
volumes:
  target:
$ docker compose -f .\test.yaml up
$ docker volume ls
DRIVER VOLUME NAME
local 6b57922b3380d61b960110dacf8d180e663b1ce120494d7a005fc08cee475234
local ad220954e311ea4503eb3179de0d1162d2e75b73d1d9582605b4e5c0da37502d
local projects_target
$ docker volume inspect projects_target
[
    {
        "CreatedAt": "2021-03-04T07:20:40Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "projects",
            "com.docker.compose.version": "1.0-alpha",
            "com.docker.compose.volume": "target"
        },
        "Mountpoint": "/var/lib/docker/volumes/projects_target/_data",
        "Name": "projects_target",
        "Options": null,
        "Scope": "local"
    }
]
And Scylla starts successfully in this mode.
You can of course mount this volume into any other container with:
$ docker run -it --mount source=projects_target,target=/app --entrypoint bash scylladb/scylla:4.3.1
or access it via WSL (see "Locating data volumes in Docker Desktop (Windows)"):
$ \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\projects_target\_data
Turns out I needed to reset my credentials in Docker Desktop

Graylog in Docker persistent

I'm trying to make a Graylog Docker Container persistent.
Meaning that after restarting (docker-compose down; docker-compose up) the logs will still be there alongside the configuration.
I've used the documentation at https://docs.graylog.org/en/3.1/pages/installation/docker.html and created a yml file with the content under the topic "Persisting data".
I only edited the line "GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/" to use the machine's external IP instead of localhost.
Docker works; I can create an input and collect logfiles. What does not work is the data being persistent. Also, every time I restart, the node id changes, so I have to reconfigure the input. Running docker volume ls lists five volumes, three of which are the ones created in the yml file.
I don't understand why data is not persistent. Can anybody help?
I had the same problem and I'd been struggling for a while before I found a solution. I'm on 3.2 and also had issues with node persistence. The documentation doesn't seem to directly state that there is one more configuration folder you need to persist, which is:
/usr/share/graylog/data/config
They actually mention it in the Custom configuration files section and when I took a look via CLI in that directory, it turns out that it's where the graylog.conf and node-id (the file Graylog uses to store information about its nodes) are stored as well!
Here's my docker-compose.override.yml section with the necessary changes (marked with '# ADDED' comments)
services:
  graylog:
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
      - GRAYLOG_IS_MASTER=true
      #- GRAYLOG_NODE_ID_FILE=/usr/share/graylog/data/config/node-id
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
    volumes:
      - "graylogjournal:/usr/share/graylog/data/journal"
      - "graylogconfig:/usr/share/graylog/data/config" # ADDED
volumes:
  graylogjournal:
    driver: local
  graylogconfig: # ADDED
    driver: local # ADDED
Hope this helps
You can add these lines to the daemon.json file:
{
    "log-driver": "gelf",
    "log-opts": {
        "gelf-address": "udp://1.2.3.4:12201"
    }
}
https://docs.docker.com/config/containers/logging/gelf/
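If you'd rather not change the Docker daemon globally, the same GELF settings can also be applied per service in a compose file. A sketch (the service and image names are placeholders, and the address is the example value from the snippet above):

services:
  some-app:
    image: your/app-image # placeholder
    logging:
      driver: gelf
      options:
        gelf-address: "udp://1.2.3.4:12201"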
