Making a Graylog Docker container persistent

I'm trying to make a Graylog Docker container persistent, meaning that after a restart (docker-compose down; docker-compose up) the logs will still be there, alongside the configuration.
I've used the documentation at https://docs.graylog.org/en/3.1/pages/installation/docker.html and created a yml file with the content under the topic "Persisting data".
The only line I edited was "GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/", replacing localhost with the external IP of the machine.
Docker works; I can create an input and collect log files. What does not work is data persistence. Also, every time I restart, the node ID changes, so I have to reconfigure the input. Running docker volume ls lists five volumes, three of which are the ones created in the yml file.
I don't understand why the data is not persistent. Can anybody help?

I had the same problem and had been struggling for a while before I found a solution. I'm on 3.2 and also had issues with node persistence. The documentation doesn't seem to state directly that there is one more configuration folder you need to persist:
/usr/share/graylog/data/config
It is actually mentioned in the Custom configuration files section, and when I took a look at that directory via the CLI, it turned out that this is where graylog.conf and node-id (the file Graylog uses to store its node ID) are stored as well!
Here's my docker-compose.override.yml section with the necessary changes (marked with '# ADDED' comments):
services:
  graylog:
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
      - GRAYLOG_IS_MASTER=true
      #- GRAYLOG_NODE_ID_FILE=/usr/share/graylog/data/config/node-id
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
    volumes:
      - "graylogjournal:/usr/share/graylog/data/journal"
      - "graylogconfig:/usr/share/graylog/data/config" # ADDED
volumes:
  graylogjournal:
    driver: local
  graylogconfig: # ADDED
    driver: local # ADDED
Hope this helps
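As a quick sanity check (a sketch, assuming the service is named graylog as above), you can confirm that the node ID actually survives a restart:

# the printed ID should be identical before and after the restart
docker-compose exec graylog cat /usr/share/graylog/data/config/node-id
docker-compose down && docker-compose up -d
docker-compose exec graylog cat /usr/share/graylog/data/config/node-id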

To ship container logs to Graylog via GELF, you can add these lines to Docker's daemon.json file:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://1.2.3.4:12201"
  }
}
https://docs.docker.com/config/containers/logging/gelf/
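Note that changes to /etc/docker/daemon.json only take effect after the Docker daemon is restarted (on a systemd host):

sudo systemctl restart docker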

Related

Landoop: how to change Kafka Connect worker properties

landoop/fast-data-dev:2.6
I want to change the default batch.size by setting 'producer.override.batch.size=65536' when creating a new connector.
But in order to do that, the override policy has to be applied on the worker side:
connector.client.config.override.policy=All
Otherwise there is an exception:
"producer.override.batch.size" : The 'None' policy does not allow
'batch.size' to be overridden in the connector configuration.
It's not clear how exactly:
- to change the default worker properties
- where Landoop expects them to be placed
- what name they should have
so that Landoop sees them.
I start Landoop using the following docker-compose:
version: '2'
services:
  kafka-cluster:
    image: landoop/fast-data-dev:2.6
    environment:
      ADV_HOST: 127.0.0.1
      RUNTESTS: 0
    ports:
      - 2181:2181             # Zookeeper
      - 3030:3030             # Landoop UI
      - 8081-8083:8081-8083   # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585   # JMX Ports
      - 9092:9092             # Kafka Broker
    volumes:
      - ./connectors/news/crypto-panic-connector-1.0.jar:/connectors/crypto-panic-connector-1.0.jar
The distributed worker properties file generated by Landoop, at /connect/connect-avro-distributed.properties:
offset.storage.partitions=5
key.converter.schema.registry.url=http://127.0.0.1:8081
value.converter.schema.registry.url=http://127.0.0.1:8081
config.storage.replication.factor=1
offset.storage.topic=connect-offsets
status.storage.partitions=3
offset.storage.replication.factor=1
key.converter=io.confluent.connect.avro.AvroConverter
config.storage.topic=connect-configs
config.storage.partitions=1
group.id=connect-fast-data
rest.advertised.host.name=127.0.0.1
port=8083
value.converter=io.confluent.connect.avro.AvroConverter
rest.port=8083
status.storage.replication.factor=1
status.storage.topic=connect-statuses
access.control.allow.origin=*
access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS
jmx.port=9584
plugin.path=/var/run/connect/connectors/stream-reactor,/var/run/connect/connectors/third-party,/connectors
bootstrap.servers=PLAINTEXT://127.0.0.1:9092
crypto-panic-connector-1.0 connector directory structure:
/config:
    > worker.properties
/src:
    > ...
UPDATE
Adding the following to the environment properties:
CONNECT_CONNECT_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
Doesn't work for landoop/fast-data-dev:2.6
In the logs it's still:
'connector.client.config.override.policy = None'
And there is a warning:
WARN The configuration 'connect.client.config.override.policy' was supplied but isn't a known config.
Changing this to
CONNECTOR_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
removes the warning, but in the end the override policy is still 'None', and it's not possible to override client properties when creating a connector.
Changing to
CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
has same effect, policy 'None'.
Also, the batch size override is not applied. So I assume those override features are not supported in Landoop.
WARN The configuration 'producer.override.batch.size' was supplied but isn't a known config.
I assume 'confluentinc/cp-kafka-connect' doesn't have a built-in UI, and for learning purposes it seems better to have one, so doing it in Landoop is preferable for me. But thanks for the recommendation of 'confluentinc/cp-kafka-connect'; I will try this config override there as well.
For starters, that image is very old and no longer maintained. I'd recommend you use confluentinc/cp-kafka-connect.
In any case, for both images, you can use:
environment:
  CONNECT_CONNECT_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
  CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
It's not clear, how exactly ... change the default worker properties
Look at the source code
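For reference, the Confluent images build the worker config by mapping each CONNECT_-prefixed environment variable to a property: the prefix is stripped, the rest is lowercased, and underscores become dots. Under that convention, the worker property connector.client.config.override.policy would be spelled like this (a sketch, not verified against fast-data-dev):

environment:
  # maps to connector.client.config.override.policy=All in the worker config
  CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'

The producer.override.* settings themselves then go into the connector's JSON config submitted to the Connect REST API, not into the worker properties.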

Confluence on Docker runs setup assistant on existing installation after update

A few days ago, Watchtower updated my Dockerized Confluence to the 6.15.1-alpine tag. It's hosted using Atlassian's official image. Since that update, Confluence shows the setup screen, and there is no way to get into the admin panel. When continuing the wizard and entering the database credentials of the existing installation, it gives an error that an installation already exists and would be overwritten if I continued.
It was a re-push of the exact same 6.15.1 version tag, not a regular version update, so there seems to be no way to fall back to the old, working image. Other versions also seem to have been re-pushed; I tried some older ones and also a newer one, without success.
docker-compose.yml
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1-alpine
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./confluence.cfg.xml:/var/atlassian/application-data/confluence/confluence.cfg.xml
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true
I found out that the following changes were made to the images:
Ownership
The logs threw errors about not being able to write to the log files, because nearly the entire home directory was owned by a user called bin:
root@8ac38faa94f1:/var/atlassian/application-data/confluence# ls -l
total 108
drwx------ 2 bin bin 4096 Aug 19 00:03 analytics-logs
drwx------ 3 bin bin 4096 Jun 15 2017 attachments
drwx------ 2 bin bin 24576 Jan 12 2019 backups
[...]
This could be fixed by executing a chown:
docker exec -it confluence bash
chown confluence:confluence -R /var/atlassian/application-data/confluence
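If the shell inside the container does not run as root, the same fix can be applied in one line by forcing the exec user (a sketch; the container name is the one from the compose file above):

docker exec -u root -it confluence chown -R confluence:confluence /var/atlassian/application-data/confluence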
Mounts inside a mount
My docker-compose.yml mounts a volume to /var/atlassian/application-data/confluence, and inside that volume the confluence.cfg.xml file was mounted from the current directory. This is an older approach, intended to separate the user data in the volume from configuration files like confluence.cfg.xml, which live next to docker-compose.yml and the application itself.
This no longer seems to work properly with Docker 17.05 and Docker Compose 1.8.0 (at least in combination with Confluence), so I simply removed that second mount and placed the configuration file inside the volume.
Atlassian now creates config files dynamically
It was noticeable that my mounted configuration files like confluence.cfg.xml and server.xml were overwritten by Atlassian's container. Their source code shows that they now use Jinja2, a common Python template engine used e.g. in Ansible. A Python script parses the templates on startup and creates Confluence's configuration files, without properly checking whether those files already exist.
Mounting them read-only caused the app to crash, because that case is not handled in their Python script either. By analyzing their templates, I learned that they have replaced nearly every config item with environment variables. Not a bad approach, so I specified my server.xml parameters through env variables instead of mounting the entire file.
In my case, Confluence sits behind a Traefik reverse proxy, and it's required to tell Confluence its final application URL for end users:
environment:
  - ATL_proxyName=confluence.my-domain.com
  - ATL_proxyPort=443
  - ATL_tomcat_scheme=https
Final working docker-compose.yml
By applying all modifications above, accessing the existing installation works again using the following docker-compose.yml file:
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
- ATL_proxyName=confluence.my-domain.com
- ATL_proxyPort=443
- ATL_tomcat_scheme=https
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true

Why Neo4J docker authentication doesn't work

I want to run a Neo4j instance in Docker using docker-compose.
docker-compose.yml
version: '3'
services:
  neo4j:
    container_name: neo4j-lab
    image: neo4j:latest
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - NEO4J_dbms_memory_heap_maxSize=4G
      - NEO4J_dbms_memory_heap_initialSize=512M
      - NEO4J_AUTH=neo4j/changeme
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - neo4j_data:/data
      - neo4j_conf:/conf
      - ./import:/import
volumes:
  neo4j_data:
  neo4j_conf:
Running this with docker-compose up works perfectly fine, and I can reach the login screen.
But when I enter the credentials, my container logs show the following error, even though I am sure I entered the right credentials (the ones used in my docker-compose file): Neo.ClientError.Security.Unauthorized The client is unauthorized due to authentication failure.
Furthermore:
- when I set NEO4J_AUTH to none, no credentials are asked for
- when I set it to neo4j/neo4j, it says that I can't use the default password
According to the documentation, this should be perfectly fine:
By default Neo4j requires authentication and requires you to login with neo4j/neo4j at the first connection and set a new password. You can set the password for the Docker container directly by specifying --env NEO4J_AUTH=neo4j/password in your run directive. Alternatively, you can disable authentication by specifying --env NEO4J_AUTH=none instead.
Do you have any idea what's going on?
I hope you can help me solve this!
EDIT
Docker logs output:
neo4j-lab | 2019-03-13 23:02:32.378+0000 INFO Starting...
neo4j-lab | 2019-03-13 23:02:37.796+0000 INFO Bolt enabled on 0.0.0.0:7687.
neo4j-lab | 2019-03-13 23:02:41.102+0000 INFO Started.
neo4j-lab | 2019-03-13 23:02:43.935+0000 INFO Remote interface available at http://localhost:7474/
neo4j-lab | 2019-03-13 23:02:56.105+0000 WARN The client is unauthorized due to authentication failure.
EDIT 2:
It seems that deleting the associated volume first works; the password is then changed.
However, if I docker-compose down and then docker-compose up after changing the password in my docker-compose file, the issue reappears.
So I think that when the password is changed through docker-compose more than once while a volume exists, the auth file present in the volume has to be removed.
To do that:
docker volume inspect <volume_name>
You should get something like this:
[
    {
        "CreatedAt": "2019-03-14T11:17:08+01:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "neo4j",
            "com.docker.compose.volume": "neo4j_data"
        },
        "Mountpoint": "/data/docker/volumes/neo4j_neo4j_data/_data",
        "Name": "neo4j_neo4j_data",
        "Options": null,
        "Scope": "local"
    }
]
This will obviously look different if you named your project and volumes differently than I did (neo4j, neo4j_data).
The important part is the Mountpoint, which locates the volume on the host.
Inside this volume, you can delete the auth file, which is in the dbms directory.
Then restart your container and everything should be fine.
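Concretely, with the Mountpoint from the inspect output above, that amounts to (paths are from my setup; in Neo4j 3.x the auth file lives under data/dbms):

# remove the stored credentials, then restart the container
sudo rm /data/docker/volumes/neo4j_neo4j_data/_data/dbms/auth
docker restart neo4j-lab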
Neo4j docker developer here.
The reason this is happening is that the NEO4J_AUTH environment variable doesn't set the database password; it sets the INITIAL password only.
If you're mounting a data volume with an existing database inside, then NEO4J_AUTH has no effect because that database already has a password. It sounds like that's what you're experiencing here.
The documentation around this feature was not great and I've updated it! See: Neo4j docker authentication documentation
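If the goal really is to start over with a new password, one option is to drop the data volume so that NEO4J_AUTH applies again on the next start. Note that this destroys all data; a sketch, assuming the volume name from the question's project:

docker-compose down
docker volume rm neo4j_neo4j_data
docker-compose up -d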
Define the Neo4j password with docker-compose:
neo4j:
  image: 'neo4j:4.1'
  environment:
    NEO4J_AUTH: 'neo4j/your_password'
  ports:
    - "7474:7474"
  volumes:
    ...

Logstash missing config

I have the following issue: every time I try to set a config for Logstash, it doesn't see my file. I am sure that the path is set properly.
This is the log output:
[2018-09-14T09:28:44,073][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/jakub/IdeaProjects/test/logstash.conf"}
My docker-compose.yml looks like this:
logstash:
  image: docker.elastic.co/logstash/logstash:6.4.0
  networks: ['stack']
  ports:
    - "9290:9290"
    - "4560:4560"
  command: logstash -f /home/jakub/IdeaProjects/test/logstash.conf
  depends_on: ['elasticsearch']
and logstash.conf:
input {
  redis {
    host => "redis"
    key => "log4j2"
    data_type => "list"
    password => "RedisTest"
  }
}
output {
  elasticsearch {
    host => "elasticsearch"
  }
}
What am I doing wrong? Can you give me some advice or help me solve this?
Thanks for everything.
Cheers
I guess your logstash.conf is on your host under /home/jakub/IdeaProjects/test/logstash.conf.
Thus, it is not inside your container (unless there is some hidden mount). The command is executed from within the container, so it points to a non-existent file.
You may use docker cp /home/jakub/IdeaProjects/test/logstash.conf <container_name>:/home/jakub/IdeaProjects/test/logstash.conf (provided the directory exists in your container)
... or (better!) mount the path from your host into your container, such as:
volumes:
  - /home/jakub/IdeaProjects/test/logstash.conf:/home/jakub/IdeaProjects/test/logstash.conf:ro
... or use a Docker config (the best option, to me, if you are in swarm mode!). The mount is close to the "volumes" option above, but you also have to pre-create the config (from the command line or from the docker-compose file).
There are other options, but the main point is that you have to make your file available from within your container!
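Putting it together, a minimal sketch of the compose service with the bind mount added (the official image also picks up any pipeline config placed under /usr/share/logstash/pipeline, which would avoid overriding the command entirely; verify that path for your version):

logstash:
  image: docker.elastic.co/logstash/logstash:6.4.0
  networks: ['stack']
  ports:
    - "9290:9290"
    - "4560:4560"
  volumes:
    # bind-mount the host file to the exact path the command expects
    - /home/jakub/IdeaProjects/test/logstash.conf:/home/jakub/IdeaProjects/test/logstash.conf:ro
  command: logstash -f /home/jakub/IdeaProjects/test/logstash.conf
  depends_on: ['elasticsearch']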

Docker-compose: understanding linking environment variables

I'm now using docker-compose for all of my projects. Very convenient. Much more comfortable than manual linking through several docker commands.
There is something that is not clear to me yet, though: the logic behind the linking environment variables.
E.g. with this docker-compose.yml:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm start
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
In the Node app, I need to retrieve the MongoDB URL. And if I console.log(process.env), I get so many entries that it feels very random (I only kept the docker-compose-related ones):
MONGODB_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PORT: '27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
MONGODB_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_GOSU_VERSION: '1.7',
'MYAPP_MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GOSU_VERSION: '1.7',
MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_1_NAME: '/myapp_web_1/mongodb_1',
MONGODB_1_PORT_27017_TCP_PORT: '27017',
MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
'MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MONGODB_PORT: 'tcp://172.17.0.2:27017',
MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_ENV_GOSU_VERSION: '1.7',
MONGODB_ENV_MONGO_MAJOR: '3.2',
MONGODB_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_NAME: '/myapp_web_1/mongodb',
MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_PORT_27017_TCP_PORT: '27017',
MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_PORT_27017_TCP_PROTO: 'tcp',
MYAPP_MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
'MONGODB_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MYAPP_MONGODB_1_NAME: '/myapp_web_1/novatube_mongodb_1',
I don't know what to pick, and why are there so many entries? Is it better to use the general ones or the MYAPP-prefixed ones? Where does the MYAPP name come from? The folder name?
Could someone clarify this?
Wouldn't it be easier to let users define the variables they need in the docker-compose.yml file with a custom mapping? Like:
links:
  - mongodb:
    - MONGOIP: IP
    - MONGOPORT: PORT
What I'm saying might not make sense, though. :-)
Environment variables are a legacy way of defining links between containers. If you are using a newer version of Compose, you don't need the links declaration at all: connecting to mongodb from your app container works fine by just using the service name (mongodb) as a hostname, without any links defined in the compose file. This relies on Docker's built-in DNS resolution (check /etc/hosts, there is nothing in there either!).
In answer to your question about the MYAPP prefix: you're right. Compose prefixes the service name with the name of the folder (or 'project', in Compose nomenclature). It does the same thing when creating custom networks and volumes.
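A minimal sketch of this link-free approach: drop links, pass the URL explicitly, and let Docker's DNS resolve the service name (MONGO_URL is a hypothetical variable your app would read):

version: '2'
services:
  mongodb:
    image: mongo
    command: "--smallfiles --logpath=/dev/null"
  web:
    build: .
    command: npm start
    ports:
      - "3001:3000"
    environment:
      # "mongodb" resolves via Docker's built-in DNS on the compose network
      MONGO_URL: 'mongodb://mongodb:27017/myapp'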
