How to update the Graylog version in Docker

I am new to Graylog. I have installed Graylog in Docker, and after installing it I noticed two notifications, one of which is about a Graylog upgrade. Can someone tell me how to update it using Docker commands?
Note: First I need to take a backup of my data, and then I need to update to version 2.4.6.
Note 2: I have already referred to the Graylog documentation:
http://docs.graylog.org/en/2.4/pages/upgrade.html
http://docs.graylog.org/en/2.4/pages/installation/docker.html
Graylog Installation process:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.2
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1

To use the latest version, change the tag of the graylog image from 2.4.0-1 to 2.4 or 2.4.6-1.
It seems the documentation you found is not completely in line with the documentation on Docker Hub:
If you simply want to checkout Graylog without any further customization, you can run the following three commands to create the necessary environment:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.12
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4
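Either way, the image tag named in the question can be pulled explicitly before the container is recreated; docker run will also pull a missing tag automatically, so this step only makes the download visible:
docker pull graylog/graylog:2.4.6-1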

First, I installed Graylog with my own volumes:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1
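The -v flags aren't shown in the command above; a minimal sketch of what persisting the Graylog journal in a named volume might look like (the container path /usr/share/graylog/data/journal is an assumption based on the official image layout):
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-v graylog_journal:/usr/share/graylog/data/journal \
-d graylog/graylog:2.4.0-1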
Now stop Graylog:
docker stop [graylog Container ID]
Now remove the container:
docker rm [graylog Container ID]
Now remove the Docker image:
docker rmi [graylog Image ID]
Now install Graylog again with the new version tag:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.6-1
Note: Only remove the Graylog container, not MongoDB/Elasticsearch. That way you won't lose any data.
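The question also asks about taking a backup first. A minimal sketch of dumping the Graylog configuration database before the upgrade, assuming the MongoDB container is named mongo as above (message data lives in Elasticsearch and would need its own snapshot):
# Dump MongoDB (Graylog configuration, users, dashboards) to a file on the host
docker exec mongo mongodump --archive --gzip > graylog-mongo-backup.gz
# Restore later, if needed, with:
# docker exec -i mongo mongorestore --archive --gzip < graylog-mongo-backup.gz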

Related

Backup and restore Zookeeper and Kafka data

zookeeper cluster
sudo docker run -itd --name zookeeper-1 --restart=always -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=1 -e ZOO_SERVERS=0.0.0.0:2888:3888,ip2:2888:3888,ip3:2888:3888 -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-2 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=2 -e ZOO_SERVERS=ip1:2888:3888,0.0.0.0:2888:3888,ip3:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-3 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=3 -e ZOO_SERVERS=ip1:2888:3888,ip2:2888:3888,0.0.0.0:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
kafka cluster
sudo docker run -itd --name kafka-1 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-2 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-3 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
After running these commands, my cluster is working fine.
I want to restore my Kafka data and topics to a new cluster, so I took a backup of the Docker volumes on my host under the /data/kafka/ folder using the tar command:
tar cvzf data_kafka1.tar.gz data
When I check the size of the folder on my existing cluster, the /data/kafka folder is around 200 MB.
When I copy my backup to the new cluster and extract it there using the command
tar xvzf data_kafka1.tar.gz
and then check the size of the data folder,
du -hs data
it is 20 GB on my new cluster.
I want the backup size to be the same on both clusters; a small difference is fine, but this is a huge difference in size.
What mistake am I making?
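This isn't from the original thread, but one common cause worth checking: Kafka preallocates sparse files (the .index files, for example), so on-disk size and apparent size can differ a lot, and a plain tar expands the holes when extracting. Comparing the two sizes and re-creating the archive with GNU tar's sparse handling would show whether that explains the gap:
# On the existing cluster: on-disk size vs. apparent size (a big gap points to sparse files)
du -hs data
du -hs --apparent-size data
# Re-create the archive with sparse-file support so holes are preserved on extraction
tar --sparse -cvzf data_kafka1.tar.gz data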

How can I map a hostname (subdomain) to a Docker container with Traefik?

I have multiple Docker containers running nginx that serve up a web application. These containers run on a virtual machine, abc.com. They all require HTTPS.
If I have just one container running, I can access it at abc.com:443 with no problem. I can also run multiple containers using docker run and port mapping, where I map a host port to 443 like this:
VersionA 0.0.0.0:5000->443 can hit on abc.com:5000
VersionB 0.0.0.0:5001->443 can hit on abc.com:5001
VersionC 0.0.0.0:5002->443 can hit on abc.com:5002
What I would like is:
vA.abc.com -> VersionAContainer:443
vB.abc.com -> VersionBContainer:443
vC.abc.com -> VersionCContainer:443
These containers will spin up and shut down regularly, and I need them to be picked up by Traefik. What is the proper command to run Traefik, and what run command and labels do I need for the containers?
This is how I was running the Traefik container, with no luck:
sudo docker container run -d --name traefik_proxy \
--network traefik_webgateway \
-p 80:80 \
-p 443:443 \
-p 8080:8080 \
--restart always \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume /dev/null:/traefik.toml \
traefik --docker --logLevel=INFO --api \
--entrypoints="Name:http Address::80 Redirect.EntryPoint:https" \
--entrypoints="Name:https Address::443 TLS" \
--defaultentrypoints="http,https"
And this is how I was running my container (I was unsure whether the Host rule should use localhost or abc.com):
sudo docker run -d --name some-nginx \
--network traefik_webgateway \
--label traefik.docker.network=traefik_webgateway \
--label traefik.protocol=https \
--label traefik.frontend.entryPoints=http,https \
--label traefik.frontend.rule=Host:something.localhost \
--label traefik.port=443 \
--label traefik.frontend.auth.forward.tls.insecureSkipVerify=true \
container:latest
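No answer is reproduced here, but with the Traefik 1.x flags shown above, it is the frontend rule label that maps a hostname to a container; a minimal sketch for the vA.abc.com case (container and image names are placeholders, and DNS for the subdomains is assumed to point at the VM):
sudo docker run -d --name version-a \
--network traefik_webgateway \
--label traefik.docker.network=traefik_webgateway \
--label traefik.protocol=https \
--label traefik.frontend.entryPoints=http,https \
--label traefik.frontend.rule=Host:vA.abc.com \
--label traefik.port=443 \
versiona-container:latest
Repeating the same pattern with Host:vB.abc.com and Host:vC.abc.com would give the mapping described above.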

How can I run TimescaleDB on Docker on an ARM architecture with volumes?

How can I run TimescaleDB on Docker on an ARM architecture with the Postgres/TimescaleDB data volumes exposed to the host?
My idea was to do the following:
docker run -d --restart always \
--name timescaledb \
-p 5432:5432 \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_USER=user \
-e POSTGRES_DB=databasename \
-v /etc/postgresql:$PWD/postgres/etc \
-v /var/log/postgresql:$PWD/postgres/log \
-v /var/lib/postgresql:$PWD/postgres/lib \
timescale/timescaledb
However, TimescaleDB seems to be stuck in a start/restart loop.
Do you have any suggestions or ideas about what I'm doing wrong?
The question is available on GitHub, too: https://github.com/timescale/timescaledb-docker/issues/23
The Docker log (docker logs timescaledb) shows the following:
standard_init_linux.go:190: exec user process caused "exec format error"
Running
docker run \
--name timescaledb \
-p 5432:5432 \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_USER=user \
-e POSTGRES_DB=databasename \
timescale/timescaledb
returns the same error: standard_init_linux.go:190: exec user process caused "exec format error"
I will check whether the image supports ARM architectures. Follow-up for more information is here: https://github.com/timescale/timescaledb-docker/issues/25
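For reference, the "exec format error" typically means the image was built for a different CPU architecture than the host; a quick way to compare the two:
# CPU architecture of the Docker host (e.g. armv7l, aarch64, x86_64)
uname -m
# Architecture the pulled image was built for
docker image inspect --format '{{.Os}}/{{.Architecture}}' timescale/timescaledb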

--net equivalent in docker compose

When I run my Docker container, I explicitly set the network to host:
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
-e ZOOKEEPER_TICK_TIME=2000 \
confluentinc/cp-zookeeper:4.1.0
How can I provide the same network in a docker-compose.yaml file so that all containers run on the host network?
You can use the following in your service definition:
network_mode: "host"
Ref - https://docs.docker.com/compose/compose-file/#network_mode
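For example, the zookeeper container from the question could be written as a compose service roughly like this (a sketch; with network_mode: "host" the ports are exposed on the host directly, so no ports mapping is needed):
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:4.1.0
    network_mode: "host"
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000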

Kafka with Docker, 3 nodes on different hosts - Broker may not be available

I'm playing with the wurstmeister/kafka image on three different Docker hosts, and the host IPs are:
10.1.1.11
10.1.1.12
10.1.1.13
I enter these commands to run the image:
10.1.1.11:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="1" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.11" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="0.0.0.0:2181,10.1.1.12:2181,10.1.1.13:2181" \
-d wurstmeister/kafka
10.1.1.12:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="2" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.12" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="10.1.1.11:2181,0.0.0.0:2181,10.1.1.13:2181" \
-d wurstmeister/kafka
10.1.1.13:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="3" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.13" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="10.1.1.11:2181,10.1.1.12:2181,0.0.0.0:2181" \
-d wurstmeister/kafka
When I run these commands, the first host always shows the "Broker may not be available" warning; the other two do not show it.
I tested with a Kafka producer too: if a host shows this problem, sending a message fails; if it does not, the message is sent successfully.
When I restart the container on 10.1.1.11, the problem is fixed there, but then 10.1.1.12 starts showing the same problem, and so on.
Everything I found about this problem says to set KAFKA_ADVERTISED_HOST_NAME to the Docker host, which I have already done.
I have no idea why this problem appears.
My Zookeeper command on 10.1.1.11:
sudo docker run --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 \
--restart always \
-e ZOO_MY_ID="1" \
-e ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=10.1.1.12:2888:3888 server.3=10.1.1.13:2888:3888" \
-d zookeeper:latest
Solution from the OP:
The problem was that the firewall blocked the Docker containers from connecting to the Docker host, so I couldn't telnet to the Docker host from inside a container.
The solution was to add a rule to iptables:
sudo iptables -I INPUT 1 -i <docker-bridge-interface> -j ACCEPT
I found the solution here: https://github.com/moby/moby/issues/24370
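As a side note (not from the linked issue), the interface name in that rule depends on which Docker network the containers use: the default bridge is usually docker0, while a user-defined bridge network shows up as br-<short network id>:
# Find the network ID of the bridge the containers are attached to
docker network ls
# List the matching bridge interfaces on the host
ip -o link show | grep -E 'docker0|br-'
# Example with the default bridge
sudo iptables -I INPUT 1 -i docker0 -j ACCEPT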
