Elasticsearch production docker command not working at all, and when it works, it exits - docker

I was following the tutorial on deploying Elasticsearch in production, and I came up with this command:
docker run \
-d \
--name es01 \
-p 9200:9200 \
-p 9300:9300 \
--restart=unless-stopped \
--ulimit nofile=65536:65536 \
-e "bootstrap.memory_lock=true" \
--ulimit memlock=-1:-1 \
-e "discovery.type=single-node" \
-e "cluster.name=es-cluster01" \
-e "node.name=es01-node" \
--mount source=elasticsearch,target=/usr/share/elasticsearch/data \
docker.elastic.co/elasticsearch/elasticsearch:8.6.1 \
-Xms4g -Xmx4g
This version of my docker command did not work because of the -Xms4g -Xmx4g at the end; the option was unrecognized. I read in the production documentation that I should not use -e ES_JAVA_OPTS="-Xms1g -Xmx1g", but I am forced to use it in my second version, since the alternative of passing -Xms4g -Xmx4g at the end of the command did not work. This is my second attempt:
docker run \
-d \
--name es01 \
-p 9200:9200 \
-p 9300:9300 \
--restart=unless-stopped \
--ulimit nofile=65536:65536 \
-e "bootstrap.memory_lock=true" \
--ulimit memlock=-1:-1 \
-e "discovery.type=single-node" \
-e "cluster.name=es-cluster01" \
-e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
-e "node.name=es01-node" \
--mount source=elasticsearch,target=/usr/share/elasticsearch/data \
docker.elastic.co/elasticsearch/elasticsearch:8.6.1
When I use this version, I get the following error, which prevents Elasticsearch from starting:
"log.level":"ERROR", "message":"exception during geoip databases update",
So, in the end I came up with this version, where I removed -e ES_JAVA_OPTS="-Xms1g -Xmx1g" :
docker run \
-d \
--name es01 \
-p 9200:9200 \
-p 9300:9300 \
--restart=unless-stopped \
--ulimit nofile=65536:65536 \
-e "bootstrap.memory_lock=true" \
--ulimit memlock=-1:-1 \
-e "discovery.type=single-node" \
-e "cluster.name=es-cluster01" \
-e "node.name=es01-node" \
--mount source=elasticsearch,target=/usr/share/elasticsearch/data \
docker.elastic.co/elasticsearch/elasticsearch:8.6.1
This version installs fine, without a problem: I can see the default password and everything, and the docker container goes online. The issue is that every 24 hours it unexpectedly exits, without a log to show me what is wrong.
I used the command docker container inspect es01 and there is no error in STATUS.
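Since the container's own logs show nothing, it may help to ask Docker why the last exit happened and to check the host for the kernel OOM killer; with no explicit heap set, the JVM sizes its heap from available memory, so an out-of-memory kill is a plausible suspect. A minimal diagnostic sketch, assuming the container is still named es01:

```shell
# Show the last exit code, whether the kernel OOM-killed the container, and when it stopped
docker inspect es01 --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}} FinishedAt={{.State.FinishedAt}}'

# Container logs survive a restart as long as the container is not removed
docker logs --tail 200 es01

# On the host, look for OOM-killer activity around the time of the exit
dmesg -T | grep -i -E 'oom|killed process'
```

If OOMKilled is true, capping the heap explicitly (for example -e ES_JAVA_OPTS="-Xms4g -Xmx4g" on a host with comfortably more than 8 GB of RAM) is the usual next step.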

Related

run elasticsearch and kibana with docker bootstrap checks failed ERROR

This is my first time using Elasticsearch. I used this link, but when I run this command
docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -t docker.elastic.co/elasticsearch/elasticsearch:8.6.1
I got this error
ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
Try to disable swapping.
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_disable_swapping
EDIT:
To use it:
docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 -t docker.elastic.co/elasticsearch/elasticsearch:8.6.1

I get an error when I try to run Pihole on Docker in Windows

I have a problem with Docker. I wanted to run this command in cmd and also Windows-Powershell.
docker run -d --name pihole -e ServerIP=192.168.178.20 -e WEBPASSWORD=XXX -e TZ=Europe/Berlin -e DNS1=172.17.0.1 -e DNS2=1.1.1.1 -e DNS3=192.168.178.1 - p 80:80 -p 53:53/tcp -p 443:443 --restart=unless-stopped pihole/pihole:latest
I typed in the right password; I just censored it here. Every time I run this command I get the report:
invalid reference format
Can any of you help me? I don't know what to do.
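"invalid reference format" is Docker complaining about the image name, and it usually means argument parsing went wrong earlier on the line. Here the stray space in "- p 80:80" makes Docker stop parsing flags and treat the lone "-" as the image reference, which is invalid. A corrected sketch with the same values (password still redacted; note that Pi-hole typically also needs 53/udp published):

```shell
docker run -d --name pihole \
  -e ServerIP=192.168.178.20 \
  -e WEBPASSWORD=XXX \
  -e TZ=Europe/Berlin \
  -e DNS1=172.17.0.1 -e DNS2=1.1.1.1 -e DNS3=192.168.178.1 \
  -p 80:80 -p 53:53/tcp -p 53:53/udp -p 443:443 \
  --restart=unless-stopped \
  pihole/pihole:latest
```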

Getting "/opt/jboss/tools/docker-entrypoint.sh: line 165: DB_ADDR: unbound variable" while creating keycloak container

I am trying to create a docker container for keycloak. But when I try the below command in docker quickstart terminal:
docker run -it -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
I am getting the below error:
/opt/jboss/tools/docker-entrypoint.sh: line 165: DB_ADDR: unbound variable
Upon researching a bit, I came to know that I need to pass DB_ADDR in the command also. So I tried the below command now:
docker run -it -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -e DB_ADDR=keycloak-db -e DB_VENDOR=h2 jboss/keycloak
But I am still getting the same error. I don't really know what I am doing wrong; if anyone can help me here, thank you all.
Update the script as below:
docker run \
-p 8080:8080 \
-e KEYCLOAK_USER=admin \
-e DB_VENDOR=h2 \
-e KEYCLOAK_PASSWORD=admin \
-it jboss/keycloak:latest

Kafka Tool connection issues w/ kafka image running on Docker for Windows

I followed the example provided here: https://codeopinion.com/getting-started-apache-kafka-with-net-core/
and ran the following commands:
docker network create kafka
docker run -d --network=kafka --name=zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 -e ZOOKEEPER_TICK_TIME=2000 -p 2181:2181 confluentinc/cp-zookeeper
docker run -d --network=kafka --name=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 -p 9092:9092 confluentinc/cp-kafka
I am able to use Kafka Tool to connect to the instance, but when a message is sent, nothing comes through.
I read up on advertised listeners and did the following:
docker run -d --network=kafka --name=bf-kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=bf-zookeeper:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_LISTENERS=PLAINTEXT://:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 -e KAFKA_AUTO_CREATE_TOPICS_ENABLE=false confluentinc/cp-kafka
NOTE: I am setting KAFKA_AUTO_CREATE_TOPICS_ENABLE to false because my message format is {api}.{domain}.{action}.{version}.
ONE THING TO NOTE:
When I ran the commands on a Mac, everything worked fine; Kafka Tool was able to receive the message.
What am I doing wrong?
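A single advertised listener cannot serve both audiences here: clients inside the Docker network need to reach the broker by its container name, while Kafka Tool on the Windows host needs localhost. A common pattern with the confluentinc/cp-kafka image is two named listeners (the names INTERNAL/EXTERNAL and port 29092 below are my choices, not from the original commands):

```shell
docker run -d --network=kafka --name=kafka -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT \
  -e KAFKA_LISTENERS=INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka:29092,EXTERNAL://localhost:9092 \
  -e KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL \
  confluentinc/cp-kafka
```

Containers on the kafka network then bootstrap against kafka:29092, while tools on the host use localhost:9092; the broker hands each client back the address it can actually reach.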

How can I set up hyperledger fabric with multiple hosts using Docker?

I work on Hyperledger Fabric v1.0 and would like to make the Getting Started setup work on multiple hosts. For now, 2 would be great.
Here is what I want to do:
Host1: start an orderer and 2 peers
Host2: start 1 peer
Host2: A client creates a channel (using the channel_test.sh updated with the correct host IPs) and joins all 3 peers
Host1: Call the deploy.js of the given example to deploy the chaincode
I have a problem on the 3rd step. I think the channel creation works, but in my peer logs I see the same warnings on all 3 peers:
Remote endpoint claims to be a different peer, expected [host1 IP:8051] but got [172.17.0.4:7051]
Failed obtaining connection for 172.31.9.126:8051, PKIid:[49 55 50 ...] reason: Authentication failure
It looks like they can't communicate with each other. Any idea where the problem is?
I still tried step 4, but I can't deploy unless I remove host2's peer1 from the config.json. And even then, I can only query from host1's peer0, not host1's peer2.
Here are the commands I use to set up my network:
Host1: Orderer
docker run --rm -it --name orderer -p 8050:7050 \
-e ORDERER_GENERAL_LEDGERTYPE=ram \
-e ORDERER_GENERAL_BATCHTIMEOUT=10s \
-e ORDERER_GENERAL_BATCHSIZE_MAXMESSAGECOUNT=10 \
-e ORDERER_GENERAL_MAXWINDOWSIZE=1000 \
-e ORDERER_GENERAL_ORDERERTYPE=solo \
-e ORDERER_GENERAL_LOGLEVEL=debug \
-e ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
-e ORDERER_GENERAL_LISTENPORT=7050 \
-e ORDERER_RAMLEDGER_HISTORY_SIZE=100 \
sfhackfest22017/fabric-orderer:x86_64-0.7.0-snapshot-c7b3fe0 orderer
Host1: Peer0
docker run --rm -it --name peer0 -p 8051:7051 -p 8053:7053 \
-v /var/run/:/host/var/run/ -v $BASE_DIR/tmp/peer0:/etc/hyperledger/fabric/msp/sampleconfig \
-e CORE_PEER_ADDRESSAUTODETECT=true \
-e CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock \
-e CORE_LOGGING_LEVEL=DEBUG \
-e CORE_PEER_NETWORKID=peer0 \
-e CORE_NEXT=true \
-e CORE_PEER_ENDORSER_ENABLED=true \
-e CORE_PEER_ID=peer0 \
-e CORE_PEER_PROFILE_ENABLED=true \
-e CORE_PEER_COMMITTER_LEDGER_ORDERER=$ORDERER_IP:7050 \
-e CORE_PEER_GOSSIP_ORGLEADER=true \
-e CORE_PEER_GOSSIP_IGNORESECURITY=true \
sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 peer node start --peer-defaultchain=false
Host1: Peer2
docker run --rm -it --name peer2 -p 8055:7051 -p 8057:7053 \
-v /var/run/:/host/var/run/ -v $BASE_DIR/tmp/peer0:/etc/hyperledger/fabric/msp/sampleconfig \
-e CORE_PEER_ID=peer2 \
[Other parameters are the same as Peer0] \
sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 peer node start --peer-defaultchain=false
Host2: Peer1
docker run --rm -it --name peer1 -p 8051:7051 \
-v /var/run/:/host/var/run/ -v $BASE_DIR/tmp/peer0:/etc/hyperledger/fabric/msp/sampleconfig \
-e CORE_PEER_ID=peer1 \
[Other parameters are the same as Peer0] \
sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 peer node start --peer-defaultchain=false
Host2: Cli
docker run --rm -it --name cli \
-v /var/run/:/host/var/run/ -v $BASE_DIR/tmp/peer3:/etc/hyperledger/fabric/msp/sampleconfig -v $BASE_DIR/src/github.com/example_cc/example_cc.go:/opt/gopath/src/github.com/hyperledger/fabric/examples/example_cc.go -v $BASE_DIR/channel_test.sh:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel_test.sh \
--workdir /opt/gopath/src/github.com/hyperledger/fabric/peer \
-e GOPATH=/opt/gopath \
-e CORE_PEER_ADDRESSAUTODETECT=true \
-e CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock \
-e CORE_LOGGING_LEVEL=DEBUG \
-e CORE_NEXT=true \
-e CORE_PEER_ID=cli \
-e CORE_PEER_ENDORSER_ENABLED=true \
-e CORE_PEER_COMMITTER_LEDGER_ORDERER=$ORDERER_IP:8050 \
-e CORE_PEER_ADDRESS=$PEER0_IP:8051 \
sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 ./channel_test.sh
If you need more information feel free to ask.
Note: I'm not very familiar with docker, any improvement/advice on how I use it is welcome :)
I found a solution that seems to work using docker swarm mode.
Initialize a swarm (see the docker swarm documentation for more information)
Join the swarm with the other host as a manager
Create a network ("hyp-net" in my case)
docker network create --attachable --driver overlay hyp-net
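The swarm setup steps above can be sketched as follows (the IP address is a placeholder for host1's address, and the join token is printed by the init command or by docker swarm join-token):

```shell
# On host1: initialize the swarm, advertising an address host2 can reach
docker swarm init --advertise-addr 192.168.0.10

# On host1: print the command host2 must run to join as a manager
docker swarm join-token manager

# On host2: run the printed join command, e.g.
docker swarm join --token <TOKEN> 192.168.0.10:2377

# On either manager: create the attachable overlay network
docker network create --attachable --driver overlay hyp-net
```

The --attachable flag matters here: without it, standalone docker run containers cannot join an overlay network, only swarm services can.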
Changes I had to do:
Linked the containers with the --link docker parameter
Added the --network docker parameter (--network=hyp-net)
Added a new environment variable to the docker run commands used:
-e CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyp-net
Here are the commands that work for me:
Orderer
docker run --rm -it --network="hyp-net" --name orderer -p 8050:7050 \
-e ORDERER_GENERAL_LEDGERTYPE=ram \
-e ORDERER_GENERAL_BATCHTIMEOUT=10s \
-e ORDERER_GENERAL_BATCHSIZE_MAXMESSAGECOUNT=10 \
-e ORDERER_GENERAL_MAXWINDOWSIZE=1000 \
-e ORDERER_GENERAL_ORDERERTYPE=solo \
-e ORDERER_GENERAL_LOGLEVEL=debug \
-e ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
-e ORDERER_GENERAL_LISTENPORT=7050 \
-e ORDERER_RAMLEDGER_HISTORY_SIZE=100 \
-e CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyp-net \
sfhackfest22017/fabric-orderer:x86_64-0.7.0-snapshot-c7b3fe0 orderer
Peer0
docker run --rm -it --link orderer:orderer --network="hyp-net" --name peer0 -p 8051:7051 -p 8053:7053 \
-v /var/run/:/host/var/run/ -v $BASE_DIR/tmp/peer0:/etc/hyperledger/fabric/msp/sampleconfig \
-e CORE_PEER_ADDRESSAUTODETECT=true \
-e CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock \
-e CORE_LOGGING_LEVEL=DEBUG \
-e CORE_PEER_NETWORKID=peer0 \
-e CORE_NEXT=true \
-e CORE_PEER_ENDORSER_ENABLED=true \
-e CORE_PEER_ID=peer0 \
-e CORE_PEER_PROFILE_ENABLED=true \
-e CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 \
-e CORE_PEER_GOSSIP_ORGLEADER=true \
-e CORE_PEER_GOSSIP_IGNORESECURITY=true \
-e CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyp-net \
sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 peer node start --peer-defaultchain=false
Peer1
docker run --rm -it --network="hyp-net" --link orderer:orderer --link peer0:peer0 [...] -e CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyp-net sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 peer node start --peer-defaultchain=false
Peer2
docker run --rm -it --network="hyp-net" --link orderer:orderer --link peer0:peer0 --link peer1:peer1 [...] -e CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyp-net sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 peer node start --peer-defaultchain=false
Cli
docker run --rm -it --network="hyp-net" --link orderer:orderer --link peer0:peer0 --link peer1:peer1 --link peer2:peer2 [...] -e CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyp-net sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 ./channel_test.sh
With this, I am able to deploy, invoke and query my chaincode.
I was able to host hyperledger fabric network on multiple machines using docker swarm mode. Swarm mode provides a network across multiple hosts/machines for the communication of the fabric network components.
This post explains the deployment process. It creates a swarm network, and all the other machines join the network. https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
It works with Fabric 1.0+
Check the server names in fabric-samples\first-network\connection-org*.yaml and fabric-samples\first-network\connection-org*.json. These are templated and generated from ccp-template.json and ccp-template.yaml.
Also, have entries for peers in fabric-samples\first-network\crypto-config.yaml under 'Specs'.