How to add a data volume to the Elasticsearch docker-compose.yml

I have created a docker-compose.yml for Elasticsearch and Kibana as shown below. The docker-compose.yml works fine, but starts with no index and no data. I remember that in one of my database-specific docker-compose.yml files I seeded data as shown below, placing my SQL scripts inside the docker/db folder:
services:
  postgres:
    image: postgres:9.6
    volumes:
      - ./docker/db:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_DB: some_db
    ports:
      - 5432:5432
Now the question: similarly to the above, how do I specify the ES index and data, and what file extension should such an ES script have?
To be more specific, I want the index and data below to exist when Elasticsearch starts:
PUT test
POST _bulk
{ "index" : { "_index" : "test"} }
{ "name" : "A" }
{ "index" : { "_index" : "test"} }
{ "name" : "B" }
{ "index" : { "_index" : "test"} }
{ "name" : "C" }
{ "index" : { "_index" : "test"} }
{ "name" : "D" }
docker-compose.yml
version: '3.7'
services:

  # Elasticsearch Docker Images: https://www.docker.elastic.co/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

volumes:
  elasticsearch-data:
    driver: local
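Unlike the Postgres image, the Elasticsearch image has no /docker-entrypoint-initdb.d mechanism, so there is no magic file extension it will pick up. One common workaround is a one-shot seeding service that waits for the cluster and replays the requests above over HTTP. This is only a sketch: the service name es-seed and the file ./docker/es/bulk.ndjson (the bulk lines from above, with a trailing newline, which the _bulk API requires) are my own choices, not anything the image mandates.

```yaml
  # Hypothetical one-shot seeder: runs once, creates the index, loads the data, exits.
  es-seed:
    image: curlimages/curl:latest
    depends_on:
      - elasticsearch
    volumes:
      - ./docker/es/bulk.ndjson:/bulk.ndjson:ro
    entrypoint: /bin/sh
    command:
      - -c
      - |
        until curl -s http://elasticsearch:9200 >/dev/null; do sleep 2; done
        curl -s -XPUT "http://elasticsearch:9200/test"
        curl -s -XPOST "http://elasticsearch:9200/_bulk" \
             -H "Content-Type: application/x-ndjson" --data-binary @/bulk.ndjson
```

Since the elasticsearch-data named volume persists the data, the seed effectively matters only on the first docker-compose up; later starts reuse the stored index.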

Related

Broker not available (loadMetadataForTopics) - kafka-node consumer

I use kafka-node in my Node.js code. I have an API for request/response.
First, I make a request to http://localhost:3000/number1. Then I start a consumer which consumes messages from a Kafka topic, from one partition, "receive", and I try to find the message with id = number1. Then I want to return a response to the user with that value. So I create a consumer like the following:
options = {
    kafkaHost: 'kafka:9092'
}
const client_node = new kafka_node.KafkaClient(options);
var Consumer = kafka_node.Consumer
var consumer_node = new Consumer(
    client_node,
    [
        { topic: 'receive.kafka.entities', partition: 0, offset: 0 }
    ],
    {
        autoCommit: false,
        fetchMaxWaitMs: 100,
        fromOffset: 'earliest',
        groupId: 'kafka-node-group',
        asyncPush: false,
    }
);
const read = (callback) => {
    let ret = "1"
    consumer_node.on('message', async function (message) {
        var parse1 = JSON.parse(message.value)
        var parse2 = JSON.parse(parse1.payload)
        var id = parse2.fullDocument.id
        var lastOffset = message.highWaterOffset - 1
        // check if there is a query
        if (lastOffset <= message.offset || ret !== "1") {
            return callback(ret)
        }
        else if (id === back2) {
            ret = parse2.fullDocument
        }
    });
}
let error = {
    id: "The entity " + back2 + " not found "
}
read((data) => {
    consumer_node.close(true, function (message) {
        if (data != "1") {
            res.status(200).send(data)
        }
        else {
            res.status(404).send(error)
        }
    })
})
If I make continuous requests, after the first request I get this response:
{
"message": "Broker not available (loadMetadataForTopics)"
}
My first docker-compose file is the following:
zookeeper:
  image: confluentinc/cp-zookeeper:5.4.1
  container_name: stellio-zookeeper
  ports:
    - 2181:2181
  environment:
    ZOOKEEPER_SERVER_ID: 1
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
  networks:
    - default
    - localnet

kafka:
  image: confluentinc/cp-enterprise-kafka:latest
  container_name: kafka
  ports:
    - 9092:9092
    - 9101:9101
  environment:
    KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://localhost:29092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_BROKER_ID: 1
    KAFKA_LOG4J_ROOT_LOGLEVEL: INFO
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: localhost
    CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka:9092
    CONFLUENT_METRICS_ENABLE: 'true'
    CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
    KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
  depends_on:
    - zookeeper
  networks:
    - default
    - localnet
    - my-proxy-net-kafka

networks:
  default: # this network (app2)
    driver: bridge
  my-proxy-net-kafka:
    external:
      name: kafka_network
My second docker-compose file:
app:
  container_name: docker-node
  hostname: docker-node
  restart: always
  build: .
  command: nodemon /usr/src/app/index.js
  networks:
    - default
    - proxynet-kafka
  ports:
    - '3000:3000'
  volumes:
    - .:/usr/src/app

networks:
  default:
    driver: bridge
  proxynet-kafka:
    name: kafka_network
Why does that happen? Can you help me fix it?
[ If you want more information feel free to ask me :) ]
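One mismatch stands out in the first compose file (an observation on the config above, not a confirmed diagnosis): the broker advertises EXTERNAL://localhost:29092, but port 29092 is never published, so any client on the host that receives that listener back in a metadata response cannot reconnect; containers on kafka_network should keep using the INTERNAL listener kafka:9092. A sketch of a consistent ports: section:

```yaml
kafka:
  ports:
    - 9092:9092     # INTERNAL listener; containers reach it as kafka:9092
    - 29092:29092   # EXTERNAL listener advertised as localhost:29092; must be published for host clients
    - 9101:9101     # JMX
```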

Error occured with redis HMSEET, dial tcp :6379: connect: connection refused

I have a dockerized back-end with a golang gin server, PostgreSQL, and Redis.
Everything starts correctly with this docker-compose.yaml file:
version: '3.9'
services:
  postgresql:
    image: 'postgres:13.1-alpine'
    volumes:
      - data:/var/lib/postgresql/data
    env_file:
      - ./env/postgre.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    ports:
      - '5432:5432'
  server:
    build: ./server
    ports:
      - '8000:8000'
    volumes:
      - ./server:/app
    depends_on:
      - postgresql
  redis:
    image: "redis"
    ports:
      - "6379:6379"
    volumes:
      - $PWD/redis-data:/var/lib/redis
volumes:
  data:
Then I initialize Redis in the main func:
func main() {
    util.InitializeRedis()
    (...)

// InitializeRedis func
func newPool() *redis.Pool {
    return &redis.Pool{
        MaxIdle:     3,
        IdleTimeout: 240 * time.Second,
        DialContext: func(context.Context) (redis.Conn, error) {
            return redis.Dial("tcp", ":6379")
        },
    }
}

var (
    pool *redis.Pool
)

func InitializeRedis() {
    flag.Parse()
    pool = newPool()
}
It doesn't print any error, but I cannot get a connection with pool.Get in another function:
// Handle "/redis" for test
router.GET("/redis", util.ServeHome)

// ServeHome func
func ServeHome(ctx *gin.Context) {
    conn := pool.Get()
    defer conn.Close()
    var p1 struct {
        Title  string `redis:"title" json:"title"`
        Author string `redis:"author" json:"author"`
        Body   string `redis:"body" json:"body"`
    }
    p1.Title = "Example"
    p1.Author = "Gary"
    p1.Body = "Hello"
    if _, err := conn.Do("HMSET", redis.Args{}.Add("id1").AddFlat(&p1)...); err != nil {
        log.Fatalf("Error occured with redis HMSEET, %v", err) // Error in console is from here
        return
    }
    (...)
And when I try to access /redis with Insomnia it shows Error: Server returned nothing (no headers, no data), and the console logs: Error occured with redis HMSEET, dial tcp :6379: connect: connection refused.
I couldn't find any article that solves this problem for me, so I'd appreciate any help.
Since you're using docker-compose, Redis won't be available on :6379; instead it will be available on the hostname redis.
I think you'll need to update your code to the following:
redis.Dial("tcp","redis:6379")
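To go with that change, the compose file can also be made a bit more robust. This is only a sketch, not part of the original answer: the healthcheck uses redis-cli, which ships in the official redis image, and depends_on only orders container startup, so the app should still retry its first dial.

```yaml
  redis:
    image: "redis"
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  server:
    build: ./server
    depends_on:
      - postgresql
      - redis   # start order only, not readiness
```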

Redirect DNS to different port with traefik

I'm trying to build a monitoring stack with traefik, grafana, zabbix, gotify, etc.
I have a domain name, domain.tld.
In my docker-compose I have some services on distinct ports (grafana, for example), but also some services on the same port (gotify, zabbix).
I want to route zabbix.domain.tld, grafana.domain.tld, and so on from domain.tld to the corresponding containers, with SSL.
It works, but not entirely. If I put in my address bar:
grafana.domain.tld -> 404 error, with SSL redirection
grafana.domain.tld:3000 -> OK
I think I'm a little (or completely?) lost in my many modifications; the docs alone haven't been enough for me.
So, my docker-compose:
version: '3.5'

networks:
  traefik_front:
    external: true

services:
  traefik:
    image: traefik
    command: --api --docker
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "${TRAEFIK_PATH}/traefik.toml:/etc/traefik/traefik.toml"
      - "${TRAEFIK_PATH}/acme.json:/acme.json"
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "traefik.frontend.rule=Host:traefik.${DOMAIN}"
      - "treafik.port=8080"
      - "traefik.enable=true"
      - "traefik.backend=traefik"
      - "traefik.docker.network=traefik_front"
      #- "traefik.frontend.entryPoints=http,https"
    networks:
      - traefik_front
  gotify:
    image: gotify/server
    container_name: gotify
    volumes:
      - "${GOTIFY_PATH}:/app/data"
    env_file:
      - env/.env_gotify
    labels:
      - "traefik.frontend.rule=Host:push.${DOMAIN}"
      - "traefik.port=80"
      - "traefik.enable=true"
      - "traefik.backend=gotify"
      - "traefik.docker.network=traefik_front"
    networks:
      - traefik_front
      - default
  grafana:
    image: grafana/grafana
    container_name: grafana
    volumes:
      - "${GF_PATH}:/var/lib/grafana"
    env_file:
      - env/.env_grafana
    labels:
      - "traefik.frontend.rule=Host:grafana.${DOMAIN}"
      - "traefik.port=3000"
      - "traefik.enable=true"
      - "traefik.backend=grafana"
      - "traefik.docker.network=traefik_front"
    networks:
      - traefik_front
      - default
  zabbix-server:
    image: zabbix/zabbix-server-mysql:ubuntu-4.0-latest
    volumes:
      - "${ZABBIX_PATH}/alertscripts:/usr/lib/zabbix/alertscripts:ro"
      - "${ZABBIX_PATH}/externalscripts:/usr/lib/zabbix/externalscripts:ro"
      - "${ZABBIX_PATH}/modules:/var/lib/zabbix/modules:ro"
      - "${ZABBIX_PATH}/enc:/var/lib/zabbix/enc:ro"
      - "${ZABBIX_PATH}/ssh_keys:/var/lib/zabbix/ssh_keys:ro"
      - "${ZABBIX_PATH}/mibs:/var/lib/zabbix/mibs:ro"
      - "${ZABBIX_PATH}/snmptraps:/var/lib/zabbix/snmptraps:ro"
    links:
      - mysql-server:mysql-server
    env_file:
      - env/.env_zabbix_db_mysql
      - env/.env_zabbix_srv
    user: root
    depends_on:
      - mysql-server
      - zabbix-snmptraps
    labels:
      - "traefik.backend=zabbix-server"
      - "traefik.port=10051"
  zabbix-web-apache-mysql:
    image: zabbix/zabbix-web-apache-mysql:ubuntu-4.0-latest
    links:
      - mysql-server:mysql-server
      - zabbix-server:zabbix-server
    volumes:
      - "${ZABBIX_PATH}/ssl/apache2:/etc/ssl/apache2:ro"
    env_file:
      - env/.env_zabbix_db_mysql
      - env/.env_zabbix_web
    user: root
    depends_on:
      - mysql-server
      - zabbix-server
    labels:
      - "traefik.frontend.rule=Host:zabbix.${DOMAIN}"
      - "traefik.port=80"
      - "traefik.enable=true"
      - "traefik.backend=zabbix-web"
      - "traefik.docker.network=traefik_front"
    networks:
      - traefik_front
      - default
  zabbix-agent:
    image: zabbix/zabbix-agent:ubuntu-4.0-latest
    ports:
      - "10050:10050"
    volumes:
      - "${ZABBIX_PATH}/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro"
      - "${ZABBIX_PATH}/modules:/var/lib/zabbix/modules:ro"
      - "${ZABBIX_PATH}/enc:/var/lib/zabbix/enc:ro"
      - "${ZABBIX_PATH}/ssh_keys:/var/lib/zabbix/ssh_keys:ro"
    links:
      - zabbix-server:zabbix-server
    env_file:
      - env/.env_zabbix_agent
    user: root
    networks:
      - default
  zabbix-snmptraps:
    image: zabbix/zabbix-snmptraps:ubuntu-4.0-latest
    ports:
      - "162:162/udp"
    volumes:
      - "${ZABBIX_PATH}/snmptraps:/var/lib/zabbix/snmptraps:rw"
    user: root
    networks:
      - default
  mysql-server:
    image: mysql:5.7
    command: [mysqld, --character-set-server=utf8, --collation-server=utf8_bin]
    volumes:
      - /var/lib/mysql:/var/lib/mysql:rw
    env_file:
      - env/.env_zabbix_db_mysql
    labels:
      - "traefik.enable=false"
    user: root
    networks:
      - default
And my traefik.toml:
# WEBUI
[web]
entryPoint = "dashboard"
dashboard = true
address = ":8080"
usersFile = "/etc/docker/traefik/.htpasswd"

logLevel = "ERROR"

# Force HTTPS
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.dashboard]
    address = ":8080"
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]

[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
exposedbydefault = false
domain = "domain.tld"
network = "traefik_front"

# Let's Encrypt
[acme]
email = "mail@mail.fr"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
onDemand = false
  [acme.httpChallenge]
    entryPoint = "http"
    OnHostRule = true

[[acme.domains]]
  main = "domain.tld"
[[acme.domains]]
  main = "domain.tld"
[[acme.domains]]
  main = "domain.tld"
[[acme.domains]]
  main = "domain.tld"
I've done something similar, and it would look like this in your setup.
docker-compose.yml
services:
  traefik:
    labels:
      - "traefik.port=8080"
      - "traefik.enable=true"
      - "traefik.backend=traefik"
      - "traefik.docker.network=traefik_front"
      - "traefik.frontend.rule=Host:traefik.${DOMAIN}"
      - "traefik.webservice.frontend.entryPoints=https"
  zabbix-web-apache-mysql:
    labels:
      - "traefik.port=80"
      - "traefik.enable=true"
      - "traefik.backend=zabbix-web"
      - "traefik.passHostHeader=true"
      - "traefik.docker.network=traefik_front"
      - "traefik.frontend.rule=Host:zabbix.${DOMAIN}"
  grafana:
    labels:
      - "traefik.port=3000"
      - "traefik.enable=true"
      - "traefik.backend=grafana"
      - "traefik.passHostHeader=true"
      - "traefik.docker.network=traefik_front"
      - "traefik.frontend.rule=Host:grafana.${DOMAIN}"
And this is the way my traefik.toml is configured:
InsecureSkipVerify = true ## This is optional

## Force HTTPS
[entryPoints]
  [entryPoints.http]
    passHostHeader = true
    address = ":80"
    [entryPoints.http.forwardedHeaders]
      insecure = true
    [entryPoints.http.proxyProtocol]
      insecure = true
    ## This seems to be an absolute requirement for redirect
    ## ...but it redirects every request to https
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.traefik]
    address = ":8080"
    [entryPoints.traefik.auth.basic]
      # the "user" password is the MD5 hash of the word "pass"
      users = ["user:$apr1$.LWU4fEi$4YipxeuXs5T0xulH3S7Kb."]
  [entryPoints.https]
    passHostHeader = true
    address = ":443"
    [entryPoints.https.tls] ## This seems to be an absolute requirement
    [entryPoints.https.forwardedHeaders]
      insecure = true
    [entryPoints.https.proxyProtocol]
      insecure = true
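One more thing worth double-checking (my assumption from the four identical [[acme.domains]] blocks in the question's traefik.toml, where the subdomains may simply have been redacted): each certificate domain should normally be listed once, with its subdomains as SANs, rather than repeated. A sketch:

```toml
[[acme.domains]]
  main = "domain.tld"
  sans = ["grafana.domain.tld", "zabbix.domain.tld", "push.domain.tld", "traefik.domain.tld"]
```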

Serialization Error using Corda Docker Image

I get the following error for each node when I run docker-compose up. I configured the network parameters myself, as well as the nodes, without using the network bootstrapper.
[ERROR] 08:07:48+0000 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Serialization scheme ([6D696E696D756D], P2
P) not supported. [errorCode=1e6peth, moreInformationAt=https://errors.corda.net/OS/4.0/1e6peth]
I have tried changing the properties in the network-parameters file, so far without success.
Here are my config files:
myLegalName : "O=Notary, L=London, C=GB"
p2pAddress : "localhost:10008"
devMode : true
notary : {
    validating : false
}
rpcSettings = {
    address : "notary:10003"
    adminAddress : "notary:10004"
}
rpcUsers = [
    {
        user = "user"
        password = "test"
        permissions = [
            ALL
        ]
    }
]
detectPublicIp : false

myLegalName : "O=PartyA, L=London, C=GB"
p2pAddress : "localhost:10005"
devMode : true
rpcSettings = {
    address : "partya:10003"
    adminAddress : "partya:10004"
}
rpcUsers = [
    {
        user = corda
        password = corda_initial_password
        permissions = [
            ALL
        ]
    }
]
detectPublicIp : false

myLegalName : "O=PartyB, L=London, C=GB"
p2pAddress : "localhost:10006"
devMode : true
rpcSettings = {
    address : "partyb:10003"
    adminAddress : "partyb:10004"
}
rpcUsers = [
    {
        user = corda
        password = corda_initial_password
        permissions = [
            ALL
        ]
    }
]
detectPublicIp : false
as well as my network-parameters file and my docker-compose.yml file:
minimumPlatformVersion=4
notaries=[NotaryInfo(identity=O=Notary, L=London, C=GB, validating=false)]
maxMessageSize=10485760
maxTransactionSize=524288000
whitelistedContractImplementations {
}
eventHorizon="30 days"
epoch=1
version: '3.7'
services:
  Notary:
    image: corda/corda-zulu-4.0:latest
    container_name: Notary
    networks:
      - corda
    volumes:
      - ./nodes/notary_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
  PartyA:
    image: corda/corda-zulu-4.0:latest
    container_name: PartyA
    networks:
      - corda
    volumes:
      - ./nodes/partya_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
      - ./build/libs/:/opt/corda/cordapps
  PartyB:
    image: corda/corda-zulu-4.0:latest
    container_name: PartyB
    networks:
      - corda
    volumes:
      - ./nodes/partyb_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
      - ./build/libs/:/opt/corda/cordapps
networks:
  corda:
Many thanks in advance for your help!
It looks like it is indeed an issue with a missing serialization scheme.
Also, in our most recent Corda 4.4 release, we published an official image of the containerized Corda node.
Feel free to check out our recent guide on how to start a Corda node with Docker: https://medium.com/corda/containerising-corda-with-corda-docker-image-and-docker-compose-af32d3e8746c

How to resolve a service name to an IP in docker swarm mode for hyperledger composer?

I am using docker swarm mode for my Hyperledger Composer setup, and I am new to Docker. My Fabric network runs okay. When I use service names in the connection.json file, installing the network fails with "REQUEST_TIMEOUT"; but when I use the host's IP address instead of the service names, everything works fine. So how can I resolve the service/container names?
Here is my peer configuration:
peer1:
  deploy:
    replicas: 1
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
  hostname: peer1.eprocure.org.com
  image: hyperledger/fabric-peer:$ARCH-1.1.0
  networks:
    hyperledger-ov:
      aliases:
        - peer1.eprocure.org.com
  environment:
    - CORE_LOGGING_LEVEL=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_PEER_ID=peer1.eprocure.org.com
    - CORE_PEER_ADDRESS=peer1.eprocure.org.com:7051
    - CORE_PEER_LOCALMSPID=eProcureMSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledger-ov
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.eprocure.org.com:7051
    - CORE_PEER_ENDORSER_ENABLED=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
    - CORE_PEER_PROFILE_ENABLED=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  volumes:
    - /var/run/:/host/var/run/
    - /export/composer/genesis-folder:/etc/hyperledger/configtx
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/peers/peer1.eprocure.org.com/msp:/etc/hyperledger/peer/msp
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/users:/etc/hyperledger/msp/users
  ports:
    - 8051:7051
    - 8053:7053
And here is my current connection.json with IP
"peers": {
    "peer0.eprocure.org.com": {
        "url": "grpc://192.168.0.147:7051",
        "eventUrl": "grpc://192.168.0.147:7053"
    },
    "peer1.eprocure.org.com": {
        "url": "grpc://192.168.0.147:8051",
        "eventUrl": "grpc://192.168.0.147:8053"
    },
    "peer2.eprocure.org.com": {
        "url": "grpc://192.168.0.147:9051",
        "eventUrl": "grpc://192.168.0.147:9053"
    }
},
I have tried the following before:
"peers": {
    "peer0.eprocure.org.com": {
        "url": "grpc://peers_peer0:7051",
        "eventUrl": "grpc://peers_peer0:7053"
    },
    "peer1.eprocure.org.com": {
        "url": "grpc://peers_peer1:8051",
        "eventUrl": "grpc://peers_peer2:8053"
    },
    "peer2.eprocure.org.com": {
        "url": "grpc://peers_peer2:9051",
        "eventUrl": "grpc://peers_peer2:9053"
    }
}
But this doesn't work.
Can anyone please let me know how I can solve this?
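For what it's worth, one thing the compose file above suggests (a sketch on my part, not a confirmed fix): each peer declares a network alias such as peer1.eprocure.org.com on the hyperledger-ov overlay network, while names like peers_peer1 only exist if the stack is actually deployed under that prefix. A container attached to the same overlay network would normally address a peer by its alias and the container-internal ports (7051/7053 here), not the published host ports (8051/8053):

```json
"peers": {
    "peer1.eprocure.org.com": {
        "url": "grpc://peer1.eprocure.org.com:7051",
        "eventUrl": "grpc://peer1.eprocure.org.com:7053"
    }
}
```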