I would like to run OrientDB in distributed mode with at least 2 nodes in a cluster. Is setting the distributed flag to true enough, or is some more configuration needed?
My docker-compose file looks like this:
node1:
  image: orientdb:latest
  ports:
    - "2424:2424"
    - "2480:2480"
  environment:
    ORIENTDB_ROOT_PASSWORD: 'pwd'
    ORIENTDB_NODE_NAME: node1
  volumes:
    - /orientdb/config:/opt/orientdb/config
    - /orientdb/databases:/orientdb/databases
    - /orientdb/backup:/orientdb/backup
    - ./data:/orientdb/bin/data
  command: /orientdb/bin/server.sh -Ddistributed=true
I have created 2 services, one for each node, each with a separate set of configs:
version: '3'
services:
  node1:
    image: orientdb:latest
    entrypoint: /orientdb/bin/server.sh -Ddistributed=true
    volumes:
      - /orientdb/config:/orientdb/config
      - /orientdb/databases:/orientdb/databases
      - /orientdb/backup:/orientdb/backup
      - ./data:/orientdb/bin/data
    environment:
      ORIENTDB_ROOT_PASSWORD: 'pwd'
      ORIENTDB_NODE_NAME: node1
    ports:
      - "2424:2424"
      - "2480:2480"
  node2:
    image: orientdb:latest
    entrypoint: /orientdb/bin/server.sh -Ddistributed=true
    volumes:
      - /orientdb/config2:/orientdb/config
      - /orientdb/databases2:/orientdb/databases
      - /orientdb/backup2:/orientdb/backup
      - ./data:/orientdb/bin/data
    environment:
      ORIENTDB_ROOT_PASSWORD: 'pwd'
      ORIENTDB_NODE_NAME: node2
    depends_on:
      - node1
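One thing worth knowing: depends_on only orders container startup; it does not wait until node1's server is actually ready before node2 tries to join the cluster. Below is a minimal sketch of a healthcheck-gated variant; it assumes a Compose version that honors the condition form (legacy version: '3' swarm deployments ignore it) and that wget is available in the image:

services:
  node1:
    image: orientdb:latest
    command: /orientdb/bin/server.sh -Ddistributed=true
    environment:
      ORIENTDB_ROOT_PASSWORD: 'pwd'
      ORIENTDB_NODE_NAME: node1
    healthcheck:
      # Studio/REST answers on 2480 once the server is up
      test: ["CMD-SHELL", "wget -qO- http://localhost:2480 || exit 1"]
      interval: 5s
      retries: 12
  node2:
    image: orientdb:latest
    command: /orientdb/bin/server.sh -Ddistributed=true
    environment:
      ORIENTDB_ROOT_PASSWORD: 'pwd'
      ORIENTDB_NODE_NAME: node2
    depends_on:
      node1:
        condition: service_healthy

Beyond the flag itself, the two nodes also have to discover each other. The image ships a default hazelcast.xml; on a single compose network its default discovery usually works, but across hosts you would mount your own hazelcast.xml listing the member addresses.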
I'm trying to set up Redis Sentinel in Docker using the Bitnami images.
https://hub.docker.com/r/bitnami/redis-sentinel
Here is my docker-compose file:
version: '2'
networks:
  app-tier:
    driver: bridge
services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    networks:
      - app-tier
  slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6380:6380'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_MASTER_HOST=127.0.0.1
    networks:
      - app-tier
  redis-sentinel:
    image: 'bitnami/redis-sentinel:latest'
    environment:
      - REDIS_MASTER_HOST=127.0.0.1
    ports:
      - '26379:26379'
    networks:
      - app-tier
But in this case my slave is not able to sync up with the master, and the master info shows 0 slaves connected.
Another alternative I tried is:
version: '2'
networks:
  app-tier:
    driver: bridge
services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - REDIS_REPLICATION_MODE=master
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    networks:
      - app-tier
  slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6380:6380'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis
    depends_on:
      - redis
    networks:
      - app-tier
  redis-sentinel:
    image: 'bitnami/redis-sentinel:latest'
    environment:
      - REDIS_MASTER_HOST=redis
    depends_on:
      - redis
    ports:
      - '26379:26379'
    networks:
      - app-tier
In this case the sentinel and the slave run on the virtual network, which I'm not able to access locally.
My goal here is to monitor the slave node so that I can see the logs.
Can anyone help?
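Two things stand out before any Sentinel tuning. In the first file, REDIS_MASTER_HOST=127.0.0.1 points each container at itself (inside a container, 127.0.0.1 is that container), so the slave can never reach the master. In the second file the replication is wired correctly, but the slave publishes '6380:6380' while the Bitnami image listens on 6379 inside the container, so nothing answers on host port 6380. A sketch of the corrected slave service, everything else as in the second file:

slave:
  image: 'bitnami/redis:latest'
  ports:
    # Redis listens on 6379 inside the container; map host 6380 to it
    - '6380:6379'
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
    - REDIS_REPLICATION_MODE=slave
    - REDIS_MASTER_HOST=redis
  depends_on:
    - redis
  networks:
    - app-tier

With that mapping you can watch the replica locally with redis-cli -p 6380 info replication, and the sentinel is already reachable on 26379.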
I've been experimenting with Hyperledger Composer and with the official multi-org tutorial. I was successful in modifying the given demo, adding a third organisation and finally installing my own bna.
The next step was to fully understand how to deploy the Fabric network and Composer on multiple physical machines. I went through all the available info about deploying such a setup, but without much luck.
Let's suppose:
PC1: 1 Orderer, 1 Organisation, 1 CLI container;
PC2: 1 Organisation;
PC3: 1 Organisation;
I'm able to put the 3 machines in a swarm.
I know that I need to generate the certificates for all the machines and that they should be identical.
But from there I don't fully understand how to continue, or how to add references to the swarm network inside the compose files...
#docker-compose-cas-template-0 - This is for PC 1
version: '2'
networks:
  example:
services:
  ca0:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-manager
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.manager.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.manager.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/manager.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerManager
    networks:
      - example
# docker-compose-base-1.yaml - This is for PC1
version: '2'
services:
  orderer.proa.com:
    container_name: orderer.proa.com
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      - ORDERER_GENERAL_LOGLEVEL=INFO
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/proa.com/orderers/orderer.proa.com/msp:/var/hyperledger/orderer/msp
      - ../crypto-config/ordererOrganizations/proa.com/orderers/orderer.proa.com/tls/:/var/hyperledger/orderer/tls
      - orderer.proa.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050
  peer0.manager.proa.com:
    container_name: peer0.manager.proa.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.manager.proa.com
      - CORE_PEER_ADDRESS=peer0.manager.proa.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.manager.proa.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.manager.proa.com:7051
      - CORE_PEER_LOCALMSPID=ManagerMSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/manager.proa.com/peers/peer0.manager.proa.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/manager.proa.com/peers/peer0.manager.proa.com/tls:/etc/hyperledger/fabric/tls
      - peer0.manager.proa.com:/var/hyperledger/production
    ports:
      - 7051:7051
      - 7053:7053
  peer1.manager.proa.com:
    container_name: peer1.manager.proa.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.manager.proa.com
      - CORE_PEER_ADDRESS=peer1.manager.proa.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.manager.proa.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.manager.proa.com:7051
      - CORE_PEER_LOCALMSPID=ManagerMSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/manager.proa.com/peers/peer1.manager.proa.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/manager.proa.com/peers/peer1.manager.proa.com/tls:/etc/hyperledger/fabric/tls
      - peer1.manager.proa.com:/var/hyperledger/production
    ports:
      - 8051:7051
      - 8053:7053
UPDATED with docker-compose-cli.yaml - for ORG2 and PC2
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
volumes:
  peer0.sponsor.example.com:
  peer1.sponsor.example.com:
networks:
  example:
    external:
      name: example
services:
  peer0.sponsor.example.com:
    container_name: peer0.sponsor.example.com
    extends:
      file: base/docker-compose-base-2.yaml
      service: peer0.sponsor.example.com
    networks:
      - example
  peer1.sponsor.example.com:
    container_name: peer1.sponsor.example.com
    extends:
      file: base/docker-compose-base-2.yaml
      service: peer1.sponsor.example.com
    networks:
      - example
  cli2:
    container_name: cli2
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.sponsor.example.com:7051
      - CORE_PEER_LOCALMSPID=SponsorMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/sponsor.example.com/peers/peer0.sponsor.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/sponsor.example.com/peers/peer0.sponsor.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/sponsor.example.com/peers/peer0.sponsor.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/sponsor.example.com/users/Admin@sponsor.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.sponsor.example.com
      - peer1.sponsor.example.com
    networks:
      - example
You're on the right track. I'll list the steps below:
1. Create a docker swarm and connect the hosts. Since you've already created a swarm, I assume your hosts are connected.
2. Create an overlay network from one of the hosts. In our case, it is the "example" network:
docker network create --attachable --driver overlay example
3. This overlay network will now be available on all the hosts. You can run the following command on each host:
docker network ls
You'll see the network named "example" listed as an overlay network. You can also inspect it to see which hosts (peers) are connected to this network:
docker network inspect example
4. Spin up the containers. In this step, we need to make them join the existing overlay network, i.e. "example", so your compose files will look like this:
version: '2'
networks:
  example:
    external:
      name: example
services:
  ca0:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-manager
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.manager.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.manager.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/manager.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerManager
    networks:
      - example
This configuration will remain similar for all your docker containers, be it peer, orderer, ca, or cli. It also makes sure that your container joins the existing network instead of creating a new one.
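To verify cross-host name resolution, you can attach a throwaway container to the overlay from any host and ping another container by name; a quick check, assuming the ca_peerManager container above is running:

docker run --rm --network example alpine ping -c 1 ca_peerManager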
Note: Running Docker in swarm mode requires a few ports to be opened up. You can find those references in this article.
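For reference, these are the ports swarm mode itself uses between hosts (Docker defaults; shown here as ufw rules, adapt to your firewall):

sudo ufw allow 2377/tcp   # cluster management (swarm join)
sudo ufw allow 7946/tcp   # node-to-node discovery (gossip)
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN overlay network traffic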
Please help me with my docker-compose file.
Right now I'm using Solr in a Dockerfile, but I need to change it to SolrCloud. I need 2 Solr instances with an internal ZooKeeper, running in Docker (locally).
This is an example of the docker-compose file I made:
version: "3"
services:
mongo:
image: mongo:latest
container_name: mongo
hostname: mongo
networks:
- gsec
ports:
- 27018:27017
sqlserver:
image: microsoft/mssql-server-linux:latest
hostname: sqlserver
container_name: sqlserver
environment:
SA_PASSWORD: "#Password123!"
ACCEPT_EULA: "Y"
networks:
- gsec
ports:
- 1403:1433
solr:
image: solr
container_name: solr
ports:
- "8983:8983"
networks:
- gsec
volumes:
- data:/opt/solr/server/solr/mycores
entrypoint:
- docker-entrypoint.sh
- solr-precreate
- mycore
volumes:
data:
networks:
gsec:
driver: bridge
Thank you in advance.
The Solr docker image has a ZooKeeper server embedded in it. You just have to start Solr with the right parameters and add the ZooKeeper port mapping 9983:9983 to the docker-compose file:
solr:
  image: solr
  container_name: solr
  ports:
    - "9983:9983"
    - "8983:8983"
  networks:
    - gsec
  volumes:
    - data:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr
    - start
    - -c
    - -f
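Once Solr is up in cloud mode you can create a collection against the embedded ZooKeeper; a quick check, assuming the container is named solr as above (on newer images the subcommand is solr create):

docker exec -it solr solr create_collection -c mycore
curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS'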
SolrCloud is basically a Solr cluster where ZooKeeper is used to coordinate and configure the cluster.
Usually you use SolrCloud with Docker because you're learning how it works, or because you're preparing your application (locally?) to deploy into a bigger environment.
On the other hand, it doesn't make much sense to run SolrCloud if you don't have a distributed configuration, i.e. Solr and ZooKeeper running on different nodes.
SolrCloud is the kind of cluster you need when you have hundreds or even thousands of searches per second against collections of millions or even billions of documents.
Your cluster has to scale horizontally.
A version to use with an external ZooKeeper.
The '-t' option changes the data dir inside the container.
To see the other options, run: solr start -help
version: '3'
services:
  solr1:
    image: solr
    ports:
      - "8984:8984"
    entrypoint:
      - solr
    command:
      - start
      - -f
      - -c
      - -h
      - "10.1.0.157"
      - -p
      - "8984"
      - -z
      - "10.1.0.157:2181,10.1.0.157:2182,10.1.0.157:2183"
      - -m
      - 1g
      - -t
      - "/opt/solr/server/solr/mycores"
    volumes:
      - "./data1/mycores:/opt/solr/server/solr/mycores"
I use this setup locally to test three instances of solr and three instances of zookeeper, based on the official example.
version: '3.7'
services:
  solr-1:
    image: solr:8.7
    container_name: solr-1
    ports:
      - "8981:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
    # command:
    #   - solr-precreate
    #   - gettingstarted
  solr-2:
    image: solr:8.7
    container_name: solr-2
    ports:
      - "8982:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  solr-3:
    image: solr:8.7
    container_name: solr-3
    ports:
      - "8983:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  zoo-1:
    image: zookeeper:3.6
    container_name: zoo-1
    restart: always
    hostname: zoo-1
    volumes:
      - zoo1data:/data
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-2:
    image: zookeeper:3.6
    container_name: zoo-2
    restart: always
    hostname: zoo-2
    volumes:
      - zoo2data:/data
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-3:
    image: zookeeper:3.6
    container_name: zoo-3
    restart: always
    hostname: zoo-3
    volumes:
      - zoo3data:/data
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    networks:
      - solr
networks:
  solr:
# persist the zookeeper data in volumes
volumes:
  zoo1data:
    driver: local
  zoo2data:
    driver: local
  zoo3data:
    driver: local
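After docker-compose up you can confirm that all three Solr nodes registered with the ensemble; the live_nodes section of the cluster status should list three entries (ports as mapped above):

curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS'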
I'm attempting to run this script on Win10 to configure everything.
All containers except the elastic container initialize correctly;
elastic times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log outputs)
I'm running this script, in which I didn't touch anything except the Windows-side ports (you can see the comments):
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
  alfresco:
    image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
    depends_on:
      - postgresql
      - activemq
      - elastic
    networks:
      - internal
    ports:
      - 8080:8080
    volumes:
      - alf_logs:/usr/local/tomcat/logs
      - alf_data:/opt/alf_data
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  solr:
    image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
    depends_on:
      - alfresco
    networks:
      - internal
    volumes:
      - solr_logs:/usr/local/tomcat/logs/
      - solr_content_store:/opt/solr/ContentStore
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  activemq:
    image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
    ports:
      # I changed these Windows side ports
      - 61615:61616
      - 61617:61614
      - 8162:8161
      # ORIGINAL
      #- 61616:61616
      #- 61614:61614
      #- 8161:8161
    volumes:
      - activemq-data-volume:/data/activemq
      - activemq-log-volume:/var/log/activemq
      - activemq-conf-volume:/opt/activemq/conf
    environment:
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=admin
    networks:
      - internal
  elastic:
    image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
    environment:
      CLEAN: 'false'
    ports:
      - 9200:9200
    volumes:
      - elastic-data-volume:/usr/share/elasticsearch/data
    networks:
      - internal
  postgresql:
    image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
    volumes:
      - pgsql_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    networks:
      - internal
volumes:
  alf_logs:
  alf_data:
  solr_logs:
  solr_content_store:
  pgsql_data:
  activemq-data-volume:
  activemq-log-volume:
  activemq-conf-volume:
  elastic-data-volume:
  nginx-external-volume:
networks:
  internal:
Any help would be greatly appreciated!
Do you have the logs from the elasticsearch container to share? Without that it's hard to tell why it's exiting.
One thing that's tripped me up repeatedly though is the vm.max_map_count setting - the default in Docker is too low for elasticsearch to function, so it's a good first thing to check.
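On Linux hosts that check looks like the following; on Win10 the sysctl has to be applied inside Docker Desktop's VM rather than in Windows itself:

# Elasticsearch needs vm.max_map_count >= 262144
sysctl vm.max_map_count                                        # inspect the current value
sudo sysctl -w vm.max_map_count=262144                         # raise it for the running system
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf  # persist across reboots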
I'm running tests from my Jenkins and I've realized that if I run only one test, the result is SUCCESS; but if I run three tests in parallel, the tests don't finish with SUCCESS, they hit the timeout.
My docker configuration is:
version: "2"
services:
selenium:
networks:
mynet:
ipv4_address: 172.25.0.100
image: bufete/spider-selenium:beta1
ports:
- "8080:8080"
hub:
networks:
mynet:
ipv4_address: 172.25.0.101
image: selenium/hub
ports:
- "5001:4444"
environment:
- GRID_BROWSER_TIMEOUT=1400
- GRID_MAX_SESSION=10
- GRID_TIMEOUT=120
- GRID_NEW_SESSION_WAIT_TIMEOUT=-1
# - GRID_TIMEOUT=900
chrome:
image: selenium/node-chrome-debug:3.3.1
shm_size: 2g
links:
- hub
volumes:
- /dev/urandom:/dev/random
depends_on:
- hub
environment:
- HUB_PORT_4444_TCP_ADDR=172.25.0.101
- HUB_PORT_4444_TCP_PORT=4444
- NODE_MAX_SESSION=1
- NODE_MAX_INSTANCES=1
- SCREEN_WIDTH=1920
- SCREEN_HEIGHT=1480
- GRID_TIMEOUT=120
- no_proxy=""
networks:
- mynet
# ports:
# - "666:5900"
volumes:
- "/dev/shm:/dev/shm"
In Jenkins, I execute the following command:
cd folder_project/name_test/
/opt/tools/apache-maven-3.5.0/bin/mvn -e test
Do you know how I could get the same result running my tests in parallel?
Thanks so much!
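Not a confirmed diagnosis, but note that the grid above has a single Chrome node capped at NODE_MAX_SESSION=1 and NODE_MAX_INSTANCES=1, so three parallel tests queue behind one browser session and the waiting ones can run into the configured timeouts. A sketch of one way to add capacity; since the chrome service declares no container_name or fixed IP, it can be scaled:

# raise the per-node limits in the compose file, e.g.
#   - NODE_MAX_SESSION=3
#   - NODE_MAX_INSTANCES=3
# and/or start more node containers:
docker-compose up -d --scale chrome=3
# on older docker-compose releases the equivalent is:
docker-compose scale chrome=3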