I have a problem with Elasticsearch 5 in Docker.
Stack compose file:
version: "3.4"
services:
elastic01: &elasticbase
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.7
networks:
- default
restart: always
environment:
- node.name=elastic01
- cluster.name=elastic
- network.host=0.0.0.0
- xpack.security.enabled=false
- xpack.monitoring.enabled=false
- xpack.watcher.enabled=false
- bootstrap.memory_lock=false ## Docker swarm does not support that
- discovery.zen.minimum_master_nodes=2
- discovery.zen.ping.unicast.hosts=elastic02,elastic03
volumes:
- /var/docker/elastic:/usr/share/elasticsearch/data
deploy:
placement:
constraints: [node.hostname == node1]
elastic02:
<<: *elasticbase
depends_on:
- elastic01
environment:
- node.name=elastic02
- cluster.name=elastic
- network.host=0.0.0.0
- xpack.security.enabled=false
- xpack.monitoring.enabled=false
- xpack.watcher.enabled=false
- bootstrap.memory_lock=false ## Docker swarm does not support that
- discovery.zen.minimum_master_nodes=2
- discovery.zen.ping.unicast.hosts=elastic01,elastic03
volumes:
- /var/docker/elastic:/usr/share/elasticsearch/data
deploy:
placement:
constraints: [node.hostname == node2]
elastic03:
<<: *elasticbase
depends_on:
- elastic01
volumes:
- /var/docker/elastic:/usr/share/elasticsearch/data
environment:
- node.name=elastic03
- cluster.name=elastic
- network.host=0.0.0.0
- xpack.security.enabled=false
- bootstrap.memory_lock=false ## Docker swarm does not support that
- discovery.zen.minimum_master_nodes=2
- discovery.zen.ping.unicast.hosts=elastic01,elastic02
deploy:
placement:
constraints: [node.hostname == node3]
networks:
default:
driver: overlay
attachable: true
When I deploy the stack file, it works like a charm: _cluster/health shows that the nodes are up and running and the status is "green". But after a while, periodically, the system goes down with the following exception:
Feb 10 09:39:39 : [2018-02-10T08:39:39,159][WARN ][o.e.d.z.UnicastZenPing ] [elastic01] failed to send ping to [{elastic03}{2WS6GPu8Qka9YLE_PWfVKg}{AD_Nw1m9T-CZHUFhgXQjtQ}{10.0.9.5}{10.0.9.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
Feb 10 09:39:39 : org.elasticsearch.transport.ReceiveTimeoutTransportException: [elastic03][10.0.9.5:9300][internal:discovery/zen/unicast] request_id [5167] timed out after [3750ms]
Feb 10 09:39:39 : at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:39 : at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:39 : at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
Feb 10 09:39:39 : at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
Feb 10 09:39:39 : at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Feb 10 09:39:40 : [2018-02-10T08:39:40,159][WARN ][o.e.d.z.UnicastZenPing ] [elastic01] failed to send ping to [{elastic03}{2WS6GPu8Qka9YLE_PWfVKg}{AD_Nw1m9T-CZHUFhgXQjtQ}{10.0.9.5}{10.0.9.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
Feb 10 09:39:40 : org.elasticsearch.transport.ReceiveTimeoutTransportException: [elastic03][10.0.9.5:9300][internal:discovery/zen/unicast] request_id [5172] timed out after [3750ms]
Feb 10 09:39:40 : at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:40 : at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:40 : at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
Feb 10 09:39:40 : at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
Feb 10 09:39:40 : at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Feb 10 09:39:41 : [2018-02-10T08:39:41,159][WARN ][o.e.d.z.UnicastZenPing ] [elastic01] failed to send ping to [{elastic03}{2WS6GPu8Qka9YLE_PWfVKg}{AD_Nw1m9T-CZHUFhgXQjtQ}{10.0.9.5}{10.0.9.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
Feb 10 09:39:41 : org.elasticsearch.transport.ReceiveTimeoutTransportException: [elastic03][10.0.9.5:9300][internal:discovery/zen/unicast] request_id [5175] timed out after [3751ms]
Feb 10 09:39:41 : at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:41 : at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:41 : at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
Feb 10 09:39:41 : at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
Feb 10 09:39:41 : at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
And sometimes:
Feb 10 09:44:10 [2018-02-10T08:44:10,810][WARN ][o.e.t.n.Netty4Transport ] [elastic01] exception caught on transport layer [[id: 0x3675891a, L:/10.0.9.210:53316 - R:10.0.9.5/10.0.9.5:9300]], closing connection
Feb 10 09:44:10 java.io.IOException: No route to host
Feb 10 09:44:10 at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:?]
Feb 10 09:44:10 at io.netty.buffer.PooledHeapByteBuf.setBytes(PooledHeapByteBuf.java:261) ~[netty-buffer-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100) ~[netty-buffer-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372) ~[netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
The strange thing is that every time this happens, I am still able to ping the container and resolve its name from the container where the error occurs. No packet loss, no timeouts. The only thing that breaks is the Elasticsearch transport layer. All other services (MongoDB, Redis, internal microservices) run in the same cluster without issues.
Does anybody have a clue?
I found the issue.
Elasticsearch must be bound to a single interface, not to 0.0.0.0. Once I bound it to eth0, it started to work. It also appears that a named volume cannot be used - it throws another error after a while. The data directory must be bind-mounted to a local path directly.
This works:
services:
  elastic01:
    environment:
      - network.host=_eth0_
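For reference, a minimal sketch of how one node looks with both changes applied (same image, discovery and placement settings as in the stack file above; the other two nodes differ only in node.name and the unicast host list):

elastic01:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.7
  environment:
    - node.name=elastic01
    - cluster.name=elastic
    - network.host=_eth0_            # bind to a single interface, not 0.0.0.0
    - discovery.zen.minimum_master_nodes=2
    - discovery.zen.ping.unicast.hosts=elastic02,elastic03
  volumes:
    - /var/docker/elastic:/usr/share/elasticsearch/data   # plain bind mount, not a named volume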
I tried to deploy Keycloak and its database via Docker (Docker Compose).
It retries 10 times, then fails the deployment. The same docker-compose.yml file worked for me in the past, and I haven't done any OS or container updates since.
The following error and warning are thrown:
keycloak | 09:48:42,070 ERROR [org.jgroups.protocols.TCP] (ServerService Thread Pool -- 60) JGRP000034: cff2ce8f5cdf: failure sending message to e832b25e9785: java.net.SocketTimeoutException: connect timed out
keycloak | 09:48:45,378 WARN [org.jgroups.protocols.pbcast.GMS] (ServerService Thread Pool -- 60) cff2ce8f5cdf: JOIN(cff2ce8f5cdf) sent to 05bdb7a4a7f5 timed out (after 3000 ms), on try 0
My docker-compose.yml looks like this:
keycloak:
  container_name: keycloak
  image: jboss/keycloak:11.0.2
  ports:
    - 8081:8080
  environment:
    - DB_VENDOR=mariadb
    - DB_ADDR=authenticationDB
    - DB_DATABASE=keycloak
    - DB_USER=keycloak
    - DB_PASSWORD=password
    - KEYCLOAK_USER=admin
    - KEYCLOAK_PASSWORD=admin
    - JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING
    - JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS,info_writer_sleep_time=500
  depends_on:
    - authenticationDB
authenticationDB:
  container_name: authenticationDB
  image: mariadb
  volumes:
    - ./keycloakDB:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: keycloak
    MYSQL_USER: keycloak
    MYSQL_PASSWORD: password
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "--silent"]
I've tried the following:
SSH into Keycloak's container and curl authenticationDB:3306. I got a "no permission" error, so the containers can talk to each other.
Check whether the database is running inside the DB container - yes, it's running.
I am running out of ideas.
In the past it would retry 10 times and then successfully deploy Keycloak.
Thanks in advance,
Rosario
I would say that the Docker image jboss/keycloak:11.0.2 doesn't support JDBC_PING:
$ docker run --rm --entrypoint bash -ti jboss/keycloak:11.0.2 \
-c 'ls -lah /opt/jboss/tools/cli/jgroups/discovery/'
total 4.0K
drwxrwxr-x. 1 jboss root 25 Sep 15 09:01 .
drwxrwxr-x. 1 jboss root 23 Sep 15 09:01 ..
-rw-rw-r--. 1 jboss root 611 Sep 15 09:01 default.cli
vs
$ docker run --rm --entrypoint bash -ti jboss/keycloak:12.0.2 \
-c 'ls -lah /opt/jboss/tools/cli/jgroups/discovery/'
total 8.0K
drwxrwxr-x. 1 jboss root 46 Jan 19 07:27 .
drwxrwxr-x. 1 jboss root 23 Jan 19 07:27 ..
-rw-rw-r--. 1 jboss root 611 Jan 19 07:27 default.cli
-rw-rw-r--. 1 jboss root 605 Jan 19 07:27 JDBC_PING.cli
Try a newer version.
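For example, a minimal change to the compose file above would be bumping the image tag to one that ships the JDBC_PING CLI snippet (12.0.2 is the tag inspected above; everything else stays the same):

keycloak:
  container_name: keycloak
  image: jboss/keycloak:12.0.2   # contains /opt/jboss/tools/cli/jgroups/discovery/JDBC_PING.cli
  # ... environment, ports and depends_on unchanged from the original file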
I'm working through a course at Blockchain Training Alliance for Hyperledger Fabric 1.4.*
I'm trying to start a channel on a dev test network and keep getting the following error:
2020-07-07 01:41:48.496 UTC [cauthdsl] deduplicate -> ERRO 34f Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority) for identity 0
I saw in one post by anjy that network time-sync issues can cause this problem. My time did seem to differ between the containers and the host VM, so I installed ntpdate and ran sudo ntpdate pool.ntp.org on the host VM before starting the network. That fixed the time difference, but the error above was still there.
According to Nikhil Gupta's post, this error indicates that "the MSP ID that was passed as a parameter with the request was not recognized by the ordering service."
"the ordering service recognized your MSP ID, but could not validate that your certificate was issued by one of your organization's certificate authorities."
I'm using cryptogen and configtxgen to create my artifacts as follows:
$cryptogen generate --config=./crypto-config.yaml
then I edit docker-compose.yml to include the newly generated _sk value (a scripted version of this step is sketched after these commands) and continue with:
$configtxgen -profile DefaultBlockOrderingService -outputBlock ./config/genesis.block -configPath $PWD
$configtxgen -profile btaMembersOnly -outputCreateChannelTx ./config/btamembersonly.tx -channelID btamembersonly
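Since the *_sk file name changes on every cryptogen run, the docker-compose.yml edit can be scripted instead of done by hand. A rough, hypothetical sketch using the CA path from this setup (the sed pattern assumes the 64-hex-character key name produced by cryptogen):

# pick up the freshly generated private key file name
SK_FILE=$(cd ./crypto-config/peerOrganizations/BTA.btacoin.com/ca && ls *_sk)
# swap it into the FABRIC_CA_SERVER_CA_KEYFILE line of docker-compose.yml
sed -i "s/[0-9a-f]\{64\}_sk/${SK_FILE}/" docker-compose.yml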
After succeeding to create the genesis block and channel transaction artifact, I start the network:
$docker-compose -f docker-compose.yml up -d Devorderer.btacoin.com Andy.BTA.btacoin.com GeneralCA.btacoin.com cli
andy@ubuntu-server:~/fabric/network$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f406f488ede5 hyperledger/fabric-peer "peer node start" 4 seconds ago Up 1 second 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp Andy.BTA.btacoin.com
0900795b1368 hyperledger/fabric-tools "/bin/bash" 4 seconds ago Up 2 seconds cli
c4946b315b08 hyperledger/fabric-orderer "orderer" 6 seconds ago Up 3 seconds 0.0.0.0:7050->7050/tcp Devorderer.btacoin.com
2e66b1d981f5 hyperledger/fabric-ca "sh -c 'fabric-ca-se…" 6 seconds ago Up 3 seconds 0.0.0.0:7054->7054/tcp GeneralCA.btacoin.com
Then I log into the admin peer and try to start the channel:
$docker exec -it Andy.BTA.btacoin.com bash
#cd /etc/hyperledger/configtx
#export CORE_PEER_LOCALMSPID=BTAMSP
#export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@BTA.btacoin.com/msp
#peer channel create -o Devorderer.btacoin.com:7050 -f /etc/hyperledger/configtx/btamembersonly.tx -c btamembersonly
At this point, I get the following error:
Error: got unexpected status: BAD_REQUEST -- error validating channel creation transaction for new channel 'btamembersonly', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
At this point, the orderer node log shows the error mentioned at the beginning:
2020-07-07 01:57:04.947 UTC [cauthdsl] deduplicate -> ERRO 34f Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority) for identity 0
2020-07-07 01:57:04.947 UTC [cauthdsl] func1 -> DEBU 350 0xc00046e820 gate 1594087024947536840 evaluation starts
2020-07-07 01:57:04.947 UTC [cauthdsl] func2 -> DEBU 351 0xc00046e820 signed by 0 principal evaluation starts (used [false])
2020-07-07 01:57:04.947 UTC [cauthdsl] func2 -> DEBU 352 0xc00046e820 principal evaluation fails
2020-07-07 01:57:04.947 UTC [cauthdsl] func1 -> DEBU 353 0xc00046e820 gate 1594087024947536840 evaluation fails
2020-07-07 01:57:04.947 UTC [policies] Evaluate -> DEBU 354 Signature set did not satisfy policy /Channel/Application/BTAMSP/Admins
2020-07-07 01:57:04.947 UTC [policies] Evaluate -> DEBU 355 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/BTAMSP/Admins
2020-07-07 01:57:04.947 UTC [policies] func1 -> DEBU 356 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ BTAMSP/Admins ]
2020-07-07 01:57:04.947 UTC [policies] Evaluate -> DEBU 357 Signature set did not satisfy policy /Channel/Application/ChannelCreationPolicy
2020-07-07 01:57:04.947 UTC [policies] Evaluate -> DEBU 358 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy
2020-07-07 01:57:04.947 UTC [orderer.common.broadcast] ProcessMessage -> WARN 359 [channel: btamembersonly] Rejecting broadcast of config message from 172.18.0.4:56024 because of error: error validating channel creation transaction for new channel 'btamembersonly', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
2020-07-07 01:57:04.947 UTC [orderer.common.server] func1 -> DEBU 35a Closing Broadcast stream
2020-07-07 01:57:04.947 UTC [comm.grpc.server] 1 -> INFO 35b streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.18.0.4:56024 grpc.code=OK grpc.call_duration=12.196088ms
2020-07-07 01:57:04.960 UTC [common.deliver] Handle -> WARN 35c Error reading from 172.18.0.4:56022: rpc error: code = Canceled desc = context canceled
2020-07-07 01:57:04.961 UTC [orderer.common.server] func1 -> DEBU 35d Closing Deliver stream
2020-07-07 01:57:04.961 UTC [comm.grpc.server] 1 -> INFO 35e streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.18.0.4:56022 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=27.125361ms
2020-07-07 01:57:04.964 UTC [grpc] infof -> DEBU 35f transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2020-07-07 01:57:04.964 UTC [grpc] infof -> DEBU 360 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I'm not really sure where to look next for troubleshooting.
My setup is as follows (please let me know if I missed any key information):
andy@ubuntu-server:~/fabric/network$ tree -L 2
.
├── config
│ ├── btamembersonly.tx
│ └── genesis.block
├── configtx.yaml
├── crypto-config
│ ├── ordererOrganizations
│ └── peerOrganizations
├── crypto-config.yaml
├── defaults
│ ├── core.yaml
│ └── orderer.yaml
└── docker-compose.yml
docker-compose.yml:
andy@ubuntu-server:~/fabric/network$ cat -n docker-compose.yml
1 version: '2'
2
3 networks:
4 btacoin:
5
6 services:
7 GeneralCA.btacoin.com:
8 container_name: GeneralCA.btacoin.com
9 image: hyperledger/fabric-ca
10 command: sh -c 'fabric-ca-server start -b btaCA:SimplePassword' #startup command
11 environment:
12 - FABRIC_CA_SERVER_CA_NAME=GeneralCA.btacoin.com
13 - FABRIC_LOGGING_SPEC=debug
14 - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
15 - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.BTA.btacoin.com-cert.pem
16 - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/ccb94b9473ef97a36b7d83beeb80583e9a2bda50ca5a392071b3c96185948ed7_sk
17 volumes:
18 - ./crypto-config/peerOrganizations/BTA.btacoin.com/ca/:/etc/hyperledger/fabric-ca-server-config
19 ports:
20 - 7054:7054
21 networks:
22 - btacoin
23
24 Devorderer.btacoin.com:
25 container_name: Devorderer.btacoin.com
26 image: hyperledger/fabric-orderer
27 command: orderer #startup command
28 environment:
29 - FABRIC_LOGGING_SPEC=info
30 - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
31 - ORDERER_GENERAL_GENESISMETHOD=file
32 - FABRIC_LOGGING_SPEC=debug
33 - ORDERER_GENERAL_LOCALMSPID=btacoinOrderersMSP
34 - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
35 - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
36
37 volumes:
38 - ./config/:/etc/hyperledger/configtx
39 - ./crypto-config/ordererOrganizations/btacoin.com/orderers/Devorderer.btacoin.com/:/etc/hyperledger/msp/orderer
40 - ./crypto-config/peerOrganizations/BTA.btacoin.com/peers/Andy.BTA.btacoin.com/:/etc/hyperledger/msp/BTA
41 ports:
42 - 7050:7050
43 networks:
44 - btacoin
45
46 Andy.BTA.btacoin.com:
47 container_name: Andy.BTA.btacoin.com
48 image: hyperledger/fabric-peer
49 command: peer node start #startup command
50 environment:
51 - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_btacoin
52 - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
53 - FABRIC_LOGGING_SPEC=debug
54 - CORE_PEER_ID=Andy.BTA.btacoin.com
55 - CORE_PEER_ADDRESS=Andy.BTA.btacoin.com:7051
56 - CORE_PEER_LOCALMSPID=BTAMSP
57 - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
58
59 volumes:
60 - /var/run/:/host/var/run/
61 - ./crypto-config/peerOrganizations/BTA.btacoin.com/peers/Andy.BTA.btacoin.com/msp:/etc/hyperledger/msp/peer
62 - ./crypto-config/peerOrganizations/BTA.btacoin.com/users:/etc/hyperledger/msp/users
63 - ./config:/etc/hyperledger/configtx
64 - ./../cc:/etc/hyperledger/chaincode
65 - ./chaincode:/etc/hyperledger/chaincode #Referenced in the Student Lab Guide
66 ports:
67 - 7051:7051
68 - 7053:7053
69 depends_on:
70 - Devorderer.btacoin.com
71 networks:
72 - btacoin
73
74 cli:
75 container_name: cli
76 image: hyperledger/fabric-tools
77 command: /bin/bash #startup command
78 tty: true
79 environment:
80 - GOPATH=/opt/gopath/src
81 - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
82 - FABRIC_LOGGING_SPEC=debug
83 - CORE_PEER_ID=cli
84 - CORE_PEER_ADDRESS=Andy.BTA.btacoin.com:7051
85 - CORE_PEER_LOCALMSPID=BTAMSP
86 - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/BTA.btacoin.com/user/Admin@BTA.btacoin.com/msp
87
88 volumes:
89 - /var/run/:/host/var/run/
90 - ./../cc/:/opt/gopath/src/
91 - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
92 # - ./cryptoconfig:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
93 - ./config:/etc/hyperledger/configtx
94 depends_on:
95 - Devorderer.btacoin.com
96 networks:
97 - btacoin
configtx.yaml:
andy@ubuntu-server:~/fabric/network$ cat -n configtx.yaml
1 Organizations:
2 - &btacoinOrderers
3 Name: btacoinOrderersMSP
4 ID: btacoinOrderersMSP
5 MSPDir: crypto-config/ordererOrganizations/btacoin.com/msp
6 - &BTA
7 Name: BTAMSP
8 ID: BTAMSP
9 MSPDir: crypto-config/ordererOrganizations/btacoin.com/msp
10 AnchorPeers:
11 - Host: Andy.BTA.btacoin.com
12 Port: 7051
13
14 Application: &ApplicationDefaults
15 Organizations:
16
17 Orderer: &DevModeOrdering
18 OrdererType: solo
19 Addresses:
20 - Devorderer.btacoin.com:7050
21 BatchTimeout: 1s
22 BatchSize:
23 MaxMessageCount: 1
24
25 Channel:
26
27 Profiles:
28 DefaultBlockOrderingService:
29 Orderer:
30 <<: *DevModeOrdering
31 Organizations:
32 - *btacoinOrderers
33 Consortiums:
34 NetworkConsortium: #Note, in the video, this was called SampleConsortium
35 Organizations:
36 - *BTA
37 btaMembersOnly:
38 Consortium: NetworkConsortium #Note, in the video, this was called SampleConsortium
39 Application:
40 <<: *ApplicationDefaults
41 Organizations:
42 - *BTA
43
crypto-config.yaml:
andy@ubuntu-server:~/fabric/network$ cat -n crypto-config.yaml
1 #Note: crypto-config is only used for development purposes, NOT for production purposes
2 #For production, you should have the Certificate Authority manage things!
3 OrdererOrgs:
4 - Name: btacoinOrderers
5 Domain: btacoin.com
6 Specs:
7 - Hostname: Devorderer
8
9 PeerOrgs:
10 - Name: BTA
11 Domain: BTA.btacoin.com
12 Specs:
13 - Hostname: Andy
14 Template:
15 Count: 1
16 Users:
17 Count: 1
CA cert value used in docker-compose.yml:
andy@ubuntu-server:~/fabric/network$ ls ./crypto-config/peerOrganizations/BTA.btacoin.com/ca
ca.BTA.btacoin.com-cert.pem ccb94b9473ef97a36b7d83beeb80583e9a2bda50ca5a392071b3c96185948ed7_sk
orderer.yaml and core.yaml come from:
https://s3.us-east-2.amazonaws.com/fabric-masterclass/orderer.yaml
https://s3.us-east-2.amazonaws.com/fabric-masterclass/core.yaml
Docker images come from:
docker pull hyperledger/fabric-peer
docker pull hyperledger/fabric-orderer
docker pull hyperledger/fabric-ca
docker pull hyperledger/fabric-tools
*The course is self-guided, with no forum or course support available. If there were another channel for help, I wouldn't post here. (I tried emailing the course creators for help prior to posting here.)
According to Nikhil Gupta's post, this error indicates that "the MSP ID that was passed as a parameter with the request was not recognized by the ordering service."
Actually, this isn't true. If you read this post carefully, you will see that the error
ERRO 02d Principal deserialization failure
(the supplied identity is not valid: x509: certificate signed by unknown authority)
actually indicates that the MSPID is recognized by the system, and that in fact it is your certificate which is invalid. So either the MSPID does not match the certificate, or the certificate was not appropriately issued by the CAs for that MSPID in your channel configuration.
In your case, given that this is a course exercise and not a production network, I would guess that the network has been bootstrapped multiple times without properly cleaning all of the persisted artifacts in between. I would encourage you to ensure that all Docker containers and especially Docker volumes have been removed, and to attempt to recreate the failure in a clean environment. The docker-compose file you included does enumerate volumes; you can list them via docker volume ls and remove them with a command like docker volume rm $(docker volume ls -q).
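A minimal teardown sketch along those lines (note that the last command removes every volume on the host, not just this project's):

# stop and remove the containers plus any volumes the compose file created
docker-compose -f docker-compose.yml down --volumes --remove-orphans
# list what is left, then remove all remaining volumes (aggressive)
docker volume ls
docker volume rm $(docker volume ls -q)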
As I mentioned in the comments, the course creators sent me a copy of the official YAML files for comparison. After comparing those files with mine, I discovered that I entered the wrong MSP for the peer in configtx.yaml.
Instead of pointing to the peer's MSP, I was pointing to the orderer's MSP!
Original (with error):
6 - &BTA
7 Name: BTAMSP
8 ID: BTAMSP
9 MSPDir: crypto-config/ordererOrganizations/btacoin.com/msp
10 AnchorPeers:
11 - Host: Andy.BTA.btacoin.com
12 Port: 7051
Corrected line:
9 MSPDir: crypto-config/peerOrganizations/BTA.btacoin.com/msp
After fixing this line, I was able to successfully create the new channel.
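One note for anyone following along: the genesis block and the channel transaction both embed the organizations' MSP material, so after changing MSPDir they need to be regenerated - i.e. re-run the same configtxgen commands from above (and restart the network with the new genesis block) before retrying peer channel create:

$configtxgen -profile DefaultBlockOrderingService -outputBlock ./config/genesis.block -configPath $PWD
$configtxgen -profile btaMembersOnly -outputCreateChannelTx ./config/btamembersonly.tx -channelID btamembersonly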
So this is my current docker-compose.yml:
version: "2.0"
services:
redis:
image: redis
container_name: framework-redis
ports:
- "127.0.0.1:6379:6379"
web:
image: myContainer:v1
container_name: framework-web
depends_on:
- redis
volumes:
- /var/www/myApp:/app
environment:
LOG_STDOUT: /var/log/docker.access.log
LOG_STDERR: /var/log/docker.error.log
ports:
- "8100:80"
I've tried different settings; for example: not using a port value for redis, using 0.0.0.0, switching to the expose option.
If I connect using 127.0.0.1 from the host machine it works, but from my app container the connection fails with a "connection refused" message.
Any thoughts?
If you're accessing framework-redis from framework-web, then you need to access it using the IP (or the container name, i.e., framework-redis) and port of framework-redis. Since it is going to be behind a Docker bridge, an IP in the range 172.17.0.0/16 will be allocated to framework-redis. You can use that IP, or better, just use the container name along with port 6379.
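For example, from inside the framework-web container (assuming redis-cli or your client library is available there), something like this should answer with PONG; the compose demo below runs the same check end to end:

redis-cli -h framework-redis -p 6379 ping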
$ cat docker-compose.yml
version: "2.0"
services:
redis:
image: redis
container_name: framework-redis
web:
image: redis
container_name: framework-web
depends_on:
- redis
command: [ "redis-cli", "-h", "framework-redis", "ping" ]
$
$ docker-compose up
Recreating framework-redis ... done
Recreating framework-web ... done
Attaching to framework-redis, framework-web
framework-redis | 1:C 09 Dec 2019 19:25:52.798 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
framework-redis | 1:C 09 Dec 2019 19:25:52.798 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
framework-redis | 1:C 09 Dec 2019 19:25:52.798 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
framework-redis | 1:M 09 Dec 2019 19:25:52.799 * Running mode=standalone, port=6379.
framework-redis | 1:M 09 Dec 2019 19:25:52.800 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
framework-redis | 1:M 09 Dec 2019 19:25:52.800 # Server initialized
framework-redis | 1:M 09 Dec 2019 19:25:52.800 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
framework-redis | 1:M 09 Dec 2019 19:25:52.800 * DB loaded from disk: 0.000 seconds
framework-redis | 1:M 09 Dec 2019 19:25:52.800 * Ready to accept connections
framework-web | PONG
framework-web exited with code 0
As you can see above, I received a PONG for the PING command.
Some additional points:
ports are written in the form HOST_PORT:CONTAINER_PORT. You don't need to give an IP (as pointed out by @coulburton in the comments).
If you're only accessing framework-redis from framework-web, then you don't need to publish ports (i.e., 6379:6379 in the ports section). We only need to publish ports when we want to access an application running in the container network (which, as far as I know, is 172.17.0.0/16 by default) from some other network (e.g., the host machine or some other physical machine).
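Applied to the compose file from the question, a sketch of what that looks like (the app connects to host framework-redis - or simply redis, the service name - on port 6379; the published port is only needed for access from the host):

version: "2.0"
services:
  redis:
    image: redis
    container_name: framework-redis
    # ports:
    #   - "127.0.0.1:6379:6379"   # only if you also need Redis from the host
  web:
    image: myContainer:v1
    container_name: framework-web
    depends_on:
      - redis
    volumes:
      - /var/www/myApp:/app
    ports:
      - "8100:80"
    # inside this container, point your Redis client at framework-redis:6379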
Does anyone have a working recipe for a Redis cluster in swarm mode? I have tried everything I know and searched the internet, but it seems like an impossible task.
Here is what I have so far:
version: '3.4'
services:
  redis-master:
    image: redis
    networks:
      - redisdb
    ports:
      - 6379:6379
    volumes:
      - redis-master:/data
  redis-slave:
    image: redis
    networks:
      - redisdb
    command: redis-server --slaveof redis-master 6379
    volumes:
      - redis-slave:/data
  sentinel:
    image: redis
    networks:
      - redisdb
    ports:
      - 26379:26379
    command: >
      bash -c "echo 'port 26379' > sentinel.conf &&
      echo 'dir /tmp' >> sentinel.conf &&
      echo 'sentinel monitor redis-master redis-master 6379 2' >> sentinel.conf &&
      echo 'sentinel down-after-milliseconds redis-master 5000' >> sentinel.conf &&
      echo 'sentinel parallel-syncs redis-master 1' >> sentinel.conf &&
      echo 'sentinel failover-timeout redis-master 5000' >> sentinel.conf &&
      cat sentinel.conf &&
      redis-server sentinel.conf --sentinel"
    links:
      - redis-master
      - redis-slave
volumes:
  redis-master:
    driver: local
  redis-slave:
    driver: local
networks:
  redisdb:
    attachable: true
    driver: overlay
I use the following command to deploy as a service:
docker stack deploy --compose-file docker-compose-test.yml redis
The result is deployed services, where redis-master and redis-slave connect and I can see the synchronization happening, as follows:
redis-master log:
1:C 16 Oct 2019 04:19:42.720 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 16 Oct 2019 04:19:42.720 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 16 Oct 2019 04:19:42.720 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 16 Oct 2019 04:19:42.723 * Running mode=standalone, port=6379.
1:M 16 Oct 2019 04:19:42.723 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 16 Oct 2019 04:19:42.723 # Server initialized
1:M 16 Oct 2019 04:19:42.723 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 16 Oct 2019 04:19:42.723 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 16 Oct 2019 04:19:42.723 * Ready to accept connections
1:M 16 Oct 2019 04:19:43.976 * Replica 10.0.27.2:6379 asks for synchronization
1:M 16 Oct 2019 04:19:43.976 * Full resync requested by replica 10.0.27.2:6379
1:M 16 Oct 2019 04:19:43.976 * Starting BGSAVE for SYNC with target: disk
1:M 16 Oct 2019 04:19:43.976 * Background saving started by pid 15
15:C 16 Oct 2019 04:19:43.982 * DB saved on disk
15:C 16 Oct 2019 04:19:43.982 * RDB: 0 MB of memory used by copy-on-write
1:M 16 Oct 2019 04:19:44.053 * Background saving terminated with success
1:M 16 Oct 2019 04:19:44.053 * Synchronization with replica 10.0.27.2:6379 succeeded
Redis-slave log:
1:C 16 Oct 2019 04:19:40.776 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 16 Oct 2019 04:19:40.776 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 16 Oct 2019 04:19:40.776 # Configuration loaded
1:S 16 Oct 2019 04:19:40.779 * Running mode=standalone, port=6379.
1:S 16 Oct 2019 04:19:40.779 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:S 16 Oct 2019 04:19:40.779 # Server initialized
1:S 16 Oct 2019 04:19:40.779 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:S 16 Oct 2019 04:19:40.779 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:S 16 Oct 2019 04:19:40.779 * Ready to accept connections
1:S 16 Oct 2019 04:19:40.779 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:40.817 # Unable to connect to MASTER: Invalid argument
1:S 16 Oct 2019 04:19:41.834 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:41.851 # Unable to connect to MASTER: Invalid argument
1:S 16 Oct 2019 04:19:42.866 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:42.942 # Unable to connect to MASTER: Invalid argument
1:S 16 Oct 2019 04:19:43.970 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:43.975 * MASTER <-> REPLICA sync started
1:S 16 Oct 2019 04:19:43.975 * Non blocking connect for SYNC fired the event.
1:S 16 Oct 2019 04:19:43.975 * Master replied to PING, replication can continue...
1:S 16 Oct 2019 04:19:43.976 * Partial resynchronization not possible (no cached master)
1:S 16 Oct 2019 04:19:43.977 * Full resync from master: 39bb36f74ef0cdefdc08a2dc8d4a86112ea69f12:0
1:S 16 Oct 2019 04:19:44.053 * MASTER <-> REPLICA sync: receiving 175 bytes from master
1:S 16 Oct 2019 04:19:44.054 * MASTER <-> REPLICA sync: Flushing old data
1:S 16 Oct 2019 04:19:44.054 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 16 Oct 2019 04:19:44.054 * MASTER <-> REPLICA sync: Finished with success
Redis-sentinel log:
port 26379
port 26379
dir /tmp
dir /tmp
sentinel monitor redis-master redis-master 6379 2
sentinel monitor redis-master redis-master 6379 2
sentinel down-after-milliseconds redis-master 5000
sentinel down-after-milliseconds redis-master 5000
sentinel parallel-syncs redis-master 1
sentinel parallel-syncs redis-master 1
sentinel failover-timeout redis-master 5000
sentinel failover-timeout redis-master 5000
1:X 16 Oct 2019 04:19:49.506 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
*** FATAL CONFIG FILE ERROR ***
1:X 16 Oct 2019 04:19:49.506 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:X 16 Oct 2019 04:19:49.506 # Configuration loaded
Reading the configuration file, at line 3
>>> 'sentinel monitor redis-master redis-master 6379 2'
1:X 16 Oct 2019 04:19:49.508 * Running mode=sentinel, port=26379.
1:X 16 Oct 2019 04:19:49.508 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Can't resolve master instance hostname.
1:X 16 Oct 2019 04:19:49.511 # Sentinel ID is aad9553d4999951f3b37eede5968b4aa262c07a9
1:X 16 Oct 2019 04:19:49.511 # +monitor master redis-master 10.0.27.5 6379 quorum 2
1:X 16 Oct 2019 04:19:49.512 * +slave slave 10.0.27.2:6379 10.0.27.2 6379 # redis-master 10.0.27.5 6379
1:X 16 Oct 2019 04:19:54.514 # +sdown slave 10.0.27.2:6379 10.0.27.2 6379 # redis-master 10.0.27.5 6379
So slave-announce-ip, replica-announce-ip, etc. are having issues:
1. The IPs keep changing.
2. Overlay network IPs are not always what they appear to be (see the check below).
So failover does not work, and if the master goes down, the slave does not take over.
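To see point 2 in action, you can compare the service VIP with the real task IPs from any container attached to the redisdb overlay network (getent should be available in the official redis image; depending on the stack name you may need the tasks.<stack>_<service> form):

getent hosts redis-master          # resolves to the service VIP
getent hosts tasks.redis-master    # resolves to the actual task/container IP(s)
getent hosts tasks.redis-slave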
# develop.yml
redis:
  image: redis
  command: redis-server --requirepass 123
  ports:
    - '6379:6379'
  expose:
    - "6379"
docker-compose -f develop.yml up redis shows:
docker-compose -f develop.yml up redis
Starting django-blog_redis_1 ... done
Attaching to django-blog_redis_1
redis_1 | 1:C 16 Nov 2018 03:52:46.935 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 16 Nov 2018 03:52:46.935 # Redis version=5.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 16 Nov 2018 03:52:46.935 # Configuration loaded
redis_1 | 1:M 16 Nov 2018 03:52:46.935 # Creating Server TCP listening socket *:6379: unable to bind socket, errno: 13
I checked the port:
fuser -k -n tcp 6379
but nothing is using 6379.
How can I solve this?
My OS: Deepin Linux.
It seems that the problem is with Deepin.
Execute the following command to solve it:
sudo apt remove apparmor
Related discussion: https://github.com/docker/for-linux/issues/413
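Before removing AppArmor entirely, it may be worth confirming that it is actually what blocks the bind (a quick check, assuming the standard AppArmor utilities are installed); restarting Docker afterwards is needed in either case:

sudo aa-status                            # list loaded AppArmor profiles
sudo dmesg | grep -iE 'apparmor|denied'   # look for denials around the failed bind
sudo systemctl restart docker             # restart Docker after changing AppArmor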