I'm trying to follow a basic tutorial to start using RabbitMQ, converting from "docker run" to docker-compose. http://josuelima.github.io/docker/rabbitmq/cluster/2017/04/19/setting-up-a-rabbitmq-cluster-on-docker.html
Here's my docker-compose file:
version: '3'
services:
  rabbit1:
    image: rabbitmq:3.6.6-management
    restart: unless-stopped
    hostname: rabbit1
    ports:
      - "4369:4369"
      - "5672:5672"
      - "15672:15672"
      - "25672:25672"
      - "35197:35197"
    environment:
      - RABBITMQ_USE_LONGNAME=true
      - RABBITMQ_LOGS=/var/log/rabbitmq/rabbit.log
    volumes:
      - "/nfs/docker/rabbit/data1:/var/lib/rabbitmq"
      - "/nfs/docker/rabbit/data1/logs:/var/log/rabbitmq"
When I try to connect (and also to remove the guest account), I get this error:
Error: unable to connect to node rabbit@rabbit1: nodedown

DIAGNOSTICS
===========

attempted to contact: [rabbit@rabbit1]

rabbit@rabbit1:
  * connected to epmd (port 4369) on rabbit1
  * epmd reports node 'rabbit' running on port 25672
  * TCP connection succeeded but Erlang distribution failed
  * suggestion: hostname mismatch?
  * suggestion: is the cookie set correctly?
  * suggestion: is the Erlang distribution using TLS?

current node details:
- node name: 'rabbitmq-cli-41@rabbit1.no-domain'
- home dir: /var/lib/rabbitmq
- cookie hash: WjJle1otRdldm4Wso6HGfg==
Looking at the persistent data, it doesn't appear to be creating a cookie (whether or not I use the RABBITMQ_ERLANG_COOKIE variable), and I'm not convinced that the domain is being handled properly.
RabbitMQ docs are useless for this.
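For reference, the conventional way to pin the cookie in compose for this image generation is the RABBITMQ_ERLANG_COOKIE environment variable together with a fixed hostname; a minimal sketch (the cookie value below is a placeholder, not taken from the post, and the poster reports the variable not taking effect in their setup):

```yaml
# Sketch: every node (and rabbitmqctl) must share the same Erlang cookie.
services:
  rabbit1:
    image: rabbitmq:3.6.6-management
    hostname: rabbit1
    environment:
      - RABBITMQ_ERLANG_COOKIE=changeme-shared-secret  # placeholder value
```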
I am fairly new to using traefik, so I might be totally missing something simple, but I have the following docker-compose.yaml:
version: '3.8'
services:
  reverse-proxy:
    container_name: reverse_proxy
    restart: unless-stopped
    image: traefik:v2.0
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --api.insecure=true
      - --providers.file.directory=/conf/
      - --providers.file.watch=true
      - --providers.docker=true
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./scripts/certificates/conf/:/conf/
      - ./scripts/certificates/ssl/:/certs/
    networks:
      - bnkrl.io
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
  bankroll:
    container_name: bankroll
    build:
      context: .
    ports:
      - "3000"
    volumes:
      - .:/usr/src/app
    command: yarn start
    networks:
      - bnkrl.io
    labels:
      - "traefik.http.routers.bankroll.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
      - "traefik.http.services.bankroll.loadbalancer.server.port=3000"
      - "traefik.http.routers.bankroll-https.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.http.routers.bankroll-https.tls=true"
networks:
  bnkrl.io:
    external: true
But for some reason the following is happening:
Running curl when ssh'd into my bankroll container gives the following:
/usr/src/app# curl bankroll.bnkrl.io
curl: (7) Failed to connect to bankroll.bnkrl.io port 80: Connection refused
Despite having - "traefik.http.services.bankroll.loadbalancer.server.port=3000" label set up.
I am also unable to hit traefik from my application container:
curl traefik.bnkrl.io
curl: (6) Could not resolve host: traefik.bnkrl.io
Despite my expectation to be able to do so since they are both on the same network.
Any help with understanding what I might be doing wrong would be greatly appreciated! My application (bankroll) is a very basic hello-world react app, but I don't think any of the details around that are relevant to the issue I'm facing.
EDIT: I am also not seeing any error logs on traefik side of things.
You are using host names that are not declared and therefore are unreachable.
To reach a container from another container, you need to use the service name: for example, connecting to `bankroll` from the reverse-proxy container (e.g. `curl http://bankroll:3000`) will hit the other service.
If instead you want to access them from the host machine, you have to publish the ports (which you did; that's what the ports sections in your docker-compose file do) and access them via localhost or your machine's local IP address, not via traefik.bnkrl.io.
If you want to access them via traefik.bnkrl.io, you have to declare that host name and point it at the place where the Docker containers are running.
So either add a DNS record in the bnkrl.io domain pointing to your machine, or add a hosts-file entry on your computer pointing to 127.0.0.1.
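For local development, the hosts-file route might look like this (assuming the containers are published on the same machine you are browsing from):

```
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts
127.0.0.1  traefik.bnkrl.io
127.0.0.1  bankroll.bnkrl.io
```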
Another note: for SSL you are going to need a valid certificate for the host name. In local development you can use the self-signed certificate provided by Traefik, but you may have to install it on the computer connecting to the service, or allow untrusted certificates in your browser or wherever you are making the requests from (some browsers no longer accept self-signed certificates). For SSL on the Internet you will need to look at something like Let's Encrypt.
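As a hedged sketch of the Let's Encrypt route in Traefik v2 (the resolver name `le` and the email are placeholders; paths would need to match your volumes):

```yaml
# Static config: declare an ACME certificate resolver on the proxy.
command:
  - --certificatesresolvers.le.acme.email=you@example.com      # placeholder
  - --certificatesresolvers.le.acme.storage=/certs/acme.json
  - --certificatesresolvers.le.acme.tlschallenge=true

# Per-router label on the service being exposed.
labels:
  - "traefik.http.routers.bankroll-https.tls.certresolver=le"
```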
I have this docker-compose.yml file from here that I am using to start a Selenium hub and nodes on macOS. I changed the host port to 65299 because I got an error that 4444 was already in use. I have Docker Desktop 3.5.1 installed.
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    container_name: selenium-hub
    ports:
      - "65299:4444"
  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=65299
  firefox:
    image: selenium/node-firefox
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=65299
When I look at http://localhost:65299/grid/console, I don't see any nodes registered.
Also, in the terminal I get this:
firefox_1 | 20:27:22.110 INFO [SelfRegisteringRemote$1.run] - Couldn't register this node: The hub is down or not responding: Failed to connect to selenium-hub/172.26.0.2:65299
Also, in the logs it says:
Nodes should register to http://172.27.0.2:4444/grid/register/
So why is the system even trying 172.26.0.2:65299? Or maybe I am missing something here?
The HUB_PORT variable of the nodes is wrong. Port 65299 is for accessing the hub from outside the Docker network; it's the port you use to reach the hub from your browser, for example.
You need to set that variable to 4444, the port available inside the Docker network, so that the nodes can connect to the hub.
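Concretely, the node sections would look like this (only HUB_PORT changes; the 65299:4444 mapping on the hub stays as it is):

```yaml
  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444   # the hub's port inside the Docker network, not the published 65299
```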
I am new to Docker and I need some help, please.
I am trying to install TICK in Docker. InfluxDB, Kapacitor and Chronograf will run in Docker containers, but Telegraf will be installed on each machine where it is needed.
Port 8086 on my host is in use, so I will use 8087 for InfluxDB. Is it possible to configure the InfluxDB container with -p 8087:8086? If so, which port should I configure in the conf files?
Docker compose file will be:
version: '3'
networks:
  TICK_network:
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    networks:
      TICK_network:
    ports:
      - "8087:8086"
      - "8083:8083"
    expose:
      - "8087"
      - "8083"
    hostname: influxdb
    volumes:
      - /var/lib/influxdb:/var/lib/influxdb
      - /etc/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro
    restart: unless-stopped
  kapacitor:
    image: kapacitor
    container_name: kapacitor
    networks:
      TICK_network:
    links:
      - influxdb
    ports:
      - "9092:9092"
    expose:
      - "9092"
    hostname: kapacitor
    volumes:
      - /var/lib/kapacitor:/var/lib/kapacitor
      - /etc/kapacitor/kapacitor.conf:/etc/kapacitor/kapacitor.conf:ro
    restart: unless-stopped
  chronograf:
    image: chronograf
    container_name: chronograf
    networks:
      TICK_network:
    links:
      - influxdb
      - kapacitor
    ports:
      - "8888:8888"
    expose:
      - "8888"
    hostname: chronograf
    volumes:
      - /var/lib/chronograf:/var/lib/chronograf
    restart: unless-stopped
influxdb.conf is edited to point to port 8087:
[http]
enabled = true
bind-address = ":8087"
auth-enabled = true
kapacitor.conf and telegraf.conf also point to port 8087.
But I am receiving following errors:
Telegraf log:
W! [outputs.influxdb] when writing to [http://localhost:8087]: database "telegraf" creation failed: Post http://localhost:8087/query: EOF
E! [outputs.influxdb] when writing to [http://localhost:8087]: Post http://localhost:8087/write?db=tick: EOF
E! [agent] Error writing to outputs.influxdb: could not write any address
kapacitor log:
vl=error msg="encountered error" service=run err="open server: open service *influxdb.Service: failed to link subscription on startup: authorization failed"
run: open server: open service *influxdb.Service: failed to link subscription on startup: authorization failed
What you did is correct if you want to access those services from outside the Docker network, that is, from the host: for example, localhost:8087.
However, this is not correct in your case. Since you are using docker-compose, all the services are on the same network, so you need to target the port the influx service is listening on inside the Docker network (the right-hand port of the mapping), that is, 8086.
But even if you do so, it will still not work. Why? Because you are trying to access localhost from the Telegraf container. You need to configure the access to influx as influxdb:8086, not as localhost:8087. influxdb here is the name of the container; if, for example, you named it ailb90, it would be ailb90:8086.
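Under that reasoning, a container-side config points at the service name and the internal port; a sketch of the relevant section of the stock kapacitor.conf (assuming the compose file above, where the service is named influxdb):

```toml
# kapacitor.conf -- Kapacitor runs inside the compose network,
# so it reaches InfluxDB by service name on the in-network port.
[[influxdb]]
  enabled = true
  urls = ["http://influxdb:8086"]
```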
Thanks for your answer. But Telegraf is not installed in a container; this is why I access the database using urls = ["http://localhost:8087"].
On the other hand, Kapacitor is installed in a Docker container. The connection to InfluxDB is made using the string urls=["https://influxdb:8087"]. If I configure Kapacitor on port 8086, it gives a connection error (I think because influxdb.conf is pointing to port 8087):
lvl=error msg="failed to connect to InfluxDB, retrying..." service=influxdb cluster=default err="Get http://influxdb:8086/ping: dial tcp 172.0.0.2:8086: connect: connection refused"
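One consistent setup (a sketch, not the poster's final config) is to leave InfluxDB on its default port inside the container and let the 8087:8086 mapping in docker-compose handle the host side. Host-based Telegraf then uses http://localhost:8087, while in-network Kapacitor uses http://influxdb:8086:

```toml
# influxdb.conf -- keep the default in-container port; the "8087:8086"
# ports entry in docker-compose remaps it on the host only.
[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = true
```

Note that with auth-enabled = true, Kapacitor's [[influxdb]] section also needs valid username/password credentials, which would explain the "authorization failed" error in the Kapacitor log.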
I am trying to integrate my own ABCI-application with the localnet. The docker-compose looks as
version: '3'
services:
  node0:
    container_name: node0
    image: "tendermint/localnode"
    ports:
      - "26656-26657:26656-26657"
    environment:
      - ID=0
      - LOG=${LOG:-tendermint.log}
    volumes:
      - ./build:/tendermint:Z
    command: node --proxy_app=tcp://abci0:26658
    networks:
      localnet:
        ipv4_address: 192.167.10.2
  abci0:
    container_name: abci0
    image: "abci-image"
    volumes:
      - $GOPATH/src/samplePOC:/go/src/samplePOC
    ports:
      - "26658:26658"
    build:
      context: .
      dockerfile: $GOPATH/src/samplePOC/Dockerfile
    command: /go/src/samplePOC/samplePOC
    networks:
      localnet:
        ipv4_address: 192.167.10.6
Both the node and the ABCI containers build successfully. The ABCI server starts successfully and the nodes try to make connections. However, the main problem is that the two are not able to communicate with each other.
I get the following error:
node0 |E[2019-10-29|15:14:28.525] abci.socketClient failed to connect
to tcp://abci0:26658. Retrying... module=abci-client connection=query
err="dial tcp 192.167.10.6:26658: connect: connection refused"
Can someone please help me here?
My first thought is that you may need to add depends_on: ["abci0"] to node0, as the ABCI application must be listening before Tendermint tries to connect.
Of course, TM should continue to retry, so this may not be the issue.
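That first suggestion would be a one-key addition to the compose file (only the added key shown):

```yaml
  node0:
    depends_on:
      - abci0   # start the ABCI app before Tendermint dials tcp://abci0:26658
```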
Another thing you can try is to run Tendermint on your host machine and attempt to connect to the exposed ABCI port on abci0 (26658), to isolate the problem to the Docker configuration.
If you're not able to run tendermint node --proxy_app=tcp://localhost:26658, the problem likely lies in your ABCI application.
I assume you've initialized a directory in the volume you mount into node0?
I got this working with the kvstore example from Tendermint.
version: "3.4"
services:
  kvstore-app:
    image: alpine
    expose:
      - "26658"
    volumes:
      - ./kvstore-example:/home/dev/kvstore-example
    command: "/home/dev/kvstore-example --socket-addr tcp://kvstore-app:26658"
  tendermint-node:
    image: tendermint/tendermint
    depends_on:
      - kvstore-app
    ports:
      - "26657:26657"
    environment:
      - TMHOME=/tmp/tendermint
    volumes:
      - ./tmp/tendermint:/tmp/tendermint
    command: node --proxy_app=tcp://kvstore-app:26658
I'm not exactly sure why your docker-compose.yml isn't working, but it's likely that you are not binding the socket of your ABCI application in a way that is accessible to the node. I'm explicitly telling the ABCI application to do so with the argument --socket-addr tcp://kvstore-app:26658. Additionally, I'm just exposing the port of the ABCI application on the Docker network, but I think mapping the port should do this implicitly.
Also, I would get rid of all the network stuff. Personally, I use explicit network configuration only when I have some very specific network goals in mind.
My setup is the following:
2 Linux virtual machines running in VirtualBox;
Both hosts are engaged in a Docker Swarm;
Host 1 consists of: 1 orderer, 1 organization with 2 peers and a cli container;
Host 2 consists of: 1 organization with 2 peers;
I'm using the following tutorial as reference (https://hyperledger.github.io/composer/latest/tutorials/deploy-to-fabric-multi-org)
How I'm actually running the Fabric network:
I'm generating the channel artifacts & crypto-config files the same on both hosts.
Starting fabric on host 2 - with both peers, couchdbs and ca;
Starting fabric on host 1;
Generating a channel on host 1; joining peers from host 1 and updating anchor peer;
When inspecting the overlay swarm network I'm able to see both peers and containers available for each host;
My problems appear when I try to make the peers from host 2 join the channel. I'm trying to add them through the cli container on host 1.
But I'm receiving the following error:
Error: error getting endorser client for channel: endorser client failed to connect to peer0.sponsor.example.com:7051: failed to create new connection: context deadline exceeded
This is my docker-compose-cli.yaml for host 1:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

volumes:
  orderer.example.com:
  peer0.manager.example.com:
  peer1.manager.example.com:
  peer0.sponsor.example.com:
  peer1.sponsor.example.com:

networks:
  example:

services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base-1.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - example

  peer0.manager.example.com:
    container_name: peer0.manager.example.com
    extends:
      file: base/docker-compose-base-1.yaml
      service: peer0.manager.example.com
    networks:
      - example

  peer1.manager.example.com:
    container_name: peer1.manager.example.com
    extends:
      file: base/docker-compose-base-1.yaml
      service: peer1.manager.example.com
    networks:
      - example

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.manager.example.com:7051
      - CORE_PEER_LOCALMSPID=ManagerMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/peers/peer0.manager.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/peers/peer0.manager.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/peers/peer0.manager.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/manager.example.com/users/Admin@manager.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.manager.example.com
      - peer1.manager.example.com
    networks:
      - example
The node fails to connect to peer0.sponsor.example.com. This is probably due to changes made while customizing the network addresses; is that something you've customized? I haven't followed this tutorial, but I ran into similar problems while customizing the first-network example while following this one.
Make sure your peer addresses are configured consistently across configtx.yaml, crypto-config.yaml, docker-compose-cli.yaml and docker-compose-base.yaml, and also peer-base.yaml if you changed the network name.
If the peer addresses don't match across those files, you will probably need to generate the channel transactions again and start the network over, since the channel configuration stored in the blockchain no longer matches your current network configuration.
I was making a very simple mistake: I was not copying the generated crypto material from one host to the other. I was generating new crypto material on each host, thinking it would be the same.