I have a docker-compose file with several services:
services:
  consul:
    image: consul:latest
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8302:8302"
      - "8400:8400"
      - "8500:8500"
      - "8600:53/udp"
    command: "agent -server -ui -bootstrap-expect=1 -client=0.0.0.0"
  microservice1:
    image: xxx:latest
    ports:
      - "8080"
  microservice2:
    image: yyy:latest
    ports:
      - "8080"
Services are successfully registered in Consul and the health checks are passing.
I scale one microservice up:
docker-compose scale microservice1=3
All new containers are properly shown in Consul.
But when I scale the microservice back down, I get an orange status in Consul and the logs show something like this:
2016/07/28 18:58:38 [WARN] agent: http request failed
'http://27dd6662f944:8080/health': Get
http://27dd6662f944:8080/health: dial tcp: lookup 27dd6662f944 on
127.0.0.11:53: no such host
Does anybody know how to solve this?
I found a solution:
services:
  consul:
    hostname: consul
    image: consul:latest
    ports:
      - "8300:8300"
      - "8301:8301"
      - "8302:8302"
      - "8400:8400"
      - "8500:8500"
      - "8600:53/udp"
    command: "agent -server -ui -bootstrap-expect=1 -client=0.0.0.0"
  microservice1:
    image: xxx:latest
    ports:
      - "8080"
  microservice2:
    image: yyy:latest
    ports:
      - "8080"
Add a hostname to the consul service...
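For verification, a rough sketch of the scale-up/scale-down cycle plus a Consul health query (the curl call uses Consul's standard HTTP API; the port assumes the 8500 mapping above):
docker-compose scale microservice1=3          # scale up
docker-compose scale microservice1=1          # scale back down
# list health checks that Consul currently reports as critical
curl http://localhost:8500/v1/health/state/critical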
Related
I am trying to run a gitea server with drone. They are currently both hosted on the same ubuntu machine and the docker containers are set up through a docker-compose.yml file.
When starting up all services I get the following error in the logs of the drone runner service:
time="2020-08-12T19:10:42Z" level=error msg="cannot ping the remote server" error="Post http://drone:80/rpc/v2/ping: dial tcp: lookup drone on 127.0.0.11:53: no such host"
Both http://gitea and http://drone point to localhost (via /etc/hosts). Sadly, I don't understand how or why the drone runner cannot find the server. Calling "docker container inspect" on all 4 of my containers shows they are all connected to the same network (drone_and_gitea_giteanet), which is also the network I set in the DRONE_RUNNER_NETWORKS environment variable.
This is how my docker-compose.yml file looks:
version: "3.8"
# Create named volumes for gitea server, gitea database and drone server
volumes:
gitea:
gitea-db:
drone:
# Create shared network for gitea and drone
networks:
giteanet:
external: false
services:
gitea:
container_name: gitea
image: gitea/gitea:1
#restart: always
environment:
- APP_NAME="Automated Student Assessment Tool"
- USER_UID=1000
- USER_GID=1000
- ROOT_URL=http://gitea:3000
- DB_TYPE=postgres
- DB_HOST=gitea-db:5432
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=gitea
networks:
- giteanet
ports:
- "3000:3000"
- "222:22"
volumes:
- gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
depends_on:
- gitea-db
gitea-db:
container_name: gitea-db
image: postgres:9.6
#restart: always
environment:
- POSTGRES_USER=gitea
- POSTGRES_PASSWORD=gitea
- POSTGRES_DB=gitea
networks:
- giteanet
volumes:
- gitea-db:/var/lib/postgresql/data
drone-server:
container_name: drone-server
image: drone/drone:1
#restart: always
environment:
# General server settings
- DRONE_SERVER_HOST=drone:80
- DRONE_SERVER_PROTO=http
- DRONE_RPC_SECRET=topsecret
# Gitea Config
- DRONE_GITEA_SERVER=http://gitea:3000
- DRONE_GITEA_CLIENT_ID=<CLIENT ID>
- DRONE_GITEA_CLIENT_SECRET=<CLIENT SECRET>
# Create Admin User, name should be the same as Gitea Admin user
- DRONE_USER_CREATE=username:AdminUser,admin:true
# Drone Logs Settings
- DRONE_LOGS_PRETTY=true
- DRONE_LOGS_COLOR=true
networks:
- giteanet
ports:
- "80:80"
volumes:
- drone:/data
depends_on:
- gitea
drone-agent:
container_name: drone-agent
image: drone/drone-runner-docker:1
#restart: always
environment:
- DRONE_RPC_PROTO=http
- DRONE_RPC_HOST=drone:80
- DRONE_RPC_SECRET=topsecret
- DRONE_RUNNER_CAPACITY=1
- DRONE_RUNNER_NETWORKS=drone_and_gitea_giteanet
networks:
- giteanet
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- drone-server
It would help me a lot if somebody could maybe take a look at the issue and help me out! :)
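A quick way to check what actually resolves on that network is to attach a throwaway container to it (hypothetical diagnostic commands; the network name is the one from the error log above):
docker network inspect drone_and_gitea_giteanet   # confirm which containers are attached
docker run --rm --network drone_and_gitea_giteanet alpine nslookup drone-server
docker run --rm --network drone_and_gitea_giteanet alpine nslookup drone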
Hi all, I faced a small issue with the NiFi cluster I am creating with the docker-compose file below:
services:
  zookeeper:
    hostname: zookeeper
    container_name: zookeeper
    image: 'bitnami/zookeeper:latest'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - efactory-network
  nifi:
    image: apache/nifi:1.9.2
    ports:
      - 8080:8080 # Unsecured HTTP Web Port
      - 8081:8081
    environment:
      - NIFI_WEB_HTTP_PORT=8080
      - NIFI_CLUSTER_IS_NODE=true
      - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
      - NIFI_ZK_CONNECT_STRING=zookeeper:2181
      - NIFI_ELECTION_MAX_WAIT=1 min
      - nifi.security.needClientAuth=false
    networks:
      - efactory-network
    volumes:
      - state:/opt/nifi/nifi-1.9.2/state
      - conf:/opt/nifi/nifi-1.9.2/conf
      - content:/opt/nifi/nifi-1.9.2/content_repository
      - db:/opt/nifi/nifi-1.9.2/database_repository
      - flowfile:/opt/nifi/nifi-1.9.2/flowfile_repository
      - provenance:/opt/nifi/nifi-1.9.2/provenance_repository
      - logs:/opt/nifi/nifi-1.9.2/logs
      - data:/opt/nifi/nifi-1.9.2/data
    extra_hosts:
      - nifi.at:159.69.214.42
networks:
  efactory-network:
    external:
      name: security-network
volumes:
  conf:
  content:
  db:
  flowfile:
  provenance:
  logs:
  state:
  data:
I persisted the data with Docker volumes, so the state of the cluster should survive a docker-compose restart. I think it is persisted, but I get the error below:
java.net.UnknownHostException: ffcca3db4879
I would be very grateful if someone could help me with this.
Can you add this to your nifi service:
depends_on:
  - zookeeper
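For clarity, a minimal sketch of where that would sit in the compose file above (only the relevant keys shown):
services:
  nifi:
    image: apache/nifi:1.9.2
    depends_on:
      - zookeeper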
I am trying to set up Zipkin, Elasticsearch, Prometheus and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is this:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and on port 9411 Zipkin is deployed, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable)(due to the network error
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port: I was using port 9200.
The error between Zipkin and ES disappears, but it still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
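A minimal sketch of the zipkin service from the compose file above with just that variable corrected:
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=http://storage:9200
    ports:
      - "9411:9411"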
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies service, the added entrypoint:
entrypoint: crond -f
This one is really the key to avoiding the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
I have a very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
(and some other services)
I also configured dnsmasq on the host to access containers by their internal names.
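For reference, a typical dnsmasq rule for forwarding *.consul lookups to the Consul DNS port mapped above looks like the line below (the file path is only an example):
# /etc/dnsmasq.d/10-consul (example path)
server=/consul/127.0.0.1#8600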
I have spent a couple of days on this, but I am still not able to make it stable:
1. Very often some services just do not get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address: the container info shows one address (correct), while Consul shows another (incorrect), so when I try to reach a service by a name like myservice.service.consul I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 parameter to registrator. Not sure if it's the correct solution, but it works.
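For reference, a minimal sketch of where that flag goes in the registrator service above (only the command changes):
  registrator:
    image: gliderlabs/registrator:master
    command: -internal -resync 15 consul://consul:8500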
I have a problem with accessing a container from another container. My docker-compose.yml file is:
version: '2'
services:
  consul:
    image: "consul"
    hostname: "consul"
    command: "agent -server -client=0.0.0.0 -bind=10.30.1.134 -retry-join=10.30.1.97 -retry-join=10.30.1.42 -bootstrap-expect=3 -ui"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8300:8300"
      - "8301:8301"
      - "8302:8302"
      - "8600:53/udp"
    volumes:
      - /data/consul:/consul/data
    network_mode: host
  vault:
    depends_on:
      - consul
    image: "vault"
    hostname: "vault"
    links:
      - consul
    environment:
      VAULT_ADDR: http://127.0.0.1:8200
    ports:
      - "8200:8200"
      - "8201:8201"
    volumes:
      - ./tools/wait-for-it.sh:/wait-for-it.sh
      - ./config/vault:/config
      - ./config/vault/policies:/policies
    entrypoint: /wait-for-it.sh -t 200 -h consul -p 8500 -s -- vault server -config=/config/with-consul.hcl
When I do docker-compose up I see errors:
vault_1 | nc: bad address 'consul'
And also I am not able to ping the service:
root@ip-10-30-1-134:~# docker run vault ping consul
ping: bad address 'consul'
According to the documentation, the containers should be accessible by the defined service name, i.e. consul.
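A hypothetical way to check what the vault container can actually resolve (these commands are assumptions, not from the original post):
# open a shell in the running vault container and try the lookup directly
docker-compose exec vault nslookup consul
# check which networks each compose container is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' $(docker-compose ps -q)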