Instance always down when running in Docker

I run my microservice system in Docker for Windows. Here is my docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    restart: always
    ports:
      - "8400:8400"
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:1.1.0
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  registry-jhipster:
    image: jhipster/jhipster-registry:v3.2.4
    restart: always
    ports:
      - "8761:8761"
    environment:
      - SPRING_PROFILES_ACTIVE=dev,native
      - JHIPSTER_REGISTRY_PASSWORD=admin
      - JHIPSTER_SECURITY_AUTHENTICATION_JWT_SECRET=secret
      - SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/
    volumes:
      - ./central-config:/central-config
  db:
    image: mariadb
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - ./db-init:/docker-entrypoint-initdb.d
  tomcat:
    image: tomcat:8.5-alpine
    environment:
      - JVM_OPTS=-Xmx12g -Xms12g -XX:MaxPermSize=1024m
    links:
      - db:mysql
      - registry-jhipster:registry
      - kafka:kafka
    ports:
      - "8080:8080"
    volumes:
      - ./tomcat/webapps/app.original.war:/usr/local/tomcat/webapps/app.original.war
      - ./tomcat/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml:ro
After I run... everything is set up; I can see my registry, but the app instance is always down, with these entries in the Tomcat container logs:
2018-07-06 16:16:48.395 WARN 1 --- [nfoReplicator-0] o.a.k.clients.consumer.ConsumerConfig : The configuration 'value.serializer' was supplied but isn't a known config.
2018-07-06 16:16:48.396 WARN 1 --- [nfoReplicator-0] o.a.k.clients.consumer.ConsumerConfig : The configuration 'key.serializer' was supplied but isn't a known config.
2018-07-06 16:16:48.936 WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
2018-07-06T16:16:48.936993100Z
java.lang.OutOfMemoryError: Java heap space
What's happening, and how can I fix it to make my app available?

I found the solution. I added this to my docker-compose.yml:
jhipster-registry:
  container_name: registry
  hostname: registry
and in the bootstrap.yml of my app:
spring:
  cloud:
    config:
      uri: http://admin:${jhipster.registry.password}@registry:8761/config
and it works
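As an aside (an assumption about the images involved, not something confirmed in this thread): the official tomcat image's catalina.sh reads JAVA_OPTS / CATALINA_OPTS, not JVM_OPTS, so the 12g heap requested above was most likely never applied and the container ran with the default heap, which would explain the java.lang.OutOfMemoryError. Also, -XX:MaxPermSize is ignored on Java 8+. A minimal sketch of memory settings the image actually honors:
tomcat:
  image: tomcat:8.5-alpine
  environment:
    # catalina.sh picks up CATALINA_OPTS; JVM_OPTS is not a variable this image reads
    - CATALINA_OPTS=-Xms512m -Xmx1g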

Related

How to run MinIO with docker-compose + nginx reverse proxy?

I have a problem with MinIO: it does not start on the selected domain (502 error).
My docker-compose.yml for the nginx reverse proxy + Let's Encrypt:
services:
  nginx:
    container_name: nginx
    image: nginxproxy/nginx-proxy
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/docker/nginx/html:/usr/share/nginx/html
      - /var/docker/nginx/certs:/etc/nginx/certs
      - /var/docker/nginx/vhost:/etc/nginx/vhost.d
    logging:
      options:
        max-size: "10m"
        max-file: "3"
  letsencrypt-companion:
    container_name: nginx-le
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes_from:
      - nginx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/docker/nginx/acme:/etc/acme.sh
    environment:
      DEFAULT_EMAIL: mail@mail.com
docker-compose.yml for MinIO:
version: '2'
services:
  minio:
    container_name: minio.domain.com
    command: server /data --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=admin
      - MINIO_ROOT_PASSWORD=supersecret
      - MINIO_BROWSER_REDIRECT_URL=https://minio.domain.com
      - MINIO_DOMAIN=minio.domain.com
    image: quay.io/minio/minio:latest
    volumes:
      - minio:/data
    restart: unless-stopped
    expose:
      - "9000"
      - "9001"
    environment:
      VIRTUAL_HOST: minio.domain.com
      LETSENCRYPT_HOST: minio.domain.com
    networks:
      - proxy
networks:
  proxy:
    external:
      name: nginx_default
volumes:
  minio:
Logs from docker logs for the MinIO container:
Warning: Default parity set to 0. This can lead to data loss.
WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
MinIO Object Storage Server
Copyright: 2015-2022 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2022-12-12T19-27-27Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://192.168.0.7:9000 http://127.0.0.1:9000
Console: http://192.168.0.7:9001 http://127.0.0.1:9001
Documentation: https://min.io/docs/minio/linux/index.html
When I put this in the docker-compose for MinIO:
ports:
  - '9000:9000'
  - '9001:9001'
MinIO works, but on every domain on my server.
How can I make MinIO respond only on minio.domain.com?
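Not a verified fix, but two things stand out in the compose file above. First, the minio service declares environment: twice; a YAML mapping can't have duplicate keys, so one block gets dropped — which matches the log warning about default minioadmin credentials and would also mean VIRTUAL_HOST may never reach nginx-proxy. Second, per the nginx-proxy docs, a container exposing more than one port needs VIRTUAL_PORT to say which one to route. A merged sketch (domain and credentials as in the question; treat the exact values as assumptions):
minio:
  image: quay.io/minio/minio:latest
  container_name: minio.domain.com
  command: server /data --console-address ":9001"
  environment:
    MINIO_ROOT_USER: admin
    MINIO_ROOT_PASSWORD: supersecret
    MINIO_BROWSER_REDIRECT_URL: https://minio.domain.com
    MINIO_DOMAIN: minio.domain.com
    VIRTUAL_HOST: minio.domain.com
    VIRTUAL_PORT: "9000"   # tell nginx-proxy to route to the S3 API port
    LETSENCRYPT_HOST: minio.domain.com
  volumes:
    - minio:/data
  restart: unless-stopped
  expose:
    - "9000"
    - "9001"
  networks:
    - proxy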

Kafka Manager: no application loader is configured

Here's my docker-compose:
version: '3'
services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      TZ: "America/Toronto"
    restart: always
  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"
  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    restart: on-failure
  cmak:
    image: hlebalbau/kafka-manager
    container_name: cmak
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=zookeper:2181
      - APPLICATION_SECRET=letmein
    command: bin/cmak -Dconfig.file=/opt/cmak/conf/application.conf -Dhttp.port=9080
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
Port 9000 is already used by Portainer, which works properly, but when I try to run Kafka Manager on 9080, I get this error without any further explanation:
nodered | 14 Sep 21:59:41 - [info] Starting flows
nodered | 14 Sep 21:59:41 - [info] Started flows
cmak | Oops, cannot start the server.
cmak | java.lang.RuntimeException: No application loader is configured. Please configure an application loader either using the play.application.loader configuration property, or by depending on a module that configures one. You can add the Guice support module by adding "libraryDependencies += guice" to your build.sbt.
cmak | at scala.sys.package$.error(package.scala:30)
cmak | at play.api.ApplicationLoader$.play$api$ApplicationLoader$$loaderNotFound(ApplicationLoader.scala:44)
cmak | at play.api.ApplicationLoader$.apply(ApplicationLoader.scala:70)
cmak | at play.core.server.ProdServerStart$.start(ProdServerStart.scala:50)
cmak | at play.core.server.ProdServerStart$.main(ProdServerStart.scala:25)
cmak | at play.core.server.ProdServerStart.main(ProdServerStart.scala)
I have a feeling that either my path to kafka-manager is wrong, or I might have to expose the hostname on my Kafka container...
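Two things worth checking (assumptions, not a verified fix). ZK_HOSTS above reads zookeper:2181, a misspelling of the zookeeper hostname. And Play's "No application loader is configured" typically means the file passed via -Dconfig.file was not found, so the play.application.loader setting in CMAK's bundled application.conf is never read — which points at /opt/cmak/conf/application.conf being the wrong path for this image. A sketch with the typo fixed and a hypothetical config path that you should verify inside the image first:
cmak:
  image: hlebalbau/kafka-manager
  container_name: cmak
  restart: always
  depends_on:
    - kafka
    - zookeeper
  ports:
    - "9080:9080"
  environment:
    - ZK_HOSTS=zookeeper:2181   # was "zookeper"
    - APPLICATION_SECRET=letmein
  # /cmak/conf/application.conf below is a placeholder — confirm the real
  # location first, e.g.:
  #   docker run --rm --entrypoint find hlebalbau/kafka-manager / -name application.conf
  command: bin/cmak -Dconfig.file=/cmak/conf/application.conf -Dhttp.port=9080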

Set up Prometheus and cAdvisor with docker-compose

I am new to Prometheus, cAdvisor, and docker-compose. I made a docker-compose file that includes my own application, named chat, along with a Mongo container; those work fine. Now I want to monitor my containers with Prometheus and cAdvisor, and I'm getting the following errors:
cadvisor | W0419 11:41:00.576916 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
cadvisor | W0419 11:41:00.577437 1 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
cadvisor | E0419 11:41:00.582000 1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
and
prometheus | ts=2022-04-19T11:54:19.051Z caller=main.go:438 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" file=/etc/prometheus/prometheus.yml err="parsing YAML file /etc/prometheus/prometheus.yml: yaml: unmarshal errors:\n line 2: field scrape-interval not found in type config.plain"
I tried changing the config parameter in my docker-compose to the following, but it didn't change the error:
command:
  - '--config.file=./prometheus/prometheus.yml'
docker-compose.yml:
version: '3.7'
services:
  chat-api:
    container_name: chat-api
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000:4000'
    networks:
      - cchat
    restart: 'on-failure'
  userdb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - userdb:/data/db
    networks:
      - cchat
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9080:9080'
    networks:
      - cloudchat
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    devices:
      - /dev/kmsg:/dev/kmsg
    depends_on:
      - chat-api
    networks:
      - cchat
volumes:
  userdb:
networks:
  cchat:
prometheus.yml:
global:
  scrape-interval: 2s
scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
Project structure: [picture of project structure]
I guess it's quite late, but you can try mounting /etc/machine-id:/etc/machine-id:ro.
Running in privileged mode could help too. This is my configuration, which works without problems:
cadvisor:
  image: gcr.io/cadvisor/cadvisor:v0.47.0
  container_name: cadvisor
  restart: unless-stopped
  privileged: true
  ports:
    - "8080:8080"
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:ro
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    - /dev/disk/:/dev/disk:ro
An important note: don't use the latest tag; it seems it is not actually the newest version (source: https://github.com/google/cadvisor/issues/3066).
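The Prometheus error in the question is a separate problem, and the message names it: field scrape-interval not found. Prometheus config keys use underscores, so the fix belongs in prometheus.yml itself; --config.file should keep pointing at the path inside the container (/etc/prometheus/prometheus.yml, matching the volume mount), not at the host path. A corrected prometheus.yml:
global:
  scrape_interval: 2s   # underscore, not hyphen

scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']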

Creating a Spark cluster with drone.yml not working

I have a docker-compose.yml with the image and configuration below:
version: '3'
services:
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
Here is the docker-compose up log: https://jpst.it/1Xc4K
The containers come up and run fine, and the Spark worker connects to the Spark master without any issues. The problem is that I created a drone.yml in which I added a services section:
services:
  jce-cassandra:
    image: cassandra:3.0
    ports:
      - "9042:9042"
  jce-elastic:
    image: elasticsearch:5.6.16-alpine
    ports:
      - "9200:9200"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  janusgraph:
    image: janusgraph/janusgraph:latest
    ports:
      - "8182:8182"
    environment:
      JANUS_PROPS_TEMPLATE: cassandra-es
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: jce-cassandra
      janusgraph.index.search.backend: elasticsearch
      janusgraph.index.search.hostname: jce-elastic
    depends_on:
      - jce-elastic
      - jce-cassandra
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
But here the Spark worker does not connect to the Spark master, and I get exceptions; here are the exception log details. Can someone please guide me on why I am facing this issue?
Note: I am trying to create these services in drone.yml for my integration testing.
Answering for better formatting. The comments suggest sleeping. Assuming this is the Dockerfile (https://hub.docker.com/r/bde2020/spark-worker/dockerfile), you could sleep by adding the command:
spark-worker-1:
  image: bde2020/spark-worker:2.4.4-hadoop2.7
  container_name: spark-worker-1
  command: sleep 10 && /bin/bash /worker.sh
  depends_on:
    - spark-master
  ports:
    - "8081:8081"
  environment:
    - "SPARK_MASTER=spark://spark-master:7077"
Although sleep 10 is probably excessive; sleep 5 or sleep 2 would likely be enough.
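One caveat to the snippet above (an observation about Compose semantics, not part of the original answer): a string command is not run through a shell, so the && can end up passed to sleep as a literal argument instead of chaining commands. Wrapping it in a shell makes the intent explicit:
spark-worker-1:
  image: bde2020/spark-worker:2.4.4-hadoop2.7
  container_name: spark-worker-1
  # run through a shell so "sleep 10 && ..." is interpreted as intended
  command: bash -c "sleep 10 && /bin/bash /worker.sh"
  depends_on:
    - spark-master
  ports:
    - "8081:8081"
  environment:
    - "SPARK_MASTER=spark://spark-master:7077"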

docker-compose: zipkin cannot connect to elasticsearch

I'm trying to set up Zipkin, Elasticsearch, Prometheus, and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable) (due to a network error)
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one:
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error between Zipkin and ES disappears, but it still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the use of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This is really the key to not getting the exception when I start docker-compose.
To solve this issue, I studied this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f? (Presumably because the zipkin-dependencies image runs its aggregation job once and then exits; crond -f keeps the container alive and re-runs the job on a schedule instead.)
