Zipkin distributed tracing not showing in docker-compose using Spring Boot

I am using Zipkin distributed tracing with RabbitMQ, but traces are not showing up: when I run a query in the Zipkin UI, it returns nothing. My docker-compose.yaml file is below:
version: '3.7'
services:
  api-gatway:
    image: mydocker/pocv1-api-gateway:0.0.1-SNAPSHOT
    mem_limit: 700m
    ports:
      - "8765:8765"
    networks:
      - account-network
    depends_on:
      - service-registry
      - rabbitmq
    environment:
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://service-registry:8761/eureka
      SPRING.ZIPKIN.BASEURL: http://zipkin-server:9411/
      RABBIT_URI: amqp://guest:guest@rabbitmq:5672
      SPRING_RABBITMQ_HOST: rabbitmq
      SPRING_ZIPKIN_SENDER_TYPE: rabbit
  zipkin-server:
    image: openzipkin/zipkin-slim
    mem_limit: 300m
    ports:
      - "9411:9411"
    networks:
      - account-network
    depends_on:
      - rabbitmq
    environment:
    environment:
      RABBIT_URI: amqp://guest:guest@rabbitmq:5672
    restart: always # Restart if there is a problem starting up
  rabbitmq:
    image: rabbitmq:3-management
    mem_limit: 300m
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - account-network
  account-opening:
    image: mydocker/pocv1-account-opening:0.0.1-SNAPSHOT
    mem_limit: 700m
    ports:
      - "8081:8081"
    networks:
      - account-network
    depends_on:
      - postgres
      - service-registry
      - rabbitmq
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/accountdb
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://service-registry:8761/eureka
      SPRING.ZIPKIN.BASEURL: http://zipkin-server:9411/
      RABBIT_URI: amqp://guest:guest@rabbitmq:5672
      SPRING_RABBITMQ_HOST: rabbitmq
      SPRING_ZIPKIN_SENDER_TYPE: rabbit
networks:
  account-network:
In application.yml I use the following:
spring:
  sleuth:
    sampler:
      probability: 1.0
I don't know why it is not working. Please help me...

You have made a small mistake: you included an extra environment property in the zipkin-server configuration. You need to remove that line. I made the same mistake and resolved it the same way.
zipkin-server:
  image: openzipkin/zipkin-slim
  mem_limit: 300m
  ports:
    - "9411:9411"
  networks:
    - account-network
  depends_on:
    - rabbitmq
  environment:
  environment:   # <-- remove this duplicate line
    RABBIT_URI: amqp://guest:guest@rabbitmq:5672
  restart: always # Restart if there is a problem starting up
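As an aside, mistakes like this are easy to catch before starting anything: asking Compose to parse and print the resolved file reports duplicated or mis-nested keys immediately. A minimal check:

# Parse and print the fully-resolved compose file without starting containers;
# YAML problems such as duplicate mapping keys surface here.
docker compose config

# Equivalent on older installations that still use the v1 binary:
docker-compose config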

Related

traefik with docker breaks when I setup networks

I'm trying to set up Docker networks with Traefik on an existing website.
Before my changes, it looked like this:
version: "3"
services:
database:
build:
context: ./database
environment:
MYSQL_DATABASE: '${MYSQL_DATABASE}'
MYSQL_USER: '${MYSQL_USER}'
MYSQL_PASSWORD: '${MYSQL_PASSWORD}'
MYSQL_ROOT_PASSWORD: '${MYSQL_ROOT_PASSWORD}'
volumes:
- ./database/data:/var/lib/mysql
restart: always
php-http:
build:
context: ../
dockerfile: ./docker/php-apache/Dockerfile
args:
MAIN_DOMAIN: '${MAIN_DOMAIN}'
ALL_DOMAINS: '${ALL_DOMAINS}'
PROJECT_FOLDER_NAME: '${PROJECT_FOLDER_NAME}'
WEBSITE_USER_PASSWORD: '${WEBSITE_USER_PASSWORD}'
depends_on:
- database
- mailserver
volumes:
- ./apachelogs:/var/log/apache2
- ./apachelogs/auth.log:/var/log/auth.log
- './symfonylogs:/var/www/html/mywebsite/var/log/'
labels:
- traefik.http.routers.php-http.tls=true
- traefik.http.routers.php-http.tls.certresolver=letsencrypt
- traefik.http.services.php-http.loadbalancer.server.port=80
- traefik.enable=true
- traefik.http.routers.php-http.rule=Host(`mystuff.com`, `en.mystuff.com`)
- 'traefik.http.routers.php-http.tls.domains[0].main=mystuff.com'
- 'traefik.http.routers.php-http.tls.domains[1].main=en.mystuff.com'
restart: always
mailserver:
[doesntmatter]
traefik:
image: traefik:v2.9
command:
- --providers.docker
- --providers.docker.exposedByDefault=false
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --entrypoints.web.http.redirections.entryPoint.to=websecure
- --entrypoints.web.http.redirections.entryPoint.scheme=https
- --entrypoints.web.http.redirections.entrypoint.permanent=true
- --certificatesresolvers.letsencrypt.acme.email=heyho#gmail.com
- --certificatesresolvers.letsencrypt.acme.storage=acme.json
- --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
ports:
- 80:80
network_mode: host
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./acme.json:/acme.json
It works fine.
Then I tried doing this:
docker network create web
And in the yml:
networks:
  web:
    external: true
  internal:
    external: false
For php-http:
networks:
  - internal
  - web
and (I tried without and with it)
- "traefik.docker.network=web"
In database and mailserver:
networks:
  - internal
In traefik:
networks:
  - web
and (tried without and with it)
- "traefik.docker.network=web"
It didn't work at all, my website wasn't accessible anymore.
Then, as described here: https://doc.traefik.io/traefik/user-guides/docker-compose/basic-example/
I tried :
networks:
  web: {}
Then in php-http and traefik:
networks:
  - web
It didn't work either. Their example (with whoami) works on my server (tried with a local curl). As always, this makes me hate sysadmin work very much. Does anyone have any clue what's wrong here? It doesn't make any sense to me. I followed everything, tried everything.
Thank you
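For reference, here is a consolidated sketch of the topology described above, reduced to the relevant keys. One caveat worth checking: Compose does not allow network_mode: host and networks: on the same service, and a Traefik container on the host network cannot reach other containers by service name over a named bridge network, so joining the web network means dropping network_mode: host:

services:
  php-http:
    networks:
      - internal
      - web
    labels:
      - traefik.docker.network=web   # tell Traefik which network to dial the backend on
  database:
    networks:
      - internal
  traefik:
    # network_mode: host   # cannot be combined with 'networks:'
    networks:
      - web
    ports:
      - 80:80
      - 443:443   # needed once the host network is no longer used
networks:
  web:
    external: true    # created beforehand with: docker network create web
  internal:
    external: false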

The Caddy Docker container is in a restarting state

This is the docker-compose file that starts the containers. All are working fine except Caddy.
version: '3'
services:
  db:
    image: postgres:latest
    restart: always
    expose:
      - "5555"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=chiefonboarding
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - global
  web:
    image: chiefonboarding/chiefonboarding:latest
    restart: always
    expose:
      - "9000"
    environment:
      - SECRET_KEY=somethingsupersecret
      - BASE_URL=https://on.hr.gravesfoods.com
      - DATABASE_URL=postgres://postgres:postgres@db:5432/chiefonboarding
      - ALLOWED_HOSTS=on.hr.gravesfoods.com
      - DEFAULT_FROM_EMAIL=hello@gravesfoods.com
    depends_on:
      - db
    networks:
      - global
  caddy:
    image: caddy:2.3.0-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - global
volumes:
  pgdata:
  caddy_data:
  caddy_config:
networks:
  global:
Also these are the logs it is generating:
{"level":"info","ts":1656425557.6256478,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
run: adapting config using caddyfile: server block 0, key 0 (https://on.hr.gravesfoods.com:80): determining listener address: [https://on.hr.gravesfoods.com:80] scheme and port violate convention
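The Caddyfile is not shown, but the error itself points at the site address: it is declared as https://on.hr.gravesfoods.com:80, and Caddy refuses an explicit https scheme paired with port 80. A hedged sketch of a corrected site block, assuming the intent is to proxy to the web service (the reverse_proxy target is an assumption based on that service exposing port 9000):

# Hypothetical Caddyfile: omit the scheme and port so Caddy serves the site
# on 80/443 itself and handles the HTTP->HTTPS redirect automatically.
on.hr.gravesfoods.com {
    reverse_proxy web:9000
}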

Docker compose containers cannot connect to each other through the network bridge

I am trying to run this docker compose file, but my microservices cannot connect to the Eureka server through the Docker network bridge. Does anyone know where the problem is? This is the docker compose file I am running:
version: '3'
services:
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    ports:
      - "9411:9411"
    networks:
      - spring
  rabbitmq:
    image: rabbitmq:3.9.11-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - spring
  eureka-server:
    image: shaslan/eureka-server:latest
    container_name: eureka-server
    ports:
      - "8761:8761"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
  api-gw:
    image: shaslan/apigw:latest
    container_name: apigw
    ports:
      - "8083:8083"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
  customer:
    image: shaslan/customer:latest
    container_name: customer
    ports:
      - "8080:8080"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
  fraud:
    image: shaslan/fraud:latest
    container_name: fraud
    ports:
      - "8081:8081"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
  notification:
    image: shaslan/notification:latest
    container_name: notification
    ports:
      - "8082:8082"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
networks:
  spring:
    driver: bridge
When I open up my Eureka server, it doesn't discover any microservices. Any help would be appreciated.
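The compose file itself looks consistent: every service joins the same spring bridge network, so containers can resolve each other by service name. The usual culprit is the docker Spring profile baked into each image pointing at localhost instead of the service name. A sketch of what each microservice's docker profile would need, assuming standard Spring Cloud Netflix/Sleuth properties (the file name and its current contents are assumptions, since they are not shown):

# application-docker.yml (hypothetical), activated by SPRING_PROFILES_ACTIVE=docker
eureka:
  client:
    service-url:
      # 'eureka-server' is the compose service name on the 'spring' network;
      # 'localhost' would point at the calling container itself.
      defaultZone: http://eureka-server:8761/eureka
spring:
  zipkin:
    base-url: http://zipkin:9411
  rabbitmq:
    host: rabbitmq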

Creating spark cluster with drone.yml not working

I have a docker-compose.yml with the image and configuration below:
version: '3'
services:
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
Here is the docker-compose up log: https://jpst.it/1Xc4K
The containers come up and run fine, and the Spark worker connects to the Spark master without any issues. The problem is that I then created a drone.yml where I added a services component with:
services:
  jce-cassandra:
    image: cassandra:3.0
    ports:
      - "9042:9042"
  jce-elastic:
    image: elasticsearch:5.6.16-alpine
    ports:
      - "9200:9200"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  janusgraph:
    image: janusgraph/janusgraph:latest
    ports:
      - "8182:8182"
    environment:
      JANUS_PROPS_TEMPLATE: cassandra-es
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: jce-cassandra
      janusgraph.index.search.backend: elasticsearch
      janusgraph.index.search.hostname: jce-elastic
    depends_on:
      - jce-elastic
      - jce-cassandra
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
but here the Spark worker does not connect to the Spark master and throws exceptions. Here are the exception log details. Can someone please guide me on why I am facing this issue?
Note: I am trying to create these services in drone.yml for my integration testing.
Answering for better formatting. The comments suggest sleeping. Assuming this is the Dockerfile (https://hub.docker.com/r/bde2020/spark-worker/dockerfile), you could sleep by adding a command:
spark-worker-1:
  image: bde2020/spark-worker:2.4.4-hadoop2.7
  container_name: spark-worker-1
  # Wrap in a shell so '&&' is interpreted; a bare string command is not run through a shell.
  command: bash -c "sleep 10 && /bin/bash /worker.sh"
  depends_on:
    - spark-master
  ports:
    - "8081:8081"
  environment:
    - "SPARK_MASTER=spark://spark-master:7077"
Although sleep 10 is probably excessive; sleep 5 or even sleep 2 would likely work.
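A fixed sleep is inherently racy. If your Compose/Drone version supports it, a healthcheck plus a depends_on condition is a more deterministic alternative. A sketch, assuming the master's web UI on port 8080 responds once the master is ready and that curl is available in the image (both worth verifying against the bde2020 images):

services:
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    healthcheck:
      # Probe the master's web UI; swap in wget if curl is missing from the image.
      test: ["CMD-SHELL", "curl -f http://localhost:8080 || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 10
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    depends_on:
      spark-master:
        condition: service_healthy   # wait for the master instead of sleeping

Note that the long depends_on form is not accepted by compose file format 3.x; it requires the older 2.x format or a recent Compose release that follows the Compose Specification.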

how to run SolrCloud with docker-compose

Please help me with a docker-compose file.
Right now I'm using Solr in a Docker container, but I need to change it to SolrCloud. I need 2 Solr instances, an internal ZooKeeper, and Docker (local).
This is an example of the docker-compose file I did:
version: "3"
services:
mongo:
image: mongo:latest
container_name: mongo
hostname: mongo
networks:
- gsec
ports:
- 27018:27017
sqlserver:
image: microsoft/mssql-server-linux:latest
hostname: sqlserver
container_name: sqlserver
environment:
SA_PASSWORD: "#Password123!"
ACCEPT_EULA: "Y"
networks:
- gsec
ports:
- 1403:1433
solr:
image: solr
container_name: solr
ports:
- "8983:8983"
networks:
- gsec
volumes:
- data:/opt/solr/server/solr/mycores
entrypoint:
- docker-entrypoint.sh
- solr-precreate
- mycore
volumes:
data:
networks:
gsec:
driver: bridge
Thank you in advance.
The Solr Docker image has a ZooKeeper server embedded in it. You just have to start Solr with the right parameters and add the ZooKeeper port 9983:9983 in the docker-compose file:
solr:
  image: solr
  container_name: solr
  ports:
    - "9983:9983"
    - "8983:8983"
  networks:
    - gsec
  volumes:
    - data:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr
    - start
    - -c
    - -f
SolrCloud is basically a Solr cluster where ZooKeeper is used to coordinate and configure the cluster.
Usually you use SolrCloud with Docker because you're learning how it works, or because you're preparing your application (locally?) to deploy in a bigger environment.
On the other hand, it doesn't make much sense to run SolrCloud if you don't have a distributed configuration, i.e. Solr and ZooKeeper running on different nodes.
SolrCloud is the kind of cluster you need when you have hundreds or even thousands of searches per second against collections of millions or even billions of documents.
Your cluster has to scale horizontally.
A version to use with an external ZooKeeper.
'-t' changes the data dir in the container.
To see the other options, run: solr start -help
version: '3'
services:
  solr1:
    image: solr
    ports:
      - "8984:8984"
    entrypoint:
      - solr
    command:
      - start
      - -f
      - -c
      - -h
      - "10.1.0.157"
      - -p
      - "8984"
      - -z
      - "10.1.0.157:2181,10.1.0.157:2182,10.1.0.157:2183"
      - -m
      - 1g
      - -t
      - "/opt/solr/server/solr/mycores"
    volumes:
      - "./data1/mycores:/opt/solr/server/solr/mycores"
I use this setup locally to test three instances of Solr and three instances of ZooKeeper, based on the official example.
version: '3.7'
services:
  solr-1:
    image: solr:8.7
    container_name: solr-1
    ports:
      - "8981:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
    # command:
    #   - solr-precreate
    #   - gettingstarted
  solr-2:
    image: solr:8.7
    container_name: solr-2
    ports:
      - "8982:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  solr-3:
    image: solr:8.7
    container_name: solr-3
    ports:
      - "8983:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  zoo-1:
    image: zookeeper:3.6
    container_name: zoo-1
    restart: always
    hostname: zoo-1
    volumes:
      - zoo1data:/data
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-2:
    image: zookeeper:3.6
    container_name: zoo-2
    restart: always
    hostname: zoo-2
    volumes:
      - zoo2data:/data
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-3:
    image: zookeeper:3.6
    container_name: zoo-3
    restart: always
    hostname: zoo-3
    volumes:
      - zoo3data:/data
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    networks:
      - solr
networks:
  solr:
# persist the zookeeper data in volumes
volumes:
  zoo1data:
    driver: local
  zoo2data:
    driver: local
  zoo3data:
    driver: local
