Kibana can't reach Elasticsearch - Docker

Using docker-compose v3 and deploying to a swarm:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    deploy:
      replicas: 1
    ports:
      - "9200:9200"
    tty: true
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    deploy:
      mode: global
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    tty: true
I see this in the kibana service log:
Unable to revive connection: http://elasticsearch:9200/
Elasticsearch service is running and can be reached.
Swarm consists of 3 nodes.
What am I missing?
Update:
It turns out that if I access Kibana on the same swarm node where Elasticsearch is running, it works. All other nodes either have a network problem or cannot resolve the elasticsearch name.

I found the reason, and the solution.
My swarm is running on AWS. All nodes are placed in the same security group, and I assumed all ports were open internally within that security group. That's not the case.
I explicitly configured the security group to allow the inbound traffic listed in Docker's routing mesh docs: https://docs.docker.com/engine/swarm/ingress/
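Per that page, the ports to open between the swarm nodes are TCP 2377 (cluster management), TCP/UDP 7946 (node-to-node communication), and UDP 4789 (overlay network traffic). A sketch of the corresponding AWS CLI calls, with sg-XXXX as a placeholder for the nodes' security group:

```shell
# Allow swarm traffic between nodes in the same security group (sg-XXXX is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol tcp --port 2377 --source-group sg-XXXX
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol tcp --port 7946 --source-group sg-XXXX
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol udp --port 7946 --source-group sg-XXXX
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol udp --port 4789 --source-group sg-XXXX
```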

By default, docker-compose generates a network and attaches all services to it, though I'm not sure whether that behaviour changes in swarm mode. To define the network explicitly you can do this:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    deploy:
      replicas: 1
    ports:
      - "9200:9200"
    tty: true
    networks:
      - some-name
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    deploy:
      mode: global
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    tty: true
    networks:
      - some-name
networks:
  some-name:
    driver: overlay
I hope this helps; let me know how it goes.
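With the overlay network defined, the stack can be deployed and the resulting network checked with something like (the stack name elk is arbitrary):

```shell
docker stack deploy -c docker-compose.yml elk
docker network ls --filter driver=overlay   # the generated elk_some-name network should be listed
```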

Related

Traefik 2 network between 2 containers results in Gateway Timeout errors

I'm trying to set up 2 docker containers with docker-compose, 1 is a Traefik proxy and the other is a Vikunja kanban board container.
They both have their own docker-compose file. I can start the containers and the Traefik dashboard doesn't show any issues but when I open the URL in a browser I only get a Gateway Timeout error.
I have been looking at similar questions on here and different platforms and in nearly all other cases the issue was that they were placed on 2 different networks. However, I added a networks directive to the Traefik docker-compose.yml and still have this problem, unless I'm using them wrong.
The docker-compose file for the Vikunja container
(adapted from https://vikunja.io/docs/full-docker-example/)
version: '3'
services:
  api:
    image: vikunja/api
    environment:
      VIKUNJA_DATABASE_HOST: db
      VIKUNJA_DATABASE_PASSWORD: REDACTED
      VIKUNJA_DATABASE_TYPE: mysql
      VIKUNJA_DATABASE_USER: vikunja
      VIKUNJA_DATABASE_DATABASE: vikunja
      VIKUNJA_SERVICE_JWTSECRET: REDACTED
      VIKUNJA_SERVICE_FRONTENDURL: REDACTED
    volumes:
      - ./files:/app/vikunja/files
    networks:
      - web
      - default
    depends_on:
      - db
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-api.rule=Host(`subdomain.domain.de`) && PathPrefix(`/api/v1`, `/dav/`, `/.well-known/`)"
      - "traefik.http.routers.vikunja-api.entrypoints=websecure"
      - "traefik.http.routers.vikunja-api.tls.certResolver=myresolver"
  frontend:
    image: vikunja/frontend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-frontend.rule=Host(`subdomain.domain.de`)"
      - "traefik.http.routers.vikunja-frontend.entrypoints=websecure"
      - "traefik.http.routers.vikunja-frontend.tls.certResolver=myresolver"
    networks:
      - web
      - default
    restart: unless-stopped
  db:
    image: mariadb:10
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max-connections=1000
    environment:
      MYSQL_ROOT_PASSWORD: REDACTED
      MYSQL_USER: vikunja
      MYSQL_PASSWORD: REDACTED
      MYSQL_DATABASE: vikunja
    volumes:
      - ./db:/var/lib/mysql
    restart: unless-stopped
    networks:
      - web
networks:
  web:
    external: true
The network directives for the api and frontend services in the Vikunja docker-compose.yml were present in the template (I added one for the db service for testing but it didn't have any effect).
networks:
  - web
After getting a Docker error about the network not being found, I created it via docker network create web
The docker-compose file for the Traefik container
version: '3'
services:
  traefik:
    image: traefik:v2.8
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080" # dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./letsencrypt:/letsencrypt
      - ./traefik.http.yml:/etc/traefik/traefik.yml
    networks:
      - web
networks:
  web:
    external: true
I've tried adding the Traefik service to the Vikunja docker-compose.yml in one file but that didn't have any effect either.
I'm thankful for any pointers.
For debugging, you could try configuring all containers to use the host network, to ensure they are really on the same network.
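To verify which containers actually share a network, Docker's inspect commands can also help, e.g. (names are examples):

```shell
# List the containers attached to the external "web" network
docker network inspect web --format '{{range .Containers}}{{.Name}} {{end}}'
# List the networks a given container is attached to
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' traefik
```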
I had a similar issue trying to run two different containers and getting a "Gateway Timeout". My issue was solved after changing the port mapping in the second Traefik container and accessing the site with :84 at the end (http://sitename:84):
traefik:
  image: traefik:v2.0
  container_name: "${PROJECT_NAME}_traefik"
  command: --api.insecure=true --providers.docker
  ports:
    - '84:80'
    - '8084:8080'

How can 1 microservice communicate with an instance of elasticsearch in another microservice?

I currently have 2 microservices running. One of them has Elasticsearch up and running as a container, and the data has been pushed to Elasticsearch. The other microservice needs to hit an endpoint that fetches the data from Elasticsearch and displays it in the browser. The data can be seen in Kibana, but not when the other microservice hits that endpoint. What could I be doing wrong?
I have tried adding the name of the alias of elasticsearch in the other microservice .env file but still does not seem to communicate with elasticsearch.
------microservice 1-----
version: '3.5'
services:
  micro1_php:
    environment:
      SERVICE_NAME: micro1-app
      DB_CONNECTION: local
      ELASTIC_HOST: mic_elasticsearch
      ELASTIC_PORT: 9200
    networks:
      - default
      - proxynet
  mic_elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.3
    environment:
      discovery.type: single-node
    volumes:
      - "./storage/es-data:/usr/share/elasticsearch/data" # to be persistent across docker-compose restarts
    networks:
      default: {}
      proxynet:
        aliases:
          - micro.elasticsearch
    ports:
      - "9200:9200"
networks:
  proxynet:
    name: custom_network
    external: true
------microservice 2-----
version: '3.5'
services:
  micro2_php:
    environment:
      SERVICE_NAME: micro2-app
      DB_CONNECTION: local
    networks:
      - default
      - proxynet
networks:
  proxynet:
    name: custom_network
    external: true
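One thing worth checking, given the compose files above: the alias that mic_elasticsearch publishes on the shared proxynet network is micro.elasticsearch, and proxynet (custom_network) is the only network the two stacks have in common. Assuming micro2's app reads ELASTIC_HOST/ELASTIC_PORT the same way micro1 does, its environment would presumably need to point at that alias, e.g.:

```yaml
  micro2_php:
    environment:
      SERVICE_NAME: micro2-app
      DB_CONNECTION: local
      ELASTIC_HOST: micro.elasticsearch  # alias defined on the shared proxynet network
      ELASTIC_PORT: 9200
```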

Exposing a Docker database service only on the internal network with Traefik

Let's say I defined two services "frontend" and "db" in my docker-compose.yml which are deployed to a Docker swarm, i.e. they may also run in different stacks. With this setup Traefik automatically generates the frontend and backend for each stack which is fine.
Now I have another Docker container running temporarily in a Jenkins pipeline which shall be able to access the db service in a specific stack. My first idea was to expose the db service by adding it to the cluster-global-net network so that Traefik can generate a frontend route to the backend. This basically works.
But I'd like to hide the database service from "the public" while still being able to connect another Docker container to it via its stack or service name using the internal "default" network.
Can this be done somehow?
version: '3.6'
networks:
  default: {}
  cluster-global-net:
    external: true
services:
  frontend:
    image: frontend_image
    ports:
      - 8080
    networks:
      - cluster-global-net
      - default
    deploy:
      labels:
        traefik.port: 8080
        traefik.docker.network: cluster-global-net
        traefik.backend.loadbalancer.swarm: 'true'
        traefik.backend.loadbalancer.stickiness: 'true'
      replicas: 1
      restart_policy:
        condition: any
  db:
    image: db_image
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=false
      - MYSQL_DATABASE=db_schema
      - MYSQL_USER=db_user
      - MYSQL_PASSWORD=db_pass
    ports:
      - 3306
    volumes:
      - db_volume:/var/lib/mysql
    networks:
      - default
    restart: on-failure
    deploy:
      labels:
        traefik.port: 3306
        traefik.docker.network: default
What you need is a network that both services are attached to but that isn't visible to anyone else.
To do that, create a network and add it to your db and frontend services, and also to your temporary service. Then remove the traefik labels on db, since they are no longer needed.
E.g.:
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net:
    external: true
services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...
  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels

docker network create db-net
docker stack deploy -c <mycompose.yml> <myfront>
docker service create --name <temporaryService> --network db-net <myTemporaryImage>
Then, the temporaryService as well as the frontend can reach the db through db:3306.
BTW: you don't need to publish the port for the frontend, since Traefik will access it internally (traefik.port).
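To sanity-check the connectivity, one could exec into the temporary container and try the port directly (the container name is a placeholder; this assumes a mysql client is available in the image):

```shell
docker exec -it <temporaryContainer> mysql -h db -P 3306 -u db_user -p
```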
Edit: a new example with the network created from the compose file.
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net: {}
services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...
  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels

docker stack deploy -c <mycompose.yml> someStackName
docker service create --name <temporaryService> --network someStackName_db-net <myTemporaryImage>

How to collect logs via fluentd in swarm mode

I'm trying to run services (mongo) in swarm mode, with logs collected to Elasticsearch via fluentd. It worked(!) with:
docker-compose up
But when I deploy via stack, the services start, yet logs are not collected, and I don't know how to find out why.
docker stack deploy -c docker-compose.yml env_staging
docker-compose.yml:
version: "3"
services:
  mongo:
    image: mongo:3.6.3
    depends_on:
      - fluentd
    command: mongod
    networks:
      - webnet
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: mongo
  fluentd:
    image: zella/fluentd-es
    depends_on:
      - elasticsearch
    ports:
      - 24224:24224
      - 24224:24224/udp
    networks:
      - webnet
  elasticsearch:
    image: elasticsearch
    ports:
      - 9200:9200
    networks:
      - webnet
  kibana:
    image: kibana
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    networks:
      - webnet
networks:
  webnet:
Update:
I removed fluentd-address: localhost:24224 and the problem went away. But I don't understand what "localhost" refers to here. Why can't we set "fluentd" as the host? If someone explains what fluentd-address is, I will accept their answer.
fluentd-address is the address where the fluentd daemon resides (the default is localhost, in which case you don't need to specify it).
In your case (using a stack) your fluentd daemon will run on a node, and you should reach that service using its service name (in your case fluentd; have you tried?).
Remember to add fluentd-async-connect: "true" to your options.
Reference is at:
https://docs.docker.com/config/containers/logging/fluentd/#usage
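Putting the suggestions above together, the mongo service's logging section would look something like this (a sketch following this answer's suggestion to address fluentd by service name; note the asker reports that simply omitting fluentd-address also worked):

```yaml
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24224
        fluentd-async-connect: "true"
        tag: mongo
```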
You don't need to specify fluentd-address. When you set the logging driver to fluentd, Swarm automatically discovers the nearest fluentd instance and sends all stdout of the desired container there.

Docker created with docker-compose not visible from outside server

I've created my docker-compose file with 3 Dockerfiles attached. Everything is working, but now I'd like to expose port 8000 to the outside.
This is not happening. The host is unreachable :(
What's wrong with this?
version: '3'
services:
  elastic:
    build: ./elastic
    ports:
      - 5500:80
    tty: true
    networks:
      - default
  api:
    build: ./api
    ports:
      - 5000:80
    depends_on:
      - elastic
    tty: true
    networks:
      - default
  web:
    build: ./web
    ports:
      - 8000:80
    depends_on:
      - api
    tty: true
    networks:
      - outside
      - default
networks:
  outside:
    external:
      name: docker_gwbridge
I had a similar issue with an app running on a port other than 80/443. I deployed the app on AWS EC2, and the host could not be reached. In order to make it visible, I had to add an inbound rule in the EC2 instance's "Security Groups" that exposed the extra port (8000 in my case).
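For reference, such a rule can also be added with the AWS CLI (the security group ID is a placeholder; 0.0.0.0/0 opens the port to everyone, so restrict the CIDR as appropriate):

```shell
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol tcp --port 8000 --cidr 0.0.0.0/0
```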
