I get a 404 when running Kibana in Docker behind Traefik, but Elasticsearch can be reached

I am having issues running ELK on Docker behind Traefik. All the other services are running, but when I try to access Kibana in a browser via its URL, I get a 404.
This is my docker-compose.yml:
version: '3.4'
networks:
  app-network:
    name: app-network
    driver: bridge
    ipam:
      config:
        - subnet: xxx.xxx.xxx.xxx/xxx
services:
  reverse-proxy:
    image: traefik:v2.5
    command:
      - --providers.docker.network=app-network
      - --providers.docker.exposedByDefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --providers.docker=true
      - --api=true
      - --api.dashboard=true
    ports:
      - "80:80"
      - "443:443"
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /certs/:/certs
    labels:
      - traefik.enable=true
      - traefik.docker.network=public
      - traefik.http.routers.traefik-http.entrypoints=web
      - traefik.http.routers.traefik-http.service=api@internal
  elasticsearch:
    hostname: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=docker-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elasticsearch.entrypoints=http"
      - "traefik.http.routers.elastic.rule=Host(`elastic.mydomain.fr`)"
      - "traefik.http.services.elastic.loadbalancer.server.port=9200"
  kibana:
    hostname: kibana
    image: docker.elastic.co/kibana/kibana:7.12.0
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    links:
      - elasticsearch
    healthcheck:
      interval: 10s
      retries: 20
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana.entrypoints=http"
      - "traefik.http.routers.kibana.rule=Host(`kibana.mydomain.fr`)"
      - "traefik.http.services.kibana.loadbalancer.server.port=5601"
      - "traefik.http.routers.kibana.entrypoints=websecure"
volumes:
  esdata:
    driver: local
As I said, Elasticsearch and the other services can be reached.
I have already tried setting the basePath, but that did not work either.
Do you have any idea what I am missing?

You named your entrypoints "web" and "websecure" at the top, but the labels use "http" as the entrypoint. You have to rename them (unless you have defined an "http" entrypoint somewhere else as well). The name has to match the word you define in the configuration string: --entrypoints.web.address=:80
So, for example: "traefik.http.routers.elasticsearch.entrypoints=web"
Additional tip: you can remove the label with the loadbalancer port, because as long as you define an exposed or mapped port in Docker, Traefik recognises the port to use. I have no such line configured for my personal services: - "traefik.http.services.elastic.loadbalancer.server.port=9200"
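For illustration, the Kibana labels after applying both points might look like this (a sketch that assumes the websecure entrypoint is the one meant to serve Kibana; TLS configuration is left out):
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.kibana.rule=Host(`kibana.mydomain.fr`)"
  # the entrypoint name now matches --entrypoints.websecure.address=:443 from the Traefik command
  - "traefik.http.routers.kibana.entrypoints=websecure"
  # the loadbalancer.server.port label is dropped, since port 5601 is already exposed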

Thanks for your answer Zeikos, it helped me a lot.
I think this can be closed now.

Related

I want Docker to listen on "http://localhost/user" and forward to "http://portal.local/user" using Traefik

I've set up my Docker environment using Traefik and I have two services running at the moment.
I'm using Google OAuth for authentication, which redirects to my web application with an auth code. The redirect URL isn't allowed to be anything but localhost, localhost:<any-port>, or a CDN. I've set up my Docker environment for http://portal.local.
I now want http://localhost/user/googleLogin?code=xxxxxxxxxx to be translated to http://portal.local/user/googleLogin?code=xxxxxxxx for further processing of the authentication.
Right now I have to manually change localhost to portal.local in the browser URL after it gives a "site not found" error, which then takes me to the further processing.
Below is my docker-compose.yml file.
version: "3.9"
services:
portal-traefik:
container_name: portal-traefik
command:
- --api.insecure=true
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --entrypoints.web.address=:80
# - --entrypoints.websecure.address=:443
# - --certificatesresolvers.myresolver.acme.httpchallenge=true
# - --certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=web
# - --certificatesresolvers.myresolver.acme.email=ssl#idealsalessolutions.com
# - --certificatesresolvers.myresolver.acme.storage=/acme/acme.json
image: traefik:latest
networks:
api_driven:
ports:
- "80:80"
- "8080:8080"
# - "443:443"
restart: unless-stopped
volumes:
- portal_acme:/acme
- /var/run/docker.sock:/var/run/docker.sock:ro
api-i4f:
container_name: api-i4f
depends_on:
- php-i4f
- portal-traefik
image: nginx:stable-alpine
labels:
- traefik.enable=true
- traefik.http.routers.api.rule=Host(`api.local`)
networks:
api_driven:
restart: unless-stopped
volumes:
- ../docker.sites/api.local:/usr/share/nginx/api.local
- ./conf/nginx/conf.d:/etc/nginx/conf.d:ro
command: [nginx, '-g', 'daemon off;']
hostname: api.local
portal-i4f:
container_name: portal-i4f
depends_on:
- php-i4f
- portal-traefik
image: nginx:stable-alpine
labels:
- traefik.enable=true
- traefik.http.routers.portal.rule=Host(`portal.local`)
networks:
api_driven:
restart: unless-stopped
volumes:
- ../docker.sites/portal.local:/usr/share/nginx/portal.local
- ./conf/nginx/conf.d:/etc/nginx/conf.d:ro
command: [nginx, '-g', 'daemon off;']
hostname: portal.local
php-i4f:
container_name: php-i4f
depends_on:
- portal-traefik
image: isshub/core:php7.4.30-fpm-alpine3.16-intl-mysql
networks:
api_driven:
restart: unless-stopped
volumes:
- ../docker.sites/api.local:/usr/share/nginx/api.local
- ../docker.sites/portal.local:/usr/share/nginx/portal.local
networks:
api_driven:
name: "api_driven"
volumes:
portal_acme:
I've tried using multiple router rules to listen to both localhost and portal.local, as well as regex/replacement middlewares, but that stops the service entirely and gives a 404 error.
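For reference, a single router can match both hostnames with an OR between Host matchers; a minimal sketch on the portal-i4f service (the combined rule is an assumption, not a confirmed fix for the OAuth flow):
labels:
  - traefik.enable=true
  # one router that answers on either hostname
  - traefik.http.routers.portal.rule=Host(`portal.local`) || Host(`localhost`)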

Elasticsearch with Docker and Traefik

When I run Elasticsearch with Docker and Traefik (with SSL encryption), I can't connect to Elasticsearch via the domain.
When I remove all the Traefik settings for Elasticsearch from the docker-compose file, it works over the IP and plain HTTP.
Here is my docker-compose.yml:
version: "3.7"
networks:
  traefik:
    external: true
  search:
services:
  elasticsearch:
    image: elasticsearch:7.16.2
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=XXXXXX
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elastic_search.rule=Host(se.mydomain.xyz)"
      - "traefik.http.routers.elastic_search.entrypoints.websecure"
      - "traefik.http.services.elastic_search.loadbalancer.server.port=9200"
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - search
  kibana:
    image: kibana:7.16.2
    container_name: kibana
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana_search.rule=Host(kibana.mydomain.xyz)"
      - "traefik.http.routers.kibana_search.entrypoints.websecure"
      - "traefik.http.services.kibana_search.loadbalancer.server.port=5601"
    #ports:
    #  - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://XXX.XXX.XXX.XXX:9200
      ELASTICSEARCH_HOSTS: '["http://XXX.XXX.XXX.XXX:9200"]'
    networks:
      - search
      - traefik
volumes:
  elasticsearch-data:
    driver: local
Does anyone have an idea for a solution?
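For comparison, Traefik v2 expects backticks inside Host() and an equals sign for the entrypoint assignment; a minimal sketch of the elasticsearch labels in that syntax (the entrypoint name websecure is assumed to exist in the external Traefik instance):
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.elastic_search.rule=Host(`se.mydomain.xyz`)"
  - "traefik.http.routers.elastic_search.entrypoints=websecure"
  - "traefik.http.services.elastic_search.loadbalancer.server.port=9200"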

Logging to Elasticsearch is not working when pointing to the Docker URL in the API Docker image

Logging to Elasticsearch works fine when testing with local debugging, where the app connects to the Elasticsearch URL at https://localhost:9200, but the dotnet.monitoring.api Docker image fails to connect to the Elasticsearch Docker image at http://elasticsearch:9200.
Below is the docker-compose file.
version: '3.4'
services:
  productdb:
    container_name: productdb
    environment:
      SA_PASSWORD: "SwN12345678"
      ACCEPT_EULA: "Y"
    restart: always
    ports:
      - "1433:1433"
  elasticsearch:
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name-es-docker-cluster
      - xpack.security.enabled=false
      - "discovery.type=single-node"
    networks:
      - es-net
    volumes:
      - data01:/urs/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - es-net
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
  dotnet.monitoring.api:
    container_name: dotnet.monitoring.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "ElasticConfiguration:Url=http://elasticsearch:9200/"
      - "ConnectionStrings:Product=server=productdb;Database=ProductDb;User Id=sa;Password=SampleP#&&W0rd;TrustServerCertificate=True;"
    depends_on:
      - productdb
      - kibana
    ports:
      - "8001:80"
volumes:
  data01:
    driver: local
networks:
  es-net:
    driver: bridge
Your API image is not in the same network. Make es-net the default network:
networks:
  default:
    name: es-net
    external: true
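Alternatively (a sketch, not part of the original answer), the dotnet.monitoring.api service can be attached to es-net explicitly while keeping the default network so productdb stays reachable:
dotnet.monitoring.api:
  networks:
    - default   # keeps the connection to productdb, which sits on the default network
    - es-net    # same network as elasticsearch and kibana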

Docker Compose throwing ECONNREFUSED

While accessing the DB it threw this error:
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
How do I fix it so my application can make a connection to the database? As you can see in the code, my application relies on multiple databases. How can I make sure all of the database containers have started before the application starts?
version: '3.8'
networks:
  appnetwork:
    driver: bridge
services:
  mysql:
    image: mysql:8.0.27
    restart: always
    command: --init-file /data/application/init.sql
    environment:
      - MYSQL_ROOT_PASSWORD=11999966
      - MYSQL_DATABASE=interview
      - MYSQL_USER=interviewuser
      - MYSQL_PASSWORD=11999966
    ports:
      - 3306:3306
    volumes:
      - db:/var/lib/mysql
      - ./migration/init.sql:/data/application/init.sql
    networks:
      - appnetwork
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    restart: always
    ports:
      - 9200:9200
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - elastic:/usr/share/elasticsearch/data
    networks:
      - appnetwork
  redis:
    image: redis
    restart: always
    ports:
      - 6379:6379
    volumes:
      - cache:/var/lib/redis
    networks:
      - appnetwork
  mongodb:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - mongo:/var/lib/mongo
    networks:
      - appnetwork
  app:
    depends_on:
      - mysql
      - elasticsearch
      - redis
      - mongodb
    build: .
    restart: always
    ports:
      - 3000:3000
    networks:
      - appnetwork
    stdin_open: true
    tty: true
    command: npm start
volumes:
  db:
  elastic:
  cache:
  mongo:
The container (probably app) tries to connect to a mongodb instance running on localhost (i.e. the container itself). Since there is nothing listening on port 27017 of this container, we get the error.
We can fix the problem by reconfiguring the application running in the container to use the name of the mongodb container (which, in the given docker-compose.yml, is also mongodb) instead of 127.0.0.1 or localhost.
If we have designed our app according to the twelve factors, it should be as simple as setting an environment variable for the container.
Use mongodb://mongodb:27017 as the connection string instead.
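A minimal sketch of that environment-variable approach in the compose file (MONGO_URL is a made-up variable name; the Node app would have to read it when creating the Mongoose connection):
app:
  environment:
    # the service name "mongodb" resolves through the shared appnetwork
    - MONGO_URL=mongodb://mongodb:27017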

Redis connection to 127.0.0.1:6379 failed using Docker

Hi, I'm using Docker Compose to handle all of my configuration.
I have Mongo, Node, Redis, and the Elastic stack, but I can't get Redis to connect to my Node app.
Here is my docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.6
    container_name: "backend-mongo"
    ports:
      - "27017:27017"
    volumes:
      - "./data/db:/data/db"
  redis:
    image: redis:4.0.7
    ports:
      - "6379:6379"
    user: redis
  adminmongo:
    container_name: "backend-adminmongo"
    image: "mrvautin/adminmongo"
    ports:
      - "1234:1234"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
    container_name: "backend-elastic"
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  web:
    container_name: "backend-web"
    build: .
    ports:
      - "8888:8888"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/backend
    restart: always
    depends_on:
      - mongo
      - elasticsearch
      - redis
    volumes:
      - .:/backend
      - /backend/node_modules
volumes:
  esdata1:
    driver: local
networks:
  esnet:
Things to notice:
The Redis container is already running (I can ping it).
I don't have any services running on my host, only ones from the containers.
The other containers (except Redis) work well.
I've tried the methods below:
const redisClient = redis.createClient({host: 'redis'});
const redisClient = redis.createClient(6379, '127.0.0.1');
const redisClient = redis.createClient(6379, 'redis');
I'm using:
docker 17.12
xubuntu 16.04
How can I connect my app to my Redis container?
Adding
hostname: redis
under the redis section fixes this issue.
So it will be something like this:
redis:
  image: redis:4.0.7
  ports:
    - "6379:6379"
  command: ["redis-server", "--appendonly", "yes"]
  hostname: redis
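For completeness, a sketch of passing the host to the Node app through the compose file (REDIS_HOST is a made-up variable name the app would read before calling redis.createClient):
web:
  environment:
    # "redis" resolves to the redis container on the shared compose network
    - REDIS_HOST=redis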
