Nest.js - elasticsearch connection inside docker containers - docker

So I've been trying to connect my Nest.js app to Elasticsearch, but I have no idea why I can't.
app | (node:30) UnhandledPromiseRejectionWarning: ConnectionError: connect ECONNREFUSED 172.31.0.2:9200
app | at ClientRequest.onError (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:114:16)
This is the error I'm receiving. I'm composing my Node.js app and the Elasticsearch engine in the same docker-compose file.
# docker-compose.yml
version: "3"
services:
  app:
    container_name: app
    build:
      context: .
      target: development
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - ${PORT}:${PORT}
    command: npm run start:dev
    networks:
      - elastic
    env_file:
      - .env
    depends_on:
      - redis
      - es01
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  redis:
    image: redis
    container_name: redis
    ports:
      - "6379:6379"
  redis-commander:
    container_name: redis-commander
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
# .env
ELASTICSEARCH_NODE=http://es01:9200
ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=admin
// backend Nest.js implementation
@Module({
  imports: [
    ConfigModule,
    ElasticsearchModule.registerAsync({
      imports: [ConfigModule],
      useFactory: async (configService: ConfigService) => ({
        node: configService.get('ELASTICSEARCH_NODE'),
        auth: {
          username: configService.get('ELASTICSEARCH_USERNAME'),
          password: configService.get('ELASTICSEARCH_PASSWORD'),
        },
      }),
      inject: [ConfigService],
    }),
  ],
  controllers: [ContentSearchController],
  providers: [ContentSearchService],
  exports: [ElasticsearchModule, ContentSearchService],
})
This config worked great when I was developing my Nest.js app externally (outside docker-compose, using the localhost connection http://localhost:9200).
Now I'm trying to move everything into Docker. I managed to get everything working apart from this unfortunate connection to Elasticsearch. Can anyone advise what I'm doing wrong?
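One likely cause (a guess, since the hostname es01 evidently resolves in the error message): depends_on only waits for the es01 container to start, not for Elasticsearch inside it to accept connections, so the app's first request can hit ECONNREFUSED. A sketch of one workaround, assuming a Compose version that honors depends_on conditions (the legacy v3 file format ignores them; the newer docker compose CLI supports them):

```yaml
# Sketch only: give es01 a healthcheck and make app wait for it to pass.
services:
  es01:
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:9200 >/dev/null || exit 1"]
      interval: 10s
      retries: 12
  app:
    depends_on:
      es01:
        condition: service_healthy
```

Alternatively, retry the connection in application code instead of relying on Compose start-up ordering.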

Related

I got a 404 when running kibana on docker behind traefik, but elastic can be reached

I am having issues running ELK on Docker behind Traefik. Every other service is running, but when I try to access Kibana in a browser via its URL, I get a 404.
This is my docker-compose.yml:
version: '3.4'
networks:
  app-network:
    name: app-network
    driver: bridge
    ipam:
      config:
        - subnet: xxx.xxx.xxx.xxx/xxx
services:
  reverse-proxy:
    image: traefik:v2.5
    command:
      - --providers.docker.network=app-network
      - --providers.docker.exposedByDefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --providers.docker=true
      - --api=true
      - --api.dashboard=true
    ports:
      - "80:80"
      - "443:443"
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /certs/:/certs
    labels:
      - traefik.enable=true
      - traefik.docker.network=public
      - traefik.http.routers.traefik-http.entrypoints=web
      - traefik.http.routers.traefik-http.service=api@internal
  elasticsearch:
    hostname: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=docker-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elasticsearch.entrypoints=http"
      - "traefik.http.routers.elastic.rule=Host(`elastic.mydomain.fr`)"
      - "traefik.http.services.elastic.loadbalancer.server.port=9200"
  kibana:
    hostname: kibana
    image: docker.elastic.co/kibana/kibana:7.12.0
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    links:
      - elasticsearch
    healthcheck:
      interval: 10s
      retries: 20
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana.entrypoints=http"
      - "traefik.http.routers.kibana.rule=Host(`kibana.mydomain.fr`)"
      - "traefik.http.services.kibana.loadbalancer.server.port=5601"
      - "traefik.http.routers.kibana.entrypoints=websecure"
volumes:
  esdata:
    driver: local
Knowing that, as I said, Elastic and the other services can be accessed.
I have already tried to set the basePath, but it did not work either.
Do you have any idea what I am missing?
You named your entrypoints at the top "web" and "websecure", but the labels use "http" as the entrypoint; you have to rename them (unless you have also defined an entrypoint named "http" somewhere else). The name must match the one defined in the configuration string --entrypoints.web.address=:80.
So for example: "traefik.http.routers.elasticsearch.entrypoints=web"
Additional tip: you can remove the label with the loadbalancer port, because as long as you define an exposed or mapped port in Docker, Traefik recognises the port to use. I have no such line configured for my personal services: - "traefik.http.services.elastic.loadbalancer.server.port=9200"
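For illustration, applying both suggestions to the elasticsearch service's labels might look like the sketch below (it also unifies the router name, which the compose file above splits between elasticsearch and elastic; Traefik treats those as two different routers):

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.elastic.entrypoints=web"
  - "traefik.http.routers.elastic.rule=Host(`elastic.mydomain.fr`)"
```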
Thanks for your answer Zeikos, it helps me a lot.
I think it could be closed now

docker compose throwing econnrefused

While accessing the DB, it threw me this error:
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
How do I fix it so my application can connect to the database? As you can see in the code, my application relies on multiple databases. How can I make sure all of the database containers have started before the application starts?
version: '3.8'
networks:
  appnetwork:
    driver: bridge
services:
  mysql:
    image: mysql:8.0.27
    restart: always
    command: --init-file /data/application/init.sql
    environment:
      - MYSQL_ROOT_PASSWORD=11999966
      - MYSQL_DATABASE=interview
      - MYSQL_USER=interviewuser
      - MYSQL_PASSWORD=11999966
    ports:
      - 3306:3306
    volumes:
      - db:/var/lib/mysql
      - ./migration/init.sql:/data/application/init.sql
    networks:
      - appnetwork
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    restart: always
    ports:
      - 9200:9200
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - elastic:/usr/share/elasticsearch/data
    networks:
      - appnetwork
  redis:
    image: redis
    restart: always
    ports:
      - 6379:6379
    volumes:
      - cache:/var/lib/redis
    networks:
      - appnetwork
  mongodb:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - mongo:/var/lib/mongo
    networks:
      - appnetwork
  app:
    depends_on:
      - mysql
      - elasticsearch
      - redis
      - mongodb
    build: .
    restart: always
    ports:
      - 3000:3000
    networks:
      - appnetwork
    stdin_open: true
    tty: true
    command: npm start
volumes:
  db:
  elastic:
  cache:
  mongo:
The container (probably app) tries to connect to a MongoDB instance running on localhost (i.e. the container itself). Since there is nothing listening on port 27017 of this container, we get the error.
We can fix the problem by reconfiguring the application running in the container to use the name of the mongodb container (which, in the given docker-compose.yml, is also mongodb) instead of 127.0.0.1 or localhost.
If we have designed our app according to the twelve factors, it should be as simple as setting an environment variable for the container.
Use mongodb://mongodb:27017 as the connection string instead.
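A minimal sketch of that twelve-factor approach (the MONGO_* variable names are hypothetical, not from the original post):

```javascript
// Hypothetical helper: build the Mongo connection string from the
// environment, defaulting to the Compose service name "mongodb".
function mongoUri(env) {
  const host = env.MONGO_HOST || 'mongodb'; // e.g. "localhost" outside Docker
  const port = env.MONGO_PORT || '27017';
  const db = env.MONGO_DB || 'app';
  return `mongodb://${host}:${port}/${db}`;
}

// Inside the Compose network this yields mongodb://mongodb:27017/app,
// which mongoose.connect() can consume directly.
console.log(mongoUri(process.env));
```

The same code then works locally by exporting MONGO_HOST=127.0.0.1 before starting the app.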

What might be wrong with my Nuxt / Docker / Traefik config?

For some reason I can't get this to work. I'm trying to forward /api to the API container.
The error I'm getting:
nuxt | [6:11:03 PM] Error: connect ECONNREFUSED 127.0.0.1:80
nuxt | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1083:14)
I think /api is being redirected to 127.0.0.1:80, but I don't know why.
Traefik dashboard:
https://imgur.com/mqTXE9F
nuxt.config.js
...
axios: {
  baseURL: '/api'
},
server: {
  proxyTable: {
    '/api': {
      target: 'http://localhost:1337',
      changeOrigin: true,
      pathRewrite: {
        "^/api": ""
      }
    }
  }
},
...
docker-compose.yml
version: '3'
services:
  reverse-proxy:
    image: traefik
    command: --api --docker
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - mynet
  nuxt:
    # build: ./app/
    image: "registry.gitlab.com/username/package:latest"
    container_name: nuxt
    restart: always
    ports:
      - "3000:3000"
    command: "npm run start"
    networks:
      - mynet
    labels:
      - "traefik.backend=nuxt"
      - "traefik.frontend.rule=PathPrefixStrip:/"
      - "traefik.docker.network=mynet"
      - "traefik.port=3000"
  api:
    build: .
    image: strapi/strapi
    container_name: api
    environment:
      - APP_NAME=strapi-app
      - DATABASE_CLIENT=mongo
      - DATABASE_HOST=db
      - DATABASE_PORT=27017
      - DATABASE_NAME=strapi
      - DATABASE_USERNAME=
      - DATABASE_PASSWORD=
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=strapi
      - HOST=api
      - NODE_ENV=development
    ports:
      - 1337:1337
    volumes:
      - ./strapi-app:/usr/src/api/strapi-app
      #- /usr/src/api/strapi-app/node_modules
    depends_on:
      - db
    restart: always
    networks:
      - mynet
    labels:
      - "traefik.backend=api"
      - "traefik.docker.network=mynet"
      - "traefik.frontend.rule=PathPrefixStrip:/api"
      - "traefik.port=1337"
  db:
    image: mongo
    environment:
      - MONGO_INITDB_DATABASE=strapi
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    restart: always
    networks:
      - mynet
networks:
  mynet:
    external: true
I know this is a little late, but you should remove the proxy from the webpack dev server and instead set the right rules using labels on your api service.
So if you're using Traefik v2, the label on your nuxt service should be
labels:
  - "traefik.http.routers.nuxt.rule=Host(`myhost`)"
and the label on your api should be
labels:
  - "traefik.http.routers.api.rule=Host(`myhost`) && PathPrefix(`/api`)"

Can not share volume between services defined in docker-compose

I'm running Docker for Mac version 17.12.0-ce-mac55.
I have a docker-compose file that I'm converting from docker-compose version 3 to version 2 to work better with OpenShift.
---
version: '2'
services:
  fpm:
    build:
      context: .
      dockerfile: Dockerfile.openshift
      args:
        TIMEZONE: America/Chicago
        APACHE_DOCUMENT_ROOT: /usr/local/apache2/htdocs
    image: widget-fpm
    restart: always
    depends_on:
      - es
      - db
    environment:
      # taken from sample.env
      - TIMEZONE=${TIMEZONE}
      - APACHE_DOCUMENT_ROOT=/usr/local/apache2/htdocs
      - GET_HOSTS_FROM=dns
      - SYMFONY__DATABASE__HOST=db
      - SYMFONY__DATABASE__PORT=5432
      - SYMFONY__DATABASE__NAME=widget
      - SYMFONY__DATABASE__USER=widget
      - SYMFONY__DATABASE__PASSWORD=widget
      - SYMFONY__DATABASE__SCHEMA=widget
      - SYMFONY__DATABASE__DRIVER=pdo_pgsql
      - SYMFONY_ENV=prod
      - SYMFONY__ELASTICSEARCH__HOST=es:9200
      - SYMFONY__SECRET=dsakfhakjhsdfjkhajhjds
      - SYMFONY__LOCALE=en
      - SYMFONY__RBAC__HOST=rbac
      - SYMFONY__RBAC__PROTOCOL=http
      - SYMFONY__RBAC__CONNECT__PATH=v1/connect
      - SYMFONY__PROJECT_URL=http://localhost
      - SYMFONY__APP__NAME=widget
      - SYMFONY__CURRENT__API__VERSION=1
    volumes:
      # use docroot env to change this directory
      - src:/usr/local/apache2/htdocs
      - symfony-cache:/usr/local/apache2/htdocs/app/cache
      - symfony-log:/usr/local/apache2/htdocs/app/logs
    expose:
      - "9000"
    networks:
      - client-network
      - data-network
    labels:
      kompose.service.expose: "false"
  webserver:
    build: ./provisioning/webserver/apache
    image: widget_web
    restart: "no"
    ports:
      - "80"
      - "443"
    volumes_from:
      - fpm:ro
    depends_on:
      - fpm
    networks:
      - client-network
    labels:
      com.singlehop.description: "Widget Service Web Server"
      com.singlehop.development: "false"
      kompose.service.expose: "true"
      kompose.service.type: "nodeport"
  db:
    build: ./provisioning/database/postgres
    image: widget_postgres
    restart: always
    volumes:
      - data-volume:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: widget
      POSTGRES_PASSWORD: widget
    expose:
      - "5432"
    networks:
      - data-network
    labels:
      com.singlehop.description: "Widget Service Postgres Database Server"
      com.singlehop.development: "false"
      io.openshift.non-scalable: "true"
      kompose.service.expose: "false"
      kompose.volume.size: 100Mi
  es:
    image: elasticsearch:5.6
    restart: always
    environment:
      #- cluster.name=docker-cluster
      #- bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    command: ["-Ecluster.name=docker-cluster", "-Ebootstrap.memory_lock=true"]
    ulimits:
      memlock:
        soft: -1
        hard: -1
    labels:
      com.singlehop.description: "Generic Elasticsearch5 DB"
      com.singlehop.development: "false"
      kompose.service.expose: "false"
      kompose.volume.size: 100Mi
    volumes:
      - es-data:/usr/share/elasticsearch/data
    expose:
      - "9200-9300"
    networks:
      - data-network
  migration:
    # #todo can we use the exact same build/image I created above?
    image: singlehop/widget-fpm
    environment:
      # taken from sample.env
      - TIMEZONE=America/Chicago
      - APACHE_DOCUMENT_ROOT=/usr/local/apache2/htdocs
      - GET_HOSTS_FROM=dns
      - SYMFONY__DATABASE__HOST=db
      - SYMFONY__DATABASE__PORT=5432
      - SYMFONY__DATABASE__NAME=widget
      - SYMFONY__DATABASE__USER=widget
      - SYMFONY__DATABASE__PASSWORD=widget
      - SYMFONY__DATABASE__SCHEMA=widget
      - SYMFONY__DATABASE__DRIVER=pdo_pgsql
      - SYMFONY_ENV=prod
      - SYMFONY__ELASTICSEARCH__HOST=es:9200
      - SYMFONY__SECRET=dsakfhakjhsdfjkhajhjds
      - SYMFONY__LOCALE=en
      - SYMFONY__PROJECT_URL=http://localhost
      - SYMFONY__APP__NAME=widget
      - SYMFONY__CURRENT__API__VERSION=1
    entrypoint: ["/usr/local/bin/php", "app/console", "--no-interaction"]
    command: doctrine:migrations:migrate
    volumes:
      - src:/usr/local/apache2/htdocs
    depends_on:
      - db
    networks:
      - data-network
    labels:
      com.singlehop.description: "Widget Automated Symfony Migration"
      com.singlehop.development: "false"
volumes:
  src: {}
  data-volume: {}
  es-data: {}
  symfony-cache: {}
  symfony-log: {}
networks:
  client-network:
  data-network:
I'm using the fpm service to act like a data container and share PHP code with the webserver service. For some reason the named volume src is not being shared with the webserver service/container. I've tried both setting the volumes and using volumes_from.
I'm assuming this is possible, and I feel it would be bad practice to keep another copy of the source code in the widget_web Dockerfile.
The depends_on in the fpm service was breaking the named volume src. When I removed the depends_on declaration, it worked as I assumed it would. I can't tell if this is a bug or working as designed.
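For reference, one way to share code between two services without volumes_from is to mount the same named volume in both; this is a sketch, not the poster's verified fix:

```yaml
# Sketch: both services mount the same named volume, so content written
# into it by one container is visible to the other.
services:
  fpm:
    volumes:
      - src:/usr/local/apache2/htdocs
  webserver:
    volumes:
      - src:/usr/local/apache2/htdocs:ro
volumes:
  src: {}
```

Note that Docker only pre-populates a named volume from the image's files when the volume is empty, so the first container to write wins.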

Redis connection to 127.0.0.1:6379 failed using Docker

Hi, I'm using docker-compose to handle all of my configuration.
I have Mongo, Node, Redis, and an Elastic stack.
But I can't get my Redis to connect to my Node app.
Here is my docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.6
    container_name: "backend-mongo"
    ports:
      - "27017:27017"
    volumes:
      - "./data/db:/data/db"
  redis:
    image: redis:4.0.7
    ports:
      - "6379:6379"
    user: redis
  adminmongo:
    container_name: "backend-adminmongo"
    image: "mrvautin/adminmongo"
    ports:
      - "1234:1234"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
    container_name: "backend-elastic"
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  web:
    container_name: "backend-web"
    build: .
    ports:
      - "8888:8888"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/backend
    restart: always
    depends_on:
      - mongo
      - elasticsearch
      - redis
    volumes:
      - .:/backend
      - /backend/node_modules
volumes:
  esdata1:
    driver: local
networks:
  esnet:
Things to note:
Redis is already running (I can ping it)
I don't have any services running on my host, only in containers
The other containers (except Redis) work well
I've tried the methods below:
const redisClient = redis.createClient({host: 'redis'});
const redisClient = redis.createClient(6379, '127.0.0.1');
const redisClient = redis.createClient(6379, 'redis');
I'm using
Docker 17.12
Xubuntu 16.04
How can I connect my app to my Redis container?
Adding
hostname: redis
under the redis section fixes this issue.
So it will be something like this:
redis:
  image: redis:4.0.7
  ports:
    - "6379:6379"
  command: ["redis-server", "--appendonly", "yes"]
  hostname: redis
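On the application side, a small sketch of picking the Redis host from the environment (the REDIS_* variable names are hypothetical), so the same code works both inside Compose and during local development:

```javascript
// Hypothetical helper: choose Redis connection options from the
// environment, defaulting to the Compose service/host name "redis".
function redisOptions(env) {
  return {
    host: env.REDIS_HOST || 'redis',   // e.g. "127.0.0.1" outside Docker
    port: Number(env.REDIS_PORT) || 6379,
  };
}

// e.g. const redisClient = redis.createClient(redisOptions(process.env));
console.log(redisOptions(process.env));
```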
