docker compose throwing econnrefused - docker

While accessing the database, my application threw this error:
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
How do I fix it so my application can connect to the database? As you can see in the compose file below, my application relies on multiple databases. How can I make sure all of the database containers have started before the application starts?
version: '3.8'
networks:
  appnetwork:
    driver: bridge
services:
  mysql:
    image: mysql:8.0.27
    restart: always
    command: --init-file /data/application/init.sql
    environment:
      - MYSQL_ROOT_PASSWORD=11999966
      - MYSQL_DATABASE=interview
      - MYSQL_USER=interviewuser
      - MYSQL_PASSWORD=11999966
    ports:
      - 3306:3306
    volumes:
      - db:/var/lib/mysql
      - ./migration/init.sql:/data/application/init.sql
    networks:
      - appnetwork
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    restart: always
    ports:
      - 9200:9200
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - elastic:/usr/share/elasticsearch/data
    networks:
      - appnetwork
  redis:
    image: redis
    restart: always
    ports:
      - 6379:6379
    volumes:
      - cache:/var/lib/redis
    networks:
      - appnetwork
  mongodb:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - mongo:/var/lib/mongo
    networks:
      - appnetwork
  app:
    depends_on:
      - mysql
      - elasticsearch
      - redis
      - mongodb
    build: .
    restart: always
    ports:
      - 3000:3000
    networks:
      - appnetwork
    stdin_open: true
    tty: true
    command: npm start
volumes:
  db:
  elastic:
  cache:
  mongo:

The container (probably app) tries to connect to a MongoDB instance running on localhost (i.e. the container itself). Since nothing is listening on port 27017 inside this container, we get the error.
We can fix the problem by reconfiguring the application running in the container to use the name of the mongodb container (which, in the given docker-compose.yml, is also mongodb) instead of 127.0.0.1 or localhost.
If we have designed our app according to the twelve-factor principles, it should be as simple as setting an environment variable for the container.

Use mongodb://mongodb:27017 as the connection string instead.
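To also cover the second part of the question (starting the app only after the databases are up), newer Docker Compose releases can combine depends_on with healthchecks. A minimal sketch, assuming the MONGO_URL variable name, the appdb database name, and the healthcheck command are illustrative choices of ours and the application is changed to read the variable:

services:
  mongodb:
    image: mongo
    # illustrative healthcheck so Compose knows when MongoDB accepts connections
    # (mongosh ships with recent mongo images; older images use the mongo shell)
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    build: .
    environment:
      # hypothetical variable the app reads instead of hard-coding 127.0.0.1;
      # the hostname "mongodb" resolves to that service's container on appnetwork
      - MONGO_URL=mongodb://mongodb:27017/appdb
    depends_on:
      mongodb:
        condition: service_healthy   # requires a Compose version that supports conditions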

Related

I got a 404 when running kibana on docker behind traefik, but elastic can be reached

I am having issues running ELK on Docker behind Traefik. All the other services are running, but when I try to access Kibana in a browser via its URL, I get a 404.
This is my docker-compose.yml:
version: '3.4'
networks:
  app-network:
    name: app-network
    driver: bridge
    ipam:
      config:
        - subnet: xxx.xxx.xxx.xxx/xxx
services:
  reverse-proxy:
    image: traefik:v2.5
    command:
      --providers.docker.network=app-network
      --providers.docker.exposedByDefault=false
      --entrypoints.web.address=:80
      --entrypoints.websecure.address=:443
      --providers.docker=true
      --api=true
      --api.dashboard=true
    ports:
      - "80:80"
      - "443:443"
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /certs/:/certs
    labels:
      - traefik.enable=true
      - traefik.docker.network=public
      - traefik.http.routers.traefik-http.entrypoints=web
      - traefik.http.routers.traefik-http.service=api#internal
  elasticsearch:
    hostname: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=docker-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elasticsearch.entrypoints=http"
      - "traefik.http.routers.elastic.rule=Host(`elastic.mydomain.fr`)"
      - "traefik.http.services.elastic.loadbalancer.server.port=9200"
  kibana:
    hostname: kibana
    image: docker.elastic.co/kibana/kibana:7.12.0
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      app-network:
        ipv4_address: xxx.xxx.xxx.xxx
    links:
      - elasticsearch
    healthcheck:
      interval: 10s
      retries: 20
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.kibana.entrypoints=http"
      - "traefik.http.routers.kibana.rule=Host(`kibana.mydomain.fr`)"
      - "traefik.http.services.kibana.loadbalancer.server.port=5601"
      - "traefik.http.routers.kibana.entrypoints=websecure"
volumes:
  esdata:
    driver: local
Knowing that, as I said, Elasticsearch and the other services can be reached.
I have already tried setting the basePath, but that did not work either.
Do you have any idea what I am missing?
You named your entrypoints "web" and "websecure" at the top, but the labels use "http" as the entrypoint, so you have to rename them (unless you have defined an entrypoint called http somewhere else). The name has to match the word you define in the configuration flag: --entrypoints.web.address=:80
So, for example: "traefik.http.routers.elasticsearch.entrypoints=web"
Additional tip: you can remove the label with the load balancer port, because as long as you define an exposed or mapped port in Docker, Traefik recognises the port to use. I have no such line configured for my personal services: - "traefik.http.services.elastic.loadbalancer.server.port=9200"
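For illustration, a hedged sketch of how the labels could look with matching entrypoint names (the router names are kept consistent between the entrypoints and rule labels; everything else stays as in the original file):

elasticsearch:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.elastic.entrypoints=web"          # was "http"
    - "traefik.http.routers.elastic.rule=Host(`elastic.mydomain.fr`)"
kibana:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.kibana.entrypoints=websecure"     # keep a single entrypoints label
    - "traefik.http.routers.kibana.rule=Host(`kibana.mydomain.fr`)"
    # the loadbalancer.server.port labels can be dropped, as noted above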

Cannot setup docker-compose file to launch kibana at version 7.3.2

I am looking for a working version of a docker-compose file that starts up Kibana and Elasticsearch together on Docker for Mac at version 7.3.2. I've followed the most recent instructions in the Kibana and Elasticsearch 7.3.2 documentation, and my docker-compose.yml file below is the union of what I gathered from both docs. (The Kibana doc was the most vague with respect to the Docker Compose config.) I've also tried following other Stack Overflow articles (written for older versions), but they don't seem to work with the latest versions. I now suspect I'm missing something version specific; 7.3.1 didn't work with the same config either.
I should note that the Elasticsearch portion of the file works fine; I can hit http://localhost:9200 and I get a JSON response. However, Kibana's URL (http://localhost:5601) returns Kibana server is not ready yet with this error:
kibana | {"type":"log","@timestamp":"2019-09-12T21:45:04Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"Unable to revive connection: http://elasticsearch:9200/"}
This is my best attempt so far:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.3.2
    ports:
      - 5601:5601
    networks:
      - esnet
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_URL: http://elasticsearch:9200
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
Docker Compose automatically creates a private Docker network for you, and within that, the names of the service: blocks are valid hostnames.
When you set
ELASTICSEARCH_URL: http://elasticsearch:9200
none of your containers is named elasticsearch, so the hostname lookup fails; but if you pick either node, es01 or es02, it will work:
ELASTICSEARCH_URL: http://es01:9200
(Note that you don’t explicitly need a networks: definition for this to work, Compose will create a network named default for you. You also don’t need to explicitly set container_name: unless you’re planning on trying to manage the same containers with non-Compose tooling.)
Use ELASTICSEARCH_HOSTS: http://es01:9200 instead of ELASTICSEARCH_URL to set the environment from the docker-compose.yml file. Here is the Kibana documentation about environment-variable configuration: https://www.elastic.co/guide/en/kibana/current/docker.html#environment-variable-config.
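Put together, the kibana service block might look like this sketch (only the environment section differs from the compose file above; ELASTICSEARCH_URL was the 6.x variable name, while Kibana 7.x reads ELASTICSEARCH_HOSTS):

kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.3.2
  ports:
    - 5601:5601
  networks:
    - esnet
  environment:
    SERVER_NAME: kibana.example.org
    # point Kibana at one of the existing nodes by service name
    ELASTICSEARCH_HOSTS: http://es01:9200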
You also need to add the network configuration to the kibana service, like
networks:
  - esnet
and set ELASTICSEARCH_HOSTS: http://es01:9200
(note that es01 is your container name).

Deprecated field [disable_coord] used, replaced by [disable_coord has been removed]

I don't know if this is a good question or not. I've never worked with Elasticsearch before.
I'm getting "[WARN ][o.e.d.c.ParseField ] [vLJycm6] Deprecated field [disable_coord] used, replaced by [disable_coord has been removed]" in the Docker output log when I start the Elasticsearch container. I'm using this container for Graylog 3.
Do I need to be concerned about this "warning" from Elasticsearch?
This code is part of my docker-compose file:
mongodb:
  container_name: mongodb
  image: mongo:latest
  restart: on-failure
  networks:
    - dev
  volumes:
    - mongodbdata_dev:/data/db
  ports:
    - '27017:27017'
elasticsearch:
  container_name: elasticsearch
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
  volumes:
    - elasticsearchdata_dev:/usr/share/elasticsearch/data
  networks:
    - dev
  environment:
    - http.host=0.0.0.0
    - transport.host=localhost
    - network.host=0.0.0.0
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
graylog:
  container_name: graylog
  image: graylog/graylog:3.0
  volumes:
    - graylogdata_dev:/usr/share/graylog/data
  networks:
    - dev
  environment:
    - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
    - GRAYLOG_ROOT_PASSWORD_SHA2=somesha
    - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
  links:
    - mongodb:mongo
    - elasticsearch
  depends_on:
    - mongodb
    - elasticsearch
  ports:
    - 9000:9000
    - 1514:1514
    - 1514:1514/udp
    - 12201:12201
    - 12201:12201/udp
Please let me know if you need any other information.
Apparently Graylog still has some options for Elasticsearch 5, and since you're using 6+, it's complaining. But there's no harm done until Elasticsearch 5 support is removed from Graylog, so ignore it for now.

Redis connection to 127.0.0.1:6379 failed using Docker

Hi, I'm using Docker Compose to handle all of my configuration.
I have Mongo, Node, Redis, and the Elastic stack.
But I can't get my Redis to connect to my Node app.
Here is my docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.6
    container_name: "backend-mongo"
    ports:
      - "27017:27017"
    volumes:
      - "./data/db:/data/db"
  redis:
    image: redis:4.0.7
    ports:
      - "6379:6379"
    user: redis
  adminmongo:
    container_name: "backend-adminmongo"
    image: "mrvautin/adminmongo"
    ports:
      - "1234:1234"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
    container_name: "backend-elastic"
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  web:
    container_name: "backend-web"
    build: .
    ports:
      - "8888:8888"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/backend
    restart: always
    depends_on:
      - mongo
      - elasticsearch
      - redis
    volumes:
      - .:/backend
      - /backend/node_modules
volumes:
  esdata1:
    driver: local
networks:
  esnet:
Things to notice:
Redis is already running (I can ping it)
I don't have any services running on my host, only in the containers
The other containers (except redis) work well
I've tried these methods below:
const redisClient = redis.createClient({host: 'redis'});
const redisClient = redis.createClient(6379, '127.0.0.1');
const redisClient = redis.createClient(6379, 'redis');
I'm using:
Docker 17.12
Xubuntu 16.04
How can I connect my app to my Redis container?
Adding
hostname: redis
under the redis section fixes this issue.
So it will be something like this:
redis:
  image: redis:4.0.7
  ports:
    - "6379:6379"
  command: ["redis-server", "--appendonly", "yes"]
  hostname: redis
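For completeness, a minimal sketch of passing the hostname to the web service through the compose file instead of hard-coding it (the REDIS_HOST and REDIS_PORT names are assumptions; the Node app would have to read them and pass them to redis.createClient):

web:
  build: .
  environment:
    - MONGODB_URI=mongodb://mongo:27017/backend
    - REDIS_HOST=redis        # hypothetical variable read by the Node app
    - REDIS_PORT=6379
  depends_on:
    - redis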

Rancher running an APP that needs internet access

Rancher v 1.6.10, Docker v 17.06.2-ce
I'm deploying a stack via the Rancher UI that contains a docker container running an app which connects to Dropbox over the internet. But the app isn't able to access the internet.
However, if I don't use Rancher and simply run docker-compose up natively, then it all works fine.
The networking that Rancher creates appears to be the problem, I guess.
Can anyone advise, please?
My docker compose file:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: es1
    environment:
      - cluster.name=idc-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - docker-elk
  idcdb:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=DriveMe
      - POSTGRES_USER=idc
      - POSTGRES_DB=idc
    volumes:
      - pgdata:/var/lib/db
  idcredis:
    image: redis:4.0
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/var/lib/redis
  booking-service:
    environment:
      - PORT=8085
      - PROFILE=integration
    ports:
      - 8085:8085
    image: idc/idc-booking-service
    depends_on:
      - idcdb
      - idcredis
  notification-service:
    environment:
      - PORT=8087
      - PROFILE=integration
    ports:
      - 8087:8087
    image: idc/idc-notification-service
    depends_on:
      - idcredis
  analytics-service:
    environment:
      - PORT=8088
      - PROFILE=integration
    ports:
      - 8088:8088
    image: idc/idc-analytics-service
    depends_on:
      - idcredis
      - elasticsearch1
  kibana:
    image: docker.elastic.co/kibana/kibana:5.6.3
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch1:9200"
    networks:
      - docker-elk
volumes:
  pgdata: {}
  redisdata: {}
  esdata1:
    driver: local
networks:
  docker-elk:
    driver: bridge
You should specify the network when starting the container:
--net=host
If this does not solve your problem, edit the NetworkManager configuration:
sudo gedit /etc/NetworkManager/NetworkManager.conf
comment out the following line:
#dns=dnsmasq
and then restart the service:
sudo restart network-manager
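If the stack is described in a compose file, the rough equivalent of --net=host is network_mode. A sketch under the assumption that only the Dropbox-facing service needs it (note that host networking ignores ports: mappings and custom networks):

dropbox-app:                       # hypothetical name for whichever service talks to Dropbox
  image: idc/idc-booking-service   # placeholder; use the image of the Dropbox-facing service
  network_mode: host               # the container shares the host's network stack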
You could use a Rancher LB and add it to your application as follows:
In the stack where your application is, click the Add Service button and select Add a Load Balancer.
Then make sure that where it says Access, it is set to Public.
In Request Host, add the desired URL, such as mylocal.dev.
Then add port 80 so it will be accessible from the outside world on port 80.
Select the service you want the LB to apply to and the internal application port.
That's all :) now you should be able to connect to mylocal.dev from the outside world.
