I have a Dockerfile and a docker-compose.yml set up, but I'm not sure they are correct, and I am unable to run them without an error.
My Dockerfile is:
FROM golang:1.14-alpine
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go get
RUN go run server.go
and my compose.yml is:
version: "3.5"
services:
elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
ports:
- 9200:9200
gqlgen:
container_name: "gqlgen"
build: ./
restart: "on-failure"
ports:
- "8080:8080"
depends_on:
- elasticsearch
I tried running docker-compose up from the root directory, and this is what I get:
panic: Get "http://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused
I think I am doing my setup wrong.
UPDATE:
Based on suggestions and more material that I read online, I changed my Dockerfile to:
FROM golang:1.14-alpine
RUN mkdir /app
ADD . /app
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o server .
CMD ["./server"]
and compose file:
version: "3.5"
services:
elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- bootstrap.memory_lock=true
- cluster.initial_master_nodes=elasticsearch
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
golang:
container_name: "golang"
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
restart: unless-stopped
depends_on:
- elasticsearch
and it builds correctly now. But I hit the same issue when running docker-compose up:
panic: Get "http://elasticsearch:9200/": dial tcp 172.18.0.2:9200: connect: connection refused
You have a problem because you are addressing Elasticsearch incorrectly.
Inside a Docker container, 127.0.0.1 refers to the container itself, so your app is trying to find Elasticsearch where there isn't one.
The correct way to reference one Docker container from another is by its container (service) name. So in your case, it would be the name elasticsearch.
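To sanity-check the wiring, you can run a one-off container on the same Compose network and hit Elasticsearch by its service name. A quick probe, assuming the service names from your compose file (the Alpine-based image ships BusyBox wget):
docker-compose run --rm gqlgen wget -qO- http://elasticsearch:9200/
If that prints the Elasticsearch banner JSON, the DNS wiring is fine and only the URL in your code needs to change.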
Edit:
There is another issue with your configuration.
You are missing some vital elements of the Elasticsearch configuration.
Here is a snippet with a minimal configuration for a single-node Elasticsearch cluster.
services:
  elasticsearch:
    container_name: "elasticsearch"
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - cluster.initial_master_nodes=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
All I have written before is still valid. After modifying the docker-compose file, your last version, which refers to Elasticsearch via http://elasticsearch:9200, should work fine.
Edit:
As @David Maze pointed out, there is a third issue in your example.
Instead of RUN go run server.go you should have CMD go run server.go.
What you are doing is running your app during the build, when you actually want to run it inside the container at startup.
The more conventional approach is to build the app and, instead of copying the source, copy the binary into the container and run that binary inside the container.
Here is some information about that: https://medium.com/travis-on-docker/multi-stage-docker-builds-for-creating-tiny-go-images-e0e1867efe5a
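For illustration, here is a minimal multi-stage sketch along those lines. It assumes your module produces a single server binary from the repository root; adjust paths and names to your project:

# Build stage: compile the binary with the full Go toolchain
FROM golang:1.14-alpine AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /server .

# Runtime stage: ship only the compiled binary
FROM alpine:3.12
COPY --from=build /server /server
CMD ["/server"]

The final image contains just the binary, and CMD (not RUN) ensures the app starts when the container starts, not during the build.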
So the above advice to replace localhost with elasticsearch is correct.
But this should happen only when you are starting your app with docker-compose.
Do not attempt to call Elasticsearch from your IDE using elasticsearch instead of localhost.
I suggest making the Elasticsearch host configurable: keep localhost for the local configuration, and override it in the docker-compose file.
version: "3.5"
services:
elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- bootstrap.memory_lock=true
- cluster.initial_master_nodes=elasticsearch
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
golang:
container_name: "golang"
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
restart: unless-stopped
depends_on:
- elasticsearch
environment:
- ELASTICSEARCH_HOST: elasticsearch
where ELASTICSEARCH_HOST is a variable that you use in your project.
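As a sketch of the application side, this is roughly how the variable could be read in Go, falling back to localhost outside of docker-compose. The variable name and the plain net/http ping are assumptions for illustration; wire the host into whatever Elasticsearch client your project actually uses:

package main

import (
	"fmt"
	"net/http"
	"os"
)

// elasticsearchHost reads the host from the environment,
// defaulting to localhost for local development.
func elasticsearchHost() string {
	if host := os.Getenv("ELASTICSEARCH_HOST"); host != "" {
		return host
	}
	return "localhost"
}

func main() {
	url := fmt.Sprintf("http://%s:9200/", elasticsearchHost())
	resp, err := http.Get(url)
	if err != nil {
		panic(err) // same panic you saw, but now against the right host
	}
	defer resp.Body.Close()
	fmt.Println("Elasticsearch responded with", resp.Status)
}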
Related
Unable to pull Elasticsearch and Kibana images using Docker Compose.
When I retry multiple times using the docker-compose up command, each time some of the services are unavailable, and which ones is unpredictable.
Can somebody please guide me on what is causing the issue, even though the proxy has been set in docker.service?
I have given the docker-compose.yaml file below for reference.
Kindly let me know in case any further information is needed.
docker-compose.yml file:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  esdata1:
    driver: local
It was an issue with the RHEL server; after trying multiple times, the issue got resolved.
I'm having a strange problem I can't work out, because the problem people describe when searching for this error is different from mine: they seem to have experienced it when trying to connect Filebeat to Logstash.
However, I am trying to write logs directly to Elasticsearch, and I am getting Logstash-related errors even though I am not even spinning up a Logstash container in Docker Compose.
Main Docker Compose File:
version: '2.2'
services:
  filebeat:
    container_name: filebeat
    build:
      context: .
      dockerfile: filebeat.Dockerfile
    volumes:
      - ./logs:/var/log
    networks:
      - esnet
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - cluster.name=docker-
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  elastichq:
    container_name: elastichq
    image: elastichq/elasticsearch-hq
    ports:
      - 8080:5000
    environment:
      - HQ_DEFAULT_URL=http://elasticsearch:9200
      - HQ_ENABLE_SSL=False
      - HQ_DEBUG=FALSE
    networks:
      - esnet
networks:
  esnet:
Dockerfile for Filebeat:
FROM docker.elastic.co/beats/filebeat:7.5.2
COPY filebeat/filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN chmod 644 /usr/share/filebeat/filebeat.yml
USER filebeat
I am trying to read JSON logs that are already in Elasticsearch format, so after reading the docs I decided to try writing directly to Elasticsearch, which seems to be valid depending on the application.
My Sample.json file:
{"#timestamp":"2020-02-10T09:35:20.7793960+00:00","level":"Information","messageTemplate":"The value of i is {LoopCountValue}","message":"The value of i is 0","fields":{"LoopCountValue":0,"SourceContext":"WebAppLogger.Startup","Environment":"Development","ApplicationName":"ELK Logging Demo"}}
My Filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.json
    json.keys_under_root: true
    json.add_error_key: true
    json.message_key: log

#----------------------------- Elasticsearch output --------------------------------
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"
As stated in the title of this post, I get this message in the console:
filebeat | 2020-02-10T09:38:24.438Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash:5044)): lookup logstash on 127.0.0.11:53: no such host
Then when I eventually try to visualize the data in ElasticHQ, inevitably, nothing is there.
So far, I've tried commands like docker system prune, just in case there's something funny going on with Docker.
Is there something I'm missing?
You have misconfigured your filebeat.yml file. Look at this error:
Failed to connect to backoff(async(tcp://logstash:5044))
Filebeat tries to connect to Logstash, because this is the default configuration. In fact, on one hand you show a filebeat.yml file, but on the other hand you haven't mounted it at /usr/share/filebeat/filebeat.yml; look at your volumes settings:
filebeat:
  container_name: filebeat
  build:
    context: .
    dockerfile: filebeat.Dockerfile
  volumes:
    - ./logs:/var/log
  networks:
    - esnet
You should mount it. If you insist on copying it into the Docker container with a Dockerfile (why reinvent the wheel and add complexity?), you should use the root user:
USER root
and add the root user to your service in docker-compose.yml:
user: root
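For example, the filebeat service could mount the config directly instead of baking it into an image. This is a sketch that assumes filebeat.yml sits in a filebeat/ folder next to the compose file, matching the COPY path in the Dockerfile above:

filebeat:
  container_name: filebeat
  image: docker.elastic.co/beats/filebeat:7.5.2
  user: root
  volumes:
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - ./logs:/var/log
  networks:
    - esnet

With the file mounted (and the elasticsearch output from your filebeat.yml actually loaded), Filebeat stops falling back to the default logstash:5044 output.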
I'm trying to run Elasticsearch and Kibana inside docker-compose.
When I bring up the containers using docker-compose up, Elasticsearch loads fine. After it loads, the Kibana container starts up, but once it loads, it is not able to see or connect to the Elasticsearch container, producing these messages:
Kibana docker Log:
{"type":"log","#timestamp":"2020-01-22T19:57:27Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch01:9200/"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
I am not able to see the Elasticsearch host from the Kibana container:
curl -X GET http://elasticsearch01:9200
throws the error quoted below:
curl: (7) Failed connect to elasticsearch01:9200; No route to host
After digging deeply, I found out this is happening only on CentOS 8.
Also, on the same CentOS 8 host, I am able to bring up and use standalone Elasticsearch and Kibana instances via systemctl services.
Am I missing something here?
Can anyone help?
docker-compose.yml:
networks:
  docker-elk:
    driver: bridge
services:
  elasticsearch01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: elasticsearch01
    secrets:
      - source: elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    restart: always
    environment:
      - node.name=elasticsearch01
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticdata:/usr/share/elasticsearch/data
    ports:
      - "9200"
    expose:
      - "9200"
      - "9300"
    networks:
      - docker-elk
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    container_name: kibana
    depends_on: ['elasticsearch01']
    environment:
      - SERVER_NAME=kibanaServer
    restart: always
    secrets:
      - source: kibana.yml
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - docker-elk
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports: ['5601:5601']
    links:
      - elasticsearch01
volumes:
  elasticdata:
    driver: local
  kibanadata:
    driver: local
secrets:
  elasticsearch.yml:
    file: ./ELK_Config/elastic/elasticsearch.yml
  kibana.yml:
    file: ./ELK_Config/kibana/kibana.yml
System/Docker info:
OS: CentOS 8
ELK versions: 7.4.0
Docker: version 19.03.4, build 9013bf583a
docker-compose: version 1.25.0, build 0a186604
I'm running one Elasticsearch instance with:
version: '3'
services:
  elasticsearch:
    build:
      context: .
      dockerfile: ./compose/elasticsearch/Dockerfile
      args:
        - VERSION=${VERSION}
        - MEM=${MEM}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
        - CLUSTER_NAME=${CLUSTER_NAME_DEV}
        - ENV=${ENV_DEV}
    container_name: elasticsearch
    network_mode: host
    environment:
      - discovery.type=single-node
    volumes:
      - /var/lib/elasticsearch:/usr/share/elasticsearch/data
  logstash:
    build:
      context: .
      dockerfile: ./compose/logstash/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
        - DB_HOST=${DB_HOST_DEV}
        - DB_NAME=${DB_NAME_DEV}
        - ENV=${ENV_DEV}
    container_name: logstash
    network_mode: host
    volumes:
      - /opt/logstash/data:/usr/share/logstash/data
    dns:
      - 192.168.1.1 # IP needed to reach a database instance external to the server on which the container is running
  kibana:
    build:
      context: .
      dockerfile: ./compose/kibana/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
    container_name: kibana
    depends_on:
      - elasticsearch
    network_mode: host
  nginx:
    build:
      context: .
      dockerfile: ./compose/nginx/Dockerfile
      args:
        - KIBANA_HOST=${KIBANA_HOST_DEV}
        - KIBANA_PORT=${KIBANA_PORT_DEV}
    container_name: nginx
    network_mode: host
    depends_on:
      - kibana
  apm:
    build:
      context: .
      dockerfile: ./compose/apm/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
        - APM_PORT=${APM_PORT_DEV}
    container_name: apm
    depends_on:
      - elasticsearch
    network_mode: host
(I think this one uses the host's /var/lib/elasticsearch when the container accesses /usr/share/elasticsearch/data, so the data is persisted in /var/lib/elasticsearch on the host.)
And another one with:
version: '3'
services:
  elasticsearch-search:
    restart: always
    build:
      context: .
      dockerfile: ./compose/elasticsearch/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH_DEV}
        - MEM=${MEM_SEARCH}
        - CLUSTER_NAME=${CLUSTER_NAME_SEARCH_DEV}
        - ENV=${ENV_DEV}
    container_name: elasticsearch-search
    network_mode: host
    environment:
      - discovery.type=single-node
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  kibana:
    build:
      context: .
      dockerfile: ./compose/kibana/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_SEARCH_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH_DEV}
    container_name: kibana-search
    depends_on:
      - elasticsearch-search
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/usr/share/elasticsearch/data
volumes:
  data:
(I'm not sure how this one works out, but I guess Docker provides persistent storage that can be accessed via /usr/share/elasticsearch/data from the container.)
When I run them at the same time, I expect the two Elasticsearch instances to use separate data, but they seem to be interfering with each other.
I have a Kibana running which looks at the first ES.
When I run the first ES alone, I can see the data, but as soon as I run the second ES, there's nothing: no index pattern, no dashboard.
What am I misunderstanding?
.env
ELASTICSEARCH_PORT_DEV=29200
ELASTICSEARCH_PORT_SEARCH_DEV=29300
Most probably something is wrong with your docker-compose files, in terms of their volumes: sections.
The second example has this at the top:
volumes:
  - data:/usr/share/elasticsearch/data
and this at the bottom:
volumes:
  - /etc/localtime:/etc/localtime:ro
  - data:/usr/share/elasticsearch/data
which means that at least two separate containers bind to the same local data volume. That is definitely a way to see strange things, because something inside those containers (ES is one of them) will try to recreate its data storage hierarchy in the shared location.
Can you just try defining volumes for the first ES as:
volumes:
  - ./data/es1:/usr/share/elasticsearch/data
and for the second one as:
volumes:
  - ./data/es2:/usr/share/elasticsearch/data
Just make sure that the ./data/es1 and ./data/es2 folders are there on your host before doing docker-compose up.
Or you can post the whole docker-compose.yml file so we can say what is wrong with it...
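Alternatively, if you want to keep named volumes rather than bind mounts, giving each compose file its own volume name also prevents the clash. A sketch (es1data and es2data are names I am assuming here):

# In the first docker-compose.yml
services:
  elasticsearch:
    volumes:
      - es1data:/usr/share/elasticsearch/data
volumes:
  es1data:

# In the second docker-compose.yml
services:
  elasticsearch-search:
    volumes:
      - es2data:/usr/share/elasticsearch/data
volumes:
  es2data: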
Rancher v 1.6.10, Docker v 17.06.2-ce
I'm deploying a stack via the Rancher UI that contains a Docker container running an app which connects to Dropbox via the internet. But the app isn't able to access the internet.
However, if I don't use Rancher and simply run docker-compose up natively, it all works fine.
The networking that Rancher creates appears to be the problem, I guess.
Can I be advised, please?
My docker compose file:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: es1
    environment:
      - cluster.name=idc-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - docker-elk
  idcdb:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=DriveMe
      - POSTGRES_USER=idc
      - POSTGRES_DB=idc
    volumes:
      - pgdata:/var/lib/db
  idcredis:
    image: redis:4.0
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/var/lib/redis
  booking-service:
    environment:
      - PORT=8085
      - PROFILE=integration
    ports:
      - 8085:8085
    image: idc/idc-booking-service
    depends_on:
      - idcdb
      - idcredis
  notification-service:
    environment:
      - PORT=8087
      - PROFILE=integration
    ports:
      - 8087:8087
    image: idc/idc-notification-service
    depends_on:
      - idcredis
  analytics-service:
    environment:
      - PORT=8088
      - PROFILE=integration
    ports:
      - 8088:8088
    image: idc/idc-analytics-service
    depends_on:
      - idcredis
      - elasticsearch1
  kibana:
    image: docker.elastic.co/kibana/kibana:5.6.3
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch1:9200"
    networks:
      - docker-elk
volumes:
  pgdata: {}
  redisdata: {}
  esdata1:
    driver: local
networks:
  docker-elk:
    driver: bridge
You should specify the network when starting Docker:
--net=host
If this does not solve your problem:
sudo gedit /etc/NetworkManager/NetworkManager.conf
comment out the following line:
#dns=dnsmasq
then:
sudo restart network-manager
You could use a Rancher LB and add it to your application as follows:
In the stack where your application is, click the Add Service button and select Add a Load Balancer.
Then make sure that where it says Access, it is set to Public.
In the Request Host, add the desired URL, such as mylocal.dev.
Then add port 80 so it will be accessible from the outside world on port 80.
Select the service you want the LB to apply to, and the internal application port.
That's all :) Now you should be able to connect to mylocal.dev from the outside world.