I want to run Elasticsearch and Kibana with Docker Compose.
This is my docker-compose.yml, which I run with docker-compose --env-file dev.env up:
Docker Compose
version: '3.1'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.1
    container_name: elasticsearch
    environment:
      - cluster.name=elasticsearch-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
      - xpack.security.enrollment.enabled=true
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:8.1.1
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS}
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
      - xpack.security.enabled=true
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
    networks:
      - esnet
volumes:
  esdata:
    driver: local
  postgres-data:
    driver: local
networks:
  esnet:
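For context, dev.env is assumed to contain values along these lines (placeholders, not real credentials):

ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=changeme
ELASTICSEARCH_HOSTS=http://elasticsearch:9200

With ELASTICSEARCH_USERNAME=elastic, Kibana refuses to start, which is the error shown below.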
Stacktrace
Error: [config validation of [elasticsearch].username]: value of "elastic" is forbidden. This is a superuser account that cannot write to system indices that Kibana needs to function. Use a service account token instead
I managed to create a service account token, for example for the elastic/kibana service account, but how can I set it in docker-compose? Is there a specific env variable that I should use?
Or is there a way to make it work without using a service account?
I stumbled upon the same issue and tried using the kibana_admin and kibana_system built-in users, but that didn't work either. Maybe you can set the password for these users, but I was not able to.
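For what it's worth, in 8.x you can usually (re)set the password of a built-in user such as kibana_system with the elasticsearch-reset-password tool inside the running container; a sketch, assuming the container name elasticsearch from the compose file above:

docker exec -it elasticsearch bin/elasticsearch-reset-password -u kibana_system

Kibana can then authenticate with ELASTICSEARCH_USERNAME=kibana_system and the new password.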
The elastic superuser is not allowed write access to the system indices that Kibana needs. This is based on a change by Elastic (link to pull request).
You should instead use service accounts, as described in the docs for Service Accounts.
According to the docs on creating a Service Account Token, you would have to start the Elasticsearch container first, create a token, and only then start the Kibana container.
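A sketch of that workflow, assuming the service and container names from the compose file above (the token name kibana-token and the KIBANA_SERVICE_TOKEN variable are arbitrary choices of mine):

# 1. Start only Elasticsearch
docker-compose --env-file dev.env up -d elasticsearch

# 2. Create a token for the elastic/kibana service account
#    (with 8.x security auto-configuration you may need https:// and -k or --cacert)
curl -u elastic:yourpassword -X POST "http://localhost:9200/_security/service/elastic/kibana/credential/token/kibana-token"

# 3. Copy the token value from the response into dev.env, e.g.
#    KIBANA_SERVICE_TOKEN=<value from step 2>

# 4. In the kibana service, replace the username/password variables with
#    the token; the Kibana image maps this variable to the
#    elasticsearch.serviceAccountToken setting:
#      - ELASTICSEARCH_SERVICEACCOUNTTOKEN=${KIBANA_SERVICE_TOKEN}

# 5. Start Kibana
docker-compose --env-file dev.env up -d kibana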
This is also discussed on the Elasticsearch forums.
Downgrading and using a previous ELK version is also a possibility and is what I did, since I only need the cluster for local development.
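If you take the downgrade route for local development, a minimal sketch might look like this (the 7.17.0 version and disabled security are assumptions on my part, acceptable only for local work):

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200

With security disabled, Kibana needs no credentials or tokens at all.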
Related
Unable to pull Elasticsearch and Kibana images using docker-compose.
When I retry multiple times with the docker-compose up command, some of the services are unavailable each time, and which ones fail is unpredictable.
Can somebody please guide me on what is causing the issue? The proxy has even been set in docker.service.
Please find the attached screenshot; I have also included the docker-compose.yaml file for reference.
Kindly let me know if any further information is needed.
Docker-compose.yml File
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  esdata1:
    driver: local
It was an issue with the RHEL server; after trying multiple times, the issue was resolved.
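For anyone hitting similar pull failures behind a corporate proxy: the Docker daemon reads its proxy settings from a systemd drop-in rather than from the shell environment. A sketch, with placeholder proxy addresses:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

Then reload systemd and restart the daemon:

sudo systemctl daemon-reload
sudo systemctl restart docker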
In our situation we want to run an app container and a search container as separate services on our ECS cluster.
I need to create a search container that runs under ECS/Fargate and is linked to a load balancer.
I need to create an app container which is able to talk to our PostgreSQL RDS instance, which already has the fusionauth tables set up, and to talk to the search container through the load balancer.
I started with the docker-compose.yaml and deleted the db service. I changed the values in the fusionauth section to:
fusionauth:
  image: fusionauth/fusionauth-app:latest
  depends_on:
    - search
  environment:
    DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
    DATABASE_ROOT_USER: root_user_account
    DATABASE_ROOT_PASSWORD: root_user_password
    DATABASE_USER: fusionauth
    DATABASE_PASSWORD: fusionauth_user_password
    FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
    FUSIONAUTH_SEARCH_SERVERS: http://search:9200
    FUSIONAUTH_URL: http://load-balancer-url:9011
  networks:
    - db
    - search
  restart: unless-stopped
  ports:
    - 9011:9011
  volumes:
    - fa_config:/usr/local/fusionauth/config
The first issue is that when I run this container, it goes into maintenance mode and tries to create the database, and I get a locale error. I don't need maintenance mode; I just need it to connect to the database. So I think I must have the database URL defined incorrectly.
The second problem is that I need to do the same thing for search: create a container that runs under ECS/Fargate and is accessed through the load balancer.
I am no docker expert (yet), but I can't find any specific documentation to help me figure out how to configure and deploy the search and app containers.
Any pointers to existing docs, or any other help getting this running, is appreciated.
I know I have to change the search section in the docker-compose file (posted in its entirety below), but I don't yet know what to change or how to build the container for search.
Here is the entire docker-compose file as it stands right now:
version: '3'
services:
  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
    environment:
      - cluster.name=fusionauth
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - search
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data
  fusionauth:
    image: fusionauth/fusionauth-app:latest
    depends_on:
      - search
    environment:
      DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
      DATABASE_ROOT_USER: root_user_account
      DATABASE_ROOT_PASSWORD: root_user_password
      DATABASE_USER: fusionauth
      DATABASE_PASSWORD: fusionauth_user_password
      FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
      FUSIONAUTH_SEARCH_SERVERS: http://search:9200
      FUSIONAUTH_URL: http://load-balancer-url:9011
    networks:
      - db
      - search
    restart: unless-stopped
    ports:
      - 9011:9011
    volumes:
      - fa_config:/usr/local/fusionauth/config
networks:
  db:
    driver: bridge
  search:
    driver: bridge
volumes:
  db_data:
  es_data:
  fa_config:
AFAICT there is no reason you should be using DATABASE_ROOT_USER and DATABASE_ROOT_PASSWORD if the database is already set up.
I would suggest starting by removing those; other than that, it looks pretty similar to a docker-compose setup I've been using for a while.
The only other thing I'd add is that this problem has nothing to do with ECS or Fargate as it sits; it's really just a docker-compose file you are having trouble getting running, from what I can tell.
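A minimal sketch of the trimmed environment block under that suggestion (endpoint and credentials are the placeholders from the question):

fusionauth:
  image: fusionauth/fusionauth-app:latest
  environment:
    DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
    DATABASE_USER: fusionauth
    DATABASE_PASSWORD: fusionauth_user_password
    FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
    FUSIONAUTH_SEARCH_SERVERS: http://search:9200
    FUSIONAUTH_URL: http://load-balancer-url:9011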
I am looking for a working version of a docker-compose file that starts up Kibana and Elasticsearch together on Docker for Mac at version 7.3.2. I've followed the most recent instructions in the Kibana and Elasticsearch 7.3.2 documentation, and my docker-compose.yml file below is the union of what I gathered from both docs. (The Kibana doc was the most vague with respect to the Docker Compose config.) I've also tried following other Stack Overflow articles (written for older versions), but they don't seem to work with the latest versions. I now suspect I'm missing something version-specific; 7.3.1 didn't work with the same config either.
I should note that the Elasticsearch portion of the file works fine; I can hit http://localhost:9200 and get a JSON response. However, Kibana's URL (http://localhost:5601) returns "Kibana server is not ready yet", with this error in the logs:
kibana | {"type":"log","#timestamp":"2019-09-12T21:45:04Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"Unable to revive connection: http://elasticsearch:9200/"}
This is my best attempt so far:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.3.2
    ports:
      - 5601:5601
    networks:
      - esnet
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_URL: http://elasticsearch:9200
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
Docker Compose automatically creates a private Docker network for you, and within that, the names of the service: blocks are valid hostnames.
When you set
ELASTICSEARCH_URL: http://elasticsearch:9200
none of your containers are named elasticsearch, so the hostname lookup fails; but if you pick either node, es01 or es02, it will work:
ELASTICSEARCH_URL: http://es01:9200
(Note that you don’t explicitly need a networks: definition for this to work, Compose will create a network named default for you. You also don’t need to explicitly set container_name: unless you’re planning on trying to manage the same containers with non-Compose tooling.)
Use ELASTICSEARCH_HOSTS: http://es01:9200 instead of ELASTICSEARCH_URL to set this from the docker-compose.yml file; in 7.x the Kibana Docker image expects the elasticsearch.hosts setting, configured via ELASTICSEARCH_HOSTS. Here is the Kibana documentation about environment-variable configuration: https://www.elastic.co/guide/en/kibana/current/docker.html#environment-variable-config
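Putting the two fixes together, the kibana service would look like this (same values as in the question; only the variable name and the host changed):

kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.3.2
  ports:
    - 5601:5601
  networks:
    - esnet
  environment:
    SERVER_NAME: kibana.example.org
    ELASTICSEARCH_HOSTS: http://es01:9200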
You also need the network configuration in the kibana service:
networks:
  - esnet
and set ELASTICSEARCH_HOSTS: http://es01:9200 (note that es01 is the container name of one of your Elasticsearch nodes).
I don't know if this is a good question or not; I've never worked on Elasticsearch before.
I'm getting "[WARN ][o.e.d.c.ParseField ] [vLJycm6] Deprecated field [disable_coord] used, replaced by [disable_coord has been removed]" in the docker output log when I start the Elasticsearch container. I'm using this container for Graylog 3.
Do I need to be concerned about this "warning" from Elasticsearch?
This code is part of my docker-compose file:
mongodb:
  container_name: mongodb
  image: mongo:latest
  restart: on-failure
  networks:
    - dev
  volumes:
    - mongodbdata_dev:/data/db
  ports:
    - '27017:27017'
elasticsearch:
  container_name: elasticsearch
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
  volumes:
    - elasticsearchdata_dev:/usr/share/elasticsearch/data
  networks:
    - dev
  environment:
    - http.host=0.0.0.0
    - transport.host=localhost
    - network.host=0.0.0.0
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
graylog:
  container_name: graylog
  image: graylog/graylog:3.0
  volumes:
    - graylogdata_dev:/usr/share/graylog/data
  networks:
    - dev
  environment:
    - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
    - GRAYLOG_ROOT_PASSWORD_SHA2=somesha
    - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
  links:
    - mongodb:mongo
    - elasticsearch
  depends_on:
    - mongodb
    - elasticsearch
  ports:
    - 9000:9000
    - 1514:1514
    - 1514:1514/udp
    - 12201:12201
    - 12201:12201/udp
Please let me know if you need any other information.
Apparently Graylog still sends some options meant for Elasticsearch 5, and since you're using 6+, Elasticsearch complains. But no harm is done until Elasticsearch 5 support is removed from Graylog, so you can ignore it for now.
Rancher v 1.6.10, Docker v 17.06.2-ce
I'm deploying a stack via the Rancher UI that contains a docker container running an app which connects to Dropbox over the internet. But the app isn't able to access the internet.
However, if I don't use Rancher and simply run docker-compose up natively, it all works fine.
The networking that Rancher creates appears to be the problem, I guess.
Can I be advised, please?
My docker compose file:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: es1
    environment:
      - cluster.name=idc-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - docker-elk
  idcdb:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=DriveMe
      - POSTGRES_USER=idc
      - POSTGRES_DB=idc
    volumes:
      - pgdata:/var/lib/db
  idcredis:
    image: redis:4.0
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/var/lib/redis
  booking-service:
    environment:
      - PORT=8085
      - PROFILE=integration
    ports:
      - 8085:8085
    image: idc/idc-booking-service
    depends_on:
      - idcdb
      - idcredis
  notification-service:
    environment:
      - PORT=8087
      - PROFILE=integration
    ports:
      - 8087:8087
    image: idc/idc-notification-service
    depends_on:
      - idcredis
  analytics-service:
    environment:
      - PORT=8088
      - PROFILE=integration
    ports:
      - 8088:8088
    image: idc/idc-analytics-service
    depends_on:
      - idcredis
      - elasticsearch1
  kibana:
    image: docker.elastic.co/kibana/kibana:5.6.3
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch1:9200"
    networks:
      - docker-elk
volumes:
  pgdata: {}
  redisdata: {}
  esdata1:
    driver: local
networks:
  docker-elk:
    driver: bridge
You could specify the host network when starting the container:
--net=host
If this does not solve your problem, edit the NetworkManager configuration:
sudo gedit /etc/NetworkManager/NetworkManager.conf
comment out the following line:
#dns=dnsmasq
and then restart the network manager:
sudo restart network-manager
You could use a Rancher LB and add it to your application as follows:
In the stack where your application is, click the Add Service button and select Add a Load Balancer.
Then make sure that where it says Access, it is set to Public.
In the Request Host, add the desired URL, such as mylocal.dev.
Then add port 80 so it will be accessible from the outside world on port 80.
Select the service you want the LB to apply to and the internal application port.
That's all :) Now you should be able to connect to mylocal.dev from the outside world.