I am using a docker-compose file to run an ELK stack, currently at version 7.5, and I want to upgrade it to 7.8 without stopping the services. I tried docker-compose pull, but it doesn't pull the newer Elasticsearch, Logstash, and Kibana images. I then tried another way: manually pulling the newer images with docker pull, and afterwards updating the image names in my docker-compose file.
docker-compose.yml
version: "3.3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
container_name: elasticsearch
environment:
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
volumes:
- elasticsearch:/usr/share/elasticsearch/data
secrets:
- source: elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
ulimits:
memlock:
soft: -1
hard: -1
nproc: 20480
nofile:
soft: 160000
hard: 160000
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
restart: always
ports:
- 9200:9200
networks:
- esnet
kibana:
image: docker.elastic.co/kibana/kibana:7.5.0
container_name: kibana
depends_on:
- elasticsearch
restart: always
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
secrets:
- source: kibana.yml
target: /usr/share/kibana/config/kibana.yml
networks:
- esnet
logstash:
image: docker.elastic.co/logstash/logstash:7.5.0
container_name: logstash
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./logstash/config/jvm.options:/usr/share/logstash/config/jvm.options
- ./logstash/plugins:/usr/share/logstash/plugins
restart: always
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
networks:
- esnet
When the docker-compose pull command didn't work, I tried this:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.8.0
docker pull docker.elastic.co/kibana/kibana:7.8.0
docker pull docker.elastic.co/logstash/logstash:7.8.0
After that, I changed the image version in my docker-compose file so that docker-compose would not spend time downloading the images, since I had already pulled them:
version: "3.3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
and finally I ran this command:
docker-compose restart
You can't do that. When you want to update the image, you have to run another container from the new image; Docker does not support upgrading a running container in place. docker-compose restart only restarts the existing containers from their current image. You can only update by changing the image tag in the compose file and running docker-compose up again.
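For example, a minimal upgrade workflow might look like this (a sketch, assuming the image tags in docker-compose.yml have already been changed to 7.8.0 and the data lives on named volumes so it survives recreation; expect a brief restart per service):

docker-compose pull        # fetch the 7.8.0 images referenced by the updated file
docker-compose up -d       # recreate only the containers whose image changed
docker-compose ps          # verify the new containers are running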
Related
Need to upgrade Elasticsearch and Kibana, installed with docker compose as a 3-node cluster on Linux, from 7.10 to 7.17
This document covers other methods, but not containers installed/started with docker compose or swarm.
Is there step-by-step documentation for this?
I have upgraded my Elastic stack from 7.10 to 7.17.6 and did not face any issues; I just used docker compose in this scenario. In your case, can you try renaming your Elasticsearch container? It seems your older Elasticsearch container is still up and is conflicting on the name. If this is not a production setup, let me know and we can try a few more things as well.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.6
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.17.6
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200"]'
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
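To rule out the name conflict mentioned above, something like this should help (es01 stands in for whatever container_name is in use):

docker ps -a --filter "name=es01"   # check whether an old container is still holding the name
docker rm -f es01                   # remove it, then run docker-compose up -d again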
I have created a docker-compose file that will start up Kibana and Elasticsearch containers. I have already created a network and a volume for these in my VM. I am using docker compose file version 3.4.
Command: docker volume ls
DRIVER VOLUME NAME
local elasticsearch-data
local portainer_data
Command: docker network ls
NETWORK ID NAME DRIVER SCOPE
75464cd8c8ab bridge bridge local
587a311f6f4f host host local
649ac00b7f93 none null local
4b5923b1d144 stars.api.web bridge local
Command: docker-compose up -d
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
in "./docker-compose.yml", line 33, column 27
docker-compose.yml
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - stars.api.web
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - stars.api.web
volumes:
  name: elasticsearch-data:
networks:
  name: stars.api.web:
EDIT:
removing the trailing : from the name, e.g. name: elasticsearch-data, throws the following error:
ERROR: In file './docker-compose.yml', volume 'name' must be a mapping not a string.
Your yaml is invalid according to the docs.
Please use the following compose file:
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - stars.api.web
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - stars.api.web
volumes:
  elasticsearch-data:
    external: true
networks:
  stars.api.web:
I assume that you have already created the volume and network. Note that external: true is required when the volume has been created outside of the docker-compose context.
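If you have not created them yet, they can be created up front, for example:

docker volume create elasticsearch-data
docker network create stars.api.web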
In addition, a nice trick to check whether your Compose file is valid:
docker-compose -f file config
If the file argument is omitted, docker-compose.yml is taken as the default.
from the help page:
config: Validate and view the Compose file
After applying the edits suggested by @leopal, if you want a "quiet" output:
$ docker-compose -f docker-compose.yaml config -q
$ docker-compose -f your.yaml config
networks:
  stars.api.web: {}
services:
  elasticsearch:
    container_name: elasticsearch
    environment:
      ELASTIC_PASSWORD: changeme
      ES_JAVA_OPTS: -Xmx256m -Xms256m
      discovery.type: single-node
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    networks:
      stars.api.web: null
    ports:
    - published: 9200
      target: 9200
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
    - elasticsearch-data:/usr/share/elasticsearch/data:rw
  kibana:
    container_name: kibana
    depends_on:
    - elasticsearch
    image: docker.elastic.co/kibana/kibana:7.6.0
    networks:
      stars.api.web: null
    ports:
    - published: 5601
      target: 5601
version: '3.4'
volumes:
  elasticsearch-data: {}
I'm trying to run Elasticsearch and Kibana inside docker-compose.
When I bring up the containers using docker-compose up, Elasticsearch loads up fine. After it loads, the Kibana container starts up, but once it loads it is not able to see or connect to the Elasticsearch container, producing these messages:
Kibana docker Log:
{"type":"log","#timestamp":"2020-01-22T19:57:27Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch01:9200/"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
I am not able to reach the Elasticsearch host from the Kibana container:
curl -X GET http://elasticsearch01:9200
throws the error quoted below:
curl: (7) Failed connect to elasticsearch01:9200; No route to host
After digging deeper, I found this happens only on CentOS 8.
On the same CentOS 8 host, I am able to bring up and use standalone Elasticsearch and Kibana instances via systemctl services.
Am I missing something here?
Can anyone help?
docker-compose.yml:
networks:
  docker-elk:
    driver: bridge

services:
  elasticsearch01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: elasticsearch01
    secrets:
      - source: elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    restart: always
    environment:
      - node.name=elasticsearch01
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticdata:/usr/share/elasticsearch/data
    ports:
      - "9200"
    expose:
      - "9200"
      - "9300"
    networks:
      - docker-elk
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    container_name: kibana
    depends_on: ['elasticsearch01']
    environment:
      - SERVER_NAME=kibanaServer
    restart: always
    secrets:
      - source: kibana.yml
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - docker-elk
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports: ['5601:5601']
    links:
      - elasticsearch01

volumes:
  elasticdata:
    driver: local
  kibanadata:
    driver: local

secrets:
  elasticsearch.yml:
    file: ./ELK_Config/elastic/elasticsearch.yml
  kibana.yml:
    file: ./ELK_Config/kibana/kibana.yml
System/Docker Info
OS: CentOS 8
ELK versions 7.4.0
Docker version 19.03.4, build 9013bf583a
Docker-compose: docker-compose version 1.25.0, build 0a186604
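For anyone hitting the same thing, a couple of checks that may narrow this down (a diagnostic sketch, not a confirmed fix; the network name assumes the default compose project prefix):

docker network inspect <project>_docker-elk                    # confirm both containers are attached to the bridge network
docker exec -it kibana curl -s http://elasticsearch01:9200     # retry the request from inside the Kibana container
sudo firewall-cmd --list-all                                   # CentOS 8's firewalld (nftables backend) is a common culprit for blocked inter-container traffic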
I'm running one Elasticsearch instance with
version: '3'
services:
  elasticsearch:
    build:
      context: .
      dockerfile: ./compose/elasticsearch/Dockerfile
      args:
        - VERSION=${VERSION}
        - MEM=${MEM}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
        - CLUSTER_NAME=${CLUSTER_NAME_DEV}
        - ENV=${ENV_DEV}
    container_name: elasticsearch
    network_mode: host
    environment:
      - discovery.type=single-node
    volumes:
      - /var/lib/elasticsearch:/usr/share/elasticsearch/data
  logstash:
    build:
      context: .
      dockerfile: ./compose/logstash/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
        - DB_HOST=${DB_HOST_DEV}
        - DB_NAME=${DB_NAME_DEV}
        - ENV=${ENV_DEV}
    container_name: logstash
    network_mode: host
    volumes:
      - /opt/logstash/data:/usr/share/logstash/data
    dns:
      - 192.168.1.1 # IP needed to reach a database instance external to the server where the container runs
  kibana:
    build:
      context: .
      dockerfile: ./compose/kibana/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
    container_name: kibana
    depends_on:
      - elasticsearch
    network_mode: host
  nginx:
    build:
      context: .
      dockerfile: ./compose/nginx/Dockerfile
      args:
        - KIBANA_HOST=${KIBANA_HOST_DEV}
        - KIBANA_PORT=${KIBANA_PORT_DEV}
    container_name: nginx
    network_mode: host
    depends_on:
      - kibana
  apm:
    build:
      context: .
      dockerfile: ./compose/apm/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
        - APM_PORT=${APM_PORT_DEV}
    container_name: apm
    depends_on:
      - elasticsearch
    network_mode: host
(I think this one uses the host's /var/lib/elasticsearch as a bind mount: when the container accesses /usr/share/elasticsearch/data, the data is persisted in /var/lib/elasticsearch on the host.)
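To confirm which host path a container's data directory actually maps to, inspecting its mounts should show it (assuming the container name elasticsearch from the file above):

docker inspect -f '{{ json .Mounts }}' elasticsearch   # each entry lists Source (host path) and Destination (container path)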
Another one with
version: '3'
services:
  elasticsearch-search:
    restart: always
    build:
      context: .
      dockerfile: ./compose/elasticsearch/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH_DEV}
        - MEM=${MEM_SEARCH}
        - CLUSTER_NAME=${CLUSTER_NAME_SEARCH_DEV}
        - ENV=${ENV_DEV}
    container_name: elasticsearch-search
    network_mode: host
    environment:
      - discovery.type=single-node
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  kibana:
    build:
      context: .
      dockerfile: ./compose/kibana/Dockerfile
      args:
        - VERSION=${VERSION}
        - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_SEARCH_DEV}
        - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH_DEV}
    container_name: kibana-search
    depends_on:
      - elasticsearch-search
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/usr/share/elasticsearch/data
volumes:
  data:
(I'm not sure how this one works out, but I guess Docker provides persistent storage in a named volume that can be accessed via /usr/share/elasticsearch/data from the container.)
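If it helps, the location of a named volume on the host can be checked directly; the volume name is normally prefixed with the compose project name (myproject_data below is an assumed example):

docker volume ls                        # list volumes, including <project>_data
docker volume inspect myproject_data    # Mountpoint shows the host path, typically under /var/lib/docker/volumes/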
When I run them at the same time, I expect the two Elasticsearch instances to use separate data, but they seem to interfere with each other.
I have a Kibana instance that looks at the first ES.
When I run the first ES alone, I can see the data, but as soon as I run the second ES there's nothing: no index patterns, no dashboards.
What am I misunderstanding?
.env
ELASTICSEARCH_PORT_DEV=29200
ELASTICSEARCH_PORT_SEARCH_DEV=29300
Most probably something is wrong with your docker-compose files in terms of their volumes: sections.
The second example has this at the top:
volumes:
  - data:/usr/share/elasticsearch/data
and this at the bottom:
volumes:
  - /etc/localtime:/etc/localtime:ro
  - data:/usr/share/elasticsearch/data
which means that at least two separate containers bind to the same data volume. That is a sure way to see strange things, because each of them (ES being one) will try to recreate its data storage hierarchy in the host's data folder.
Can you try defining the volumes for the first ES as:
volumes:
  - ./data/es1:/usr/share/elasticsearch/data
and for the second one as:
volumes:
  - ./data/es2:/usr/share/elasticsearch/data
Just make sure that the ./data/es1 and ./data/es2 folders exist on your host before doing docker-compose up.
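For instance (the chown is an assumption that applies to the official Elasticsearch image, which runs as uid 1000):

mkdir -p ./data/es1 ./data/es2
sudo chown -R 1000:1000 ./data/es1 ./data/es2   # make the bind mounts writable by the elasticsearch user in the container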
Or you can post the whole docker-compose.yml file so we can say what is wrong with it...
I am trying to use kompose convert on my docker-compose.yaml files. However, when I run the command:
kompose convert -f docker-compose.yaml
I get the output:
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak" isn't supported - ignoring path on the host
It also prints similar warnings for the other persistent volumes.
My docker-compose file is:
version: '3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
    container_name: es01
    environment:
      [env]
    ulimits:
      nproc: 3000
      nofile: 65536
      memlock: -1
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - kafka_demo
  zookeeper:
    image: confluentinc/cp-zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    volumes:
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-data:/var/lib/zookeeper/data
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-txn-logs:/var/lib/zookeeper/log
    networks:
      kafka_demo:
  kafka0:
    image: confluentinc/cp-kafka
    container_name: kafka0
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/kafkaData:/var/lib/kafka/data
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
      - es01
    networks:
      kafka_demo:
  schema_registry:
    image: confluentinc/cp-schema-registry:latest
    container_name: schema_registry
    environment:
      [env]
    ports:
      - 8081:8081
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
  elasticSearchConnector:
    image: confluentinc/cp-kafka-connect:latest
    container_name: elasticSearchConnector
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect:/etc/kafka-connect
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch:/etc/kafka-elasticsearch
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak:/etc/kafka
    ports:
      - "28082:28082"
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
networks:
  kafka_demo:
    driver: bridge
Does anyone know how I can fix this issue? I was thinking it has to do with the error message saying that it's a volume mount vs. a host mount?
I have done some research, and there are three things to point out:
kompose does not support volume mounts on the host. You might consider using emptyDir instead.
Kubernetes makes it difficult to pass in host/root volumes. You can try hostPath.
kompose convert --volumes hostPath works for k8s.
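Applied to your file, that would be:

kompose convert -f docker-compose.yaml --volumes hostPath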
Also you can check out Compose on Kubernetes if you'd like to run things on a single machine.
Please let me know if that helped.