What is wrong in my docker-compose.yml file?

I need a little help finding out why my docker-compose.yml doesn't work. The story: I have a docker-compose.yml that works (it creates Grafana, Prometheus, node-exporter, cAdvisor and Alertmanager). I then wanted one without Grafana, so I removed all the Grafana parts from the file, but it doesn't work.
The one that works:
version: '2'

networks:
  monitor-net:
    driver: bridge

volumes:
  prometheus_data: {}
  grafana_data: {}

services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '-config.file=/etc/prometheus/prometheus.yml'
      - '-storage.local.path=/prometheus'
      - '-alertmanager.url=http://alertmanager:9093'
      - '-storage.local.memory-chunks=100000'
    restart: unless-stopped
    expose:
      - 9090
    ports:
      - 9090:9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  alertmanager:
    image: prom/alertmanager
    container_name: alertmanager
    volumes:
      - ./alertmanager/:/etc/alertmanager/
    command:
      - '-config.file=/etc/alertmanager/config.yml'
      - '-storage.path=/alertmanager'
    restart: unless-stopped
    expose:
      - 9093
    ports:
      - 9093:9093
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  nodeexporter:
    image: prom/node-exporter
    container_name: nodeexporter
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  cadvisor:
    image: google/cadvisor:v0.24.1
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    restart: unless-stopped
    expose:
      - 8080
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  grafana:
    image: grafana/grafana
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
    env_file:
      - user.config
    restart: unless-stopped
    expose:
      - 3000
    ports:
      - 3000:3000
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
The one that doesn't work:
version: '2'

networks:
  monitor-net:
    driver: bridge

volumes:
  prometheus_data: {}

services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '-config.file=/etc/prometheus/prometheus.yml'
      - '-storage.local.path=/prometheus'
      - '-alertmanager.url=http://alertmanager:9093'
      - '-storage.local.memory-chunks=100000'
    restart: unless-stopped
    expose:
      - 9090
    ports:
      - 9090:9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  alertmanager:
    image: prom/alertmanager
    container_name: alertmanager
    volumes:
      - ./alertmanager/:/etc/alertmanager/
    command:
      - '-config.file=/etc/alertmanager/config.yml'
      - '-storage.path=/alertmanager'
    restart: unless-stopped
    expose:
      - 9093
    ports:
      - 9093:9093
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  nodeexporter:
    image: prom/node-exporter
    container_name: nodeexporter
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  cadvisor:
    image: google/cadvisor
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    restart: unless-stopped
    expose:
      - 8080
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
With the second one the Prometheus container can't run, and the logs are:
level=info msg="Starting prometheus (version=1.5.2, branch=master, revision=bd1182d29f462c39544f94cc822830e1c64cf55b)" source="main.go:75"
level=info msg="Build context (go=go1.7.5, user=root@1a01c5f68840, date=20170220-07:00:00)" source="main.go:76"
level=info msg="Loading configuration file /etc/prometheus/prometheus.yml" source="main.go:248"
level=error msg="Error opening memory series storage: leveldb: manifest corrupted (field 'comparer'): missing [file=MANIFEST-000009]" source="main.go:182"

The fact that Prometheus at least starts and then errors out means that your compose file is probably correct.
It does load the configuration file /etc/prometheus/prometheus.yml; the failure comes afterwards, when it opens its local storage.
In the compose file I see that it mounts a host volume which is expected to exist on your host system at ./prometheus/. Did you also copy this folder and its contents? If yes, did you verify that the configuration is correct and is expected to work without Grafana? Also, the directory you run docker-compose from matters: it must be the one where the ./prometheus/ directory is located.
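The "manifest corrupted" message in the log points at stale or corrupted data inside the prometheus_data named volume rather than at the compose file itself. If the stored metrics are disposable, one way to reset it is to remove the named volume and recreate the stack (the exact volume name is prefixed with your Compose project name, so list it first; these commands need a running Docker daemon):

```shell
# stop the stack and remove its named volumes (this deletes the stored metrics!)
docker-compose down -v

# or, more surgically: find and remove only the Prometheus data volume
docker volume ls
docker volume rm <project>_prometheus_data   # <project> is your Compose project name

# then recreate everything
docker-compose up -d
```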

Related

Node Exporter Basic Auth (docker-compose)

I want to set up node-exporter on a server to be monitored using Docker Compose, but I do not want the metrics to be freely available to everyone.
My current docker-compose.yml file looks like this:
version: '3.8'

networks:
  monitoring:
    driver: bridge

volumes:
  prometheus_data: {}

services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
      - '--collector.netclass.ignored-devices=^(veth.*)$$'
    ports:
      - 9100:9100
    networks:
      - monitoring
    labels:
      org.label-schema.group: "monitoring"
When I add the lines below to my docker-compose.yml, I get the error message "services.node-exporter Additional property basic_auth_users is not allowed":
basic_auth_users:
  prometheus: my_pass
Can someone please help me find where I am making a mistake, or explain how the whole thing should work?
PS: I would like to install only node-exporter on the server to be monitored, since a Prometheus instance is not necessary there... (correct me if that is the wrong approach.)
Best regards
Solution -> docker-compose.yml
version: '3.8'

networks:
  monitoring:
    driver: bridge

volumes:
  prometheus_data: {}

services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
      - ./prometheus/web.yml:/etc/prometheus/web.yml
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
      - '--collector.netclass.ignored-devices=^(veth.*)$$'
      - '--web.config=/etc/prometheus/web.yml'
    ports:
      - 9100:9100
    networks:
      - monitoring
    labels:
      org.label-schema.group: "monitoring"
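The key point is that basic_auth_users is not a Compose property: it belongs in node_exporter's web configuration file (the exporter-toolkit format), which the solution mounts at ./prometheus/web.yml. A minimal sketch of that file, with a placeholder bcrypt hash you would generate yourself (for example with htpasswd -nBC 10 prometheus):

```yaml
# web.yml -- node_exporter web configuration (exporter-toolkit format)
basic_auth_users:
  # the password must be a bcrypt hash, never plaintext
  prometheus: "$2y$10$..."   # placeholder hash, generate your own
```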

Got problem deploying docker-compose service(port issue)

I want to deploy a service that will allow me to use Spark and MongoDB in a Jupyter notebook.
I use docker-compose to bring up the service, and it is as follows:
version: "3.3"

volumes:
  shared-workspace:

networks:
  spark-net:
    driver: bridge

services:
  spark-master:
    image: uqteaching/cloudcomputing:spark-master-v1
    container_name: spark-master
    networks:
      - "spark-net"
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"

  spark-worker-1:
    image: uqteaching/cloudcomputing:spark-worker-v1
    container_name: spark-worker-1
    depends_on:
      - spark-master
    networks:
      - "spark-net"
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"

  spark-worker-2:
    image: uqteaching/cloudcomputing:spark-worker-v1
    container_name: spark-worker-2
    depends_on:
      - spark-master
    networks:
      - "spark-net"
    ports:
      - "8082:8082"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"

  mongo:
    image: mongo
    container_name: 'mongo'
    networks:
      - "spark-net"
    ports:
      - "27017:27017"

  mongo_admin:
    image: mongo-express
    container_name: 'mongoadmin'
    networks:
      - "spark-net"
    depends_on:
      - mongo
    links:
      - mongo
    ports:
      - "8091:8091"

  jupyter-notebook:
    container_name: jupyternb
    image: jupyter/all-spark-notebook:42f4c82a07ff
    depends_on:
      - mongo
      - spark-master
    links:
      - mongo
    expose:
      - "8888"
    networks:
      - "spark-net"
    ports:
      - "8888:8888"
    volumes:
      - ./nbs:/home/jovyan/work/nbs
      - ./events:/tmp/spark-events
    environment:
      - "PYSPARK_PYTHON=/usr/bin/python3"
      - "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
    command: "start-notebook.sh \
      --ip=0.0.0.0 \
      --allow-root \
      --no-browser \
      --notebook-dir=/home/jovyan/work/nbs \
      --NotebookApp.token='' \
      --NotebookApp.password=''
      "
And the result is something like this: two of the services end up using 8081/tcp at the same time, which causes them both to crash. I don't know why, since I set these two services to listen on different ports. I want to solve this.
mongo-express needs port 8081 internally, so map a different external port to be able to log in to the web UI. Reaching it via http://localhost:8092 would then look like this:

mongo_admin:
  image: mongo-express
  container_name: 'mongoadmin'
  networks:
    - "spark-net"
  depends_on:
    - mongo
  links:
    - mongo
  ports:
    - "8092:8081"

replicate standard_ds3_v2 databricks cluster in docker

Replicating a Standard_DS3_v2 Databricks cluster in Docker (14 GB, 4 cores): would I just change spark-worker-1 and spark-worker-2 in the compose file from 512 MB each to 7 GB, and from 1 core to 2? Would this result in the exact same config as the Standard_DS3_v2 Databricks cluster?
Below is the Docker config:
version: "3.6"

volumes:
  shared-workspace:
    name: "hadoop-distributed-file-system"
    driver: local

services:
  jupyterlab:
    image: jupyterlab
    container_name: jupyterlab
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace

  spark-master:
    image: spark-master
    container_name: spark-master
    ports:
      - 8080:8080
      - 7077:7077
    volumes:
      - shared-workspace:/opt/workspace

  spark-worker-1:
    image: spark-worker
    container_name: spark-worker-1
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8081:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master

  spark-worker-2:
    image: spark-worker
    container_name: spark-worker-2
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8082:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master
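To approximate the DS3_v2's 4 cores / 14 GB across the two workers, the environment blocks would change roughly like this (a sketch; the host must actually have that much CPU and memory available to Docker, and matching a Spark standalone worker's advertised resources does not by itself replicate the Databricks runtime):

```yaml
  spark-worker-1:
    image: spark-worker
    environment:
      - SPARK_WORKER_CORES=2     # 2 cores per worker, 4 in total
      - SPARK_WORKER_MEMORY=7g   # 7 GB per worker, 14 GB in total

  spark-worker-2:
    image: spark-worker
    environment:
      - SPARK_WORKER_CORES=2
      - SPARK_WORKER_MEMORY=7g
```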

Passing from manual Docker host network to Docker Compose bridge

I have two Docker images, a Modbus server and a client, which I run manually with docker run --network host server (and the same for the client), and they work perfectly. But now I need to add them to a docker-compose file where the network is bridge, which I did like this:
autoserver:
  image: 19mikel95/pymodmikel:autoserversynchub
  container_name: autoserver
  restart: unless-stopped

clientperf:
  image: 19mikel95/pymodmikel:reloadcomp
  container_name: clientperf
  restart: unless-stopped
  depends_on:
    - autoserver
  links:
    - "autoserver:server"
I read that to refer from one container to another (client to server) I have to use the service name from the docker-compose YML (autoserver), so that is what I did. In the Python file executed in the client (performance.py from pymodbus) I changed 'localhost' to:
host = 'autoserver'
client = ModbusTcpClient(host, port=5020)
However I get this error:
[ERROR/MainProcess] failed to run test successfully
Traceback (most recent call last):
  File "performance.py", line 72, in single_client_test
    client.read_holding_registers(10, 1, unit=1)
  File "/usr/lib/python3/dist-packages/pymodbus/client/common.py", line 114, in read_holding_registers
    return self.execute(request)
  File "/usr/lib/python3/dist-packages/pymodbus/client/sync.py", line 107, in execute
    raise ConnectionException("Failed to connect[%s]" % (self.__str__()))
pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] Failed to connect[ModbusTcpClient(autoserver:5020)]
As asked, my full docker-compose YML is this:
version: '2.1'

networks:
  monitor-net:
    driver: bridge

volumes:
  prometheus_data: {}
  grafana_data: {}

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    restart: unless-stopped
    expose:
      - 9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    volumes:
      - ./alertmanager:/etc/alertmanager
    command:
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
    restart: unless-stopped
    expose:
      - 9093
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  nodeexporter:
    image: prom/node-exporter:latest
    container_name: nodeexporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - c:\:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  cadvisor:
    image: gcr.io/google-containers/cadvisor:latest
    container_name: cadvisor
    volumes:
      - c:\:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
      #- /cgroup:/cgroup:ro # doesn't work on MacOS, only for Linux
    restart: unless-stopped
    expose:
      - 8080
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    environment:
      - GF_SECURITY_ADMIN_USER=${ADMIN_USER:-admin}
      - GF_SECURITY_ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
      - GF_USERS_ALLOW_SIGN_UP=false
    restart: unless-stopped
    expose:
      - 3000
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  pushgateway:
    image: prom/pushgateway:latest
    container_name: pushgateway
    restart: unless-stopped
    expose:
      - 9091
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  caddy:
    image: stefanprodan/caddy
    container_name: caddy
    ports:
      - "3000:3000"
      - "9090:9090"
      - "9093:9093"
      - "9091:9091"
    volumes:
      - ./caddy:/etc/caddy
    environment:
      - ADMIN_USER=${ADMIN_USER:-admin}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
    restart: unless-stopped
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  autoserver:
    image: 19mikel95/pymodmikel:autoserversynchub
    container_name: autoserver
    ports:
      - "5020:5020"
    restart: unless-stopped
    networks:
      - monitor-net

  clientperf:
    image: 19mikel95/pymodmikel:reloadcomp
    container_name: clientperf
    restart: unless-stopped
    networks:
      - monitor-net
    depends_on:
      - autoserver
    links:
      - "autoserver:server"
The problem is in the StartTcpServer(context, identity=identity, address=("localhost", 5020)) call in the sincserver.py file from the autoserver image. Binding to localhost makes the TCP server accept connections only from inside its own container. It should be replaced with 0.0.0.0 to allow requests from other containers to reach the port.
The following Docker Compose shows it (sed -i 's|localhost|0.0.0.0|g' sincserver.py replaces the hostname before starting the server):
version: '2.1'

services:
  autoserver:
    image: 19mikel95/pymodmikel:autoserversynchub
    command: sh -c "
      sed -i 's|localhost|0.0.0.0|g' sincserver.py;
      python3 sincserver.py daemon off
      "
    ports:
      - "5020:5020"
    restart: unless-stopped

  clientperf:
    image: 19mikel95/pymodmikel:reloadcomp
    restart: unless-stopped
    depends_on:
      - autoserver
Run:
docker-compose up -d
docker-compose logs -f clientperf
and you will see logs like:
clientperf_1 | [DEBUG/MainProcess] 574 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.7410697999875993 seconds
clientperf_1 | [DEBUG/MainProcess] 692 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.4434449000109453 seconds
clientperf_1 | [DEBUG/MainProcess] 708 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.4116760999895632 seconds
clientperf_1 | [DEBUG/MainProcess] 890 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.1230684999900404 seconds
clientperf_1 | [DEBUG/MainProcess] 803 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.2450218999874778 seconds
clientperf_1 | [DEBUG/MainProcess] 753 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.3274328999978025 seconds
clientperf_1 | [DEBUG/MainProcess] 609 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.6399398999928962 seconds
Alternatively, you can pass the server hostname via an environment variable such as AUTO_SERVER_HOST and read it in your code:
The compose file is the same as the full one shown earlier; the only change is an extra environment entry on the clientperf service:

clientperf:
  image: 19mikel95/pymodmikel:reloadcomp
  container_name: clientperf
  restart: unless-stopped
  networks:
    - monitor-net
  depends_on:
    - autoserver
  environment:
    - AUTO_SERVER_HOST=autoserver
Then read the variable in the client code:
import os

host = os.environ['AUTO_SERVER_HOST']
client = ModbusTcpClient(host, port=5020)
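A slightly more defensive variant falls back to defaults when the variables are unset, so the same script also runs outside Compose (AUTO_SERVER_PORT is a hypothetical extra variable, not something the images above define):

```python
import os

# Read connection settings from the environment, falling back to local defaults
# when the script is run outside docker-compose.
host = os.environ.get("AUTO_SERVER_HOST", "localhost")
port = int(os.environ.get("AUTO_SERVER_PORT", "5020"))
```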

Running Kibana in Docker image gives Non-Root error

I'm having issues attempting to set up the ELK stack (v7.6.0) in Docker using Docker Compose.
Elasticsearch and Logstash start up fine, but Kibana instantly exits; the Docker logs for that container report:
Kibana should not be run as root. Use --allow-root to continue.
The docker-compose for those elements looks like this:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
  environment:
    - discovery.type=single-node
  ports:
    - 9200:9200
  mem_limit: 2gb

kibana:
  image: docker.elastic.co/kibana/kibana:7.6.0
  environment:
    - discovery.type=single-node
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch

logstash:
  image: docker.elastic.co/logstash/logstash:7.6.0
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  mem_limit: 2gb
  depends_on:
    - elasticsearch
How do I either disable the run as root error, or set the application to not run as root?
In case you don't have a separate Dockerfile for Kibana and you just want to set the startup arg in docker-compose, the syntax there is as follows:
kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.9.2
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  environment:
    - ELASTICSEARCH_URL=http://localhost:9200
  networks:
    - elastic
  entrypoint: ["./bin/kibana", "--allow-root"]
That works around the problem when running it in a Windows container.
That being said, I don't know why Kibana should not be executed as root.
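An alternative to --allow-root is to ask Compose to start the container as a non-root user. The official Kibana image ships a built-in kibana user; treat the user name as an assumption to verify for your image version:

```yaml
kibana:
  image: docker.elastic.co/kibana/kibana:7.9.2
  user: kibana   # run as the image's built-in non-root user instead of root
  ports:
    - 5601:5601
```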
I've just run this Docker image and everything works perfectly; I share my docker-compose file:
version: '3.7'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - node.name=node
      - cluster.name=elasticsearch-default
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    expose:
      - "9200"
    networks:
      - "esnet"

  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    container_name: kibana
    ports:
      - "5601:5601"
    expose:
      - "5601"
    networks:
      - "esnet"
    depends_on:
      - elasticsearch

  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.0
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    depends_on:
      - elasticsearch
    networks:
      - "esnet"

networks:
  esnet:
I had the same problem. I got it running by adding the ENTRYPOINT to the end of the Dockerfile.
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
ENTRYPOINT ["./bin/kibana", "--allow-root"]
The docker-compose.yml
version: '3.2'

# Elastic stack (ELK) on Docker https://github.com/deviantony/docker-elk
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5002:5002/tcp"
      - "5002:5002/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/ ############## Dockerfile ##############
      args:
        ELK_VERSION: $ELK_VERSION
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: nat

volumes:
  elasticsearch:
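The build args reference $ELK_VERSION, which Compose substitutes from the shell environment or from a .env file placed next to docker-compose.yml. A minimal example (the version value here is just the one used in this thread):

```
# .env
ELK_VERSION=7.6.0
```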
