Moving from manual Docker host networking to a Docker Compose bridge network

I have two Docker images, a Modbus server and a client, which I run manually with docker run --network host server (and the same for the client), and they work perfectly. But now I need to add them to a docker-compose file where the network is bridge, which I did like this:
autoserver:
  image: 19mikel95/pymodmikel:autoserversynchub
  container_name: autoserver
  restart: unless-stopped

clientperf:
  image: 19mikel95/pymodmikel:reloadcomp
  container_name: clientperf
  restart: unless-stopped
  depends_on:
    - autoserver
  links:
    - "autoserver:server"
I read that to refer from one container to another (client to server) I have to use the service name from the docker-compose YML (autoserver), so that is what I did. In the Python file executed in the client (which is performance.py from pymodbus) I changed 'localhost' to:
host = 'autoserver'
client = ModbusTcpClient(host, port=5020)
However I get this error:
[ERROR/MainProcess] failed to run test successfully
Traceback (most recent call last):
  File "performance.py", line 72, in single_client_test
    client.read_holding_registers(10, 1, unit=1)
  File "/usr/lib/python3/dist-packages/pymodbus/client/common.py", line 114, in read_holding_registers
    return self.execute(request)
  File "/usr/lib/python3/dist-packages/pymodbus/client/sync.py", line 107, in execute
    raise ConnectionException("Failed to connect[%s]" % (self.__str__()))
pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] Failed to connect[ModbusTcpClient(autoserver:5020)]
As asked, my full docker-compose YML is this:
version: '2.1'

networks:
  monitor-net:
    driver: bridge

volumes:
  prometheus_data: {}
  grafana_data: {}

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    restart: unless-stopped
    expose:
      - 9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    volumes:
      - ./alertmanager:/etc/alertmanager
    command:
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
    restart: unless-stopped
    expose:
      - 9093
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  nodeexporter:
    image: prom/node-exporter:latest
    container_name: nodeexporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - c:\:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  cadvisor:
    image: gcr.io/google-containers/cadvisor:latest
    container_name: cadvisor
    volumes:
      - c:\:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
      #- /cgroup:/cgroup:ro # doesn't work on macOS, only on Linux
    restart: unless-stopped
    expose:
      - 8080
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    environment:
      - GF_SECURITY_ADMIN_USER=${ADMIN_USER:-admin}
      - GF_SECURITY_ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
      - GF_USERS_ALLOW_SIGN_UP=false
    restart: unless-stopped
    expose:
      - 3000
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  pushgateway:
    image: prom/pushgateway:latest
    container_name: pushgateway
    restart: unless-stopped
    expose:
      - 9091
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  caddy:
    image: stefanprodan/caddy
    container_name: caddy
    ports:
      - "3000:3000"
      - "9090:9090"
      - "9093:9093"
      - "9091:9091"
    volumes:
      - ./caddy:/etc/caddy
    environment:
      - ADMIN_USER=${ADMIN_USER:-admin}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
    restart: unless-stopped
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  autoserver:
    image: 19mikel95/pymodmikel:autoserversynchub
    container_name: autoserver
    ports:
      - "5020:5020"
    restart: unless-stopped
    networks:
      - monitor-net

  clientperf:
    image: 19mikel95/pymodmikel:reloadcomp
    container_name: clientperf
    restart: unless-stopped
    networks:
      - monitor-net
    depends_on:
      - autoserver
    links:
      - "autoserver:server"

The problem is in the StartTcpServer(context, identity=identity, address=("localhost", 5020)) call in the sincserver.py file from the autoserver image. Binding to localhost makes the TCP server accept connections only from inside its own container; it should bind to 0.0.0.0 instead, so that requests from other containers (and from the published port) can reach it.
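For reference, a minimal sketch of the corrected call, assuming the context and identity objects that sincserver.py already builds (the import path is the pymodbus 2.x one that matches the traceback above):

import logging
from pymodbus.server.sync import StartTcpServer  # pymodbus 2.x

# Bind to all interfaces so clients on the Compose bridge network
# (and on the published host port) can reach the server
StartTcpServer(context, identity=identity, address=("0.0.0.0", 5020))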
The following Docker Compose shows it; the sed -i 's|localhost|0.0.0.0|g' sincserver.py command rewrites the hostname before the server starts:
version: '2.1'
services:
  autoserver:
    image: 19mikel95/pymodmikel:autoserversynchub
    command: sh -c "
      sed -i 's|localhost|0.0.0.0|g' sincserver.py;
      python3 sincserver.py daemon off
      "
    ports:
      - "5020:5020"
    restart: unless-stopped

  clientperf:
    image: 19mikel95/pymodmikel:reloadcomp
    restart: unless-stopped
    depends_on:
      - autoserver
Run:
docker-compose up -d
docker-compose logs -f clientperf
And you will see logs like:
clientperf_1 | [DEBUG/MainProcess] 574 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.7410697999875993 seconds
clientperf_1 | [DEBUG/MainProcess] 692 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.4434449000109453 seconds
clientperf_1 | [DEBUG/MainProcess] 708 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.4116760999895632 seconds
clientperf_1 | [DEBUG/MainProcess] 890 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.1230684999900404 seconds
clientperf_1 | [DEBUG/MainProcess] 803 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.2450218999874778 seconds
clientperf_1 | [DEBUG/MainProcess] 753 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.3274328999978025 seconds
clientperf_1 | [DEBUG/MainProcess] 609 requests/second
clientperf_1 | [DEBUG/MainProcess] time taken to complete 1000 cycle by 10 workers is 1.6399398999928962 seconds

You can also try using an environment variable such as AUTO_SERVER_HOST and reading it in your code. The compose file is the same as the full one above; only the clientperf service changes, reading the server host from the environment (the legacy links entry is dropped, since service names already resolve on a user-defined network):

  clientperf:
    image: 19mikel95/pymodmikel:reloadcomp
    container_name: clientperf
    restart: unless-stopped
    networks:
      - monitor-net
    depends_on:
      - autoserver
    environment:
      - AUTO_SERVER_HOST=autoserver
Read the environment variable like below:

import os
from pymodbus.client.sync import ModbusTcpClient  # pymodbus 2.x path, as in the traceback above

host = os.environ['AUTO_SERVER_HOST']
client = ModbusTcpClient(host, port=5020)
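If the variable may be unset, for example when the client is run outside Compose, a fallback default is a reasonable pattern; the 'autoserver' default below is simply the service name from this compose file:

import os

# Fall back to the Compose service name when AUTO_SERVER_HOST is not set
host = os.environ.get('AUTO_SERVER_HOST', 'autoserver')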

Related

Got a problem deploying a docker-compose service (port issue)

I want to deploy a service that will allow me to use Spark and MongoDB in a Jupyter notebook. I use docker-compose to build up the service, and it's as follows:
version: "3.3"
volumes:
shared-workspace:
networks:
spark-net:
driver: bridge
services:
spark-master:
image: uqteaching/cloudcomputing:spark-master-v1
container_name: spark-master
networks:
- "spark-net"
ports:
- "8080:8080"
- "7077:7077"
environment:
- INIT_DAEMON_STEP=setup_spark
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-1:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-1
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8081:8081"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-2:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-2
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8082:8082"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
mongo:
image: mongo
container_name: 'mongo'
networks:
- "spark-net"
ports:
- "27017:27017"
mongo_admin:
image: mongo-express
container_name: 'mongoadmin'
networks:
- "spark-net"
depends_on:
- mongo
links:
- mongo
ports:
- "8091:8091"
jupyter-notebook:
container_name: jupyternb
image: jupyter/all-spark-notebook:42f4c82a07ff
depends_on:
- mongo
- spark-master
links:
- mongo
expose:
- "8888"
networks:
- "spark-net"
ports:
- "8888:8888"
volumes:
- ./nbs:/home/jovyan/work/nbs
- ./events:/tmp/spark-events
environment:
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
command: "start-notebook.sh \
--ip=0.0.0.0 \
--allow-root \
--no-browser \
--notebook-dir=/home/jovyan/work/nbs \
--NotebookApp.token='' \
--NotebookApp.password=''
"
And the result is something like this (screenshot of the container list omitted). I don't know why, but even though I set these two services to listen on different ports, they are both using 8081/tcp at the same time, which caused them both to crash. I want to solve this.
mongo-express needs port 8081 internally, so use another external port to be able to log in to the web UI. Remember that Compose ports entries are HOST:CONTAINER, so to reach the UI at http://localhost:8092 the service would look something like this:

  mongo_admin:
    image: mongo-express
    container_name: 'mongoadmin'
    networks:
      - "spark-net"
    depends_on:
      - mongo
    links:
      - mongo
    ports:
      - "8092:8081"

Kafka Manager: no application loader is configured

Here's my docker-compose:
version: '3'
services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      TZ: "America/Toronto"
    restart: always

  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"

  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce

  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: bitnami/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    restart: on-failure

  cmak:
    image: hlebalbau/kafka-manager
    container_name: cmak
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=zookeper:2181
      - APPLICATION_SECRET=letmein
    command: bin/cmak -Dconfig.file=/opt/cmak/conf/application.conf -Dhttp.port=9080

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
My port 9000 is already used by Portainer, and that works properly, but when I try to run Kafka Manager on 9080, I get this error without any further explanation:
nodered | 14 Sep 21:59:41 - [info] Starting flows
nodered | 14 Sep 21:59:41 - [info] Started flows
cmak | Oops, cannot start the server.
cmak | java.lang.RuntimeException: No application loader is configured. Please configure an application loader either using the play.application.loader configuration property, or by depending on a module that configures one. You can add the Guice support module by adding "libraryDependencies += guice" to your build.sbt.
cmak | at scala.sys.package$.error(package.scala:30)
cmak | at play.api.ApplicationLoader$.play$api$ApplicationLoader$$loaderNotFound(ApplicationLoader.scala:44)
cmak | at play.api.ApplicationLoader$.apply(ApplicationLoader.scala:70)
cmak | at play.core.server.ProdServerStart$.start(ProdServerStart.scala:50)
cmak | at play.core.server.ProdServerStart$.main(ProdServerStart.scala:25)
cmak | at play.core.server.ProdServerStart.main(ProdServerStart.scala)
I have a feeling that either my path to kafka-manager is wrong, or I might have to expose the hostname on my Kafka container...

Replicating a Standard_DS3_v2 Databricks cluster in Docker

I am replicating a Standard_DS3_v2 Databricks cluster (14 GB, 4 cores) in Docker. Would I just change spark-worker-1 and spark-worker-2 in the compose file from 512 MB each to 7 GB, and from 1 core to 2 cores each? Would this result in exactly the same config as the Standard_DS3_v2 Databricks cluster? Below is the Docker config:
version: "3.6"
volumes:
shared-workspace:
name: "hadoop-distributed-file-system"
driver: local
services:
jupyterlab:
image: jupyterlab
container_name: jupyterlab
ports:
- 8888:8888
volumes:
- shared-workspace:/opt/workspace
spark-master:
image: spark-master
container_name: spark-master
ports:
- 8080:8080
- 7077:7077
volumes:
- shared-workspace:/opt/workspace
spark-worker-1:
image: spark-worker
container_name: spark-worker-1
environment:
- SPARK_WORKER_CORES=1
- SPARK_WORKER_MEMORY=512m
ports:
- 8081:8081
volumes:
- shared-workspace:/opt/workspace
depends_on:
- spark-master
spark-worker-2:
image: spark-worker
container_name: spark-worker-2
environment:
- SPARK_WORKER_CORES=1
- SPARK_WORKER_MEMORY=512m
ports:
- 8082:8081
volumes:
- shared-workspace:/opt/workspace
depends_on:
- spark-master
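For reference, the change being asked about would look like this for each worker (a sketch only; note that two 7 GB / 2-core workers are still not exactly one 14 GB / 4-core Databricks node, since the driver runs separately and JVM/OS overhead is not accounted for):

  spark-worker-1:
    image: spark-worker
    container_name: spark-worker-1
    environment:
      - SPARK_WORKER_CORES=2      # was 1
      - SPARK_WORKER_MEMORY=7g    # was 512m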

Hive-metastore not finding Hadoop Datanode

I have a Hadoop cluster with one namenode and one datanode, instantiated with docker compose. In addition I am trying to launch Hive, but the hive-metastore seems not to find my datanode, even though it is up and running. In fact, the log shows:
namenode:9870 is available.
check for datanode:9871...
datanode:9871 is not available yet
try in 5s once again ...
Here is my docker-compose.yml
#HADOOP
namenode:
  image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
  container_name: namenode
  restart: always
  expose:
    - "9870"
    - "54310"
    - "9000"
  ports:
    - 9870:9870
    - 9000:9000
  volumes:
    - ./data/hadoop_data/:/hadoop_data
  environment:
    - CLUSTER_NAME=test
    - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
    - CORE_CONF_hadoop_http_staticuser_user=root
    - CORE_CONF_hadoop_proxyuser_hue_hosts=*
    - CORE_CONF_hadoop_proxyuser_hue_groups=*
    - CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
    - HDFS_CONF_dfs_webhdfs_enabled=true
    - HDFS_CONF_dfs_permissions_enabled=false
    - HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
    - HDFS_CONF_dfs_safemode_threshold_pct=0

datanode:
  image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
  container_name: datanode
  restart: always
  expose:
    - "9871"
  environment:
    SERVICE_PRECONDITION: "namenode:9870"
  ports:
    - "9871:9871"
  env_file:
    - hive.env

hive-server:
  image: bde2020/hive:2.3.2-postgresql-metastore
  container_name: hive-server
  volumes:
    - ./employee:/employee
  env_file:
    - hive.env
  environment:
    HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
    SERVICE_PRECONDITION: "hive-metastore:9083"
  depends_on:
    - hive-metastore
  ports:
    - "10000:10000"

hive-metastore:
  image: bde2020/hive:2.3.2-postgresql-metastore
  container_name: hive-metastore
  env_file:
    - hive.env
  command: /opt/hive/bin/hive --service metastore
  environment:
    SERVICE_PRECONDITION: "namenode:9870 datanode:9871 hive-metastore-postgresql:5432"
  depends_on:
    - hive-metastore-postgresql
  ports:
    - "9083:9083"

hive-metastore-postgresql:
  image: bde2020/hive-metastore-postgresql:2.3.0
  container_name: hive-metastore-postgresql
  volumes:
    - ./metastore-postgresql/postgresql/data:/var/lib/postgresql/data
  depends_on:
    - datanode

What is wrong in my docker-compose.yml file?

I need a little bit of help to find out why my docker-compose.yml doesn't work. The story is that I have a docker-compose.yml that works (it creates Grafana, Prometheus, nodeexporter, cAdvisor and Alertmanager). Then I wanted one without Grafana, so I just removed all the Grafana parts from the file, but it doesn't work.
The one that works:
version: '2'

networks:
  monitor-net:
    driver: bridge

volumes:
  prometheus_data: {}
  grafana_data: {}

services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '-config.file=/etc/prometheus/prometheus.yml'
      - '-storage.local.path=/prometheus'
      - '-alertmanager.url=http://alertmanager:9093'
      - '-storage.local.memory-chunks=100000'
    restart: unless-stopped
    expose:
      - 9090
    ports:
      - 9090:9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  alertmanager:
    image: prom/alertmanager
    container_name: alertmanager
    volumes:
      - ./alertmanager/:/etc/alertmanager/
    command:
      - '-config.file=/etc/alertmanager/config.yml'
      - '-storage.path=/alertmanager'
    restart: unless-stopped
    expose:
      - 9093
    ports:
      - 9093:9093
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  nodeexporter:
    image: prom/node-exporter
    container_name: nodeexporter
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  cadvisor:
    image: google/cadvisor:v0.24.1
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    restart: unless-stopped
    expose:
      - 8080
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  grafana:
    image: grafana/grafana
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
    env_file:
      - user.config
    restart: unless-stopped
    expose:
      - 3000
    ports:
      - 3000:3000
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
The one that doesn't work:
version: '2'

networks:
  monitor-net:
    driver: bridge

volumes:
  prometheus_data: {}

services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '-config.file=/etc/prometheus/prometheus.yml'
      - '-storage.local.path=/prometheus'
      - '-alertmanager.url=http://alertmanager:9093'
      - '-storage.local.memory-chunks=100000'
    restart: unless-stopped
    expose:
      - 9090
    ports:
      - 9090:9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  alertmanager:
    image: prom/alertmanager
    container_name: alertmanager
    volumes:
      - ./alertmanager/:/etc/alertmanager/
    command:
      - '-config.file=/etc/alertmanager/config.yml'
      - '-storage.path=/alertmanager'
    restart: unless-stopped
    expose:
      - 9093
    ports:
      - 9093:9093
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  nodeexporter:
    image: prom/node-exporter
    container_name: nodeexporter
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  cadvisor:
    image: google/cadvisor
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    restart: unless-stopped
    expose:
      - 8080
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
With the second one, the Prometheus container can't run, and the logs are:
level=info msg="Starting prometheus (version=1.5.2, branch=master, revision=bd1182d29f462c39544f94cc822830e1c64cf55b)" source="main.go:75"
level=info msg="Build context (go=go1.7.5, user=root#1a01c5f68840, date=20170220-07:00:00)" source="main.go:76"
level=info msg="Loading configuration file /etc/prometheus/prometheus.yml" source="main.go:248"
level=error msg="Error opening memory series storage: leveldb: manifest corrupted (field 'comparer'): missing [file=MANIFEST-000009]" source="main.go:182"
The fact that Prometheus at least starts and then errors out means that your compose file is probably correct. It seems to at least try to load the configuration file /etc/prometheus/prometheus.yml and fails doing so. In the compose file I see that it mounts a host volume which is expected to exist on your host system at ./prometheus/. Did you also copy this folder and its contents? If yes, did you verify that the configuration is correct and expected to work without Grafana? Your current directory also matters when you run docker-compose; it must be where the ./prometheus/ directory is located.
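A quick way to check both points (run these from the directory that contains docker-compose.yml):

ls ./prometheus/prometheus.yml   # the config must exist relative to this directory
docker-compose config            # validates the compose file and prints it with paths resolved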
