The docker-compose doesn't run - docker

I tried to bring up the docker-compose stack and received the following error:
$ docker-compose up
Creating network "evaluatehumanbalance_default" with the default driver
Pulling redis (redis:6.0.6)...
6.0.6: Pulling from library/redis
bf5952930446: Pull complete
911b8422b695: Pull complete
093b947e0ade: Pull complete
5b1d5f59e382: Pull complete
7a5f59580c0b: Pull complete
f9c63997c980: Pull complete
Digest: sha256:09c33840ec47815dc0351f1eca3befe741d7105b3e95bc8fdb9a7e4985b9e1e5
Status: Downloaded newer image for redis:6.0.6
Pulling zookeeper (confluentinc/cp-zookeeper:5.5.1)...
5.5.1: Pulling from confluentinc/cp-zookeeper
0cd7281e66ed: Pull complete
ee8abe01e201: Pull complete
19bb39092429: Pull complete
e8a27d9d6e72: Pull complete
cadbdfe0e559: Pull complete
184cb34023c9: Pull complete
Digest: sha256:1ef59713eea58401b333827dc44f23556cbc4b6437968a261f0b0a7b105126be
Status: Downloaded newer image for confluentinc/cp-zookeeper:5.5.1
Pulling kafka (confluentinc/cp-kafka:5.5.1)...
5.5.1: Pulling from confluentinc/cp-kafka
0cd7281e66ed: Already exists
ee8abe01e201: Already exists
19bb39092429: Already exists
e8a27d9d6e72: Already exists
8efe498170fa: Pull complete
b5050338516f: Pull complete
Digest: sha256:4de6a6f317991d858fe1bd84636c55dc17d9312db6d4a80be0f85354b9e481fc
Status: Downloaded newer image for confluentinc/cp-kafka:5.5.1
Pulling banking-simulation (gcr.io/simulation-screenshots/banking-simulation:)...
ERROR: Head https://gcr.io/v2/simulation-screenshots/banking-simulation/manifests/latest: unknown: Project 'project:simulation-screenshots' not found or deleted.
This is when I try to run it interactively:
$ docker run -it gcr.io/simulation-screenshots/banking-simulation
Unable to find image 'gcr.io/simulation-screenshots/banking-simulation:latest' locally
docker: Error response from daemon: Head https://gcr.io/v2/simulation-screenshots/banking-simulation/manifests/latest: unknown: Project 'project:simulation-screenshots' not found or deleted.
The docker-compose.yaml file is provided:
#
# This docker-compose file starts and runs:
# * A redis server
# * A 1-node kafka cluster
# * A 1-zookeeper ensemble
# * Kafka Connect with Redis Source
# * 3 Java Applications - Trucking-Simulation, Banking-Simulation, and STEDI
# * A Spark master
# * A Spark worker
version: '3.7'
services:
  redis:
    image: redis:6.0.6
    ports:
      - "6379:6379"
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: "2181"
  kafka:
    image: confluentinc/cp-kafka:5.5.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
    depends_on:
      - "zookeeper"
  banking-simulation:
    image: gcr.io/simulation-screenshots/banking-simulation
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_DB: 0
      KAFKA_BROKER: kafka:19092
    depends_on:
      - "kafka"
      - "redis"
  trucking-simulation:
    image: gcr.io/simulation-screenshots/trucking-simulation
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_DB: 0
      KAFKA_BROKER: kafka:19092
    depends_on:
      - "kafka"
      - "redis"
  stedi:
    image: gcr.io/simulation-screenshots/stedi
    ports:
      - "4567:4567"
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_DB: 0
      KAFKA_BROKER: kafka:19092
      KAFKA_RISK_TOPIC: risk-topic
    depends_on:
      - "kafka"
      - "redis"
  connect:
    image: gcr.io/simulation-screenshots/kafka-connect-redis-source
    ports:
      - "8083:8083"
      - "5005:5005"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "PLAINTEXT://kafka:19092"
      CONNECT_GROUP_ID: "connect"
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_PLUGIN_PATH: "/usr/share/java"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.converters.ByteArrayConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_CONFIG_STORAGE_TOPIC: "connect-config"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect-offset"
      CONNECT_STATUS_STORAGE_TOPIC: "connect-status"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_DEBUG: "y"
      DEBUG_SUSPEND_FLAG: "y"
      CLASSPATH: "/usr/share/java/kafka-connect-redis-source/*"
    depends_on:
      - "kafka"
      - "redis"
  spark:
    image: docker.io/bitnami/spark:3-debian-10
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    ports:
      - '8080:8080'
    volumes:
      - ./:/home/workspace/
      - ./spark/jars:/opt/bitnami/spark/.ivy2
  spark-worker-1:
    image: docker.io/bitnami/spark:3-debian-10
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    volumes:
      - ./:/home/workspace/
      - ./spark/jars:/opt/bitnami/spark/.ivy2
What's the issue here and how do I solve it?

The repository (in the form of a Google Cloud Platform project) has been deleted (or made inaccessible). As a result you're unable to retrieve the image from the repository.
You may want to contact the author of the documentation that you're using to ask for an update.
You can confirm this by browsing the link:
https://gcr.io/v2/simulation-screenshots/banking-simulation/manifests/latest
For extant repositories/images, Google Container Registry (GCR, hence gcr.io) will redirect HTTP GETs to a registry browser (https://console.cloud.google.com) so that you may browse the repository. Here's an (unrelated) example to show how it usually works:
https://gcr.io/cadvisor/cadvisor:v0.40.0
The GCR registry has image:tag URLs of the form:
[us|eu].gcr.io/${PROJECT}/${IMAGE}:${TAG}
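You can also confirm it from the command line (a sketch; depending on registry auth you may get a token challenge first, but a deleted project cannot serve the manifest either way):
curl -i https://gcr.io/v2/simulation-screenshots/banking-simulation/manifests/latest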

I changed the image version to 'latest'.
zookeeper:
  image: confluentinc/cp-zookeeper:latest
Found it here: https://github.com/confluentinc/cp-docker-images/issues/582
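If you change a tag like this, it may help to verify that it resolves before bringing up the whole stack; docker-compose can pull a single service's image:
docker-compose pull zookeeper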

Related

Remote webdriver can't connect to docker selenium grid

I have a problem using webdriver.Remote (in a Docker container) to connect to another Selenium Grid container. Below are my docker-compose file and the Python file that uses the webdriver.
python file:
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

sleep(10)  # crude wait for the grid to finish starting
options = webdriver.ChromeOptions()
options.add_argument('--no-sandbox')
options.add_argument('--headless')
driver = webdriver.Remote(
    command_executor='http://selenium-hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
    options=options,  # the options were created but never passed in the original
)
docker-compose file:
version: "3"
services:
selenium-hub:
image: selenium/hub:3.14.0
container_name: selenium-hub
ports:
- "9090:4444"
chromenode:
image: selenium/node-chrome:3.14.0
depends_on:
- selenium-hub
links:
- selenium-hub:hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
flask-web:(execute python file)
image: main
container_name: template_flask
depends_on:
- selenium-hub
- chromenode
links:
- selenium-hub
- chromenode
The error I got:
MaxRetryError: HTTPConnectionPool(host='selenium-hub', port=4444): Max retries exceeded with url:
I have seen many discussions about this error but still can't solve it. Could anyone give me some tips? Thanks!
Sounds like a network issue between containers. I would suggest creating a network for all the containers; that way they can talk to each other using the service names defined in your docker-compose file. Refer to the following:
version: "3"
services:
hub:
image: selenium/hub:3
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 180
GRID_TIMEOUT: 60
networks:
- selenium_net
chrome:
image: selenium/node-chrome-debug:3
container_name: chrome_node
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 4
NODE_MAX_INSTANCES: 4
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
links:
- hub
networks:
- selenium_net
firefox:
image: selenium/node-firefox-debug:3
container_name: firefox_node
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 5
NODE_MAX_INSTANCES: 5
volumes:
- /dev/shm:/dev/shm
ports:
- "9003:5900"
links:
- hub
networks:
- selenium_net
networks:
selenium_net:
driver: bridge
name: selenium-net
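With this layout, a client container attached to selenium_net (your flask-web service would need networks: [selenium_net] too) reaches the grid by service name. A minimal sketch of the changed connection line, assuming the rest of the script stays as-is:
driver = webdriver.Remote(
    command_executor='http://hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
)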

Set up Prometheus and Cadvisor with docker-compose

I am new to Prometheus, cAdvisor, and docker-compose. I made a docker-compose file including my own application, named chat, together with a mongo container. Those work fine. Now I want to monitor my containers with Prometheus and cAdvisor. I'm getting the following errors:
cadvisor | W0419 11:41:00.576916 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
cadvisor | W0419 11:41:00.577437 1 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
cadvisor | E0419 11:41:00.582000 1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
and
prometheus | ts=2022-04-19T11:54:19.051Z caller=main.go:438 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" file=/etc/prometheus/prometheus.yml err="parsing YAML file /etc/prometheus/prometheus.yml: yaml: unmarshal errors:\n line 2: field scrape-interval not found in type config.plain"
I tried changing the config parameter in my docker-compose file to the following, but it didn't change the error:
command:
  - '--config.file=./prometheus/prometheus.yml'
docker-compose.yml:
version: '3.7'
services:
  chat-api:
    container_name: chat-api
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000:4000'
    networks:
      - cchat
    restart: 'on-failure'
  userdb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - userdb:/data/db
    networks:
      - cchat
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9080:9080'
    networks:
      - cloudchat
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    devices:
      - /dev/kmsg:/dev/kmsg
    depends_on:
      - chat-api
    networks:
      - cchat

volumes:
  userdb:

networks:
  cchat:
prometheus.yml:
global:
  scrape-interval: 2s

scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
project structure:
picture of project structure
I guess it's quite late, but you can try mounting /etc/machine-id:/etc/machine-id:ro.
Running in privileged mode could help too. This is my configuration, which is working without problems:
cadvisor:
  image: gcr.io/cadvisor/cadvisor:v0.47.0
  container_name: cadvisor
  restart: unless-stopped
  privileged: true
  ports:
    - "8080:8080"
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:ro
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    - /dev/disk/:/dev/disk:ro
One important note: don't use latest; it seems it's not actually the latest version (source: https://github.com/google/cadvisor/issues/3066).
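As a side note, the Prometheus error in the question is a separate problem: the config option is spelled with an underscore, not a hyphen. A minimal prometheus.yml that parses, keeping the cadvisor target from the question:
global:
  scrape_interval: 2s

scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']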

docker-compose: zipkin cannot connect to elasticsearch

I'm trying to set up Zipkin, Elasticsearch, Prometheus, and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is this:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get this error:
ERROR: cannot load service names: server error (Service Unavailable) (due to the network error)
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error disappears between Zipkin and ES, but still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
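Applied to the compose file from the question, that is a one-line change in both services that talk to Elasticsearch (a sketch):
environment:
  - "STORAGE_TYPE=elasticsearch"
  - "ES_HOSTS=http://storage:9200"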
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This one is really the key to avoiding the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
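(A likely explanation, not stated in the thread itself: zipkin-dependencies is a batch Spark job that aggregates a day of traces and then exits, so without a long-running entrypoint the container stops as soon as the job finishes; crond -f keeps the container in the foreground and re-runs the job on a schedule, which is how the openzipkin/docker-zipkin compose files run it.)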

How to use local Docker image with docker-compose?

I have a docker-compose file and want to be able to make one of the images be spun up from the image in my local cache instead of pulling from Docker Hub. I'm using the sbt docker plugin, so I can see the image being created, and can see it when I do docker images at the command line. Yet when I do docker-compose up -d myimage, it always defaults to the remote image. How can I force it to use my local image?
Here is the relevant part of my compose file:
spark-master:
  image: gettyimages/spark:2.2.0-hadoop-2.7
  command: bin/spark-class org.apache.spark.deploy.master.Master -h spark-master
  hostname: spark-master
  environment:
    MASTER: spark://spark-master:7077
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: localhost
  expose:
    - 7001
    - 7002
    - 7003
    - 7004
    - 7005
    - 7006
    - 7077
    - 6066
  ports:
    - 4040:4040
    - 6066:6066
    - 7077:7077
    - 8080:8080
  volumes:
    - ./conf/master:/conf
    - ./data:/tmp/data

hydra-streams:
  image: ****/hydra-spark-core
  command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
  hostname: worker
  environment:
    SPARK_CONF_DIR: /conf
    SPARK_WORKER_CORES: 2
    SPARK_WORKER_MEMORY: 1g
    SPARK_WORKER_PORT: 8881
    SPARK_WORKER_WEBUI_PORT: 8091
    SPARK_PUBLIC_DNS: localhost
  links:
    - spark-master
  expose:
    - 7012
    - 7013
    - 7014
    - 7015
    - 7016
    - 8881
  ports:
    - 8091:8091
  volumes:
    - ./conf/worker:/conf
    - ./data:/tmp/data
You can force the local image to be used by retagging the existing image:
docker tag remote/image local_image
Then, inside the compose file, use local_image instead of remote/image.
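For example, with hypothetical names (the masked image name from the question left as-is): tag the locally built image under a name/tag that exists only in your local cache, then reference that name. docker-compose only pulls when the named image is missing locally, so it will use yours:
docker tag mycompany/hydra-spark-core hydra-spark-core:local
and in the compose file:
hydra-streams:
  image: hydra-spark-core:local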

jupyter fails to open a directory to run a docker container

Docker is running and I want to run a docker container on Windows 10. When I run docker-compose from Windows PowerShell, some download jobs complete, then an error occurs and the docker container cannot run. It seems that jupyter fails to build or open a directory. Could anyone help me with this problem? The command line and the error are as follows:
PS C:\Users\mmva> cd C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose
PS C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose> docker-compose up
Building jupyter
Step 1/19 : FROM jupyter/jupyterhub
latest: Pulling from jupyter/jupyterhub
efd26ecc9548: Extracting [==================================================>] 51.34MB/51.34MB
a3ed95caeb02: Download complete
298ffe4c3e52: Download complete
758b472747c8: Download complete
8b9809a68afc: Download complete
93b253b5483d: Download complete
ef8136abb53c: Download complete
ERROR: Service 'jupyter' failed to build: failed to register layer: re-exec error: exit status 1: output: Failed to OpenForBackup failed in Win32: open \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz: The filename, directory name, or volume label syntax is incorrect. (0x1f) \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz
My docker version is 17.09.0-ce-win33 (13620).
I think the docker-compose file's version is 3.
The content of the docker-compose file:
version: '3'

# IPTABLES RULES IF NECESSARY
#-A INPUT -i br+ -j ACCEPT
#-A INPUT -i docker0 -j ACCEPT
#-A OUTPUT -o br+ -j ACCEPT
#-A OUTPUT -o docker0 -j ACCEPT

# The .env file is for production use with server-specific configurations
services:
  # Frontend web proxy for accessing services and providing TLS encryption
  nginx:
    build: ./nginx
    container_name: md2k-nginx
    restart: always
    volumes:
      - ./nginx/site:/var/www
      - ./nginx/nginx-selfsigned.crt:/etc/ssh/certs/ssl-cert.crt
      - ./nginx/nginx-selfsigned.key:/etc/ssh/certs/ssl-cert.key
    ports:
      - "443:443"
      - "80:80"
    links:
      - apiserver
      - grafana
      - jupyter
  apiserver:
    build: ../CerebralCortex-APIServer
    container_name: md2k-api-server
    restart: always
    expose:
      - 80
    links:
      - mysql
      - kafka
      - minio
    depends_on:
      - mysql
    environment:
      - MINIO_HOST=${MINIO_HOST:-minio}
      - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
      - MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
      - MYSQL_HOST=${MYSQL:-mysql}
      - MYSQL_DB_USER=${MYSQL_ROOT_USER:-root}
      - MYSQL_DB_PASS=${MYSQL_ROOT_PASSWORD:-random_root_password}
      - KAFKA_HOST=${KAFKA_HOST:-kafka}
      - JWT_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
      - FLASK_HOST=${FLASK_HOST:-0.0.0.0}
      - FLASK_PORT=${FLASK_PORT:-80}
      - FLASK_DEBUG=${FLASK_DEBUG:-False}
    volumes:
      - ./data:/data
  # Data vizualizations
  grafana:
    image: "grafana/grafana"
    container_name: md2k-grafana
    restart: always
    ports:
      - "3000:3000"
    links:
      - influxdb
    environment:
      - GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s:%(http_port)s/grafana/
      # - GF_INSTALL_PLUGINS=raintank-worldping-app,grafana-clock-panel,grafana-simple-json-datasource
    volumes:
      - timeseries-storage:/var/lib/grafana
      # - timeseries-storage:/etc/grafana
  influxdb:
    image: "influxdb:alpine"
    container_name: md2k-influxdb
    restart: always
    ports:
      - "8086:8086"
    volumes:
      - timeseries-storage:/var/lib/influxdb
  # Data Science Dashboard Interface
  jupyter:
    build: ./jupyterhub
    container_name: md2k-jupyterhub
    ports:
      - 8000
    restart: always
    network_mode: "host"
    pid: "host"
    environment:
      TINI_SUBREAPER: 'true'
    volumes:
      - ./jupyterhub/conf:/srv/jupyterhub/conf
    command: jupyterhub --no-ssl --config /srv/jupyterhub/conf/jupyterhub_config.py
  # Cerebral Cortex backend
  kafka:
    image: wurstmeister/kafka:0.10.2.0
    container_name: md2k-kafka
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: ${MACHINE_IP:-10.0.0.1}
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_MESSAGE_MAX_BYTES: 2000000
      KAFKA_CREATE_TOPICS: "filequeue:4:1,processed_stream:16:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - data-storage:/kafka
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: md2k-zookeeper
    restart: always
    ports:
      - "2181:2181"
  mysql:
    image: "mysql:5.7"
    container_name: md2k-mysql
    restart: always
    ports:
      - 3306:3306  # Default mysql port
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-random_root_password}
      - MYSQL_DATABASE=${MYSQL_DATABASE:-cerebralcortex}
      - MYSQL_USER=${MYSQL_USER:-cerebralcortex}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-cerebralcortex_pass}
    volumes:
      - ./mysql/initdb.d:/docker-entrypoint-initdb.d
      - metadata-storage:/var/lib/mysql
  minio:
    image: "minio/minio"
    container_name: md2k-minio
    restart: always
    ports:
      - 9000:9000  # Default minio port
    environment:
      - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
      - MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
    command: server /export
    volumes:
      - object-storage:/export
  cassandra:
    build: ./cassandra
    container_name: md2k-cassandra
    restart: always
    ports:
      - 9160:9160  # Thrift client API
      - 9042:9042  # CQL native transport
    environment:
      - CASSANDRA_CLUSTER_NAME=cerebralcortex
    volumes:
      - data-storage:/var/lib/cassandra

volumes:
  object-storage:
  metadata-storage:
  data-storage:
  temp-storage:
  timeseries-storage:
  user-storage:
  log-storage:
