Docker is running, and I want to run a container on Windows 10. When I run docker-compose from Windows PowerShell, some of the download jobs complete, then an error occurs and the container cannot run. It seems that jupyter fails to build or cannot open a directory. Could anyone help me with this problem? The command and the error are as follows:
PS C:\Users\mmva> cd C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose
PS C:\Users\mmva\Documents\GitHub\CerebralCortex-DockerCompose> docker-compose up
Building jupyter
Step 1/19 : FROM jupyter/jupyterhub
latest: Pulling from jupyter/jupyterhub
efd26ecc9548: Extracting [==================================================>] 51.34MB/51.34MB
a3ed95caeb02: Download complete
298ffe4c3e52: Download complete
758b472747c8: Download complete
8b9809a68afc: Download complete
93b253b5483d: Download complete
ef8136abb53c: Download complete
ERROR: Service 'jupyter' failed to build: failed to register layer: re-exec error: exit status 1: output: Failed to OpenForBackup failed in Win32: open \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz: The filename, directory name, or volume label syntax is incorrect. (0x1f) \\?\C:\ProgramData\Docker\windowsfilter\eb9ac9d604f051d5490a876043809e7929197356387569bc50a3694b77d1b721\usr\share\man\man3\Locale::gettext.3pm.gz
My Docker version is 17.09.0-ce-win33 (13620).
I think the docker-compose file version is 3.
The content of the docker-compose file:
version: '3'
# IPTABLES RULES IF NECESSARY
#-A INPUT -i br+ -j ACCEPT
#-A INPUT -i docker0 -j ACCEPT
#-A OUTPUT -o br+ -j ACCEPT
#-A OUTPUT -o docker0 -j ACCEPT
# The .env file is for production use with server-specific configurations
services:
# Frontend web proxy for accessing services and providing TLS encryption
nginx:
build: ./nginx
container_name: md2k-nginx
restart: always
volumes:
- ./nginx/site:/var/www
- ./nginx/nginx-selfsigned.crt:/etc/ssh/certs/ssl-cert.crt
- ./nginx/nginx-selfsigned.key:/etc/ssh/certs/ssl-cert.key
ports:
- "443:443"
- "80:80"
links:
- apiserver
- grafana
- jupyter
apiserver:
build: ../CerebralCortex-APIServer
container_name: md2k-api-server
restart: always
expose:
- 80
links:
- mysql
- kafka
- minio
depends_on:
- mysql
environment:
- MINIO_HOST=${MINIO_HOST:-minio}
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
- MYSQL_HOST=${MYSQL:-mysql}
- MYSQL_DB_USER=${MYSQL_ROOT_USER:-root}
- MYSQL_DB_PASS=${MYSQL_ROOT_PASSWORD:-random_root_password}
- KAFKA_HOST=${KAFKA_HOST:-kafka}
- JWT_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
- FLASK_HOST=${FLASK_HOST:-0.0.0.0}
- FLASK_PORT=${FLASK_PORT:-80}
- FLASK_DEBUG=${FLASK_DEBUG:-False}
volumes:
- ./data:/data
  # Data visualizations
grafana:
image: "grafana/grafana"
container_name: md2k-grafana
restart: always
ports:
- "3000:3000"
links:
- influxdb
environment:
- GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s:%(http_port)s/grafana/
# - GF_INSTALL_PLUGINS=raintank-worldping-app,grafana-clock-panel,grafana-simple-json-datasource
volumes:
- timeseries-storage:/var/lib/grafana
# - timeseries-storage:/etc/grafana
influxdb:
image: "influxdb:alpine"
container_name: md2k-influxdb
restart: always
ports:
- "8086:8086"
volumes:
- timeseries-storage:/var/lib/influxdb
# Data Science Dashboard Interface
jupyter:
build: ./jupyterhub
container_name: md2k-jupyterhub
ports:
- 8000
restart: always
network_mode: "host"
pid: "host"
environment:
TINI_SUBREAPER: 'true'
volumes:
- ./jupyterhub/conf:/srv/jupyterhub/conf
command: jupyterhub --no-ssl --config /srv/jupyterhub/conf/jupyterhub_config.py
# Cerebral Cortex backend
kafka:
image: wurstmeister/kafka:0.10.2.0
container_name: md2k-kafka
restart: always
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: ${MACHINE_IP:-10.0.0.1}
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_MESSAGE_MAX_BYTES: 2000000
KAFKA_CREATE_TOPICS: "filequeue:4:1,processed_stream:16:1"
KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- data-storage:/kafka
depends_on:
- zookeeper
zookeeper:
image: wurstmeister/zookeeper
container_name: md2k-zookeeper
restart: always
ports:
- "2181:2181"
mysql:
image: "mysql:5.7"
container_name: md2k-mysql
restart: always
ports:
- 3306:3306 # Default mysql port
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-random_root_password}
- MYSQL_DATABASE=${MYSQL_DATABASE:-cerebralcortex}
- MYSQL_USER=${MYSQL_USER:-cerebralcortex}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-cerebralcortex_pass}
volumes:
- ./mysql/initdb.d:/docker-entrypoint-initdb.d
- metadata-storage:/var/lib/mysql
minio:
image: "minio/minio"
container_name: md2k-minio
restart: always
ports:
- 9000:9000 # Default minio port
environment:
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-ZngmrLWgbSfZUvgocyeH}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-IwUnI5w0f5Hf1v2qVwcr}
command: server /export
volumes:
- object-storage:/export
cassandra:
build: ./cassandra
container_name: md2k-cassandra
restart: always
ports:
- 9160:9160 # Thrift client API
- 9042:9042 # CQL native transport
environment:
- CASSANDRA_CLUSTER_NAME=cerebralcortex
volumes:
- data-storage:/var/lib/cassandra
volumes:
object-storage:
metadata-storage:
data-storage:
temp-storage:
timeseries-storage:
user-storage:
  log-storage:
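For what it's worth, the windowsfilter path in the error suggests Docker is in Windows-containers mode while pulling a Linux image (jupyter/jupyterhub is Linux-only), and Linux filenames containing colons, such as Locale::gettext.3pm.gz, cannot be created on NTFS. A hedged guess at a fix: switch Docker for Windows to Linux containers before building, either from the tray icon ("Switch to Linux containers...") or from PowerShell (the DockerCli.exe path assumes a default Docker Desktop install):

# Toggles the daemon between Windows and Linux containers (assumed default install path)
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon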
Related
To set the context: I have a very basic Kafka server set up through a docker-compose.yml file, and then I spin up a UI app for Kafka.
Depending on the config, the UI app will or won't work because port 8080 is used or free.
My question is how 8080 ties into this, when the difference between the working and non-working configs is the host IP.
By the way, this is done in WSL (with the WSL IP being the IP in question, 172.20.123.69).
UI app:
podman run \
--name kafka_ui \
-p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME=local \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=172.20.123.69:9092 \
-d provectuslabs/kafka-ui:latest
The UI works with this Kafka server config:
version: "2"
services:
zookeeper:
container_name: zookeeper-server
image: docker.io/bitnami/zookeeper:3.8
ports:
- "2181:2181"
volumes:
- "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
container_name: kafka-server
image: docker.io/bitnami/kafka:3.3
ports:
- "9092:9092"
volumes:
- "/home/rndom/volumes/kafka:/bitnami/kafka"
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://172.20.123.69:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
volumes:
zookeeper_data:
driver: bridge
kafka_data:
driver: bridge
networks:
downloads_default:
driver: bridge
Notice that the environment variable KAFKA_CFG_ADVERTISED_LISTENERS has the WSL IP.
The UI doesn't work with the following:
version: "2"
services:
zookeeper:
container_name: zookeeper-server
image: docker.io/bitnami/zookeeper:3.8
ports:
- "2181:2181"
volumes:
- "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
container_name: kafka-server
image: docker.io/bitnami/kafka:3.3
ports:
- "9092:9092"
volumes:
- "/home/rndom/volumes/kafka:/bitnami/kafka"
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
volumes:
zookeeper_data:
driver: bridge
kafka_data:
driver: bridge
networks:
downloads_default:
driver: bridge
The latter I got from the official Bitnami Docker Hub repo.
The error I get when I use it:
*************************
APPLICATION FAILED TO START
*************************
etc etc.
Web server failed to start Port 8080 was already in use
Since I got it to work, this is really just for my own understanding.
Do use KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
Don't use podman run for one container; put the UI container in the same Compose file.
Use KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
If you still get errors, then as the error says, the port is occupied, so use a different one, like 8081:8080. That has nothing to do with the Kafka setup.
This Compose file works fine for me
version: "2"
services:
zookeeper:
image: docker.io/bitnami/zookeeper:3.8
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: docker.io/bitnami/kafka:3.3
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
kafka_ui:
image: docker.io/provectuslabs/kafka-ui:latest
ports:
- 8080:8080
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
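A usage sketch for the file above (assuming docker-compose; with podman, podman-compose works the same way):

docker-compose up -d
# then open http://localhost:8080 in a browser to reach the Kafka UI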
Try the command netstat -tupan and check whether port 8080 is used by any other process.
I want my NiFi data volume and configuration to persist, meaning that even if I delete the container and run docker-compose up again, I keep what I have built so far in NiFi. I tried to mount volumes as follows in the volumes section of my docker-compose file; nevertheless, it doesn't work and my NiFi processors are not saved. How can I do this correctly? Below is my docker-compose.yaml file.
version: "3.7"
services:
nifi:
image: koroslak/nifi:latest
container_name: nifi
restart: always
environment:
- NIFI_HOME=/opt/nifi/nifi-current
- NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
- NIFI_PID_DIR=/opt/nifi/nifi-current/run
- NIFI_BASE_DIR=/opt/nifi
- NIFI_WEB_HTTP_PORT=8080
ports:
- 9000:8080
depends_on:
- openldap
volumes:
- ./volume/nifi-current/state:/opt/nifi/nifi-current/state
- ./volume/database/database_repository:/opt/nifi/nifi-current/repositories/database_repository
- ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/repositories/flowfile_repository
- ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/repositories/content_repository
- ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/repositories/provenance_repository
- ./volume/log:/opt/nifi/nifi-current/logs
#- ./volume/conf:/opt/nifi/nifi-current/conf
postgres:
image: koroslak/postgres:latest
container_name: postgres
restart: always
environment:
- POSTGRES_PASSWORD=secret123
ports:
- 6000:5432
volumes:
- postgres:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4:4.18
restart: always
environment:
- PGADMIN_DEFAULT_EMAIL=admin
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- 8090:80
metabase:
container_name: metabase
image: metabase/metabase:v0.34.2
restart: always
environment:
MB_DB_TYPE: postgres
MB_DB_DBNAME: metabase
MB_DB_PORT: 5432
MB_DB_USER: metabase_admin
MB_DB_PASS: secret123
MB_DB_HOST: postgres
ports:
- 3000:3000
depends_on:
- postgres
openldap:
image: osixia/openldap:1.3.0
container_name: openldap
restart: always
ports:
- 38999:389
# Mocked source systems
jira-api:
image: danielgtaylor/apisprout:latest
container_name: jira-api
restart: always
ports:
- 8000:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/jira-api.json
pipedrive-api:
image: danielgtaylor/apisprout:latest
container_name: pipedrive-api
restart: always
ports:
- 8100:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/pipedrive-api.yaml
restcountries-api:
image: danielgtaylor/apisprout:latest
container_name: restcountries-api
restart: always
ports:
- 8200:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/restcountries-api.json
volumes:
postgres:
nifi:
openldap:
metabase:
pgadmin:
Using NiFi Registry you can ensure that all the changes you make in NiFi are committed to Git; i.e., if you change some processor configuration, it will be reflected in your Git repo.
As for the flow files, you may need to fix the volume mappings, as sketched below.
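A minimal sketch of what the volumes block could look like, assuming the koroslak/nifi image follows the stock apache/nifi layout, where the repositories live directly under nifi-current/ (not under a repositories/ subdirectory) and the flow definition flow.xml.gz lives in conf/:

  nifi:
    volumes:
      - ./volume/nifi-current/conf:/opt/nifi/nifi-current/conf  # flow.xml.gz (your processors) lives here
      - ./volume/nifi-current/state:/opt/nifi/nifi-current/state
      - ./volume/database/database_repository:/opt/nifi/nifi-current/database_repository
      - ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      - ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/content_repository
      - ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/provenance_repository
      - ./volume/log:/opt/nifi/nifi-current/logs

With conf/ persisted, the flow survives docker-compose down followed by a fresh up.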
I have a docker-compose.yml file which contains frontend, backend, testing, postgres and pgadmin containers. All the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers in docker-compose.
version: '3.7'
services:
frontend:
container_name: test-frontend
build:
context: ./frontend
dockerfile: Dockerfile.local
ports:
- '3000:3000'
networks:
- test-network
environment:
# For the frontend can be applied only during the build!
# (while it's applied when TS is compiled)
# You have to build manually without cache if one of those are changed at least for the prod mode.
- REACT_APP_BACKEND_API=http://localhost:8000/api/v1
- REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
- CI=true
- CHOKIDAR_USEPOLLING=true
postgres:
image: postgres
environment:
POSTGRES_USER: dev
POSTGRES_PASSWORD: dev
PGDATA: /data/postgres
volumes:
- postgres:/data/postgres
ports:
- "5432:5432"
networks:
- test-network
restart: unless-stopped
pgadmin:
image: dpage/pgadmin4
environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
PGADMIN_DEFAULT_PASSWORD: dev
volumes:
- pgadmin:/root/.pgadmin
- ./pgadmin-config/servers.json:/pgadmin4/servers.json
ports:
- "5050:80"
networks:
- test-network
restart: unless-stopped
backend:
container_name: test-backend
build:
context: ./backend
dockerfile: Dockerfile.local
ports:
- '8000:80'
volumes:
- ./backend:/app
command: >
bash -c "alembic upgrade head
&& exec /start-reload.sh"
networks:
- test-network
depends_on:
- postgres
environment:
- GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
- LOG_LEVEL=debug
- SQLALCHEMY_ECHO=True
- AUTH_ENABLED=True
- CORS=*
- GCP_ALLOWED_DOMAINS=*
testing:
container_name: test-testing
build:
context: ./testing
dockerfile: Dockerfile
volumes:
- ./testing:/isp-app
command: >
bash -c "/wait
&& robot ."
networks:
- test-network
depends_on:
- backend
- frontend
environment:
- WAIT_HOSTS= frontend:3000, backend:8000
- WAIT_TIMEOUT= 3000
- WAIT_SLEEP_INTERVAL=300
- WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
postgres:
pgadmin:
networks:
test-network:
driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How to fix it?
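One thing worth checking, going only by the ports shown above: on a Compose network, containers reach each other on the container-side port, not the published host port. backend publishes '8000:80', so from inside the network it is backend:80 (frontend:3000 is fine, since it maps 3000:3000). A sketch of the corrected testing environment under that assumption:

  testing:
    environment:
      - WAIT_HOSTS=frontend:3000,backend:80  # container ports, not host-mapped ports
      - WAIT_TIMEOUT=3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300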
I'm trying to set up Docker to run MySQL, Mosquitto and Node-RED, but I keep getting "unsupported config option" errors.
Services:
mysql:
image: mysql
container_name: mysql
restart: always
ports:
- “6603:3306”
Environment:
MYSQL_ROOT_PASSWORD: “abcd1234”
volumes:
- mysql-data
node-red:
image: nodered/node-red:latest
restart: always
container_name: nodered
environment:
-TZ=Europe/London
depends_on:
- mysql
ports:
- “1880:1880”
links:
- mysql:mysql
- mosquitto:mosquitto
volumes:
- node-red-data
mosquitto:
image: eclipse-mosquitto
hostname: mosquitto
container_name: mosquitto
restart: always
ports:
- "1883:1883"
volumes:
mysql-data:
node-red-data:
Any thoughts on why I'm getting these errors?
Unsupported config option for Services: 'mosquitto'
Unsupported config option for volumes: 'mysql-data'
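For comparison, a minimal corrected sketch of the same file: Compose keys are case-sensitive lowercase (services:, environment:), the curly quotes must be straight ASCII quotes, and a named-volume entry needs a container-side target path (the mount points below are the images' conventional data directories, assumed rather than taken from the question):

version: "3"
services:
  mysql:
    image: mysql
    container_name: mysql
    restart: always
    ports:
      - "6603:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "abcd1234"
    volumes:
      - mysql-data:/var/lib/mysql  # named volume needs a target path
  node-red:
    image: nodered/node-red:latest
    restart: always
    container_name: nodered
    environment:
      - TZ=Europe/London
    depends_on:
      - mysql
    ports:
      - "1880:1880"
    volumes:
      - node-red-data:/data
  mosquitto:
    image: eclipse-mosquitto
    hostname: mosquitto
    container_name: mosquitto
    restart: always
    ports:
      - "1883:1883"
volumes:
  mysql-data:
  node-red-data:

The links: entries from the original are dropped; services on the same Compose network already resolve one another by service name.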
I'm trying to set up Zipkin, Elasticsearch, Prometheus and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is this one:
version: '3.7'
services:
storage:
image: openzipkin/zipkin-elasticsearch7
container_name: elasticsearch
ports:
- "9200:9200"
environment:
- "xpack.security.enabled=false"
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
restart: unless-stopped
prometheus:
image: prom/prometheus:latest
container_name: prometheus
volumes:
- $PWD/prometheus:/etc/prometheus/
- /tmp/prometheus:/prometheus/data:rw
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
ports:
- "9090:9090"
restart: unless-stopped
zipkin:
image: openzipkin/zipkin
container_name: zipkin
depends_on:
- dependencies
- storage
environment:
- "STORAGE_TYPE=elasticsearch"
- "ES_HOSTS=storage"
ports:
- "9411:9411"
restart: unless-stopped
grafana:
image: grafana/grafana
container_name: grafana
ports:
- "3000:3000"
restart: unless-stopped
dependencies:
image: openzipkin/zipkin-dependencies
container_name: dependencies_zipkin
depends_on:
- storage
environment:
- "STORAGE_TYPE=elasticsearch"
- "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get this error:
ERROR: cannot load service names: server error (Service Unavailable)(due to the network error
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
When using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error disappears between Zipkin and ES, but it still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
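In the compose file from the question, that would be (a minimal excerpt of the zipkin service, unchanged apart from the variable):

  zipkin:
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=http://storage:9200  # a base URL, not just a hostname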
Finally I have this file:
version: '3.7'
services:
storage:
image: openzipkin/zipkin-elasticsearch7
container_name: elasticsearch
ports:
- 9200:9200
zipkin:
image: openzipkin/zipkin
container_name: zipkin
environment:
- STORAGE_TYPE=elasticsearch
- "ES_HOSTS=elasticsearch:9300"
ports:
- 9411:9411
depends_on:
- storage
dependencies:
image: openzipkin/zipkin-dependencies
container_name: dependencies
entrypoint: crond -f
depends_on:
- storage
environment:
- STORAGE_TYPE=elasticsearch
- "ES_HOSTS=elasticsearch:9300"
- "ES_NODES_WAN_ONLY=true"
prometheus:
image: prom/prometheus:latest
container_name: prometheus
volumes:
- $PWD/prometheus:/etc/prometheus/
- /tmp/prometheus:/prometheus/data:rw
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
ports:
- "9090:9090"
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- prometheus
ports:
- "3000:3000"
The main differences are the use of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This one is really the key to not getting the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f
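A plausible explanation, going by how the openzipkin/zipkin-dependencies image documents itself: it is a batch Spark job that aggregates a day of traces and then exits, not a long-running server. Running crond -f keeps the container in the foreground and lets the image's cron schedule re-run the aggregation periodically, instead of the container exiting (and being restarted by docker-compose) immediately after start-up.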