My problem is as the title states.
Here are the steps to reproduce. Note that I am using Compose file version 3.3, since I am running the apt Python version of docker-compose; there is no binary for ARM64.
Create a docker-compose file with these contents:
version: "3.3"
volumes:
owncloud_data:
external: true
owncloud_mysql:
external: true
owncloud_backup:
external: true
owncloud_redis:
external: true
services:
traefik:
image: "traefik"
container_name: "traefik"
restart: "always"
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.myresolver.acme.tlschallenge=true"
#- "--certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
# Be sure to have LeGo installed
- "--certificatesresolvers.myresolver.acme.email=<my email>"
- "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
whoami:
image: "containous/whoami"
container_name: "simple-service"
restart: "always"
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`<my domain>`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
- "traefik.http.routers.whoami.tls.certresolver=myresolver"
owncloud:
image: "owncloud/server"
container_name: "owncloud"
restart: "always"
depends_on:
- db
- redis
environment:
- OWNCLOUD_DOMAIN=owncloud.<my domain>
- OWNCLOUD_DB_TYPE=mysql
- OWNCLOUD_DB_NAME=owncloud
- OWNCLOUD_DB_USERNAME=owncloud
- OWNCLOUD_DB_PASSWORD=owncloud
- OWNCLOUD_DB_HOST=db
- OWNCLOUD_ADMIN_USERNAME=<my username>
- OWNCLOUD_ADMIN_PASSWORD=<my password>
- OWNCLOUD_MYSQL_UTF8MB4=true
- OWNCLOUD_REDIS_ENABLED=true
- OWNCLOUD_REDIS_HOST=redis
healthcheck:
test: ["CMD", "/usr/bin/healthcheck"]
interval: 30s
timeout: 10s
retries: 5
labels:
- "traefik.enable=true"
- "traefik.http.routers.owncloud.rule=Host(`<my domain>`)"
- "traefik.http.routers.owncloud.entrypoints=websecure"
- "traefik.http.routers.owncloud.tls.certresolver=myresolver"
volumes:
- type: volume
source: owncloud_data
target: /owncloud/data
db:
image: webhippie/mariadb:latest
restart: always
environment:
- MARIADB_ROOT_PASSWORD=owncloud
- MARIADB_USERNAME=owncloud
- MARIADB_PASSWORD=owncloud
- MARIADB_DATABASE=owncloud
- MARIADB_MAX_ALLOWED_PACKET=128M
- MARIADB_INNODB_LOG_FILE_SIZE=64M
healthcheck:
test: ["CMD", "/usr/bin/healthcheck"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- type: volume
source: owncloud_mysql
target: /owncloud/mysql
- type: volume
source: owncloud_backup
target: /owncloud/backup
redis:
image: webhippie/redis:latest
restart: "always"
environment:
- REDIS_DATABASES=1
healthcheck:
test: ["CMD", "/usr/bin/healthcheck"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- type: volume
source: owncloud_redis
target: /owncloud/redis
I run this command before I start the containers:
for v in owncloud_data owncloud_mysql owncloud_backup owncloud_redis; do
  sudo docker volume create $v
done
I then run sudo docker-compose up.
When I run sudo docker volume ls, I see the four volumes I created earlier (now dangling), plus four more that Docker Compose created. What's the deal here? I specified their names explicitly. I'm also not sure why the bind mount for the traefik container works.
I have tried seemingly everything: putting things in quotes, prepending the project name, the short syntax owncloud_data:/owncloud/data, and bind mounts, although I don't want to use binds since it's easier to back up a volume.
Thank you, Logan
docker-compose is using the volumes you named for the mounts you specified. However, it is also creating an anonymous volume for each VOLUME declaration in the images you are using that you are not explicitly mounting over.
I've made this little script to extract the declared volumes from the images you're using:
images="traefik containous/whoami owncloud/server webhippie/mariadb:latest webhippie/redis:latest"
echo $images | xargs -n1 docker pull
docker inspect $images -f '{{.RepoTags}}, {{.Config.Volumes}}'
The result:
[traefik:latest], map[]
[containous/whoami:latest], map[]
[owncloud/server:latest], map[/mnt/data:{}]
[webhippie/mariadb:latest], map[/var/lib/backup:{} /var/lib/mysql:{}]
[webhippie/redis:latest], map[/var/lib/redis:{}]
None of your mounts target the paths those images declare (for example, owncloud/server declares /mnt/data, but you mount owncloud_data at /owncloud/data), so Docker creates a fresh anonymous volume for each declared path and your named volumes stay empty.
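A minimal sketch of the fix, assuming you want your named volumes to back the paths the images actually declare (only the volumes stanzas change; the rest of the file stays as it is):

  owncloud:
    volumes:
      # owncloud/server declares /mnt/data
      - owncloud_data:/mnt/data
  db:
    volumes:
      # webhippie/mariadb declares /var/lib/mysql and /var/lib/backup
      - owncloud_mysql:/var/lib/mysql
      - owncloud_backup:/var/lib/backup
  redis:
    volumes:
      # webhippie/redis declares /var/lib/redis
      - owncloud_redis:/var/lib/redis

With these targets, no VOLUME declaration is left unbound, so Compose has no reason to create anonymous volumes.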
I have a project structure:

configs
  config.yaml
server
  ...
docker-compose.yaml
The docker-compose file is:
version: '3.8'
services:
  volumes:
    - /configs:/configs
  postgres:
    image: postgres:12
    restart: always
    ports:
      - '5432:5432'
    volumes:
      - ./db_data:/var/lib/postgresql/data
      - ./server/scripts/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
    env_file:
      - local.env
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "devdb", "-U", "postgres" ]
      timeout: 45s
      interval: 10s
      retries: 10
  app:
    build:
      context: ./server/app
      dockerfile: Dockerfile
    env_file:
      - local.env
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=devdb
volumes:
  configs:
The app uses config.yml, and I'm wondering how to add the configs folder to the container. I tried this:

volumes:
  - /configs:/configs

but it gives me services.volumes must be a mapping.
How can this be resolved?
You need to put the volumes directive inside a service. Probably something like this:
app:
  build:
    context: ./server/app
    dockerfile: Dockerfile
  env_file:
    - local.env
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=devdb
  volumes:
    - ./configs:/configs
If multiple containers need it, you'll have to repeat it in each of those services, as sketched below.
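For instance, a minimal sketch assuming both postgres and app need the folder (only the relevant keys shown):

services:
  postgres:
    image: postgres:12
    volumes:
      - ./configs:/configs  # same host folder, repeated per service
  app:
    build:
      context: ./server/app
    volumes:
      - ./configs:/configs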
I am trying to design a docker-compose.yml file that will allow me to easily launch environments to develop inside of. Sometimes I would like to have two or more of these up at the same time, but doing it naively I get ERROR: for pdb Cannot create container for service pdb: Conflict. The container name "/pdb" is already in use by container ... (even if they are on different stacks).
version: '3.4'
services:
  pdb:
    hostname: "pdb"
    container_name: "pdb"
    image: "postgres:latest"
  (...other services...)
Is there a way to automatically name these in a distinguishable but systematic way? For example something like this:
version: '3.4'
services:
  pdb:
    hostname: "${stack_name}_pdb"
    container_name: "${stack_name}_pdb"
    image: "postgres:latest"
  (...other services...)
EDIT: Apparently this is a somewhat service-specific question, so here is the complete compose file just in case...
version: '3.4'
services:
  rmq:
    hostname: "rmq"
    container_name: "rmq"
    image: "rabbitmq:latest"
    networks:
      - "fakenet"
    ports:
      - "5672:5672"
    healthcheck:
      test: "rabbitmq-diagnostics -q ping"
      interval: 30s
      timeout: 30s
      retries: 3
  pdb:
    hostname: "pdb"
    container_name: "pdb"
    image: "postgres:latest"
    networks:
      - "fakenet"
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: ******
      POSTGRES_USER: postgres
      POSTGRES_DB: test_db
    volumes:
      - "./deploy/pdb:/docker-entrypoint-initdb.d"
      - "./data/dbase:/var/lib/postgresql/data"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  workenv:
    hostname: "aiida"
    container_name: "aiida"
    image: "aiida_workenv:v0.1"
    expose:
      - "8888" # AiiDa Lab
      - "8890" # Jupyter Lab
      - "5000" # REST API
    ports: # local:container
      - 8888:8888
      - 8890:8890
      - 5000:5000
    volumes:
      - "./data/codes:/home/devuser/codes"
      - "./data/aiida:/home/devuser/.aiida"
    depends_on:
      pdb:
        condition: service_healthy
      rmq:
        condition: service_healthy
    networks:
      - "fakenet"
    environment:
      LC_ALL: "en_US.UTF-8"
      LANG: "en_US.UTF-8"
      PSQL_HOST: "pdb"
      PSQL_PORT: "5432"
    command: "tail -f /dev/null"
networks:
  fakenet:
    driver: bridge
Just don't manually set container_name: at all. Compose will automatically assign a name based on the current project name. Similarly, you don't usually need to set hostname: (RabbitMQ is one extremely specific exception, if you're using that).
If you do need to publish ports out of your Compose setup to be able to access them from the host system, the other obvious pitfall is that the first ports: number must be unique across the entire host. You can specify a single number for ports: to let Docker pick the host port, though you'll need to look it up later with docker-compose port.
version: '3.8'
services:
  pdb:
    image: "postgres:latest"
    # no hostname: or ports:
  app:
    build: .
    environment:
      PGHOST: pdb
    ports:
      - 3000 # container-side port, Docker picks host port
docker-compose -p myname up -d
docker-compose -p myname port app 3000
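As an illustration (the project names alpha and beta below are hypothetical), the same compose file can then run twice side by side, because docker-compose prefixes container names with the project name:

docker-compose -p alpha up -d          # creates containers like alpha_pdb_1
docker-compose -p beta up -d           # creates beta_pdb_1 alongside it
docker-compose -p beta port app 3000   # prints the host address:port Docker chose for beta's app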
I am trying to get a docker-compose file working with both Airflow and Spark. Airflow's webserver typically runs on port 8080, which Spark needs as well. I have the following docker-compose file:
version: '3.7'
services:
  master:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.master.Master -h master
    hostname: master
    environment:
      MASTER: spark://master:7077
      SPARK_CONF_DIR: /conf
      SPARK_PUBLIC_DNS: localhost
    expose:
      - 7001
      - 7002
      - 7003
      - 7004
      - 7005
      - 7077
      - 6066
    ports:
      - 4040:4040
      - 6066:6066
      - 7077:7077
      - 8080:8080
    volumes:
      - ./conf/master:/conf
      - ./data:/tmp/data
  worker:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
    hostname: worker
    environment:
      SPARK_CONF_DIR: /conf
      SPARK_WORKER_CORES: 2
      SPARK_WORKER_MEMORY: 1g
      SPARK_WORKER_PORT: 8881
      SPARK_WORKER_WEBUI_PORT: 8081
      SPARK_PUBLIC_DNS: localhost
    links:
      - master
    expose:
      - 7012
      - 7013
      - 7014
      - 7015
      - 8881
    ports:
      - 8081:8081
    volumes:
      - ./conf/worker:/conf
      - ./data:/tmp/data
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=y
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      # Add this to have third party packages
      - ./requirements.txt:/requirements.txt
      # - ./plugins:/usr/local/airflow/plugins
    ports:
      - "8082:8080" # NEED TO CHANGE THIS LINE
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
but specifically I need to change the lines:

ports:
  - "8082:8080" # NEED TO CHANGE THIS LINE

under webserver so there's no port conflict. However, when I change the port mapping to something other than 8080:8080, it doesn't work (cannot connect/find server). How do I change the container port successfully?
When you specify a port mapping in Docker you give two ports, for instance 8082:8080.
The right-hand port is the one the service is listening on inside the container.
You can have several containers all listening internally on the same port; they are still not reachable on your localhost. That is what the ports section is for.
On your localhost, however, you cannot bind the same port more than once. That's why Docker fails when you put 8080 on the left side more than once.
In your current compose file, the spark service is mapped to host port 8080 (left side of 8080:8080) and the webserver service to host port 8082 (left side of 8082:8080).
If you want to access Spark, go to http://localhost:8080; for the webserver, go to http://localhost:8082.
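To make the split concrete, here is a minimal sketch of the relevant parts of the file above; only the left-hand (host) numbers must be unique:

services:
  master:
    ports:
      - 8080:8080   # host 8080 -> Spark master UI listening on container port 8080
  webserver:
    ports:
      - "8082:8080" # host 8082 -> Airflow webserver, also listening on container port 8080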
I am getting a segmentation fault, and the container exits with code 139, when running the hyperledger-explorer Docker image.
The docker-compose file for creating explorer-db:
version: "2.1"
volumes:
data:
walletstore:
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorerdb.mynetwork.com:
image: hyperledger/explorer-db:V1.0.0
container_name: explorerdb.mynetwork.com
hostname: explorerdb.mynetwork.com
restart: always
ports:
- 54320:5432
environment:
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWORD=password
healthcheck:
test: "pg_isready -h localhost -p 5432 -q -U postgres"
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/var/lib/postgresql/data
networks:
mynetwork.com:
aliases:
- postgresdb
pgadmin:
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: user#domain.com
PGADMIN_DEFAULT_PASSWORD: SuperSecret
PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION: "True"
# PGADMIN_CONFIG_LOGIN_BANNER: "Authorized Users Only!"
PGADMIN_CONFIG_CONSOLE_LOG_LEVEL: 10
volumes:
- "pgadmin_4:/var/lib/pgadmin"
ports:
- 8080:80
networks:
- mynetwork.com
The docker-compose-explorer file:
version: "2.1"
volumes:
data:
walletstore:
external: true
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorer.mynetwork.com:
image: hyperledger/explorer:V1.0.0
container_name: explorer.mynetwork.com
hostname: explorer.mynetwork.com
# restart: always
environment:
- DATABASE_HOST=xx.xxx.xxx.xxx
#Host is VM IP address with ports exposed for postgres. No issues here
- DATABASE_PORT=54320
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWD=password
- LOG_LEVEL_APP=debug
- LOG_LEVEL_DB=debug
- LOG_LEVEL_CONSOLE=info
# - LOG_CONSOLE_STDOUT=true
- DISCOVERY_AS_LOCALHOST=false
volumes:
- ./config.json:/opt/explorer/app/platform/fabric/config.json
- ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
- ./examples/net1/crypto:/tmp/crypto
- walletstore:/opt/wallet
- ./crypto-config/:/etc/data
command: sh -c "node /opt/explorer/main.js && tail -f /dev/null"
ports:
- 6060:6060
networks:
- mynetwork.com
The error:
Attaching to explorer.mynetwork.com
explorer.mynetwork.com | Segmentation fault
explorer.mynetwork.com exited with code 139
Postgres is working fine. Docker is updated to the latest version.
The Fabric network being used was generated inside the IBM Blockchain VS Code extension.
I faced the same problem with the Docker images, although a manual start.sh succeeded. After some exploration, I came to know this is due to something architecture/build related: there seems to be a segmentation-fault issue in the latest V1.0.0 container image.
This is fixed on the latest master branch, but not yet released on Docker Hub.
Please build the Explorer container image yourself using build_docker_image.sh locally for the time being.
(from the HLF forum)
Okay!! So I did some testing and found that if Docker is set to run on Windows login, Explorer throws a segmentation fault, but if I manually start Docker after Windows login, it works well. Strange!!
How do I set up login credentials for the Kibana GUI with Docker ELK stack containers? What arguments and environment variables must be passed in the docker-compose.yaml file to get this working?
To set Kibana user credentials for the Docker ELK stack, you have to set xpack.security.enabled: true, either in elasticsearch.yml or as an environment variable in the docker-compose.yml file.
Then pass the username and password as environment variables in docker-compose.yml like below:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_USERNAME: "elastic"
      ELASTIC_PASSWORD: "MyPw123"
      http.cors.enabled: "true"
      http.cors.allow-origin: "*"
      xpack.security.enabled: "true"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.1
    ports:
      - "5044:5044"
      - "9600:9600"
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml:rw
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
      xpack.monitoring.elasticsearch.url: "elasticsearch:9200"
      xpack.monitoring.elasticsearch.username: "elastic"
      xpack.monitoring.elasticsearch.password: "MyPw123"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.1
    ports:
      - "5601:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
configs:
  elastic_config:
    file: ./elasticsearch/config/elasticsearch.yml
  logstash_config:
    file: ./logstash/config/logstash.yml
  logstash_pipeline:
    file: ./logstash/pipeline/logstash.conf
  kibana_config:
    file: ./kibana/config/kibana.yml
networks:
  elk:
    driver: overlay
Then add the following lines to kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "MyPw123"
I did not manage to get it working without adding the XPACK_MONITORING and SECURITY flags to Kibana's container, and there was no need for a config file.
However, I was not able to use the kibana user, even after logging in with the elastic user and changing kibana's password through the UI.
NOTE: it looks like you can't set up built-in users other than the elastic superuser in docker-compose through its environment. I've tried several times with kibana and kibana_system with no success.
version: "3.7"
services:
elasticsearch:
image: elasticsearch:7.4.0
restart: always
ports:
- 9200:9200
environment:
- discovery.type=single-node
- xpack.security.enabled=true
- ELASTIC_PASSWORD=123456
kibana:
image: kibana:7.4.0
restart: always
ports:
- 5601:5601
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
- XPACK_MONITORING_ENABLED=true
- XPACK_MONITORING_COLLECTION_ENABLED=true
- XPACK_SECURITY_ENABLED=true
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD="123456"
depends_on:
- elasticsearch
NOTE: it looks like this won't work with 8.5.3; Kibana won't accept the elastic superuser.
Update
I was able to set up 8.5.3, but with a couple of twists. I would build the whole environment, then in the Elasticsearch container run the password setup:
bin/elasticsearch-setup-passwords auto
Grab the auto-generated password for the kibana_system user, replace it in docker-compose, and then restart only Kibana's container.
Kibana 8.5.3 with environment variables:
kibana:
  image: kibana:8.5.3
  restart: always
  ports:
    - 5601:5601
  environment:
    - ELASTICSEARCH_USERNAME="kibana_system"
    - ELASTICSEARCH_PASSWORD="sVUurmsWYEwnliUxp3pX"
Restart kibana's container:
docker-compose up -d --build --force-recreate --no-deps kibana
NOTE: make sure to use the --no-deps flag, otherwise it will also restart the Elasticsearch container if it is declared as a dependency of Kibana's.