I'm trying to install Nextcloud with Docker on Windows (Docker version 19.03.13). I start Windows PowerShell with admin rights and run docker-compose up -d.
My compose YAML looks like this:
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=1984cstr
      - MYSQL_PASSWORD=cstrike
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.example.de
      - LETSENCRYPT_HOST=nextcloud.example.de
      - LETSENCRYPT_EMAIL=realmail@gmail.com
    restart: unless-stopped
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
But I'm getting the following errors:
ERROR: for nextcloud-mariadb Cannot start service db: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/etc/localtime\" to rootfs \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged\" at \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged/usr/share/zoneinfo/Etc/UTC\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Creating nextcloud-proxy ... done
Creating nextcloud-letsencrypt ... done
ERROR: for db Cannot start service db: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/etc/localtime\" to rootfs \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged\" at \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged/usr/share/zoneinfo/Etc/UTC\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
What is wrong? Or what additional information do I need to provide so that the problem can be found?
Since you are on a Windows host, mounts like /etc/localtime won't work because that path doesn't exist on your system; the configuration you are using was written for a Linux host.
Although they are recommended, you can safely remove those mounts from your services.
Keep in mind, however, that you do need to keep the Docker socket mount, and you will have to adjust it for a Windows host as well (the path you have is also Linux-specific). You can try some solution from here.
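As a sketch only, this is what the db service could look like with the Linux-only timezone mount removed (the TZ environment variable is an assumption on my part, not part of the original file; it is honored by Debian-based images such as the official mariadb image):

```yaml
# Windows-host sketch: the /etc/localtime bind mount is dropped; a TZ
# environment variable (assumption: works on Debian-based images) replaces it
db:
  image: mariadb
  container_name: nextcloud-mariadb
  networks:
    - nextcloud_network
  volumes:
    - db:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=1984cstr
    - MYSQL_PASSWORD=cstrike
    - MYSQL_DATABASE=nextcloud
    - MYSQL_USER=nextcloud
    - TZ=Europe/Berlin
  restart: unless-stopped
```

The same deletion applies to every other service that mounts /etc/localtime.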
docker-compose.yml
version: "3.3"
services:
  sonarqube:
    container_name: sonarqube
    image: sonarqube:7.9.2-community
    ports:
      - "9000:9000"
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    networks:
      - sonarnet
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_logs:/opt/sonarqube/logs
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    container_name: sonardb
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
  sonarscanner:
    container_name: sonarscanner
    image: newtmitch/sonar-scanner
    networks:
      - sonarnet
    volumes:
      - sonarvol:/usr/src
networks:
  sonarnet:
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_logs:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  postgresql:
  postgresql_data:
  sonarvol:
There are two containers that are exiting, each with its own cause:
sonarqube initially fails with max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]. To solve this, run the following command on your host machine:
sysctl -w vm.max_map_count=262144
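That command applies the value only until the next reboot. To persist it, a sketch (the /etc/sysctl.d path and file name are assumptions; it applies to hosts that read sysctl.d at boot, e.g. most systemd-based distributions):

```shell
# persist vm.max_map_count across reboots (assumption: sysctl.d-style host)
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-sonarqube.conf

# reload all sysctl configuration so the value applies immediately
sudo sysctl --system
```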
Once this is solved, there is a second problem:
The sonarscanner container is launched before sonarqube is up, causing this error: ERROR: SonarQube server [http://sonarqube:9000] can not be reached. To solve this, add an on-failure restart policy under your scanner service in the compose file:
sonarscanner:
  [...]
  restart: on-failure
Once this problem is also solved, the scanner will exit 0 (success) and the execution completes, after printing (which seems to be the normal behavior):
INFO: ANALYSIS SUCCESSFUL, you can browse http://sonarqube:9000/dashboard?id=MyProjectKey
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
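Instead of retrying the whole scanner container until it succeeds, you could also make it wait until SonarQube reports itself up before scanning. A sketch, assuming curl is available inside the scanner image (SonarQube exposes its state at /api/system/status, which returns a JSON body containing "status":"UP" once ready):

```shell
# poll SonarQube's status endpoint until it reports UP, then run the scan
until curl -fsS http://sonarqube:9000/api/system/status | grep -q '"status":"UP"'; do
  echo "waiting for SonarQube..."
  sleep 5
done
sonar-scanner
```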
Context: I want to create a docker-compose setup to run ELK + Beats + Kafka for logging purposes.
I had made good progress on this task when I decided to update my docker-compose file from version 2 to version 3. Since then I keep getting:
ERROR: for kibana Cannot start service kibana: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/kibana.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged\\\" at \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged/usr/share/kibana/config/kibana.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
At first I thought a previous volume was somehow interfering, so I deleted all volumes and previous containers, but that didn't fix it.
I read carefully Are you trying to mount a directory onto a file (or vice-versa)? and checked all suggestions, but I didn't move forward. In my case, I am not using Oracle VirtualBox at all.
Any suggestion of things to check will be highly appreciated.
All my docker-compose is:
version: '3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.2
    volumes:
      - "./kibana.yml:/usr/share/kibana/config/kibana.yml"
    restart: always
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "./esdata:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.2
    volumes:
      - "./logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    links:
      - elasticsearch
      - kafka1
      - kafka2
      - kafka3
  kafka1:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9094:9092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
  zoo1:
    image: elevy/zookeeper:latest
    environment:
      MYID: 1
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2181:2181"
  zoo2:
    image: elevy/zookeeper:latest
    environment:
      MYID: 2
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2182:2181"
  zoo3:
    image: elevy/zookeeper:latest
    environment:
      MYID: 3
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2183:2181"
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.5.2
    volumes:
      - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "./apache-logs:/apache-logs"
    links:
      - kafka1
      - kafka2
      - kafka3
    depends_on:
      - apache
      - kafka1
      - kafka2
      - kafka3
  apache:
    image: lzrbear/docker-apache2-ubuntu
    volumes:
      - "./apache-logs:/var/log/apache2"
    ports:
      - "8888:80"
    depends_on:
      - logstash
In case it is relevant, filebeat.yml and kibana.yml are:
filebeat.yml
filebeat.prospectors:
  - paths:
      - /apache-logs/access.log
    tags:
      - testenv
      - apache_access
    input_type: log
    document_type: apache_access
    fields_under_root: true
  - paths:
      - /apache-logs/error.log
    tags:
      - testenv
      - apache_error
    input_type: log
    document_type: apache_error
    fields_under_root: true

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: 'log'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
The whole log is:
C:\Dockers\megalog-try-1>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3366ee0766a8 lzrbear/docker-apache2-ubuntu "apachectl -D FOREGR…" 16 hours ago Up About a minute 0.0.0.0:8888->80/tcp megalog-try-1_apache_1
6fcdcbf8e75e docker.elastic.co/logstash/logstash:7.5.2 "/usr/local/bin/dock…" 16 hours ago Up 15 seconds 0.0.0.0:7777->7777/tcp, 5044/tcp, 0.0.0.0:9600->9600/tcp megalog-try-1_logstash_1
dd854b18aa80 elevy/zookeeper:latest "/entrypoint.sh zkSe…" 16 hours ago Up About a minute 2888/tcp, 3888/tcp, 9010/tcp, 0.0.0.0:2183->2181/tcp megalog-try-1_zoo3_1
498c3d3132fd elevy/zookeeper:latest "/entrypoint.sh zkSe…" 16 hours ago Up About a minute 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 9010/tcp megalog-try-1_zoo1_1
555279c42b9d elevy/zookeeper:latest "/entrypoint.sh zkSe…" 16 hours ago Up About a minute 2888/tcp, 3888/tcp, 9010/tcp, 0.0.0.0:2182->2181/tcp megalog-try-1_zoo2_1
C:\Dockers\megalog-try-1>docker-compose up -d
Creating megalog-try-1_zoo3_1 ... done
Creating megalog-try-1_zoo2_1 ... done
Creating megalog-try-1_zoo1_1 ... done
Creating megalog-try-1_elasticsearch_1 ... done
Creating megalog-try-1_kibana_1 ... error
Creating megalog-try-1_kafka2_1 ...
Creating megalog-try-1_kafka1_1 ...
Creating megalog-try-1_kafka3_1 ...
ERROR: for megalog-try-1_kibana_1 Cannot start service kibana: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/kibana.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af
Creating megalog-try-1_kafka2_1 ... done
Creating megalog-try-1_kafka1_1 ... done
Creating megalog-try-1_kafka3_1 ... done
Creating megalog-try-1_logstash_1 ... done
Creating megalog-try-1_apache_1 ... done
Creating megalog-try-1_filebeat_1 ... error
ERROR: for megalog-try-1_filebeat_1 Cannot start service filebeat: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/filebeat.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged\\\" at \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged/usr/share/filebeat/filebeat.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: for filebeat Cannot start service filebeat: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/filebeat.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged\\\" at \\\"/var/lib/docker/overlay2/bc908c4b9e42c9c3c0a0f2f88387ca1dee1d20b341d18175df4678136a4e7730/merged/usr/share/filebeat/filebeat.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: for kibana Cannot start service kibana: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/c/Dockers/megalog-try-1/kibana.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged\\\" at \\\"/var/lib/docker/overlay2/e1bf99bc19edf4bb68bfad5a76c1e6b9ac1b69f84af85767c2127fd1295c0536/merged/usr/share/kibana/config/kibana.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
*** Edited
I could move forward by:
1 - Deleting all images. After that I got this other error message:
ERROR: for elasticsearch Cannot start service elasticsearch: error while creating mount source path '/host_mnt/c/Dockers/megalog-try-1/esdata': mkdir /host_mnt/c/Dockers/megalog-try-1/esdata: file exists
2 - Then I read somewhere someone suggesting this odd workaround: remove the D drive from shared files, save, restart, share it again, save and restart. That made it work. Honestly, I don't consider this the answer to my question above, mainly because it was working with docker-compose version 2 and now I am jumping from one error to another. Either I did something wrong in the docker-compose file or there is some concept I am missing (I can't delete images and re-share the drive on a daily basis).
3 - Now I can't log in to Kibana: it shows "Kibana server is not ready yet", and Docker logs this message:
{"type":"log","@timestamp":"2020-02-04T04:03:53Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2020-02-04T04:03:56Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://localhost:9200/"}
{"type":"log","@timestamp":"2020-02-04T04:03:56Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
Here is the whole console from PowerShell:
PS C:\Users\Cast> $images = docker images -a -q
PS C:\Users\Cast> foreach ($image in $images) { docker image rm $image -f }
...
new error:
C:\Dockers\megalog-try-1>docker-compose up -d
megalog-try-1_zoo3_1 is up-to-date
megalog-try-1_zoo2_1 is up-to-date
Starting megalog-try-1_elasticsearch_1 ... error
Starting megalog-try-1_kafka2_1 ... done
Starting megalog-try-1_kafka1_1 ... done
Starting megalog-try-1_kafka3_1 ... done
ERROR: for elasticsearch Cannot start service elasticsearch: error while creating mount source path '/host_mnt/c/Dockers/megalog-try-1/esdata': mkdir /host_mnt/c/Dockers/megalog-try-1/esdata: file exists
ERROR: Encountered errors while bringing up the project.
After removing the C drive from sharing, restarting, sharing the C drive again, and restarting Docker:
C:\Dockers\megalog-try-1>docker-compose up -d
megalog-try-1_zoo3_1 is up-to-date
megalog-try-1_zoo2_1 is up-to-date
megalog-try-1_zoo1_1 is up-to-date
Starting megalog-try-1_elasticsearch_1 ... done
Starting megalog-try-1_kafka2_1 ... done
Starting megalog-try-1_kafka1_1 ... done
Starting megalog-try-1_kafka3_1 ... done
Creating megalog-try-1_kibana_1 ... done
Creating megalog-try-1_logstash_1 ... done
Creating megalog-try-1_apache_1 ... done
Creating megalog-try-1_filebeat_1 ... done
C:\Dockers\megalog-try-1>
By the way, I wasn't getting any issues at all when using docker-compose version 2, so I am wondering whether the docker-compose version is the issue.
I have a Nextcloud installation based on a docker-compose.yml that had been running smoothly for at least 6 months. All of a sudden I couldn't reach the frontend anymore (I got a 500).
Honestly, I do not know what happened and need help.
What I have done so far ...
I logged in, did a pull, and afterwards a docker-compose up -d.
It seems that mariadb cannot be started anymore ...
The error is:
Removing nextcloud-letsencrypt
Removing nextcloud-mariadb
Recreating 4378ae40b393_nextcloud-mariadb ...
Recreating 4378ae40b393_nextcloud-mariadb ... error
Recreating 30867336c79f_nextcloud-letsencrypt ...
ERROR: for 4378ae40b393_nextcloud-mariadb Cannot start service db: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:367: setting cgroup config for procHooks process caused \\"failed to write c 10:200 rwm to devices.allow: write /sys/fs/cgroup/devices/docker/c26f550b873f2c0c37
Recreating 30867336c79f_nextcloud-letsencrypt ... error
ERROR: for 30867336c79f_nextcloud-letsencrypt Cannot start service letsencrypt: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:367: setting cgroup config for procHooks process caused \\"failed to write c 10:200 rwm to devices.allow: write /sys/fs/cgroup/devices/docker/487b874d0bff9bf4810cb908fa3d27c955fb1d65dd0f07c727f4b5667f24767d/devices.allow: operation not permitted\\"\"": unknown
ERROR: for db Cannot start service db: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:367: setting cgroup config for procHooks process caused \\"failed to write c 10:200 rwm to devices.allow: write /sys/fs/cgroup/devices/docker/c26f550b873f2c0c376f5174ce4b1f64e536e8b876bf3438bb3ef77f16b76426/devices.allow: operation not permitted\\"\"": unknown
ERROR: for letsencrypt Cannot start service letsencrypt: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:367: setting cgroup config for procHooks process caused \\"failed to write c 10:200 rwm to devices.allow: write /sys/fs/cgroup/devices/docker/487b874d0bff9bf4810cb908fa3d27c955fb1d65dd0f07c727f4b5667f24767d/devices.allow: operation not permitted\\"\"": unknown
ERROR: Encountered errors while bringing up the project.
hasp@h2800129:~/myCloud$ ERROR: for 4378ae40b393_nextcloud-mariadb Cannot start service db: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:367: setting cgroup config for procHooks process caused \\"failed to write c 10:200 rwm to devices.allow: write /sys/fs/cgroup/devices/docker/c26f550b873f2c0c37
Recreating 30867336c79f_nextcloud-letsencrypt ... error
For those who can help me: my docker-compose.yml was not touched within the last months ... It looks like this:
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./uploadsize.conf:/etc/nginx/conf.d/uploadsize.conf:ro
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=some_root_password
      - MYSQL_PASSWORD=some_password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.hasp.de
      - LETSENCRYPT_HOST=nextcloud.hasp.de
      - LETSENCRYPT_EMAIL=hannes@hasp.de
    restart: unless-stopped
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
I have a docker-compose file and want one of the images to be spun up from my local cache instead of pulled from Docker Hub. I'm using the sbt docker plugin, so I can see the image being created, and I can see it when I run docker images at the command line. Yet when I run docker-compose up -d myimage, it always defaults to the remote image. How can I force it to use my local image?
Here is the relevant part of my compose file:
spark-master:
  image: gettyimages/spark:2.2.0-hadoop-2.7
  command: bin/spark-class org.apache.spark.deploy.master.Master -h spark-master
  hostname: spark-master
  environment:
    MASTER: spark://spark-master:7077
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: localhost
  expose:
    - 7001
    - 7002
    - 7003
    - 7004
    - 7005
    - 7006
    - 7077
    - 6066
  ports:
    - 4040:4040
    - 6066:6066
    - 7077:7077
    - 8080:8080
  volumes:
    - ./conf/master:/conf
    - ./data:/tmp/data
hydra-streams:
  image: ****/hydra-spark-core
  command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
  hostname: worker
  environment:
    SPARK_CONF_DIR: /conf
    SPARK_WORKER_CORES: 2
    SPARK_WORKER_MEMORY: 1g
    SPARK_WORKER_PORT: 8881
    SPARK_WORKER_WEBUI_PORT: 8091
    SPARK_PUBLIC_DNS: localhost
  links:
    - spark-master
  expose:
    - 7012
    - 7013
    - 7014
    - 7015
    - 7016
    - 8881
  ports:
    - 8091:8091
  volumes:
    - ./conf/worker:/conf
    - ./data:/tmp/data
You can force using the local image by retagging the existing image:
docker tag remote/image local_image
And then, inside the compose file, use local_image instead of remote/image.
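A sketch with the names from the question (the local tag hydra-spark-core:local is my assumption; the masked ****/hydra-spark-core prefix is left as-is for you to fill in):

```shell
# retag the sbt-built image under a name that cannot resolve on Docker Hub,
# so compose has no choice but to use the copy in the local cache
docker tag ****/hydra-spark-core hydra-spark-core:local

# then point the service at the new tag in docker-compose.yml:
#   hydra-streams:
#     image: hydra-spark-core:local
docker-compose up -d hydra-streams
```

Compose only pulls an image when no local image matches the tag, so an unambiguous local-only tag guarantees the cached image is used.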