Docker can't see some files?

I want to create a docker registry on my server using this docker-compose.yaml file:
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
    volumes:
      - /home/ubuntu/registry/volumes/data:/var/lib/registry
      - /home/ubuntu/registry/volumes/certs:/certs
      - /home/ubuntu/registry/volumes/auth:/auth
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /home/ubuntu/registry/certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /home/ubuntu/registry/certs/domain.key
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /home/ubuntu/registry/auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
I am running
docker-compose up
but this error occurs:
registry_1 | time="2019-08-03T21:17:38.938127498Z" level=fatal msg="open /home/ubuntu/registry/certs/domain.crt: no such file or directory"
I am sure those files exist. Do you have any idea?

volumes:
  - /home/ubuntu/registry/volumes/certs:/certs
Here you are saying to use the HOST path /home/ubuntu/registry/volumes/certs and make it available as /certs inside the CONTAINER. So perhaps you want to change the path on the container side to match the host path, or change the environment variables to reflect the actual container paths.
Also note that you have used /home/ubuntu/registry/volumes/certs in one location and /home/ubuntu/registry/certs (without "volumes") in another, which I assume might need to be fixed up as well.
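For example, here is a sketch of the corrected service, keeping your bind mounts as they are and pointing the environment variables at the container-side paths (this assumes domain.crt, domain.key, and htpasswd really do live under /home/ubuntu/registry/volumes/ on the host):
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
    volumes:
      - /home/ubuntu/registry/volumes/data:/var/lib/registry
      - /home/ubuntu/registry/volumes/certs:/certs
      - /home/ubuntu/registry/volumes/auth:/auth
    environment:
      # container-side paths, matching the right-hand side of the volume mappings
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/domain.key
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm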


Content of docker bind mount is not showing inside the container

EDITED:
Rclone has a bucket mounted to the host directory /home/user/rclone. I want to access the contents of this directory inside the nextcloud docker instance, so I bind mount it to /var/www/html/data. With the shared option, any changes made in the container will be reflected on the host, and vice versa.
I have set the permissions of /home/user/rclone to 777, and the contents are visible with an ls command from the host. But once the docker container is restarted, an ls command from within the container does not show any files. Rclone is still running properly.
I suspect that because the volume nextcloud is mounted at /var/www/html, the bind mount at /var/www/html/data is covered up.
So I picked another directory inside the container, namely /mnt, and tried that. Still no files show up with an ls command.
My nextcloud docker compose (mysql does not have anything to do with this; showing the /var/www/html/data mount version only):
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=xxx
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    network_mode: npm_default
    container_name: db
  app:
    image: nextcloud:latest
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      - /home/user/rclone:/var/www/html/data:shared
    environment:
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=xxx
    network_mode: npm_default
    container_name: nextcloud
Another way of putting it:
rclone cloud storage --> host --> docker --> nextcloud external storage
So the reason why the nextcloud service cannot view the content is permission problems.
If you docker exec -it nextcloud bash to check the contents, they are there, because you are root.
So the proper solution, if you want to use a shared bind mount with the host directory, is to set the permissions to 666 so that other users inside the container can read the files.
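A sketch of that fix on the host (666 on the files; directories also need the execute bit so they can be traversed, hence 777, matching the original setup):
# make every file world-readable and every directory world-traversable
find /home/user/rclone -type f -exec chmod 666 {} +
find /home/user/rclone -type d -exec chmod 777 {} +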
But in the end, I figured out that volume plugins are a much better solution, so this approach is somewhat deprecated.
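For reference, rclone itself ships a Docker volume plugin, so the container can mount the remote directly instead of going through a host mount plus bind mount. A rough sketch of the idea (the plugin name and options below are from the rclone docs as I recall them, so double-check them there; myremote:bucket is a placeholder for your configured remote):
docker plugin install rclone/docker-volume-rclone --alias rclone --grant-all-permissions
and then in the compose file:
volumes:
  rclone_data:
    driver: rclone
    driver_opts:
      remote: 'myremote:bucket'
      allow_other: 'true'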

Grafana on Docker

I am using docker to run prometheus, grafana and node exporter. I am trying to use named volumes and I am having some issues with that. My docker-compose code is:
version: "3.7"
volumes:
  grafana_ini:
  prometheus_data:
  grafana_data:
  dashboards_data:
services:
  grafana:
    build: ./grafana
    volumes:
      - grafana_ini:/etc/grafana/grafana.ini
      - grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
      - dashboards_data:/etc/grafana/provisioning/dashboards
      - ./dashboards/linux_dashboard.json:/etc/grafana/provisioning/dashboards/linux_dashboard.json
    ports:
      - 3000:3000
    links:
      - prometheus
  prometheus:
    build: ./prometheus
    volumes:
      - prometheus_data:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    restart: unless-stopped
    expose:
      - 9100
and my Dockerfile for grafana is:
FROM grafana/grafana:latest
COPY ./Ini/grafana.ini /etc/grafana/grafana.ini
COPY datasource.yml /etc/grafana/provisioning/datasources/datasource.yml
COPY ./dashboards/dashboard.yml /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/linux_dashboard.json /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/windows_dashboard.json /etc/grafana/provisioning/dashboards
EXPOSE 3000:3000
and I am getting this error while building it:
ERROR: for 2022_grafana_1 Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
ERROR: for grafana Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
Can anybody please help me?
It looks like there are some problems with the volume configuration in your Grafana container:
First, I think this was simply a typo in your question:
- grafana_ini:/etc/grafana/grafana.inianticipated location in container
I suspect that you were actually intending this:
- grafana_ini:/etc/grafana/grafana.ini
Which doesn't make any sense: grafana.ini is a file, but a volume is a directory. Docker won't allow you to mount a directory on top of a file, hence the error:
ERROR: .../etc/grafana/grafana.ini is not directory
You have the same problem with the grafana_data volume, which you're attempting to mount on top of datasource.yml:
- grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
I think you may be approaching this configuration in the wrong way; you may want to read through these documents:
https://grafana.com/docs/grafana/latest/installation/docker/
https://grafana.com/docs/grafana/latest/administration/configure-docker/
https://grafana.com/docs/grafana/latest/administration/provisioning/
It is possible to configure Grafana (and Prometheus!) using only bind mounts and environment variables (this includes installing plugins, data sources, and dashboards), so you don't need to build your own custom images.
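For example, here is a sketch of the grafana service using bind mounts instead of a custom image, reusing the file paths from your Dockerfile (and assuming those files sit next to docker-compose.yml):
grafana:
  image: grafana/grafana:latest
  volumes:
    - ./Ini/grafana.ini:/etc/grafana/grafana.ini
    - ./datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
    - ./dashboards/dashboard.yml:/etc/grafana/provisioning/dashboards/dashboard.yml
    - ./dashboards/server/linux_dashboard.json:/etc/grafana/provisioning/dashboards/linux_dashboard.json
    - ./dashboards/server/windows_dashboard.json:/etc/grafana/provisioning/dashboards/windows_dashboard.json
  ports:
    - 3000:3000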
Unrelated to this particular problem, there are some other things in your docker-compose.yml that are worth changing. You should no longer be using the links directive...
links:
  - prometheus
...because Docker maintains DNS for you automatically; your containers can refer to each other by name with no additional configuration.
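With that in place, the Prometheus datasource can reach the prometheus service by its service name. A sketch of a provisioned datasource.yml in the standard Grafana provisioning format (the service name and port come from your compose file):
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # "prometheus" resolves via Docker's built-in DNS on the compose network
    url: http://prometheus:9090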

Docker Compose service is not using volume

I have this service...
storage:
  image: mcr.microsoft.com/azure-storage/azurite
  ports:
    - "20000:10000"
  restart: unless-stopped
  volumes:
    - C:/Data:/hello
I can add data to the Azurite service and I can browse it in the volume via Docker Desktop but I can't see any files in my local file system - the folder is always empty.
Why isn't the volume mapped to my file system?
You need to add quotation marks in your volumes declaration, since there are YAML special characters in the local path.
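That would look like this (same paths as in the question):
volumes:
  - "C:/Data:/hello"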
Hope this helps. Below is a docker compose file that starts Azurite with volumes. Please create a folder called storagedata in the same directory as your docker compose file.
version: '3.4'
services:
  storageemulator:
    image: mcr.microsoft.com/azure-storage/azurite
    command: "azurite --loose --blobHost 0.0.0.0 --blobPort 10000 --queueHost 0.0.0.0 --queuePort 10001 --tableHost 0.0.0.0 --tablePort 10002 --location /workspace --debug /workspace/debug.log"
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    volumes:
      - ./storagedata:/workspace
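As a quick sanity check that the bind mount really maps through to the host (assuming the container is up and its image ships a shell):
docker-compose exec storageemulator touch /workspace/test.txt
test.txt should then appear in ./storagedata on the host.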

How to Add a shared folder location to my application (Docker)

I have a shared network folder, e.g.
\\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp
There is a file in the shared folder that I would like to be visible to my dockerized application. The ultimate goal is to have my application pick up and process this file, then put it into a database.
I have a Dockerfile and docker-compose.yml, and I am thinking I will need to add a volume with the shared folder location (I'm not sure if this is the correct approach; this is where I need help!).
So far I've tried adding a volume in my yml, which threw an error when I did docker-compose up -d:
airflow:
  build: ./airflow
  image: digitalImage/airflow
  container_name: di-airflow
  environment:
    AIRFLOW__CORE__EXECUTOR: 'LocalExecutor'
    POSTGRES_USER: 'airflowStuff'
    POSTGRES_PASSWORD: 'postgresCreds'
    POSTGRES_HOST: 'host-postgres'
    POSTGRES_PORT: '5432'
    POSTGRES_DB: 'postgres-db'
    DATE_VALUE: '1 DEC 2020 00:00:00'
  volumes:
    - ./airflow/released_dags:/usr/local/airflow/dags
    - \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp:/usr/local/airflow/dags/inboundFiles
  networks:
    - di-airflowStuff
  ports:
    - 8081:8080
  depends_on:
    - postgres
ERROR: Cannot create container for service airflow: \pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp%! (EXTRA string=is not a valid Windows path)
p.s. I can access this shared folder location from my file explorer and python without a problem.
You don't need docker-compose to mount an external volume to your container; just configure it when running the container:
docker run --name name -v path_host:path_in_container image:tag
Both directories must exist.
Microsoft recommends mapping shares to network drives (if you're running Docker on Windows):
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
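A sketch of that approach (Z: is a hypothetical drive letter; the share is mapped on the Windows host before starting compose, and the mapped drive must also be shared with Docker Desktop in its File Sharing settings):
net use Z: \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp /persistent:yes
and then in docker-compose.yml:
volumes:
  - ./airflow/released_dags:/usr/local/airflow/dags
  - "Z:/:/usr/local/airflow/dags/inboundFiles"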

Docker & MySQL: No secrets are created with docker-compose file

My Docker container keeps restarting when running docker-compose up -d. When inspecting the logs with docker logs --tail 50 --follow --timestamps db, I get the following error:
/usr/local/bin/docker-entrypoint.sh: line 37: "/run/secrets/db_mysql_root_pw": No such file or directory
This probably means that no secrets were created. The output of docker secret ls also shows no secrets.
My docker-compose.yml file looks something like this (excluding port info etc.):
version: '3.4'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: always
    environment:
      - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
      - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
    secrets:
      - db_mysql_user
      - db_mysql_user_pw
      - db_mysql_root_pw
    volumes:
      - "./mysql-data:/docker-entrypoint-initdb.d"
secrets:
  db_mysql_user:
    file: ./db_mysql_user.txt
  db_mysql_user_pw:
    file: ./db_mysql_user_pw.txt
  db_mysql_root_pw:
    file: ./db_mysql_root_pw.txt
In the same directory I have the 3 text files which simply contain the values for the environment variables. e.g. db_mysql_user_pw.txt contains password.
I am running Linux containers on a Windows host.
This is pretty dumb, but changing
environment:
  - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
  - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
  - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
to
environment:
  - MYSQL_USER_FILE=/run/secrets/db_mysql_user
  - MYSQL_PASSWORD_FILE=/run/secrets/db_mysql_user_pw
  - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_mysql_root_pw
made it work. In the list form of environment, the quotes are not stripped; they become part of the value, so the entrypoint was looking for a file literally named "/run/secrets/db_mysql_root_pw", quotes included, which is exactly what the error message shows. I still don't know why I cannot see the secrets with docker secret ls though.
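As for docker secret ls: that command only lists secrets managed by a Docker swarm. When docker-compose uses file-based secrets outside swarm mode, it simply bind-mounts each file into the container under /run/secrets/, so nothing is ever registered with the swarm API. You can confirm the files are present from inside the container:
docker-compose exec db ls -l /run/secrets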
