I just started learning Docker and am interested in saving logs from a container to my local machine (for storage/review).
Is there a way to save /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log to Windows filesystem?
I tried specifying a volume in my docker-compose.yml file running the image "dtf":
services:
  web:
    image: dtf
    ports:
      - '5000:5000'
    logging:
      driver: "json-file"
      options:
        max-size: "1k"
        max-file: "3"
    volumes:
      - C:\logs:/var/lib/docker/containers/
From what I understood about Docker volumes, I should be able to access the .log file at C:\logs, but I'm not sure how to correctly write the path to the file itself (the /CONTAINER_ID/ part).
For this you need to look up Docker volumes: you can expose part of the host file system to the Docker container.
Check out Docker logging strategies, which illustrates different approaches to logging. The recommended method is a Docker logging driver; read more at Configuring logging drivers.
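If you only need the host path of that json-file log (the /CONTAINER_ID/ part), one quick way to look it up is docker inspect, which prints the full path of the container's log file on the Docker host:

docker inspect --format '{{.LogPath}}' CONTAINER_NAME_OR_ID

Keep in mind that on Docker Desktop for Windows this path lives inside the Docker VM rather than directly on the Windows filesystem.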
As shown in Better ways of handling logging in containers, you can link the log folder with a host folder via a data volume container using this command:
# docker run -ti -v /dev/log:/dev/log fedora sh
The above solution was taken from this Stack Overflow answer; I've reproduced it here in case the original link becomes obsolete due to deletion or something.
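As a simpler sketch for the Windows case (my own suggestion, not part of the linked answer): rather than reaching into /var/lib/docker, you can dump a container's log stream into a file under C:\logs from PowerShell or cmd:

docker logs --timestamps CONTAINER_NAME_OR_ID > C:\logs\web.log 2>&1

Re-run (or schedule) the command whenever you want a fresh copy for review.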
I've been banging my head against the wall trying to sort out permissions issues when running a container that uses a remote SMB share for storing configuration files.
I found this post and answer but still can't seem to get things to work:
docker-add-network-drive-as-volume-on-windows
For the YAML code below, yes, everything is formatted correctly in my actual files; the indents just didn't copy over correctly from my reddit post.
My set-up is as follows:
Running Proxmox as my hypervisor with:
TrueNAS Scale as the NAS
Debian VM for hosting Docker
The TrueNAS VM has a single pool, with 1 dataset for SMB shares and 1 dataset for NFS shares (implemented for troubleshooting purposes)
I have credentials steve:steve (1000:1000) with supersecurepassword and Full Control ACL permissions on the SMB share. I can access this share via Windows and the CLI, and all operations behave as expected.
On the Debian host, I have created user steve:steve (1000:1000) with supersecurepassword.
I have been able to successfully mount and map the share within the Debian host over CIFS using:
//192.168.10.206/dockerdata /mnt/dockershare cifs uid=1000,gid=1000,vers=3.0,credentials=/root/.truenascreds 0 0
The credentials are:
username=steve
password=supersecurepassword
I can read/write from CLI through the mount point, view files, modify files, etc.
I have also successfully mounted & read/write the share with these additional options:
file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev
Now here's where I start having problems. I can create a container using docker compose or Portainer (manual creation and a stack for compose), but I run into database errors as the container attempts to start.
version: "2.1"
services:
babybuddytestsmbmount:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddytestsmbmount
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1801:8000
restart: unless-stopped
volumes:
- /mnt/dockershare/babybuddy2:/config
Docker will create all folders and files and start the container, but the web UI returns a server 500 error. The logs show these database errors, which result in a large number of exceptions:
sqlite3.OperationalError: database is locked
django.db.utils.OperationalError: database is locked
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (database is locked)
I also tried mounting the SMB share in a docker volume using the following:
version: "2.1"
services:
babybuddy:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddy
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1800:8000
restart: unless-stopped
volumes:
- dockerdata:/config
volumes:
dockerdata:
driver_opts:
type: "cifs"
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev,vers=3.0"
device: "//192.168.10.206/dockerdata"
I have also tried this under options:
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,rw,vers=3.0"
Docker again is able to create the container, create & mount the volume, and create all folders and files, but it encounters the same DB errors indicated above.
I believe this is because the container is trying to access the SMB share as root, which TrueNAS does not permit. I have verified that all files and folders are under the correct ownership, and during troubleshooting I have also stopped the container, recursively chowned and chgrped the dataset to root:root, and restarted the container, but no dice. Changing the SMB credentials on the Debian host to root results in a failure to connect.
Testing to ensure I didn't have a different issue causing problems, I was able to successfully start the container locally on the host as well as using a remote NFS share from the same TrueNAS VM.
I have also played with the dataset permissions, changing owners within TrueNAS, attempting permissions without ACL, etc.
Each of these variations was done with a fresh dataset for SMB and a wipe and recreation of Docker, as well as a reinstall of Debian.
Any help or suggestions would be greatly appreciated.
Edit: I also tried this with Ubuntu as the Docker host and attempted to have Docker run under the steve user, with disastrous results.
I expected to be able to mount the SMB share from my TrueNAS system on my Debian Docker host machine, but I encounter write errors in the database files that are part of the container. Local Docker instances and NFS mounts work fine.
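One hedged thing worth trying for the sqlite "database is locked" errors above: SQLite depends on byte-range locking, which CIFS mounts often handle poorly, and the nobrl mount option (which stops the client from sending byte-range lock requests to the server) is a common workaround. This is only a sketch of the named-volume variant from the question with that option added, not a confirmed fix for this setup:

volumes:
  dockerdata:
    driver_opts:
      type: "cifs"
      # nobrl skips byte-range lock requests, which SQLite otherwise needs the server to honour
      o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,nobrl,vers=3.0"
      device: "//192.168.10.206/dockerdata"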
I have a problem that I just can't understand. I am using Docker to run certain containers, but I have problems with at least one volume, and I'd like to ask if anybody can give me a hint about what I am doing wrong. I am using Nifi-Ingestion as an example, but it affects more container volumes.
First, let's talk about the versions I use:
Docker version 19.03.8, build afacb8b7f0
docker-compose version 1.27.4, build 40524192
Ubuntu 20.04.1 LTS
Now, let's show the volume in my working docker-compose-file:
In my container, it is configured as follows:
volumes:
  - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom of my docker-compose file it is defined as a normal named volume:
volumes:
  nifi-ingestion-conf:
This is a snippet from the docker-compose that I'd like to get working
In my container, it is configured in this case as follows (with STORAGE_VOLUME_PATH defined as /mnt/storage/docker_data):
volumes:
  - ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
At the bottom I guess there is something to change, but I don't know what I would need to do here. In this case it is the same as in the working docker-compose:
volumes:
  nifi-ingestion-conf:
So, what's my problem?
I have two docker-compose files. One uses normal named volumes, and one uses the volumes in my extra mount path. When I run the containers, the volumes seem to work differently, since files are written in the first style but not in the second. The mount paths are created in the second version, so there is nothing wrong with the environment variables in my .env file.
Hint: the /mnt/storage/docker_data is an NFS-mount but my machine has the full privileges on that share.
Here is my fstab-entry to mount that volume (maybe I have to set other options):
10.1.0.2:/docker/data /mnt/storage/docker_data nfs auto,rw
Bigger snippets
Here is a bigger snippet of the docker-compose (I had to cut and remove confidential data; my problem is not that it does not work, it is only that the volume acts differently. Everything for this one volume is in the code):
version: "3"
services:
nifi-ingestion:
image: my image on my personal repo
container_name: nifi-ingestion
ports:
- 0000
labels:
- app-specivic
volumes:
- ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
#working: - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
environment:
- app-specivic
networks:
- cnetwork
volumes:
nifi-ingestion-conf:
networks:
cnetwork:
external: false
ipam:
driver: default
config:
- subnet: 192.168.1.0/24
And here is the .env (only the value we are using):
STORAGE_VOLUME_PATH=/mnt/storage/docker_data
If I understand your question correctly, you wonder why the following docker-compose snippet works for you
version: "3"
services:
nifi-ingestion:
volumes:
- nifi-ingestion-conf:/opt/nifi/nifi-current/conf
volumes:
nifi-ingestion-conf:
and the following docker-compose snippet does not work for you
version: "3"
services:
nifi-ingestion:
volumes:
- ${STORAGE_VOLUME_PATH}/nifi-ingestion-conf:/opt/nifi/nifi-current/conf
What makes them different is how you use volumes. You need to differentiate between mounting host paths and mounting named volumes.
You can mount a host path as part of a definition for a single service, and there is no need to define it in the top level volumes key.
But, if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key.
Named volumes are managed by Docker.
If you start a container with a volume that does not yet exist, Docker creates the volume for you.
Also, I would advise you to read this answer.
Update:
You might also want to read about Docker NFS volumes.
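For example, a minimal sketch of an NFS-backed named volume, assuming the server and export from the question's fstab entry (10.1.0.2:/docker/data); the nifi-ingestion-conf subdirectory is a hypothetical export path to adjust:

version: "3"
services:
  nifi-ingestion:
    volumes:
      - nifi-ingestion-conf:/opt/nifi/nifi-current/conf
volumes:
  nifi-ingestion-conf:
    driver: local
    driver_opts:
      type: nfs
      # addr is the NFS server from the fstab entry in the question
      o: "addr=10.1.0.2,rw"
      # hypothetical subdirectory under the /docker/data export; adjust to an existing path
      device: ":/docker/data/nifi-ingestion-conf"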
Is there a better way to avoid folder permission issues when a relative folder is set in a docker-compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image as an example:
As an example, this image will always throw the error ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes];.
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for a solution that is easier to manage, similar to the Docker experience on Ubuntu and OSX, where I do not have to first create a directory as root in order for folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the Docker docs. I have no permission issues when accessing docker info or the socket.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I am running into this permission issue on the data folder on Manjaro.
I did check other posts on SO; some fixes are temporary, like disabling SELinux, while others require running Docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution but is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory under the user it runs with (root) and the UID/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID of the bitnami image to whatever group you had the data created with and make it group writable.
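A minimal sketch of the group-root / group-writable suggestion above, run next to the compose file before starting the stack (assuming the bind-mounted ./data directory from the question):

mkdir -p ./data
sudo chgrp 0 ./data   # group root (GID 0), matching the group the Elasticsearch process appears to run with
chmod g+w ./data      # group-writable, without resorting to a blanket chmod 777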
Due to space limitations on my local machine I need to ensure my docker containers store their data on my external hard drive.
The docker project I am using has a docker compose file and it specifies a number of volumes like so:
version: "2"
volumes:
pgdata:
cache:
services:
postgres:
image: "openmaptiles/postgis:${TOOLS_VERSION}"
volumes:
- pgdata:/var/lib/postgresql/data
Those volumes ultimately exist on my local machine. I'd like to change their location to somewhere on my external drive e.g /Volumes/ExternalDrive/docker/
How do I go about this?
I have read the Docker documentation on volumes and docker-compose but can't find a way to specify the path where a volume should exist.
If anyone could point the way I would be most grateful.
You can explore and test more volume-related features using the CLI help and then transpose them to compose.
docker volume create --help
https://docs.docker.com/engine/reference/commandline/volume_create/
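For instance, a hedged CLI equivalent of the compose definition shown next: it creates a local volume that simply binds to a directory on the external drive (the path is just the example location from the question):

docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/Volumes/ExternalDrive/docker/pgdata \
  pgdata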
Note that the example below might not work on Windows, since the built-in local driver on Windows does not support any options, but if you're running Docker on Linux, the compose file below should do the job:
version: "2"
volumes:
pgdata:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: '/path/to/the/external/storage'
cache:
services:
postgres:
image: "openmaptiles/postgis:${TOOLS_VERSION}"
volumes:
- pgdata:/var/lib/postgresql/data
You might also consider changing the Docker daemon's options to store its data in a location of your choice. Here's a guide: https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux
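A hedged sketch of that approach on Linux via /etc/docker/daemon.json; the data-root key relocates everything the daemon stores (images, containers, volumes), and the path is just the example location from the question:

{
  "data-root": "/Volumes/ExternalDrive/docker"
}

Restart the Docker service afterwards for the change to take effect.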
Alternatively, if you're more comfortable solving this in the OS rather than in Docker, you could try some tricks at the filesystem level to create a symbolic link at /var/lib/docker/volumes/ that points at your external storage, but be careful and back up everything. I personally never tried this, but I believe it should be transparent to the Docker storage driver.
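If you do experiment with that, an untested sketch of the idea, assuming a Linux host managed by systemd (on Docker Desktop for Mac or Windows the daemon's storage lives inside a VM, so this wouldn't apply there); back up first:

sudo systemctl stop docker
sudo mv /var/lib/docker/volumes /Volumes/ExternalDrive/docker/volumes
sudo ln -s /Volumes/ExternalDrive/docker/volumes /var/lib/docker/volumes
sudo systemctl start docker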
I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the Hyper-V driver on Docker for Windows (Win 10 Pro). I am using the latest Docker (1.13.1) and the same version on the Hyper-V machine. I have tried switching to a local account, shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume run command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host, however I can't find anywhere that explicitly says that volume mounting will not work with Hyper-V.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring up the services with the volume specified in the compose file, the directory is emptied; however, if I omit this volume mount I can see the files that I copied into the container from the Dockerfile.
Running verbose when starting my services, I can see a volumes path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) that contains the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in "Docker for Windows" -> Settings -> Shared Drives, apply, then check it again and apply.
You should use /c/Users, with a capital "C".
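Applied to the compose file above, the bind mount line would then be (same project path, only the casing corrected):

volumes:
  - /c/Users/deep/projects/chat/app:/usr/src/app:rw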