I have a shared network folder, e.g.
\\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp
There is a file in the shared folder that I would like to be visible to my dockerized application. The ultimate goal is to have my application pick up and process this file, then put it into a database.
I have a Dockerfile and docker-compose.yml, and I am thinking I will need to add a volume with the shared folder location (I'm not sure if this is the correct approach; this is where I need help!).
So far I've tried adding a volume in my yml, which threw an error when I did docker-compose up -d:
airflow:
build: ./airflow
image: digitalImage/airflow
container_name: di-airflow
environment:
AIRFLOW__CORE__EXECUTOR: 'LocalExecutor'
POSTGRES_USER: 'airflowStuff'
POSTGRES_PASSWORD: 'postgresCreds'
POSTGRES_HOST: 'host-postgres'
POSTGRES_PORT: '5432'
POSTGRES_DB: 'postgres-db'
DATE_VALUE: '1 DEC 2020 00:00:00'
volumes:
- ./airflow/released_dags:/usr/local/airflow/dags
- \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp:/usr/local/airflow/dags/inboundFiles
networks:
- di-airflowStuff
ports:
- 8081:8080
depends_on:
- postgres
ERROR: Cannot create container for service airflow: \pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp%! (EXTRA string=is not a valid Windows path)
P.S. I can access this shared folder location from my file explorer and from Python without a problem.
You don't need docker-compose to mount an external volume to your container; you can configure it when running the container:
docker run --name name -v path_host:path_in_container image:tag
Both directories must exist.
Microsoft recommends mapping shares to network drives (if you're running Docker on Windows):
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
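If you want to keep everything in docker-compose, another approach that often works for Linux containers is a named volume backed by the local driver's CIFS/SMB support, so the Docker daemon mounts the share for you. A rough sketch, assuming the Docker host can reach the share; the credentials are placeholders and the SMB version may need adjusting:

version: '3'
services:
  airflow:
    build: ./airflow
    volumes:
      - inboundFiles:/usr/local/airflow/dags/inboundFiles
volumes:
  inboundFiles:
    driver: local
    driver_opts:
      type: cifs
      # YOUR_USER / YOUR_PASSWORD are placeholders for the share credentials
      o: "username=YOUR_USER,password=YOUR_PASSWORD,vers=3.0"
      device: "//pa02ptsdfs002.corp.lgd.afk/files/Public/chris/temp"

Note that the device path uses forward slashes; the options are handed to mount -t cifs under the hood.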
I stumbled across a problem with Docker volumes while starting containers with a docker-compose file (MariaDB, RabbitMQ, Maven). I start them simply with docker-compose up -d (WITHOUT SUDO).
My volumes are defined like this:
...
volumes:
- ./production/mysql:/var/lib/mysql:z
...
Everything is working fine, and the ./production directory (where the volumes are mapped) is created.
But when I then try to restart the containers with down/up, I get the following error:
error checking context: 'no permission to read from '…/production/mysql/aria_log.00000001'
When I checked the mentioned file, I saw that it is owned by root:root. This is because the file is generated by the root user inside the container. So I tried to use user namespaces as mentioned in the docs.
Anyway, the error still occurs. Any ideas or references?
Thanks.
Docker Compose File:
version: '3.8'
services:
mysql:
image: mariadb:latest
restart: always
env_file:
- config.env
volumes:
- ./production/mysql:/var/lib/mysql:z
environment:
MYSQL_DATABASE: ${DATABASE_NAME}
MYSQL_USER: ${DATABASE_USER}
MYSQL_PASSWORD: ${DATABASE_PASSWORD}
MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
networks:
- testnetwork
networks:
testnetwork:
The issue comes from the mapping between the host user/group IDs and the ones inside the container. One of the solutions is to use a named volume and avoid all this hassle, but you can also do the following:
Add user: ${UID}:${GID} to your service inside the docker-compose file.
Run UID=$(id -u) GID=$(id -g) docker-compose up. This way you make sure that the user in the container has the same UID/GID as the user on the host, and files created in the container will have the proper permissions.
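Applied to the compose file from the question, a minimal sketch (only the user: line is new):

services:
  mysql:
    image: mariadb:latest
    user: "${UID}:${GID}"   # substituted from the shell environment at compose time
    volumes:
      - ./production/mysql:/var/lib/mysql:z

UID=$(id -u) GID=$(id -g) docker-compose up -d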
NOTE: Docker for Mac (using the osxfs driver) does this behind the scenes and you don't need to worry about users and groups.
Running the Docker daemon as a non-root user (rootless mode) can also be helpful for your purpose.
All the documentation is here.
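For reference, a minimal sketch of enabling rootless mode (assuming Docker 20.10+ with the rootless extras package and newuidmap/newgidmap installed on the host):

# install and start a rootless daemon for the current non-root user
dockerd-rootless-setuptool.sh install
# point the docker client at the rootless daemon's socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock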
I have been trying to install Drupal using the official image from Docker Hub. I created a new folder on my D: drive for my Drupal project and created a docker-compose.yml file.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres

version: '3.1'

services:

  drupal:
    image: drupal:8-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always

  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran the docker-compose up -d command in a terminal from within the folder which contained the docker-compose.yml file, my Drupal container and its database were successfully installed and running, and I was able to access the site from http://localhost:8080, but I couldn't find the core files in the folder. It was just the docker-compose.yml file in the folder.
I then removed the whole docker container and began a fresh installation, editing the volumes section in the docker-compose.yml file to point to the directory where I want the core files of Drupal to be populated, for example D:/My Project/Drupal Project.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres

version: '3.1'

services:

  drupal:
    image: drupal:latest
    ports:
      - 8080:80
    volumes:
      - d:\projects\drupalsite/var/www/html/modules
      - d:\projects\drupalsite/var/www/html/profiles
      - d:\projects\drupal/var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - d:\projects\drupalsite/var/www/html/sites
    restart: always

  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran docker-compose up, I received the error shown below.
Container drupalsite_postgres_1 Created 3.2s
- Container drupalsite_drupal_1 Creating 3.2s
Error response from daemon: invalid mount config for type "volume": invalid mount path: 'z:/projects/drupalsite/var/www/html/sites' mount path must be absolute
PS Z:\Projects\drupalsite>
Please help me find a solution to this.
If these directories contain your application, they probably shouldn't be in volumes: at all. Create a file named Dockerfile that initializes your custom application:
FROM drupal:8-apache
COPY modules/ /var/www/html/modules/
COPY profiles/ /var/www/html/profiles/
COPY themes/ /var/www/html/themes/
COPY sites/ /var/www/html/sites/
# EXPOSE, CMD, etc. come from the base image
Then reference this in your docker-compose.yml file:
version: '3.8'
services:
drupal:
build: . # instead of image:
ports:
- 8080:80
restart: always
# no volumes:
postgres:
image: postgres:10
environment:
POSTGRES_PASSWORD: example
restart: always
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
If you really want to use volumes: here, there are three forms of that directive. The form you have in the question with just a path creates an anonymous volume: it causes Compose to persist that directory, initialized from what's in the image, but disconnected from your host system. With a bare name and a path, it creates a named volume, which is similar but can be explicitly managed. With two paths, it creates a bind mount, which unconditionally replaces the container content with the host-system content (there is no initialization).
version: '3.8'
services:
something:
volumes:
- /path1 # anonymous volume
- named:/path2 # named volume
- /host/path:/path3 # bind mount
volumes: # named volumes referenced in containers only
named: # usually do not need any settings
So if you do want to replace the image's contents with host directories, you need to use the bind-mount syntax. Relative paths here are interpreted relative to the location of the docker-compose.yml file.
version: '3.8'
services:
drupal:
image: drupal:8-apache
volumes:
- ./modules:/var/www/html/modules
# etc.
A final comment on named volume initialization: your file has a comment about initializing anonymous volumes. There are two major problems with this approach, though. First, the second time you start the container, the content of the volume takes precedence, and any changes in the underlying images will be ignored. Second, this setup only works for Docker named and anonymous volumes, but not Docker bind mounts, volume mounts in Kubernetes, or other types of mount. I'd generally avoid relying on this "feature".
When running Corda in Docker with an external Postgres DB configuration, I get an "insufficient privileges to access" error.
Note:
Corda: 4.6
PostgreSQL: 9.6
Docker Engine: 20.10.6
docker-compose: 1.29.1, build c34c88b2
docker-compose.yml file:
version: '3.3'
services:
partyadb:
hostname: partyadb
container_name: partyadb
image: "postgres:9.6"
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: partyadb
ports:
- 5432
partya:
hostname: partya
# image: corda/corda-zulu-java1.8-4.7:RELEASE
image: corda/corda-zulu-java1.8-4.6:latest
container_name: partya
ports:
- 10006
- 2223
command: /bin/bash -c "java -jar /opt/corda/bin/corda.jar run-migration-scripts -f /etc/corda/node.conf --core-schemas --app-schemas && /opt/corda/bin/run-corda"
volumes:
- ./partya/node.conf:/etc/corda/node.conf:ro
- ./partya/certificates:/opt/corda/certificates:ro
- ./partya/persistence.mv.db:/opt/corda/persistence/persistence.mv.db:rw
- ./partya/persistence.trace.db:/opt/corda/persistence/persistence.trace.db:rw
# - ./partya/logs:/opt/corda/logs:rw
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
- ./shared/cordapps:/opt/corda/cordapps:rw
- ./shared/drivers:/opt/corda/drivers:ro
- ./shared/network-parameters:/opt/corda/network-parameters:rw
environment:
- ACCEPT_LICENSE=${ACCEPT_LICENSE}
depends_on:
- partyadb
Error:
[ERROR] 12:41:24+0000 [main] internal.NodeStartupLogging. - Exception during node startup. Corda started with insufficient privileges to access /opt/corda/additional-node-infos/nodeInfo-5B........................................47D
The corda/corda-zulu-java1.8-4.6:latest image runs under the user corda, not root. This user has user id 1000, and also is in a group called corda, also with gid 1000:
corda@5bb6f196a682:~$ id -u corda
1000
corda@5bb6f196a682:~$ groups corda
corda : corda
corda@5bb6f196a682:~$ id -G corda
1000
The problem here seems to be that the file you are mounting into the docker container (./shared/additional-node-infos/nodeInfo-5B) does not have permissions set up in such a way as to allow this user to access it. I'm assuming the user needs read and write access. A very simple fix would be to give other read and write access to this file:
$ chmod o+rw ./shared/additional-node-infos/nodeInfo-5B
There are plenty of other ways to manage this kind of permissions issue in docker, but remember that the permissions are based on uid/gid which usually do not map nicely from your host machine into the docker container.
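For example, instead of opening the file up to everyone, you could hand the host-side files to the container's uid/gid (1000:1000 here, per the id output above):

# give the host copies of the node-info files to uid/gid 1000
sudo chown -R 1000:1000 ./shared/additional-node-infos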
So the error itself says that it's a permission problem.
I don't know if you crafted this Dockerfile yourself; you may want to take a look at generating them with the Dockerform task (https://docs.corda.net/docs/corda-os/4.8/generating-a-node.html#use-cordform-and-dockerform-to-create-a-set-of-local-nodes-automatically).
This permission problem could be because you're only granting read/write within the container:
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
or it could be that you need to change the permissions on the shared folder. Try changing the permissions of shared to 777 and see if that works, then restrict your way back down to permissions you're comfortable with.
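For example:

chmod -R 777 ./shared   # wide open for debugging; tighten again once it works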
I just configured the image to run as root. This works but may not be safe. Simply add
services:
cordaNode:
user: root
to the service configuration.
Ref: How to configure docker-compose.yml to up a container as root
I have a docker-compose file to build a web server with Django and a Postgres database. It basically looks like this:
version: '3'
services:
server:
build:
context: .
dockerfile: ./docker/server/Dockerfile
image: backend
volumes:
- ./api:/app
ports:
- 8000:8000
depends_on:
- postgres
- redis
environment:
- PYTHONUNBUFFERED=1
postgres:
image: kartoza/postgis:11.0-2.5
volumes:
- pg_data:/var/lib/postgresql/data:rw
environment:
POSTGRES_DB: "gis,backend"
POSTGRES_PORT: "5432"
POSTGRES_USER: "user"
POSTGRES_PASS: "pass"
POSTGRES_MULTIPLE_EXTENSIONS: "postgis,postgis_topology"
ports:
- 5432:5432
redis:
image: "redis:alpine"
volumes:
pg_data:
I'm using a volume to make my data persistent.
I managed to run my containers and add data to the database. A volume has successfully been created: docker volume ls
DRIVER VOLUME NAME
local server_pg_data
But this volume is empty as the output of docker system df -v shows:
Local Volumes space usage:
VOLUME NAME LINKS SIZE
server_pg_data 1 0B
Also, if I want or need to build the containers once again using docker-compose down and docker-compose up, the data is purged from my database. Yet I thought that volumes were used to make data persist on disk…
I must be missing something in the way I'm using Docker and volumes, but I don't see what:
Why does my volume appear empty while there is data in my postgres container?
Why doesn't my data persist after doing docker-compose down?
This thread (How to persist data in a dockerized postgres database using volumes) looked similar but the solution does not seem to apply.
The kartoza/postgis image isn't configured the same way as the standard postgres image. Its documentation notes (under "Cluster Initializations"):
By default, DATADIR will point to /var/lib/postgresql/{major-version}. You can instead mount the parent location like this: -v data-volume:/var/lib/postgresql
If you look at the Dockerfile in GitHub, you will also see that parent directory declared as a VOLUME, which has some interesting semantics here.
With the setting you show, the actual data will be stored in /var/lib/postgresql/11.0; you're mounting the named volume on a different directory, /var/lib/postgresql/data, which is why it stays empty. Changing the volume mount to just /var/lib/postgresql should address this:
volumes:
- pg_data:/var/lib/postgresql:rw # not .../data
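After changing the mount target, recreate the containers; the volume should then show a non-zero size:

docker-compose down
docker-compose up -d
docker system df -v   # server_pg_data should now report a non-zero SIZE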
I'm trying to run Jira from docker-compose (as of now this would not be necessary, but later on I will add additional services). So far almost everything works.
version: '3'
services:
jira-local-standalone:
container_name: jira-local-standalone
image: atlassian/jira-software:8.5.0
volumes:
- ./work/jira-home:/var/atlassian/application-data/jira
ports:
- 8080:8080
environment:
TZ: Europe/Berlin
JVM_RESERVED_CODE_CACHE_SIZE: 1024m
JVM_MINIMUM_MEMORY: 512m
JVM_MAXIMUM_MEMORY: 2048m
networks:
- jiranet
networks:
jiranet:
driver: bridge
The only thing that does not work is this: I want the Jira home to be accessible via the file manager on my host machine, to access logs etc. and to "upload" import files. So I want to copy files to the specified location on my host and have them available to Jira inside my container, more or less like some sort of shared folder.
I expected the directory ./work/jira-home to be populated during startup (creating the /var/atlassian/application-data/jira contents inside ./work/jira-home, writing logfiles and so on). But the ./work/jira-home folder remains empty.
How can I expose the jira-home from inside the container to my host machine?
Thanks and best regards, Sebastian