persist %USERPROFILE% folder using docker compose volume - docker

I am trying to figure out how to persist a user profile folder through a volume mount.
I have the folder C:\Users\ABEL\source\repos, which needs to be persisted for a Windows container. The username should come from the host and is not known in advance.
Below is my docker-compose file; the volumes section is not correct.
Any comments will be helpful. Thanks in advance
version: '3.4'
services:
  directoryservice:
    image: abc-directoryservice:latest
    build: .
    ports:
      - "44309:44309"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:44309;
      - ASPNETCORE_Kestrel__Certificates__Default__Password=welcome123#
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ./devops/https/abccert.pfx:/https/aspnetapp.pfx:ro
      # - "$env:USERPROFILE/source:$env:USERPROFILE/source"
      - ${Env:USERPROFILE}\source:${Env:USERPROFILE}\source
I get the error below:
invalid interpolation format for services.directoryservice.volumes.[]: "${Env:USERPROFILE}\\source:${Env:USERPROFILE}\\source". You may need to escape any $ with another $.

The $env:USERPROFILE / ${env:USERPROFILE} syntax is specific to PowerShell.
Judging by the docs, docker-compose uses its own syntax: $USERPROFILE / ${USERPROFILE}.
You report a follow-up problem, namely that the Windows-style path stored in $USERPROFILE (%USERPROFILE%), e.g. C:\Users\jdoe\source, isn't converted to a Unix-style path (e.g. /c/Users/jdoe/source).
This answer suggests that you must set environment variable COMPOSE_CONVERT_WINDOWS_PATHS to 1, before running your docker-compose command.
E.g., in a PowerShell session:
$env:COMPOSE_CONVERT_WINDOWS_PATHS=1
Consider adding this statement to your $PROFILE file so that it takes effect in future PowerShell sessions too.
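Putting the two fixes together, a minimal sketch of the corrected volume entry (the container-side path /src is a placeholder of mine, not from the question; adjust it to wherever the container should see the files):

# PowerShell, before running docker-compose:
$env:COMPOSE_CONVERT_WINDOWS_PATHS=1

# docker-compose.yml, using Compose's own ${VAR} interpolation:
    volumes:
      - ${USERPROFILE}\source:/src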

Related

RavenDB ignoring environment variables in docker-compose

I am trying to set up a cluster of 3 RavenDB instances using docker-compose, and I am having problems with the RavenDB server not picking up the values in the RAVEN_ environment variables.
At first, I was running a single instance, using this docker-compose file:
version: '3'
services:
  ravendb:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
      - "38888:38888"
    volumes:
      - ../data:/opt/RavenDB/Server/RavenData
With a simple Dockerfile that used the latest ravendb image and copied a settings.json file into the container:
FROM ravendb/ravendb
COPY settings.json /opt/RavenDB/Server/settings.json
The settings.json file looked like this:
{
  "License.Eula.Accepted": true,
  "License": {/*License here*/},
  "Setup.Mode": "Unsecured",
  "Security.UnsecuredAccessAllowed": "PublicNetwork",
  "ServerUrl": "http://0.0.0.0:8080",
  "ServerUrl.Tcp": "tcp://0.0.0.0:38888"
}
Now that I am trying to set up 3 instances, I wanted to avoid this way of creating the containers, since I would have to maintain a separate Dockerfile and settings.json file for each one.
Therefore, I thought of using a single docker-compose file that creates the three containers and configures each one with environment variables.
I started with a single instance, to see if any problems would arise:
version: '3'
services:
  raven1:
    container_name: raven1
    image: ravendb/ravendb
    ports:
      - "8080:8080"
      - "38888:38888"
    environment:
      - RAVEN_Security_UnsecuredAccessAllowed=PublicNetwork
      - RAVEN_Setup_Mode=Unsecured
      - RAVEN_License_Eula_Accepted=true
      - "RAVEN_ServerUrl=http://0.0.0.0:8080"
      - "RAVEN_ServerUrl_Tcp=tcp://0.0.0.0:38888"
    volumes:
      - ../data:/opt/RavenDB/Server/RavenData
And arise they did! Despite the environment variables being set correctly, they are not picked up by the server, and the settings.json file is the default one.
root@8ad95cc439d4:/opt/RavenDB/Server# env
RAVEN_ARGS=
RAVEN_Security_UnsecuredAccessAllowed=PublicNetwork
RAVEN_AUTO_INSTALL_CA=true
RAVEN_ServerUrl=http://0.0.0.0:8080
RAVEN_SETTINGS=
RAVEN_ServerUrl_Tcp=tcp://0.0.0.0:38888
RAVEN_IN_DOCKER=true
RAVEN_Setup_Mode=Unsecured
RAVEN_License_Eula_Accepted=true
RAVEN_DataDir=RavenData
root@8ad95cc439d4:/opt/RavenDB/Server# cat settings.json
{
  "Security.UnsecuredAccessAllowed": "PrivateNetwork"
}
Any idea why this might be happening? I can't seem to find any mention of issues regarding this.
Do I understand correctly that you expected the configuration from environment variables to be included in the settings.json file in the container?
If that's the case, I would like to clarify that passing environment variables does not modify RavenDB's settings.json file. Instead, RavenDB loads them directly from the environment.
Configuration options are loaded in the following order of precedence (highest first):
command line arguments
settings.json configuration file
RAVEN_-prefixed environment variables
So if you wanted to override the configuration option Security.UnsecuredAccessAllowed found in the settings.json file, you would need to either change the file on the container or pass it as a CLI argument: --Security.UnsecuredAccessAllowed PublicNetwork.
Both cases are supported by RavenDB docker images:
to clear the default settings.json you can pass the RAVEN_SETTINGS={} environment variable to the container;
to pass command line arguments to the RavenDB server binary you can use the RAVEN_ARGS environment variable, e.g. RAVEN_ARGS=--Security.UnsecuredAccessAllowed PublicNetwork.
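Applied to the compose file from the question, a minimal sketch of the environment section with both mechanisms (use one or the other, not necessarily both):

services:
  raven1:
    image: ravendb/ravendb
    environment:
      # empty out the image's default settings.json so it no longer wins over env vars
      - RAVEN_SETTINGS={}
      # or pass the option directly to the server binary as a CLI argument
      - "RAVEN_ARGS=--Security.UnsecuredAccessAllowed PublicNetwork"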

"key cannot contain a space" error while running docker compose

I am trying to deploy my Django app to App Engine using a Dockerfile. After following a few blogs such as these, I created a docker-compose.yml file, but when I run docker compose up or docker-compose -f docker-compose-deploy.yml run --rm gcloud sh -c "gcloud app deploy", I get the error key cannot contain a space. See below:
For example:
$ docker compose up
key cannot contain a space
$ cat docker-compose.yml
version: '3.7'
services:
  app:
    build:
      context: .
    ports: ['8000:8000']
    volumes: ['./app:/app']
Can someone please help me fix this error? I have tried yamllint to validate the YAML file for any space/indentation errors, and it doesn't report any.
EDIT:
Here is the content of the file used in the longer command:
version: '3.7'
services:
  gcloud:
    image: google/cloud-sdk:338.0.0
    volumes:
      - gcp-creds:/creds
      - .:/app
    working_dir: /app
    environment:
      - CLOUDSDK_CONFIG=/creds
volumes:
  gcp-creds:
OK, this is finally resolved! After banging my head against it for a while, I was able to resolve this issue by doing the following:
Unchecked the option to use "Docker Compose V2" in my Docker Desktop settings.
Closed the Docker Desktop app and restarted it.
Please try these steps in case you face the issue. Thanks!
Just adding another alt answer here that I confirmed worked for me when following the steps above did not. My case is slightly different, but as Google brought me here first I thought I'd leave a note.
Check your env var values for spaces!
This may only be applicable if you are using env_file files (I appreciate that the OP is not, in the minimal example, hence saying my case is slightly different).
Unescaped spaces in variable values will cause this cryptic error message.
So, given a compose file like this:
version: '3.7'
services:
  gcloud:
    image: google/cloud-sdk:338.0.0
    volumes:
      - gcp-creds:/creds
      - .:/app
    working_dir: /app
    env_file:
      - some_env_file.env
If some_env_file.env looks like this:
MY_VAR=some string with spaces
then we get the cryptic key cannot contain a space.
If instead we change some_env_file.env to be like this:
MY_VAR="some string with spaces"
then all is well.
The issue has been reported to docker-compose.
Google brought me here first, and when your suggestion sadly didn't work for me, it then took me to this reddit thread, where I found out the above.
Docker Compose (at least since v2) automatically parses .env files before processing the docker-compose.yml file, regardless of any env_file setting within the yaml file. If any of the variables inside your .env file contains spaces, then you will get the error key cannot contain a space.
Two workarounds exist at this time:
Rename your .env file to something else, or
Create an alternate/empty .env file, e.g. .env.docker and then explicitly set the --env-file parameter, i.e. docker compose --env-file .env.docker config.
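A quick sketch of that second workaround (the file names here are just examples):

$ touch .env.docker                                # empty env file for Compose itself
$ docker compose --env-file .env.docker config     # check that interpolation now succeeds
$ docker compose --env-file .env.docker up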
Track the related issues here:
https://github.com/docker/compose/issues/6741
https://github.com/docker/compose/issues/8736
https://github.com/docker/compose/issues/6951
https://github.com/docker/compose/issues/4642
https://github.com/docker/compose/commit/ed18cefc040f66bb7f5f5c9f7b141cbd3afbbc89
https://docs.docker.com/compose/env-file/
One more thing to be careful about: since Compose V2, this error may also be raised if you have inline comments in the env file used by Compose. For example, with
---
version: "3.7"
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .app.env
and a .app.env like this:
RABBIT_USER=user # RabbitMQ user
the same error may occur. To fix it, just move the comment to its own line:
# RabbitMQ user
RABBIT_USER=user

define volumes in docker-compose.yaml

I am writing a docker-compose.yaml file for my project. I have checked the volumes documentation here.
I also understand the concept of a volume in docker: I can mount a volume, e.g. -v my-data/:/var/lib/db, where my-data/ is a directory on my host machine while /var/lib/db is the path inside the database container.
My confusion is with the link I put above. It has the following sample:
version: "3.9"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data
volumes:
data-volume:
I wonder, does it mean that I have to create a directory named data-volume on my host machine? What if I have a directory on my machine at temp/my-data/ and I want to mount that path into the database container at /var/lib/db? Should I do something like the below?
version: "3.9"
services:
db:
image: db
volumes:
- temp/my-data/:/var/lib/db
volumes:
temp/my-data/:
My main confusion is the volumes: section at the bottom. I am not sure whether the volume name should be the path of my directory or just literally a name I give, and if it is the latter, how would the given name be mapped to temp/my-data/ on my machine? The sample doesn't indicate that and is ambiguous on this point.
Could someone please clarify it for me?
P.S. I tried the docker-compose file I guessed at above and ended up with the error:
ERROR: The Compose file './docker-compose.yaml' is invalid because:
volumes value 'temp/my-data/' does not match any of the regexes: '^[a-zA-Z0-9._-]+$'
Mapped volumes can either be files/directories on the host machine (sometimes called bind mounts in the documentation) or they can be docker volumes that can be managed using docker volume commands.
The volumes: section in a docker-compose file specify docker volumes, i.e. not files/directories. The first docker-compose in your post uses such a volume.
If you want to map a file or directory (like in your last docker-compose file), you don't need to specify anything in the volumes: section.
Docker volumes (the ones specified in the volumes: section or created using docker volume create) are of course also stored somewhere on your host computer, but docker manages that and you shouldn't normally need to know where or what the format is.
This part of the documentation is pretty good about explaining it, I think https://docs.docker.com/storage/volumes/
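Applied to the directory from the question, a minimal sketch of the bind-mount form; note the leading ./ that Compose needs for a relative host path, and that no top-level volumes: section is required:

version: "3.9"
services:
  db:
    image: db
    volumes:
      # host directory (relative to the compose file) : container path
      - ./temp/my-data/:/var/lib/db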
As @HansKilian mentions, you don't need both volumes and services.volumes. To use services.volumes, map the host directory to the container directory like this:
services:
  db:
    image: db
    volumes:
      - /host/path/lib/db:/container/path/lib/db
With that, the directory /host/path/lib/db on the host machine will be used by the container and available at /container/path/lib/db.
Now, if you're like me, I get really confused with fake examples, so let's say the real directory on your host machine is /var/lib/db and you just want to see it at /db when you run a shell in Docker (i.e., docker exec -it container-id /bin/bash).
docker-compose.yaml would look like this:
services:
  db:
    image: db
    volumes:
      - /var/lib/db:/db
Now when you run the shell, cd /db and ls, you'll see the same results as if you'd cd /var/lib/db on the host.
If you want to use the volumes section to indicate a global volume to use, you first have to create that volume using docker volume create. The documentation Hans linked includes steps to do this. The syntax of /host/path:/container/path is replaced by volume-name:/container/path. Then, once defined, you'd alter your docker-compose.yaml to be more like this:
services:
  db:
    image: db
    volumes:
      - your-global-volume-name:/db
volumes:
  your-global-volume-name:
    external: true
Note that I have not tested or used this configuration. I'm assuming it's correct based on the other method working and the few changes I can identify in the docs.
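For that external-volume variant, the volume would be created once on the host before running compose, e.g.:

$ docker volume create your-global-volume-name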

docker-compose not finding environment variable $PWD on Ubuntu WSL 2

I am relatively new to docker. I have been trying to compose the file below:
version: "3"
services:
postgres:
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=test_db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd
volumes:
- test_db:${PWD}
pgweb:
restart: always
image: sosedoff/pgweb
ports:
- "8081:8081"
environment:
- DATABASE_URL=postgres://postgres:POSTGRES_USER#POSTGRES_PASSWORD_FILE:5432/POSTGRES_DB?sslmode=disable
depends_on:
- postgres
volumes:
test_db:
What I am trying to do is mount the volume test_db to my current working directory by using the environment variable $PWD. When I run docker-compose up in my terminal I get the following warning:
The PWD variable is not set. Defaulting to a blank string.
Now it is important to note that I am currently using Ubuntu running on WSL2 on Windows 10. Another thing to note is that I am running zsh and not bash.
I followed the exact steps mentioned in the documentation.
I also checked another question which seemed to be similar to mine but not quite the same, as it was possible to replace ${PWD} with ./ which simply does not work in my case.
When using ./ instead of $PWD I get the following error:
for pg_test_postgres_1 Cannot create container for service postgres:\
invalid volume specification: 'pg_test_test_db:.:rw': invalid mount config\
for type "volume": invalid mount path: '.' mount path must be absolute
If you are trying to see what you can and cannot do, this is something you cannot do. Docker does not load an environment variable that would normally be set by the shell; zsh or bash, it doesn't matter. And yes, it's the shell that sets $PWD and $OLDPWD. Docker CAN define a variable that will be passed to the distro as an environment variable and also be used by Docker at the time of the container build. Also, volumes need to be defined using absolute paths.
Also, like David Maze mentions, your PostgreSQL data folder needs to be specifically in /var/lib/postgresql/data or that folder needs to be symlinked to a different arbitrary folder where PostgreSQL has read-write access. The point of a container is to build it for your needs so under normal circumstances you should know where everything goes and set volumes' paths explicitly.
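Putting those two points together, a minimal sketch of the corrected postgres service, mounting the named volume at PostgreSQL's default data directory instead of ${PWD} (only the relevant parts shown):

services:
  postgres:
    image: postgres
    volumes:
      # named volume mounted at an absolute container path
      - test_db:/var/lib/postgresql/data
volumes:
  test_db: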

Allowing multiple services in docker-compose to share a merged volume

Given a docker-compose.yml file like the one below, I'm looking for a way for both service a and service b to have access to a shared volume consisting of the merged contents of both containers.
version: '3'
volumes:
  shared-merged-volume:
services:
  a:
    volumes:
      - shared-merged-volume:/shared
  b:
    volumes:
      - shared-merged-volume:/shared
Let's say service a has a directory at /shared/dir-from-a and service b has a similar /shared/dir-from-b directory. The desired result is to end up with:
$ ls /shared # from either container
dir-from-a
dir-from-b
What I find is that one of the containers "wins" and only one of those two directories is ever present. I can work around the issue like this, but it is more verbose and requires modification if the directory contents ever change:
version: '3'
volumes:
  service-a-shared-volume:
  service-b-shared-volume:
services:
  a:
    volumes:
      - service-a-shared-volume:/shared/dir-from-a
      - service-b-shared-volume:/shared/dir-from-b
  b:
    volumes:
      - service-a-shared-volume:/shared/dir-from-a
      - service-b-shared-volume:/shared/dir-from-b
Thanks in advance for any help!
Is using a named volume a requirement?
If not, then to accomplish such merging I usually just map directories to one location on the host drive instead of using volumes, and it merges with no problems. Tested under big loads, with multiple containers writing simultaneously.
Proposed compose file:
version: '3'
volumes:
  shared-merged-volume:
services:
  a:
    volumes:
      - /location/on/host/system:/shared
  b:
    volumes:
      - /location/on/host/system:/shared
Edit from comments
This method mounts everything in the local host directory to /shared, meaning that if the host directory is empty, an empty dir is mounted, and whatever was at /shared in the image is hidden by the empty dir. Everything written inside that mount after your service starts will be persisted and merged across services as expected.
If both containers are creating different folders, I don't see how they can be contending to create their own respective folders, unless they both delete the contents of /shared first and then create the folders? But that would mean that the use of volumes in this case is pointless, because the contents would be deleted every time the container starts.
In any case, I find that it is often useful to persuade the containers to share the same folder by use of path redirection. I will share two ways of accomplishing this:
If you have access to the code that creates the folders in /shared, then you can use environment variables to change the expected location of /shared for each service
version: '3'
volumes:
  shared-merged-volume:
services:
  a:
    environment:
      SHARED_VOLUME_PATH: /shared/a/
    volumes:
      - shared-merged-volume:/shared
  b:
    environment:
      SHARED_VOLUME_PATH: /shared/b/
    volumes:
      - shared-merged-volume:/shared
You may need to have the services create SHARED_VOLUME_PATH, but now they can both live peaceably with each other.
If you are unable to change the location of /shared, which means each service will always want to use that path, another way to create path redirection is to use symbolic links. For this to work, you will have to override the entrypoint of your services or do this step during the build process of the image.
version: '3'
volumes:
  shared-merged-volume:
services:
  a:
    entrypoint: [ "ln", "-sf", "/symshared/a/", "/shared/" ]
    volumes:
      - shared-merged-volume:/symshared
  b:
    entrypoint: [ "ln", "-sf", "/symshared/b/", "/shared/" ]
    volumes:
      - shared-merged-volume:/symshared
Alternatively, build the images ahead of time, and add a simple RUN command in the Dockerfile which creates this symbolic link:
...
ARG SHARED_VOLUME_PATH
RUN ln -sf ${SHARED_VOLUME_PATH} /shared/
What this allows you to do is let each container keep using /shared as it did before, while you still store its contents in the volume, without interfering with what other containers want to do with their own version of /shared.
Needless to say, the ln command only works on Linux and other Unixes, and in some cases you may need to install it first. If your container image is based on something else, Windows for example, then find something else that can be used to create symlinks.
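For the build-time variant, the symlink target could be supplied per image via the build argument; the image tags here are made up for illustration:

$ docker build --build-arg SHARED_VOLUME_PATH=/symshared/a/ -t service-a .
$ docker build --build-arg SHARED_VOLUME_PATH=/symshared/b/ -t service-b .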
