Dynamically replace current time inside docker-compose command - docker

version: '3.7'
services:
  pgdump:
    image: postgres:alpine
    command: pg_dump -f "backup-`date -u -Iseconds`.pg_restore" $DATABASE_URL
This produces a file named
backup-`date -u -Iseconds`.pg_restore
instead of the desired
backup-2021-04-14T16:42:54+00:00.pg_restore.
I also tried:
command: pg_dump -f backup-`date -u -Iseconds`.pg_restore $DATABASE_URL
command: pg_dump -f "backup-${date -u -Iseconds}.pg_restore" $DATABASE_URL
command: pg_dump -f backup-${date -u -Iseconds}.pg_restore $DATABASE_URL
All of them yield different errors.

As of April 2021, command substitution is not supported by docker-compose, according to this GitHub issue.
As a workaround in my use case, one could either use native docker run commands (where substitution works) or use an .env file.
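A minimal sketch of the .env workaround, assuming a small wrapper script regenerates the file right before Compose runs (the script and variable names are illustrative, not from the original setup):
# run-backup.sh (hypothetical wrapper)
echo "BACKUP_NAME=backup-$(date -u -Iseconds).pg_restore" > .env
docker-compose up pgdump
# docker-compose.yml would then let Compose interpolate the value:
#   command: pg_dump -f "${BACKUP_NAME}" $DATABASE_URL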

Current command
The date command itself is incorrect. Try running it on its own:
date -u -Iseconds
echo `date -u -Iseconds`
From your command, I presume you want the date as UTC seconds since the epoch? The epoch is already UTC, so you just need seconds since the epoch; there is no need for the -u parameter.
Solution
Here's the correct command in two forms:
A.
command: pg_dump -f "backup-`date +'%s'`.pg_restore" $DATABASE_URL
B.
command: pg_dump -f "backup-$(date +'%s').pg_restore" $DATABASE_URL
Explanation
There are multiple things to watch out for in the command you provided:
Notice the double quotes around the file name? This means you cannot nest another double quote within the original outer pair without escaping the inner ones with \. An alternative is to use as many single-quote pairs as you want within a pair of double quotes. See this answer and this excerpt about 2.2.2 Single-Quotes and 2.2.3 Double-Quotes.
For command substitution, you can use either the $() or the backtick notation, but NOT within single quotes, as noted above.
As a dry-run test, create a file directly with said notations:
vi "backup-`date +'%s'`.txt"
vi "backup-$(date +'%s').txt"
As for the date format: both GNU date and BSD date accept %s to represent seconds since the epoch. Find "%s" in ss64 or man7 or cyberciti.
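For instance (the output value is illustrative):
date +'%s'    # prints something like 1618418574 on both GNU and BSD date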
Docker-related: watch out for what command does. Source:
command overrides the default command declared by the container image (i.e. by the Dockerfile's CMD).
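In other words, command: replaces the image's CMD much like the trailing arguments of a plain docker run; a rough shell equivalent of form B above would be (sketch only, and here the substitution is performed by your interactive shell before Docker ever sees it):
docker run --rm postgres:alpine pg_dump -f "backup-$(date +'%s').pg_restore" "$DATABASE_URL"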

You can create the filename and store it as a variable with a shell command before doing the pg_dump:
version: '3.7'
services:
  pgdump:
    image: postgres:alpine
    entrypoint: ["/bin/sh", "-c"]
    command: >
      "FILENAME=backup-`date -u -Iseconds`.pg_restore
      && pg_dump -f $$FILENAME $$DATABASE_URL"
Successfully tested against Docker image for postgres 13.6.
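A hedged usage sketch, with the service name taken from the example above:
docker-compose up pgdump        # or: docker-compose run --rm pgdump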

Related

How can I export a command that has quotes in it? I'm getting an error when I try to nest single quotes within double quotes

I am attempting to add this export to my profile:
export CLI ='docker-compose exec cardano-node sh -c "CARDANO_NODE_SOCKET_PATH=/ipc/node.socket cardano-cli"'
This command works with no issue from the command line
docker-compose exec cardano-node sh -c "CARDANO_NODE_SOCKET_PATH=/ipc/node.socket cardano-cli"
I tried placing it in quotes a number of ways and I keep getting the following error:
cardano-cli": -c: line 0: unexpected EOF while looking for matching `"'
cardano-cli": -c: line 1: syntax error: unexpected end of file
What's the deal here?
docker-compose exec takes a -e option to set environment variables. If you use that option, you don't need the sh -c wrapper, since you are just running a simple command, and this removes a layer of quotes.
export CLI=`docker-compose exec -e CARDANO_NODE_SOCKET_PATH=/ipc/node.socket cardano-node cardano-cli`
docker-compose exec will also get the environment variables from the docker-compose.yml file, so if you declare the variables there
services:
  cardano-node:
    environment:
      - CARDANO_NODE_SOCKET_PATH=/ipc/node.socket
you don't have to repeat them in docker-compose exec at all
export CLI=`docker-compose exec cardano-node cardano-cli`
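If the goal is a reusable shortcut, a shell function also sidesteps the nested-quoting problem entirely; a minimal sketch (not part of the original answer, and the function name is arbitrary):
cli() {
  docker-compose exec -e CARDANO_NODE_SOCKET_PATH=/ipc/node.socket cardano-node cardano-cli "$@"
}
# usage: cli --help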

Problem exporting environment variable in Makefile

I would like to export an environment variable in the Makefile. In this case, it is to get the IP for debugging with Docker.
Makefile
start:
	export XDEBUG_REMOTE_HOST=$$(/sbin/ip route|awk '/kernel.*metric/ { print $$9 }') \
	; docker-compose up -d
Update from answers:
version: '3.5'
services:
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      - XDEBUG_CONFIG="idekey=docker"
      - XDEBUG_REMOTE_HOST=${XDEBUG_REMOTE_HOST}
output:
$ make start
export XDEBUG_REMOTE_HOST=$(/sbin/ip route|awk '/kernel.*metric/ { print $9 }') \
; docker-compose up -d
Starting service_php ... done
$ docker-compose exec php bash
WARNING: The XDEBUG_REMOTE_HOST variable is not set. Defaulting to a blank string.
You need to make sure the variable assignment and the docker command run in the same shell. Trivially, put them in the same rule:
start:
	XDEBUG_REMOTE_HOST=$$(/sbin/ip route|awk '/kernel.*metric/ { print $$9 }') \
	docker-compose up -d
I took out the @ because it's probably simply a bad idea, especially if you need to understand what's going on here. You can use make -s once your Makefile is properly tested if you don't want to see what it's doing.
The purpose of export is to expose a variable to subprocesses, but that's not necessary here. Instead, we use the shell's general
variable=value anothervar=anothervalue command
syntax to set the value of a variable for the duration of a single command.
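A small illustration of that syntax in a plain shell (GREETING is a throwaway name):
GREETING=hello sh -c 'echo "$GREETING"'   # prints: hello
echo "$GREETING"                          # prints an empty line (assuming GREETING was not already set); the assignment did not persist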
If the internals of docker-compose require the variable to be exported, then of course, you can do that too:
start:
	export XDEBUG_REMOTE_HOST=$$(/sbin/ip route|awk '/kernel.*metric/ { print $$9 }') \
	; docker-compose up -d
Notice how the backslash at the end of the first line of the command list joins the two commands on a single logical line, so they get passed to the same shell instance, and the ; command separator is required to terminate the first command. (I put the semicolon at the beginning of the line as an ugly reminder to the reader that this is all one command line.)
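For contrast, a sketch of the failure mode this avoids: without the trailing backslash, each recipe line runs in its own shell, so the exported variable never reaches docker-compose.
broken:
	export XDEBUG_REMOTE_HOST=placeholder-host
	docker-compose up -d    # new shell; XDEBUG_REMOTE_HOST is not set here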
Specifically for docker-compose, the customary way to set a variable from the command line is with a specific named option:
start:
	docker-compose up -e XDEBUG_REMOTE_HOST=$$(/sbin/ip route|awk '/kernel.*metric/ { print $$9 }') -d
There are other ways to solve this, such as the GNU Make .ONESHELL directive, but this is simple and straightforward, and portable to any Make.
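For reference, a sketch of the .ONESHELL variant mentioned above (GNU Make only; the whole recipe runs in a single shell, so an export on one line is visible on the next):
.ONESHELL:
start:
	export XDEBUG_REMOTE_HOST=$$(/sbin/ip route|awk '/kernel.*metric/ { print $$9 }')
	docker-compose up -d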
If you assume that the route exists when make is first invoked, you can assign a make variable as opposed to a shell variable as follows:
export XDRH_MAKE_VAR:=$(shell /sbin/ip route|awk '/kernel.*metric/ { print $$9 }')

start:
	@echo XDRH_MAKE_VAR=$(XDRH_MAKE_VAR)
	XDEBUG_REMOTE_HOST=$(XDRH_MAKE_VAR) docker-compose up -d

XDRH_FILE:
	echo $(XDRH_MAKE_VAR) > $@

someother_target:
	XDEBUG_REMOTE_HOST=$(XDRH_MAKE_VAR) some_other_command
	command_that_uses_it_as_param $(XDRH_MAKE_VAR)

NOTE_does_not_work:
	XDEBUG_REMOTE_HOST=$(XDRH_MAKE_VAR) echo $$XDEBUG_REMOTE_HOST
The last one does not work, because the shell expands $XDEBUG_REMOTE_HOST before the temporary assignment takes effect (see here). Also, the variable is set at make parse time, so if any of your rules affect the route, this will not be reflected in its value.
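A tiny demonstration of that expansion order inside a recipe (FOO is a throwaway name):
demo:
	FOO=bar echo $$FOO            # prints an empty line: the shell expands $FOO before running the command
	FOO=bar sh -c 'echo $$FOO'    # prints bar: the child shell expands it after FOO is in its environment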
If you want to access the value in the shell afterwards, you would want to do something like:
bash> make start XDRH_FILE
bash> XDEBUG_REMOTE_HOST=`cat XDRH_FILE`
bash> docker-compose exec php bash

Docker Compose args from the output of a Linux command

I am trying to set a build arg for Docker Compose using the output of a Linux command, as in my example:
args:
  ID_GITLAB: $(id -u $USER)
but when I run my compose I get following error:
ERROR: Invalid interpolation format for "build" option in service "gpc-fontes-ci": "$(id -u $USER)"
Just do
USER_ID=$(id -u $USER) docker-compose
With the Compose file using a regular variable:
args:
  ID_GITLAB: $USER_ID
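For completeness, the build arg would then be consumed in the Dockerfile; a hypothetical sketch (the ARG name comes from the question, the RUN line is purely illustrative):
ARG ID_GITLAB
RUN echo "building for uid ${ID_GITLAB}"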
You need to escape the $ character with another $, e.g.
args:
  ID_GITLAB: $$(id -u $USER)
It's the same rule for command.
See: Variable substitution.
You can use a $$ (double-dollar sign) when your configuration needs a literal dollar sign. This also prevents Compose from interpolating a value, so a $$ allows you to refer to environment variables that you don’t want processed by Compose.
For example:
web:
  build: .
  command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE"

Docker error: invalid reference format: repository name must be lowercase

Ran into this Docker error with one of my projects:
invalid reference format: repository name must be lowercase
What are the various causes for this generic message?
I already figured it out after some effort, so I'm going to answer my own question in order to document it here, as the solution doesn't come up right away when doing a web search, and also because this error message doesn't describe the direct problem Docker encounters.
A "reference" in docker is a pointer to an image. It may be an image name, an image ID, include a registry server in the name, use a sha256 tag to pin the image, and anything else that can be used to point to the image you want to run.
The invalid reference format error message means docker cannot convert the string you've provided to an image. This may be an invalid name, or it may be from a parsing error earlier in the docker run command line if that's how you run the image.
If the name itself is invalid, the repository name must be lowercase part means you used uppercase characters in your registry or repository name, e.g. YourImageName:latest should be yourimagename:latest.
With the docker run command line, this is often the result of not quoting parameters with spaces, missing the value for an argument, or mistaking the order of the command line. The command line is ordered as:
docker ${args_to_docker} run ${args_to_run} image_ref ${cmd_to_exec}
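A concrete, hedged instance of that ordering using a throwaway alpine image:
docker run --rm alpine echo "hello"
# args_to_docker: (none)   args_to_run: --rm   image_ref: alpine   cmd_to_exec: echo "hello"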
The most common error in passing args to the run is a volume mapping expanding a path name that includes a space in it, and not quoting the path or escaping the space. E.g.
docker run -v $(pwd):/data image_ref
Where if you're in the directory /home/user/Some Project Dir, that would define an anonymous volume /home/user/Some in your container, and try to run Project:latest with the command Dir:/data image_ref. And the fix is to quote the argument:
docker run -v "$(pwd):/data" image_ref
Other common places to miss quoting include environment variables:
docker run -e SOME_VAR=Value With Spaces image_ref
which docker would interpret as trying to run the image With:latest and the command Spaces image_ref. Again, the fix is to quote the environment parameter:
docker run -e "SOME_VAR=Value With Spaces" image_ref
With a compose file, if you expand a variable in the image name, that variable may not be expanding correctly. So if you have:
version: 2
services:
  app:
    image: ${your_image_name}
Then double check that your_image_name is defined as an all-lowercase string.
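One quick way to verify what Compose will actually substitute (a hedged check, not part of the original answer):
echo "$your_image_name"                 # should print an all-lowercase name
docker-compose config | grep 'image:'   # shows the fully resolved image reference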
In my case it was the -e before the parameters for the MySQL docker image:
docker run --name mysql-standalone -e MYSQL_ROOT_PASSWORD=hello -e MYSQL_DATABASE=hello -e MYSQL_USER=hello -e MYSQL_PASSWORD=hello -d mysql:5.6
Also check whether there is any missing whitespace.
Let me emphasise that Docker doesn't even allow mixed-case names.
Good:
docker build -t myfirstechoimage:0.1 .
Bad:
docker build -t myFirstEchoImage:0.1 .
I had a space in the current working directory and was using $(pwd) to map volumes. Docker doesn't like spaces in directory names.
In my case, the image name defined in docker-compose.yml contained uppercase letters. The fact that the error message mentioned repository instead of image did not help describe the problem and it took a while to figure out.
In my case the problem was the arrangement of parameters. Initially I had the --name parameter after the environment parameters, then the volume and attach_dbs parameters, and the image at the end of the command, like below.
docker run -p 1433:1433 -e sa_password=myComplexPwd -e ACCEPT_EULA=Y --name sql1 -v c:/temp/:c:/temp/ attach_dbs="[{'dbName':'TestDb','dbFiles':['c:\\temp\\TestDb.mdf','c:\\temp\\TestDb_log.ldf']}]" -d microsoft/mssql-server-windows-express
After rearranging the parameters as below, everything worked fine (basically putting the --name parameter followed by the image name).
docker run -d -p 1433:1433 -e sa_password=myComplexPwd -e ACCEPT_EULA=Y --name sql1 microsoft/mssql-server-windows-express -v C:/temp/:C:/temp/ attach_dbs="[{'dbName':'TestDb','dbFiles':['C:\\temp\\TestDb.mdf','C:\\temp\\TestDb_log.ldf']}]"
On macOS, when you are working on an iCloud drive, your $PWD will contain the directory "Mobile Documents". Docker does not seem to like the space!
As a workaround, I copied my project to a local drive where there is no space in the path to my project folder.
I do not see a way you can get around changing the default iCloud path, which is ~/Library/Mobile Documents/com~apple~CloudDocs
The space in the "Mobile Documents" part of the path seems to be what docker run does not like.
If you encounter this problem with go-swagger (Windows):
@echo off
echo.
docker run --rm -it --env GOPATH=/go -v %CD%:/go/src -w /go/src quay.io/goswagger/swagger %*
Use this instead (add quotes):
@echo off
echo.
docker run --rm -it --env GOPATH=/go -v "%CD%:/go/src" -w /go/src quay.io/goswagger/swagger %*
A reference in Docker is what points to an image. This could be in a remote registry or the local registry. Let me describe the error message first and then show the solutions for this.
invalid reference format
This means that the reference we have used is not in a valid format, i.e. the reference (pointer) we have used to identify an image is invalid. Generally, this is followed by a more specific description, which makes the error much clearer.
invalid reference format: repository name must be lowercase
This means the reference we are using should not have uppercase letters. Try running docker run Ubuntu (wrong) vs docker run ubuntu (correct). Docker does not allow any uppercase characters as an image reference. Simple troubleshooting steps:
1) The Dockerfile contains capital letters in the image name.
FROM Ubuntu (wrong)
FROM ubuntu (correct)
2) The image name defined in docker-compose.yml has uppercase letters.
3) If you are using Jenkins or GoCD to deploy your Docker container, please check whether the image name in the run command includes a capital letter.
Please read this document written specifically for this error.
Sometimes you miss the -e flag while specifying multiple env vars inline,
e.g.
bad: docker run --name somecontainername -e ENV_VAR1=somevalue1 ENV_VAR2=somevalue2 -d -v "mypath:containerpath" <imagename e.g. postgres>
good: docker run --name somecontainername -e ENV_VAR1=somevalue1 -e ENV_VAR2=somevalue2 -d -v "mypath:containerpath" <imagename e.g. postgres>
In my case I had a naked --env switch, i.e. one without an actual variable name or value, e.g.:
docker run \
--env \ <----- This was the offending item
--rm \
--volume "/home/shared:/shared" "$(docker build . -q)"
Replacing image: ${DOCKER_REGISTRY}notificationsapi with image: notificationsapi or image: ${docker_registry}notificationsapi in docker-compose.yml solved the issue.
file with error
version: '3.4'
services:
  notifications.api:
    image: ${DOCKER_REGISTRY}notificationsapi
    build:
      context: .
      dockerfile: ../Notifications.Api/Dockerfile
file without error
version: '3.4'
services:
  notifications.api:
    image: ${docker_registry}notificationsapi
    build:
      context: .
      dockerfile: ../Notifications.Api/Dockerfile
So I think the error was due to the non-lowercase letters it had.
For me the issue was a space in a volume mapping that was not escaped. The Jenkins job which was running the docker run command had a space in it, and as a result the Docker engine was not able to understand the docker run command.
Indeed, the Docker registry as of today (sha 2e2f252f3c88679f1207d87d57c07af6819a1a17e22573bcef32804122d2f305) does not handle paths containing uppercase characters. This is obviously a poor design choice, probably due to wanting to maintain compatibility with certain operating systems that do not distinguish case at the file level (i.e., Windows).
If one authenticates for a scope and tries to fetch a non-existing repository with an all-lowercase name, the output is:
(auth step not shown)
curl -s -H "Authorization: Bearer $TOKEN" -X GET https://$LOCALREGISTRY/v2/test/someproject/tags/list
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"repository","Class":"","Name":"test/someproject","Action":"pull"}]}]}
However, if one tries to do this with an uppercase component, only 404 is returned:
(authorization step done but not shown here)
$ curl -s -H "Authorization: Bearer $TOKEN" -X GET https://docker.uibk.ac.at:443/v2/test/Someproject/tags/list
404 page not found
I solved this by changing some uppercase words in my Dockerfile, like:
FROM Base as Build
RUN npm run Build:prod
to
FROM base as build
RUN npm run build:prod
Another place:
FROM Base as Release
COPY --from=Build /usr/path/here/dist/ ./dist
to
FROM base as Release
COPY --from=build /usr/path/here/dist/ ./dist
I've encountered the same issue while using docker with mlflow.
In my case, the directory name containing my Dockerfile was "My Project", which I changed to myproject or my_project, and it worked for me.
Also, follow the same naming format for all the root/super directories under which the Dockerfile resides.
Not only for Docker, it's also good practice (especially on Unix-based OSes) to avoid the following when naming a directory:
white spaces
camel-case
upper-case
I had the same error, and for some reason it appears to have been caused by uppercase letters in the Jenkins job that ran the docker run command.
This was happening because of spaces in the current working directory coming from $(pwd) for mapping volumes. So I used docker-compose instead.
The docker-compose.yml file:
version: '3'
services:
  react-app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
"docker build -f Dockerfile -t SpringBoot-Docker ."
As in the above command, we are creating an image for a Docker container. The command says: create an image using the file (-f refers to the Dockerfile) and tag it with -t, the name of the image we are going to push to Docker. The "." represents the current directory.
Solution for the above problem: provide the target image name in lowercase.
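For example, the offending command above would become (lowercased tag, otherwise unchanged):
docker build -f Dockerfile -t springboot-docker .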
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
example:
FROM python:3.7-alpine
The 'python' should be in lowercase
In my case I was trying to run Postgres through Docker. Initially I was running it as:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test_password POSTGRES_USER=test_user POSTGRES_DB=test_db --rm -v ~/docker/volumes/postgres:/var/lib/postgresql/data --name pg-docker postgres
I was missing -e before each environment variable. Changing the above command to the one below worked.
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test_password -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --rm -v ~/docker/volumes/postgres:/var/lib/postgresql/data --name pg-docker postgres
I wish the error message would output the problem string. I was getting this due to a weird copy-and-paste problem with a "docker run" command: a space-like character was being used before the repo and image name.
Most of the answers above did not work for my case, so I will document this in case somebody finds it helpful. In my case the first line in the Dockerfile was FROM NODE:10; the word node should not be uppercase, i.e. it should be FROM node:10. I made that change and it worked.
In my case the Dockerfile contained the image name in mixed case instead of lowercase.
The earlier line in my Dockerfile:
FROM CentOs
and when I changed the above to FROM centos, it worked smoothly.
You need to enter the name of the Docker image and not your file name :P
$ docker run {your image}
Another possible cause of this error is that in your Dockerfile you have mixed capitalization in the syntax declaration itself. For example:
# syntax=docker/Dockerfile:1
instead of
# syntax=docker/dockerfile:1
If you come here after encountering this error in your GitHub Actions workflows…
Make sure to use the docker/metadata-action action to handle repository naming for you. Just call it before docker/build-push-action:
# Add this
- id: docker-metadata
  uses: docker/metadata-action@v4
  with:
    images: ghcr.io/${{ github.repository }}

# Use the extracted metadata
- uses: docker/build-push-action@v3
  with:
    tags: ${{ steps.docker-metadata.outputs.tags }}
    labels: ${{ steps.docker-metadata.outputs.labels }}
    # … other properties …

Docker-compose not passing environment variable to container

I am using Docker 17.04.0-ce, build 4845c56 with docker-compose 1.12.0, build b31ff33 on Ubuntu 16.04.2 LTS. I simply want to pass an environment variable and display it from my script running in a container. I am doing this according to the documentation https://docs.docker.com/compose/compose-file/#environment . The problem is that the variable is not passed to the container.
My docker-compose.yml file:
env-file-test:
  build: .
  dockerfile: Dockerfile
  environment:
    - DEMO_VAR
My Dockerfile:
FROM alpine
COPY docker-start.sh /
CMD ["/docker-start.sh"]
And the docker-start.sh file:
#!/bin/sh
echo "DEMO_VAR Var Passed in: $DEMO_VAR"
I try to set the variable in my current terminal session and pass it to the container:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo docker-compose up
Starting envfiletest_env-file-test_1
Attaching to envfiletest_env-file-test_1
env-file-test_1 | DEMO_VAR Var Passed in:
envfiletest_env-file-test_1 exited with code 0
So you can see that the variable DEMO_VAR is empty!
I also tried using variables in docker-compose.yml like this: DEMO_VAR=${DEMO_VAR} but then when I run sudo docker-compose up, I get a warning: "WARNING: The DEMO_VAR variable is not set. Defaulting to a blank string.".
What am I doing wrong? What should I do to pass the variable to the container?
I found a solution. Answering my own question...
The problem was with the sudo command. It turned out that it does not pass environment variables by default. There are some possible solutions:
Use sudo -E. Demo:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo -E docker-compose up
env-file-test_1 | DEMO_VAR Var Passed in: aabbdd
Use sudo VAR=value:
sudo DEMO_VAR=$DEMO_VAR docker-compose up
Add environment variables to the sudoers file (https://stackoverflow.com/a/8636711)
Use docker without sudo (https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo)
You should use ENV in your Dockerfile, and avoid export.
See the doc: https://docs.docker.com/engine/reference/builder/#env
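A minimal sketch of that ENV approach, reusing the Dockerfile from the question (note that this bakes a default value into the image rather than passing it at runtime):
FROM alpine
ENV DEMO_VAR=aabbdd
COPY docker-start.sh /
CMD ["/docker-start.sh"]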
