Unable to access JSON file through shell script inside Docker container

I'm trying to populate data using the REST endpoints of another container (my-blog-container) by creating a container (data-loader) that runs a shell script, dataloader.sh. In that shell script I'm trying to access a JSON file (mydata.json) to make a POST call while the container is running.
Basically, the goal is to populate the data by spinning up the container on its own.
But I'm getting an error in my container:
Warning: Couldn't read data from file 
Warning: "mydata.json", this makes an empty
Warning: POST.
docker-compose.yaml
data-loader:
  <<: *my-base-service
  build:
    context: .
    dockerfile: Dockerfile.dataloader
    args:
      IMAGE_NAME: <alpine base image>
  container_name: data-loader
  volumes:
    - $MY_PATH/config/mydata.json
Dockerfile.dataloader
ARG IMAGE_NAME
FROM ${IMAGE_NAME}
USER root
RUN apk update && apk upgrade && apk add curl
COPY dataloader.sh /
RUN chmod +x /dataloader.sh
ENTRYPOINT [ "sh", "/dataloader.sh" ]
dataloader.sh
response=$(curl -X POST "http://myblogcontainer:8080/blog/create" -H "accept: */*" -H "Content-Type: application/json" \
-d @mydata.json)
echo ${response}
I want to populate the data using this JSON file but am unable to make it work. Please help me out.
If you have any questions, please feel free to ask.
Thanks in advance :)

Given the assumption that you have a colon as part of MY_PATH, you mount the file into /config/, but you are using a relative path in your script.
If your workdir is not /config, then your script will not find the file. To be sure you could use an absolute path.
-d '@/config/mydata.json'
In general, it is useful in these kinds of scenarios to do some debugging by entering the container interactively to poke around in the file system. Or by adding some print statements to your script to ensure your assumptions are correct.
Also note that your script is missing a shebang. The default interpreter is actually sh, so it works out, but it is good practice to add one anyway.
You should probably also double-quote the variable response when using it, to prevent word splitting and globbing.
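Putting those suggestions together, a corrected dataloader.sh might look like this (a sketch, assuming the file ends up at /config/mydata.json inside the container):
#!/bin/sh
# @ tells curl to read the POST body from a file; the absolute path avoids workdir surprises
response=$(curl -X POST "http://myblogcontainer:8080/blog/create" \
  -H "accept: */*" -H "Content-Type: application/json" \
  -d @/config/mydata.json)
echo "${response}"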

Related

Building image with Dockerfile is not using existing layers?

I'm quite new to Dockerfiles and not sure what's going on here. I'm using Podman and building a new image with a Dockerfile to set some additional permissions, using a base image from Docker Hub. According to the layers (#8) in this base image, it seems that the variable SDKMAN_DIR is being set to /home/javaUser/.sdkman.
/bin/sh -c export SDKMAN_DIR="/home/javaUser/.sdkman" && curl -s "https://get.sdkman.io" | bash
However, when I build a new image based on the newly created image with the Dockerfile, start a new container, exec into the container and execute the command echo $SDKMAN_DIR, the result is empty. I was expecting my variant of the image to have the same variables set.
Can anyone tell me what I'm missing here?
bash-5.1$ echo $SDKMAN_DIR
Below you'll find the Dockerfile I'm using.
FROM docker.io/itext/dito-sdk:2.4.5
# Make available to rootless users
USER root
RUN chmod g+x /home/javaUser/.sdkman/bin/sdkman-init.sh
# Become rootless user
USER 2000
EXPOSE 8080
ENTRYPOINT /bin/bash -c "source /home/javaUser/.sdkman/bin/sdkman-init.sh && kotlin /opt/dito/startup-prepare.main.kts && java -jar /opt/dito/dito-sdk-docker.jar server /etc/opt/dito/config.yml"
Thanks in advance.
Edit
Somehow I got it to work by passing in an ENV. Still, I'm not exactly sure why the variable is not available without explicitly defining an ENV variable. Does this have to do with not being the root but the 2000 user after an exec into the container?
Edit
The variable is not available at runtime because the variable is being declared in the RUN step, and is therefore limited to that scope. Thanks to @DavidMaze for clarifying.
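For reference, persisting the variable into the image would look something like this (a sketch; the path is taken from the base image's layer quoted above):
FROM docker.io/itext/dito-sdk:2.4.5
# ENV is stored in the image config, so it survives into `docker exec` shells,
# unlike an `export` inside a RUN step, which dies with that build shell
ENV SDKMAN_DIR=/home/javaUser/.sdkman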

Dockerfile: `curl` command unable to save file

I'm building what I think is a simple dockerfile and have got one line in the code that is throwing an error.
# dockerfile
...
RUN curl -k --output bin/theta `curl -k 'https://mainnet-data.thetatoken.org/binary?os=linux&name=theta'`
RUN curl -k --output bin/thetacli `curl -k 'https://mainnet-data.thetatoken.org/binary?os=linux&name=thetacli'`
RUN curl -k --output guardian_mainnet/node/config.yaml `curl -k 'https://mainnet-data.thetatoken.org/config?is_guardian=true'`
...
The first two curl commands run without any issue. The third curl command throws an error:
Warning: Failed to create the file guardian_mainnet/node/config.yaml: No such file or directory
The directory is created and does exist. The prior two curl commands use exactly the same format and result in both the theta and thetacli files being created.
I've actually set up a docker container with the base image predicated on the FROM for this dockerfile. From there I've run the code in the dockerfile line by line and it has executed without any problem (including the third line). In other words, if I manually run the dockerfile commands from the CLI for the base container it works; it's only when building the container from the dockerfile at the host level that the error is thrown.
The only differences are (i) the file type of .yaml and (ii) the ? in the https link. But I've found nothing that says that would be a problem. [I've tried saving without the extension and it didn't make a difference.]
What am I missing?
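One way to verify that assumption at build time (a debugging sketch, not a fix) is to list the directory in an extra RUN step right before the failing command; if the listing fails during the build, the directory doesn't exist in the image at that point:
# hypothetical debugging step, placed just before the failing curl
RUN pwd && ls -ld guardian_mainnet/node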

Pass ENV in docker run command

Is there a way we can pass a variable into an entrypoint.sh file? Let's say in this example I want to pass a list of animals using ENV animals="turtle, monkey, goose".
But I want to be able to pass different animals when running the container, for example docker run -t image animals="mouse,rat,kangaroo".
How do you go about passing arguments when running the docker run command?
The goal is to take that variable when using the docker run command and insert it into that entrypoint.sh file.
Right now I hard-code that in my Dockerfile, but I want to be able to do this when running the docker run command so I don't always have to change the Dockerfile.
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]
It looks like you might be confusing the image build with the container run. If the difference between the two isn't immediately clear, I'd recommend reviewing some other questions and docs like:
In Docker, what's the difference between a container and an image?
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
With the above, the variables will be expanded during the image build. The entrypoint.sh will not contain ${FILENAME} ${animals}. Instead, it will contain
file_to_run.zip turtle, monkey, goose
After the build, the docker run command will create a container from that image and run the above script with the environment variables defined but never used, since the script already has the variables expanded. To prevent the expansion during the build, you need to escape the $ or use single quotes, e.g.
RUN echo "\${FILENAME} \${animals}" > ./entrypoint.sh
or
RUN echo '${FILENAME} ${animals}' > ./entrypoint.sh
I would also recommend being explicit with a #!/bin/ash at the top of this script. Then when you run the script, do not override the command with parameters after the image name. Instead set the environment variables with the appropriate flag to run:
docker run -it -e animals="mouse,rat,kangaroo" image
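Putting those pieces together, a sketch of the corrected Dockerfile lines (note that the original question mixes FILE_NAME and FILENAME; FILE_NAME is used consistently here):
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
# single quotes keep the variables unexpanded until the script actually runs
RUN echo '#!/bin/ash' > ./entrypoint.sh && \
    echo 'echo "${FILE_NAME} ${animals}"' >> ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]
Then docker run -it -e animals="mouse,rat,kangaroo" image prints the overridden list.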
Simplest way, forward individual variables:
docker run ... --env animals="turtle, monkey, goose" --env FILE_NAME="file_to_run.zip"
Forward several variables using file:
Or if you need to grab all your environment variables from outside, you can do something like this first:
printenv | grep -E 'animals|FILE_NAME' > my-env
The grep is because Docker doesn't like some variables, e.g. with spaces in them, which you might possibly have in your real environment.
Then use that file in your Docker command:
docker run ... --env-file ./my-env
The latter is also useful if you want to avoid sending environment variables to logs (like for sensitive variables). I use this approach in a CI/CD pipeline that runs some scripts.
Using variables inside Docker:
With either approach, the environment variables become available to scripts running inside the container.
@BMitch's answer has more complete details about how to achieve this in your case, where you have related logic in both build and execution.
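A tiny sketch of such a script, reading the variable at run time (a hypothetical entrypoint.sh):
#!/bin/ash
# expanded when the container starts, so -e/--env-file overrides take effect
echo "animals passed in: ${animals}"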

Docker error: invalid reference format: repository name must be lowercase

Ran into this Docker error with one of my projects:
invalid reference format: repository name must be lowercase
What are the various causes for this generic message?
I already figured it out after some effort, so I'm going to answer my own question in order to document it here as the solution doesn't come up right away when doing a web search and also because this error message doesn't describe the direct problem Docker encounters.
A "reference" in docker is a pointer to an image. It may be an image name, an image ID, include a registry server in the name, use a sha256 tag to pin the image, and anything else that can be used to point to the image you want to run.
The invalid reference format error message means docker cannot convert the string you've provided to an image. This may be an invalid name, or it may be from a parsing error earlier in the docker run command line if that's how you run the image.
If the name itself is invalid, repository name must be lowercase means you have used uppercase characters in your registry or repository name, e.g. YourImageName:latest should be yourimagename:latest.
With the docker run command line, this is often the result of not quoting parameters with spaces, missing the value for an argument, or mistaking the order of the command line. The command line is ordered as:
docker ${args_to_docker} run ${args_to_run} image_ref ${cmd_to_exec}
The most common error in passing args to the run is a volume mapping expanding a path name that includes a space in it, and not quoting the path or escaping the space. E.g.
docker run -v $(pwd):/data image_ref
Where if you're in the directory /home/user/Some Project Dir, that would define an anonymous volume /home/user/Some in your container, and try to run Project:latest with the command Dir:/data image_ref. And the fix is to quote the argument:
docker run -v "$(pwd):/data" image_ref
Other common places to miss quoting include environment variables:
docker run -e SOME_VAR=Value With Spaces image_ref
which docker would interpret as trying to run the image With:latest and the command Spaces image_ref. Again, the fix is to quote the environment parameter:
docker run -e "SOME_VAR=Value With Spaces" image_ref
With a compose file, if you expand a variable in the image name, that variable may not be expanding correctly. So if you have:
version: '2'
services:
  app:
    image: ${your_image_name}
Then double-check that your_image_name is defined as an all-lowercase string.
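If in doubt, one way to check is to have Compose print the resolved file, which expands the variables for you:
docker-compose config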
In my case it was a missing -e before the parameters for the mysql docker image:
docker run --name mysql-standalone -e MYSQL_ROOT_PASSWORD=hello -e MYSQL_DATABASE=hello -e MYSQL_USER=hello -e MYSQL_PASSWORD=hello -d mysql:5.6
Also check whether any whitespace is missing.
Let me emphasise that Docker doesn't even allow mixed-case names.
Good:
docker build -t myfirstechoimage:0.1 .
Bad:
docker build -t myFirstEchoImage:0.1 .
I had a space in the current working directory and was using $(pwd) to map volumes. Docker doesn't like spaces in directory names.
In my case, the image name defined in docker-compose.yml contained uppercase letters. The fact that the error message mentioned repository instead of image did not help describe the problem and it took a while to figure out.
In my case the problem was the arrangement of parameters. Initially I had the --name parameter after the environment parameters, then the volume and attach_dbs parameters, and the image at the end of the command, like below.
docker run -p 1433:1433 -e sa_password=myComplexPwd -e ACCEPT_EULA=Y --name sql1 -v c:/temp/:c:/temp/ attach_dbs="[{'dbName':'TestDb','dbFiles':['c:\\temp\\TestDb.mdf','c:\\temp\\TestDb_log.ldf']}]" -d microsoft/mssql-server-windows-express
After rearranging the parameters as below, everything worked fine (basically putting the --name parameter directly before the image name).
docker run -d -p 1433:1433 -e sa_password=myComplexPwd -e ACCEPT_EULA=Y --name sql1 microsoft/mssql-server-windows-express -v C:/temp/:C:/temp/ attach_dbs="[{'dbName':'TestDb','dbFiles':['C:\\temp\\TestDb.mdf','C:\\temp\\TestDb_log.ldf']}]"
On macOS, when you are working on an iCloud drive, your $PWD will contain the directory "Mobile Documents". Docker does not seem to like the space!
As a workaround, I copied my project to a local drive where there is no space in the path to my project folder.
I do not see a way to get around changing the default iCloud path, which is ~/Library/Mobile Documents/com~apple~CloudDocs.
The space in "Mobile Documents" seems to be what docker run does not like.
If you encounter this problem in go-swagger (Windows):
@echo off
echo.
docker run --rm -it --env GOPATH=/go -v %CD%:/go/src -w /go/src quay.io/goswagger/swagger %*
Use this instead (add quotes):
@echo off
echo.
docker run --rm -it --env GOPATH=/go -v "%CD%:/go/src" -w /go/src quay.io/goswagger/swagger %*
A reference in Docker is what points to an image. This could be in a remote registry or the local registry. Let me describe the error message first and then show the solutions for this.
invalid reference format
This means the reference (pointer) we have used to identify an image is not in a valid format. Generally, the message is followed by a description, as below, which makes the error much clearer.
invalid reference format: repository name must be lowercase
This means the reference we are using must not have uppercase letters. Try running docker run Ubuntu (wrong) vs docker run ubuntu (correct); Docker does not allow any uppercase characters in an image reference. Simple troubleshooting steps:
1) The Dockerfile contains capital letters in the image name.
FROM Ubuntu (wrong)
FROM ubuntu (correct)
2) The image name defined in docker-compose.yml has uppercase letters.
3) If you are using Jenkins or GoCD to deploy your docker container, check the run command to see whether the image name includes a capital letter.
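If the capitalization comes from a variable rather than a literal (a Jenkins job name, for example), one workaround is to lowercase it in the shell before building; a sketch using Jenkins' JOB_NAME variable:
docker build -t "$(echo "$JOB_NAME" | tr '[:upper:]' '[:lower:]')" .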
Sometimes you miss an -e flag while specifying multiple env vars inline,
e.g.
bad: docker run --name somecontainername -e ENV_VAR1=somevalue1 ENV_VAR2=somevalue2 -d -v "mypath:containerpath" <imagename e.g. postgres>
good: docker run --name somecontainername -e ENV_VAR1=somevalue1 -e ENV_VAR2=somevalue2 -d -v "mypath:containerpath" <imagename e.g. postgres>
In my case I had a naked --env switch, i.e. one without an actual variable name or value, e.g.:
docker run \
--env \ <----- This was the offending item
--rm \
--volume "/home/shared:/shared" "$(docker build . -q)"
Replacing image: ${DOCKER_REGISTRY}notificationsapi
with image: notificationsapi
or image: ${docker_registry}notificationsapi
in docker-compose.yml solved the issue.
file with error
version: '3.4'
services:
  notifications.api:
    image: ${DOCKER_REGISTRY}notificationsapi
    build:
      context: .
      dockerfile: ../Notifications.Api/Dockerfile
file without error
version: '3.4'
services:
  notifications.api:
    image: ${docker_registry}notificationsapi
    build:
      context: .
      dockerfile: ../Notifications.Api/Dockerfile
So I think the error was due to the non-lowercase letters it contained.
For me, the issue was a space in the volume mapping that was not escaped. The Jenkins job running the docker run command had a space in it, and as a result the Docker engine was not able to understand the docker run command.
Indeed, the docker registry as of today (sha 2e2f252f3c88679f1207d87d57c07af6819a1a17e22573bcef32804122d2f305) does not handle paths containing upper-case characters. This is obviously a poor design choice, probably due to wanting to maintain compatibility with certain operating systems that do not distinguish case at the file level (i.e., Windows).
If one authenticates for a scope and tries to fetch a non-existing repository with all lowercase, the output is
(auth step not shown)
curl -s -H "Authorization: Bearer $TOKEN" -X GET https://$LOCALREGISTRY/v2/test/someproject/tags/list
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"repository","Class":"","Name":"test/someproject","Action":"pull"}]}]}
However, if one tries to do this with an uppercase component, only 404 is returned:
(authorization step done but not shown here)
$ curl -s -H "Authorization: Bearer $TOKEN" -X GET https://docker.uibk.ac.at:443/v2/test/Someproject/tags/list
404 page not found
I solved this by changing some uppercase words in my Dockerfile, like:
FROM Base as Build
RUN npm run Build:prod
to
FROM base as build
RUN npm run build:prod
Another place:
FROM Base as Release
COPY --from=Build /usr/path/here/dist/ ./dist
to
FROM base as Release
COPY --from=build /usr/path/here/dist/ ./dist
I've encountered the same issue while using docker with mlflow.
In my case, the directory containing my Dockerfile was named "My Project", which I changed to myproject or my_project, and it worked for me.
Also, follow the same naming format for all the root/super directories under which the Dockerfile resides.
Not only for docker: it's also good practice (especially on Unix-based OSes) to avoid the following when naming a directory:
white spaces
camel-case
upper-case
I had the same error, and for some reason it appears to have been caused by uppercase letters in the Jenkins job that ran the docker run command.
This was happening because of the spaces in the current working directory that came from $(pwd) when mapping volumes. So I used docker-compose instead.
The docker-compose.yml file.
version: '3'
services:
  react-app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
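With this file in place, the container can be started without passing $(pwd) on the command line at all; Compose resolves the relative paths itself:
docker-compose up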
"docker build -f Dockerfile -t SpringBoot-Docker ."
As in the above commend, we are creating an image file for docker container. commend says create image use file(-f refer to docker file) and -t for the target of the image file we are going to push to docker. the "." represents the current directory
solution for the above problem: provide target image name in lowercase
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
example:
FROM python:3.7-alpine
The 'python' should be in lowercase
In my case I was trying to run postgres through docker. Initially I was running it as:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test_password POSTGRES_USER=test_user POSTGRES_DB=test_db --rm -v ~/docker/volumes/postgres:/var/lib/postgresql/data --name pg-docker postgres
I was missing the -e before each environment variable. Changing the above command to the one below worked:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test_password -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --rm -v ~/docker/volumes/postgres:/var/lib/postgresql/data --name pg-docker postgres
I wish the error message would output the problem string. I was getting this due to a weird copy and paste problem of a "docker run" command. A space-like character was being used before the repo and image name.
Most of the answers above did not work for my case, so I will document this in case somebody finds it helpful. In my case the first line in the Dockerfile, FROM NODE:10, was the problem: the word node should not be uppercase, i.e. it should be FROM node:10. I made that change and it worked.
In my case the Dockerfile contained the image name in mixed case instead of lower case.
The earlier line in my Dockerfile was
FROM CentOs
and when I changed it to FROM centos, it worked smoothly.
You need to enter the name of the Docker image and not your file name :P
$ docker run {your image}
Another possible cause of this error is that in your Dockerfile you have mixed capitalization in the syntax declaration itself. For example:
# syntax=docker/Dockerfile:1
instead of
# syntax=docker/dockerfile:1
If you come here after encountering this error in your GitHub Actions workflows…
Make sure to use docker/metadata-action action to handle repository naming for you. Just call it before docker/build-push-action:
# Add this
- id: docker-metadata
  uses: docker/metadata-action@v4
  with:
    images: ghcr.io/${{ github.repository }}
# Use the extracted metadata
- uses: docker/build-push-action@v3
  with:
    tags: ${{ steps.docker-metadata.outputs.tags }}
    labels: ${{ steps.docker-metadata.outputs.labels }}
    # … other properties …

Docker echo environment variable

I'm trying to write a little Dockerfile that sets a user and just echoes the current user, as a little example to prove to myself it is working. I've tried a number of variants and couldn't find much help in the documentation.
FROM ubuntu
USER daemon
# ENTRYPOINT ["echo", "$USER"]
# just gives "$USER"
# ENTRYPOINT ["echo", "-e", "${USER}"]
# just gives "$USER"
# ENTRYPOINT echo $USER
# gives empty string
# ENTRYPOINT ["/bin/echo", "$USER"]
# just gives "$USER"
I'm running docker build . on the Dockerfile and then running docker run <image-id>, getting the results noted in the comments above.
Expected result is daemon, or without the USER daemon line, I expect root. Probably a really simple answer.
This is the expected behavior, as weird as it seems!
When ENTRYPOINT is a list (as in ENTRYPOINT ["echo", "$USER"]), it is used as-is, without further parsing or interpretation. So $USER remains $USER, because there is no shell involved in the process to replace it with the value of the USER environment variable.
Now, when ENTRYPOINT is a string (as in ENTRYPOINT echo $USER), what is actually executed is sh -c "echo $USER", and $USER is replaced with the value of the environment variable (as you would expect).
However, the environment variable USER is not set by default. It is set by the login process; and when you just run sh -c ... the login process is not involved.
Compare the environment when running docker run -t -i ubuntu bash and docker run -t -i ubuntu login -f root. In the former case, you will get a very basic environment; in the latter case, you will get the complete environment that you are used to (including the USER variable).
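To see the difference concretely (a sketch of that comparison):
# minimal environment: no login process involved, so USER is unset
docker run -t -i ubuntu env
# full login environment: run `env` inside the resulting shell and USER is set
docker run -t -i ubuntu login -f root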
Couldn't you set a default value in the Dockerfile with the ENV instruction, and then, when running a container, use the -e/--env flag to override what would be interpreted by the
ENTRYPOINT echo $SOMEENVVAR
form of ENTRYPOINT?
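That suggestion would look something like this (a sketch; SOMEENVVAR is a made-up name):
FROM ubuntu
# default value baked into the image metadata
ENV SOMEENVVAR=hello
# shell form: wrapped in sh -c, so the variable is expanded at run time
ENTRYPOINT echo $SOMEENVVAR
Then docker run <image-id> prints hello, while docker run -e SOMEENVVAR=goodbye <image-id> prints goodbye.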
I think there's a series of issues here.
When I run
docker run -i -t ubuntu /bin/bash
and then, inside the container, run
echo $USER
set
I don't see $USER set at all; whoami does report daemon though.
Additionally, I have the suspicion (but have not looked at the code yet) that ENV vars in the Dockerfile are escaped to avoid their use (many people assume that they can export host variables to the built container, but this is something that the docker guys would like to avoid).
