How to add a dynamic file to a Docker container

Whenever I run a Docker container, I want to pass a dynamic filename in as an environment variable. The variable is accessible inside the container (echoing it prints its value), but the ADD command does not add the file.
Dockerfile:
ADD $filename ./
# ls inside the container does not show the file
Run command:
docker run -e filename='/path/to/file.extension'

Try using a volume instead:
$ echo "hello world" > somefile.txt
$ docker run -it --rm -v $PWD/somefile.txt:/data/somefile.txt alpine cat /data/somefile.txt
hello world
The Dockerfile lists the actions that occur when you run a "docker build". It's not possible to pass in an environment variable at run-time, because, at that point, the image is already built :-)
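If the file genuinely has to be available at build time rather than run time, a build argument is the usual mechanism. A minimal sketch, assuming the file lives inside the build context (the image name myimage is only a placeholder):
ARG filename
ADD ${filename} ./
Built with:
docker build --build-arg filename=file.extension -t myimage .
Note that ADD and COPY sources must sit inside the build context, so a host path like /path/to/file.extension outside that directory still can't be pulled in this way.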

ADD runs at build time, when the image is created. The -e flag on docker run applies later, after the image has already been built, so you cannot add files dynamically that way. The volume approach above is the right one: you can provide those files ad-hoc at run time and have your application pick them up.

To add to Mark's answer: if you want to use a docker-compose.yml file (a good idea if you're planning on running the container over and over), you can declare the mount there.
mysql:
  image: mysql
  volumes:
    - /someLocalFolder/lib/mysql/:/var/lib/mysql
You can add as many volumes as you like this way, including individual files, which can be handy for configuration and the like.
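For instance, a single config file can be mounted alongside the data directory; the my.cnf paths below are only illustrative:
mysql:
  image: mysql
  volumes:
    - /someLocalFolder/lib/mysql/:/var/lib/mysql
    - ./config/my.cnf:/etc/mysql/conf.d/my.cnf:ro
The :ro suffix makes the mount read-only, which is usually what you want for config files.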

Related

Building a binary inside docker and mount back to host

I have a requirement where I need to build an executable binary, but inside a Docker container, because of the difficulty of building the binary in different environments. I have a sample docker-compose of what I want and am trying to convert it to a Dockerfile. The docker-compose is as below.
version: "3.7"
services:
wasm_compile_update:
image: envoyproxy/envoy-build-ubuntu:e33c93e6d79804bf95ff80426d10bdcc9096c785
command: |
bash -c "bazel build //examples/wasm-cc:envoy_filter_http_wasm_updated_example.wasm \
&& cp -a bazel-bin/examples/wasm-cc/* /build"
working_dir: /source
volumes:
- ../..:/source
- ./lib:/build
What would be the equivalent Dockerfile for this? I was trying to use CMD but couldn't make it work. Any help would be appreciated since I'm on a tight deadline. Thanks.
You can create a Dockerfile that has the right tools in it to build your binary, but you'll still have to use docker run to do the build itself, because you can't mount volumes during the build process, nor can you copy things out of the image during the build. However, you can do this:
A Dockerfile:
FROM envoyproxy/envoy-build-ubuntu:e33c93e6d79804bf95ff80426d10bdcc9096c785
WORKDIR /examples
ENTRYPOINT ["bazel", "build"]
Build it like this:
docker build -t mybuildkit .   # image names must be lowercase
And run it like this:
docker run -it --rm \
  -v $(pwd)/examples:/examples \
  -v $(pwd)/bin:/bazel-bin/examples/wasm-cc \
  mybuildkit /examples/wasm-cc:envoy_filter_http_wasm_updated_example.wasm
Now, I don't know enough about the directories here to work out if that's exactly right, but the gist is there.
The first volume mount (-v) is there to mount your source code (which I'm assuming is examples) into a folder in the container (which I've also called examples). The second mount exposes the output: I've mounted a host folder called bin onto /bazel-bin/examples/wasm-cc in the container, on the assumption that this is where the copy command in your compose file found the binary.
Another assumption I've made is around the command sent to the container. I've set the entrypoint to what is presumably your compiler (bazel build), and to that I've passed what is presumably the name of the thing to build (/examples/wasm-cc:envoy_filter_http_wasm_updated_example.wasm).
Because I don't know bazel at all, it is entirely possible that I've got one or more of these details wrong, but the general pattern stands: mount your source and your bin, pass the target of the build to the entrypoint, and build into the bin.

docker-compose and listing volume contents

Maybe I'm just not understanding correctly but I'm trying to visually verify that I have used volumes properly.
In my docker-compose I'd have something like
some-project:
  volumes:
    - /some-local-path/some-folder:/v-test
I can verify its contents via ls -la /some-local-path/some-folder.
In some-project's Dockerfile I'd have something like
RUN ls -la /v-test
which returns 'No such file or directory'.
Is this the correct way to use it? If so, why can't I view the contents from inside the container?
Everything in the Dockerfile runs before anything outside the build: block in the docker-compose.yml file is considered. The image build doesn't see volumes or environment variables that get declared only in docker-compose.yml, and it can't access other services.
In your example, first the Dockerfile tries to ls the directory, then Compose will start the container with the bind mount.
If you're just doing this for verification, you can docker-compose run a container with most of its settings from the docker-compose.yml file, but an alternate command:
docker-compose run some-project \
ls -la /v-test
(Doing this requires that the image's CMD is a well-formed shell command; either it has no ENTRYPOINT, or the ENTRYPOINT is a wrapper script that ends in exec "$@" to run the CMD. If you only have an ENTRYPOINT, change it to CMD; if you've split the command across both directives, consolidate it into a single CMD line.)
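For reference, a minimal sketch of that wrapper-script pattern; the file name entrypoint.sh and the default command are just placeholders:
#!/bin/sh
# entrypoint.sh: do any one-time setup here, then hand control to whatever command was given
set -e
exec "$@"
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["some-default-command"]
With this layout, docker-compose run some-project ls -la /v-test overrides only the CMD, and the wrapper still runs first.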

Is it possible to run a script or executable in a Docker container with docker-compose.yml only, without a Dockerfile

Scripts or executables can run in a Docker container automatically when running docker-compose up --build with a Dockerfile that uses RUN, which executes scripts, executables, etc. during the build.
Question: but is it possible to achieve the same goal, i.e. run executables or scripts, with docker-compose only, without a Dockerfile? Is there a command in docker-compose.yml similar to RUN in a Dockerfile?
What you can do in docker-compose is override the default command that is executed when the container starts, by setting "command".
See here: https://docs.docker.com/compose/compose-file/#command
I don't think there is a RUN-like thing for docker-compose.yml.
If I understand your problem, then the answer is yes: in your container's configuration in your docker-compose.yml file, use:
entrypoint: ["/bin/sh","-c"]
command:
- |
ls -la
echo 'hello'
or whatever commands you want to run.
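Put together, a minimal docker-compose.yml using this trick might look like the following sketch; the service name and image are placeholders:
version: "3"
services:
  app:
    image: alpine:3
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        ls -la
        echo 'hello'
docker-compose up app then runs the whole block as a single shell script inside the container.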

How to set image name in Dockerfile?

You can set the image name when building a custom image, like this:
docker build -t dude/man:v2 . # Will be named dude/man:v2
Is there a way to define the name of the image in Dockerfile, so I don't have to mention it in the docker build command?
Using -t on invocation
How to build an image with custom name without using yml file:
docker build -t image_name .
How to run a container with custom name:
docker run -d --name container_name image_name
Workaround using docker-compose
Tagging of the image isn't supported inside the Dockerfile. This needs to be done in your build command. As a workaround, you can do the build with a docker-compose.yml that identifies the target image name and then run a docker-compose build. A sample docker-compose.yml would look like
version: '2'
services:
  man:
    build: .
    image: dude/man:v2
That said, there's a push against doing the build with compose since that doesn't work with swarm mode deploys. So you're back to running the command as you've given in your question:
docker build -t dude/man:v2 .
Personally, I tend to build with a small shell script in my folder (build.sh) which passes any args and includes the name of the image there to save typing. And for production, the build is handled by a ci/cd server that has the image name inside the pipeline script.
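For what it's worth, such a build.sh can be as small as the sketch below (the tag is the one from the question; adjust to taste):
#!/bin/sh
# build.sh: build the image under a fixed name, forwarding any extra args to docker build
set -e
docker build -t dude/man:v2 "$@" .
Then ./build.sh or ./build.sh --no-cache does the right thing without retyping the tag.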
Workaround using docker-compose
Here is another version if you have to reference a specific Dockerfile:
version: "3"
services:
nginx:
container_name: nginx
build:
context: ../..
dockerfile: ./docker/nginx/Dockerfile
image: my_nginx:latest
Then you just run
docker-compose build
My Dockerfile-only solution is adding a shebang line:
#!/usr/bin/env -S docker build . --tag=dude/man:v2 --network=host --file
FROM ubuntu:22.04
# ...
Then chmod +x Dockerfile and ./Dockerfile is good to go.
I can even add more docker build command-line arguments this way, like specifying the host network.
NOTE: env with -S/--split-string support is only available for newer coreutils versions.
With a specific Dockerfile you could try:
docker build --tag <Docker Image name> --file <specific Dockerfile> .
for example
docker build --tag second --file Dockerfile_Second .
Workaround using Docker (and a Makefile)
Generally in Docker you can't say what you want the image to be tagged as in the Dockerfile. So what you do is
Create a Dockerfile
Create a Makefile
.PHONY: all
all:
	docker build -t image_name .
Use make instead of invoking docker build directly
Or, use buildah
But here is a better idea... Don't build images with Docker! Instead build them with buildah, the build tool from the Podman crew, which uses shell (or any language), allows building in the cloud easily (without needing a separate project like kaniko), and allows rootless building of images! At the end of the build script, just save the image with buildah commit. Here is what it looks like.
#!/bin/sh
# Create a new offline container from the `alpine:3` image, return the id.
ctr=$(buildah from "alpine:3")
# Create a new mount, return the path on the host.
mnt=$(buildah mount "$ctr")
# Copy files to the mount
cp -Rv files/* "$mnt/"
# Do some things or whatever
buildah config --author "Evan Carroll" --env "FOO=bar" -- "$ctr"
# Run a script inside the container
buildah run "$ctr" -- /bin/sh <<EOF
echo "This is just a regular shell script"
echo "Do all the things."
EOF
# Another one, same layer though
buildah run "$ctr" -- /bin/sh <<EOF
echo "Another one!"
echo "No excess layers created as with RUN."
EOF
# Commit this container as "myImageName"
buildah commit -- "$ctr" "myImageName"
Now you don't have to hack around with a Makefile. You have one shell script that does everything, and is far more powerful than a Dockerfile.
Side note: buildah can also build from Dockerfiles (using buildah bud), but the shortcoming here is with the Dockerfile format itself, so that won't help.

docker: executable file not found in $PATH

I have a docker image which installs grunt, but when I try to run it, I get an error:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
If I run bash in interactive mode, grunt is available.
What am I doing wrong?
Here is my Dockerfile:
# https://registry.hub.docker.com/u/dockerfile/nodejs/ (builds on ubuntu:14.04)
FROM dockerfile/nodejs
MAINTAINER My Name, me@email.com
ENV HOME /home/web
WORKDIR /home/web/site
RUN useradd web -d /home/web -s /bin/bash -m
RUN npm install -g grunt-cli
RUN npm install -g bower
RUN chown -R web:web /home/web
USER web
RUN git clone https://github.com/repo/site /home/web/site
RUN npm install
RUN bower install --config.interactive=false --allow-root
ENV NODE_ENV development
# Port 9000 for server
# Port 35729 for livereload
EXPOSE 9000 35729
CMD ["grunt"]
This was the first result on Google when I pasted my error message, and it was because my arguments were out of order.
The image name has to come after all of the options.
Bad:
docker run <image_name> -v $(pwd):/src -it
Good:
docker run -v $(pwd):/src -it <image_name>
When you use the exec form for a command (e.g. CMD ["grunt"], a JSON array with double quotes), it is executed without a shell, so no shell processing happens and environment variables are not expanded.
If you specify your command as a regular string (e.g. CMD grunt), the string after CMD is executed with /bin/sh -c.
More info on this is available in the CMD section of the Dockerfile reference.
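To make the difference concrete, here is a small comparison; grunt serve is just the example command from the question:
# Exec form: runs the binary directly with no shell, so something like $HOME is passed through literally
CMD ["grunt", "serve"]
# Shell form: wrapped in /bin/sh -c, so variable expansion and other shell processing happen
CMD grunt serve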
I ran into the same problem. I had run:
docker run -ti devops -v /tmp:/tmp /bin/bash
When I changed it to
docker run -ti -v /tmp:/tmp devops /bin/bash
it worked fine.
For some reason, I get that error unless I add the "bash" clarifier. Even adding "#!/bin/bash" to the top of my entrypoint file didn't help.
ENTRYPOINT [ "bash", "entrypoint.sh" ]
There are several possible reasons for an error like this.
In my case, it was due to the executable file (docker-entrypoint.sh from the Ghost blog Dockerfile) lacking the executable file mode after I'd downloaded it.
Solution: chmod +x docker-entrypoint.sh
I had the same problem. After lots of googling, I couldn't find out how to fix it.
Suddenly I noticed my stupid mistake :)
As mentioned in the docs, the last part of docker run is the command you want to run and its arguments after loading up the container.
NOT THE CONTAINER NAME !!!
That was my embarrassing mistake.
A Docker container might be built without a shell (e.g. https://github.com/fluent/fluent-bit-docker-image/issues/19).
In this case, you can copy-in a statically compiled shell and execute it, e.g.
docker create --name temp-busybox busybox:1.31.0
docker cp temp-busybox:/bin/busybox busybox
docker cp busybox mycontainerid:/busybox
docker exec -it mycontainerid /busybox sh
In the error message shown:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
It is complaining that it cannot find the executable grunt serve, not that it could not find the executable grunt with the argument serve. The most likely explanation for that specific error is running the command with the json syntax:
[ "grunt serve" ]
in something like your compose file. That's invalid since the json syntax requires you to split up each parameter that would normally be split by the shell on each space for you. E.g.:
[ "grunt", "serve" ]
The other possible way you can get both of those into a single parameter is if you were to quote them into a single arg in your docker run command, e.g.
docker run your_image_name "grunt serve"
and in that case, you need to remove the quotes so it gets passed as separate args to the run command:
docker run your_image_name grunt serve
For others seeing this, the "executable file not found" message means that Linux does not see the binary you are trying to run inside your container with the default $PATH value. That can have lots of possible causes; here are a few:
Did you remember to include the binary inside your image? If you run a multi-stage image, make sure that binary install is run in the final stage. Run your image with an interactive shell and verify it exists:
docker run -it --rm your_image_name /bin/sh
Your path when shelling into the container may be modified for the interactive shell, particularly if you use bash, so you may need to specify the full path to the binary inside the container, or you may need to update the path in your Dockerfile with:
ENV PATH=$PATH:/custom/dir/bin
The binary may not have execute bits set on it, so you may need to make it executable. Do that with chmod:
RUN chmod 755 /custom/dir/bin/executable
The binary may include dynamically linked libraries that do not exist inside the image. You can use ldd to see the list of dynamically linked libraries. A common reason for this is compiling with glibc (most Linux environments) and running with musl (provided by Alpine):
ldd /path/to/executable
If you run the image with a volume, that volume can overlay the directory where the executable exists in your image. Volumes do not merge with the image; they get mounted in the filesystem tree the same as any other Linux filesystem mount, which means files from the parent filesystem at the mount point are no longer visible. (Note that named volumes are initialized by docker from the image content, but this only happens when the named volume is empty.) So the fix is to not mount volumes on top of paths where you have executables you want to run from the image; see the short demonstration after this list.
If you run a binary for a different platform, and haven't configured binfmt_misc with the --fix-binary option, qemu will be looking for the interpreter inside the container filesystem namespace instead of the host filesystem. See this Ubuntu bug report for more details on this issue.
If the error is from a shell script, the issue is often with the first line of that script (e.g. the #!/bin/bash). Either the command doesn't exist inside the image for a reason above, or the file is not saved as ascii or utf8 with Linux linefeeds. You can attempt dos2unix to fix the linefeeds, or check your git and editor settings.
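As promised above, a short demonstration of a bind mount hiding the image's own files; your_image_name and /app are placeholders, and the image is assumed to ship files in /app:
docker run --rm your_image_name ls /app
# shows the files baked into the image
mkdir empty
docker run --rm -v "$PWD/empty:/app" your_image_name ls /app
# now shows only the (empty) host directory, not the image's files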
In my case I had ordered the parameters wrong: move all switches before the image name.
I got this error message when I was building an Alpine-based image:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown
In my docker-compose file, the command directive executed the command using bash, and bash does not come with the Alpine base image.
command: bash -c "python manage.py runserver 0.0.0.0:8000"
Then I realized it and executed the command using sh (the shell) instead.
It worked for me.
For me the problem was glibc, which is not part of the Alpine base image.
After adding it, everything worked :)
Here are the steps to get glibc:
apk --no-cache add ca-certificates wget
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk
apk add glibc-2.28-r0.apk
Referring to the title:
My mistake was passing variables via --env-file during docker run. Among others, the file contained a PATH extension, PATH=$PATH:something, which caused the PATH variable to literally become $PATH:something (no variable resolution was performed) instead of /usr/bin:...:something.
I couldn't make the resolution work through --env-file, so the only way I see this working is by using ENV in the Dockerfile.
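A sketch of that Dockerfile approach, with a placeholder directory:
ENV PATH="${PATH}:/opt/something/bin"
ENV is expanded at build time, so the resulting literal value is baked into the image, whereas entries in an --env-file are passed to the container verbatim, with no shell to expand them.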
I ran into this issue using docker-compose. None of the solutions here or on a related question resolved my issue. Ultimately what worked for me was clearing all cached docker artifacts with docker system prune -a and restarting docker.
To make it work, add soft links in /usr/bin:
ln -s $(which node) /usr/bin/node
ln -s $(which npm) /usr/bin/npm
