Image's rootfs is incomplete while building from Dockerfile - docker

I'm building an image for Jetson from a Dockerfile. Here's an excerpt from it:
FROM nvcr.io/nvidia/l4t-pytorch:r32.4.4-pth1.6-py3
# some installation
RUN ls -l /usr/local/cuda-10.2/targets/aarch64-linux/lib/
# more installation
The ls command returns just a couple of files. However, when I run the resulting container and use its shell, this directory contains many more files.
The problem is that I need some of the libraries from that folder to install something. I want to be able to install it from the Dockerfile, but I can only do so from the container's shell.
Why is the directory incomplete and is there a way to force-build it so it's ready when I need it?
Thanks.

Solved it by adding "default-runtime": "nvidia" to /etc/docker/daemon.json. Further details here: https://github.com/dusty-nv/jetson-containers#docker-default-runtime
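For reference, on a stock JetPack install the resulting /etc/docker/daemon.json looks roughly like this (a sketch based on the linked jetson-containers instructions; restart the daemon afterwards, e.g. sudo systemctl restart docker, before rebuilding):
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}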

Related

Can't use docker cp to copy file from /tmp

When using docker cp to copy /tmp/data.txt from my local machine to the container, it fails with the error:
lstat /tmp/data.txt: no such file or directory
The file exists and I can run stat /tmp/data.txt and cat /tmp/data.txt without any issues.
Even if I create another file in /tmp like data2.txt I get the exact same error.
But if I create a file outside /tmp like in ~/documents and copy it with docker cp it works fine.
I checked out the documentation for docker cp and it mentions:
It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and mounts created by the user in the container
but doesn't mention /tmp as such a directory.
I'm running on Debian 10, but a friend of mine who is on Ubuntu 20.04 can do it just fine.
We're both using the same version of docker (19.03.11).
What could be the cause?
I figured out the solution.
I had installed Docker as a snap. I uninstalled it (sudo snap remove docker) and installed it following the official Docker guidelines for installing on Debian.
After this, it worked just fine.
I think it might've been due to snap packages having limited access to system resources, but I don't know for sure.
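For reference, the apt-based install on Debian looks roughly like this (a sketch only; check Docker's official Debian install guide for the current commands):
sudo snap remove docker
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io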

List all files in Build Context and/or in WORKDIR when building container image

I am trying to build a container image using the Docker task on Azure Pipelines.
I created a Dockerfile, but it looks like I made a mistake because I keep getting
WARN saveError ENOENT: no such file or directory, open
'/usr/src/app/package.json'
I thought it would be good to list all the files that exist in the build context and/or in the WORKDIR, so it would be easier for me to find a solution.
Is there any appropriate Dockerfile command, something like...
dir
ls
RUN. You can run any command you want inside the container while the image is building.
RUN ls will run ls and print the output of the command at that build step.
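For example, a minimal sketch (the /usr/src/app path is assumed to be your WORKDIR):
WORKDIR /usr/src/app
COPY . .
RUN ls -la /usr/src/app
Note that if BuildKit collapses the step output, building with docker build --progress=plain --no-cache . shows the full listing in the pipeline log (and forces the RUN to re-execute instead of being served from cache).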

"Docker compose: command not found" - Despite configuring the path and making file as executable

I am new to Docker. I would like to run a docker-compose file. I have docker-compose installed; please find the details below:
which docker-compose
# returns /usr/local/bin/docker-compose
docker-compose -v
# returns docker-compose version 1.24.0, build 0aa59064
I was referring to Stack Overflow posts and was able to find something regarding the PATH variable, so I have also placed my docker-compose in /usr/bin/ as well.
So, the below command
find /usr/bin/ -name "docker-compose"
# returns /usr/bin/docker-compose
In addition, I made the files under both paths executable by running chmod +x in the respective folders.
So when I finally try to bring up my docker-compose.yml file, I run the command below right in the folder where the docker-compose.yml file is present:
sudo ./docker-compose up
And I get the following error:
sudo: ./docker-compose: command not found
How can I run my docker-compose.yml file without any issues and start my containers? Am I making a mistake in how I made the files executable?
Try sudo docker-compose instead. The ./ means that the shell you're in will look in the current directory. If you aren't inside /usr/local/bin or /usr/bin, it will fail.
Also, if you're on Linux, you should follow these instructions to avoid having to run Docker as root: https://docs.docker.com/install/linux/linux-postinstall/
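In other words, something like this (the project path is just a placeholder):
cd /path/to/project        # the folder that contains docker-compose.yml
sudo docker-compose up     # resolved via $PATH, no ./ prefix needed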

How to run custom Docker file?

So here is a very cool Dockerfile.
To run it, I do:
wget https://cran.r-project.org/src/contrib/FastRCS_0.0.7.tar.gz
tar -xvzf FastRCS_0.0.7.tar.gz
docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt -a --install-deps FastRCS_0.0.7.tar.gz
But now suppose I want to save this Dockerfile and run the saved version from the current directory (i.e. not just the one on GitHub).
How can I do this?
The idea is that I need to customize this Dockerfile a bit and run the customized version.
Sounds like you want to download the raw file from https://raw.githubusercontent.com/rocker-org/r-devel-san-clang/master/Dockerfile
and save it into a file named Dockerfile
Then you could edit the file to make your changes, and build your image with docker build . from the directory containing the Dockerfile.
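A rough sketch of the whole flow, keeping the question's run command and only swapping in a tag of your own (my-r-devel is just an example name):
wget https://raw.githubusercontent.com/rocker-org/r-devel-san-clang/master/Dockerfile
# edit the Dockerfile, then build it under your own tag
docker build -t my-r-devel .
# and use that tag in place of the Docker Hub image
docker run --rm -ti -v $(pwd):/mnt my-r-devel check.r --setwd /mnt -a --install-deps FastRCS_0.0.7.tar.gz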
This is a basic Docker usage question--look into docker commit.
You may want to study one of the many fine Docker tutorials out there.

docker: executable file not found in $PATH

I have a docker image which installs grunt, but when I try to run it, I get an error:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
If I run bash in interactive mode, grunt is available.
What am I doing wrong?
Here is my Dockerfile:
# https://registry.hub.docker.com/u/dockerfile/nodejs/ (builds on ubuntu:14.04)
FROM dockerfile/nodejs
MAINTAINER My Name, me#email.com
ENV HOME /home/web
WORKDIR /home/web/site
RUN useradd web -d /home/web -s /bin/bash -m
RUN npm install -g grunt-cli
RUN npm install -g bower
RUN chown -R web:web /home/web
USER web
RUN git clone https://github.com/repo/site /home/web/site
RUN npm install
RUN bower install --config.interactive=false --allow-root
ENV NODE_ENV development
# Port 9000 for server
# Port 35729 for livereload
EXPOSE 9000 35729
CMD ["grunt"]
This was the first result on Google when I pasted my error message, and it's because my arguments were out of order.
The image name has to come after all of the other arguments.
Bad:
docker run <image_name> -v $(pwd):/src -it
Good:
docker run -v $(pwd):/src -it <image_name>
When you use the exec form for a command (e.g. CMD ["grunt"], a JSON array with double quotes), it is executed without a shell. This means that shell processing, such as environment variable substitution, does not happen.
If you specify your command as a regular string (e.g. CMD grunt) then the string after CMD will be executed with /bin/sh -c.
More info on this is available in the CMD section of the Dockerfile reference.
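For illustration, the two forms side by side:
CMD ["grunt", "serve"]     # exec form: runs the grunt binary directly, no shell involved
CMD grunt serve            # shell form: actually runs /bin/sh -c 'grunt serve'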
I found the same problem. I did the following:
docker run -ti devops -v /tmp:/tmp /bin/bash
When I change it to
docker run -ti -v /tmp:/tmp devops /bin/bash
it works fine.
For some reason, I get that error unless I add the "bash" clarifier. Even adding "#!/bin/bash" to the top of my entrypoint file didn't help.
ENTRYPOINT [ "bash", "entrypoint.sh" ]
There are several possible reasons for an error like this.
In my case, it was due to the executable file (docker-entrypoint.sh from the Ghost blog Dockerfile) lacking the executable file mode after I'd downloaded it.
Solution: chmod +x docker-entrypoint.sh
I had the same problem. After lots of googling, I couldn't find out how to fix it.
Suddenly I noticed my stupid mistake :)
As mentioned in the docs, the last part of docker run is the command you want to run and its arguments after loading up the container.
NOT THE CONTAINER NAME !!!
That was my embarrassing mistake.
A Docker image might be built without a shell (e.g. https://github.com/fluent/fluent-bit-docker-image/issues/19).
In this case, you can copy-in a statically compiled shell and execute it, e.g.
docker create --name temp-busybox busybox:1.31.0
docker cp temp-busybox:/bin/busybox busybox
docker cp busybox mycontainerid:/busybox
docker exec -it mycontainerid /busybox sh
In the error message shown:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
It is complaining that it cannot find the executable grunt serve, not that it could not find the executable grunt with the argument serve. The most likely explanation for that specific error is running the command with the json syntax:
[ "grunt serve" ]
in something like your compose file. That's invalid, since the JSON syntax requires you to split out each parameter that the shell would normally split on spaces for you. E.g.:
[ "grunt", "serve" ]
The other possible way you can get both of those into a single parameter is if you were to quote them into a single arg in your docker run command, e.g.
docker run your_image_name "grunt serve"
and in that case, you need to remove the quotes so it gets passed as separate args to the run command:
docker run your_image_name grunt serve
For others seeing this, "executable file not found" means that Linux does not see the binary you are trying to run inside your container on the default $PATH. That could have many possible causes; here are a few:
Did you remember to include the binary inside your image? If you use a multi-stage build, make sure the binary install runs in the final stage. Run your image with an interactive shell and verify it exists:
docker run -it --rm your_image_name /bin/sh
Your path when shelling into the container may be modified for the interactive shell, particularly if you use bash, so you may need to specify the full path to the binary inside the container, or you may need to update the path in your Dockerfile with:
ENV PATH=$PATH:/custom/dir/bin
The binary may not have execute bits set on it, so you may need to make it executable. Do that with chmod:
RUN chmod 755 /custom/dir/bin/executable
The binary may include dynamically linked libraries that do not exist inside the image. You can use ldd to see the list of dynamically linked libraries. A common reason for this is compiling with glibc (most Linux environments) and running with musl (provided by Alpine):
ldd /path/to/executable
If you run the image with a volume, that volume can overlay the directory where the executable exists in your image. Volumes do not merge with the image, they get mounted in the filesystem tree same as any other Linux filesystem mount. That means files from the parent filesystem at the mount point are no longer visible. (Note that named volumes are initialized by docker from the image content, but this only happens when the named volume is empty.) So the fix is to not mount volumes on top of paths where you have executables you want to run from the image.
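As a sketch of that last point (the /usr/local/bin mount target and the myapp binary are hypothetical):
docker run -v "$(pwd)":/usr/local/bin your_image_name myapp   # the bind mount hides the image's /usr/local/bin, so myapp is no longer on $PATH
docker run your_image_name myapp                              # nothing mounted over /usr/local/bin, so the binary from the image is found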
If you run a binary for a different platform, and haven't configured binfmt_misc with the --fix-binary option, qemu will be looking for the interpreter inside the container filesystem namespace instead of the host filesystem. See this Ubuntu bug report for more details on this issue.
If the error is from a shell script, the issue is often with the first line of that script (e.g. the #!/bin/bash). Either the command doesn't exist inside the image for one of the reasons above, or the file is not saved as ASCII or UTF-8 with Linux linefeeds. You can attempt dos2unix to fix the linefeeds, or check your git and editor settings.
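For example (the script name is just a placeholder):
dos2unix entrypoint.sh     # rewrite CRLF line endings as LF so the #!/bin/bash line parses correctly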
In my case I had the parameters in the wrong order; moving all switches before the image name fixed it.
I got this error message when I was building an Alpine-based image:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown
In my docker-compose file, I had a command directive that executed the command using bash, and bash does not come with the Alpine base image.
command: bash -c "python manage.py runserver 0.0.0.0:8000"
Then I realized this and executed the command using sh instead.
It worked for me.
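That is, the compose directive became something like:
command: sh -c "python manage.py runserver 0.0.0.0:8000"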
The problem is glibc, which is not part of the Alpine base image.
After adding it, things worked for me :)
Here are the steps to get glibc:
apk --no-cache add ca-certificates wget
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk
apk add glibc-2.28-r0.apk
Referring to the title:
My mistake was passing variables via --env-file during docker run. Among others, the file contained a PATH extension: PATH=$PATH:something, which caused the PATH variable to literally become PATH=$PATH:something (variable resolution is not performed) instead of PATH=/usr/bin...:something.
I couldn't make the resolution work through --env-file, so the only way I see this working is by using ENV in the Dockerfile.
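Roughly, in the Dockerfile (reusing the same placeholder path), where the expansion does happen at build time:
ENV PATH=$PATH:something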
I ran into this issue using docker-compose. None of the solutions here or on this related question resolved my issue. Ultimately what worked for me was clearing all cached Docker artifacts with docker system prune -a and restarting Docker.
To make it work, add symbolic links in /usr/bin:
ln -s $(which node) /usr/bin/node
ln -s $(which npm) /usr/bin/npm
