Attaching to a docker container with base image scratch?

I have a docker file as follows:
FROM scratch
ARG VERSION=NOT_SET
ENV VERSION $VERSION
COPY foobar foobar
COPY foobar-*.yaml /etc/
COPY jwt/ /etc/jwt/
EXPOSE 8082
ENTRYPOINT ["./foobar"]
CMD ["-config", "/etc/foobar-local.yaml"]
Now, docker ps shows the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
653a9b287eb6 7693481.dkr.ecr.us-east-1.amazonaws.com/foobar:0.0.1 "./foobar -config /e" About a minute ago Up About a minute foobar
When I try to exec to this container with the following command:
sudo docker exec -it 653a9b287eb6 /bin/bash
it shows the following error:
rpc error: code = 2 desc = oci runtime error:
exec failed: exec: "/bin/bash": stat /bin/bash: no such file or directory

You need to add a shell to your empty base image (scratch) in order to attach to it.
Right now, your image only includes an executable, which is not enough.
As mentioned in issue 17896
FROM scratch literally is an empty, zero-byte image / filesystem, where you add everything yourself.
See, for example, hello-world, which produces an image that's 860 bytes total.
If you need a shell to attach to it through docker exec, start from a small image like Alpine (which has only /bin/sh though: you would need apk add bash to add bash, as commented below by user2915097).
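For illustration, here is a minimal sketch of the question's Dockerfile rebased onto Alpine (the tag is illustrative, and it assumes foobar is a static binary, which scratch images effectively require anyway):
FROM alpine:3
# Alpine already ships /bin/sh (ash); install bash only if you specifically need it
RUN apk add --no-cache bash
COPY foobar /foobar
COPY foobar-*.yaml /etc/
COPY jwt/ /etc/jwt/
EXPOSE 8082
ENTRYPOINT ["/foobar"]
CMD ["-config", "/etc/foobar-local.yaml"]
With that base, sudo docker exec -it 653a9b287eb6 /bin/sh (or /bin/bash) will get you a shell.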

If you are using Kubernetes, I have a repo that installs busybox into a FROM scratch image: https://github.com/phzfi/scratch-debug
In principle, by using the same process, you can also install busybox and a shell in any other docker container, but I still need to make a script for debugging e.g. docker-compose and swarm orchestrators.

Related

How to run a private Docker image

docker run -i -t testing bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown.
I created the image on Docker Hub; it is a private image.
FROM scratch
# Set the working directory to /app
WORKDIR Desktop
ADD . /Dockerfile
RUN ./Dockerfile
EXPOSE 8085
ENV NAME testing
This is in my Dockerfile
I tried to run it; when I run docker images I am getting the details.
I think you need to log in from the command prompt, using the command below.
docker login -u username -p password url
Apart from the login (which should not cause this error: you built the image on your local system, so it should already exist there, and Docker only pulls an image if it does not exist locally), the real reason is that you are building an image from scratch, and there are no binaries in the scratch image, not even bash or sh.
Second mistake:
RUN ./Dockerfile
Your Dockerfile is a text file, not a binary, yet here you are trying to execute it with the RUN directive.
While scratch appears in Docker’s repository on the hub, you can’t
pull it, run it, or tag any image with the name scratch. Instead, you
can refer to it in your Dockerfile. For example, to create a minimal
container using scratch:
FROM scratch
COPY hello /
CMD ["/hello"]
Here, hello can be an executable file such as a compiled C++ binary.
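Note that anything copied into scratch must be statically linked, since the image contains no libc or dynamic loader. A rough sketch of producing such a hello binary, assuming a local hello.c with an ordinary main that prints "hello":
# build a fully static binary so it can run on the empty scratch filesystem
gcc -static -o hello hello.c
docker build -t minimal-hello .
docker run --rm minimal-hello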
Docker scratch image
But what I would suggest for saying "hello" in Docker is to use Busybox or Alpine as a base image; both have a shell, and both are under 5MB.
FROM busybox
CMD ["echo","hello Docker!"]
now build and run
docker build -t hello-docker .
docker run --rm -it hello-docker

How to 'docker exec' a container built from scratch?

I am trying to docker exec a container that is built from scratch (say, a NATS container). It seems pretty straightforward, but since it is built from scratch, I am unable to access /bin/bash, /bin/sh, and literally any such command.
I get the error: oci runtime error (command not found, file not found, etc. depending upon the command that I enter).
I tried some commands like:
docker exec -it <container name> /bin/bash
docker exec -it <container name> /bin/sh
docker exec -it <container name> ls
My question is, how do I docker exec a container that is built from scratch and consisting only of binaries? By doing a docker exec, I wish to find out if the files have been successfully copied from my host to the container (I have a COPY in the Dockerfile).
If your scratch container is running you can copy a shell (and other needed utils) into its filesystem and then exec it. The shell would need to be a static binary. Busybox is a great choice here because it can double as so many other binaries.
Full example:
# Assumes scratch container is last launched one, else replace with container ID of
# scratch image, e.g. from `docker ps`, for example:
# scratch_container_id=401b31621b36
scratch_container_id=$(docker ps -ql)
docker run -d busybox:latest sleep 100
busybox_container_id=$(docker ps -ql)
docker cp "$busybox_container_id":/bin/busybox .
# The busybox binary will become whatever you name it (or the first arg you pass to it), for more info run:
# docker run busybox:latest /bin/busybox
# The `busybox --install` command copies the binary with different names into a directory.
docker cp ./busybox "$scratch_container_id":/busybox
docker exec -it "$scratch_container_id" /busybox sh -c '
export PATH="/busybin:$PATH"
/busybox mkdir /busybin
/busybox --install /busybin
sh'
For Kubernetes I think Ephemeral Containers provide or will provide equivalent functionality.
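As a rough sketch (the pod and container names here are placeholders, and this assumes a cluster version where kubectl debug is available):
# attach an ephemeral busybox container to a running pod for debugging
kubectl debug -it my-scratch-pod --image=busybox:latest --target=my-scratch-container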
References:
distroless java docker image error
https://github.com/GoogleContainerTools/distroless/issues/168#issuecomment-371077961
There are several options.
You can do docker container cp ${CONTAINER}:/path/to/file/on/container /path/to/temp/dir/on/host. This will copy the files to your host, where you can inspect things using host tools (see the sketch after this list).
You can add an appropriate VOLUME to your Dockerfile. Then you can docker container inspect ${CONTAINER}. This will expose the volume name where the files should be. You can then inspect those in another container (based off an image with all the tools you need).
You can, at runtime, bind a volume or host directory into the container at the appropriate place.
You can add those binaries that you feel you need to the image. If you need /bin/ls or /bin/sh, then you can add them.
You can bind mount the necessary binaries to the container - so the container has them for verification purposes but the image is not bloated by them.
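As a hedged sketch of the first two options (the container name and paths here are placeholders):
# copy a file out of the container and inspect it with host tools
docker container cp mycontainer:/etc/foobar-local.yaml /tmp/check/
ls -l /tmp/check/
# list the mounts/volumes of the container to find where the files live
docker container inspect --format '{{json .Mounts}}' mycontainer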
You can only use docker exec to run commands that actually exist in a container. If those commands don't exist, you can't run them. As you've noted, the scratch base image contains nothing – no shells, no libraries, no system files, nothing.
If all you're trying to check is if a Dockerfile COPY command actually copied the files you said it would, I'd generally assume the tooling works and just reference the copied files in my application.
Since it sounds like you control the Dockerfile, one workaround could be to change the base image to something lightweight but non-empty, like FROM busybox. That would give you a minimal set of tools that you could work with without blowing up the image size too much.
I am trying to do the same file check for my needs. I ended up using docker cp to copy the file from the container. In my case I am using the nats container, but you can use any other container running a scratch-based image:
sudo docker cp nats_nats_1:/nats-server.conf ./nats-server.conf
You can just grab the container identifier and throw it into a variable. For example, let's say the (truncated) output of docker ps -a is listed with your running container:
CONTAINER ID IMAGE
111111111111 neo4j-migrator
To further the example, you can docker exec -it using the variable you created. For example:
CONTAINER_ID=`docker ps -aqf "ancestor=neo4j-migrator"`
docker exec -it $CONTAINER_ID \
sh -c "/usr/bin/neo4j-migrations \
--password $NEO4J_PASSWORD \
--username $NEO4J_USERNAME \
--address $NEO4J_URI \
migrate"

OCI runtime exec failed: exec failed: (...) executable file not found in $PATH": unknown

I have dockerized an app which has ffmpeg installed in it via libav-tools. The app launches without problem, yet the problem occurred when the fluent-ffmpeg npm module tried to execute the ffmpeg command, which was not found. When I wanted to check the version of ffmpeg and the linux distro set up in the image, I used the sudo docker exec -it c44f29d30753 "lsb_release -a" command, but it gave the following error: OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"lsb_release -a\": executable file not found in $PATH": unknown
Then I realized that it gives me the same error with all the commands that I try to run inside the image or the container.
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"ffmpeg -a\": executable file not found in $PATH": unknown
This is my Dockerfile:
FROM ubuntu:xenial
FROM node
RUN apt-get -y update
RUN apt-get --yes install libav-tools
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
RUN npm run build
ENV NODE_ENV production
EXPOSE 8000
CMD ["npm", "run", "start:prod"]
I would kindly ask for your help. Thank you very much!
This happened to me on Windows. See below for any of the commands that match your case.
NOTE
You will need to run the commands that match your case below using the correct shell in your container i.e. /bin/bash or /bin/sh. Using sh instead of bash or vice versa will also give you this error. So, confirm that you are using the right shell, or just try both shells and see the one that works.
For these examples, I will be using sh
On Windows CMD (not switching to bash):
docker exec -it <container-id> /bin/sh
On Windows CMD (after switching to bash):
docker exec -it <container-id> //bin//sh
or
winpty docker exec -it <container-id> //bin//sh
On Git Bash:
winpty docker exec -it <container-id> //bin//sh
For Windows users, the reason is documented in the ReleaseNotes file of Git and it is well explained here - Bash in Git for Windows: Weirdness... :
The cause is to do with trying to ensure that posix paths end up being
passed to the git utilities properly. For this reason, Git for Windows
includes a modified MSYS layer that affects command arguments.
Linux
docker exec -it <container-id> /bin/sh
docker exec -it <containerId> sh
I had this due to a simple ordering mistake on my end. I called
[WRONG] docker run <image> <arguments> <command>
When I should have used
docker run <arguments> <image> <command>
Same resolution on similar question: https://stackoverflow.com/a/50762266/6278
If @papigee's solution doesn't work, maybe you don't have the permissions.
I tried @papigee's solution, but it doesn't work without sudo.
I did :
sudo docker exec -it <container id or name> /bin/sh
Get rid of your quotes around your command. When you quote it, docker tries to run the full string "lsb_release -a" as a command, which doesn't exist. Instead, you want to run the command lsb_release with an argument -a, and no quotes.
sudo docker exec -it c44f29d30753 lsb_release -a
Note, everything after the container name is the command and arguments to run inside the container, docker will not process any of that as options to the docker command.
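To make the difference concrete, using the container ID from the question:
# fails: docker looks for a single binary literally named "lsb_release -a"
sudo docker exec -it c44f29d30753 "lsb_release -a"
# works: lsb_release is the binary, -a is its argument
sudo docker exec -it c44f29d30753 lsb_release -a
# if you need shell features such as pipes, wrap the string in a shell yourself
sudo docker exec -it c44f29d30753 sh -c "lsb_release -a | head -n1"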
For others with this error, the debugging steps I'd recommend:
Verify the order of your arguments. Everything after the container name/id is a command to run. So you don't want docker exec $cid -it /bin/sh because that will try to run the command -it in the $cid container. Instead you want docker exec -it $cid /bin/sh
Look at the command that is failing, everything in the quotes after the exec error (e.g. lsb_release -a in "exec: \"lsb_release -a\") is the binary trying to be run. Make sure that binary exists in your image. E.g. if you are using alpine or busybox, bash may not exist, but /bin/sh does. And that binary is the full string, e.g. you would be able to run something like ls "/usr/bin/lsb_release -a" and see a file with the space and -a in the filename.
If you're using Windows with Git bash and see a long path prefixed on that command trying to be run, that's Git bash trying to do some automatic conversions of /path/to/binary, you can disable that by doubling the first slash, e.g. //bin/sh.
If the command you're running is a script in the container, check the first line of that script, containing the #!/path/to/interpreter, make sure that interpreter exists in the image, at that path, and that the script is saved with linux linefeeds (lf, not cr+lf, you won't want the \r showing in the file when read in linux because that becomes part of the command it's looking to execute).
If you don't have a full path to the binary in the command you're running, check the value of $PATH in the image, and verify the binary exists within one of those directories. E.g. you can docker exec -it $cid /bin/sh, then echo $PATH and type some_command to verify some_command is found in your path (see the sketch after this list).
If your command is not an executable, but rather a shell builtin, you'll need to execute it with a shell instead of directly. That can be done with docker exec -it $cid /bin/sh -c "your_shell_builtin"
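Putting the PATH checks from the list above together, a quick verification session might look like this (some_command is a placeholder):
docker exec -it $cid /bin/sh
# inside the container:
echo $PATH
type some_command       # shows where (or whether) the shell finds it
command -v some_command # portable alternative to type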
I solved this with these commands:
Run the container:
docker run -d <image-name>
List containers:
docker ps -a
Use the container ID:
docker exec -it <container-id> /bin/sh
I was running into this issue and it turned out that I needed to do this:
docker run ${image_name} bash -c "${command}"
You can use another shell to execute the same command:
The error I get when I execute:
[jenkins@localhost jenkins_data]$ docker exec -it mysqldb \bin\sh
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"binsh\": executable file not found in $PATH": unknown
Solution:
When I execute it with the command below, using the bash shell, it works:
[jenkins@localhost jenkins_data]$ docker exec -it mysqldb bash
root@<container-ID>:/#
What I did to solve it was simply:
Run docker ps -a
Check for the command of the container (mine started with /bin/sh)
Run docker-compose exec <name_of_service> /bin/sh (if that is what started your container's command)
This is for when you are using docker-compose.
I was running a container via docker-compose.
entrypoint:
- ls
worked, but
entrypoint:
- ls tests
did not.
It's because the arguments have to be on separate lines. 🤦‍♂
entrypoint:
- ls
- tests
This has happened to me. My issue was caused by not mounting the Docker file system correctly, so I configured the Disk Image Location, re-bound the File sharing mount, and it now works correctly.
For reference, I use Docker Desktop in Windows.
In my case I saved the docker image with docker save and, instead of loading it with docker load on the other machine, I imported it with docker import, which is very different and led me to an error similar to this.
You have to run it like below (note that docker exec needs the container name or ID before the command):
docker exec <container-id> sh -c 'echo "$ENV_NAME"'
I had Windows line endings in a shell script. Change them to LF with dos2unix.
If you got this error when using the docker run command, you may have made a simple syntax error.
Example
Incorrect:
docker run myimage -p 3838:3838
docker: Error response from daemon: failed to create shim: OCI runtime create
failed: container_linux.go:380: starting container process caused:
exec: "-p": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
Correct (options go before image name):
docker run -p 3838:3838 myimage

Check the File in Exited Container

I have an issue invoking the script to start the container. I think I'd better first find a way to tell if the script is actually located in the right place. But neither docker exec nor docker attach seems to allow me to get into an exited container.
I also tried docker run -it --volumes-from [exited_container_id] ubuntu. I thought I might be able to see the file system in ubuntu but I cannot find the mounting point. Is there any way for me to login to an exited container and see the files that I ADDed?
You can check if the script is located in the right place by adding a RUN ls -lah / line to your Dockerfile and building the image:
FROM frolvlad/alpine-oraclejdk8:slim
ADD build/libs/zuul*.jar /app.jar
ADD src/main/script/startup.sh /startup.sh
RUN ls -lah /
EXPOSE 8080 8999
ENTRYPOINT ["/startup.sh"]
Then just build the Dockerfile
docker build -t myapp .
You should see the result of that ls in the output of the build

Container is not running

I tried to start an exited container as follows:
I listed all available containers using docker ps -a; the container in question showed up in the Exited state.
I entered the following commands to start that container and to enter its terminal:
docker start 79b3fa70b51d
docker exec -it 79b3fa70b51d /bin/sh
It is throwing the following error.
FATA[0000] Error response from daemon: Container 79b3fa70b51d is not running
But when I start the container using docker start 79b3fa70b51d, it prints the container ID as output, which is what normally happens when everything works.
What is the cause of this error?
By default, a docker container will exit immediately if it does not have any task running in it.
To keep the container running in the background, try to run it with --detach (or -d) argument.
For example:
docker pull debian
docker run -t -d --name my_debian debian
e7672d54b0c2
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e7672d54b0c2 debian "bash" 3 minutes ago Up 3 minutes my_debian
# now you can execute commands in the container
docker exec -it my_debian bash
root@e7672d54b0c2:/#
Container 79b3fa70b51d seems to only do an echo.
That means it starts, echoes, and then exits immediately.
The next docker exec command wouldn't find it running in order to attach itself to that container and execute any command: it is too late. The container has already exited.
The docker exec command runs a new command in a running container.
The command started using docker exec will only run while the container's primary process (PID 1) is running
If it's not possible to start the main process again (for long enough), there is also the possibility to commit the container to a new image and run a new container from this image. While this is not the usual best practice workflow (the new image is not repeatable), I find it really useful to debug a failing script once in a while.
docker exec -it 6198ef53d943 bash
Error response from daemon: Container 6198ef53d9431a3f38e8b38d7869940f7fb803afac4a2d599812b8e42419c574 is not running
docker commit 6198ef53d943
sha256:ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it ace7ca65e6e bash
root@72d38a8c787d:/#
This happens with images whose script does not launch a service awaiting requests; the container therefore exits at the end of the script.
This is typically the case with most base OS images (centos, debian, etc.), and also with the node images.
Your best bet is to run the image in interactive mode. Example below with the node image:
docker run -it node /bin/bash
Output is
root@cacc7897a20c:/#
/bin/bash
First of all, we have to start the docker container
ankit@ankit-HP-Notebook:~$ sudo docker start 3a19b39ea021
3a19b39ea021
After that, check the docker container:
ankit@ankit-HP-Notebook:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a19b39ea021 coreapps/ubuntu16.04:latest "bash" 13 hours ago
Up 9 seconds ubuntu1
455b66057060 hello-world "/hello" 4 weeks ago
Exited (0) 4 weeks ago vigorous_bardeen
Then execute by using the command below:
ankit@ankit-HP-Notebook:~$ sudo docker exec -it 3a19b39ea021 bash
root@3a19b39ea021:/#
Here is what worked for me.
Get the container ID and restart.
docker ps -a --no-trunc
ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker restart ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it --entrypoint /bin/bash <imageid>
This was posted by L0j1k in the post below and worked for me.
How do I get into a Docker container's shell?
Use the commands:
> docker container ls
> docker image ls
Check your container ID and note it down. Here my container ID is "6c929ca002da"; you have to use your own ID instead of mine.
> docker start 6c929ca002da
Here our container is down; we have to start it first by using its ID.
> docker exec -it 6c929ca002da bash
After running this command you can see the container in running mode, like this:
root@6c929ca002da
Here I am in root mode; go to root mode by using the command:
sudo su
The reason is just what the accepted answer said. I'll add some extra information, which may provide further understanding of this issue.
The status of a container includes Created, Running, Stopped,
Exited, Dead and others as I know.
When we execute docker create, the docker daemon will create a
container with the status Created.
When we docker start, the docker daemon will start an existing container,
whose status may be Created or Stopped.
When we execute docker run, docker daemon will finish it in two
steps: docker create and docker start.
When docker stop, obviously docker daemon will stop a container.
Thus container would be in Stopped status.
Coming to the most important one: a container essentially holds a
long-running process in it. When the process exits, the
container holding the process exits too. Thus the status of this
container becomes Exited.
When does the process exit? In other words, what's the process, and how did we start it?
The answer is the CMD in a dockerfile, or the command in the following expression; it is bash by default in some images, e.g. ubuntu:18.04.
docker run ubuntu:18.04 [command]
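For example, with the stock ubuntu:18.04 image, whose default command is bash:
# bash gets no TTY and no STDIN, reads EOF, and exits at once,
# so the container goes straight to Exited status
docker run ubuntu:18.04
# -i keeps STDIN open and -t allocates a TTY, so bash keeps running
docker run -it ubuntu:18.04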
docker run -it <image_id> /bin/bash
Run in interactive mode, executing the bash shell.
For anyone attempting something similar using a Dockerfile...
Running in detached mode won't help. The container will always exit (stop running) if the command is non-blocking; this is the case with bash.
In this case, a workaround would be:
1. Commit the resulting image:
(container_name = the name of the container you want to base the image off of,
image_name = the name of the image to be created)
docker commit container_name image_name
2. Use docker run to create a new container using the new image, specifying the command you want to run. Here, I will run "bash":
docker run -it image_name bash
This would get you the interactive login you're looking for.
Here's a solution when the docker container exits normally and you can edit the Dockerfile.
Generally, when a docker container is run, an application is served by running a command. From the Dockerfile reference,
Both CMD and ENTRYPOINT instructions define what command gets executed when
running a container. ...
Dockerfile should specify at least one of CMD or ENTRYPOINT commands.
When you build an image and do not specify any command with CMD or ENTRYPOINT, the base image's CMD or ENTRYPOINT command will be executed.
For example, the official Ubuntu Dockerfile has CMD ["/bin/bash"] (https://hub.docker.com/_/ubuntu). Now, the /bin/bash command can accept input, and the docker run -it IMAGE_ID command attaches STDIN to the container. The result is that you get an interactive terminal and the container keeps running.
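You can check what command an image will run by default with docker image inspect, for example:
docker image inspect --format '{{json .Config.Cmd}}' ubuntu
# typically prints ["/bin/bash"]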
When a command with CMD or ENTRYPOINT is specified in the Dockerfile, this command gets executed when running the container. Now, if this command can finish without requiring any input, it will finish and the container will exit. docker run -it IMAGE_ID will NOT provide the interactive terminal in this case. An example would be the docker image built from the Dockerfile below-
FROM ubuntu
ENTRYPOINT echo hello
If you need to go to the terminal of this image, you will need to keep the container running by modifying the entrypoint command.
FROM ubuntu
ENTRYPOINT echo hello && sleep infinity
After running the container normally with docker run IMAGE_ID, you can just go to another terminal and use docker exec -it CONTAINER_ID bash to get the container's terminal.
Perhaps too late for this active community, but there are many reasons why a container may not execute correctly and then exit, with or without writing a console message. For all the newbies making Node.js containers, I recommend changing the Dockerfile: erase all CMD and ENTRYPOINT lines you may have, and add only ENTRYPOINT ["/bin/sh"] (see my attached test Dockerfile example). Then rebuild the Docker image and run it with the command:
docker run -it --rm your_named_image:tag
Voilà, you get a shell inside the container. Then you can test your app by typing the command yourself, i.e. node app.js, and see what is happening. After you see all is ok, you can change your Dockerfile, erase the ENTRYPOINT of "/bin/sh", and use your own, i.e. ["node","app.js"] or whatever. Always consider the previous answers to this post; when the app inside the container finishes, it will stop the running container.
Here is an example for my "test" Dockerfile:
FROM node:16.4.0-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json","package-lock.json*", "./"]
RUN npm install --production
COPY ./dist .
ENTRYPOINT ["/bin/sh"]
NOTE: My source files for the app (.js) on the local computer are in the directory ./dist, so I have to copy them into the container as you can see.
In my case, I changed certain file names and directory names in the parent directory of the Dockerfile, due to which the container could not find the required parameters to start again.
After renaming them back to the original names, the container started like butter.
I have a different take on this. I could do a docker ps and see that there is a docker container running. I even tried to restart it, but as soon as I tried to get a session for it with New-PSSession -ContainerId $containerId -RunAsAdministrator, it would error out, saying:
##[error]New-PSSession : The input ContainerId xxx does not exist,
##[error]or the corresponding container is not running.
My problem was that I was running as the network service account, and it did not have enough permissions to see the container, even though I had given it permissions to run docker commands (with the docker security group configuration).
I didn't know how to enable working with containers, so I had to revert to running it as an admin user instead
In my case, I had previously killed the running container with,
sudo docker kill testdeb
So when I exec'd into the container, I got the error:
Error response from daemon: Container fcc29295fe78a425155c533506f58fc5b30a50ee9eb85c21031e8699b3f6ff01 is not running
The solution was to start the container with,
sudo docker start testdeb
Now I have a container running ,
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fcc29295fe78 debian "bash" 9 hours ago Up 11 seconds testdeb
Which wasn't previously running
The approach below, which I tried, works in a Windows VS Code environment.
docker run --name yourcontainer -p 3306:3306 -e MYSQL_ROOT_PASSWORD=<your-password> -d mysql
I see a lot of similar answers, but adding the port number -p 3306:3306 made the status up and running. You can verify by using the command docker ps -a.
