I've noticed with Docker that I need to understand what's happening inside a container, or what files exist in there. One example is downloading images from the Docker index - you don't have a clue what the image contains, so it's impossible to start the application.
What would be ideal is to be able to ssh into them, or an equivalent. Is there a tool to do this, or is my conceptualisation of Docker wrong in thinking I should be able to do this?
Here are a couple of different methods...
A) Use docker exec (easiest)
Docker version 1.3 or newer supports the exec command, which behaves similarly to nsenter. This command can run a new process in an already running container (the container must have its PID 1 process running). You can run /bin/bash to explore the container state:
docker exec -t -i mycontainer /bin/bash
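You can also run a single one-off command instead of a shell, for example to list a directory (mycontainer as above):
docker exec mycontainer ls -la /var/log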
see Docker command line documentation
B) Use Snapshotting
You can evaluate container filesystem this way:
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem using bash (for example)
docker run -t -i mysnapshot /bin/bash
This way, you can evaluate the filesystem of the running container at a precise moment in time. The container is still running, and no future changes are included.
You can later delete the snapshot (the filesystem of the running container is not affected!):
docker rmi mysnapshot
C) Use ssh
If you need continuous access, you can install sshd into your container and run the sshd daemon:
docker run -d -p 22 mysnapshot /usr/sbin/sshd -D
# you need to find out which port to connect to:
docker ps
This way, you can run your app using ssh (connect and execute whatever you want).
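A rough sketch of how sshd could be baked in at build time (assuming a Debian-based image; the package name, example password, and sshd_config tweak are illustrative assumptions, not part of the original answer):
# Dockerfile sketch, Debian-based image assumed
RUN apt-get update && apt-get install -y openssh-server \
 && mkdir -p /var/run/sshd \
 && echo 'root:changeme' | chpasswd \
 && sed -i 's/#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config  # allow root login for debugging only
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]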
D) Use nsenter
Use nsenter, see Why you don't need to run SSHd in your Docker containers
The short version is: with nsenter, you can get a shell into an
existing container, even if that container doesn’t run SSH or any kind
of special-purpose daemon
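A minimal sketch of that approach (nsenter comes with util-linux; mycontainer is a placeholder, and the inspect template extracts the container's PID on the host):
PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid sh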
UPDATE: EXPLORING!
This command should let you explore a running docker container:
docker exec -it name-of-container bash
The equivalent for this in docker-compose would be:
docker-compose exec web bash
(web is the name of the service in this case, and it has tty enabled by default.)
Once you are inside do:
ls -lsa
or any other bash command like:
cd ..
This command should let you explore a docker image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
once inside do:
ls -lsa
or any other bash command like:
cd ..
The -it flags stand for interactive and tty.
This command should let you inspect a running docker container or image:
docker inspect name-of-container-or-image
You might want to do this first to find out whether there is any bash or sh in there. Look for entrypoint or cmd in the JSON output.
NOTE: This answer relies on common tools being present, but if there is no bash shell or common tools like ls present, you could first add them in a layer, if you have access to the Dockerfile:
example for alpine:
RUN apk add --no-cache bash
Otherwise, if you don't have access to the Dockerfile, then just copy the files out of a newly created container and look through them:
docker create <image>  # returns a container ID; the container is never started
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah
see docker exec documentation
see docker-compose exec documentation
see docker inspect documentation
see docker create documentation
In case your container is stopped or doesn't have a shell (e.g. hello-world mentioned in the installation guide, or non-alpine traefik), this is probably the only possible method of exploring the filesystem.
You can archive your container's filesystem into a tar file:
docker export adoring_kowalevski > contents.tar
Or list the files:
docker export adoring_kowalevski | tar t
Do note that, depending on the image, this might take some time and disk space.
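To actually browse the files, extract the archive into a scratch directory (a small sketch reusing the container name from above):
mkdir contents
docker export adoring_kowalevski | tar -x -C contents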
Before container creation:
If you want to explore the structure of the image that is mounted inside the container, you can do
sudo docker image save image_name > image.tar
tar -xvf image.tar
This gives you visibility into all the layers of the image and its configuration, which is present in JSON files.
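For example, after unpacking you can read the image metadata and list one layer's files (a sketch; the archive layout varies between Docker versions, and layer_dir stands for one of the extracted layer directories):
cat manifest.json                 # names the config file and the layer archives
tar -tvf layer_dir/layer.tar      # list the files contained in that layer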
After container creation:
For this there are already a lot of answers above. My preferred way to do this would be:
docker exec -t -i container /bin/bash
The most upvoted answer works for me when the container is actually started, but when it isn't possible to run it and you, for example, want to copy files out of the container, this has saved me before:
docker cp <container-name>:<path/inside/container> <path/on/host/>
Thanks to docker cp (link) you can copy directly from the container as if it were any other part of your filesystem.
For example, recovering all files inside a container:
mkdir /tmp/container_temp
docker cp example_container:/ /tmp/container_temp/
Note that you don't need to specify that you want to copy recursively.
The filesystem of the container lives in the data folder of docker, normally in /var/lib/docker. In order to start and inspect a running container's filesystem, do the following:
hash=$(docker run -d busybox top)   # -d prints the new container's ID; top keeps it running
cd /var/lib/docker/aufs/mnt/$hash
And now the current working directory is the root of the container.
You can use dive to view the image contents interactively with a TUI:
https://github.com/wagoodman/dive
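Once installed, point it at an image, e.g.:
dive name-of-image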
Try using
docker exec -it <container-name> /bin/bash
There is a possibility that bash is not present in the container; in that case you can use:
docker exec -it <container-name> sh
On Ubuntu 14.04 running Docker 1.3.1, I found the container root filesystem on the host machine in the following directory:
/var/lib/docker/devicemapper/mnt/<container id>/rootfs/
Full Docker version information:
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa
In my case, no shell except sh was supported in the container, so this worked like a charm:
docker exec -it <container-name> sh
I use another dirty trick that is aufs/devicemapper agnostic.
I look at the command that the container is running, e.g. with docker ps,
and if it's an apache or java process, I just do the following:
sudo -s
cd /proc/$(pgrep java)/root/
and voilà, you're inside the container.
Basically, as root you can cd into the /proc/<PID>/root/ folder, as long as that process is run by the container. Beware: symlinks will not make sense while using that mode.
The most voted answer is good, except if your container isn't an actual Linux system.
Many containers (especially the Go-based ones) don't have any standard binaries (no /bin/bash or /bin/sh). In that case, you will need to access the actual container's files directly:
Works like a charm:
name=<name>
dockerId=$(docker inspect -f {{.Id}} $name)
mountId=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$dockerId/mount-id)
cd /var/lib/docker/aufs/mnt/$mountId
Note: You need to run it as root.
Only for Linux
The simplest way I use is the proc dir; the container must be running in order to inspect the docker container files.
Find out the process id (PID) of the container and store it into some variable
PID=$(docker inspect -f '{{.State.Pid}}' your-container-name-here)
Make sure the container process is running, and use the variable name to get into the container folder
cd /proc/$PID/root
If you want to get into the dir without finding out the PID number, just use this long command:
cd /proc/$(docker inspect -f '{{.State.Pid}}' your-container-name-here)/root
Tips:
After you get inside the container, everything you do will affect the actual process of the container, such as stopping a service or changing a port number.
Hope it helps.
Note:
This method only works while the container is still running; once the container has stopped or been removed, the directory no longer exists.
None of the existing answers address the case of a container that exited (and can't be restarted) and/or doesn't have any shell installed (e.g. distroless ones). This one works as long as you have root access to the Docker host.
For a real manual inspection, find out the layer IDs first:
docker inspect my-container | jq '.[0].GraphDriver.Data'
In the output, you should see something like
"MergedDir": "/var/lib/docker/overlay2/03e8df748fab9526594cfdd0b6cf9f4b5160197e98fe580df0d36f19830308d9/merged"
Navigate into this folder (as root) to find the current visible state of the container filesystem.
This will launch a bash session for the image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
On newer versions of Docker you can run docker exec [container_name] [command], which runs a command inside your container.
So to get a list of all the files in a container, just run docker exec [container_name] ls
I wanted to do this, but I was unable to exec into my container as it had stopped and wasn't starting up again due to some error in my code.
What worked for me was to simply copy the contents of the entire container into a new folder like this:
docker cp container_name:/app/ new_dummy_folder
I was then able to explore the contents of this folder as one would do with a normal folder.
For me, this one works well (thanks to the last comments for pointing out the directory /var/lib/docker/):
chroot /var/lib/docker/containers/2465790aa2c4*/root/
Here, 2465790aa2c4 is the short ID of the running container (as displayed by docker ps), followed by a star.
For the docker aufs driver:
This script will find the container root dir (tested on docker 1.7.1 and 1.10.3):
#!/bin/bash
if [ -z "$1" ] ; then
    echo 'usage: docker-find-root $container_id_or_name'
    exit 1
fi
CID=$(docker inspect --format '{{.Id}}' "$1")
if [ -n "$CID" ] ; then
    if [ -f "/var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id" ] ; then
        F1=$(cat "/var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id")
        d1="/var/lib/docker/aufs/mnt/$F1"
    fi
    if [ ! -d "$d1" ] ; then
        d1="/var/lib/docker/aufs/diff/$CID"
    fi
    echo "$d1"
fi
This answer will help those (like myself) who want to explore the docker volume filesystem even if the container isn't running.
List running docker containers:
docker ps
=> CONTAINER ID "4c721f1985bd"
Look at the docker volume mount points on your local physical machine (https://docs.docker.com/engine/tutorials/dockervolumes/):
docker inspect -f {{.Mounts}} 4c721f1985bd
=> [{ /tmp/container-garren /tmp true rprivate}]
This tells me that the local physical machine directory /tmp/container-garren is mapped to the /tmp docker volume destination.
Knowing the local physical machine directory (/tmp/container-garren) means I can explore the filesystem whether or not the docker container is running. This was critical to helping me figure out that there was some residual data that shouldn't have persisted even after the container was not running.
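A more readable variant of the same inspection, using a Go template to print one mount per line (same container ID as above; the template field names match the docker inspect JSON):
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' 4c721f1985bd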
If you are using Docker v19.03, follow the steps below.
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem
docker run -t -i mysnapshot /bin/sh
For an already running container, you can do:
dockerId=$(docker inspect -f {{.Id}} [docker_id_or_name])
cd /var/lib/docker/btrfs/subvolumes/$dockerId
You need to be root in order to cd into that dir. If you are not root, try 'sudo su' before running the command.
Edit: Following v1.3, see Jiri's answer - it is better.
Another trick is to use the atomic tool to do something like:
mkdir -p /path/to/mnt && atomic mount IMAGE /path/to/mnt
The Docker image will be mounted to /path/to/mnt for you to inspect it.
My preferred way to understand what is going on inside a container is to expose a port (-p 8000) and run it:
docker run -it -p 8000:8000 image
Then start a server inside it:
python -m SimpleHTTPServer
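Note that SimpleHTTPServer is Python 2; on Python 3 the equivalent module is http.server:
python3 -m http.server 8000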
If you are using the AUFS storage driver, you can use my docker-layer script to find any container's filesystem root (mnt) and readwrite layer :
# docker-layer musing_wiles
rw layer : /var/lib/docker/aufs/diff/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
mnt : /var/lib/docker/aufs/mnt/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
Edit 2018-03-28:
docker-layer has been replaced by docker-backup
The docker exec command to run a command in a running container can help in multiple cases.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Run a command in a running container

Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container
  -e, --env list             Set environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format: <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container
For example:
1) Access the running container's filesystem with bash:
docker exec -it containerId bash
2) Access the running container's filesystem as root, to have all the required rights:
docker exec -it -u root containerId bash
This is particularly useful for doing some processing as root in a container.
3) Access the running container's filesystem with a specific working directory:
docker exec -it -w /var/lib containerId bash
Often times I only need to explore the docker filesystem because my build won't run, so docker run -it <container_name> bash is impractical. I also do not want to waste time and memory copying filesystems, so docker cp <container_name>:<path> <target_path> is impractical too.
While possibly unorthodox, I recommend re-building with ls as the final command in the Dockerfile:
CMD [ "ls", "-R" ]
I've found the easiest all-in-one solution to view, edit, and copy files with a GUI app inside almost any running container:
(screenshot: mc editing files in docker)
Inside the container, install mc and ssh: docker exec -it <container> /bin/bash, then at the prompt install the mc and ssh packages.
In the same exec-bash console, run mc.
Press ESC, then 9, then ENTER to open the menu and select "Shell link...".
Using "Shell link...", open SCP-based filesystem access to any host with an ssh server running (including the one running docker) by its IP address.
Do your job in the graphical UI.
This method overcomes all issues with permissions, snap isolation etc., allows copying directly to any machine, and is the most pleasant to use for me.
I had an unknown container that was doing some production workload, and I did not want to run any command in it.
So, I used docker diff.
This will list all files that the container has changed, which makes it well suited to exploring the container filesystem.
To get only a folder you can just use grep:
docker diff <container> | grep /var/log
It will not show files from the docker image. Depending on your use case this can help or not.
Late to the party, but in 2022 we have VS Code: its Docker extension lets you browse a running container's filesystem right from the editor.
I am trying to docker exec into a container that is built from scratch (say, a NATS container). It seems pretty straightforward, but since the image is built from scratch, I am unable to access /bin/bash, /bin/sh, or literally any such command.
I get the error: oci runtime error (command not found, file not found, etc. depending upon the command that I enter).
I tried some commands like:
docker exec -it <container name> /bin/bash
docker exec -it <container name> /bin/sh
docker exec -it <container name> ls
My question is, how do I docker exec a container that is built from scratch and consisting only of binaries? By doing a docker exec, I wish to find out if the files have been successfully copied from my host to the container (I have a COPY in the Dockerfile).
If your scratch container is running you can copy a shell (and other needed utils) into its filesystem and then exec it. The shell would need to be a static binary. Busybox is a great choice here because it can double as so many other binaries.
Full example:
# Assumes scratch container is last launched one, else replace with container ID of
# scratch image, e.g. from `docker ps`, for example:
# scratch_container_id=401b31621b36
scratch_container_id=$(docker ps -ql)
docker run -d busybox:latest sleep 100
busybox_container_id=$(docker ps -ql)
docker cp "$busybox_container_id":/bin/busybox .
# The busybox binary will become whatever you name it (or the first arg you pass to it), for more info run:
# docker run busybox:latest /bin/busybox
# The `busybox --install` command copies the binary with different names into a directory.
docker cp ./busybox "$scratch_container_id":/busybox
docker exec -it "$scratch_container_id" /busybox sh -c '
export PATH="/busybin:$PATH"
/busybox mkdir /busybin
/busybox --install /busybin
sh'
For Kubernetes I think Ephemeral Containers provide or will provide equivalent functionality.
References:
distroless java docker image error
https://github.com/GoogleContainerTools/distroless/issues/168#issuecomment-371077961
There are several options.
You can do docker container cp ${CONTAINER}:/path/to/file/on/container /path/to/temp/dir/on/host. This will copy the files to your host where you can inspect things using host tools.
You can add an appropriate VOLUME to your Dockerfile. Then you can docker container inspect ${CONTAINER}. This will expose the volume name where the files should be. You can then inspect those in another container (based off an image with all the tools you need).
You can at runtime bind the container to a volume or host directory at the appropriate place.
You can add those binaries that you feel you need to the image. If you need /bin/ls or /bin/sh, then you can add them.
You can bind mount the necessary binaries to the container - so the container has them for verification purposes but the image is not bloated by them.
You can only use docker exec to run commands that actually exist in a container. If those commands don't exist, you can't run them. As you've noted, the scratch base image contains nothing – no shells, no libraries, no system files, nothing.
If all you're trying to check is if a Dockerfile COPY command actually copied the files you said it would, I'd generally assume the tooling works and just reference the copied files in my application.
Since it sounds like you control the Dockerfile, one workaround could be to change the base image to something lightweight but non-empty, like FROM busybox. That would give you a minimal set of tools that you could work with without blowing up the image size too much.
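A hypothetical sketch of that change (file names are illustrative):
FROM busybox
COPY nats-server.conf /nats-server.conf
With a base like this you can docker exec -it <container> sh and verify the copied files by hand.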
I was trying to do the same file check for my needs. I ended up using docker cp to copy the file out of the container. In my case I am using the nats container, but you can do this with any other container running a scratch-based image:
sudo docker cp nats_nats_1:/nats-server.conf ./nats-server.conf
You can just grab the container identifier and throw it into a variable. For example, let's say the (truncated) output of docker ps -a is listed with your running container:
CONTAINER ID IMAGE
111111111111 neo4j-migrator
To further the example, you can docker exec -it using the variable you created. For example:
CONTAINER_ID=`docker ps -aqf "ancestor=neo4j-migrator"`
docker exec -it $CONTAINER_ID \
sh -c "/usr/bin/neo4j-migrations \
--password $NEO4J_PASSWORD \
--username $NEO4J_USERNAME \
--address $NEO4J_URI \
migrate"
This question shows how to copy files out of a stopped container. This requires that I know the full path to the file including its file name. I know the directory I want to copy a file out of, but I do not know its file name since that is generated dynamically. How do I list the files in a directory in a stopped Docker container?
The following Docker command works great if the Docker container is running. But, it fails if the Docker container is stopped.
docker exec --privileged MyContainer ls -1 /var/log
Note: The files are not stored in a persistent volume.
This answer to another question shows how to start a stopped container with another command. Here are the commands to list files in a stopped container.
Commit the stopped container to a new image: test_image.
docker commit $CONTAINER_ID test_image
Run the new image in a new container with a shell.
docker run -ti --entrypoint=sh test_image
Run the list file command in the new container.
docker exec --privileged $NEW_CONTAINER_ID ls -1 /var/log
When starting the container is not an option, you can always export your container's filesystem (a bit overkill, but...) and list its contents:
docker export -o dump.tar <container id>
tar -tvf dump.tar
Reference: Baeldung - Exploring a Docker Container’s Filesystem
The command docker diff *CONTAINER* will list the files added, deleted and changed since the Container started.
If a file did not change since the container was started, then you would have to know the contents of the original image that started the container. So, this answer is not ideal but avoids creating an image and running it.
Unlike container-diff, this command does not require first creating a Docker image.
If you want to see a certain file content, I would suggest using docker container cp command. Here is the doc. It works on stopped container. Example:
docker container cp 02b1ef7de80a:/etc/nginx/conf.d/default.conf ./
This way I got the config file that was generated by the templating engine during startup.
Try using container-diff with the --type=file option. This will compare two images and report the files added, deleted and modified.
This tool requires that you first create an image of the stopped Docker container with docker commit.
Here is the command to install it:
curl -LO https://storage.googleapis.com/container-diff/latest/container-diff-linux-amd64 \
&& chmod +x container-diff-linux-amd64 \
&& mkdir -p $HOME/bin \
&& export PATH=$PATH:$HOME/bin \
&& mv container-diff-linux-amd64 $HOME/bin/container-diff
Here is the command to use the utility:
container-diff analyze $IMAGE --type=file
docker container cp <STOPPED_CONTAINER_ID>:<PATH_TO_FILE> -
Notice the "-" at the end of the command.
It actually "copies" the specified file from the stopped container into "stdout". In other words, it just prints the file contents.
Thanks @azat-khadiev for the pointer (I don't know why you got -1 for that answer...)
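Since the stream is a tar archive, you can pipe it through tar to get the plain file contents (a sketch; -x extracts and -O writes to stdout):
docker container cp <STOPPED_CONTAINER_ID>:<PATH_TO_FILE> - | tar -xO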
I would like to start a stopped Docker container with a different command, as the default command crashes - meaning I can't start the container and then use 'docker exec'.
Basically I would like to start a shell so I can inspect the contents of the container.
Luckily I created the container with the -it option!
Find your stopped container id
docker ps -a
Commit the stopped container:
This command saves modified container state into a new image named user/test_image:
docker commit $CONTAINER_ID user/test_image
Start/run with a different entry point:
docker run -ti --entrypoint=sh user/test_image
Entrypoint argument description:
https://docs.docker.com/engine/reference/run/#/entrypoint-default-command-to-execute-at-runtime
Note:
The steps above just start a stopped container with the same filesystem state. That is great for a quick investigation, but environment variables, network configuration, attached volumes and other settings are not inherited; you should specify all these arguments explicitly.
Steps to start a stopped container have been borrowed from here: (last comment) https://github.com/docker/docker/issues/18078
Edit this file (corresponding to your stopped container):
vi /var/lib/docker/containers/923...4f6/config.json
Change the "Path" parameter to point at your new command, e.g. /bin/bash. You may also set the "Args" parameter to pass arguments to the command.
Restart the docker service (note this will stop all running containers unless you first enable live-restore):
service docker restart
List your containers and make sure the command has changed:
docker ps -a
Start the container and attach to it, you should now be in your shell!
docker start -ai mad_brattain
Worked on Fedora 22 using Docker 1.7.1.
NOTE: If your shell is not interactive (e.g. you did not create the original container with -it option), you can instead change the command to "/bin/sleep 600" or "/bin/tail -f /dev/null" to give you enough time to do "docker exec -it CONTID /bin/bash" as another way of getting a shell.
NOTE2: Newer versions of docker have config.v2.json, where you will need to change either Entrypoint or Cmd (thanks user60561).
Add a check to the top of your Entrypoint script
Docker really needs to implement this as a new feature, but here's another workaround option for situations in which you have an Entrypoint that terminates after success or failure, which can make it difficult to debug.
If you don't already have an Entrypoint script, create one that runs whatever command(s) you need for your container. Then, at the top of this file, add these lines to entrypoint.sh:
# Run once, hold otherwise
if [ -f "already_ran" ]; then
echo "Already ran the Entrypoint once. Holding indefinitely for debugging."
cat
fi
touch already_ran
# Do your main things down here
To ensure that cat holds the connection, you may need to provide a TTY. I'm running the container with my Entrypoint script like so:
docker run -t --entrypoint entrypoint.sh image_name
This will cause the script to run once, creating a file that indicates it has already run (in the container's virtual filesystem). You can then restart the container to perform debugging:
docker start container_name
When you restart the container, the already_ran file will be found, causing the Entrypoint script to stall with cat (which just waits forever for input that will never come, but keeps the container alive). You can then execute a debugging bash session:
docker exec -i container_name bash
While the container is running, you can also remove already_ran and manually execute the entrypoint.sh script to rerun it, if you need to debug that way.
I took #Dmitriusan's answer and made it into an alias:
alias docker-run-prev-container='prev_container_id="$(docker ps -aq | head -n1)" && docker commit "$prev_container_id" "prev_container/$prev_container_id" && docker run -it --entrypoint=bash "prev_container/$prev_container_id"'
Add this into your ~/.bashrc aliases file, and you'll have a nifty new docker-run-prev-container alias which'll drop you into a shell in the previous container.
Helpful for debugging failed docker builds.
This is not exactly what you're asking for, but you can use docker export on a stopped container if all you want is to inspect the files.
mkdir $TARGET_DIR
docker export $CONTAINER_ID | tar -x -C $TARGET_DIR
docker-compose run --entrypoint /bin/bash name_of_service
(for convenience, put your env vars and volume mounts in the docker-compose.yml)
or use docker run and manually specify all the args
It seems docker can't change the entrypoint after a container has started, but you can set a custom entrypoint and change its code before the next restart.
For example, say you run a container like this:
docker run --name c --entrypoint "/boot" -v "$PWD/boot":/boot $image
Here is the boot entry point:
#!/bin/bash
command_a
When you need to restart c with a different command, just change the boot script:
#!/bin/bash
command_b
And restart:
docker restart c
My problem:
I started a container with docker run <IMAGE_NAME>,
and then added some files to this container.
Then I closed the container and tried to start it again with the same command as above,
but when I checked, the new files were missing;
when I ran docker ps -a I could see two containers.
That means every time I ran the docker run <IMAGE_NAME> command, a new container was being created.
Solution:
To work on the same container you created in the first place, follow these steps:
docker ps -a to get the container ID of your container
docker container start <CONTAINER_ID> to start the existing container
Then you can continue from where you left. e.g. docker exec -it <CONTAINER_ID> /bin/bash
You can then decide to create a new image out of it
I have found a simple command
docker start -a [container_name]
This will do the trick
Or
docker start [container_name]
then
docker exec -it [container_name] bash
I had a setup where the MariaDB container was continuously crashing on startup because of corrupted InnoDB tables.
What I did to solve the problem was the following (a command sketch follows the list):
copy docker-entrypoint.sh out of the container to the local filesystem (docker cp);
edit it to include the needed command line parameter (--innodb-force-recovery=1 in my case);
copy the edited file back into the docker container, overwriting the existing entrypoint script.
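A sketch of those steps (mariadb1 is a hypothetical container name; the entrypoint path may differ between images):
docker cp mariadb1:/usr/local/bin/docker-entrypoint.sh .
# edit docker-entrypoint.sh locally to add --innodb-force-recovery=1
docker cp docker-entrypoint.sh mariadb1:/usr/local/bin/docker-entrypoint.sh
docker start mariadb1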
To me, Docker always leaves the impression that it was created as a hobby system; it works well for that.
If something fails or doesn't work, don't expect a professional solution.
That said: Docker not only does NOT support such basic administrative tasks, it actively tries to prevent them.
Solution:
cd /var/lib/docker/overlay2/
find | grep somechangedfile
# You now can see the changed file from your container in a hexcoded folder/diff
cd hexcoded-folder/diff
Create an entrypoint.sh (make sure to back up an existing one if it's there):
cat > entrypoint.sh
#!/bin/bash
while ((1)); do sleep 1; done;
Ctrl+C (to end the input to cat)
chmod +x entrypoint.sh
docker stop <container>
docker start <container>
You now have your docker container running an endless loop instead of the originally entry, you can exec bash into it, or do whatever you need.
When finished stop the container, remove/rename your custom entrypoint.
It seems like most of the time people are running into this while modifying a config file, which is what I did. I was trying to bypass CORS for a PHP/Apache server with a Vue SPA as my entry point. Anyway, if you know the file you horked, a simple solution that worked for me was:
Copy the file you horked out of the image:
docker cp bt-php:/etc/apache2/apache2.conf .
Fix it locally
Copy it back in
docker cp apache2.conf bt-php:/etc/apache2/apache2.conf
Start your container back up
*Bonus points - Since this file is being modified, add it to your Compose or Build scripts so that when you do get it right it will be baked into the image!
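For example, a single Dockerfile line covers the apache2.conf case above:
COPY apache2.conf /etc/apache2/apache2.conf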
Lots of discussion surrounding this, so I thought I would add one more option that I did not immediately see listed above:
If the full path to the container's entrypoint is known (or discoverable via inspection), it can be copied in and out of the stopped container using docker cp. This means you can copy the original out of the container, edit a copy of it to start a bash shell (or a long sleep timer) instead of whatever it was doing, and then restart the container. The running container can now be further edited with the bash shell to correct any problems. When finished, another docker cp of the original entrypoint back into the container and a re-restart should do the trick.
I have used this once to correct a 'quick fix' that I butterfingered and was no longer able to run the container with the normal entrypoint until it was corrected.
I also agree there should be a better way to do this via docker: Maybe an option to 'docker restart' that allows an alternate entrypoint? Hey, maybe that already works with '--entrypoint'? Not sure, didn't try it, left as exercise for reader, let me know if it works. :)
My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact out (in my case it's a .zip produced by sbt dist in ../target/), but I think this question also applies to jars, binaries, etc.
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use docker volumes, e.g.:
docker run -v hostdir:/out $IMAGENAME /bin/cp ../blah.zip /out
but I'm running boot2docker on OS X and I don't know how to directly write to my Mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script to extract blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure docker functionality (no bash needed).
edit: container does not even has to be run in this solution
edit2: thanks to @Jonathan Dumaine for --rm, so the container will be removed afterwards; I had just never tried it because it sounded illogical to copy something from somewhere that has already been removed by the previous command, but I tried it and it works
edit3: due to the comments we found out that --rm is not working as expected; it does not remove the container because it never runs, so I added functionality to delete the created container afterwards (--name tc = temporary-container)
edit 4: this error appeared; it seems like a bug in docker, because t is in a-z and this did not happen a few months before:
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
A much faster option is to copy the file from running container to a mounted volume:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
The parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update: as noted by @Elytscha Smith, this only works if your image has bash built in.
Not a direct answer to the question details, but in general, once you pulled an image, the image is stored on your system and so are all its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using docker image inspect IMAGE_NAME:TAG; look for a GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq '.[0].GraphDriver.Data'
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
First, pull the docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container ID in a variable:
container_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the docker container to the host:
docker cp $container_id:/path/in/container /path/in/host
Once the files/folders are copied out, delete the container using docker rm:
docker rm -v $container_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on macOS. I can assure you that scripts based on docker cp are portable: every command is relayed inside boot2docker, but the binary stream is relayed back to the docker command line client running on your Mac, so write operations from the docker client are executed inside the server and written back to the executing client instance!
I am sharing a backup script for docker volumes with any docker container I provide and my backup scripts are tested both on linux and MacOS with boot2docker. The backups can be easily exchanged between platforms. Basically I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volume of my jenkins container, which is named jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between the linux container and my mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and tutorial here: blacklabelops/jenkins
You could bind a local path on the host to a path in the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.