I am trying to build a backup and restore solution for the Docker containers that we work with.
I have a Docker base image that I created, ubuntu:base, and I do not want to have to rebuild it each time with a Dockerfile just to add files to it.
I want to create a script that runs from the host machine and creates a new container using the ubuntu:base Docker image and then copies files into that container.
How can I copy files from the host to the container?
The docker cp command can be used to copy files.
One specific file can be copied TO the container like:
docker cp foo.txt container_id:/foo.txt
One specific file can be copied FROM the container like:
docker cp container_id:/foo.txt foo.txt
For emphasis, container_id is a container ID, not an image ID. (Use docker ps to view a listing that includes container IDs.)
Multiple files contained by the folder src can be copied into the target folder using:
docker cp src/. container_id:/target
docker cp container_id:/src/. target
Reference: Docker CLI docs for cp
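To tie this back to the original question, here is a minimal sketch of a host-side script built on docker create, docker cp, and docker start; the container name my_backup_container and the ./backup-files directory are placeholder assumptions:
#!/bin/sh
# Sketch: create a container from the custom base image, copy files in, start it.
# Names and paths are placeholders; adjust them to your setup.
docker create --name my_backup_container ubuntu:base sleep infinity
docker cp ./backup-files my_backup_container:/opt/
docker start my_backup_container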
In Docker versions prior to 1.8 it was only possible to copy files from a container to the host, not from the host to a container.
Get container name or short container id:
$ docker ps
Get full container id:
$ docker inspect -f '{{.Id}}' SHORT_CONTAINER_ID-or-CONTAINER_NAME
Copy file:
$ sudo cp path-file-host /var/lib/docker/aufs/mnt/FULL_CONTAINER_ID/PATH-NEW-FILE
EXAMPLE:
$ docker ps
CONTAINER ID   IMAGE                  COMMAND             CREATED   STATUS   PORTS   NAMES
d8e703d7e303   solidleon/ssh:latest   /usr/sbin/sshd -D                              cranky_pare
$ docker inspect -f '{{.Id}}' cranky_pare
or
$ docker inspect -f '{{.Id}}' d8e703d7e303
d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5
$ sudo cp file.txt /var/lib/docker/aufs/mnt/d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5/root/file.txt
The cleanest way is to mount a host directory on the container when starting the container:
{host} docker run -v /path/to/hostdir:/mnt --name my_container my_image
{host} docker exec -it my_container bash
{container} cp /mnt/sourcefile /path/to/destfile
Typically there are three types:
1. From a container to the host
docker cp container_id:./bar/foo.txt .
(Note that the docker cp command works in both directions.)
2. From the host to a container
docker exec -i container_id sh -c 'cat > ./bar/foo.txt' < ./foo.txt
Second approach to copy from host to container:
docker cp foo.txt mycontainer:/foo.txt
3. From a container to a container: combine approaches 1 and 2
docker cp container_id1:./bar/foo.txt .
docker exec -i container_id2 sh -c 'cat > ./bar/foo.txt' < ./foo.txt
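If tar is available in both containers, the container-to-container case can also be collapsed into a single pipe with no temporary file on the host; a sketch, assuming /bar exists in both containers:
# Stream the file directly from one container into the other.
docker exec container_id1 tar -c -C /bar foo.txt | docker exec -i container_id2 tar -x -C /bar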
The following is a fairly ugly way of doing it but it works.
docker run -i ubuntu /bin/bash -c 'cat > file' < file
If you need to do this on a running container you can use docker exec (added in 1.3).
First, find the container's name or ID:
$ docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED         STATUS         PORTS   NAMES
b9b7400ddd8f   ubuntu:latest   "/bin/bash"   2 seconds ago   Up 2 seconds           elated_hodgkin
In the example above we can either use b9b7400ddd8f or elated_hodgkin.
If you wanted to copy everything in /tmp/somefiles on the host to /var/www in the container:
$ cd /tmp/somefiles
$ tar -cv * | docker exec -i elated_hodgkin tar x -C /var/www
We can then exec /bin/bash in the container and verify it worked:
$ docker exec -it elated_hodgkin /bin/bash
root@b9b7400ddd8f:/# ls /var/www
file1 file2
Create a new Dockerfile and use the existing image as your base.
FROM myName/myImage:latest
ADD myFile.py bin/myFile.py
Then build the image:
docker build .
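Tagging the build makes the resulting image easier to reference; the tag name below is only an example:
docker build -t myName/myImage:with-file .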
The solution is given below.
From the Docker shell:
root@123abc:/root# <-- get the container ID
From the host
cp thefile.txt /var/lib/docker/devicemapper/mnt/123abc<bunch-o-hex>/rootfs/root
The file is copied directly to the location where the container's filesystem sits on the host.
Another solution for copying files into a running container is using tar:
tar -c foo.sh | docker exec -i theDockerContainer /bin/tar -C /tmp -x
Copies the file foo.sh into /tmp of the container.
Edit: Removed the redundant -f, thanks to Maarten's comment.
To copy a file from host to running container
docker exec -i $CONTAINER /bin/bash -c "cat > $CONTAINER_PATH" < $HOST_PATH
Based on Erik's answer and Mikl's and z0r's comments.
This is a direct answer to the question 'Copying files from host to Docker container' raised in the title.
Try docker cp. It is the easiest way to do that and works even on my Mac. Usage:
docker cp /root/some-file.txt some-docker-container:/root
This will copy the file some-file.txt from the directory /root on your host machine into the directory /root of the Docker container named some-docker-container. It is very close to the secure copy (scp) syntax. And, as shown in the previous post, you can use it vice versa, i.e. also copy files from the container to the host.
And before you downvote this post, please enter docker cp --help. Reading the documentation can be very helpful, sometimes...
If you don't like that way and you want data volumes in your already created and running container, then recreation is your only option today. See also How can I add a volume to an existing Docker container?.
I tried most of the (upvoted) solutions here, but in Docker 17.09 (in 2018) there is no longer a /var/lib/docker/aufs folder.
This simple docker cp command solved the task:
docker cp c:\path\to\local\file container_name:/path/to/target/dir/
How to get container_name?
docker ps
There is a NAMES column. Don't use the IMAGE column.
With Docker 1.8, docker cp is able to copy files from host to container. See the Docker blog post Announcing Docker 1.8: Content Trust, Toolbox, and Updates to Registry and Orchestration.
To copy files/folders between a container and the local filesystem, type the command:
docker cp {SOURCE_FILE} {DESTINATION_CONTAINER_ID}:/{DESTINATION_PATH}
For example,
docker cp /home/foo container-id:/home/dir
To get the container ID, type the given command:
docker ps
The above content is taken from docker.com.
Assuming the container is already running, type the given command:
# cat /path/to/host/file | docker exec -i <container_id> bash -c "/bin/cat > /path/to/container/file"
To share files using shared directory, run the container by typing the given command:
# docker run -v /path/to/host/dir:/path/to/container/dir ...
Note: Problems with permissions might arise as container's users are not the same as the host's users.
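One common way to soften the permissions issue is to run the container with your own UID/GID via the --user flag, so files written under the mount keep your ownership; a sketch, with placeholder paths and the stock ubuntu image:
# Run as the calling host user so files created in the shared directory are owned by you.
docker run --user "$(id -u):$(id -g)" -v /path/to/host/dir:/path/to/container/dir ubuntu touch /path/to/container/dir/test-file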
This is the command to copy data from Docker to Host:
docker cp container_id:/path/filename /host/path
docker cp a13fb9c9e674:/tmp/dgController.log /tmp/
Below is the command to copy data from host to docker:
docker cp a.txt ccfbeb35116b:/home/
Syntax to run a container with a mounted host folder:
docker run -v /HOST/folder:/Container/folder
In a Dockerfile:
COPY hom* /myFolder/ # adds all files starting with "hom"
COPY hom?.txt /myFolder/ # ? is replaced with any single character, e.g., "home.txt"
In a Docker environment (with the aufs storage driver), all containers are found in the directory:
/var/lib/docker/aufs/required-docker-id/
To copy the source directory/file to any part of the container, type the given command:
sudo cp -r mydir/ /var/lib/docker/aufs/mnt/required-docker-id/mnt/
The docker cp command is a handy utility that allows you to copy files and folders between a container and the host system.
If you want to copy files from your host system to the container, you should use docker cp command like this:
docker cp host_source_path container:destination_path
List your running containers first using docker ps command:
abhishek@linuxhandbook:~$ sudo docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED         STATUS         PORTS   NAMES
8353c6f43fba   775349758637   "bash"    8 seconds ago   Up 7 seconds           ubu_container
You need to know either the container ID or the container name. In my case, the container name is ubu_container and the container ID is 8353c6f43fba.
If you want to verify that the files have been copied successfully, you can enter your container in the following manner and then use regular Linux commands:
docker exec -it ubu_container bash
Copy files from host system to docker container
Copying with docker cp is similar to the copy command in Linux.
I am going to copy a file named a.py to the /home/dir1 directory in the container.
docker cp a.py ubu_container:/home/dir1
If the file is successfully copied, you won’t see any output on the screen. If the destination path doesn’t exist, you would see an error:
abhishek@linuxhandbook:~$ sudo docker cp a.txt ubu_container:/home/dir2/subsub
Error: No such container:path: ubu_container:/home/dir2
If the destination file already exists, it will be overwritten without any warning.
You may also use container ID instead of the container name:
docker cp a.py 8353c6f43fba:/home/dir1
If the host is CentOS or Fedora, the container filesystem is not under /var/lib/docker/aufs; instead, it is reachable under /proc:
cp -r /home/user/mydata/* /proc/$(docker inspect --format "{{.State.Pid}}" <containerid>)/root
This command will copy all the contents of the data directory into / of the container with ID "containerid".
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
The destination path must already exist.
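If the target directory might be missing, you can create it first with docker exec; a sketch with assumed container and path names:
# Ensure the destination directory exists inside the running container, then copy.
docker exec mycontainer mkdir -p /opt/data
docker cp foo.txt mycontainer:/opt/data/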
tar and docker cp are a good combo for copying everything in a directory.
Create a data volume container
docker create --name dvc --volume /path/on/container cirros
To preserve the directory hierarchy
tar -c -C /path/on/local/machine . | docker cp - dvc:/path/on/container
Check your work
docker run --rm --volumes-from dvc cirros ls -al /path/on/container
Many that find this question may actually have the problem of copying files into a Docker image while it is being created (I did).
In that case, you can use the COPY command in the Dockerfile that you use to create the image.
See the documentation.
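For example, a minimal sketch of such a Dockerfile, assuming a base image named ubuntu:base and a local files/ directory (both placeholders):
FROM ubuntu:base
COPY files/ /opt/files/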
In case it is not clear to someone like me what mycontainer in @h3nrik's answer means: it is actually the container ID (a container name works too). To copy a file WarpSquare.mp4 from /app/example_scenes/1440p60 in an exited docker container to the current folder, I used this:
docker cp `docker ps -q -l`:/app/example_scenes/1440p60/WarpSquare.mp4 .
where docker ps -q -l pulls up the container ID of the last exited instance. If it is not an exited container, you can get the ID from docker container ls or docker ps.
docker cp SRC_PATH CONTAINER_ID:DEST_PATH
For example, I want to copy my file xxxx/download/jenkins.war to Tomcat.
I start by getting the ID of the Tomcat container:
docker ps
CONTAINER ID   IMAGE    COMMAND             CREATED          STATUS          PORTS                    NAMES
63686740b488   tomcat   "catalina.sh run"   12 seconds ago   Up 11 seconds   0.0.0.0:8080->8080/tcp   peaceful_babbage
docker cp xxxx/download/jenkins.war 63686740b488:/usr/local/tomcat/webapps/
This is a one-liner for copying a single file while running a Tomcat container.
docker run -v /PATH_TO_WAR/sample.war:/usr/local/tomcat/webapps/myapp.war -it -p 8080:8080 tomcat
This will copy the war file to webapps directory and get your app running in no time.
My favorite method:
Get the full container ID:
CONTAINER_ID=$(docker ps | grep <string> | awk '{ print $1 }' | xargs docker inspect -f '{{.Id}}')
Then move file.txt into the container's filesystem:
mv -f file.txt /var/lib/docker/devicemapper/mnt/$CONTAINER_ID/rootfs/root/file.txt
or
mv -f file.txt /var/lib/docker/aufs/mnt/$CONTAINER_ID/rootfs/root/file.txt
The best way for copying files to the container I found is mounting a directory on host using -v option of docker run command.
There are good answers, but they are too specific. I found that docker ps is a good way to get the container ID you're interested in. Then do
mount | grep <id>
to see where the volume is mounted. That's
/var/lib/docker/devicemapper/mnt/<id>/rootfs/
for me, but it might be a different path depending on the OS and configuration. Now simply copy files to that path.
Using -v is not always practical.
Try docker cp.
Usage:
docker cp CONTAINER:PATH HOSTPATH
It copies files/folders from PATH in the container to HOSTPATH on the host.
Related
I did a docker pull and can list the image that's downloaded. I want to see the contents of this image. Did a search on the net but no straight answer.
If the image contains a shell, you can run an interactive shell container using that image and explore whatever content that image has. If sh is not available, the busybox ash shell might be.
For instance:
docker run -it image_name sh
Or following for images with an entrypoint
docker run -it --entrypoint sh image_name
Or if you want to see how the image was built, meaning the steps in its Dockerfile, you can:
docker image history --no-trunc image_name > image_history
The steps will be logged into the image_history file.
You should not start a container just to see the image contents. For instance, you might want to look for malicious content, not run it. Use "create" instead of "run";
docker create --name="tmp_$$" image:tag
docker export tmp_$$ | tar t
docker rm tmp_$$
The accepted answer here is problematic, because there is no guarantee that an image will have any sort of interactive shell. For example, the drone/drone image contains only a single command, /drone, and it has an ENTRYPOINT as well, so this will fail:
$ docker run -it drone/drone sh
FATA[0000] DRONE_HOST is not properly configured
And this will fail:
$ docker run --rm -it --entrypoint sh drone/drone
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"sh\": executable file not found in $PATH".
This is not an uncommon configuration; many minimal images contain only the binaries necessary to support the target service. Fortunately, there are mechanisms for exploring an image filesystem that do not depend on the contents of the image. The easiest is probably the docker export command, which will export a container filesystem as a tar archive. So, start a container (it does not matter if it fails or not):
$ docker run -it drone/drone sh
FATA[0000] DRONE_HOST is not properly configured
Then use docker export to export the filesystem to tar:
$ docker export $(docker ps -lq) | tar tf -
The docker ps -lq there means "give me the id of the most recent docker container". You could replace that with an explicit container name or id.
docker save nginx > nginx.tar
tar -xvf nginx.tar
The following files are present:
manifest.json – describes the filesystem layers and the name of the JSON file that holds the container properties.
<id>.json – the container properties.
<layer-id> directories – each layer directory contains a JSON file describing the layer's properties and the filesystem associated with that layer. Docker stores container images as layers to optimize storage space by reusing layers across images.
https://sreeninet.wordpress.com/2016/06/11/looking-inside-container-images/
OR
you can use dive to view the image content interactively with TUI
https://github.com/wagoodman/dive
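Once installed, dive takes an image reference as its argument and opens an interactive layer browser; the image name here is just an example:
dive nginx:latest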
EXPLORING DOCKER IMAGE!
Figure out what kind of shell is in there, bash or sh...
Inspect the image first: docker inspect name-of-container-or-image
Look for entrypoint or cmd in the JSON return.
Then do: docker run --rm -it --entrypoint=/bin/bash name-of-image
once inside do: ls -lsa or any other shell command like: cd ..
The -it stands for interactive... and TTY. The --rm stands for remove container after run.
If there are no common tools like ls or bash present and you have access to the Dockerfile, simply add the common tool as a layer.
example (alpine Linux):
RUN apk add --no-cache bash
And when you don't have access to the Dockerfile then just copy/extract the files from a newly created container and look through them:
docker create <image> # returns a container ID; the container is never started.
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah
To list the detailed contents of an image, you can run docker run --rm image/name ls -alR, where --rm means the container is removed as soon as it exits.
If you want to list the files in an image without starting a container :
docker create --name listfiles <image name>
docker export listfiles | tar -t
docker rm listfiles
We can try a simpler one as follows:
docker image inspect image_id
This worked in Docker version:
"DockerVersion": "18.05.0-ce"
If you want to check the image contents without running it, you can do this:
$ sudo bash
...
$ cd /var/lib/docker # default path in most installations
$ find . -iname a_file_inside_the_image.ext
... (will find the base path here)
This works fine when BTRFS is the storage driver in use.
One-liner, no docker run (based on the responses above):
IMAGE=your_image; docker create --name filelist $IMAGE command && docker export filelist | tar tf - | tree --fromfile . && docker rm filelist
Same, but write the tree structure to result.txt:
IMAGE=your_image; docker create --name filelist $IMAGE command && docker export filelist | tar tf - | tree --noreport --fromfile . | tee result.txt && docker rm filelist
I tried this tool - https://github.com/wagoodman/dive
I found it quite helpful to explore the content of the docker image.
Perhaps this is not a very straightforward approach, but it worked for me.
I had an ECR repo (Amazon Elastic Container Registry) whose code I wanted to see.
First we need to save the repo you want to access as a tar file. In my case the command went like - docker save .dkr.ecr.us-east-1.amazonaws.com/<name_of_repo>:image-tag > saved-repo.tar
Untar the file using the command - tar -xvf saved-repo.tar. You will see many folders and files.
Now try to find the file which contain the code you are looking for (if you know some part of the code)
Command for searching the file - grep -iRl "string you want to search" ./
This will lead you to the file. It can happen that even that file is tarred, so untar it using the command mentioned in step 2.
If you don't know the code you are searching for, you will need to go through all the files that you got after step 2, and this can be a bit tiring.
All the Best !
There is a free open source tool called Anchore-CLI that you can use to scan container images. This command will allow you to list all files in a container image
anchore-cli image content myrepo/app:latest files
https://anchore.com/opensource/
EDIT: It is not available from anchore.com anymore; it's a Python program you can install from https://github.com/anchore/anchore-cli
With Docker EE for Windows (17.06.2-ee-6 on Hyper-V Server 2016), all contents of Windows containers can be examined at the C:\ProgramData\docker\windowsfilter\ path of the host OS.
No special mounting is needed.
The folder prefix can be found by matching the container ID from the docker ps -a output.
I am trying to docker exec a container that is built from scratch (say, a NATS container). It seems pretty straightforward, but since it is built from scratch, I am unable to access /bin/bash, /bin/sh, or literally any such command.
I get the error: oci runtime error (command not found, file not found, etc. depending upon the command that I enter).
I tried some commands like:
docker exec -it <container name> /bin/bash
docker exec -it <container name> /bin/sh
docker exec -it <container name> ls
My question is, how do I docker exec a container that is built from scratch and consisting only of binaries? By doing a docker exec, I wish to find out if the files have been successfully copied from my host to the container (I have a COPY in the Dockerfile).
If your scratch container is running you can copy a shell (and other needed utils) into its filesystem and then exec it. The shell would need to be a static binary. Busybox is a great choice here because it can double as so many other binaries.
Full example:
# Assumes scratch container is last launched one, else replace with container ID of
# scratch image, e.g. from `docker ps`, for example:
# scratch_container_id=401b31621b36
scratch_container_id=$(docker ps -ql)
docker run -d busybox:latest sleep 100
busybox_container_id=$(docker ps -ql)
docker cp "$busybox_container_id":/bin/busybox .
# The busybox binary will become whatever you name it (or the first arg you pass to it), for more info run:
# docker run busybox:latest /bin/busybox
# The `busybox --install` command copies the binary with different names into a directory.
docker cp ./busybox "$scratch_container_id":/busybox
docker exec -it "$scratch_container_id" /busybox sh -c '
export PATH="/busybin:$PATH"
/busybox mkdir /busybin
/busybox --install /busybin
sh'
For Kubernetes I think Ephemeral Containers provide or will provide equivalent functionality.
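As a rough sketch of that Kubernetes approach on a recent cluster (the pod name mypod and the target container name app are assumptions):
# Attach a busybox-based ephemeral debug container to a running pod.
kubectl debug -it mypod --image=busybox --target=app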
References:
distroless java docker image error
https://github.com/GoogleContainerTools/distroless/issues/168#issuecomment-371077961
There are several options.
You can do docker container cp ${CONTAINER}:/path/to/file/on/container /path/to/temp/dir/on/host. This will copy the files to your host where you can inspect things using host tools.
You can add an appropriate VOLUME to your Dockerfile. Then you can docker container inspect ${CONTAINER}. This will expose the volume name where the files should be. You can then inspect those in another container (based off an image with all the tools you need).
You can at runtime bind the container to a volume or host directory at the appropriate place.
You can add those binaries that you feel you need to the image. If you need /bin/ls or /bin/sh, then you can add them.
You can bind mount the necessary binaries to the container - so the container has them for verification purposes but the image is not bloated by them.
You can only use docker exec to run commands that actually exist in a container. If those commands don't exist, you can't run them. As you've noted, the scratch base image contains nothing – no shells, no libraries, no system files, nothing.
If all you're trying to check is if a Dockerfile COPY command actually copied the files you said it would, I'd generally assume the tooling works and just reference the copied files in my application.
Since it sounds like you control the Dockerfile, one workaround could be to change the base image to something lightweight but non-empty, like FROM busybox. That would give you a minimal set of tools that you could work with without blowing up the image size too much.
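A minimal sketch of that idea, assuming your Dockerfile currently copies a single static binary onto scratch; the binary name is a placeholder:
FROM busybox
COPY my-static-binary /my-static-binary
ENTRYPOINT ["/my-static-binary"]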
I was trying to do the same kind of file check for my needs. I ended up using docker cp to copy the file out of the container. In my case I am using the NATS container, but you can use this with any other container running a scratch-based image:
sudo docker cp nats_nats_1:/nats-server.conf ./nats-server.conf
You can just grab the container identifier and throw it into a variable. For example, let's say the (truncated) output of docker ps -a lists your running container:
CONTAINER ID IMAGE
111111111111 neo4j-migrator
To continue the example, you can run docker exec -it using the variable you created. For example:
CONTAINER_ID=`docker ps -aqf "ancestor=neo4j-migrator"`
docker exec -it $CONTAINER_ID \
sh -c "/usr/bin/neo4j-migrations \
--password $NEO4J_PASSWORD \
--username $NEO4J_USERNAME \
--address $NEO4J_URI \
migrate"
When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume "$PWD/local":/remote myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to circumvent that overhead for only editing one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? OR to somehow get the directory easier than my oneliner above?
EDIT: after comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long UUID visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
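A quick way to see that initialization behavior for yourself, in the spirit of the test1/test2 commands above (the volume name mydata and container name test3 are examples):
# A named volume picks up the image's /etc/gitlab-runner contents on first use.
docker run -d --name test3 -v mydata:/etc/gitlab-runner gitlab/gitlab-runner
docker exec test3 ls /etc/gitlab-runner/
# The same files are now visible through the named volume from another container.
docker run --rm -v mydata:/data busybox ls /data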
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
Try to avoid modifying data inside a container from the host directly. It is much nicer to wrap your task into another container, which you then start with the --volumes-from option, when that is possible in your case.
Not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.
I need to pipe (inject) a file or some data into Docker as part of the run command and have it written to a file within the container as part of the startup. Is there a best-practice way to do this?
I've tried this.
cat data.txt | docker run -a stdin -a stdout -i -t ubuntu /bin/bash -c 'cat >/data.txt'
But can't seem to get it to work.
cat setup.json | docker run -i ubuntu /bin/bash -c 'cat'
This worked for me. Remove the -t. You don't need the -a flags either.
The better solution is to make (mount) your host folder accessible to the docker container. E.g. like this:
docker run -v /Users/<path>:/<container path> ...
Here /Users/<path> is a folder on your host computer and <container path> is the mounted path inside the container.
Also see Manage data in containers manual page.
UPDATE: see also another example, Accessing External Files from Docker Containers.