Possible to retrieve file from Docker image that is not a container? - docker

I have created an application that uses docker. I have built an image like so: docker build -t myapp .
While inside a container started from my image (using docker run -it myapp /bin/bash to access it), an image file is created.
I would like to obtain that file to view locally, as I have found that viewing image files inside Docker is a complex procedure.
Based on suggestions I found online, I tried the following: docker cp myapp:/result.png ./ but I get this error: Error response from daemon: No such container: myapp

Image name != container name
myapp is the name of the image, which is not a running container.
When you use docker run, you are creating a container which is based on the myapp image. It will be assigned an ID, which you can see with docker ps. Example:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aa58c8ff2f34 portainer/portainer "/portainer" 4 months ago Up 5 days 0.0.0.0:9909->9000/tcp portainer_portainer_1
Here you can see a container based on the portainer/portainer image. It has the ID aa58c8ff2f34.
Once you have the ID of your container, you can pass it to docker cp to copy your file.
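For example, if your container showed up with the ID aa58c8ff2f34 (substitute your own container ID and file path):
docker cp aa58c8ff2f34:/result.png ./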
Specifying the container name
Another approach, which may be preferable if you are automating / scripting something, is to specify the name of the container instead of having to look it up.
docker run -it --name mycontainer myapp /bin/bash
This will create a container named mycontainer. You can then supply that name to docker cp or other commands. Note that your container still has an ID like in the above example, but you can also use this name to refer to it.
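Putting it together for your case, a minimal sequence might look like this (the container and file names are just the ones from your question):
docker run -it --name mycontainer myapp /bin/bash
# ... inside the container, generate /result.png; then, from another terminal on the host:
docker cp mycontainer:/result.png ./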

You could map a local folder to a volume in the container, and then copy the file out that way.
docker run -it -v /place/to/save/file:/store myapp /bin/bash -c "cp /result.png /store/"
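If you prefer to keep the interactive session from your question, the same idea works in two steps (the host path is just an example and must exist):
mkdir -p /place/to/save/file
docker run -it -v /place/to/save/file:/store myapp /bin/bash
# inside the container, once result.png has been generated:
cp /result.png /store/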

Related

My changes were lost in new Docker container

Steps to reproduce:
Download and run postgres:9.6.24:
docker run --name my_container --restart=always -d -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=pgmypass postgres:9.6.24
Here is the result:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
879883bfc84a postgres:9.6.24 "docker-entrypoint.s…" 26 seconds ago Up 25 seconds 127.0.0.1:5432->5432/tcp my_container
OK.
Open the file /var/lib/postgresql/data/pg_hba.conf inside the container:
docker exec -it my_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
# IPv4 local connections:
host all all 127.0.0.1/32 trust
Replace the file /var/lib/postgresql/data/pg_hba.conf inside the container with my file, by copying it from the host and overwriting it:
tar --overwrite -c pg_hba.conf | docker exec -i my_container /bin/tar -C /var/lib/postgresql/data/ -x
Make sure the file has been modified. Go inside the container and open the changed file:
docker exec -it my_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
# IPv4 local connections:
host all all 0.0.0.0/0 trust
As you can see, the content of the file was changed.
Create a new image from the container:
docker commit my_container
See result:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> ee57ad4bc6b4 3 seconds ago 200MB
postgres 9.6.24 027ccf656dc1 12 months ago 200MB
Now tag my new image
docker tag ee57ad4bc6b4 my_new_image:1.0.0
See result:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
my_new_image 1.0.0 ee57ad4bc6b4 About a minute ago 200MB
postgres 9.6.24 027ccf656dc1 12 months ago 200MB
OK.
Stop and delete the old container:
docker stop my_container
docker rm my_container
See result:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
As you can see, no container exists anymore. OK.
Create a new container from the new image:
docker run --name my_new_container --restart=always -d -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=pg1210 my_new_image:1.0.0
See result:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a965dbbd991 my_new_image:1.0.0 "docker-entrypoint.s…" 7 seconds ago Up 6 seconds 127.0.0.1:5432->5432/tcp my_new_container
Open the file /var/lib/postgresql/data/pg_hba.conf inside the container:
docker exec -it my_new_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
# IPv4 local connections:
host all all 127.0.0.1/32 trust
As you can see, my changes to the file are lost. The content of the file is the original, not my modified version.
P.S. This problem occurs only with the file pg_hba.conf. For example, if I create a folder and a file in the container, such as /Downloads/myfile.txt, that file is not lost in my new container "my_new_container".
Editing files inside a container with docker exec will, in general, cause you to lose work. You mention docker commit, but that's almost never a best practice. (Even if this was successful, if you then discovered that PostgreSQL 9.6.24 specifically had some critical bug and you had to upgrade, could you recreate the exact same image?)
In the case of the postgres image, the files in /var/lib/postgresql/data are always stored in a Docker volume or mount point. In your case you didn't use a docker run -v option, but the image is configured to create an anonymous volume in that directory. The volume is not included in docker commit, which is why you're not seeing it on the rebuilt container. (Also see docker postgres with initial data is not persisted over commits.)
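If you want to see that anonymous volume for yourself, inspecting the container's mounts should show it (using the container name from your steps):
docker inspect -f '{{ json .Mounts }}' my_container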
For editing a configuration file, the easiest thing to do is to store the data on the host system. Create a directory to hold it, and extract the configuration file from the image. (Since the data directory is created by the image's startup script, you need a slightly longer path to get it out.)
mkdir pgdata
docker run -d --name pgtmp -e POSTGRES_PASSWORD=pgmypass postgres:9.6.24
docker cp pgtmp:/var/lib/postgresql/data/pg_hba.conf ./pgdata
docker stop pgtmp
docker rm pgtmp
$EDITOR pgdata/pg_hba.conf
Now when you run the container, provide this data directory as a bind mount. That will inject the configuration file, but also cause the database data to persist over container exits.
docker run -v "$PWD/pgdata:/var/lib/postgresql/data" -u $(id -u) ... postgres:9.6.24
Note that this sequence doesn't use docker exec or "go inside" containers at all, and you haven't created an image without corresponding source. Everything is run with commands from the host. If you do need to reset the database data, in this setup, it's just files, and you can rm -rf pgdata, maybe saving the modified configuration file along the way.
(If I'm reading this configuration change correctly, you're trying to globally disable passwords and instead allow trust authentication for all inbound connections. That's not usually a good idea, especially since username/password authentication is standard in every database library I've encountered. You probably still want the volume to persist data, but I might not make this change to pg_hba.conf.)
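If you do need to accept remote connections, a pg_hba.conf entry that still requires a password would look more like the following (md5 is the usual password method on 9.6):
host all all 0.0.0.0/0 md5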
A Docker container is an ephemeral entity: if you create a file inside the container, then remove it and re-create it (the container), the file is not supposed to be there.
What you want to do is one of two things:
Map your container's data to a local directory (a volume).
Create a Dockerfile based on the postgres image, and apply these modifications in a script that your Dockerfile adds (a sketch follows below the reference links).
docker volume usages
Dockerfile Reference
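A rough sketch of the second option, assuming an init script named 00-pg-hba.sh (the name is arbitrary); the official postgres entrypoint runs scripts from /docker-entrypoint-initdb.d/ once, right after initdb creates an empty data directory:
FROM postgres:9.6.24
COPY 00-pg-hba.sh /docker-entrypoint-initdb.d/
And 00-pg-hba.sh could be as small as:
#!/bin/bash
# PGDATA is set by the base image; append the desired rule to the freshly generated pg_hba.conf
echo "host all all 0.0.0.0/0 trust" >> "$PGDATA/pg_hba.conf"
Note this only takes effect on a fresh (empty) data directory, not on an already-initialized volume.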

Inject configuration into volume before Docker container starts

I am looking for a way to create a Docker volume and put some data on it just before a specific container is started - which needs the configuration on startup.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all of this has to be done in a compose file. If I were doing it manually, I could simply wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (a base OS image like Ubuntu works great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this within the container, which you are in from the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls app/
my_file
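If you just need to drop a file onto the volume before the real container starts, the same thing can be done without an interactive shell (the ./config directory and my_file are only examples, assuming the file you want to inject lives there on the host):
docker volume create test_volume
docker run --rm -v test_volume:/app -v "$PWD/config:/src:ro" ubuntu cp /src/my_file /app/
docker run --rm -v test_volume:/app ubuntu ls /app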

Docker Volume point to host Directory in Dockerfile

I have the following Dockerfile:
FROM jboss/wildfly
USER jboss
RUN mkdir -p /opt/jboss/wildfly/standalone/log
VOLUME /opt/jboss/wildfly/standalone/log
CMD /bin/bash
# CMD true
The resulting image is started with docker run -ti --name=data_volume data/volume. The next Dockerfile
FROM jboss/wildfly
RUN sed -i 's|<file relative-to="jboss.server.log.dir" path="server.log"/>|\<file relative-to="jboss.server.log.dir" path="\${jboss.host.name}-server.log"/\>|' \
    /opt/jboss/wildfly/standalone/configuration/standalone.xml
overrides the logging of the resulting JBoss instance to log to "servername"-server.log in the logging dir. When I start the resulting image with docker run -ti --name=wild-01 --volumes-from=data_volume my/wildfly and docker run -ti --name=wild-02 --volumes-from=data_volume my/wildfly, I have two log files in my data_volume container. So far so good.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
How can I achieve this in the Dockerfile and not with the -v parameter when running data/volume?
Thanks a lot in advance
Inside a Dockerfile you can only define volumes that live under /var/lib/docker/volumes (i.e. volumes that Docker manages itself); you cannot point them at an arbitrary host directory. This is because every host can be different from the others.
Docker uses /var/lib/docker as its "docker area", where it stores all Docker-related data. It's the one directory that's guaranteed to exist on every host, because it gets created on installation.
If you could point a volume in the Dockerfile at, let's say, /home/mbieren/docker_vol, the image would produce multiple errors when executed on a different host, as that directory does not exist there and the user probably has insufficient permissions to create it.
Docker gets around that problem by not allowing custom mount paths to be set in the Dockerfile.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
Remove all mention of volumes from your Dockerfile, then launch your container using
docker run -d -v /var/log/wildfly:/var/log/wildfly your-image-name
then in your code just reference the normal path
/var/log/wildfly
Your syntax to launch the container, docker run -ti, makes the container shell interactive, whereas -d is the normal mode to spin it up as a daemon running in the background.
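For example, starting it detached with the bind mount and then checking the host directory (the image name and paths are the ones discussed above; if your application still writes to /opt/jboss/wildfly/standalone/log, bind-mount that container path instead):
docker run -d --name wild-01 -v /var/log/wildfly:/var/log/wildfly my/wildfly
ls /var/log/wildfly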

how to share folder between host os and docker container

I have created a container from a docker image. The docker image is:
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/tensorflow/tensorflow latest-gpu 7f09e75cdc12 4 months ago 1.289 GB
And the container is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
e99c80d2d53e gcr.io/tensorflow/tensorflow:latest-gpu "/run_jupyter.sh" 21 hours ago Up 11 minutes 6006/tcp, 0.0.0.0:8888->8888/tcp deep
I need to share a folder between the host Ubuntu 16.04 OS and the docker container.
I ran this command for doing this:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
This didn't lead to the folder being loaded into the container deep. I don't know what to do after this and am really new to the container stuff in Docker. Please explain your answer a bit, too.
EDIT:
I deleted the container and then ran these commands:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker run -p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker exec -it deep bash
There is no folder called deep-learning in the /home/ folder in the container. What have I done wrong here?
There's no API I'm aware of to change the mounted volumes on a running container. You destroy the existing container (docker stop and docker rm) and create a new one with the proper configuration (docker run). If you find yourself trying to maintain a single container, upgrading apps inside the container or with data inside, odds are good that you're trying to recreate a VM rather than isolating a process, which is an anti-pattern.
From your edit, you didn't create the /home/deep-learning folder, you created the /home folder. You also appear to be creating a second container named deep without any volume mounts and exec'ing into that one. To make a container with the /home/deep-learning volume mount and the name deep, run it like:
docker run -v /home/cortana/deep-learning:/home/deep-learning \
-p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
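You can then verify the mount from the host (the path is the one mounted above):
docker exec -it deep ls /home/deep-learning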

Share and update docker data containers across containers

I have the following containers:
Data container, which is built directly on quay.io from a GitHub repo; it is basically a website.
FPM container
NGINX container
The three of them are linked together and work just fine. BUT the problem is that every time I change something in the website (the data container) it is rebuilt (of course), and I have to remove that container, and also the FPM and NGINX ones, and recreate them all to be able to see the new content.
I started with a "backup approach" for what I'm copying the data from the container to a host directory and mounting that into the FPM and NGINX containers, this way I can update the data without restarting/removing any service.
But the idea of moving the data from the data container onto the host really doesn't appeal to me. So I'm wondering if there is a "docker way", or a better way, of doing it.
Thanks!
UPDATE: Adding more context
Dockerfile data container definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Building the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1, hello.2, hello.3, and hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files aren't being updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container I can see the files
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result: hello.1 hello.2 hello.3 hello.4
