How to work with the files from a Docker container

I need to work with all the files from a Docker container, and my approach is to copy them from the container to my host.
I'm using the following Docker commands, for example with the postgres image:
docker create -ti --name dummy_1 postgres bash
docker cp dummy_1:/. Documents/docker/dockerOne
With this I have all the container's folders and files on my host.
The idea is then to traverse all the files with the Java API, work with them, and finally delete the local files and folders. But I would like to know if there is a better approach, maybe using Java to access the container's files directly instead of creating a local copy of them on my host.
Any ideas?

You can build a small server app inside your Docker container which serves the information you need on an exposed port. That's how I would do it.

Maybe I don't understand the question, but you can mount a volume when you run the container, rather than when you create it:
docker run -v /host/path:/container/path your_container
Any code in the container (e.g. Java) that modifies files at /container/path will be reflected on the host, with no need to copy anything back and forth. Similarly, any modifications on the host filesystem will be visible inside the container.
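For the postgres example from the question, that could look something like the following (a sketch: /var/lib/postgresql/data is just one path of interest, since a bind mount shares a specific path rather than the container's whole filesystem, and POSTGRES_PASSWORD is a placeholder value the image requires):
docker run -d --name dummy_1 -e POSTGRES_PASSWORD=secret -v ~/Documents/docker/dockerOne:/var/lib/postgresql/data postgres
Your Java code can then traverse ~/Documents/docker/dockerOne directly, with no copy step and nothing to clean up afterwards.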
I don't think I can implement an API in the docker container
Yes you can. You bind a TCP port using the -p flag.
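For example (a sketch; the image name and port are placeholders for whatever small server you build):
docker run -d -p 8080:8080 my-file-server-image
The server inside the container is then reachable from the host at localhost:8080.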

Related

Is there a way to override the host's folder with the container's folder using volumes in Docker?

I'm fairly new to using Docker and Docker Compose (I'm using Docker Compose for this particular problem). Here is what I know so far about the problem I am facing: when a volume is mounted and there are contents in both the host folder and the container's folder, the files inside the container's folder are hidden and the host's files are made available to the container instead.
I want it the other way round: I would like to make the container's files (those copied into the image in the Dockerfile) available to the host folder.
Is there a way to do that?
Here are a bunch of screenshots of my Dockerfile and Docker Compose to show my setup.
Dockerfile Screenshot
DockerCompose Screenshot
Thanks in advance! :)
I've come across the same thing many times and the way I go about it is as follows.
As the host volume will always take priority over the container filesystem, you have to copy the files out of the container to the host first, then volume mount them back in. This way you get what was there originally, plus anything the container might change later.
The following is all pseudo code, but should hopefully illustrate the concept:
First run the main container:
docker run --rm -d --name my-container registry/image-name
Then copy the files you want from it to the local filesystem
docker cp my-container:/files/i/want ./files
Then stop the original container
docker stop my-container
Then mount them back into the container on the next run
docker run --rm -d --name my-container -v "$(pwd)/files":/files/i/want registry/image-name
Obviously you've mentioned compose there also, so just reflect the volume mapping into the compose format; the copy step will still need to be done via the standard docker CLI, in line with the above.
Note: I wrote the above commands blind, but will check them over at lunch and correct any mistypes - but the concept is correct
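For reference, the volume mapping from the final run command might look like this in compose format (a sketch; the service name is a placeholder):
services:
  my-service:
    image: registry/image-name
    volumes:
      - ./files:/files/i/want
Note that compose, unlike docker run, accepts relative host paths like ./files.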

Docker: How to create an environment variable in the host machine that points to a directory in a docker container?

I am using Docker to run four containers for a backend web application. The backend uses buildout to assemble the software.
However, the frontend, which is installed and runs on the host machine (that is, not using Docker), needs to access the buildout directory inside one of the four docker containers.
Moreover, the frontend uses an environment variable called NTI_BUILDOUT_PATH that is defined on the host machine. NTI_BUILDOUT_PATH must point to the buildout directory, which is inside the aforementioned container.
My problem is that I do not know how to define NTI_BUILDOUT_PATH such that it points to the buildout directory, which the frontend needs for SSL certificates and other purposes.
I have researched around the web and read about volumes and bind mounts but I do not think they can help me in my case.
One way you can do that is by copying your buildout folder to the host machine using docker cp:
docker cp <backend-container-id>:<path-to-buildout> <path-to-host-folder>
For example, if your backend's container_id is d1b5365c5bca and your buildout folder is at /app/buildout inside the container, you can use the following command to copy it to the host:
docker cp d1b5365c5bca:/app/buildout /home/mahmoud/app/buildout
After that, docker rm all your containers and recreate them with a bind mount to the buildout folder on the host. Following the previous example, we'll have:
docker run -v /home/mahmoud/app/buildout:/app/buildout your-backend-image
docker run -v /home/mahmoud/app/buildout:/app/buildout -e NTI_BUILDOUT_PATH=/app/buildout your-frontend-image
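Since the frontend actually runs directly on the host, you can also simply export the variable there, pointing at the bind-mounted copy:
export NTI_BUILDOUT_PATH=/home/mahmoud/app/buildout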

Run commands on host from container command prompt

I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the container's command prompt. Is there any way to run simple commands like ls /home/ that list the files on the host?
In other words, is there any image that will mount the host server's filesystem "as-is"?
Here's an example using the docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach as far as you need to. Experiment with it, and learn about the different mount options.
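For instance, to browse the host's entire filesystem from a container, you could mount the root directory (read-only here, to be safe):
$ docker run --rm -it -v /:/host:ro alpine:latest /bin/sh
/ # ls /host/home/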
I know the Docker app on macOS provides a way to set default volume mounts. Portainer also claims to provide a volume management screen, though I have yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into the bash shell. You'll actually need to know if bash is installed, or try calling sh instead.
The "i" option indicates it should be interactive, and the "t" option indicates it should emulate a TTY terminal. When you're done, you can hit Ctrl+D to exit out of the container.
First of all: you never, ever want to do this.
Volumes mounted into containers are used to persist the container's data, because containers are designed to be volatile (the container itself shouldn't persist its state, so restarting the container any number of times should result in the same container state each time it starts). Think of the volume as the database where all the data (the state of the container) is stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire filesystem, as the container would then have read/write permissions over the host OS's own files, which is a huge security threat.
Sharing volumes across containers is considered bad container architecture, let alone sharing the entirety of the host filesystem.
I would propose simple SSH (or remote desktop) to your host if you need to run commands or tasks on it.
Or, if your container requires access to a specific folder for some reason, consider mounting or binding that folder to the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a Docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder, to minimize the impact of your container on your host's OS.
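A sketch of that copy step, reusing the myvol2 volume name from above (/host/folder is a placeholder for the folder your container needs):
docker volume create myvol2
docker run --rm -v /host/folder:/src:ro -v myvol2:/dest alpine cp -a /src/. /dest/
The throwaway alpine container copies the folder's contents into the managed volume, which you can then mount into your own container as shown above.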

Sharing files between container and host

I'm running a docker container with a volume /var/my_folder. The data there is persistent: when I close the container, it is still there.
But I also want the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link: Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
The problem is that mounting the volume will, for your purposes, overwrite the volume in the container.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script AND not as part of your build, THEN they will be available from the host machine once mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing this, during the build copy the files to another directory, and then copy them back in your ENTRYPOINT script.
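A minimal sketch of that last pattern (all names and paths here are illustrative):
# Dockerfile
COPY my_folder /opt/seed
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# placeholder main command
CMD ["my-app"]

# entrypoint.sh
#!/bin/sh
# populate the (possibly host-mounted) folder at run time, then hand off to CMD
cp -a /opt/seed/. /var/my_folder/
exec "$@"
Because the cp runs at container start, after the volume is mounted, the files end up visible on the host side of the mount.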

How to mount a directory in a Docker container to the host?

Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp inside the Docker container.
My question
I need these log files to be automatically saved on the host too, so how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also saw Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time; you cannot change volume mounts after you've started the container. Also, bind mounts go one way only: a host directory is mounted into the container, not the other way around. When you specify a host directory mounted as a volume in your container (for example: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a "regular ole" Linux mount --bind, which means the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, it is effectively overridden for the lifetime of the container.
So you're left with two options (maybe three). You could mount a host directory into your container and copy the files into it from your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files to that host-directory volume mount).
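A sketch of that first option, assuming the myapp image from the question, with the host's /var/lib/myapp mounted at /backup inside the container:
docker run -d --name myapp -p 8080:8080 -v /var/lib/myapp:/backup myapp:latest
with the startup script (or a cron job) inside the container periodically running something like cp -a /var/lib/myapp/. /backup/ to push the logs onto the host side of the mount.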
You could also use docker cp to move files from your container to your host. That is kinda hacky and definitely not something you should use in your infrastructure automation, but it works very well for one-off copies or debugging.
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server
That is the opposite: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see whether your logs appear under /var/lib/docker/volumes/... in the directory associated with your container's volume.
Or you can redirect the output of docker logs <containername> to a host file.
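For example, using the container name from the question:
docker logs myapp > myapp.log 2>&1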
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(Again, that applies to a container that you start, not to an existing running one.)
