expose files from docker container to host - docker

I have a Docker container that holds a Django app. The static files are produced and copied to a static folder.
Container folder hierarchy:
- var
  - django
    - app
    - static
Before I build the Docker image, I run ./manage.py collectstatic so the static files end up in the /var/django/static folder. To expose the app and serve the static files, I have nginx on the host. The problem is that if I set up a volume between the static folder and a designated folder on the host, then when I run the docker container, the /var/django/static folder in the container gets deleted (well, not deleted, but mounted over). Is there any way to overcome this? As in: set the volume, but tell Docker to take the current files as well?

Host-directory volumes are treated as mounts in Docker, which means the host directory will always be mounted over the container's directory. In other words, what you're trying to do isn't currently possible with Docker bind-mounted volumes.
See this GitHub issue for a discussion on the subject: https://github.com/docker/docker/issues/4361
One possible work-around is to mount the volume onto an empty directory in your container, and then have your container's entrypoint (or start-up script) copy the static content into that empty directory at start-up.
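A minimal sketch of that work-around, assuming the image keeps its own copy of the collected static files in /var/django/static_src (both paths and the script name are illustrative, not from the original answer):

```shell
# Write an entrypoint script that copies the baked-in static files into
# the (initially empty) directory the host volume is mounted over, then
# hands control to the main process.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
set -e
SRC=/var/django/static_src   # image's own copy of the collected static files
DEST=/var/django/static      # the host volume is mounted over this empty dir
mkdir -p "$DEST"
cp -R "$SRC/." "$DEST/"
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh
```

In the Dockerfile you would COPY the collected static files to /var/django/static_src, COPY this script into the image, and set it as the ENTRYPOINT so the copy runs on every start.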

Reading from Docker's volumes documentation:
Volumes have several advantages over bind mounts:
New volumes can have their content pre-populated by a container.
A similar example using the docker CLI, with nginx's default webpage folder:
$ docker volume create xmpl
$ docker run -v xmpl:/usr/share/nginx/html nginx
This pre-populates the volume with the image's files, and you can find its location on the host system via:
$ docker inspect xmpl
...
"Mountpoint": "/var/lib/docker/volumes/xmpl/_data"
And you can then view the files on the host:
# ls /var/lib/docker/volumes/xmpl/_data
50x.html index.html
And finally to use it from /var/nginx/static:
# mkdir -p /var/nginx
# ln -s /var/lib/docker/volumes/xmpl/_data /var/nginx/static
# ls /var/nginx/static
50x.html index.html

Another nice solution:
1. Install sshd inside the container
2. Install SSHFS on the host
3. Mount the folder inside the docker container out to the host via SSHFS
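A sketch of the SSHFS route, assuming the container's sshd is published on localhost port 2222 (the port, user, and paths are illustrative):

```shell
# Mount the container's static folder on the host over SSHFS.
MOUNTPOINT=/tmp/container-static
mkdir -p "$MOUNTPOINT"

if command -v sshfs >/dev/null 2>&1; then
    sshfs -p 2222 root@localhost:/var/django/static "$MOUNTPOINT" \
        || echo "mount failed: is the container's sshd reachable?"
    # Unmount when done:
    # fusermount -u "$MOUNTPOINT"
else
    echo "sshfs is not installed on this host"
fi
```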

Related

Change mountpoint of docker volume to a custom directory

I would like to have a Docker Volume that mounts to a container. This volume would need to be somewhere other than the default location of volumes, preferably somewhere on the Desktop. This is because I am running a web server and would like some directories to be editable by something like VSCode so I don't always have to go inside the container to edit a file. I am not going to be using Docker Compose and instead will be using a Docker File for the container. The functionality I'm going for is the following equivalent of Docker Compose, but in a Dockerfile or through docker run, whichever is easiest to accomplish:
volumes:
- <local-dir>:<container-dir>
This directory will need to be editable LIVE and using the Dockerfile ADD command will not suffice, because after building, the image gets put into a tar archive and cannot be accessed after that.
With this solution you can even move a live Docker installation to a new partition:
1. Stop the docker daemon:
sudo service docker stop
2. Add a configuration file to tell the docker daemon the new location of the data directory. Using your preferred text editor, create a file named daemon.json under the directory /etc/docker with this content:
{
  "data-root": "/path/to/your/docker"
}
3. Copy the current data directory to the new one:
sudo rsync -aP /var/lib/docker/ /path/to/your/docker
4. Rename the old docker directory:
sudo mv /var/lib/docker /var/lib/docker.old
5. Restart the docker daemon:
sudo service docker start
resource: https://www.guguweb.com/2019/02/07/how-to-move-docker-data-directory-to-another-location-on-ubuntu/
You can mount a directory from your host inside your container when you launch the docker container, using -v or --volume
docker run -v /path/to/desktop/some-dir:/container-dir/path <docker-image>
Volumes specified in the Dockerfile, as you exemplified, will automatically be created under /var/lib/docker/volumes/ every time a container is launched from that image, but it is NOT recommended to have these volumes altered by non-Docker processes.

mount docker volume destroy files?

I want to access /etc/php5/apache2 in my container, where e.g. php.ini is located.
As soon as I mount my volume, it seems the container can't write the default php.ini to the apache2 folder: both the apache2 folder in the container and the config folder on the host are empty.
docker config:
./config/:/etc/php5/apache2
I have also tested the Z flag without any success. The config folder on the host is readable/writable/executable by everyone.
A bind-mounted volume shadows the data in the container: e.g. if there is a bla.txt in that folder inside the container, you won't see the file after mounting.
If you only need to see the file, you can go into the container using docker exec -it <id> /bin/sh and look at the file there.
Alternatively, use docker cp to copy the file out of the container, but I never needed that.
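A sketch of the docker cp route (the container name web and the output directory are illustrative):

```shell
CONTAINER=web          # illustrative name of the running PHP container
OUT=/tmp/php-config    # where to put the copied file on the host
mkdir -p "$OUT"

if command -v docker >/dev/null 2>&1; then
    # Copy php.ini out of the container instead of mounting over it:
    docker cp "$CONTAINER":/etc/php5/apache2/php.ini "$OUT"/php.ini \
        || echo "copy failed: is the container running?"
else
    echo "docker CLI not available in this environment"
fi
```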

Docker mounts and empty volume

I am running Docker on Ubuntu Server 16.04, and I am running a container trying to mount a volume with my Let's Encrypt certificates.
I am doing:
docker run .... -v /etc/letsencrypt/live/mysite:/certs ....
In the mysite folder I have my .pem files, but inside my container I find the certs folder created, but it is empty! I don't know why it is not mounting the files that are inside the mysite folder...
Initially the mysite folder belonged to root, but I changed ownership to the current user with chown. I am also running docker run with sudo, but it is still not copying my folder.
I have no idea what to do :(
Try the mount flag.
docker run -it \
--mount src=/etc/letsencrypt/live/mysite,target=/certs,type=bind ubuntu
Or move your certs into a named volume: you'll then have to copy them into the directory shown under "Mountpoint" in docker volume inspect.
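A sketch of the named-volume route, reusing a throwaway container to copy the files in (the volume name certs and the staging path are illustrative):

```shell
STAGING=/tmp/certs-staging   # illustrative: where your .pem files sit on the host
mkdir -p "$STAGING"

if command -v docker >/dev/null 2>&1; then
    docker volume create certs
    # Populate the named volume by copying through a throwaway container:
    docker run --rm -v "$STAGING":/src -v certs:/dst alpine \
        sh -c "cp -rp /src/. /dst/" \
        || echo "populate failed: is the docker daemon reachable?"
    # Then mount the named volume instead of the host path:
    # docker run ... -v certs:/certs ...
else
    echo "docker CLI not available in this environment"
fi
```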
Volumes docs
Bind Mount docs

Docker compose best practices of packaged or bundled asset container?

I am quite confused about how to achieve the following flow.
I have two containers, nginx and asset (which will hold only the bundled assets).
There can be 2-3 nginx instances and a few asset instances.
From my local machine or build server I'll build the assets using grunt, and that bundle will live inside the image.
So if I use volumes for the bundled assets, how will they be pushed along with the image?
And if I use no volumes, how will nginx get the mount path from the asset image or (stopped or running) container?
There are two main ways.
You add your assets to your nginx image. In that case, you simply create a Dockerfile and COPY your local assets to a location in the nginx image. E.g.:
FROM nginx
COPY myassets/dir/* /var/lib/html/
This is the simplest way to do it.
Mount a volume
If you need the same assets to be shared between containers, then you can create a volume and mount this volume in your nginx container. The volume needs to be created before you try to create your nginx containers.
docker volume create myassets
The next step is to copy your files to that newly created volume. If your docker host is local (e.g.: VirtualBox, Docker for Mac or Windows, Vmware Fusion, Parallel) then you can mount a local dir with your assets an copy them to the volume. Eg:
docker run --rm -v /Users/myname/html:/html -v myassets:/temp alpine sh -c "cp -rp /html/* /temp"
If your docker host is hosted elsewhere (AWS, Azure, Remote Servers, ...) then you can't rely on mounted local drives. You'll need to remote copy the files.
docker run -d --name foo -v myassets:/temp alpine tail -f /dev/null
docker cp myassets/. foo:/temp
docker rm -f foo
This creates a container named foo which keeps running (tail -f /dev/null) in the background (-d). We then use docker cp to copy the files into it at the location where the myassets volume is mounted, and remove the container when done. Note that docker cp takes a single source path, so we copy the directory contents with myassets/. rather than a shell wildcard.
When you mount your volume on an nginx container, its contents will shadow whatever is at that location in the container.
docker run -d -v myassets:/var/lib/html nginx
You can create multi-container Docker environments for each app with docker-compose, where multiple asset images are built and mounted as data volumes for the nginx container.
Refer to the docker-compose reference for data volume mounting, and to the Stack Overflow question on mounting a directory from one container into another.
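A sketch of such a compose file, where an asset image pre-populates a named volume that the nginx service then mounts (service names, build context, and paths are illustrative):

```yaml
version: "3"
services:
  asset:
    build: ./asset            # Dockerfile COPYs the bundled assets into the image
    volumes:
      - assets:/opt/assets    # first use pre-populates the named volume
  nginx:
    image: nginx
    volumes:
      - assets:/usr/share/nginx/html:ro
    depends_on:
      - asset
volumes:
  assets:
```

This relies on the behaviour quoted earlier: a new, empty named volume gets pre-populated with the content of the image directory it is first mounted over.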

How can I use VOLUME in a Dockerfile to persist individual files in a directory?

This application I'm trying to Dockerize has configuration files in the root of the install dir. If I use VOLUME to mount the install dir on the host, I'll end up with the application on the host, too. I only want to store the configuration files on the host.
Should I use hard links in the container and use VOLUME to mount the dir that has the hardlinks? Do hard links even work in a container?
You can mount individual files. Below is from the docker documentation https://docs.docker.com/engine/userguide/containers/dockervolumes/
Mount a host file as a data volume
The -v flag can also be used to mount a single file - instead of just
directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container, you will have
your bash history from the host and when you exit the container, the
host will have the history of the commands typed while in the
container.
Note: Many tools used to edit files, including vi and sed --in-place, may result in an inode change. Since Docker v1.1.0, this will produce an error such as "sed: cannot rename ./sedKdJ9Dy: Device or resource busy". In the case where you want to edit the mounted file, it is often easiest to mount the parent directory instead.
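A sketch of that suggestion: keep the config file in a directory on the host and mount the directory, not the file (the paths and image are illustrative):

```shell
CONF=/tmp/myapp-conf
mkdir -p "$CONF"
echo "setting=1" > "$CONF/app.ini"

if command -v docker >/dev/null 2>&1; then
    # Mounting the parent directory keeps in-place editors (vi, sed -i)
    # working, since the file's inode can change freely inside the directory.
    docker run --rm -v "$CONF":/etc/myapp ubuntu cat /etc/myapp/app.ini \
        || echo "docker run failed: no daemon, or image pull blocked?"
else
    echo "docker CLI not available in this environment"
fi
```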
