Docker Compose best practices for a packaged or bundled asset container? - docker

I'm quite confused about how to achieve the following flow.
I have two containers, nginx and asset (which will hold only the bundled assets).
There can be 2-3 nginx instances and a few asset instances.
From my local machine or build server I'll build the assets using grunt, and the bundled assets will live inside the image.
So if I use volumes for the bundled assets, how will they be pushed along with the image?
Or if I use no volumes, how will nginx get the mount path from the asset image or a (stopped or running) container?

There are two main ways.
Add the assets to your nginx image
In that case, you simply create a Dockerfile and COPY your local assets to a location in the nginx image, e.g.:
FROM nginx
COPY myassets/dir/* /var/lib/html
This is the simplest way to do it.
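For completeness, a minimal build-and-run sequence for that Dockerfile (the image tag and port mapping are placeholders):
docker build -t my-nginx-assets .        # bakes myassets/dir/* into the image
docker run -d -p 80:80 my-nginx-assets   # each instance ships its own copy of the assets
Since the assets live in the image itself, pushing the image to a registry carries them along; no volume is involved.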
Mount a volume
If you need the same assets to be shared between containers, then you can create a volume and mount this volume in your nginx container. The volume needs to be created before you try to create your nginx containers.
docker volume create myassets
The next step is to copy your files to that newly created volume. If your docker host is local (e.g. VirtualBox, Docker for Mac or Windows, VMware Fusion, Parallels), then you can mount a local dir with your assets and copy them to the volume, e.g.:
docker run --rm -v /Users/myname/html:/html -v myassets:/temp alpine sh -c "cp -rp /html/* /temp"
If your docker host is hosted elsewhere (AWS, Azure, Remote Servers, ...) then you can't rely on mounted local drives. You'll need to remote copy the files.
docker run -d --name foo -v myassets:/temp alpine tail -f /dev/null
docker cp myassets/. foo:/temp
docker rm -f foo
This creates a container named foo which keeps running (tail -f /dev/null) in the background (-d). We then docker cp the files into it at the location where the myassets volume is mounted (docker cp does not expand wildcards, hence the /. form to copy the directory's contents), and remove the container when done.
When you mount your volume on an nginx container, the volume's contents will be mounted over (and hide) whatever the image has at that location.
docker run -d -v myassets:/var/lib/html nginx
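Since the question mentions running 2-3 nginx instances, note that the same named volume can be mounted into several containers at once, read-only if you like (the host ports here are placeholders):
docker run -d -p 8080:80 -v myassets:/var/lib/html:ro nginx
docker run -d -p 8081:80 -v myassets:/var/lib/html:ro nginx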

You can create a multi-container docker environment for each app with docker-compose, where one or more asset images are built and mounted as data volumes into the nginx container.
Refer to the docker-compose reference for data volume mounting, and to this Stack Overflow question for mounting a directory from one container to another.
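As a sketch only (the service names and the ./assets build context are assumptions, not taken from the question), a docker-compose.yml for that pattern could look like:
version: "2"
services:
  assets:
    # hypothetical Dockerfile that COPYs the grunt output and declares
    # VOLUME /usr/share/nginx/html so nginx can serve it directly
    build: ./assets
    command: "true"     # this container only carries files; it exits immediately
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes_from:
      - assets:ro       # mount the volumes declared by the assets service, read-only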

Related

Docker mount not putting files in the host's directory

I have created a mount on my container which maps a physical path on the server to a path within the docker container. However, when files are placed within the container's path, those files do not appear on the server's path (and vice versa).
Here is my docker run cmd:
docker run -d -p 127.0.0.1:7001:5000 --name myContainer myContainer -v /var/www/Images:/app/wwwroot/
The server is running CentOS. My application, which runs within this docker container, places files in the /app/wwwroot folder inside its container. I expected these files to also appear in the server's /var/www/Images folder, but they do not.
Any ideas why?
Thanks
I expected these files to also appear in the server's /var/www/Images
folder but they do not.
When you bind mount a directory, the container path (/app/wwwroot here) is overridden (hidden) by the host files: the -v option tells Docker to mount the host path over whatever is inside the container at that location.
When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine.
bind-mounts
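A quick way to see this shadowing effect (the paths here are arbitrary):
mkdir /tmp/empty
docker run --rm -v /tmp/empty:/usr/share/nginx/html nginx ls /usr/share/nginx/html
# prints nothing: the empty host dir hides the index.html shipped in the image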
Or, if you want to copy files out of the container, one way is to start the container:
docker run -it --rm --name test my_container
then copy the files from it, referencing the container by its name (test):
docker cp test:/app/wwwroot/ /var/www/Images
Now you have the container's files under /var/www/Images.

Docker Volume - Retain host files when deleted from container

I have mounted my USB devices to a docker container using docker run --privileged -v /dev/bus/usb:/dev/bus/usb -d ubuntu
Within the container, I would like to delete a few files from /dev/bus/usb/
This results in the deletion of files from the host as well, which is not what I want
I would like to delete files from the container, but continue to have them in the host
Is there any way that I can achieve this ?
This is because you are using a shared volume, so when you delete files, the deletion takes effect both inside your container and on the host.
Maybe you can write a little Dockerfile to create an image with a copy of your USB files, and not share the volume into the container. Note that COPY can only read from the build context, so first copy the files next to the Dockerfile (e.g. cp -r /dev/bus/usb ./usb), then:
FROM ubuntu
COPY usb /path/for/your/copy
After that you can build your image:
docker build -t imagename .
And finally launch it:
docker run -d imagename
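If the files need to reflect the host at run time rather than at image build time, another sketch (the staging mount point and the copy-on-start step are assumptions, not from the answer) is to mount the host path read-only somewhere else and copy it into the container's own filesystem at start, so deletions only touch the container layer:
docker run -d --privileged -v /dev/bus/usb:/mnt/usb:ro ubuntu \
  sh -c 'cp -a /mnt/usb /usb-work && sleep infinity'
# work against /usb-work inside the container; deletions there never reach the host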

How to specify volume for docker container in CircleCI configuration?

I did not manage to find out how to mount a volume of a docker image in config.yml for integrating with CircleCI.
The official documentation gives variables for container usage, entry point, command, etc., but nothing about volume mounting.
The scenario is: the build of my project requires two docker containers, the main container and another one for the service foo. To use the service foo, I need to expose some artifacts generated in earlier steps to the foo container and then run the next steps.
Anyone has idea whether I can do that?
As taken from CircleCI documentation:
Mounting Folders
It’s not possible to mount a folder from your job space into a container in Remote Docker (and vice versa). But you can use the docker cp command to transfer files between these two environments. For example, say you want to start a container in Remote Docker and use a config file from your source code for it:
- run: |
    # creating dummy container which will hold a volume with config
    docker create -v /cfg --name configs alpine:3.4 /bin/true
    # copying config file into this volume
    docker cp path/in/your/source/code/app_config.yml configs:/cfg
    # starting application container using this volume
    docker run --volumes-from configs app-image:1.2.3
In the same way, if your application produces some artifacts that need to be stored, you can copy them from Remote Docker:
- run: |
    # starting container with our application
    # make sure you're not using `--rm` option otherwise container will be killed after finish
    docker run --name app app-image:1.2.3
- run: |
    # once application container finishes we can copy artifacts directly from it
    docker cp app:/output /path/in/your/job/space

expose files from docker container to host

I have a docker container that holds a django app. The static files are produced and copied to a static folder.
container folder hierarchy:
- var
  - django
    - app
    - static
Before I build the docker image, I run ./manage.py collectstatic so the static files end up in the /var/django/static folder. To expose the app and serve the static files, I have nginx on the host. The problem is that if I set up a volume between the static folder and a designated folder on the host, then when I run the docker container, the /var/django/static folder in the container gets masked (well, not deleted, but mounted over). Is there any way to overcome this? As in, set up the volume but tell docker to keep the current files as well?
Volumes are treated as mounts in Docker, which means the host directory will always be mounted over the container's directory. In other words, what you're trying to do isn't currently possible with Docker volumes.
See this Github issue for a discussion on this subject: https://github.com/docker/docker/issues/4361
One possible work-around is to mount a docker volume onto an empty directory in your container, and then, in your container's start-up script (entrypoint/CMD), copy the static contents into that empty directory that is mounted as a volume.
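A minimal sketch of that start-up copy, assuming the image keeps a pristine copy of the assets at /var/django/static-src (a hypothetical path) and the volume is mounted over /var/django/static:
#!/bin/sh
# entrypoint.sh (sketch): populate the mounted, initially empty static dir
# from the pristine copy baked into the image, then exec the real command
cp -r /var/django/static-src/. /var/django/static/
exec "$@"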
Reading from Docker's volumes documentation:
Volumes have several advantages over bind mounts:
New volumes can have their content pre-populated by a container.
A similar example using a named volume, with nginx's default webpage folder:
$ docker volume create xmpl
$ docker run -v xmpl:/usr/share/nginx/html nginx
This makes all the files visible on the host system; you can find where via:
$ docker inspect xmpl
...
"Mountpoint": "/var/lib/docker/volumes/xmpl/_data"
And you can then view the files on the host:
# ls /var/lib/docker/volumes/xmpl/_data
50x.html index.html
And finally to use it from /var/nginx/static:
# mkdir -p /var/nginx
# ln -s /var/lib/docker/volumes/xmpl/_data /var/nginx/static
# ls /var/nginx/static
50x.html index.html
Another nice solution:
1. Install sshd inside the docker container
2. Install SSHFS on the host
3. Mount a folder from inside the docker container out to the host via SSHFS (sketched below)
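A sketch of step 3, assuming the container's sshd is reachable on mapped host port 2222 and the paths are placeholders:
sshfs -p 2222 root@localhost:/var/django/static /var/nginx/static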

Dockerfile: understanding VOLUME instruction

Let's take an example.
The following is the VOLUME instruction for the nginx image:
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
Here are my questions:
When you start the container, will these directories show up on my host? And when I stop my container, will the directories stay?
If some (or all) of these directories already exist in my host, what will happen? For example, let's say the image comes with a default config file within the /etc/nginx directory of the container, and I also have a config file within /etc/nginx on my host. When the container starts, which of these files will get priority?
What's the key difference between -v /host/dir:container/dir and VOLUME?
References:
https://github.com/dockerfile/nginx/blob/master/Dockerfile
http://www.tech-d.net/2014/11/03/docker-indepth-volumes/
How to mount host volumes into docker containers in Dockerfile during build
http://jpetazzo.github.io/2015/01/19/dockerfile-and-data-in-volumes/
A container's volumes are just directories on the host regardless of what method they are created by. If you don't specify a directory on the host, Docker will create a new directory for the volume, normally under /var/lib/docker/volumes.
However the volume was created, it's easy to find where it is on the host by using the docker inspect command e.g:
$ ID=$(docker run -d -v /data debian echo "Data container")
$ docker inspect -f {{.Mounts}} $ID
[{0d7adb21591798357ac1e140735150192903daf3de775105c18149552a26f951 /var/lib/docker/volumes/0d7adb21591798357ac1e140735150192903daf3de775105c18149552a26f951/_data /data local true }]
We can see that Docker has created a directory for the volume at /var/lib/docker/volumes/0d7adb21591798357ac1e140735150192903daf3de775105c18149552a26f951/_data.
You are free to modify/add/delete files in this directory from the host, but note that you may need to use sudo for permissions.
Docker will only delete volume directories in two circumstances:
If the --rm option is given to docker run, any anonymous volumes will be deleted when the container exits
If a container is deleted with docker rm -v CONTAINER, any anonymous volumes will be removed.
In both cases, volumes will only be deleted if no other containers refer to them. Volumes mapped to specific host directories (the -v HOST_DIR:CON_DIR syntax) are never deleted by Docker. However, if you remove the container for a volume, the naming scheme means you will have a hard time figuring out which directory contains the volume.
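A quick way to see this behaviour (the names are throwaway, assuming a shell on the Docker host):
docker run -d --name c1 -v mydata:/data alpine sleep 1000
docker rm -f c1          # mydata is a named volume, so it survives the rm
docker volume ls         # still lists mydata
docker volume rm mydata  # it has to be removed explicitly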
So, specific questions:
Yes and yes, with the above caveats.
Each Docker managed volume gets a new directory on the host
The VOLUME instruction is identical to -v without specifying the host dir. When the host dir is specified, Docker does not create any directories for the volume, will not copy in files from the image and will never delete the volume (docker rm -v CONTAINER will not delete volumes mapped to user-specified host directories).
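The copy-in difference is easy to demonstrate (the volume name is arbitrary):
docker volume create demo
docker run --rm -v demo:/usr/share/nginx/html nginx ls /usr/share/nginx/html
# index.html and 50x.html appear: the image's files were copied into the empty volume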
More information here:
https://blog.container-solutions.com/understanding-volumes-docker
