Docker "share" Container - docker

I'd like to share some files via a Docker container, but I'm not sure how. I have a project that contains several scripts. I also have several VMs that need access to those scripts, ideally always the latest versions. I'd like to build a Docker container that has those scripts inside of it, and then have my VMs mount the container and access the scripts. I tried https://hub.docker.com/r/erichough/nfs-server/ and "baking" the files in, but I don't think that does what I want it to do. Here is the Dockerfile:
FROM erichough/nfs-server:latest
COPY ./Scripts /etc/exports/
EXPOSE 2049
It fails, saying that I need to define /etc/exports. Looking at the entrypoint.sh, it wants exports to be a file, so I'm guessing it should contain paths. So I tried creating an exports.txt file that has the path of my files:
exports.txt:
./Scripts
Dockerfile:
FROM erichough/nfs-server:latest
ADD ./exports.txt /etc/exports
EXPOSE 2049
No bueno. Is there a way to accomplish this? My end goal is a docker container in my registry that I can run in my stack. Whenever I update the scripts I push a new container.
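For reference, /etc/exports is expected to contain NFS export definitions (one per line, in the standard exports(5) format), not a bare relative path, and the exported directory has to exist inside the image. A minimal, untested sketch of that idea, assuming the scripts should be served read-only to any client:
exports.txt:
/Scripts *(ro,no_subtree_check,fsid=0)
Dockerfile:
FROM erichough/nfs-server:latest
COPY ./Scripts /Scripts
ADD ./exports.txt /etc/exports
EXPOSE 2049
Note that an NFS server container typically also needs extra privileges at runtime (e.g. --cap-add SYS_ADMIN) and the nfs/nfsd kernel modules on the host, so check the image's documentation before relying on this.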

Related

Windows Container with Sidecar for data

I am trying to set up a Windows Nano Server container as a sidecar container holding the certs that I use for SSL. Because the SSL cert that I need changes in each environment, I need to be able to change the sidecar container (i.e. dev-cert container, prod-cert container, etc.) at startup time. I have worked out the configuration problems, but am having trouble using the same pattern that I use for Linux containers.
On Linux containers, I simply copy my files into the container and use the VOLUME instruction to export the volume. Then, on my main application container, I can use volumes_from to import the volume from the sidecar.
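Roughly, that Linux setup looks like this (simplified image and service names), where the cert image just does COPY certs/ /certs and declares VOLUME /certs:
docker-compose.yml:
dev-certs:
  image: my-dev-certs
my-app:
  image: my-app
  volumes_from:
    - dev-certs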
I have tried to follow that same pattern with Nano Server and cannot get it working. Here is my Dockerfile:
# Building stage
FROM microsoft/nanoserver
RUN mkdir c:\\certs
COPY . .
VOLUME c:/certs
The container builds just fine, but I get an error when I try to run it. The Dockerfile documentation says the following:
Volumes on Windows-based containers: When using Windows-based containers, the destination of a volume inside the container must be one of:
a non-existing or empty directory
a drive other than C:
so I thought, easy, I will just switch to the D drive (because I don't want to export an empty directory like #1 requires). I made the following changes:
# Building stage
FROM microsoft/windowsservercore as build
VOLUME ["d:"]
WORKDIR c:/certs
COPY . .
RUN copy c:\certs d:
and this container actually started properly. However, I missed the part of the docs where it says:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
so, when I checked, I didn't have any files in the d:\certs directory.
So how can you mount a drive for external use in a Windows container if (#1) the directory must be empty to make a VOLUME on the C drive in the container, and (#2) you must use VOLUME to create a D drive, which is pointless because anything put in there will not be in the final container?
Unfortunately you cannot use Windows container volumes in this way. This limitation is also the reason why using database containers (like microsoft/mssql-server-windows-developer) is a real pain: you cannot create a volume on a non-empty database folder, and as a result you cannot restore databases after container re-creation.
As for your use case, I would suggest using a reverse proxy (Nginx, for example).
You create another container with an Nginx server and the certificates inside. Then you let it handle all incoming HTTPS requests, terminate SSL/TLS, and pass the requests to the inner application container over plain HTTP.
With such a deployment you don't have to copy and install HTTPS certificates into every application container. There is only one place where you store certificates, and you can switch dev/test/etc. certificates just by using different Nginx image versions (or by binding the certificate folder as a volume).
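A minimal sketch of such a proxy sidecar image, with hypothetical file names; the nginx.conf copied here would contain a server block that listens on 443 with the copied certificates and proxy_passes requests to the application container over plain HTTP:
FROM nginx:latest
COPY certs/ /etc/nginx/certs/
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 443
Swapping dev/prod certificates then only means running a different version of this proxy image; the application container itself stays unchanged.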
UPDATE:
Also, if you still want to use a sidecar container, you can try one small hack. Basically, you move this operation
COPY . .
from build time to runtime (after the container starts).
Something like this:
FROM microsoft/nanoserver
RUN mkdir c:\\certs_in
RUN mkdir c:\\certs_out
# bake the certs into a staging folder at build time
COPY . c:/certs_in/
VOLUME c:/certs_out
# copy them into the exported volume folder at runtime, once the volume exists
CMD copy C:\certs_in\*.* C:\certs_out
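The application container can then consume the volume the same way as on Linux, e.g. (hypothetical image and container names):
docker run -d --name certs-sidecar my-dev-certs
docker run -d --volumes-from certs-sidecar my-application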

How to ensure certain scripts on the host system are present inside the Docker container when the container starts?

I wish to have certain scripts that are present on the host machine also be present inside the Docker container when the container is created. How do I ensure this? Thanks
You can use a COPY or an ADD statement in your Dockerfile.
COPY <src> <dest>
Docker will error when the file isn't present on the host.
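For example, a minimal sketch assuming the scripts live in a scripts/ directory next to the Dockerfile (the base image is just an example):
FROM alpine:latest
COPY scripts/ /opt/scripts/
Every image built from this Dockerfile, and every container started from it, will contain whatever was in scripts/ at build time.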
See also:
Stackoverflow: Docker copy VS add
Dockerfile best practices
Docker documentation on COPY
Create a customized image for your container: use a COPY or ADD statement in that image's Dockerfile to add the scripts to the customized image. Once you have the image, use it to start a container, and that container will have the scripts you added.
If you can't, for any reason, add the scripts to the image at creation with COPY or ADD, the only solution imho would be to mount the folder on the host machine into the container at runtime with the -v option. But in this case you will still need some kind of mechanism built into the image to trigger the scripts to execute, via cron or something similar. Maybe have a look at the Phusion Baseimage, as it has cron built in and an option to run scripts at container startup, see here
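A sketch of the runtime bind-mount variant, with hypothetical paths and image name:
docker run -d -v /home/user/scripts:/opt/scripts my-image
Any change made to /home/user/scripts on the host is then immediately visible at /opt/scripts inside the running container.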

How to keep changes inside a container on the host after a docker build?

I have a docker-compose dev stack. When I run docker-compose up --build, the container will be built and it will execute
Dockerfile:
RUN composer install --quiet
That command will write a bunch of files inside the ./vendor/ directory, which is then only available inside the container, as expected. The vendor/ directory that also exists on the host is not touched and is, hence, out of date.
Since I use that container for development and want my changes to be available, I mount the current directory inside the container as a volume:
docker-compose.yml:
my-app:
volumes:
- ./:/var/www/myapp/
This loads the outdated vendor directory into my container, forcing me to rerun composer install either on the host or inside the container in order to have the up-to-date version.
I wonder how I could manage my docker-compose stack differently, so that the changes during the docker build on the current folder are also persisted on the host directory and I don't have to run the command twice.
I do want to keep the vendor folder mounted, as some vendors are my own and I like being able to modify them in my current project. So only mounting the folders I need to run my application would not be the best solution.
I am looking for a way to tell docker-compose: Write all the stuff inside the container back to the host before adding the volume.
You can run a short side container after docker-compose build:
docker run --rm -v "$(pwd)/vendor":/target my-app cp -a vendor/. /target/.
The cp could also be something more efficient like an rsync. Then, after that container exits, you do your docker-compose up, which mounts the project directory (including the now up-to-date ./vendor) from the host.
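Put together, the sequence could look like this (assuming the service and image are both called my-app and the project root is the current directory):
docker-compose build my-app
docker run --rm -v "$(pwd)/vendor":/target my-app cp -a vendor/. /target/.
docker-compose up -d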
Write all the stuff inside the container back to the host before adding the volume.
There isn't any way to do this directly, but there are a few options to do it as a second command.
As already suggested, you can run a container and copy or rsync the files.
Use docker cp to copy the files out of a container (without using a volume); see the sketch after this list.
Use a tool like dobi (disclaimer: dobi is my own project) to automate these tasks. You can use one image to update vendor, and another image to run the application. That way updates are done on the host, but can be built into the final image. dobi takes care of skipping unnecessary operations when the artifact is still fresh (based on the modified time of files or resources), so you never run unnecessary operations.
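A sketch of the docker cp option, assuming the image is called my-app and vendor lives at /var/www/myapp/vendor inside it:
docker create --name vendor-tmp my-app
docker cp vendor-tmp:/var/www/myapp/vendor ./
docker rm vendor-tmp
docker create builds the container without starting it, docker cp pulls the directory out onto the host as ./vendor, and the temporary container is removed afterwards.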

Sharing files between container and host

I'm running a docker container with a volume /var/my_folder. The data there is persistent: When I close the container it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link : Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
The problem is that mounting the volume will, for your purposes, overwrite (hide) whatever is already at that path in the container.
The best way to overcome this is to create the files that you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script AND not as part of your build, THEN they will be available from the host machine once mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing this, copy the files to another directory during the build and then copy them back in your ENTRYPOINT script.
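A minimal sketch of that last approach, with hypothetical file and image names:
Dockerfile:
FROM ubuntu:latest
COPY my_folder/ /opt/staging/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]
entrypoint.sh:
#!/bin/sh
# copy the staged files into the (possibly bind-mounted) target folder at startup
mkdir -p /var/my_folder
cp -a /opt/staging/. /var/my_folder/
exec "$@"
Started with docker run -it -v /var/my_folder:/var/my_folder my-image, the files end up in the bind-mounted folder and are therefore visible on the host as well.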

Add config.txt to docker container

We have a war which needs a configuration file to work.
We want to dockerize it. At the moment we're doing the following:
FROM tomcat:8.0
COPY apps/test.war /usr/local/tomcat/webapps/
COPY conf/ /config/
Our container is losing the advantages of Docker because it depends on the config file. So when we want to execute the .war for other purposes, we have to recreate the image, which isn't a good approach.
Is it possible to give a config-file as a parameter without mounting it as a volume? Because we don't want the config on our local machine. What could be a solution?
You can pass it as an ENV, but I don't see you losing the advantages of Docker. Docker is essentially all about temporary containers that are dispensable. Ideally you want to build a new image for every new app version.
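A sketch of the environment-variable approach, with hypothetical variable names that the .war (or a small startup script) would read instead of the baked-in /config files:
docker run -d -p 8080:8080 -e DB_HOST=dev-db -e DB_PASSWORD=changeme my-tomcat-app
The same image can then be started in every environment, with only the environment variables changing.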

Resources