Docker: How to create an environment variable on the host machine that points to a directory in a Docker container?

I am using Docker to run four containers to run a backend web application. The backend web application uses buildout to assemble the software.
However, the frontend, which is installed and runs on the host machine (that is, not using Docker), needs to access the buildout directory inside one of the four docker containers.
Moreover, the frontend uses an environment variable called NTI_BUILDOUT_PATH that is defined on the host machine. NTI_BUILDOUT_PATH must point to the buildout directory, which is inside the aforementioned container.
My problem is that I do not know how to define NTI_BUILDOUT_PATH so that it points to the buildout directory inside that container, which the frontend needs for SSL certificates and other purposes.
I have researched around the web and read about volumes and bind mounts, but I do not think they can help me in my case.

One way to do that is to copy your buildout folder to the host machine using docker cp:
docker cp <backend-container-id>:<path-to-buildout> <path-to-host-folder>
For example, if your backend's container id is d1b5365c5bca and your buildout folder is at /app/buildout inside the container, you can use the following command to copy it to the host.
docker cp d1b5365c5bca:/app/buildout /home/mahmoud/app/buildout
After that, docker rm all your containers and recreate them with a bind mount to the buildout folder on the host. Following the previous example we'll have
docker run -v /home/mahmoud/app/buildout:/app/buildout your-backend-image
docker run -v /home/mahmoud/app/buildout:/app/buildout -e NTI_BUILDOUT_PATH=/app/buildout your-frontend-image
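Since the question says the frontend itself runs directly on the host, the environment variable would then simply point at the copied host path rather than a container path; a minimal sketch, reusing the example path above:
export NTI_BUILDOUT_PATH=/home/mahmoud/app/buildout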

Related

How to work with the files from a docker container

I need to work with all the files from a docker container. My approach is to copy the full list of files from the container to my host.
I'm using the following docker commands, for example with the postgres image:
docker create -ti --name dummy_1 postgres bash
docker cp dummy_1:/. Documents/docker/dockerOne
With this I have all the container folders and files on my host.
The idea is then to traverse all the files with the Java API, work with them, and finally delete the local files and folders. But I would like to know if there is a better approach, perhaps accessing the container files directly from Java instead of creating a local copy of them on my host.
Any ideas?
You can build a small server app inside your docker container that feeds you the information you need at an exposed port. That's how I would have done it.
Maybe I don't understand the question, but you can mount a volume when you run the container, not when you create it:
docker run -v /host/path:/container/path your_container
Any code in the container (e.g. Java) that modifies files at /container/path will be reflected on the host, and will not need to be copied back and forth. Similarly, any modifications on the host filesystem will be seen in the container.
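For instance, reusing the placeholder paths and image name above (and assuming the image provides basic shell utilities), a file written inside the container shows up on the host immediately:
docker run -v /host/path:/container/path your_container touch /container/path/created-in-container.txt
ls /host/path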
I don't think I can implement an API in the docker container
Yes you can. You bind a TCP port using the -p flag.
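A minimal sketch, assuming a hypothetical image called my-image whose server listens on port 8080 inside the container:
docker run -d -p 8080:8080 my-image
The API would then be reachable from the host at localhost:8080.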

docker container using nfs directory on remote host as volume

I have an application on my local host.
The application uses files from a directory on a remote host as its database.
I need to dockerize this application.
How can I use this directory?
I tried to use it as a volume, but it didn't work:
the files of the directory are inside the container, but the application doesn't recognize them.
If you somehow map the remote directory onto your local host, why not use the same technique inside docker?
If for some reason you can't (let's say you don't want to install additional drivers in your container), you can still use volumes:
Let's say that on your local host the directory (which is somehow synchronized with the remote endpoint) is called /home/sync_folder. Then you start docker in the following manner:
docker run -it -v /home/sync_folder:/shares ubuntu ls /shares
I've written ubuntu just as an example. ls /shares illustrates how to access the directory inside the container.
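Alternatively, if you want Docker itself to handle the NFS mount rather than the host, the local volume driver can mount an NFS export directly; a sketch, where the remote address 192.168.1.10 and the export path /exports/data are assumptions:
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.10,rw --opt device=:/exports/data nfs_data
docker run -it -v nfs_data:/shares ubuntu ls /shares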

How can I provide application config to my .NET Core Web API services running in docker containers?

I am using Docker to deploy my ASP.NET Core Web API microservices, and am looking at the options for injecting configuration into each container. The standard way of using an appsettings.json file in the application root directory is not ideal, because as far as I can see, that means building the file into my docker images, which would then limit which environment the image could run in.
I want to build an image once which can then be provided configuration at runtime and rolled through dev, test, UAT and into production without creating an image for each environment.
Options seem to be:
Providing config via environment variables. Seems a bit tedious.
Somehow mapping a path in the container to a standard location on the host server where appsettings.json sits, and getting the service to pick this up (how?)
It may be possible to provide values on the docker run command line?
Does anyone have experience with this? Could you provide code samples/directions, particularly on option 2) which seems the best at the moment?
It's possible to create data volumes in the docker image/container, and also to mount a host directory into a container. The host directory will then be accessible inside the container.
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run command.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at /webapp.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
Refer to the Docker Data Volumes documentation.
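Applied to option 2 from the question, a bind mount along the following lines would keep appsettings.json in a standard location on the host and overlay it onto the file the app reads inside the container; the host path, container path, port and image name here are all assumptions:
docker run -d -p 5000:5000 -v /etc/myapp/appsettings.json:/app/appsettings.json my-webapi-image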
We are using another packaging system for now (not docker itself), but still have the same issue: the package can be deployed in any environment.
So, the way we are doing it now:
Use an external configuration management system to hold and manage configuration per environment
Inject into our package the basic environment variables that hold the configuration management system's connection details
This way we not only allow the package to run in almost any "known" environment, but also get run-time configuration management.
When you are running docker, you can use the environment variable option of the run command:
$ docker run -e "deep=purple" ...
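As a sketch of how this ties into ASP.NET Core configuration: the environment-variable configuration provider maps double underscores to the ':' separators used by appsettings.json keys, so values can be overridden per environment at docker run time. The key names and image name below are assumptions:
docker run -d -e "Logging__LogLevel__Default=Warning" -e "ConnectionStrings__Default=Server=db;Database=app" my-webapi-image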

Creating a host directory as a data volume in Dockerfile

So when running the command for the container, I can create the host volume mount that I want with the command
docker run -d -P --name web -v /home/ec2-user:/home/ec2-user testing123
How can I turn that /src/webapp:/opt/webapp host directory into an argument inside of the Dockerfile? Can I use VOLUME to create this type of storage?
The reason I am asking is that when I tried to add it in with the arguments
VOLUME ["/home/ec2-user/:/home/ec2-user/" ]
It creates the directory inside the container with no files except a :. How would I structure this argument to make it work?
You can't do what you want because the host directory is dependent on your host, and your Dockerfile (and hence your image) must not be dependent on your host. Just 'expose' a volume and use docker run to map your host directory to that volume.
Note from Docker docs:
The host directory is, by its nature, host-dependent. For this reason, you can't mount a host directory from Dockerfile because built images should be portable. A host directory wouldn't be available on all potential hosts.
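A short sketch of the pattern this describes, reusing the paths and image name from the question: the Dockerfile declares only the container-side mount point, and the host side is supplied at run time.
VOLUME ["/home/ec2-user"]
docker run -d -P --name web -v /home/ec2-user:/home/ec2-user testing123
The first line goes in the Dockerfile; the second is the run command on the host.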
Keep in mind that newer versions of Docker won't create the directory on your host with -v /host/path:/container/path anymore. This was considered an anti-pattern and was deprecated. You should manage your host file system with a different tool that gives you better guarantees.

Docker exec command not using the mounted directory for /

I am new to docker containers and am trying to solve a problem I am facing right now.
This is my understanding, based on limited knowledge.
When we create a docker container, Docker creates a local mount and uses it as the root file system for the docker container.
Now, if I run any commands in the container from the host server using docker exec, docker is not using the mounted partition as the / file system for the container. I mean, it still picks up the binaries and env variables from the host server. Is there any option/alternate solution for making docker use the original mounted directory for docker exec too?
If I access/start the container with docker attach or docker run -i -t /bin/bash, I get the mounted directory as my / file system, which gives me an entirely independent environment from my host system. But this doesn't happen with the docker exec command.
Please help !!
You are operating under a misconception. The docker image only contains what was installed in it. This is usually a very cut down version of an operating system for efficiency reasons.
The docker container is started from an image - and that's a running version, which can change and store state - but may be discarded.
docker run starts a container from an image. You can run the same image multiple times to create completely different containers (which happen to have the same starting point for their content).
docker exec attaches to one of those containers to run a command. So you will only see the things inside it that ... were inside the image, or added post start (like log files). It has no vision of the host filesystem, and may not be the same OS - the only requirement is that it shares elements of the kernel ... although it usually has a selection of the commonly used binaries.
And when you run an image to create a container, you can specify a mount. One of the options when you do this is passing through a host filesystem, with e.g. -v /path/on/host:/path_in/container. But you don't have to; you can use data containers or a docker volume mount instead. For example, docker run -v /mount creates a mount point within the container, using the docker filesystem, which isn't part of the parent host. This can be used to make a data container with:
docker create -v /path/to/data --name data_for_acontainer some_basic_image
And then mount volumes from that data container on a new one:
docker run -d --volumes-from data_for_acontainer some_app_image
This will attach that data container's volume onto the /path/to/data mount. But in neither case is the 'host' filesystem touched directly - this is the whole point of dockerising things.
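To see that docker exec operates on the container's own filesystem rather than the host's, a quick check against a running container (the name mycontainer is a placeholder):
docker exec -it mycontainer ls /
The listing shows the image's root filesystem, not the host's.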
