Docker container file system access

When an application is installed, it creates a file structure and writes log files to the paths specified in its configuration. When the same application runs in a Docker container, it creates those files inside the container. How can we access those files? I know we can use the docker exec command with bash to interact through a terminal, but is it possible to access the same files using WinSCP or any GUI-based third-party tool?

You could mount a volume on the container, so the files generated "locally" in the container can be accessed from the host. For example:
docker run -v host_dir:container_dir yourDocker (...)
The container's process will save the files to its local container_dir, and you can access them from the host via host_dir.
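For instance, if the application writes its logs under /var/log/myapp (a hypothetical path used here for illustration), you could expose them to the host like this:
docker run -v /home/user/myapp-logs:/var/log/myapp myimage
Once the files appear in /home/user/myapp-logs on the host, they are ordinary host files, so you can browse them with WinSCP or any other GUI tool.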

Related

Docker: How to create an environment variable in the host machine that points to a directory in a docker container?

I am using Docker to run four containers to run a backend web application. The backend web application uses buildout to assemble the software.
However, the frontend, which is installed and runs on the host machine (that is, not using Docker), needs to access the buildout directory inside one of the four docker containers.
Moreover, the frontend uses an environment variable called NTI_BUILDOUT_PATH that is defined on the host machine. NTI_BUILDOUT_PATH must point to the buildout directory, which is inside the aforementioned container.
My problem is that I do not know how to define NTI_BUILDOUT_PATH so that it points to the buildout directory inside the container, which the frontend needs for SSL certificates and other purposes.
I have researched around the web and read about volumes and bind mounts but I do not think they can help me in my case.
One way you can do that is by copying your buildout folder to the host machine using docker cp:
docker cp <backend-container-id>:<path-to-buildout> <path-to-host-folder>
For example, if your backend's container ID is d1b5365c5bca and your buildout folder is at /app/buildout inside the container, you can use the following command to copy it to the host:
docker cp d1b5365c5bca:/app/buildout /home/mahmoud/app/buildout
After that, docker rm all your containers and recreate them with a bind mount to the buildout folder on the host. Following the previous example:
docker run -v /home/mahmoud/app/buildout:/app/buildout your-backend-image
docker run -v /home/mahmoud/app/buildout:/app/buildout -e NTI_BUILDOUT_PATH=/app/buildout your-frontend-image
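The second docker run sets NTI_BUILDOUT_PATH inside a frontend container, but since the question says the frontend runs directly on the host, you could instead define the variable on the host, pointing at the host side of the bind mount:
export NTI_BUILDOUT_PATH=/home/mahmoud/app/buildout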

Node-RED container doesn't have a 'settings.js' file

I downloaded the official Node-RED container and noticed that the settings.js file is missing inside it.
I tried to insert it manually inside the container, but it is not read when Node-RED starts. I was wondering if there is a way to insert it, or alternatively another way to set the credentials for accessing the Node-RED admin page.
I pulled nodered/node-red-docker:0.18.4-v8.
Usually the settings.js file lives at ~/.node-red/settings.js, but not in this case. This container uses the path /usr/src/node-red/, and when I enter it with docker exec -it container_name bash, I'm inside the node-red directory. I tried to put settings.js there, but it does not work.
You should not change the copy of settings.js in /usr/src/node-red; this is the default and should be left alone. Editing this file after starting the container will not work anyway, as it is copied to the userDir the first time Node-RED is started.
If you want to include your own version, you should mount it into the /data directory, as this is the userDir for the system when running.
You can use the -v option of docker run to mount a local copy of the file into the container:
docker run -v /path/to/settings.js:/data/settings.js ...
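For example, a full command might look like this (the image tag comes from the question; the host path and Node-RED's default port 1880 are assumptions for illustration):
docker run -it -p 1880:1880 -v /home/user/settings.js:/data/settings.js nodered/node-red-docker:0.18.4-v8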

docker container using nfs directory on remote host as volume

I have an application on my local host.
The application uses files from a directory on a remote host as its database.
I need to dockerize this application.
How can I use this directory?
I tried to use it as a volume, but it didn't work:
the files of the directory are inside the container, but the application doesn't recognize them.
If you somehow map the remote directory into your local host, why not use the same technique inside docker?
If for some reason you can't (let's say you don't want to install additional drivers in your container), you can still use volumes:
Let's say on your local host your directory (which is somehow synchronized with the remote endpoint) is called /home/sync_folder. Then you start docker as follows:
docker run -it -v /home/sync_folder:/shares ubuntu ls /shares
I've written ubuntu just as an example. ls /shares illustrates how to access the directory inside the container.
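Alternatively, Docker's built-in local volume driver can mount an NFS export directly, so nothing has to be synchronized to the host first. A sketch, assuming a hypothetical NFS server at 192.168.0.10 that exports /exports/data:
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.0.10,rw --opt device=:/exports/data nfs_data
docker run -it -v nfs_data:/shares ubuntu ls /shares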

How can I provide application config to my .NET Core Web API services running in docker containers?

I am using Docker to deploy my ASP.NET Core Web API microservices, and am looking at the options for injecting configuration into each container. The standard way of using an appsettings.json file in the application root directory is not ideal, because as far as I can see, that means building the file into my docker images, which would then limit which environment the image could run in.
I want to build an image once which can then be provided configuration at runtime and rolled through dev, test, UAT and into production without creating an image for each environment.
Options seem to be:
Providing config via environment variables. Seems a bit tedious.
Somehow mapping a path in the container to a standard location on the host server where appsettings.json sits, and getting the service to pick this up (how?)
It may be possible to provide values on the docker run command line?
Does anyone have experience with this? Could you provide code samples/directions, particularly on option 2) which seems the best at the moment?
It's possible to create data volumes in the docker image/container, and also to mount a host directory into a container. The host directory will then be accessible inside the container.
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run commands.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at /webapp.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
Refer to the Docker Data Volumes documentation for more details.
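Applied to option 2 from the question, you would keep appsettings.json on the host and bind-mount it over the copy inside the container. A sketch, assuming the app's content root is /app and using a hypothetical image name and host path:
docker run -d -p 5000:80 -v /etc/myapi/appsettings.json:/app/appsettings.json myregistry/my-webapi
ASP.NET Core's default configuration builder reads appsettings.json from the content root at startup, so each environment can supply its own file without rebuilding the image.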
We are using another packaging system for now (not docker itself), but still have the same issue - a package can be deployed in any environment.
So, the way we are doing it now:
Use an external configuration management system to hold and manage configuration per environment
Inject into our package the basic environment variables that hold the configuration management system's connection details
This way we not only allow the package to run in almost any "known" environment, but also enable run-time configuration management.
When you are running docker, you can use the environment variable option (-e) of the run command:
$ docker run -e "deep=purple" ...
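For ASP.NET Core in particular, environment variables map onto configuration keys, with a double underscore standing in for the section separator. For example (the image name is hypothetical):
$ docker run -e "Logging__LogLevel__Default=Warning" myregistry/my-webapi
This overrides Logging:LogLevel:Default from appsettings.json at runtime; the environment variables configuration provider is registered by default in ASP.NET Core.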

How to save config file inside a running container?

I am new to docker. I want to run tinyproxy within docker. Here is the image I used to create a docker container: https://hub.docker.com/r/dtgilles/tinyproxy/.
For some unknown reason, when I mount the log directory to the host machine I can see the .conf file, but I can't see the log file, and the proxy server doesn't seem to work.
Here is the command I tried:
docker run -v $(pwd):/logs -p 8888:8888 -d --name tiny dtgilles/tinyproxy
If I don't mount the file, then every time I run a container I need to change its config file inside the container.
Does anyone have any ideas about saving changes made inside a container?
Question
How to save a change committed by/into a container?
Answer
The command docker commit creates a new image from a container's changes (from the man page).
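For example, to snapshot the container from the question into a new image (the image name here is just an illustration):
docker commit tiny tinyproxy-configured
docker run -d -p 8888:8888 tinyproxy-configured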
Best Practice
You actually should not do this to save a configuration file. A Docker image is supposed to be immutable; that is what makes it easy to share, and customization should happen through mounted volumes instead.
What you should do is create the configuration file on the host and share it with the container through the -v|--volume option of docker run. Check the man page; you'll then be able to share files (or directories) between host and containers, allowing the data to persist across different runs.
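Applied to tinyproxy, that could look like the following sketch (the config file path inside the image is an assumption; check the image's documentation for the actual location):
docker run -d --name tiny -p 8888:8888 -v $(pwd)/tinyproxy.conf:/etc/tinyproxy/tinyproxy.conf -v $(pwd)/logs:/logs dtgilles/tinyproxy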
