I have an application on my local host.
The application uses files from a directory on a remote host as its database.
I need to dockerize this application.
How can I use this directory?
I tried to mount it as a volume, but it didn't work:
the files of the directory are inside the container, but the application doesn't recognize them.
If you somehow map the remote directory onto your local host, why not use the same technique inside Docker?
If for some reason you can't (let's say you don't want to install additional drivers in your container), you can still use volumes.
Let's say that on your local host the directory (which is somehow synchronized with the remote endpoint) is called /home/sync_folder. Then you start Docker as follows:
docker run -it -v /home/sync_folder:/shares ubuntu ls /shares
I've written ubuntu just as an example. ls /shares illustrates how to access the directory inside the container.
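If you prefer docker-compose, the same bind mount can be declared in a compose file; this is just a sketch, where the service name and image are placeholders and the paths are taken from the example above:

```yaml
# docker-compose.yml -- sketch; "app" and the image are placeholders
services:
  app:
    image: ubuntu
    command: ls /shares
    volumes:
      - /home/sync_folder:/shares   # host path : container path
```

Running docker compose up then lists the synchronized folder just like the docker run example.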
I have a bit of a conundrum with mounting a remote folder here.
What we have is a PC in an active directory, as well as a remote server in the same active directory. In order to get files for the script, we need to mount a folder from the remote server into a docker container (using ubuntu 20.04).
So far we've tried to mount the folder directly into the container using WebDAV, but this failed with an error saying that the remote folder's directory doesn't exist.
Then we tried to first mount it locally through WSL using the mount command, so Docker could see the mounted folder on the local PC, but this didn't work either: in this case the error said that the folder that didn't exist was the target directory (even though it had been created in advance).
The question at hand is, what would be the best and most correct way to mount a remote shared folder that is accessible with URL link to a docker container?
We had a similar issue/use case, but in our case it was possible to create a Samba 4 share on the host, where we had a folder with some .pdf documents to work with.
We then created a Docker volume backed by the SMB share (on the host) with the following command:
docker volume create --driver local --opt type=cifs --opt device=//192.168.XX.YY/theShare --opt o=username=shareUsername,password='sharePassword',domain=company.com,vers=3.0,file_mode=0777,dir_mode=0777 THE_SHARE
Note: that host (where we need the Samba mount) still runs CentOS 7 with Docker, so we had to install some dependencies on the host system:
sudo yum update
sudo yum install samba-client samba-common cifs-utils
Then in a container we simply mounted the volume (using -v):
-v THE_SHARE:/mnt/the_share
and inside the application we can read and write the content through the local filesystem at the /mnt/the_share path.
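For reference, the same CIFS-backed volume can also be declared in a compose file instead of via docker volume create; this is a sketch reusing the credentials and share from the example above, with a placeholder image name:

```yaml
# docker-compose.yml -- sketch using the share from the example above
services:
  app:
    image: your-app-image            # placeholder
    volumes:
      - the_share:/mnt/the_share

volumes:
  the_share:
    driver: local
    driver_opts:
      type: cifs
      device: //192.168.XX.YY/theShare
      o: "username=shareUsername,password=sharePassword,domain=company.com,vers=3.0,file_mode=0777,dir_mode=0777"
```

Declaring the volume this way keeps the credentials and options next to the service definition rather than in a one-off command.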
I need to work with all the files from a Docker container; my approach is to copy all of the files from the container to my host.
I'm using the following Docker commands, for example with the postgres image:
docker create -ti --name dummy_1 postgres bash
docker cp dummy_1:/. Documents/docker/dockerOne
With this I have all the container's folders and files on my host.
The idea is then to traverse all the files with the Java API, work with them, and finally delete the local files and folders. But I would like to know whether there is a better approach: perhaps accessing the container's files directly from Java instead of creating a local copy on my host.
Any ideas?
You can build a small server app inside your Docker container which serves the information you need on an exposed port. That's how I would do it.
Maybe I don't understand the question, but you can mount a volume when you run the container (rather than when you create it):
docker run -v /host/path:/container/path your_container
Any code in the container (e.g. Java) that modifies files at /container/path will be reflected on the host, with no need to copy anything in or out. Similarly, any modification on the host filesystem will be seen in the container.
I don't think I can implement an API in the docker container
Yes you can. You publish a TCP port using the -p flag.
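As a minimal sketch of that server idea (the service name, port, and paths here are assumptions, and Python's built-in http.server stands in for the "small server app"), a compose file can publish the port and expose the files:

```yaml
# docker-compose.yml -- sketch; service name, port, and paths are assumptions
services:
  filesrv:
    image: python:3-alpine
    command: python3 -m http.server 8080 --directory /data
    ports:
      - "8080:8080"              # host:container, the -p equivalent
    volumes:
      - ./data:/data:ro          # files to expose, mounted read-only
```

The Java side could then read the files over HTTP from localhost:8080 instead of copying the container's filesystem to the host.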
I am using Docker to run four containers to run a backend web application. The backend web application uses buildout to assemble the software.
However, the frontend, which is installed and runs on the host machine (that is, not using Docker), needs to access the buildout directory inside one of the four docker containers.
Moreover, the frontend uses an environment variable called NTI_BUILDOUT_PATH that is defined on the host machine. NTI_BUILDOUT_PATH must point to the buildout directory, which is inside the aforementioned container.
My problem is that I do not know how to define NTI_BUILDOUT_PATH so that it points to the buildout directory, which the frontend needs for SSL certificates and other purposes.
I have researched around the web and read about volumes and bind mounts but I do not think they can help me in my case.
One way to do that is to copy your buildout folder to the host machine using docker cp:
docker cp <backend-container-id>:<path-to-buildout> <path-to-host-folder>
For example, if your backend's container ID is d1b5365c5bca and your buildout folder is at /app/buildout inside the container, you can use the following command to copy it to the host:
docker cp d1b5365c5bca:/app/buildout /home/mahmoud/app/buildout
After that, docker rm all your containers and recreate them with a bind mount to the buildout folder on the host. Following the previous example we'll have:
docker run -v /home/mahmoud/app/buildout:/app/buildout your-backend-image
docker run -v /home/mahmoud/app/buildout:/app/buildout -e NTI_BUILDOUT_PATH=/app/buildout your-frontend-image
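Since the question says the frontend actually runs on the host, a variant of the above is to mount only the backend and point the host-side variable at the host copy. A sketch (paths follow the example above; the image name is a placeholder):

```yaml
# docker-compose.yml -- sketch; image name is a placeholder
services:
  backend:
    image: your-backend-image
    volumes:
      - /home/mahmoud/app/buildout:/app/buildout   # shared with the host
```

Then on the host, export NTI_BUILDOUT_PATH=/home/mahmoud/app/buildout: the bind mount keeps the host directory and the container's /app/buildout in sync, so the host path is valid for the frontend.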
Docker newcomer here.
I have a simple image of a django website with a volume defined for the app directory.
I can bind this volume to the actual folder where I do the development with this command:
docker container run --rm -p 8000:8000 --mount type=bind,src=$(pwd)/wordcount-project,target=/usr/src/app/wordcount-project wordcount-django
This works fairly well.
Now I tried to push that simple example to a swarm. Note that I have set up a local registry so the image is available.
So to start my service I do:
docker service create -p 8000:8000 --mount type=bind,source=$(pwd)/wordcount-project,target=/usr/src/app/wordcount-project 127.0.0.1:5000/wordcount-django
It works after some tries, but only when the service runs on the local node (where the actual folder is) and not on a remote node (where there is no wordcount-project folder).
Any idea how to solve this, so that the folder is accessible from all nodes yet still accessible locally for development?
Thanks !
Using a bind mount in Docker Swarm is not recommended, as you can read in the documentation. In particular:
Important: Bind mounts can be useful but they can also cause problems. In most cases, it is recommended that you architect your application such that mounting paths from the host is unnecessary.
However, if you still want to use a bind mount, you have two possibilities:
Make sure the folder exists on all the nodes. The main problem here is that you'll have to update it every time on every node.
Use a shared filesystem (sshfs, for example) and mount it on a directory on each node. And once you have a shared filesystem, you can simply use a Docker data volume and change the driver.
You can find some documentation on changing the volume data driver here
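As a sketch of the second option, assuming an NFS server at a hypothetical address exporting the project directory, a stack file can declare the volume once and every node resolves it through the local driver:

```yaml
# stack.yml -- sketch; the NFS address and export path are assumptions
services:
  web:
    image: 127.0.0.1:5000/wordcount-django
    ports:
      - "8000:8000"
    volumes:
      - wordcount_data:/usr/src/app/wordcount-project

volumes:
  wordcount_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.0.2.10,rw"               # hypothetical NFS server
      device: ":/export/wordcount-project"  # hypothetical export path
```

Deployed with docker stack deploy -c stack.yml wordcount, each node mounts the volume from the shared server, so task placement no longer matters.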
I have a directory in my Docker container, and I'm trying to make it available locally using -v screenshots:/srv/screenshots in my docker run command but it's not available.
Do I need to add something else to my command?
Host volumes are mapped from the host into the container, not the other way around. This is one way to get persistent storage (so the data doesn't disappear when the container is re-created).
You can copy the screenshots folder to your host with docker cp and then map it in.
You will then have your screenshots in the local screenshots folder. Mapping it in with -v $(pwd)/screenshots:/srv/screenshots makes it appear at /srv/screenshots in the container, but the files really live on the host. Note that a bare name like screenshots in -v creates a named volume rather than mounting a host directory, which is why your original command didn't behave as expected.
See: Mount a host directory as data volume
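The copy-then-bind-mount approach can also be expressed in a compose file; a sketch, assuming the host folder ./screenshots exists after the docker cp and with a placeholder image name:

```yaml
# docker-compose.yml -- sketch; image name is a placeholder
services:
  app:
    image: your-image
    volumes:
      - ./screenshots:/srv/screenshots   # host folder, visible in the container
```

Because the left-hand side starts with ./, compose treats it as a host path rather than a named volume.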