I'm facing a problem implementing NAS storage for containers in my datacenter. Here is the scenario: when the app (container) starts, it needs to create its NAS folder on the host and access its files from there. The folders should be created at run time. Things I have tried:
Using -v I have connected the server folder to the container manually (but this fails when the container is moved to another server).
With the --volumes-from option I am able to connect one container's files to another [this is also done manually] (but is there any way for the container, when it starts, to create a storage folder in the other container and access the files from there?).
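For reference, the two approaches described above look roughly like this (the host path, container names, and image are placeholders, not anything from the original setup):

# 1. bind mount a host folder into the container (breaks when the container moves to another server)
docker run -d --name app -v /srv/nas/app-data:/data my-app

# 2. share one container's volumes with another via --volumes-from (also a manual step)
docker create --name storage -v /data busybox
docker run -d --name app --volumes-from storage my-app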
Any suggestion would be much appreciated.
As the title states, I am looking to send a file from container A to container B. Both containers run on separate volumes and are on the same network. Is this possible without temporarily storing the file in the host file system?
I have been reading around and found this solution; however, it requires that the file I wish to send be temporarily stored on the host:
https://medium.com/@gchudnov/copying-data-between-docker-containers-26890935da3f
Container A has its own volume to which a file is written. I want container A to send this file to a volume that container B is attached to; container B then reads this file.
Thanks
If they are Linux containers you can use scp:
scp file root@172.17.x.x:/path
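A minimal sketch of that approach, assuming an SSH server is installed and running inside container B (the container names, IP, and paths here are made up):

# on the host: find container B's IP on the shared network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' containerB

# inside container A: copy the file straight over the network, no host storage involved
scp /data/myfile.txt root@172.17.0.3:/data/

Note that this only works if container B runs sshd and accepts the login, which plain application images usually don't do out of the box.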
I am running Windows 10 and the most recent version of Docker. I am trying to run a Docker image and transfer files to and from the image.
I have tried using the "docker cp" command, but from what I've seen online, this does not appear to work for docker images. It only works for containers.
When searching for info on this topic, I have only seen responses dealing with containers, not for images.
A Docker image is basically a template used for containers. If you add something to the image it will show up in all of the containers created from it. So if you just want to share a single set of files that don't change, you can add a COPY instruction to your Dockerfile, then run the new image and you'll find the files in the container.
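A minimal sketch of that, assuming the files to share sit in a local ./files directory (the image name and target path are made up):

# Dockerfile
FROM ubuntu:22.04
COPY ./files /opt/files

docker build -t my-image-with-files .
docker run --rm my-image-with-files ls /opt/files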
Another option is to use shared volumes. Shared volumes are basically folders that exist on both the host machine and the running Docker container. If you move a file on the host into that folder it will be available in the container (and if you put something into the folder from the container side, you can access it from the host side).
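On Windows 10 with Docker Desktop that could look something like this (the host path and image name are only placeholders):

# anything written to C:\Users\me\shared on the host shows up under /shared in the container, and vice versa
docker run --rm -v C:\Users\me\shared:/shared my-image ls /shared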
I have a container that runs a Python script in order to download a few big files from Amazon S3. The purpose of this container is just to download the files so I have them on my host machine. Because these files are needed by my app (which runs in a separate container built from a different image), I bind mount the directory downloaded by the first container from my host into the app's container.
Note 1: I don't want to run the script directly from my host as it has various dependencies that I don't want to install on my host machine.
Note 2: I don't want to download the files while the app's image is being built as it takes too much time to rebuild the image when needed. I want to pass these files from outside and update them when needed.
Is there a way to make the first container download those files directly to my host machine, rather than downloading them inside the container and then copying them to the host? The current approach takes twice the space needed until the container is cleaned up.
Currently, the process is the following:
1. Build the temporary container image and run it in order to download the models
2. Copy the files from the container to the host
3. Clean up the unneeded container and image
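For reference, that flow probably looks roughly like this (image, container, and path names are invented):

docker build -t model-downloader .            # build the temporary image
docker run --name dl model-downloader         # the script downloads into /models inside the container
docker cp dl:/models ./models                 # copy the files out to the host
docker rm dl && docker rmi model-downloader   # clean up the container and image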
Note: if there is a way to download the files from the first container directly into the second and overwrite them if they already exist, that could work too.
Thanks!
You would use a host volume for this. E.g.
docker run -v "$(pwd)/download:/data" your_image
That runs your_image, and anything written to /data inside the container is actually written to the ./download directory on the host.
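Applied to the setup described above, that could look something like this (image names and paths are placeholders):

# the downloader writes straight to the host directory, so no docker cp step is needed
docker run --rm -v "$(pwd)/models:/models" model-downloader

# the app container mounts the same host directory and sees the downloaded files
docker run -d -v "$(pwd)/models:/models:ro" my-app

Re-running the downloader simply updates the files in place, which also covers the note above about overwriting existing files.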
I have an application on my local host.
The application uses files from a directory on a remote host as its database.
I need to dockerize this application.
How can I use this directory?
I tried to use it as a volume but it didn't work:
the files of the directory are inside the container, but the application doesn't recognize them.
If you somehow map the remote directory onto your local host, why not use the same technique inside Docker?
If for some reason you can't (let's say you don't want to install additional drivers in your container), you can still use volumes.
Let's say the directory on your local host (which is somehow synchronized with the remote endpoint) is called /home/sync_folder. Then you start Docker in the following manner:
docker run -it -v /home/sync_folder:/shares ubuntu ls /shares
I've written ubuntu just as an example. ls /shares illustrates how to access the directory inside the container.
I have users that will each have a directory for storing arbitrary PHP code. Each user can execute their code in a Docker container - this means I don't want user1 to be able to see user2's directory.
I'm not sure how to set this up.
I've read about bind mounts vs. named volumes. I'm using swarm mode, so I don't know which host a particular container will run on. This means I'm not sure how to connect the container to the volume mount and subdirectory.
Any ideas?
Have you considered having an external server for storage and mounting it on each Docker host?
If you need the data to persist and you don't want to mount external storage, you can try looking into something like Gluster for syncing files across multiple hosts.
As for not wanting users to share directories, you can just set the rights on each folder.
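A rough sketch of what that could look like, assuming the shared storage is mounted at /mnt/shared on every swarm node (the user names, UIDs, paths, and image are just examples):

# on the shared storage: one directory per user, readable only by its owner
mkdir -p /mnt/shared/user1 /mnt/shared/user2
chown 1001 /mnt/shared/user1 && chmod 700 /mnt/shared/user1
chown 1002 /mnt/shared/user2 && chmod 700 /mnt/shared/user2

# each user's service only gets its own subdirectory mounted, so user1 never sees user2's code
docker service create --name user1-php \
  --mount type=bind,source=/mnt/shared/user1,target=/code \
  php:cli php /code/script.php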