I'm currently working on a web interface for my personal mail server, and this requires me to work with two different containers:
a PHP container and a mailserver container.
I'm using Docker Mailserver (https://hub.docker.com/r/mailserver/docker-mailserver/), and I'd like to be able to create accounts and their mailboxes from my web interface instead of via a bash script, as is currently the case.
From what I understand, this requires me to launch a script named setup that is located within the mailserver container at /usr/local/bin.
The problem now is: how do I share /usr/local/bin with my PHP container? The folder already exists, so (to my knowledge) I can't just create a volume, and I'm now at a standstill.
Also, I do NOT have access to the directory that needs to be shared from the host; it only exists inside the Docker container.
I've tried looking around the internet, but so far I've been unable to find anything very useful. There are a lot of articles about volumes, however these create a new folder and, as far as I know, do not work with existing folders. Networking might work, but I doubt it, since it is a script that needs to be executed rather than just a regular file.
You can use a shared volume. With Docker, you can rely on the fact that a new, empty named volume is populated with the content of the directory it is mounted over in the container. See https://docs.docker.com/storage/volumes/#populate-a-volume-using-a-container.
You can run the executable from any path. It could be /usr/local/bin/ or any other path listed in the $PATH variable, so that it is discovered automatically without the full path, but you can also just use the full path to some arbitrary location.
To get the general idea, look at the example below. Note that the order is important: the first run command creates the volume and populates it, while the second run command mounts the already populated volume.
docker run -d \
--name=mailserver \
-v mailbin:/usr/local/bin \
mailserver-image
docker run \
--name=php \
-v mailbin:/tmp/mailserver/bin \
php-app /tmp/mailserver/bin/some-executable
This is specific to Docker. If you want to do something like this in, say, Kubernetes, you would need to use an entrypoint in your mail server container to copy the binary you want to share into a shared volume.
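For illustration, a minimal entrypoint sketch for that Kubernetes case might look like this; the /shared mount point is an assumption (e.g. an emptyDir volume mounted into both containers):
#!/bin/sh
# Hypothetical entrypoint for the mail server container: copy the setup
# script into the shared volume, then hand off to the original command.
cp /usr/local/bin/setup /shared/setup
chmod +x /shared/setup
exec "$@"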
I'm running Docker Desktop on Windows 10. I used the repository for the Fonduer Tutorials to create an image to run with Docker. This works fine so far, and I am able to run the notebooks included in the repository.
I now would like to copy some Jupyter notebooks and other data from the host into the container called fonduer-tutorials-jupyter-1, to be able to make use of the Fonduer framework.
I am able to copy the files into the container and also to open the Jupyter notebooks, but unfortunately they open in read-only mode.
How can I copy files from the host to the container and still have permission to write to them on a Windows machine?
I read a lot about options like chown and other flags to use with COPY, but it seems they're not available on Windows machines.
Let's assume my UID (from id -u) is 1000 and my GID (from id -g) is 2000, if that is relevant to a solution.
To avoid copying the files manually and the access restrictions described above, a better solution is to map the host directory into the container via the .yml file, in this case the docker-compose.yml. To do so, add the following to the .yml file:
volumes:
- [path to host directory]:[container path where the files should be placed in]
With this, the files will be available both within the container and on the host.
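As a quick sanity check, here is a sketch only: the container name is taken from the question above, while the in-container path is an assumption based on the usual Jupyter image layout, so adjust it to wherever you mapped the files.
# Recreate the container with the new mapping, then try writing a file
# from inside it to confirm the mount is writable:
docker-compose up -d
docker exec -it fonduer-tutorials-jupyter-1 touch /home/jovyan/work/write-test.txt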
I have a Meteor app deployed with Meteor UP to Ubuntu.
From this app I need to read a file that is located outside the app container, on the host server.
How can I do that?
I've tried to set up volumes in the mup.js, but no luck. It seems that I'm missing how to correctly provide /host/path and /container/path:
volumes: {
// passed as '-v /host/path:/container/path' to the docker run command
'/host/path': '/container/path',
'/second/host/path': '/second/container/path'
},
I've read the Docker docs on mounting volumes, but obviously I can't make sense of them.
Let's say the file is at /home/dirname/filename.csv.
How do I correctly mount it into the app so that I can access it from the application?
Or are there other ways to access it?
Welcome to Stack Overflow. Let me suggest another way of thinking about this...
In a scalable cluster, Docker containers can be spun up and down as the load on the app changes. These may or may not be on the same host computer, so building a dependency on the file system of the host isn't a great idea.
You might be better off using a file storage service such as S3, which scales on its own and isn't subject to local disk limits.
Another option is to determine whether the files could be stored in the database.
I hope that helps.
Let's try to narrow the problem down.
Meteor UP passes the volumes configuration parameter directly on to Docker, as the comment you included also mentions. It might therefore be easier to test it against Docker directly, narrowing down the components involved as much as possible:
sudo docker run \
-it \
--rm \
-v "/host/path:/container/path" \
-v "/second/host/path:/second/container/path" \
busybox \
/bin/sh
Let me explain this:
sudo because Meteor UP uses sudo to start the container. See: https://github.com/zodern/meteor-up/blob/3c7120a75c12ea12fdd5688e33574c12e158fd07/src/plugins/meteor/assets/templates/start.sh#L63
docker run because we want to start a container.
-it to access the container (think of it like SSH'ing into the container).
--rm to automatically clean up - remove the container - after we're done.
-v - here we pass the volumes as you defined them (I took the two-directory example you provided).
busybox - an image with some useful tools.
/bin/sh - the application to start the container with
I'd expect that you also cannot access the files here. In that case, dig deeper into why you can't make the folder accessible in Docker.
If you can access them, which would seem odd to me, start your actual container and try to get into it by running the following command:
docker exec -it my-mup-container /bin/sh
You can think of this command like SSH'ing into a running container. Now you can check whether the file really isn't there, whether the user and permissions inside the container are correct, and so on.
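For example, once inside the container you might run checks along these lines (the paths are the placeholders from the volume mapping above; adjust them to your real mount and file name):
ls -la /container/path             # is the mount present and populated?
id                                 # which user and groups is the app running as?
head /container/path/filename.csv  # can the file actually be read?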
Lastly, I have to agree with #mikkel that mounting a local directory isn't a good option, but you can now start looking into how to use a Docker volume to mount a remote directory. He mentioned S3 on AWS; I've worked with Azure Files on Azure; there are plenty of possibilities.
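As one illustration of that idea (a sketch only, with placeholder storage account, share name and key), the built-in local driver can mount a CIFS/SMB share such as Azure Files:
docker volume create \
--driver local \
--opt type=cifs \
--opt device=//mystorageaccount.file.core.windows.net/myshare \
--opt o=addr=mystorageaccount.file.core.windows.net,username=mystorageaccount,password=<storage-key>,vers=3.0 \
azurefiles-volume
The resulting volume can then be attached like any other named volume, e.g. -v azurefiles-volume:/container/path.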
I successfully set up a bind mount for my blog; however, I think a managed volume would be a better choice than a bind mount. The question is: if I need to edit the theme through SFTP or vim, or simply add some files to the volume, how do I do that? Right now the bind mount allows me to edit the files, but how would I add/edit files on the volume, or get those files out later if I wanted to?
For example: docker volume create --name test-volume
How can I add/edit data there or access it via SFTP?
As the official documentation says:
Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem.
So the idea is to start a new container that bind-mounts the working directory and also mounts the volume, and then manage the files from within it.
For example, let's say your working directory is /app:
docker run \
-v $PROJECT:/tmp/project \
-v test-volume:/app \
alpine \
/bin/sh -c "cp /tmp/project/* /app"
Sync tools can be used, like here.
To manage the data in your volume over SFTP through a container, you need to make sure the image you are using supports SSH connections and that port 22 is mapped; you can find more info here.
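A minimal sketch, assuming the third-party atmoz/sftp image (any SSH/SFTP-capable image with the volume mounted and port 22 published would do; the user name and password are placeholders):
# Expose the named volume over SFTP on host port 2222:
docker run -d \
-p 2222:22 \
-v test-volume:/home/foo/upload \
atmoz/sftp foo:pass:::upload
You could then connect with sftp -P 2222 foo@localhost and add, edit or download the files stored in the volume.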
I am new to Docker and containers. I have a container with MRI analysis software in it. Within this container are many other programs that the main software draws its commands from. I would like to run a single command from one of the programs in this container, using research data that is located on an external hard drive plugged into the local machine running Docker.
I know there is a cp command for copying files (such as scripts) into containers, and most other questions along these lines seem to recommend copying the files from the local machine into the container and then running the script (or whatever) from there. In my case the container needs to access data from separate folders in a directory structure, and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I ran docker ps to get the CONTAINER ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command should do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get the help output for the mcflirt command, which is the expected outcome in that case. The files under the /Volumes/... paths are the files on my local machine that I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete the paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understood you correctly, you need to execute some shell command and provide it with context (like local files).
The approach is straightforward.
Let's say your script and all the needed files are located in the /hello folder on your host PC (it doesn't really matter whether they are stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount it in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container and has access to the nearby files, thanks to the -v /hello:/hello2:ro directive. It maps the host's folder /hello into the container's folder /hello2 in read-only (ro) mode.
Note that you can use the same name on both sides; I've just made them different to show the distinction.
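Applied to the question above, a sketch might look like the following; your-mri-image is a placeholder for the image the container was started from, and the external drive is mounted under the same /Volumes/DISS path inside the container so the original mcflirt arguments keep working (the -reffile path can be passed the same way, since it also lives under /Volumes/DISS):
# Mounted read-write because mcflirt needs to write its output files:
docker run --rm \
-v /Volumes/DISS:/Volumes/DISS \
your-mri-image \
mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out /Volumes/DISS/sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -mats -plots
Note that a volume cannot be added to an already running container, which is why this uses docker run with -v rather than docker exec against the existing container.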
I've been playing with Docker for a while now, and I have a problem extracting the log file directory of a service that runs within a container.
My Dockerfile looks like this:
ENV HOME=/software/service
ENV LOGS=$HOME/logs
COPY Service.jar $HOME/Service.jar
WORKDIR $HOME
CMD java -jar Service.jar
I created a stub service for this; all it does is create a log file named log.log inside the $LOGS directory and write to it every 2 seconds.
What I want to achieve is to back up the log.log file on my Linux Docker host. After reading about multiple options, I came across 2 popular solutions for persisting data:
Using volumes with the docker run -v options
Creating a data container that holds the data
Option 2 won't help much here since I want to view the logs on my Linux host machine, so I chose option 1.
The problem with option 1 is that it creates the logs with root permissions, which means I have to become root to delete them. That causes problems, because not everyone should have root access and deleting logs is something that happens regularly.
So I read a little more and found many "workarounds" for this problem; one was mounting my /etc/group and /etc/passwd files inside the container and using the -u option, and others were similar to that.
My main question is: is there any convenient and standard solution to this issue, i.e. extracting the log files (with or without the -v option) while giving an entire group permission to rwx them?
Thanks!
Since you want the logs to be on your host, you need some kind of volume sharing, and the "-v" flag is definitely the simplest thing you can do.
As for the permissions issue, I see two options:
the -u flag with the passwd/group bind-mounting that you mentioned
passing the desired user and group IDs as environment variables and making the service running in the container chown the file upon creation (but of course this is not always possible)
I think option 1, while tricky, is the easiest to apply.
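For what it's worth, here is a minimal sketch of option 1, assuming the Dockerfile from the question (the host log path and the image name service-image are placeholders):
# Run the service as the invoking user's UID/GID so the log files it creates
# are owned by that user and group on the host instead of root:
docker run -d \
-u "$(id -u):$(id -g)" \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-v /path/on/host/logs:/software/service/logs \
service-image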
Another option is to simply copy the logs to the host:
docker cp <container-name>:/software/service/logs .