I need to save incoming images. I decided to store them on my filesystem but, unfortunately, the host filesystem is not reachable from inside a Docker container, so for now I am saving the images inside the container's home directory.
I am using Go, and this is the path where I am saving them:
home, _ := os.UserHomeDir()
dir := filepath.Join(home, "Desktop", "5") // resolves to /root/Desktop/5 inside the container
How can I save the images on my own filesystem, not inside the container?
I have tried to add a volume in my docker-compose file:
volumes:
#path inside container
- /root
# path in my filesystem to map
- $HOME/Desktop/root
but it doesn't work
You have to mount a directory on your host into the Docker container:
volumes:
- "/home/myname/root:/root"
With this volume definition in docker-compose, whatever the container writes under /root inside the container will show up in the /home/myname/root directory on your host.
I have a custom image created from my own project. The Dockerfile is quite simple:
FROM wordpress
COPY test.html /var/www/html
Using docker-compose, if I run it with a named volume, it works fine:
wordpress:
...
image: my_project_image
volumes: ['volumetest:/var/www/html']
...
volumes:
volumetest:
But if, instead of creating a named volume, I map a local folder to the container folder, the file test.html is not created, neither inside the wordpress container nor inside the local folder:
wordpress:
...
image: my_project_image
volumes: ['./testdir:/var/www/html']
...
#volumes:
# volumetest:
Is there a way I can create the file just by using docker-compose?
Thanks a lot. :)
At the build stage of the Docker image, you copy test.html into the /var/www/html directory inside the image.
When you run the image as a container, you map the local ./testdir to the /var/www/html directory inside the container. This means /var/www/html now points to your ./testdir directory. If ./testdir does not contain your test.html file, then you won't see it inside the container either.
I believe the misunderstanding is that a bind mount would copy the files in the image out to the local file system. What actually happens is the reverse: whatever files are in your local directory replace (hide) the image's files at the mapped path inside the container.
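If the goal is for test.html to end up in the bind-mounted ./testdir, one common pattern is to bake the file into a different path in the image and copy it into the mounted directory at container start. A sketch (assuming the stock wordpress entrypoint, docker-entrypoint.sh apache2-foreground; the /opt/seed path is made up):

```dockerfile
FROM wordpress
# keep the file outside the mount point so a bind mount cannot hide it
COPY test.html /opt/seed/test.html
# at startup, copy it into the (possibly bind-mounted) web root,
# then hand control back to the normal wordpress entrypoint
ENTRYPOINT ["sh", "-c", "cp -n /opt/seed/test.html /var/www/html/ && exec docker-entrypoint.sh apache2-foreground"]
```

The copy runs at container start, so it also populates an initially empty ./testdir on the host.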
I want to know if a Docker volume can be used by many Docker containers.
If so, is the volume locked?
You can first create a named volume, and then use it wherever you want, in one or many containers. The volume is not locked: Docker lets several containers mount the same volume read-write at the same time, so any coordination of concurrent access is up to the applications themselves.
When you create a named volume, for example one called myvolume, and you don't specify a driver option, the local driver is used, so Docker creates a folder under /var/lib/docker/volumes. Your data will persist in /var/lib/docker/volumes/myvolume/_data.
That is just background information, though; you don't need to manage that directory yourself. You just have to create the volume with:
docker volume create myvolume
And then use the volume name as the source:
docker run -v myvolume:/yourdestinationpath ...
If you use docker-compose, the syntax is the same:
services:
  myservice:
    ...
    volumes:
      - myvolume:/yourdestinationpath
volumes:
  myvolume:
    external: true # created outside compose with `docker volume create`
The key is that you're not using a bind mount, where the source is a concrete host path to be mounted, but a Docker volume name.
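To answer the multi-container part concretely, here is a minimal compose sketch (service names and paths are made up) in which two services mount the same named volume at the same time:

```yaml
services:
  writer:
    image: ubuntu
    command: sh -c "echo hello > /shared/greeting"
    volumes:
      - myvolume:/shared
  reader:
    image: ubuntu
    command: sh -c "sleep 1 && cat /shared/greeting"
    volumes:
      - myvolume:/shared

volumes:
  myvolume:
```

Both containers see the same backing directory under /var/lib/docker/volumes; nothing stops them from writing to it concurrently.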
This is a minimal example in which I mount the local directory inside the container. My local directory acts as a persistent volume that the container can read from and write to.
My current directory contains an input-file (this file shall be read by the container)
The container cats the content of the input-file and appends it to the output-file (Here I am faking your conversion)
// The initial working directory content
.
└── input-file
// Read the `input-file` and append its content to the `output-file` (both files are persisted on the host machine)
docker run -v $(pwd):/root/some-path ubuntu /bin/bash -c "cat /root/some-path/input-file >> /root/some-path/output-file"
// The outcome
.
├── input-file
└── output-file
I'm running Jenkins in a Docker container. Following this article, I'm bind mounting the Docker socket in order to interact with it from the dockerized Jenkins. I'm also bind mounting the container directory jenkins_home. Here is a quick recap on my volumes:
# Jenkins
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
- ./bar:/var/jenkins_home
I run this from the directory /home/foo/ of the host, therefore the following directory is created in the host file system (and mounted):
/home/foo/bar
Now, I have a Jenkins pipeline (mypipe) that runs a docker-compose file spinning up a MySQL container with the following volume:
# MySQL created from Jenkins
volumes:
- ./data:/var/lib/mysql
Weirdly enough, it ends up mounting:
/var/jenkins_home/workspace/mypipe/data -> /var/lib/mysql
instead of:
/home/foo/bar/workspace/mypipe/data -> /var/lib/mysql
Searching Stack Overflow, it turned out that this happens because:
The volume source path (left of :) does not refer to the middle container, but to the host filesystem!
And that's ok, but my question is:
Why there?
I mean, why is ./data translated into exactly the path /var/jenkins_home/workspace/…/data, when the MySQL container is not aware of the path /var/jenkins_home?
When Docker creates a bind mount, it is always from an absolute path in the host filesystem to an absolute path in the container filesystem.
When your docker-compose.yml names a relative path, Compose first expands that path before handing it off to the Docker daemon. In your example, you're trying to bind-mount ./data from a compose file at /var/jenkins_home/workspace/mypipe/docker-compose.yml, so Compose fills in the absolute path you see when it invokes the Docker API. Compose has no idea that the current directory is actually a bind mount from a different path in the Docker daemon's context.
If you look in the Jenkins logs at what scripted-pipeline invocations like docker.inside { ... } do, you'll see that they mount the workspace directory at an identical path inside the container they launch. Probably the easiest way to work around the mapping problem you're having is to do the same thing: use the identical path /var/jenkins_home on the host system, so the filesystem path is the same in every context.
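Concretely, that workaround (using the paths from the question) means replacing the ./bar mapping with an identical absolute path on both sides:

```yaml
# Jenkins
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
  - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
  - /var/jenkins_home:/var/jenkins_home  # same absolute path on host and in the container
```

Then a relative ./data in the workspace expands to the same absolute path whether Compose computes it inside the Jenkins container or the daemon resolves it on the host.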
I am trying to set up a Jenkins and Nexus integration using a docker-compose file. My Jenkins image is extended with a few plugins via a Dockerfile, and a volume is declared under /var/lib/jenkins/:
VOLUME ["/var/lib/jenkins/"]
In the compose file I am trying to map the volume to local storage at /opt/jenkins/:
jenkins:
build: ./jenkins
ports:
- 9090:8080
volumes:
- /opt/jenkins/:/var/lib/jenkins/
But nothing is copied to my persistence directory (/opt/jenkins/).
I can see all my Jenkins jobs created under a _data/jobs/ directory in some volume, not in the volume I defined for /var/lib/jenkins/.
Can anyone help me understand why this is happening?
From the documentation:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
And in the mount a host directory as data volume:
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
So basically you are overlaying (hiding) anything that was in /var/lib/jenkins. Can your image function if those things are hidden?
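If you do want the content that the image ships in /var/lib/jenkins/ to be preserved, a named volume (instead of a host directory) triggers the copy-on-initialization behaviour quoted above. A sketch, assuming a compose file version that supports top-level volume declarations:

```yaml
services:
  jenkins:
    build: ./jenkins
    ports:
      - "9090:8080"
    volumes:
      - jenkins_data:/var/lib/jenkins/

volumes:
  jenkins_data:
```

The data then lives in a named volume under /var/lib/docker/volumes rather than in /opt/jenkins/.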
I would like to know if there's any way to share a directory from my host machine with a Docker container (a shared volume) using the Dockerfile.
I understand that we can do that using volumes (-v option) while using docker run. But I couldn't find any way by which I can do that while using an Instruction of Dockerfile.
I already tried VOLUME instruction in Dockerfile but couldn't succeed.
Here's some details about my environment:
[me#myHost new]$ tree -L 1
.
|-- docker-compose.yml
|-- Dockerfile
`-- Shared // This is the directory I wish to share with my containers.
I was using docker-compose.yml file to mount this directory till now:
volumes:
- ./Shared:/shared # "Relative Path at the host":"Absolute Path at the container"
But now, due to some reasons, I need to mount it in Dockerfile. I already tried the following but couldn't succeed (It is creating a new empty volume at /shared.):
VOLUME ./Shared:/shared
I could run the container with docker run, make manual changes, and save the resulting image, but I wished I could do that in the Dockerfile itself.
Thanks.
You can't mount a host directory using instructions in a Dockerfile: an image is meant to be portable, so the Dockerfile has no way to refer to a path that only exists on your machine. The VOLUME instruction merely declares a mount point, which is why you end up with a new, empty anonymous volume at /shared. You must do the bind mount at run time with docker run (for example docker run -v "$(pwd)/Shared:/shared" yourimage), or with a front end to docker run like docker-compose.