Mount Current Directory in Container using Dockerfile - docker

I am new to Docker. I want to mount the current directory into a container and run a command there, using only the Dockerfile, e.g. mounting the current directory at a folder such as files:
FROM ubuntu AS tuttt
VOLUME [ %cd% ]
ENTRYPOINT ["/bin/bash"]
RUN ls

Volumes are mounted when you run a container. A Dockerfile defines the steps to build an image; from that image you then run a container. VOLUME in a Dockerfile cannot reference a host path at all, and %cd% is a Windows shell variable that is never expanded here.
You can build your image (remove the VOLUME declaration, since it is not useful in your case) and then run it with:
docker run -v <host_path>:<container_path> <image_tag>
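Concretely, for the Dockerfile above, a sketch could look like this (the tag mydemo and the mount point /files are placeholders, not names from the question):

```shell
# Build the image from the Dockerfile in the current directory.
# (Drop the VOLUME line; mounting is a run-time concern.)
docker build -t mydemo .

# Mount the current host directory at /files inside the container and
# list its contents. $(pwd) expands to the current directory on
# Linux/macOS shells; in Windows cmd.exe the equivalent is %cd%.
docker run --rm -v "$(pwd)":/files mydemo -c "ls /files"
```

Because the image's ENTRYPOINT is /bin/bash, the trailing `-c "ls /files"` is passed to bash as its command.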


how to get the application folder from the docker image

This is my container running, with the folder /app copied from the folder express present in my image.
This is my image containing the folder express.
After dockerising my Express app folder I deleted it from my desktop, but I still have the image. Now I need that folder back: how can I get it from my image?
You can use the docker cp command (documentation: https://docs.docker.com/engine/reference/commandline/cp/). Its usage is as follows:
docker cp <container_id_or_name>:/path/in/container /path/in/host
In order to do that, you just need to run a container from the image you have, and then run the docker cp command:
# run a container as a daemon (with a tty, so bash keeps running)
docker run -dit --name my-container fahrazzzgb91/merns bash
# from your host, copy the files from the container to the host
docker cp my-container:/path/to/copy .
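Putting it together, a minimal end-to-end recovery could look like this (the image name fahrazzzgb91/merns and the path /app are taken from the question; adjust them to your own). Note that docker cp also works on containers that were never started, so docker create is enough:

```shell
# Create a throwaway container from the image; it does not need to run,
# since docker cp reads the container's filesystem directly.
docker create --name my-container fahrazzzgb91/merns

# Copy the application folder out of the container into the host.
docker cp my-container:/app ./app

# Remove the temporary container once the files are recovered.
docker rm my-container
```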

WORKDIR as VOLUME

In my Dockerfile I have my WORKDIR, and I want it to be a VOLUME, so that on the host I have a directory under /var/lib/docker/volumes/ with the same content as the WORKDIR.
How do I use the VOLUME Dockerfile command for this?
While you can mount a volume over the WORKDIR that you were using when building your image, the volume isn't available at build time. Volumes are only available for a container, not while building an image.
You can COPY files into the image to represent the content that will exist in the volume once a container is running, and use those temporary files to complete the building of the image. However, those exact files would be inaccessible once a volume is mounted in that location.
To have a directory from the host machine mounted inside a container, you would pass a -v parameter (you can do multiple -v params for different directories or for individual files) to the docker run command that starts the container:
docker run -v /var/lib/docker/volumes:/full/path/inside/container your_image_name
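For example, several directories and an individual file can be mounted in a single docker run (all host paths and the image name below are placeholders):

```shell
# Each -v takes host_path:container_path; both paths must be absolute.
# A -v spec whose host side is a file mounts just that one file.
docker run --rm \
  -v /home/me/project:/app \
  -v /home/me/data:/data \
  -v /home/me/config.yml:/etc/myapp/config.yml \
  your_image_name
```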

how to mount jdk in volume and use it in a container while creating a docker image

I am creating a Docker image from a Dockerfile; below is the content of the Dockerfile:
FROM centos
ADD jdk-11.0.7_linux-x64_bin.tar.gz /opt/java
ENV JAVA_HOME /opt/java/jdk-11.0.7
ENV PATH $PATH:/opt/java/jdk-11.0.7/bin
RUN ls -l /opt/java/jdk-11.0.7
RUN java -version
ADD build/libs/CatalogModel-1.0.jar CatalogModel-1.0.jar
EXPOSE 9081
ENTRYPOINT ["java", "-jar", "CatalogModel-1.0.jar"]
While building the image from this Dockerfile, Docker extracts the JDK inside jdk-11.0.7_linux-x64_bin.tar.gz, which sits next to the Dockerfile. Instead of adding jdk-11.0.7_linux-x64_bin.tar.gz directly to each and every image, I want to mount it in a volume so that it can also be reused when building other images.
Create a container from any image: docker run -it -d --name jdk_container initial-image
Copy the JDK into that container using the docker cp command.
Create an image from that container using the command docker commit container-id new-image-name
Create a container from this newly built image; it will contain your JDK.
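The steps above can be sketched as follows (initial-image, image-with-jdk and the tarball path are placeholders from the answer, not fixed names):

```shell
# 1. Start a long-running container from a base image.
docker run -it -d --name jdk_container initial-image

# 2. Copy the JDK archive (or an already-extracted JDK directory)
#    from the host into the container.
docker cp jdk-11.0.7_linux-x64_bin.tar.gz jdk_container:/opt/

# 3. Snapshot the container's filesystem as a new image.
docker commit jdk_container image-with-jdk

# 4. Containers started from the new image include the JDK files.
docker run --rm image-with-jdk ls /opt
```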

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when updates to the container are actioned.
I have updated the container, to create an additional file in the home directory. When the new container is pulled, I cannot see the changed file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this, please?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
you overlay /container-dest-dir with whatever is in /host-src-dir.
From Docs
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the
container at /webapp. If the path /webapp already exists inside the
container’s image, the /src/webapp mount overlays but does not remove
the pre-existing content. Once the mount is removed, the content is
accessible again. This is consistent with the expected behavior of the
mount command.
This SO question is also relevant docker mounting volumes on host
It seems you want it the other way around (i.e. the container is the source and the host is the destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v i.e.: docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means that your directory, as it exists in the container, was copied into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
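One related option, not a bind mount to an arbitrary host path but close: when an empty named volume is mounted over a path that already has content in the image, Docker's local driver copies the image's content into the volume on first use. A sketch, with jenkins_home as a placeholder volume name:

```shell
# Mount a named volume; because it starts empty, Docker pre-populates
# it from /var/jenkins_home in the image, including new files like
# .m2/settings.xml.
docker run -d --name my_jenkins \
  -v jenkins_home:/var/jenkins_home -p 80:8080 jenkins

# Find where that volume lives on the host.
docker volume inspect --format '{{ .Mountpoint }}' jenkins_home
```

Note this pre-population only happens while the volume is empty; an already-used volume keeps its old content, which is exactly the behaviour the question observed with a host bind mount.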

How to share data between the docker container and the host?

I tried to share data between the Docker container and the host, for example by adding the parameter -v /Users/name/Desktop/Tutorials:/cntk/Tutorials to the docker run command, but I noticed that it also "deletes" all the files the container had in /cntk/Tutorials.
My question is how to make the same link, but instead having all the files in /cntk/Tutorials copied to the host (at /Users/name/Desktop/Tutorials).
Thank you
Unfortunately, that is not possible; take a look here. That is simply how mounting works in Linux.
It is not correct to say that the files were deleted. They are still present in the underlying image, but the act of mounting another directory at the same path has obscured them: they exist, but are not accessible while the mount is in place.
One way you can accomplish this is by mounting a volume into your container at a different path, and then copying the container's files to that path. Something like this.
Mount a host volume using a different path than the one the container already has for the files you are interested in.
docker run -v /Users/name/Desktop/Tutorials:/cntk/Tutorials2 [...]
Now, execute a command that will copy the files already in the docker image, into the mounted volume from the outside host.
docker exec <container-id> cp -r /cntk/Tutorials /cntk/Tutorials2
The docker cp command allows you to copy files and folders on demand between the host and the container:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
docker cp ContainerName:/home/data.txt . <== copy from container to host
docker cp ./test.txt ContainerName:/test.txt <== copy from host to container
docker cp ContainerName:/test.txt ./test2.txt <== copy from container to host
For details run docker cp --help
