copy file from container to the host before the container dies - docker

I am pretty new to Docker containers. I am trying to build an image that runs a jar file. I want to pass the output file to the host for further processing, but the container exits as soon as it finishes the command.
1- What are the best practices for this problem?
2- Is there any way to pass the file name dynamically instead of hard-coding it in the Dockerfile?
Here is my Dockerfile:
FROM mybase:latest
VOLUME /root/:/var/myVol/
EXPOSE 8080
ADD mydir/test.jar /tmp/test.jar
CMD bash -c 'java -jar /tmp/test.jar > /var/myVol/output.json'

You can just mount the output directory as a volume using the -v option. Your program will then write directly to the output file on the host, with no need to copy anything anywhere.
Be aware, however, that the -v option can be noticeably slow on some platforms (most notably Docker Desktop on macOS and Windows).
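A minimal sketch of how that could look, combining a run-time bind mount with an environment variable for the output file name (the OUTPUT_FILE variable, the host path /root/output and the image name myimage are assumptions, not part of the original question). Dockerfile:
FROM mybase:latest
ADD mydir/test.jar /tmp/test.jar
# OUTPUT_FILE is a hypothetical default; override it per run with -e
ENV OUTPUT_FILE=/var/myVol/output.json
CMD bash -c 'java -jar /tmp/test.jar > "$OUTPUT_FILE"'
Run it with:
docker run -v /root/output:/var/myVol -e OUTPUT_FILE=/var/myVol/result.json myimage
The file then appears on the host as /root/output/result.json, and a different name can be passed on every run.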

Related

I want to write a Dockerfile where my container can load a db file from a directory, copy it to the application directory, and write it back on exit

I am working with a Golang application that saves its information in an SQLite file, which resides at data/sqlite.db in the same directory as the Dockerfile. My Dockerfile is something like this:
P.S.: guys, it's my very first Dockerfile, please be kind to me :(
FROM golang:1.16.4
ENV GIN_MODE=release
ENV PORT=8081
ADD . /go/src/multisig-svc
WORKDIR /go/src/multisig-svc
RUN go mod download
RUN go build -o bin/multisig-svc cmd/main.go
EXPOSE $PORT
ENTRYPOINT ./bin/multisig-svc
I deployed this application to the Google Cloud Platform, but somehow the container gets restarted there and after that my db is gone. So I researched and tried to use volumes.
I build the container using the command docker build -t svc . and then run it with docker run -p 8080:8081 -v data:/var/dump -it svc, but I cannot see the data folder being copied to the /var/dump directory. My basic idea is: whenever the container starts, it loads the db file from dump and copies it to the data directory so the application can use it, and when it exits it copies the file back to the dump directory. I don't know if I am on the right track; any help would really be appreciated.
Edit:
The issue is that when no request arrives for 15 minutes, GCP shuts the container down and starts it again when the next request comes in. So the problem is to somehow fetch the db file from the dump directory, update it, and write it back to the dump dir for future use when the container goes down.
For a local run, or if you are running on a VM, you need to specify the absolute path of the directory you want to mount as a bind mount into your container. In this case, something like this should work:
docker run -p 8080:8081 -v $(pwd)/data:/var/dump -it svc
When you don't specify an absolute path, the volume you're mounting into your running container is a named volume managed by the Docker daemon, and it is not located in a path related to your current working directory. You can find more information about how Docker volumes work here: https://docs.docker.com/storage/volumes/
However, there are multiple environments on GCP (App Engine, Kubernetes, VMs), so depending on your environment you may need to adapt this solution.
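For comparison, here is a hedged sketch of using an explicit named volume instead of a bind mount (the volume name svc-data is an assumption; the container path matches the question's data directory):
docker volume create svc-data
docker run -p 8080:8081 -v svc-data:/go/src/multisig-svc/data -it svc
docker volume inspect svc-data
docker volume inspect shows where the daemon stores the data on disk, and the data survives container restarts as long as the volume itself is not removed.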

Is there a way to open pre-built Singularity container with files in my current directory?

I am using a pre-built Singularity container for TensorFlow ML on a remote cluster.
I open the Singularity container located in /cvmfs/unpacked.cern.ch/registry.hub.docker.com/fnallpc/
with the command below:
singularity run --nv --bind `readlink $HOME` --bind `readlink -f ${HOME}/nobackup/` \
    --bind /cvmfs /cvmfs/unpacked.cern.ch/registry.hub.docker.com/fnallpc/fnallpc-docker:tensorflow-latest-devel-gpu-singularity
It opens the Singularity image and the command works.
Now, I have several files that need the TensorFlow environment, but when I open the image those files are obviously not there. Is there a way I can use the files in my working directory (which need the TensorFlow environment) when I open the Singularity container? I wonder if there is a command to open the image together with the files in my working directory, so that I have access to those files while inside the container image.
Both your $HOME and the current directory are mounted automatically by default. However, auto mounts are allowed to fail silently if the path does not exist inside the container, e.g. you execute from /cluster/storage/somepath but there is no /cluster inside the Singularity container. The same can happen if $HOME is somewhere unusual, but that is much less common.
Explicit use of --bind does not have this problem. Adding --bind "$PWD" should take care of it.
You can quickly test that you're executing where you expect with singularity exec --bind "$PWD" image.sif pwd. The working directory inside the container matches the one outside the container (if available), falling back to $HOME if $PWD is not available, and / if neither $PWD nor $HOME are available inside the image.
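Applied to the command from the question, that could look like this (only the extra --bind "$PWD" is new; everything else is copied from the original command):
singularity run --nv --bind `readlink $HOME` --bind `readlink -f ${HOME}/nobackup/` \
    --bind /cvmfs --bind "$PWD" \
    /cvmfs/unpacked.cern.ch/registry.hub.docker.com/fnallpc/fnallpc-docker:tensorflow-latest-devel-gpu-singularity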

How to execute a Linux command on the host machine and copy the output into an image built by a Dockerfile?

I want to copy the my.cnf file present on the host server into a child Docker image whenever I build a Dockerfile that uses a custom base image containing the command below.
ONBUILD ADD locate -i my.cnf|grep -ioh 'my.cnf'|head -1 /
but the above line breaks the Dockerfile. Please share the correct syntax or an alternative way to achieve the same.
Unfortunately, you can't run host commands from inside your Dockerfile. See Execute command on host during docker build.
Your best bet is probably to tweak your deployment scripts to locate my.cnf on the host before running docker build.
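A hedged sketch of such a wrapper script (the image name my-image and the idea of copying into the build context are assumptions, not from the original answer):
#!/bin/sh
# Locate my.cnf on the host and copy it into the build context,
# then build the image that expects it there.
MY_CNF=$(locate -i my.cnf | head -1)
cp "$MY_CNF" ./my.cnf
docker build -t my-image .
The base image's ONBUILD instruction (or a plain COPY my.cnf / in the child Dockerfile) can then pick the file up from the build context.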

Does data need to have a specific format to upload it to Docker?

I would like to know if there is a specific way to upload data to Docker. I've been stuck on this for a week and I am sure the answer will be something simple.
Does anyone know? I am working on a Windows 10 machine.
You can mount directories on the host system inside the container and access their contents that way, if that's what you mean by 'data'.
You should check out Manage data in containers for more info.
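If "data" simply means files on your Windows 10 machine, a hedged example of such a mount could look like this (the host path and image name are placeholders):
docker run -v C:\Users\me\data:/data my-image
Inside the container the files then show up under /data.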
You can use the docker cp command to copy the file.
For example: if you want to copy abc.txt to the location /usr/local/folder inside some Docker container (you can get the container name from the NAMES column of docker ps), then you just execute:
docker cp abc.txt ContainerName:/usr/local/folder
(abc.txt is local to the folder from which you are executing the command. You can also provide the full path of the file.)
After this, just get into the container with:
docker exec -it ContainerName bash
then cd /usr/local/folder. You will see your file copied there.
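docker cp also works in the other direction, which is exactly what the first question above needed, e.g. (the paths are placeholders):
docker cp ContainerName:/var/myVol/output.json ./output.json
This works even after the container has exited, as long as it has not been removed.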

Sharing files between container and host

I'm running a docker container with a volume /var/my_folder. The data there is persistent: When I close the container it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link: Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
The problem is that mounting the volume will, for your purposes, overwrite (hide) whatever is already at that path in the container.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script AND not as part of your build THEN they will be available from the host machine once mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing that, during the build copy the files to another directory and then cp them back into the mounted path in your ENTRYPOINT script.
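A hedged sketch of that pattern (the base image, the staging path /opt/staging and the placeholder CMD are assumptions):
Dockerfile:
FROM ubuntu:22.04
# Stage the shared files where the bind mount will not hide them
COPY my_code/ /opt/staging/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# Placeholder main command
CMD ["bash"]
entrypoint.sh:
#!/bin/sh
# Copy the staged files into the mounted directory at container start
cp -r /opt/staging/. /var/my_folder/
# Then hand over to whatever command was given (CMD or docker run arguments)
exec "$@"
Run with -v /path/on/host:/var/my_folder, and the files end up on the host as well.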
