Docker volume data not persisting to the local directory

I'm building a Dockerfile and files in the container are not getting synced with local storage.
Dockerfile:
FROM maven:3.6.1-jdk-8
ENV HOME=\wc_console
RUN mkdir $HOME
ADD . $HOME
WORKDIR $HOME
RUN mvn clean install -T 2C -DskipTests=true
RUN mvn dependency:go-offline -B --fail-never
CMD mvn clean install -T 2C -DskipTests=true
My docker build command:
docker build -f build_maven_docker . -t wc_console_build:1.0
I want to use bind-mount because after the container runs, I need the output on my local directory.
My docker run command:
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
My current working directory on the local machine while running Docker is: e:\svn\daffodil-dev-3.4.1\whitecoats-admin
My working directory in the Docker container is: wc_console
But whenever I run the container, the final output is not synced back to my local directory.
What am I doing wrong?
Image for folder visualization.

Instead of \wc_console in your Dockerfile's ENV HOME=\wc_console, use /wc_console. Linux uses forward slashes as path separators. The same goes for your docker run command. Change
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
to
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console wc_console_build:1.0
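A quick way to check that the bind mount is working (an illustrative check, not part of the original answer) is to override the image's command and list the mounted directory:
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console wc_console_build:1.0 ls /wc_console
If the listing shows your local project files, the mount itself is fine and any remaining problem is in how the build output is produced, as the next answer explains.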

When you mount the volume, you actually replace the contents of /wc_console in the container with whatever you have on your host.
If you want to get the artefacts generated by Maven, you need to run the Maven commands in the running container, not as part of the image build.
When you do this, you also don't need to add your sources to the image at build time.
FROM maven:3.6.1-jdk-8
ENV HOME=/wc_console
WORKDIR $HOME
# Make this part of the ENTRYPOINT if you really need it
#RUN mvn dependency:go-offline -B --fail-never
ENTRYPOINT mvn clean install -T 2C -DskipTests=true
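With that Dockerfile, the build and run steps would look roughly like this (the 1.1 tag is just an illustrative name):
docker build -f build_maven_docker -t wc_console_build:1.1 .
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console wc_console_build:1.1
Because mvn now runs at container start, the target/ directories it writes land inside the bind-mounted directory and are therefore visible on the host.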
That being said, for what you need you don't even really need a Dockerfile:
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console --workdir /wc_console maven:3.6.1-jdk-8 mvn clean install -T 2C -DskipTests=true
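As a follow-up (not part of the original answer), you can also mount a named volume for the local Maven repository so dependencies aren't re-downloaded on every run; the official maven images keep it under /root/.m2 by default:
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console -v m2_cache:/root/.m2 --workdir /wc_console maven:3.6.1-jdk-8 mvn clean install -T 2C -DskipTests=true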

Related

Copy files from container to local in Docker

I want to copy a file from a container to my local machine. The file is generated after a Python script executes, but because of the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output) you can run: docker run -d -it -v $PWD/output:/app/output/ --name test [image] and the file will end up in the output subdirectory of your current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then copy the file to your own filesystem using docker cp test:/app/example.json . to copy it to the current directory.
If running the container in the background is unnecessary, you can also stream the file over stdout. Because the image defines an ENTRYPOINT, override it so the command actually runs, for example:
docker run --rm --entrypoint sh [image] -c "python3 main.py > /dev/null && cat /app/example.json" > out_example.json
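Another minimal flow (a sketch, not from the original answers): run the container in the foreground so it exits when main.py finishes, then copy the artifact out of the stopped container, since docker cp also works on stopped containers:
docker run --name test [image]        # blocks until main.py finishes
docker cp test:/app/example.json .    # copy the generated file to the current directory
docker rm test                        # remove the stopped container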

See image generated in Docker

I created a Dockerfile like this:
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
Afterwards I do:
docker build --tag trial .
docker run -t -i trial /bin/bash
Then I run an executable that saves a .png file inside the container.
How can I visualize the image?
You can execute something inside the container.
To see all containers you can run docker ps --all.
To execute something inside a container you can run docker exec <container id> <command>.
Otherwise you can copy files from the container to the host with docker cp <container id>:/file-path ~/target/file-path
Mount a host directory onto the container directory where you are saving your images.
All of the images saved in that container directory will then be available in the mounted host directory, where you can view them or copy them to another machine.
For example:
docker run --rm -d -v host_directory:container_directory trial
docker exec -it container_name /bin/bash
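Putting the two suggestions together, a concrete sequence could look like this (partly hypothetical: the executable name and the output path are assumptions):
docker run -d --name ubimet trial sleep infinity          # keep the container alive
docker exec -it ubimet ./my_executable                    # hypothetical binary that writes the .png
docker cp ubimet:/UBIMET_Challenge/build/output.png .     # assumed output path
docker rm -f ubimet                                       # clean up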

Docker Desktop Community for Windows | Container Caching

Does the Docker Desktop Community version for Windows cache containers?
I was removing some of my containers and then composing them again for a Python 3/Flask/Angular 7 application, and they came back up very quickly without reinstalling any dependencies. I had to remove the containers and then restart my machine for it to build them again.
I was running this command:
docker-compose up --build
Yes, I have a docker-compose.yml. I also have a Dockerfile with commands to install the dependencies.
FROM python:3.7
RUN mkdir -p /var/www/flask
# Update working directory
WORKDIR /var/www/flask
# Copy everything from this directory into the server/flask docker container
COPY . /var/www/flask/
# Give execute permission to the file below, so that the script can be executed by docker
RUN chmod +x /var/www/flask/entrypoint.sh
# Install the Python libraries
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy uwsgi.ini
COPY ./uwsgi.ini /etc/uwsgi.ini
EXPOSE 5000
# Run server
CMD ["./entrypoint.sh"]
I also tried the following commands:
docker system prune
docker-compose up --build --force-recreate
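What is being reused here is most likely Docker's layer build cache rather than cached containers: as long as the files copied before the RUN pip3 install step are unchanged, that layer is reused and the dependencies are not reinstalled. A cache-free rebuild can be forced without restarting the machine (standard flags, not from the original post):
docker-compose build --no-cache      # rebuild every layer, ignoring the cache
docker-compose up --force-recreate   # recreate the containers from the fresh images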

Running multiple commands after docker create

I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to
Create the container with docker create (which returns the container ID).
Run the commands.
Copy the file out.
But I don't know how to get step 2. to work. docker exec only works on running containers...
If I understood your question correctly, all you need is docker "run, exec & cp" -
For example -
Create container with a name --name with docker run -
$ docker run --name bang -dit alpine
Run few commands using exec -
$ docker exec -it bang sh -c "ls -l"
Copy a file using docker cp -
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop -
$ docker stop bang
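For completeness, the same flow also works with docker create followed by docker start, which matches the original plan of getting the container ID first (a sketch; the alpine image and sh command are just placeholders):
cid=$(docker create -it alpine sh)    # create returns the container ID
docker start "$cid"                   # the allocated TTY keeps sh running
docker exec -it "$cid" sh -c "ls -l"
docker cp "$cid":/etc/hosts ./
docker stop "$cid" && docker rm "$cid"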
All you really need is a Dockerfile; build the image from it and run the container using the newly built image. For more information you can refer to this.
A "standard" Dockerfile might look something like this:
#Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu Software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
#Define the ENV variable
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
#Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure Services and Port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
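To actually use such a Dockerfile, you would build and run it roughly like this (the my-nginx-php tag is an illustrative name):
docker build -t my-nginx-php .
docker run -d -p 80:80 -p 443:443 my-nginx-php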

Docker: Why does my home directory disappear after the build?

I have a simple Dockerfile:
FROM ubuntu:16.04
MAINTAINER T-vK
RUN useradd -m -s /bin/bash -g dialout esp
USER esp
WORKDIR /home/esp
COPY ./entrypoint_script.sh ./entrypoint_script.sh
ENTRYPOINT ["/home/esp/entrypoint_script.sh"]
When I run docker build . followed by docker run -t -i ubuntu and look for the directory /home/esp, it is not there! The whole directory, including its files, seems to be gone.
However, when I add RUN mkdir /home/esp to my Dockerfile, it won't build, telling me mkdir: cannot create directory '/home/esp': File exists.
So what am I misunderstanding here?
I tested this on Debian 8 x64 and Ubuntu 16.04 x64.
With Docker version 1.12.2
Simply change your Docker build command to:
docker build -t my-docker:dev .
And then to execute:
docker run -it my-docker:dev
Then you'll get what you want. You didn't tag the image in docker build, so docker run -t -i ubuntu runs the stock Ubuntu image rather than the one you built.
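Alternatively, an untagged build can still be run by the image ID printed at the end of docker build (shown here with a placeholder ID):
docker build .                 # ends with "Successfully built <image-id>"
docker run -t -i <image-id>    # runs the image you just built, including /home/esp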
