Docker mount volume with copy data from temp container

Consider the scenario
FROM package-alpine:latest AS package
FROM alpine:latest
COPY --from=package /opt/raw /queue/raw
# filter-task changes raw itself
RUN filter-task /queue/raw
I need a volume on /queue so that, when the container runs, I can get the finished raw data directly on the host.
I'm wondering if this is possible and, if so, what the syntax is.
I tried with a Docker volume, but that actually makes the queue directory empty:
docker run -v $HOME/queue:/queue process:latest

What you define in your Dockerfile is executed in the build phase (docker build), not in the deployment phase (docker run).
You're mounting the volume in the run phase, so the /queue contents created at build time are hidden by the mount.
So I think you need to move filter-task from a Dockerfile RUN instruction to the docker run command.
Just try with this:
Dockerfile
FROM alpine:latest
# COPY needs an explicit destination; /usr/local/bin/ is illustrative
COPY ./filter-task /usr/local/bin/
Create image:
docker build -t process:latest .
Run the container, passing filter-task as the command instead of baking it into the Dockerfile:
docker run -v /opt/raw:/queue/raw process:latest filter-task /queue/raw
At this point, when the container is created, the volume is mounted, and data stored inside the container in /queue/raw will be accessible at /opt/raw on the host.
Your volume was empty because when you bind-mount a host directory over a path that already exists in the container, the host directory hides the image's contents; they are not copied out.
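Alternatively, if you want filter-task to stay in the build phase as in your original Dockerfile, you don't need a volume at all: keep the RUN instruction and copy the finished data out of a temporary container. A minimal sketch, reusing the names from the question:
docker build -t process:latest .
docker create --name tmp process:latest
docker cp tmp:/queue/raw $HOME/queue/raw
docker rm tmp
docker cp works on a container that has only been created, so nothing has to run; the temporary container exists just so its filesystem can be read.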

Related

-v deleted all the data from the docker container

I made a Docker image called myImage. It contains a folder /data that I want to let users edit themselves. I read that the -v flag can mount a volume, so I used it like the following:
I run the container with this command:
docker run -v /my_local_path:/data -it myImage /bin/bash
But surprisingly, Docker cleared all the files in /data in the container. This is not what I want... I actually want the host to get all the files from /data... :(
How can I do that?
When you share a volume like this, the directory on the host is mounted over the directory in the container, so the files that were in the container's folder are hidden and no longer accessible.
What you need to do is put the files in the container in folder A (a folder in the container), mount the volume at folder B (another folder in the container), and then, AFTER the volume is mounted, copy the files from folder A to folder B. The files will then be available both to the host and inside the container.
You can achieve this 'move files' operation using an ENTRYPOINT script in your Dockerfile (see Run a script in Dockerfile).
You want ENTRYPOINT rather than RUN here: a RUN instruction executes at build time, before any volume exists, while an ENTRYPOINT script runs after the container is created, and thus after the volume is mounted. A minimal sketch follows.
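In this sketch the base image, the /seed and /data paths, and the script name are illustrative, not from the question:
Dockerfile:
FROM alpine
COPY data/ /seed/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
# /data is the mount point (folder B); /seed holds the baked-in files (folder A).
# Seed the mounted volume only if it is still empty.
if [ -z "$(ls -A /data 2>/dev/null)" ]; then
  cp -R /seed/. /data/
fi
exec "$@"
With this in place, docker run -v /my_local_path:/data -it myImage /bin/bash seeds /data (and therefore the host directory) first, then exec "$@" hands control to the command you passed.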

ENTRYPOINT without docker run

I have an executable that's containerized and I use the entry point statement in the Dockerfile:
ENTRYPOINT ["s10cmd"]
However, this is a statistical app that needs to receive a data file, so I cannot just use docker run. Instead I create the container using docker create and then use docker cp to copy the .dat file into the container. However, none of the docker commands except run allow me to call the container as an executable.
Should I, in this case, not specify ENTRYPOINT or CMD, and just do docker start and then docker exec s10cmd /tmp/data.dat?
Docker images are just like templates, and Docker containers are live running instances of them.
In order to execute any command, Docker requires a container, so you need to create one. When you start the container, your entrypoint will launch the command, and the container will exit automatically when it finishes.
#>docker create --name <container> <image>   <-- This will create an instance of the docker image
#>docker cp <file> <container>:/tmp/         <-- Copy the relevant file into the container
#>docker start <container>                   <-- Start the container; the entrypoint will do the rest of the job.
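To make that concrete for this question: because the image declares ENTRYPOINT ["s10cmd"], any arguments placed after the image name in docker create become arguments to s10cmd. A sketch, where the image name s10image and the local file data.dat are assumptions:
docker create --name s10 s10image /tmp/data.dat
docker cp data.dat s10:/tmp/data.dat
docker start -a s10
The -a flag attaches to the container's output, and the container stops on its own when s10cmd exits.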

Move docker bind-mount to volume

Currently, I run my containers like this, for example:
docker run -v /nexus-data:/nexus-data sonatype/nexus3
(the leading / makes this a bind mount to a host path)
After reading the documentation, I discovered named volumes, which are completely managed by Docker. For several reasons, I want to change the way I run my containers to something like this:
docker run -v nexus-data:/nexus-data sonatype/nexus3
(a bare name makes this a named volume)
I want to migrate my existing bind mount to a named volume.
But I don't want to lose the data in the /nexus-data folder. Is there a way to transfer this folder to the new volume without redoing everything? I also have Jenkins and Sonar containers, for example; I just want to change the way I get persistent data. Is there a proper way to do this?
You can try the following steps so that you will not lose your current nexus-data.
#>docker run -v nexus-data:/nexus-data sonatype/nexus3
#>docker cp /nexus-data/. <container-name-or-id>:/nexus-data/
#>docker stop <container-name-or-id>
#>docker start <container-name-or-id>
docker cp will copy data from your host machine's /nexus-data folder into the container's /nexus-data folder, which is your mounted named volume.
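Spelled out with a concrete container name (nexus3 here is illustrative), the whole sequence looks like this:
#>docker run -d --name nexus3 -v nexus-data:/nexus-data sonatype/nexus3
#>docker cp /nexus-data/. nexus3:/nexus-data/
#>docker stop nexus3
#>docker start nexus3
Copying into the running container's /nexus-data writes through to the named volume, so the data survives the stop/start and any future container started with -v nexus-data:/nexus-data.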
Let me know if you face any issues while performing these steps.
Here's another way to do this, that I just used successfully with a Heimdall container. It's outlined in the documentation for the sonatype/nexus3 image:
Stop the running container (e.g. named nexus3)
Create a Docker volume called nexus-data with the following command: docker volume create nexus-data
By default, Docker will store the volume's content at /var/lib/docker/volumes/nexus-data/_data/
Simply copy the directory where you previously had been using a bind mount to the aforementioned volume directory (you'll need super user privileges to do this, or for the user to be part of the docker group): cp -R /path/to/nexus-data/* /var/lib/docker/volumes/nexus-data/_data/
Restart the nexus3 container with $ docker run --name=nexus3 -v nexus-data:/nexus-data sonatype/nexus3 (flags must come before the image name)
Your container will be back up and running, with the files previously persisted in /path/to/nexus-data/ now mirrored in the Docker volume. Check that functionality is the same, of course, and if so, you can delete the /path/to/nexus-data/ directory.
Q.E.D.

Docker add files to VOLUME

I have a Dockerfile which copies some files into the container and after that creates a VOLUME.
...
ADD src/ /var/www/html/
VOLUME /var/www/html/files
...
In the src folder is a files folder, and in this files folder are some files I need copied to the VOLUME the first time the container is started.
I thought that the first time the container gets created it would use the content of the original directory specified in the volume, but this is not the case.
So how can I get the files into this folder?
Do I need to create an extra folder and copy it with a run script (I hope not)?
Whatever you put in your Dockerfile is evaluated at build time (and not when you are creating a new container).
If you want to make files from the host available in your container, use a data volume:
docker run -v /host_dir:/container_dir ...
In case you just want to copy files from the host to a container as a one-off operation you can use:
docker cp /host_dir mycontainer:/container_dir
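Note that docker cp also works in the opposite direction (container to host), even when the container isn't running:
docker cp mycontainer:/container_dir /host_dir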
The issue is with your ADD statement. Also you might not understand how volumes are accessed. Compare your efforts with the demo below:
# alpine, or your favorite tiny image
FROM alpine
ADD src/files /var/www/html/files
VOLUME /var/www/html/files
Build an image called 'dataimg':
docker build -t dataimg .
Use the dataimg image to create a data container named 'datacon':
docker run --name datacon dataimg /bin/cat
Mount the volume from datacon in your nginx container:
docker run --volumes-from datacon nginx ls -la /var/www/html/files
And you'll see that the listing of /var/www/html/files reflects the contents of src/files.

Shared, ephemeral read-only Docker volume

I'm working on a Docker-based setup for a simple web-app running in Nginx+php-fpm. The common suggestion I've seen for storing the actual PHP code is to store it on the host and then mount it read-only in both the Nginx and PHP containers.
However, I want my setup to be self-contained so I can easily use it on Amazon ECS with Auto Scaling. In other words, I want to bundle the code somehow, rather than pulling it from the host.
So it seems what I want is a read-only volume that can be shared between two Docker containers and won't persist after those containers are destroyed. Is this possible? Or is there a better approach?
Docker images can contain volumes that are pre-populated with data. To achieve this in the Dockerfile, first populate the directory (for example using COPY or RUN) and then declare it as a volume. This allows you to build an image that contains your application code inside a volume:
FROM php:7-fpm
COPY ./app /var/www/html
VOLUME /var/www/html
Creating a new container from this image will create a new volume, initialize it with the data from the image's /var/www/html directory and mount it inside your new container at the same location.
See the documentation for more information:
The docker run command initializes the newly created volume with any data that exists at the specified location within the base image. For example, consider the following Dockerfile snippet:
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
This Dockerfile results in an image that causes docker run to create a new mount point at /myvol and copy the greeting file into the newly created volume.
This allows you to simply start your application image with docker run:
docker run -d --name app my_application_image
Then you can run your Nginx container and configure it to use the same volumes as your application container using the --volumes-from flag:
docker run -d --name web --link app:app --volumes-from app my_nginx_image
After this, you will have a Docker volume containing your application code that is mounted in both containers at /var/www/html.
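One extra detail for the read-only requirement in the question: --volumes-from accepts a :ro suffix, so the web container can mount the shared code read-only while it stays writable in the app container:
docker run -d --name web --link app:app --volumes-from app:ro my_nginx_image
Since the volume is anonymous, removing the containers with docker rm -v deletes it as well, so nothing persists after the containers are destroyed.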
