I want to prepare a base Docker image that executes a command and produces a report, and I want the report to be persisted within the image for the next Dockerfile to use. Is this possible?
What you can do is set a WORKDIR within your Dockerfile, e.g.
WORKDIR /data
and use a volume with the run command for the built image.
docker run -v /users/home/your_user/use_case_folder:/data -it your_image_tag /bin/bash
When you run your reports/predictions, write them to /data; the files will then appear on your local system. You can then use the next
Dockerfile with the same volume mount, pointing at the directory defined as WORKDIR in the new Dockerfile. Sharing the results within the image itself for another image is not possible as far as I know. You will always have to use an externally mounted file system, a database, or something similar.
You may find some more information here, too:
https://docs.docker.com/storage/volumes/#restore-container-from-backup
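To make the two-step flow concrete, here is a sketch (the image tags, the report command, and the host path are placeholders, not from the original post): a container from the first image writes its report into the mounted directory, and a container built from the next Dockerfile reads it through the same mount.

```shell
# Step 1: run the report-producing image with a host folder mounted at /data
docker run -v /users/home/your_user/use_case_folder:/data report_image \
    sh -c 'your_report_command > /data/report.txt'

# Step 2: run the image built from the next Dockerfile with the same mount;
# it finds report.txt already present in its WORKDIR /data
docker run -v /users/home/your_user/use_case_folder:/data next_image \
    sh -c 'cat /data/report.txt'
```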
Related
Here is a use case: I want to download and extract all files from a particular website, and allow users to specify which workweek to fetch. Imagine using one docker command and specifying only the variable that tells the container where to go, download, and extract files.
The problem is that I want to allow a user to set a variable that refers to a particular workweek.
This is only my idea so far; I am not sure I am thinking about it correctly before I start to design my Dockerfile.
Dockerfile:
...
ENV TARGET="$WW_DIR"
...
Now imagine that the first user wants to download files from WW17, so he can type:
docker container run -e TARGET=WW17 <image_name>
The second one wants to download files from WW25:
docker container run -e TARGET=WW25 <image_name>
Etc.
Under the hood, the container knows that it must go to the WW17 directory (in the first scenario) or WW25 (in the second scenario). My idea is that a new container is created, and then the files are downloaded from an external server, for example with curl, and extracted.
Can you recommend the best approach, with some examples of how to solve it? Should I use a bash script inside the container?
Thanks.
There is no Dockerfile involved at docker container run time; it just runs the image's command. So write a command or script that does what you want, or add the data to the image when building it with the Dockerfile.
# Dockerfile
FROM your_favourite_image
COPY your_script /
RUN chmod +x /your_script
CMD /your_script
# your_script
#!/usr/bin/env your_favourite_language_like_python_or_bash_or_perl
# download the $TARGET or whatever you want to do
And then
docker build -t image .
docker run -e TARGET=WW1 image
Reading:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
https://docs.docker.com/get-started/overview/
https://btholt.github.io/complete-intro-to-containers/dockerfile
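One way to wire this up is a small bash script baked into the image that builds the URL from $TARGET and fetches it. A minimal sketch: the host name, archive layout, and the /data extraction directory below are placeholders for your real server, not part of the original question.

```shell
#!/usr/bin/env bash
set -eu

# Build the download URL for a given workweek tag (WW17, WW25, ...).
# example.com and the path layout are hypothetical placeholders.
build_url() {
    printf 'https://example.com/files/%s/archive.tar.gz' "$1"
}

# The container's CMD would then do something like:
#   curl -fsSL "$(build_url "$TARGET")" | tar -xz -C /data
echo "URL for WW17: $(build_url WW17)"
```

Run as docker run -e TARGET=WW17 <image_name>, this resolves TARGET at container start rather than at build time, which is exactly why an ENV line in the Dockerfile alone is not enough.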
I'm pretty new to Docker, so I apologize if this is a simple question. I need to create a script of some sort that will start a container from an ubuntu:16.04-based image, copy files from a host directory into the container, and run some of the code that was just copied in.
From what I understand, the first step would be to start up the container with something like this:
docker run --name test_container my_image
Then, I need to copy over the files. From what I have found, this is conventionally done on the host with a command like so:
docker cp src/. test_container:/code/src
Lastly, let's say I want to run some code from my container that I just put on it. If I started my container with the -it flag, I could probably just do something like the following (assuming there was a makefile and hello_world.c in the src folder that was copied):
cd code/src/
make
./hello_world
But is there a way to automate this? For example, I want to put the following lines in my Dockerfile:
WORKDIR code/src/
RUN make
RUN ./hello_world
But the main problem is that if the Dockerfile runs right at the beginning, the copied files will not be in the container yet by the time these commands at the bottom execute.
I was looking to see if there is a way to copy files onto the container by running commands inside the container. For example:
RUN docker cp src/. test_container:/code/src
But that doesn't seem to work, which kind of makes sense. So I was wondering if there is another good way to automate a process like this.
If you want to bake your files into the image the command is
COPY src /code/src/
WORKDIR /code/src
RUN make
CMD ["./hello_world"]
If you want to use files at runtime, you'd do it with something like
docker run -v "$(pwd)/src:/code/src" myimage make
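Putting the two together, the build-and-run sequence might look like this (the image tag is arbitrary; this is a sketch of the flow, not the asker's exact setup):

```shell
docker build -t hello_world_image .   # bakes src/ into the image via COPY, runs make
docker run hello_world_image          # executes ./hello_world via CMD
# Or, with the files mounted at runtime instead of baked in:
docker run -v "$(pwd)/src:/code/src" hello_world_image make
```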
Hello, could someone please help me (I'm a Docker beginner) copy a host file into my jupyter/pyspark-notebook image? I pulled this publicly available notebook image from Docker Hub.
I've created a Dockerfile which contains this.
FROM jupyter/pyspark-notebook:latest
ADD /home/abdoulaye/Documents/M2BIGDATA/Jaziri /bin/bash
I've also changed the destination /bin/bash to . but nothing becomes visible.
When I execute docker build, it appears to copy the files, as shown in the build output.
But when I open my notebook, I cannot find the folders; I checked a snapshot of the container and I am very confused.
In short: I have a notebook running in Docker that I use in my browser, but I cannot load data. I would like to copy the data to a place where I can access it from the notebook.
You cannot copy using an absolute host path; the source path must be relative to the Dockerfile's build context, so /home/abdoulaye/Documents/M2BIGDATA/Jaziri is not valid inside the Dockerfile. Move the files into the build context and then copy them like this:
ADD M2BIGDATA/Jaziri /work
First, you should not copy files from the host onto the path of an executable. For instance:
FROM alpine
COPY hello.txt /bin/sh
Copying like this will break running commands inside the container, because sh (or bash) gets replaced or corrupted.
Second, build the image with a valid context: the context should be the directory where your Dockerfile lives, so run the build from that directory.
docker build -t my-jupyter .
Third, you should not run cp inside the container to copy files from the host into it; instead, run docker cp on the host:
docker cp /home/abdoulaye/Documents/M2BIGDATA/Jaziri container_id:/work
it will copy your files to /work path of the container.
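Putting the three points together, a minimal corrected setup might look like this (the destination /home/jovyan/work is the default working directory in the jupyter images; the layout of the build context is an assumption):

```dockerfile
# Dockerfile, placed next to the M2BIGDATA directory so it is inside the build context
FROM jupyter/pyspark-notebook:latest
# COPY is preferred over ADD for plain files and directories
COPY M2BIGDATA/Jaziri /home/jovyan/work/Jaziri
```

Then build from that directory with docker build -t my-jupyter . and the data appears under work/Jaziri in the notebook's file browser.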
I have been trying to read up on Docker volumes for a while now, but something still seems to be missing.
Take this Dockerfile: (taken from docs)
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
If I build and run this simply as docker run <img>, then a new volume is created (its name can be found using docker inspect <containerid>). The contents of /var/lib/docker/volumes/<volumeid> is just a file greeting containing hello world, as expected.
So far so good. Now, say I wanted, as a user, to place this volume to a custom location. I would run docker run -v /host/path/:/myvol. This does not work anymore. /host/path is created (if it does not exist), but is empty.
So, the question is, how do I put some files in a Volume in Dockerfile, whilst allowing the user to choose where they put this volume? This seems to me to be one of the reasons to use Volume directive in the first place.
So, the question is, how do I put some files in a Volume in Dockerfile, whilst allowing the user to choose where they put this volume?
You don't. A volume is created at runtime, while your Dockerfile is consulted at build time.
If you want to be able to pre-populate a volume at an arbitrary location when your container runs, allow the user to provide the path to the volume in an environment variable and then have an ENTRYPOINT script that copies the files into the volume when the container starts.
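A minimal sketch of such an ENTRYPOINT helper, assuming the image bakes its seed files under /seed and the user supplies the target path in a DATA_DIR environment variable (both names are assumptions, not Docker conventions):

```shell
#!/usr/bin/env bash
# entrypoint.sh (sketch): seed files baked into the image into the
# user-chosen volume path at container start.
set -eu

populate() {
    local src="$1" dest="$2"
    mkdir -p "$dest"
    # Only seed an empty directory, so existing user data is never overwritten.
    if [ -z "$(ls -A "$dest")" ]; then
        cp -a "$src/." "$dest/"
    fi
}

# In the real entrypoint this would be followed by:
#   populate /seed "${DATA_DIR:-/data}"
#   exec "$@"
```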
An alternative is to require the use of a specific path, and then to take advantage of the fact that if the user mounts a new, empty volume on that path, Docker will copy the contents of that directory into the volume. For example, if I have:
FROM alpine
RUN mkdir /data; echo "This is a test" > /data/datafile
VOLUME /data
And I build an image testimage from that, and then run:
docker run -it testimage sh
I see that my volume in /data contains the file(s) from the underlying filesystem:
/ # ls /data
datafile
This won't work if I mount a host path on that location, for example if I do:
host# mkdir /tmp/empty
host# docker run -it -v /tmp/empty:/data testimage sh
Then the resulting path will be empty:
/ # ls /data
/ #
This is another situation in which an ENTRYPOINT script could be used to populate the target directory.
RUN cp /data/ /data/db: this command does not copy the files in /data to /data/db.
Is there an alternative way to do this?
It depends where /data is for you: already in the image, or on your host disk.
A Dockerfile RUN command executes any command in a new layer on top of the current image and commits the result.
That means /data is the one found in the image as built so far.
Not the /data on your disk.
If you want to copy from your disk to the image /data/db folder, you would need to use COPY or ADD.
At runtime, when you had an existing running container, you could also use docker cp to copy from or to a container.
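To make the distinction concrete, here is a sketch of both approaches (the paths and the container name are placeholders): COPY happens at build time from the build context, while docker cp works on an existing container at runtime.

```shell
# Build time (line in the Dockerfile): host ./data from the build context -> image
#   COPY data/ /data/db/
# Runtime, from the host: copy into or out of an existing container
docker cp ./dump.archive mycontainer:/data/db/
docker cp mycontainer:/data/db/result.txt ./result.txt
```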