I'm trying to build a docker container that includes external files - docker

I'm trying to create a docker container with bind9 on it, and I want to add my db.personal-domain.com file. But when I run docker build and then docker run -tdp 53:53 -v config:/etc/bind <image id>, the container doesn't have my db.personal-domain.com file. How can I fix that? Thanks!
tree structure
-DNS
--Dockerfile
--config
---db.personal-domain.com
Dockerfile
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y bind9
RUN apt-get install -y bind9utils
WORKDIR /etc/bind
VOLUME ["/etc/bind"]
COPY config/db.personal-domain.com /etc/bind/db.personal-domain.com
EXPOSE 53/tcp
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]

There is a syntax issue in your docker run -v option. If you use docker run -v name:/container/path (even if name matches a local file or directory), it mounts a named volume over your configuration directory. You want the host content, so you need the -v /absolute/host/path:/container/path syntax (the host path must start with /). So (on Linux/macOS):
docker run -d -p 53:53 \
  -v "$PWD/config:/etc/bind" \
  my/bind-image
In your image you're trying to COPY in the config file. This could also work, but it's being hidden by the volume mount, and it's also rendered ineffective by the VOLUME statement. (The most obvious effect of VOLUME is to prevent subsequent changes to the named directory; it's not required in order to mount a volume later.)
If you delete the VOLUME line from the Dockerfile, it should also work to run the container without the -v option at all. (But if you'll have different DNS configuration on different setups, this probably isn't what you want.)
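For reference, a minimal sketch of the Dockerfile with the VOLUME line removed (the two apt-get RUN lines are also collapsed into one; everything else is your original file):
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y bind9 bind9utils
WORKDIR /etc/bind
# No VOLUME line, so this COPY stays visible in the final image
COPY config/db.personal-domain.com /etc/bind/db.personal-domain.com
EXPOSE 53/tcp
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]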

-v config:/etc/bind
That syntax creates a named volume, called config. It looks like you wanted a host volume pointing to a path in your current directory, and for that you need to include a fully qualified path with the leading slash, e.g. using $(pwd) to generate the path:
-v "$(pwd)/config:/etc/bind"

Related

Understanding docker: how come my docker container content is dynamic?

I want to make sure I understand docker correctly: when I build an image from the current directory, I run:
docker build -t imgfile .
What happens when I change the content of a file in the directory AFTER the image is built? From what I've tried, it seems the content of the docker image also changes dynamically.
I thought the docker image was like a zip file that could only be changed with docker commands or by logging into the image and running commands.
The Dockerfile is:
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
EXPOSE 8000
RUN echo 'export PS1="\[\e[36m\]zappashell>\[\e[m\] "' >> /root/.bashrc
CMD ["bash"]
And the docker run command is:
docker run -ti -p 8000:8000 -e AWS_PROFILE=zappa -v "$(pwd):/var/task" -v ~/.aws/:/root/.aws --rm zappa-docker-image
Thank you
Best,
Your docker run command isn't really running your image at all. The docker run -v $(pwd):/var/task syntax mounts the current host directory over whatever was in /var/task in the image, hiding it. So when you edit a file on your host, the container sees the same host directory (not the content from the image), and you see the changes inside the container as well.
You're right that the image is immutable. The image you show doesn't really contain anything, beyond a .bashrc file that won't usually be used. You can try running the image without the -v options to see:
docker run --rm zappa-docker-image ls -al
# just shows `.` and `..` directories
I'd recommend making sure you COPY your application into the image, setting its CMD to actually run the application, and removing the -v option that overwrites its main directory. If your goal is to run host code against host files with host supporting data like your AWS credentials, you're not really getting much benefit from introducing Docker in between your application and every single file it uses.
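A sketch of what that could look like, assuming a hypothetical app.py entry point and a requirements.txt (adjust both to your project):
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
# Copy the application into the image so it is self-contained
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# Run the application, not an interactive shell
CMD ["python", "app.py"]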

Docker volume is empty

When using the -v switch, the files from the container should be copied to the host volume, right? But it seems like the directory jenkins_home isn't created at all.
If I create the jenkins_home directory manually and then mount it, the directory is still empty.
I want to preserve the Jenkins configs so I can re-run the image later.
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:latest
If you docker run -v jenkins_home:... where the first half of the -v option has no slashes in it at all, that syntax creates a named Docker volume; it isn't a bind mount.
If you docker run -v "$PWD/jenkins_home:..." then that host directory is mounted over the corresponding container directory. At startup time, nothing is ever copied into the host directory; if the host directory is empty, that empty directory gets mounted into the container, hiding everything that was in the image.
If you use the docker run -v named-volume:... syntax, and the named volume is empty, then in this case only, and only the very first time the container is run, the contents of the image are copied into the named volume. This doesn't work for bind mounts, and it doesn't work if there is already data in the volume (perhaps from a previous docker run). It also does not work in other container environments such as Kubernetes. I do not recommend relying on this behavior.
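A quick experiment that shows the difference, using the stock nginx image (its /etc/nginx has configuration files baked in):
# Named volume: on the very first use only, it is seeded from the image
docker volume create demo_conf
docker run --rm -v demo_conf:/etc/nginx nginx ls /etc/nginx    # nginx.conf, conf.d, ...
# Bind mount: the empty host directory hides the image's content
mkdir -p /tmp/empty_conf
docker run --rm -v /tmp/empty_conf:/etc/nginx nginx ls /etc/nginx    # prints nothing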
Probably the easiest way to make this work is to launch a one-off container to export the contents of the image, and then use bind-mount syntax:
cd jenkins_home
# One-off container: --rm cleans it up when done; -w sets the current
# container directory; tar writes the image's content to stdout, and we
# unpack it on the host
docker run \
  --rm \
  -w /var/jenkins_home \
  jenkins/jenkins \
  tar cf - . \
  | tar xf -
# Now launch the container as normal
docker run -d -p ... -v "$PWD:/var/jenkins_home" jenkins/jenkins
Figured it out.
It turned out that by default it creates the volume in /var/lib/docker/volumes/jenkins_home/ instead of in the current directory.
Also, I had tried docker volume create jenkins_home before running the docker image. So I'm not sure if it was the -v jenkins_home:/var/jenkins_home or the docker volume create that created the directory in /var/lib/docker/volumes/.
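You can confirm where a named volume actually lives with docker volume inspect:
docker volume inspect jenkins_home
# [ { "Name": "jenkins_home",
#     "Mountpoint": "/var/lib/docker/volumes/jenkins_home/_data",
#     ... } ]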

Docker not saving a file created using python - Flask application

I created a Flask application. This application receives an XML file from a URL and saves it:
import requests

response = requests.get(base_url)
with open('currencies.xml', 'wb') as file:
    file.write(response.content)
When I run the application without Docker, the file currencies.xml is correctly created inside my app folder.
However, this behaviour does not occur when I use docker.
In docker I run the following commands:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker # This is where I want the file to be saved: inside the main Flask folder
When I run the second command, I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/Users/name/Desktop/myApiDocker\": stat /Users/name/Desktop/myApiDocker: no such file or directory": unknown.
ERRO[0001] error waiting for container: context canceled
But If I run:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker # Without specifying the PATH
I can access the website (but it is pretty useless without the file currencies.xml).
Dockerfile
FROM python:3.7
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]
When you
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker
Docker interprets everything after the image name (my-api-docker) as the command to run. It runs /Users/name/Desktop/myApiDocker as a command, instead of what you have as the CMD in the Dockerfile, and when that path doesn't exist in the container, you get the error you see.
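You can see the same mechanism with any command that exists in the image:
# Everything after the image name replaces the CMD for this one run
docker run --rm my-api-docker echo hello    # prints "hello"; Flask never starts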
It's a little unlikely you'll be able to pass this path to your flask run command as a command-line argument. A typical way of dealing with this is to use an environment variable instead. In your code:
import os

download_dir = os.environ.get('DOWNLOAD_DIR', '.')
currencies_xml = os.path.join(download_dir, 'currencies.xml')
with open(currencies_xml, 'wb') as file:
    ...
Then when you start your container, you can pass that as an environment variable with the docker run -e option. Note that this names a path inside the container; there's no particular need for this to match the path on the host.
docker run \
  -p 5000:5000 \
  -e DOWNLOAD_DIR=/data \
  -v "$HOME/Desktop/myApiDocker:/data" \
  my-api-docker
It's also fairly common to put an ENV statement in your Dockerfile or otherwise pick a fixed path for this, and just specify that your image's interface is that it will download the file into whatever is mounted on /data.
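A sketch of that variant, reusing your Dockerfile with one added line (the /data path is just a convention you pick):
FROM python:3.7
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
# Fixed download location; callers bind-mount a host directory over /data
ENV DOWNLOAD_DIR=/data
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]
and then simply docker run -p 5000:5000 -v "$HOME/Desktop/myApiDocker:/data" my-api-docker.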
When you docker run the image, the process' context is the container's file system not your host's file system. So my-api-docker ~/Desktop/myApiDocker (attempts to) place the file in the container's (!) ~/Desktop.
Instead you need to mount one of your host's directories into the container's file system and store the file in the mounted directory.
Something like:
docker run ... \
--volume=[HOST-PATH]:[CONTAINER-PATH] \
... \
my-api-docker [CONTAINER-PATH]/thefile
The container then writes the file to [CONTAINER-PATH]/thefile but this is mapped to the host's [HOST-PATH]/thefile.
NB The values for [HOST-PATH] and [CONTAINER-PATH] must be absolute, not relative, paths.
You may prove this behavior to yourself using e.g. either python:3.7 or busybox:
# List my host's root
ls -l /
# List the container's root
docker run --rm busybox ls -l /
# Mount the host's /tmp into the container's /tmp
ls -l /tmp
docker run --rm --volume=/tmp:/tmp busybox ls -l /tmp
HTH!

How to retrieve file from docker container?

I have a simple Dockerfile which creates a zip file, and I'm trying to retrieve the zip file once it is ready. My Dockerfile looks like this:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip
ENTRYPOINT ["zip","-r","-9"]
CMD ["/lib64.zip", "/lib64"]
After reading through the docs I feel like something like this should do it, but I can't quite get it to work.
docker build -t ubuntu-libs .
docker run -d --name ubuntu-libs --mount source=$(pwd)/,target=/lib64.zip ubuntu-libs
One other side question: Is it possible to rename the zip file from the command line?
Edit:
This is different from the duplicate question mentioned in the comments because while they're using cp to copy a file from a running Docker container, I'm trying to mount a directory upon instantiation.
There are multiple ways to do this.
Using docker cp:
docker cp <container_hash>:/path/to/zip/file.zip /path/on/host/new_name.zip
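For this image that could look like the following (the container name libs is arbitrary, and renaming the zip happens for free in the docker cp step):
docker build -t ubuntu-libs .
docker run --name libs ubuntu-libs            # runs zip, then the container exits
docker cp libs:/lib64.zip ./renamed-libs.zip  # docker cp works on stopped containers
docker rm libs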
Using docker volumes:
As you were alluding to in your question, you can also mount a path from the container to your host. You can either specify where on the host you want the mount point to be, or let docker choose. These two paths require different approaches.
Let docker choose host mount location
docker volume create random_volume_name
docker run -d --name ubuntu-libs -v random_volume_name:<path/to/mount/in/container> ubuntu-libs
The content will be located on your host, here:
ls -l /var/lib/docker/volumes/random_volume_name/_data/
Let me choose host mount location
docker run -d --name ubuntu-libs -v <existing/mount/point/on/host>:<path/to/mount/in/container> ubuntu-libs
This creates a clean/empty location that is shared as per the locations defined in the command. Now you need to modify your Dockerfile to copy the artifact to this path. Since that takes two commands (the zip, then a cp), run them through a shell, something like:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip
ENTRYPOINT ["sh", "-c"]
CMD ["zip -r -9 /lib64.zip /lib64 && cp /lib64.zip <path/to/mount/in/container>/"]
The content will now be located on your host, here:
ls -l <existing/mount/point/on/host>
I've got to give a shout out to @joaofnfernandes from here, who does a great job explaining.
As @flagg19 commented, you should be binding a directory onto a directory. You can make up directories inside the container, and you can override the CMD arguments. Doing both plus adding type=bind leads to great success:
docker run -d --rm --mount type=bind,source="$(pwd)",target=/out ubuntu-libs /out/lib64.zip /lib64
Or of course you could change the Dockerfile CMD to write to /out/lib64.zip instead of /lib64.zip:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip && mkdir /out
ENTRYPOINT ["zip","-r","-9"]
CMD ["/out/lib64.zip", "/lib64"]
docker run -d --rm --mount type=bind,source="$(pwd)",target=/out ubuntu-libs
Either way, I recommend adding --rm and getting rid of --name. There's no need to keep the container around after it's done.

os.Lstat is failing in a mounted volume on an ubuntu-based Docker container

I have a docker container that uses go-bindata to compile a config. I run the docker container with
docker run -id \
-v conf:/conf \
-e CONF="/conf" \
my-container
Then in the docker container, I install go-bindata, and run
RUN go-bindata -prefix $CONF -o $GOPATH/src/github.com/my/repo/dir/conf_generated.go $CONF/config
And the output is
bindata: Failed to stat input path '/conf/config': lstat /conf/config: no such file or directory
This is the line that is causing the error. It is odd, because when I enter the container and run the same command, it works; stat /conf/config also works (it sees the file is there). What is going on here? Why doesn't this line work when the container is building?
It looks like you have the go-bindata call declared in your Dockerfile. With the RUN prefix, it's executed while the image is being built, when no volume is mounted yet. If you use the CMD prefix instead, it will run when the container starts; by then the volume is mounted and it should work.
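A sketch of the change, using the shell form of CMD so that $CONF and $GOPATH are expanded when the container runs, not at build time:
# Build time: runs before any -v volume exists -- this is the failing line
# RUN go-bindata -prefix $CONF -o $GOPATH/src/github.com/my/repo/dir/conf_generated.go $CONF/config

# Container start: runs after -v conf:/conf has been mounted
CMD go-bindata -prefix $CONF -o $GOPATH/src/github.com/my/repo/dir/conf_generated.go $CONF/config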
