Docker not saving a file created using Python - Flask application

I created a Flask application. It fetches an XML document from a URL and saves it:
import requests

response = requests.get(base_url)
with open('currencies.xml', 'wb') as file:
    file.write(response.content)
When I run the application without Docker, the file currencies.xml is correctly created inside my app folder.
However, this behaviour does not occur when I use docker.
In docker I run the following commands:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker # This is where I want the file to be saved: inside the main Flask folder
When I run the second command, I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/Users/name/Desktop/myApiDocker\": stat /Users/name/Desktop/myApiDocker: no such file or directory": unknown.
ERRO[0001] error waiting for container: context canceled
But If I run:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker # Without specifying the PATH
I can access the website (but it is pretty useless without the file currencies.xml).
Dockerfile
FROM python:3.7
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]

When you
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker
Docker interprets everything after the image name (my-api-docker) as the command to run. It runs /Users/name/Desktop/myApiDocker as a command, instead of the CMD from the Dockerfile, and since that path doesn't exist inside the container, you get the error you see.
You're unlikely to be able to pass this path to your flask run command as a command-line argument. A typical way of dealing with this is to use an environment variable instead. In your code:
import os

download_dir = os.environ.get('DOWNLOAD_DIR', '.')
currencies_xml = os.path.join(download_dir, 'currencies.xml')
with open(currencies_xml, 'wb') as file:
    ...
Then when you start your container, you can pass that as an environment variable with the docker run -e option. Note that this names a path inside the container; there's no particular need for this to match the path on the host.
docker run \
  -p 5000:5000 \
  -e DOWNLOAD_DIR=/data \
  -v $HOME/Desktop/myApiDocker:/data \
  my-api-docker
It's also fairly common to put an ENV statement in your Dockerfile or otherwise pick a fixed path for this, and just specify that your image's interface is that it will download the file into whatever is mounted on /data.
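For example, a minimal sketch of that approach built on the question's own Dockerfile (the /data path is just a convention here, and the extra mkdir is an assumption so the default location exists even when nothing is mounted):
FROM python:3.7
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
# Default download location; callers mount a volume here,
# or override it with docker run -e DOWNLOAD_DIR=...
ENV DOWNLOAD_DIR=/data
RUN mkdir -p /data
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]
A caller can still override DOWNLOAD_DIR at run time, but simply mounting a volume on /data is enough.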

When you docker run the image, the process' context is the container's file system not your host's file system. So my-api-docker ~/Desktop/myApiDocker (attempts to) place the file in the container's (!) ~/Desktop.
Instead you need to mount one of your host's directories into the container's file system and store the file in the mounted directory.
Something like:
docker run ... \
  --volume=[HOST-PATH]:[CONTAINER-PATH] \
  ... \
  my-api-docker [CONTAINER-PATH]/thefile
The container then writes the file to [CONTAINER-PATH]/thefile but this is mapped to the host's [HOST-PATH]/thefile.
NB: The values for [HOST-PATH] and [CONTAINER-PATH] must be absolute, not relative, paths.
You may prove this behavior to yourself using e.g. either python:3.7 or busybox:
# List my host's root
ls -l /
# List the container's root
docker run --rm busybox ls -l /
# Mount the host's /tmp into the container's /tmp
ls -l /tmp
docker run --rm --volume=/tmp:/tmp busybox ls -l /tmp
HTH!

Related

In Docker, while binding a host directory with a container directory, I am facing a problem

I am trying to bind-mount a directory from a Docker container to my host directory called /home. The container directory I am trying to sync is named /test and it contains a file called new.txt.
My Dockerfile is in /home/sampledocker1 directory. Its contents are as follows:
FROM ubuntu:18.04
RUN ["/bin/bash", "-c", "mkdir test"]
COPY new.txt test
Here, the local file new.txt is available in the current path.
I executed the commands below: first I built the Docker image, then I started the container:
docker build -t sample1:latest . # image is created properly
docker run -t -d -v /home:/test sample1:latest /bin/bash
After creating the container with the mount option, I expected the file new.txt in the container's test folder to appear in my /home directory, but it did not.
The bind mount is not happening properly.
By passing the -v option you actually override (hide) the directory that already exists in the image.
If you run:
docker run -ti sample1:latest /bin/bash
You will find the /test/new.txt file, because it was added to an image layer by the COPY command in the Dockerfile.
If you run:
docker run -ti -v /home:/test sample1:latest /bin/bash
You will find the contents of your computer's /home directory in the container's /test, because -v (a mounted volume) hides the original image layer created with the COPY command in the Dockerfile.
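You can see the difference for yourself with two quick runs (a sketch; the listings will of course depend on your machine):
# Without a mount: the image's own /test, containing the COPYed new.txt
docker run --rm sample1:latest ls /test
# With a mount: the host's /home completely hides the image's /test
docker run --rm -v /home:/test sample1:latest ls /test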
THE SUGGESTION: Remove both COPY and mkdir commands from your Dockerfile:
FROM ubuntu:18.04
# Nothing at all
And mount your current directory with your docker run command:
docker run -ti -v $(pwd):/test sample1:latest /bin/bash
Since your Dockerfile is now effectively empty, an equivalent command is just running the ubuntu:18.04 image:
docker run -ti -v $(pwd):/test ubuntu:18.04 /bin/bash
P.S. I changed -d (detached) to -i (interactive) in the example to make sure that you enter the container as soon as you run the docker run command.

Overwrite volume contents with container's contents

I have a volume which contains data that needs to stay persisted. When creating the volume for the first time and mounting it into my node container, all of the container's contents are copied to the volume, and everything behaves as expected.
The issue is that when I change a few files in my node container, I remove the old image and container and rebuild them from scratch. When running the updated container, the container's files don't get copied into the volume. This means the volume still contains the old files, so when it is mounted in the container, none of the updated functionality is present, and I would have to remove and recreate the volume from scratch, which I can't do since the volume's data needs to be persisted.
Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:5.0
COPY CommandLineTool App/CommandLineTool/
COPY NeedBackupNodeServer App/NeedBackupNodeServer/
WORKDIR /App/NeedBackupNodeServer
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt update \
&& apt install -y nodejs
EXPOSE 5001
ENTRYPOINT ["node", "--trace-warnings", "index.js"]
Here are my commands and expected output
docker volume create nodeServer-and-commandLineTool-volume
docker build -t need-backup-image -f Dockerfile .
docker run -p 5001:5001 --name need-backup-container -v nodeServer-and-commandLineTool-volume:/App need-backup-image -a
when running
docker exec need-backup-container cat index.js
the file is present and contains the latest updates, since the volume was just created.
Now when I update some files, I need to rebuild the image and the container, so I run
docker rm need-backup-container
docker rmi need-backup-image
docker build -t need-backup-image -f Dockerfile .
docker run -p 5001:5001 --name need-backup-container -v nodeServer-and-commandLineTool-volume:/App need-backup-image -a
Now I thought that when running
docker exec need-backup-container cat index.js
I'd see the updated file changes, but nope, I only see the old files that were first created when the volume was mounted for the first time.
So my question is, is there anyway to achieve overwriting the volume's files when creating a container?
If your application needs persistent data, it should be stored in a different directory from the application code. This can be in a dedicated /data directory or in a subdirectory of your application; the important thing is that, when you mount a volume to hold the persistent data, it does not hide your application code.
In a Node application, for example, you could refer to a ./data directory for your data files:
import { open } from 'fs/promises';
import { join } from 'path';

const dataDir = process.env.DATA_DIR || 'data';
// 'w+' opens the file for reading and writing ('rw' is not a valid flag)
const fh = await open(join(dataDir, 'file.txt'), 'w+');
Then in your Dockerfile you'd need to create that directory. If you set up a non-root user, that directory, but not your code, should be owned by the user.
FROM node:lts
# Create the non-root user
RUN adduser --system --no-create-home nonroot
# Install the Node application normally
WORKDIR /app
COPY package*.json .
RUN npm ci
COPY index.js .
# Create the data directory
RUN mkdir data && chown nonroot data
# Specify how to run the container
USER nonroot
CMD ["node", "index.js"]
Then when you launch the container, mount the volume only on the data directory, not over the entire /app tree.
docker run \
  -p 5001:5001 \
  --name need-backup-container \
  -v nodeServer-and-commandLineTool-volume:/app/data \
  need-backup-image
#                                          ^^^^^^^^^
Note that the Dockerfile as shown here would also let you use a host directory instead of a Docker named volume, and specify the host uid when you run the container. You do not need to make any changes to the image to do this.
docker run \
  -p 5002:5001 \
  --name same-image-with-bind-mount \
  -u $(id -u) \
  -v "$PWD/app-data:/app/data" \
  need-backup-image

I'm trying to build a Docker container that includes external files

I'm trying to create a Docker container with bind9 on it, and I want to add my db.personal-domain.com file. But when I run docker build and then docker run -tdp 53:53 -v config:/etc/bind <image id>, the container doesn't have my db.personal-domain.com file. How do I fix that? Thanks!
tree structure
-DNS
--Dockerfile
--config
---db.personal-domain.com
Dockerfile
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y bind9
RUN apt-get install -y bind9utils
WORKDIR /etc/bind
VOLUME ["/etc/bind"]
COPY config/db.personal-domain.com /etc/bind/db.personal-domain.com
EXPOSE 53/tcp
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]
There is a syntax issue in your docker run -v option. If you use docker run -v name:/container/path (even if name matches a local file or directory), it mounts a named volume over your configuration directory. You want the host content, and for that you need the -v /absolute/host/path:/container/path syntax (the host path must start with /). So (on Linux/MacOS):
docker run -d -p 53:53 \
  -v "$PWD/config:/etc/bind" \
  my/bind-image
In your image you're trying to COPY in the config file. This could also work, but it's hidden by the volume mount, and also rendered ineffective by the VOLUME statement. (The most obvious effect of VOLUME is to prevent subsequent changes to the named directory; it's not required in order to mount a volume later.)
If you delete the VOLUME line from the Dockerfile, it should also work to run the container without the -v option at all. (But if you'll have different DNS configuration on different setups, this probably isn't what you want.)
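For concreteness, here is the same Dockerfile with only the VOLUME line deleted (everything else is unchanged from the question):
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y bind9
RUN apt-get install -y bind9utils
WORKDIR /etc/bind
# No VOLUME statement: the COPY below takes effect, so the container
# can run with its baked-in config when no -v option is given
COPY config/db.personal-domain.com /etc/bind/db.personal-domain.com
EXPOSE 53/tcp
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]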
-v config:/etc/bind
That syntax creates a named volume called config. It looks like you wanted a host volume pointing to a path in your current directory, and for that you need to include a fully qualified path with the leading slash, e.g. using $(pwd) to generate the path:
-v "$(pwd)/config:/etc/bind"

Docker bind-mount not working as expected within AWS EC2 Instance

I have created the following Dockerfile to run a spring-boot app, myapp, within an EC2 instance.
# Use an official java runtime as a parent image
FROM openjdk:8-jre-alpine
# Add a user to run our application so that it doesn't need to run as root
RUN adduser -D -s /bin/sh myapp
# Set the current working directory to /home/myapp
WORKDIR /home/myapp
#copy the app to be deployed in the container
ADD target/myapp.jar myapp.jar
#create a file entrypoint-dos.sh and put the project entrypoint.sh content in it
ADD entrypoint.sh entrypoint-dos.sh
#Get rid of windows characters and put the result in a new entrypoint.sh in the container
RUN sed -e 's/\r$//' entrypoint-dos.sh > entrypoint.sh
#set the file as an executable and set myapp as the owner
RUN chmod 755 entrypoint.sh && chown myapp:myapp entrypoint.sh
#set the user to use when running the image to myapp
USER myapp
# Make port 9010 available to the world outside this container
EXPOSE 9010
ENTRYPOINT ["./entrypoint.sh"]
Because I need to access myapp's logs from the EC2 host machine, I want to bind-mount a folder onto the logs folder sitting within the myapp container, here: /home/myapp/logs
This is the command that I use to run the image in the EC2 console:
docker run -p 8090:9010 --name myapp myapp:latest -v home/ec2-user/myapp:/home/myapp/logs
The container starts without any issues, but the mount is not achieved as noticed in the following docker inspect extract:
...
"Mounts": [],
...
I have tried the following actions but ended up with the same result:
--mount type=bind instead of -v
use volumes instead of bind-mount
I have even tried the --privileged option
In the Dockerfile: I tried to use USER root instead of myapp
I believe this has nothing to do with the EC2 machine but with my container, since running other containers with bind mounts on the same host works like a charm.
I am pretty sure I am messing up my Dockerfile.
But what am I doing wrong in that Dockerfile?
Or
What am I missing?
Here you have the entrypoint.sh if needed:
#!/bin/sh
echo "The app is starting ..."
exec java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar -Dspring.profiles.active=${SPRING_ACTIVE_PROFILES} "${HOME}/myapp.jar" "$@"
I think the issue might be the order of the options on the command line. Docker expects the last two arguments to be the image id/name and (optionally) a command/args to run as pid 1.
https://docs.docker.com/engine/reference/run/
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You have the mount option (-v in the example you provided) after the image name (myapp:latest). I'm not sure, but perhaps the -v ... is being interpreted as arguments to be passed to your entrypoint script (which are being ignored), and docker run isn't seeing it as a mount option.
Also, the source of the mount here (home/ec2-user/myapp) doesn't start with a leading forward slash (/), which, I believe, will make it relative to where the docker run command is executed from. You should make sure the source path starts with a forward slash (i.e. /home/ec2-user/myapp) so that you're sure it will always mount the directory you expect, i.e. -v /home/ec2-user...
Have you tried this order:
docker run -p 8090:9010 --name myapp -v /home/ec2-user/myapp:/home/myapp/logs myapp:latest

Copy files from host to container when container starts on any host

I want to copy files from the host to a Docker container when I run the container, on any host.
Here is my Dockerfile:
FROM tomcat:9
EXPOSE 8080
ADD ./target/app.war /tmp/myapp.war
RUN unzip /tmp/myapp.war -d /usr/local/tomcat/webapps/myapp
ENTRYPOINT ["cp", "-r", "/data/*", "/usr/local/tomcat/webapps/myapp/data"]
After building the docker image
docker build -t myappimage .
I am running it with:
docker run --mount type=bind,source=d:/data,destination=/data --rm -it -p 8081:8080 myappimage
but this throws the error cp: cannot stat '/data/*': No such file or directory.
I am not sure why the mounting is not working; it should copy all the files from my host directory d:/data into the container directory /data when the container starts.
The command in ENTRYPOINT runs inside the Docker container.
You can try:
FROM tomcat:9
EXPOSE 8080
ADD ./target/app.war /tmp/myapp.war
RUN unzip /tmp/myapp.war -d /usr/local/tomcat/webapps/myapp
COPY /data /usr/local/tomcat/webapps/myapp/data/
I hope the /usr/local/tomcat/webapps/myapp/data directory exists in the image prior to copying. The command seems to work fine on my machine (Mac). Not sure if it's the d:/ that is causing the issue.
You can also try using the -v option with a :z flag (it solved the same issue for me), assuming you are inside the d: directory:
docker run -v "$(pwd)"/data:/data:z --rm -it -p 8081:8080 myappimage
With -v it will create the endpoint for you. You can read more here.
