I am trying to save a figure to my local machine after I build and run my Docker image.
My Dockerfile is this:
FROM python:3.7
WORKDIR /usr/src/app
# copy all the files to the container
COPY . .
RUN mkdir -p /root/.config/matplotlib
RUN echo "backend : Agg" > /root/.config/matplotlib/matplotlibrc
RUN pip install pandas matplotlib==2.2.4
CMD ["python", "./main.py"]
My main.py is this:
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv('data/coding-environment-exercise.csv')
st_name = data['student']
marks = data['mark']
x = list(st_name)
y = list(marks)
print(x)
out_path = '/output/your_filename.png'  # defined but never used
plt.plot(x, y)
plt.savefig("test2.png", format="png")
However, after I run this via the commands below, I can't find the png. I tried my code in a local Python IDE and it saves the figure, but I couldn't get it to work via Docker.
docker build -t plot:docker .
docker run -it plot:docker
A Docker container has its own file system, separate from the host running the container. In order to access the image file from the host, you must mount a so-called volume in the container, which you can do using the -v option. You must mount the directory which should contain your image.
docker run -it -v /path/on/host:/path/in/container plot:docker
Please see the Docker documentation on volumes for more details.
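For this particular setup, a minimal sketch might look like this, assuming you change main.py to save under the mounted directory (e.g. plt.savefig('/output/test2.png'), which matches the unused out_path in your script):
# create a folder on the host and mount it at /output in the container
mkdir -p "$PWD/output"
docker run -it -v "$PWD/output:/output" plot:docker
# after the container exits, the figure appears in ./output on the host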
Another option to obtain the image is the docker cp command, which allows you to copy files from the container to the host (and the other way round), as long as the container has not yet been removed.
docker cp <container-name>:/path/to/file/in/container.png /target/path/on/host.png
You can set the container name using the --name flag in the docker run command.
docker run --name my-container -it plot:docker
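For example, once the container started with the command above has finished, you could copy the figure out like this (the relative path test2.png resolves against the WORKDIR, /usr/src/app):
docker cp my-container:/usr/src/app/test2.png ./test2.png
# remove the container once the file has been copied
docker rm my-container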
I have created a Docker volume with this command:
docker run -ti --rm -v TestVolume1:/testvolume1 ubuntu
Then I created a file there called TestFile.txt and added text to it.
I also have a simple "Hello world" .NET Core app with this Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Release/net6.0/publish/ ShareFileTestInstance1/
WORKDIR /ShareFileTestInstance1
ENTRYPOINT ["dotnet", "ShareFileTestInstance1.dll"]
I published it using
dotnet publish -c Release
then ran
docker build -t counter-image -f Dockerfile .
And finally executed
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
to run my app with the Docker volume.
What I want to achieve is to access a file that is in a volume ("TestFile.txt" in my case) from code in the container, for example:
Console.WriteLine(File.Exists("WHAT FILE PATH HAS TO BE HERE") ? "File exists." : "File does not exist.");
Is it also possible to combine all of this in the Dockerfile? I want to add one more container next and connect it to the volume to save data there.
The parameters for docker run can be either for Docker or for the program running in the container. Parameters for Docker go before the image name, and parameters for the program in the container go after the image name.
The volume mapping is a parameter for Docker, so it should go before the image name. So instead of
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
you should do
docker run -it --rm --name=counter-container -v TestVolume1:/testvolume1 counter-image
When you do that, your file should be accessible to your program at /testvolume1/TestFile.txt, i.e. File.Exists("/testvolume1/TestFile.txt").
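If you want to double-check the volume's contents independently of your app, you can list them from a throwaway container, just like in your first command:
docker run -it --rm -v TestVolume1:/testvolume1 ubuntu ls -l /testvolume1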
It's not possible to do the mapping in the Dockerfile as you ask. Mappings may vary from docker host to docker host, so they need to be specified at run-time.
I am facing an issue where, after running the container and using a bind mount to mount a directory on the host into the container, I am not able to see new files created on the host machine inside the container.
The Python code creates a file inside the container which should be available on the host machine too, but this does not happen when I start the container with the command below. Updates to the Python code and HTML are, however, visible inside the container.
sudo docker container run -p 5000:5000 --name flaskapp --volume feedback1:/app/feedback/ --volume /home/deepak/PycharmProjects/NewDockerProject/sampleapp:/app flask_image
However, after starting the container using the command below, everything seems to work fine: I can see all the files (newly created and edited) from the container on the host and vice versa. I got this command from the Learn Docker in a Month of Lunches book.
sudo docker container run --mount type=bind,source=/home/deepak/PycharmProjects/NewDockerProject/sampleapp,target=/app -p 5000:5000 --name flaskapp flask_image
Below is the content of my Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python","main.py"]
Could someone please help me figure out the difference between the two commands? I am using Ubuntu. Thank you.
In my case, I got volumes working using the following docker run arguments (without --mount type=bind):
docker run -it ... -v mysql_data:/var/lib/mysql -v storage:/usr/shared/app_storage
where:
mysql_data is the volume name
/var/lib/mysql is the path inside the container
You can list volumes with:
docker volume ls
and inspect them to see where they point on your system (usually /var/lib/docker/volumes/{volume_name}/_data):
docker volume inspect mysql_data
To create a volume, use the following command:
docker volume create {volume_name}
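Putting those together, a minimal end-to-end sketch (the volume name and container path here are arbitrary examples):
docker volume create app_storage
# mount the named volume and write to it from inside the container
docker run -it --rm -v app_storage:/usr/shared/app_storage python:3.8-alpine sh
# afterwards, find where the data lives on the host (see the Mountpoint field)
docker volume inspect app_storage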
I am running Airflow on Docker using the puckel/docker-airflow image
docker run -d -p 8080:8080 puckel/docker-airflow webserver
How do I make pySpark available?
My goal is to be able to use Spark within my DAG tasks.
Any tips?
Create a requirements.txt, add all the dependencies in this file, and then follow https://github.com/puckel/docker-airflow#install-custom-python-package (a short sketch follows after this list):
- Create a file "requirements.txt" with the desired Python modules
- Mount this file as a volume: -v $(pwd)/requirements.txt:/requirements.txt (or add it as a volume in the docker-compose file)
- The entrypoint.sh script executes the pip install command (with the --user option)
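For example, a minimal sketch of those steps (the pyspark version is just a placeholder; pin whatever your DAG tasks need):
# hypothetical requirements.txt with PySpark as the only dependency
echo "pyspark==2.4.5" > requirements.txt
# mount it so entrypoint.sh pip-installs it when the container starts
docker run -d -p 8080:8080 -v $(pwd)/requirements.txt:/requirements.txt puckel/docker-airflow webserver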
I created a Flask application. This application receives an XML from a URL and saves it:
import requests  # assuming the requests library is installed

response = requests.get(base_url)  # base_url is defined elsewhere in the app
with open('currencies.xml', 'wb') as file:
    file.write(response.content)
When I run the application without Docker, the file currencies.xml is correctly created inside my app folder.
However, this behaviour does not occur when I use docker.
In docker I run the following commands:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker # This is where I want the file to be saved: inside the main Flask folder
When I run the second command, I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/Users/name/Desktop/myApiDocker\": stat /Users/name/Desktop/myApiDocker: no such file or directory": unknown.
ERRO[0001] error waiting for container: context canceled
But If I run:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker # Without specifying the PATH
I can access the website (but it is pretty useless without the file currencies.xml).
Dockerfile
FROM python:3.7
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]
When you
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker
Docker interprets everything after the image name (my-api-docker) as the command to run. It runs /Users/name/Desktop/myApiDocker as a command, instead of what you have as the CMD in the Dockerfile, and when that path doesn't exist in the container, you get the error you see.
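You can see this behavior with any throwaway command; everything after the image name replaces the Dockerfile's CMD:
# runs `echo hello` instead of `flask run --host=0.0.0.0`
docker run --rm my-api-docker echo hello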
It's a little unlikely you'll be able to pass this path to your flask run command as a command-line argument. A typical way of dealing with this is by using an environment variable instead. In your code,
import os

download_dir = os.environ.get('DOWNLOAD_DIR', '.')
currencies_xml = os.path.join(download_dir, 'currencies.xml')
with open(currencies_xml, 'wb') as file:
    ...
Then when you start your container, you can pass that as an environment variable with the docker run -e option. Note that this names a path inside the container; there's no particular need for this to match the path on the host.
docker run \
-p 5000:5000 \
-e DOWNLOAD_DIR=/data \
-v $HOME/Desktop/myApiDocker:/data \
my-api-docker
It's also fairly common to put an ENV statement in your Dockerfile or otherwise pick a fixed path for this, and just specify that your image's interface is that it will download the file into whatever is mounted on /data.
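That variant could look like this in the Dockerfile (the path /data is just a convention; docker run -e DOWNLOAD_DIR=... can still override it):
ENV DOWNLOAD_DIR=/data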
When you docker run the image, the process's context is the container's file system, not your host's file system. So my-api-docker ~/Desktop/myApiDocker refers to a path in the container's (!) file system, not your host's ~/Desktop.
Instead you need to mount one of your host's directories into the container's file system and store the file in the mounted directory.
Something like:
docker run ... \
--volume=[HOST-PATH]:[CONTAINER-PATH] \
... \
my-api-docker [CONTAINER-PATH]/thefile
The container then writes the file to [CONTAINER-PATH]/thefile but this is mapped to the host's [HOST-PATH]/thefile.
NB: the values for [HOST-PATH] and [CONTAINER-PATH] must be absolute, not relative, paths.
You may prove this behavior to yourself using e.g. either python:3.7 or busybox:
# List my host's root
ls -l /
# List the container's root
docker run --rm busybox ls -l /
# Mount the host's /tmp into the container's /tmp
ls -l /tmp
docker run --rm --volume=/tmp:/tmp busybox ls -l /tmp
HTH!
Below is my Dockerfile:
FROM node:10.15.0
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./build/release /usr/src/app/
RUN yarn
EXPOSE 3000
CMD [ "node", "server.js" ]
First I ran
docker build -t app .
and then
docker run -t -p 3000:3000 app
Everything works fine via localhost:3000 on my computer.
Then I tried to export this image with
docker export 68719e2bb0cd > app.tar
and import it again with
cat app.tar | docker import - app2
then ran
docker run -t -d -p 2000:3000 app2
and this error came out:
docker: Error response from daemon: No command specified.
Why did this happen?
You're using the wrong commands: docker export and docker import only transfer a container's filesystem, not other data like environment variables or the default command. There's not really a good typical use case for these commands.
The standard way to do this is to set up a Docker registry or use a public registry server like Docker Hub, AWS ECR, GCR, ... Once you have this set up you can docker push an image to the registry from the system it was built on, and then docker pull it on the system you want to run it on (or directly docker run it, which will automatically pull the image if not present).
If you really can't set up a registry then the commands you actually want are docker save and docker load, which save complete images with all of their metadata. I've only ever wanted these in environments where I can't connect the systems I want to run images on to the registry server; otherwise a registry is almost always better. (Cluster environments like Docker Swarm and Kubernetes all but require a registry as well.)
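For completeness, a sketch of the save/load route using the image from this question:
# on the machine where the image was built
docker save app > app.tar
# on the target machine
docker load < app.tar
docker run -t -p 3000:3000 app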
Just pass the command to run, because the imported image loses all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
The correct command would be something like:
docker run -t -d -p 2000:3000 app2 /path/to/something.sh
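In this case the files copied in by the original Dockerfile survive the export, so something like this should work (-w restores the working directory, since docker import also loses the WORKDIR setting):
docker run -t -d -p 2000:3000 -w /usr/src/app app2 node server.js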