Pandas not found in Docker Run Command [attaching volume] - docker

When I build my docker image and run it using the following commands:
docker build -t iter1 .
docker run -it --rm --name iter1_run iter1
My application runs just fine. However, when I try to attach a volume and execute the following command:
docker run -it --rm --name iter_run -v /Users/xxxx/Desktop/Docker_Builds/SingleDocker/xxxxxx:/usr/src/oce -w /usr/src/oce python:3 python oce_test.py
The file oce_test.py can't find pandas:
Traceback (most recent call last):
File "oce_test.py", line 1, in <module>
import pandas as pd
ModuleNotFoundError: No module named 'pandas'
The content of my Dockerfile is as follows:
# Docker image
FROM python:3
# Copy requirements
COPY requirements.txt /
# Install Requirements
RUN pip install -r /requirements.txt
# Copy scripts needed for execution
COPY ./xxxx /usr/src/oce
# Establish a working directory
WORKDIR /usr/src/oce
# Execute required script
CMD ["python", "oce_test.py"]
The content of my requirements.txt is as follows:
numpy==1.18.1
pandas==1.0.1
matplotlib==3.1.3
scipy==1.4.1
python-dateutil==2.8.1

David Maze answered this:
Your docker run command is running a plain python:3 image with no additional packages installed. If you want to use the image built from your Dockerfile, but overwrite the application code in the image with arbitrary content from your host, use your image name iter1 instead. (You don't need to repeat the image's WORKDIR or CMD as docker run options.)
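A minimal sketch of the corrected command, reusing the paths from the question (the image's own WORKDIR and CMD supply the rest):
docker run -it --rm --name iter1_run -v /Users/xxxx/Desktop/Docker_Builds/SingleDocker/xxxxxx:/usr/src/oce iter1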

Related

Pass in Arguments when Doing a Docker Run

If I have the following Dockerfile
FROM centos:8
RUN yum update -y
RUN yum install -y python38-pip
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["app.py"]
With app.py being the following:
#!/usr/bin/python
import sys
print('Here is your param: ', sys.argv[0])
When I call docker run -it (myimg), how can I pass in a parameter so that the output shows the param?
ex:
docker run -it (myimg) "testfoo"
would print
Here is your param: testfoo
sys.argv[0] refers to the file name, so you cannot expect testfoo when you run docker run -it my_image testfoo.
The first item in the list, sys.argv[0], is the name of the Python script. The rest of the list elements, sys.argv[1] to sys.argv[n], are the command line arguments 2 through n.
print('Here is your param: file Name', sys.argv[0], 'args testfoo:', sys.argv[1])
So you can just change the ENTRYPOINT to the following, and then you can pass the runtime argument testfoo:
ENTRYPOINT ["python3","app.py"]
Now pass argument testfoo
docker run -it --rm my_image testfoo
Anything you provide after the image name in the docker run command line replaces the CMD from the Dockerfile, and then that gets appended to the ENTRYPOINT to form a complete command.
Since you put the script name in CMD, you need to repeat that in the docker run invocation:
docker run -it myimg app.py testfoo
(This split of ENTRYPOINT and CMD seems odd to me. I'd make sure the script starts with a line like #!/usr/bin/env python3 and is executable, so you can directly run ./app.py; make that be the CMD and remove the ENTRYPOINT entirely.)
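A sketch of that suggested simplification, assuming app.py begins with #!/usr/bin/env python3 and has been made executable (chmod +x app.py) before the build:
FROM centos:8
RUN yum update -y
RUN yum install -y python38-pip
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
CMD ["./app.py"]
With this layout, docker run -it myimg runs the script with no arguments, and docker run -it myimg ./app.py testfoo passes testfoo as an argument.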

Check the file contents of a docker image

I am new to docker and I built my image with
docker build -t mycontainer .
The contents of my Dockerfile is
FROM python:3
COPY ./* /my-project/
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Here I get an error:
Could not open requirements file: No such file or directory: 'requirements.txt'
I am not sure if all the files from my local machine are actually copied into the image.
I want to inspect the contents of the image; is there any way I can do that?
Any help will be appreciated!
When you run docker build, it should print out a line like
Step 2/4 : COPY ./* /my-project/
---> 1254cdda0b83
That number is actually a valid image ID, and so you can get a debugging shell in that image
docker run --rm -it 1254cdda0b83 bash
In particular, the place that container starts up will have the exact filesystem, environment variables (from ENV directives), current directory (WORKDIR), user (USER), and so on; directly typing in the next RUN command should get the same result as Docker running it itself.
(In this specific case, try running pwd and ls -l in the debugging shell; does your Dockerfile need a WORKDIR to tell the pip command where to run?)
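A sketch of that debugging session (substitute whatever image ID your own build printed):
docker run --rm -it 1254cdda0b83 bash
pwd                                           # shows the working directory (here, /)
ls -l /my-project/                            # confirms what COPY actually put there
pip install -r /my-project/requirements.txt   # re-runs the failing step with a full path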
You just have to get into the project directory and run the pip command.
The best way to do that is to set WORKDIR /my-project!
This is the updated file:
FROM python:3
COPY ./* /my-project/
WORKDIR /my-project
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Kudos!

Dockerfile and Docker run -w / workdir

Let's take the sample Python Dockerfile as an example.
FROM python:3
WORKDIR /project
COPY . /project
and then the run command to run the tests within that container:
docker run --rm -v$(CWD):/project -w/project mydocker:1.0 pytest tests/
We are declaring the WORKDIR in both the Dockerfile and the run command.
Am I right in saying
The WORKDIR in the Dockerfile is the directory in which the subsequent commands in the Dockerfile are run? But this has no impact when we run the docker run command?
Instead we need to pass in -w /project to have pytest run in the /project directory, or rather, for pytest to look for the tests directory in /project.
My setup.cfg
[tool:pytest]
addopts =
    --junitxml=results/unit-tests.xml
In the example you give, you shouldn't need either the -v or -w option.
Various options in the Dockerfile give defaults that can be overridden at docker run time. CMD in the Dockerfile, for example, will be overridden by anything in a docker run command after the image name. (...and it's better to specify it in the Dockerfile than to have to manually specify it on every docker run invocation.)
Specifically to your question, WORKDIR affects the current directory for subsequent RUN and COPY commands, but it also specifies the default directory when the container runs; if you don't have a docker run -w option it will use that WORKDIR. Specifying -w to the same directory as the final image WORKDIR won't have an effect.
You also COPY the code into your image in the Dockerfile (which is good). You don't need a docker run -v option to overwrite that code at run time.
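Concretely, the docker run command from the question reduces to just:
docker run --rm mydocker:1.0 pytest tests/
since the image already contains both the code and the WORKDIR.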
Looking more specifically at pytest, it won't usually write things out to the filesystem. If you are using functionality like JUnit XML or code coverage reports, you can set it to write those out somewhere other than your application directory:
docker run --rm \
-v $PWD/reports:/reports \
mydocker:1.0 \
pytest --cov=myapp --cov-report=html:/reports/coverage.html tests

How to pass command line arguments to my dockerized python app

I have a simple Dockerfile which I am using to containerize my Python app. The app takes file paths as command line arguments. It is my first time using Docker and I am wondering how I can achieve this:
FROM python:3.6-slim
COPY . /app
WORKDIR /app
RUN apt-get update && apt-get -y install gcc g++
# Install numpy, pandas, scipy and scikit
RUN pip install --upgrade pip
RUN pip --no-cache-dir install -r requirements.txt
RUN python setup.py install
ENTRYPOINT python -m myapp.testapp
Please note that the Python app is run as a module with the -m flag.
This builds the image completely fine. I can also run it using:
docker run -ti myimg
However, I cannot pass any command line arguments to it. For example, my app prints some help options with the -h option.
But running docker as:
docker run -ti myimg -h
does not do anything, so the command line options are not being passed.
Additionally, I was wondering what the best way is to pass file paths from the host computer to Docker. So, for example, the application takes the path to a file as an input, and the file would usually reside on the host computer. Is there a way for my containerized app to access that?
You have to use the CMD instruction along with ENTRYPOINT (in exec form):
ENTRYPOINT ["python", "-m", "myapp.testapp"]
CMD [""]
Make sure that whatever default value you pass to CMD ("" in the snippet above) is ignored by your main command.
When you do docker run -ti myimg,
the command will be executed as python -m myapp.testapp ''.
When you do docker run -ti myimg -h,
the command will be executed as python -m myapp.testapp -h.
Note:
exec form: ENTRYPOINT ["command", "parameter1", "parameter2"]
shell form: ENTRYPOINT command parameter1 parameter2
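For the second part of the question, files on the host are usually shared with the container via a bind mount, so the app sees the host file at a path inside the container. A sketch, where /Users/me/data.csv is a hypothetical host path:
docker run -ti -v /Users/me/data.csv:/data/input.csv myimg /data/input.csv
The app receives /data/input.csv as its command line argument, and that path resolves to the mounted host file.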

Error while building a docker image of a flask app

This is the flask app:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Flask Dockerized'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
This is the docker file:
FROM ubuntu:14.04
MAINTAINER Ashish John Stanley "a*********#gmail.com"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
Command to build the docker image:
docker build -t flask-container:latest. -f --file="~/Documents/web/requirements.txt"
Executing this command gives the error shown in the attached terminal screenshot.
--file should point to the Dockerfile itself (not to requirements.txt), and the positional PATH argument is the directory that serves as the build context.
Here is a command to create docker image for your case:
docker build -t flask-container:latest .
Make sure you are running it from the directory that contains Dockerfile
Docker documentation:
--file , -f Name of the Dockerfile (Default is ‘PATH/Dockerfile’)
By default the docker build command will look for a Dockerfile at the
root of the build context. The -f, --file, option lets you specify the
path to an alternative file to use instead. This is useful in cases
where the same set of files are used for multiple builds. The path
must be to a file within the build context. If a relative path is
specified then it is interpreted as relative to the root of the
context.
Here is more info about Docker options
https://docs.docker.com/engine/reference/commandline/build/#options.
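If the Dockerfile really does live outside the current directory, -f can name it explicitly; a sketch, assuming it sits in ~/Documents/web alongside requirements.txt:
docker build -t flask-container:latest -f ~/Documents/web/Dockerfile ~/Documents/web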
