I have a simple Dockerfile which I am using to containerize my Python app. The app takes file paths as command line arguments. It is my first time using Docker and I am wondering how I can achieve this:
FROM python:3.6-slim
COPY . /app
WORKDIR /app
RUN apt-get update && apt-get -y install gcc g++
# Install numpy, pandas, scipy and scikit
RUN pip install --upgrade pip
RUN pip --no-cache-dir install -r requirements.txt
RUN python setup.py install
ENTRYPOINT python -m myapp.testapp
Please note that the Python app is run as a module with the -m flag.
This builds the image completely fine. I can also run it using:
docker run -ti myimg
However, I cannot pass any command line arguments to it. For example, my app prints some help options with the -h option.
However, running docker as:
docker run -ti myimg -h
does not do anything, so the command line options are not being passed.
Additionally, I was wondering about the best way to pass files from the host machine to the container. For example, the application takes the path to a file as input, and that file usually resides on the host. Is there a way for my containerized app to access it?
You have to use the CMD instruction along with ENTRYPOINT (in exec form):
ENTRYPOINT ["python", "-m", "myapp.testapp"]
CMD [""]
Make sure that whatever default value you pass to CMD ("" in the snippet above) is ignored by your main command.
When you do docker run -ti myimg,
the command will be executed as python -m myapp.testapp ''
When you do docker run -ti myimg -h,
the command will be executed as python -m myapp.testapp -h
Note:
exec form: ENTRYPOINT ["command", "parameter1", "parameter2"]
shell form: ENTRYPOINT command parameter1 parameter2
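For the second part of the question (letting the containerized app read files that live on the host), the usual approach is a bind mount via docker run -v, and then passing the container-side path as the argument. A minimal sketch, assuming the host files live in /home/user/data (both paths are illustrative):
# Mount the host directory at /data inside the container, then pass the
# container-side path to the app; it is appended to the ENTRYPOINT as above.
docker run -ti -v /home/user/data:/data myimg /data/input.txt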
I want to preface this by saying that I am very new to Docker and have just gotten my feet wet with it. In the Dockerfile that I use to build the image I install a program that sets some env variables. Here is my Dockerfile for context.
FROM python:3.8-slim-buster
COPY . /app
RUN apt-get update
RUN apt-get install wget -y
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/install_mvGenTL_Acquire.sh
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/mvGenTL_Acquire-x86_64_ABI2-2.40.0.tgz
RUN chmod +x ./install_mvGenTL_Acquire.sh
RUN ./install_mvGenTL_Acquire.sh -u
RUN apt-get install -y python3-opencv
RUN pip3 install USSCameraTools
WORKDIR /app
CMD python3 main.py
After executing the docker build command, the program install_mvGenTL_Acquire.sh sets env variables inside the container. I need these variables to still be set when executing the docker run command. But when I check the env variables after running the image, they are not set. I know I can pass them in directly, but I would like to use the ones that are set by the install during the build.
Any help would be greatly appreciated, thanks!
To run a bash script when your container starts, make a script.sh file:
#!/bin/bash
your commands here
If you are using an Alpine image, you must use #!/bin/sh instead of #!/bin/bash on the first line of your script.
Now, in your Dockerfile, copy your bash file into the container and use the ENTRYPOINT instruction to run this file when the container starts:
.
.
.
COPY script.sh /
RUN chmod +x /script.sh
.
.
.
ENTRYPOINT ["/script.sh"]
Notice that the ENTRYPOINT instruction must use the path of your bash file inside the image.
Now, when you start a container, the script.sh file will be executed.
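Applied to the question above, the script can source whatever env file the installer wrote during the build and then exec the real command, so the variables are visible at run time. A minimal sketch, assuming the installer drops its variables in /etc/profile.d/acquire.sh (that path is a guess; check where install_mvGenTL_Acquire.sh actually writes them):
#!/bin/bash
# Load the variables the installer set during the build (hypothetical path).
source /etc/profile.d/acquire.sh
# Replace the shell with the app so it receives signals directly.
exec python3 main.py "$@"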
If I have the following Dockerfile
FROM centos:8
RUN yum update -y
RUN yum install -y python38-pip
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["app.py"]
With app.py being the following:
#!/usr/bin/python
import sys
print('Here is your param: ', sys.argv[0])
When I call docker run -it (myimg), how can I pass in a parameter so the output would be the param?
ex:
docker run -it (myimg) "testfoo"
would print
Here is your param: testfoo
sys.argv[0] refers to the file name, so you cannot expect testfoo when you run docker run -it my_image testfoo.
The first item in the list, sys.argv[0], is the name of the Python script. The rest of the list elements, sys.argv[1] to sys.argv[n], are the command line arguments 2 through n.
print('Here is your param: file name', sys.argv[0], 'args testfoo:', sys.argv[1])
So you can just replace the ENTRYPOINT with the one below; then you are good to pass the runtime argument testfoo:
ENTRYPOINT ["python3","app.py"]
Now pass the argument testfoo:
docker run -it --rm my_image testfoo
Anything you provide after the image name in the docker run command line replaces the CMD from the Dockerfile, and then that gets appended to the ENTRYPOINT to form a complete command.
Since you put the script name in CMD, you need to repeat that in the docker run invocation:
docker run -it myimg app.py testfoo
(This split of ENTRYPOINT and CMD seems odd to me. I'd make sure the script starts with a line like #!/usr/bin/env python3 and is executable, so you can directly run ./app.py; make that the CMD and remove the ENTRYPOINT entirely.)
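A minimal sketch of that layout, under the assumption that app.py starts with #!/usr/bin/env python3:
FROM centos:8
RUN yum update -y && yum install -y python38-pip
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
# Make the script directly runnable; its shebang selects the interpreter.
RUN chmod +x app.py
CMD ["./app.py"]
With this layout, docker run -it myimg runs the script with no arguments, and docker run -it myimg ./app.py testfoo passes testfoo, since everything after the image name replaces the CMD.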
I have the following Dockerfile
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
CMD ["./ubimet /UBIMET_Challenge/data/1706221600.29 output.csv"]
Even though the build output says it executes the last line, it does not (or does so incorrectly): running that last command should generate two files, but they are not there when I check inside the container using:
docker run -t -i trial /bin/bash
Nevertheless, if I get inside the container and from there I run:
./ubimet /UBIMET_Challenge/data/1706221600.29 output.csv
It generates the files. So why are the files not generated during the build?
CMD is the default command to run when you start your container. You are overriding this by passing /bin/bash to docker run.
Either change CMD to RUN (to run your script at build time) or run docker run without the trailing command (to run when you start the container).
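If the goal is to produce the files at build time, a minimal sketch of the RUN variant, using the same paths as the question:
# Executed while building; the resulting files are baked into the image.
RUN ./ubimet /UBIMET_Challenge/data/1706221600.29 output.csv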
You are using CMD wrong. CMD has 3 forms, none of which are what you are using:
CMD
The CMD instruction has three forms:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
You can use CMD like this (in your original line, the whole string, spaces included, is treated as the name of a single executable, which does not exist):
CMD ["./ubimet", "/UBIMET_Challenge/data/1706221600.29", "output.csv"]
I am new to docker and I built my image with
docker build -t mycontainer .
The contents of my Dockerfile is
FROM python:3
COPY ./* /my-project/
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Here I get an error:
Could not open requirements file: No such file or directory: 'requirements.txt'
I am not sure whether all the files from my local machine are actually copied to the image.
I want to inspect the contents of the image, is there any way I can do that?
Any help will be appreciated!
When you run docker build, it should print out a line like
Step 2/4 : COPY ./* /my-project/
---> 1254cdda0b83
That number is actually a valid image ID, and so you can get a debugging shell in that image
docker run --rm -it 1254cdda0b83 bash
In particular, the container that starts up will have the exact filesystem, environment variables (from ENV directives), current directory (WORKDIR), user (USER), and so on of that step; directly typing in the next RUN command should get the same result as Docker running it itself.
(In this specific case, try running pwd and ls -l in the debugging shell; does your Dockerfile need a WORKDIR to tell the pip command where to run?)
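For this image you can also list the copied files straight from the final build, since a command given after the image name overrides the CMD; a minimal sketch using the tag from the question:
# Start a throwaway container and list what COPY actually put in the image.
docker run --rm mycontainer ls -l /my-project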
You just have to get into the project directory before running the pip command.
The best way to do that is to set WORKDIR /my-project!
This is the updated file
FROM python:3
COPY ./* /my-project/
WORKDIR /my-project
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Kudos!
I am trying to run a docker image
Dockerfile
FROM marketplace.gcr.io/google/ubuntu1804:latest
MAINTAINER Vinay Joseph (vinay.joseph#microfocus.com)
LABEL ACI_COMPONENT="License Server"
EXPOSE 20000/tcp
#Install Unzip
RUN apt-get install unzip
#Unzip License Server to /opt/MicroFocus
RUN mkdir /opt/MicroFocus
RUN cd /opt/MicroFocus
#Download the License Server
RUN curl -O https://storage.googleapis.com/software-idol-21/LicenseServer_12.1.0_LINUX_X86_64.zip
RUN chmod 777 LicenseServer_12.1.0_LINUX_X86_64.zip
RUN unzip LicenseServer_12.1.0_LINUX_X86_64.zip
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/xxxx/idol-licenseserver', '.']
images:
- 'gcr.io/xxxx/idol-licenseserver'
The message I get is:
docker run gcr.io/xxxx/idol-licenseserver
/bin/sh: 0: -c requires an argument
There are a couple of problems with your Dockerfile
First
RUN apt-get install unzip
A good practice is to perform an update before installing packages; otherwise you could run into missing or stale package lists.
RUN apt-get update && apt-get install -y ...
Second
RUN mkdir /opt/MicroFocus
RUN cd /opt/MicroFocus
This is a mistake because cd doesn't persist between layers (different RUN commands). What you want is achieved with a single WORKDIR instruction:
WORKDIR /opt/MicroFocus
Third
The error message that you are facing means that the base image is configured with something like ENTRYPOINT ["sh", "-c"], and therefore expects you to provide an initial command line when launching the image. You have to define the proper startup command and append it after the image name.
ENTRYPOINT ["/bin/sh", "-c"] is the default entrypoint in every Dockerfile if you do not choose your own entrypoint. If you run the Dockerfile, add a command of your choice that you would like to run. At best just try bash:
docker run -it gcr.io/xxxx/idol-licenseserver bash
Without adding any command, sh is still started with the -c flag but has nothing to run, hence the error: -c requires an argument.
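Putting the three fixes together, a minimal sketch of a corrected Dockerfile; the final CMD is a placeholder, since the actual executable name inside the zip needs to be checked:
FROM marketplace.gcr.io/google/ubuntu1804:latest
LABEL ACI_COMPONENT="License Server"
EXPOSE 20000/tcp
# Update package lists first; also install curl for the download below.
RUN apt-get update && apt-get install -y unzip curl
# WORKDIR persists across instructions, unlike RUN cd.
WORKDIR /opt/MicroFocus
RUN curl -O https://storage.googleapis.com/software-idol-21/LicenseServer_12.1.0_LINUX_X86_64.zip \
 && unzip LicenseServer_12.1.0_LINUX_X86_64.zip
# Hypothetical startup command; replace with the real binary from the zip.
CMD ["./licenseserver"]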