I have the below Dockerfile where I need to interpolate the filename to be run.
ARG PYTHON_VERSION=3.7-alpine
# Use the python image as the base image
FROM python:${PYTHON_VERSION}
# set environment to run
ENV ENVIRONMENT={ENVIRONMENT:-"prod"}
# Copy the python files
COPY requirements.txt /app/
COPY prod.py dev.py requirements.txt /app/
# Set the working directory
WORKDIR /app
# Install the dependencies
RUN pip install -r requirements.txt
# Set the command to run the python file when the container is started
CMD ["python", "${ENVIRONMENT}.py"]
But when I build the image and run it, I get the following error:
python: can't open file '${ENVIRONMENT}.py': [Errno 2] No such file or directory
Why is this not interpolated correctly? Can someone help me with this?
I think your best bet is to have something like a run.py which reads the $ENVIRONMENT variable and then loads the correct file. Or, depending on how far dev.py and prod.py diverge, they could even be one file after all.
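A minimal sketch of that dispatcher idea, assuming prod.py and dev.py each expose a main() function (that detail is an assumption, not something from the question):
# run.py -- minimal dispatcher sketch
# assumes prod.py and dev.py each define a main() function (hypothetical)
import importlib
import os

# fall back to "prod" when ENVIRONMENT is not set, as the original ENV line intended
env = os.environ.get("ENVIRONMENT", "prod")

# import prod.py or dev.py by name and hand over control
module = importlib.import_module(env)
module.main()
Copy run.py into /app alongside prod.py and dev.py, and the CMD becomes a fixed ["python", "run.py"], so no variable interpolation is needed at all.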
I use ENTRYPOINT ["./app"] in my Dockerfile. When I used another, larger image, it worked.
But when I used alpine, it told me exec ./app: no such file or directory.
And I'm sure the file exists in the container.
My guess is that alpine doesn't have bash, so I tried RUN apk update && apk add bash, but it still does not work.
What's wrong?
Here is my dockerfile:
FROM golang AS build
WORKDIR /app
COPY . .
RUN cd ./src && go build -o ../bin/app
RUN rm -r src/ .vscode/ .git/
FROM alpine AS release
COPY --from=build /app /app
WORKDIR /app/bin
ENTRYPOINT ["./app"]
And this is the container content:
Even after I install bash, it is still not found.
Also, I forgot to mention that Docker runs on WSL 2.
You have to investigate the reasons.
I suggest you add the following lines to check for the presence of the executable file in the directory where you run it:
RUN pwd
RUN ls -l
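For example, dropped temporarily into the release stage of the Dockerfile above, that could look like this:
FROM alpine AS release
COPY --from=build /app /app
WORKDIR /app/bin
# temporary debug output: print the working directory and list its contents
# to confirm that ./app is present and executable
RUN pwd
RUN ls -l
ENTRYPOINT ["./app"]
Remove the two RUN lines once the cause has been found.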
I'm having some issues with producing the following setup.
I've implemented a Java application that can start a process with any executable file (with Runtime.exec({whatever-file-here})....), where the file path is provided via external configuration. I have then created a Docker image with the said application, the idea being that the external executable file will be part of a second Docker image, containing all the necessary dependencies. This will leave the option to easily swap the file being executed by the Java app.
So from one side there is the Java image that should look like:
FROM openjdk:14
WORKDIR /app
COPY /build/some.jar /app/some.jar
And let's say I build a service-image out of it. The next step would be to use the aforementioned image as a base image in either a second Dockerfile or a single file with multiple stages.
The way I imagine it, a second Dockerfile for, let's say, a Python executable would be:
# python so I can run the script
FROM python:latest
# to get the runtime environment + app directory + jar
COPY --from=service-image / /
# copying the file for the jar to run
COPY some-file.py /app/some-file.py
# the command that will start the java app
CMD ["java", "-jar", "/app/some.jar"]
And a container started from an image built from the second file should have both a JRE to run the jar file and Python to run the .py file, as well as the actual .jar and .py files. I'm ignoring any details such as environment variables necessary for the Java app to work. But that doesn't seem right, as the resulting image is absolutely massive.
What would you recommend as an approach? Until now I haven't dealt with complex Docker scenarios.
I really do not think that you will be able to create a proper container by replacing the root folder with the one of another image.
Here is how you could do it:
Build your jar file using an openjdk image
Create an image with python and Java installed and copy the .jar from the previous image
You can start from a Python image and install Java, or the other way around.
Here is an example:
FROM openjdk:14 AS build
WORKDIR /app
COPY . .
RUN build-my-app.sh
FROM openjdk:14-alpine AS runner
WORKDIR /app
# Install python
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
COPY --from=build /app/dist/myapp.jar myapp.jar
COPY some-file.py some-file.py
# the command that will start the java app
CMD ["java", "-jar", "/app/myapp.jar"]
EDIT: Apparently you are not using Docker to build your jar, so you can simply copy it from your host machine (like the .py file) and skip the build stage.
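A minimal sketch of that simplified variant, assuming the prebuilt jar lives at build/some.jar in the build context and some-file.py sits next to the Dockerfile (adjust the paths to your project):
FROM openjdk:14-alpine
WORKDIR /app
# Install python, as in the multi-stage example above
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
# Copy the prebuilt jar and the script straight from the host
COPY build/some.jar some.jar
COPY some-file.py some-file.py
# Start the java app, which in turn launches the python script
CMD ["java", "-jar", "/app/some.jar"]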
So I am learning Docker for the first time and was wondering if I have set this up in the correct format for my Flask app. A lot of the documentation online for the WORKDIR command changes the directory into "/app", however my main file to run the app is run.py, which is in the same directory as the actual Dockerfile. However, WORKDIR doesn't let me do "WORKDIR ." to use the current directory.
Can someone clarify whether I have my Dockerfile set up correctly?
(I also plan to host this on Heroku if that matters)
File structure
Dockerfile
# start by pulling the python image
FROM python:3.8-alpine
# copy the requirements file into the image
COPY ./requirements.txt /requirements.txt
# Don't need to switch working directory
# WORKDIR
# install the dependencies and packages in the requirements file
RUN pip install -r requirements.txt
# copy every content from the local file to the image
COPY . /app
# configure the container to run in an executed manner
ENTRYPOINT [ "python" ]
CMD ["run.py" ]
This question has been asked before, yet after reviewing the answers I am still not able to apply the solution.
I am still new to Docker, and after watching tutorials and following articles I was able to create a Dockerfile for an existing GitHub repository.
I started by using the nearest available image as a base and then adding what I need.
From what I read, the problem is in the WORKDIR and CMD commands.
This is the error message:
python: can't open file 'save_model.py': [Errno 2] No such file or directory
This is my Dockerfile:
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY /home/pc/Desktop/yolo4_deep .
# command to run on container start
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
src/
  save_model.py
  object_tracker.py
  ...
requirements.txt
Dockerfile
I tried the WORKDIR command to set the absolute path, WORKDIR /home/pc/Desktop/yolo4_Deep_sort_nojupitor, but the result was the same error.
I see multiple issues in your Dockerfile.
COPY /home/pc/Desktop/yolo4_deep .
The COPY command copies files from your local machine to the container. The path on your local machine must be path relative to your build context. The build context is the path you pass in when you run docker build . — in this case the . (the current directory) is the build context. Also the local machine path can only reference files located under the build context — i.e. paths containing .. (parent directory) or / (root directory) are not allowed.
WORKDIR app
WORKDIR sets the path inside the container not on your local machine. So WORKDIR /app means that all commands — RUN, CMD, ENTRYPOINT — will be executed from the /app directory.
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
As mentioned above WORKDIR /app causes all operations to be executed from the /app directory. So ./app/save_model.py is actually translated as /app/app/save_model.py.
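Putting those three points together, a corrected Dockerfile could look roughly like this (a sketch that assumes the build context is the project root shown above and that save_model.py is the script to start):
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
# all following relative paths refer to /app inside the image
WORKDIR /app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the scripts from the src/ directory of the build context into /app
COPY src/ .
# the working directory is already /app, so reference the script directly
CMD ["python", "save_model.py"]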
Thanks for the help, everyone.
As I mentioned earlier, I'm a beginner in the Docker world. I solved the issue by editing the COPY command.
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /home/pc/Desktop/yolo4_deep
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
ENTRYPOINT ["./start.sh"]
I tried to copy some files from source to destination (a Flask app) in a Dockerfile, but things are not working as expected when building the image, with the last two lines showing:
Step 3 : COPY pkl_objects/* /home/jovyan/work/movieclassifier/pkl_objects/
No source files were specified
This is the Dockerfile:
FROM jupyter/datascience-notebook
RUN pip install flask flask-wtf
COPY pkl_objects/* /home/jovyan/work/movieclassifier/pkl_objects/
COPY static/* /home/jovyan/work/movieclassifier/static/
COPY templates/* /home/jovyan/work/movieclassifier/templates/
COPY app.py /home/jovyan/work/movieclassifier
COPY reviews.sqlite /home/jovyan/work/movieclassifier
COPY vectorizer.py /home/jovyan/work/movieclassifier
WORKDIR /home/jovyan/work/movieclassifier
ENV FLASK_APP=app.py
# ENV FLASK_DEBUG=0
CMD ["flask", "run", "--host=0.0.0.0"]
Looks like there are no files in the pkl_objects folder, and when the wildcard (*) is expanded, it results in no source files being specified.
Maybe you could add an empty file in there, so that the wildcard always picks up at least one source file.
An example filename could be .nonempty or something like that.
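A minimal sketch of that workaround: create the placeholder on the host before building (for example with touch pkl_objects/.nonempty); the existing COPY line then always has something to match:
# pkl_objects/ now contains at least .nonempty, so the wildcard matches at least one file
COPY pkl_objects/* /home/jovyan/work/movieclassifier/pkl_objects/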