When I create an image from the following Dockerfile, the command poetry run python manage.py setstaticpages is skipped (and thus not run). Why is this happening?
NOTE: I've tried running the aforementioned command from inside a shell, and it worked perfectly. However, I need it to be executed when the container is built.
# Define Image
FROM python:3.8
# Set Environment Variable
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# Making source and static directory
RUN mkdir /src
RUN mkdir /static
# Creating Work Directory
WORKDIR /src
# Update pip
RUN pip install --upgrade pip
COPY ./src/poetry.lock /scripts/
COPY ./src/pyproject.toml /scripts/
RUN pip install poetry
CMD ["sh", "-c", "poetry install; poetry run python manage.py collectstatic --no-input; poetry run python manage.py migrate;", "poetry run python manage.py setstaticpages;", "poetry run gunicorn -w 4 -t 180 -b 0.0.0.0:8000 backend.wsgi:application"]
In that CMD, only the string immediately after sh -c is executed; the remaining array elements are passed to the shell as positional parameters ($0, $1, …), so the setstaticpages and gunicorn strings are silently ignored. For chaining commands, either use a shell script and call that in CMD, or chain the commands inside the shell with &&:
CMD [ "sh" , "-c" , "whatever && whatever_next && whatever_last" ]
or
ADD docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
CMD ["/docker-entrypoint.sh"]
#!/bin/bash
#docker-entrypoint.sh
whatever
whatever_next
whatever_last
This may not solve your problem, but this way you get proper signal handling by running in one process. If it still does not work, please add logs to your question.
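Applied to your Dockerfile, the entrypoint script could look roughly like this (a sketch only: it reuses the exact commands from your CMD and assumes manage.py lives in your WORKDIR):
#!/bin/sh
# docker-entrypoint.sh (sketch): run the one-off management commands,
# then exec gunicorn so it becomes PID 1 and receives signals directly
set -e
poetry install
poetry run python manage.py collectstatic --no-input
poetry run python manage.py migrate
poetry run python manage.py setstaticpages
exec poetry run gunicorn -w 4 -t 180 -b 0.0.0.0:8000 backend.wsgi:application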
I am working on a Dockerfile to be used with Google Cloud Run.
I can't get the command to run.
Here's the (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["gcloud", "compute", "ssh", "--internal-ip", "our-persist-cluster-py3-prod", "--zone=us-central1-b", "--project", "our-customer-tech-sem-prod", "--", "'ps -ef'", "|", "./checker2.py"]
This tries to run the CMD at the end, but says it can't find the host specified. (Runs fine from the command line outside Docker.)
There were a couple of things wrong at the end: (1) a typo in the host name (fixed with the help of a colleague), and (2) I had to turn the CMD into a shell script to get the pipe inside it to work correctly, since in exec form the | is passed to gcloud as a literal argument rather than being interpreted by a shell.
Here's the final (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
RUN mkdir /secrets
COPY secrets/* /secrets
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["./rungcloud.sh"]
I work on a project that has a large number of Java SpringBoot services (and other types) running in k8s clusters. Each service has a small start script that executes a more complex script that is provided in a configmap. This all works fine in builds and at runtime.
I need to make some changes to that complex script. I've already made the changes and tested the concept in an isolated script. I still need to do more testing of it. I am attempting to take some of the command lines that run in our Linux build system and run them on my VirtualBox Ubuntu VM that runs on my Windows 10 laptop. Although I am running this on the VM, most of the files were created and written on the host Windows 10 laptop that I get to using a VirtualBox Shared Folder.
When I look at the "ls -l" output of "startService.sh", I just get this:
-rwxrwx--- 1 root vboxsf 634 Aug 24 15:07 startService.sh*
Note that I am running docker with my own uid, and I have that uid in the "vboxsf" group.
It seems like when the file gets copied into the image, either the owner or the perms get changed in a way that make it inaccessible from within the container.
I tried adding a "RUN chmod 777 startService.sh" in the Dockerfile, just before the ENTRYPOINT, but that fails at build time with this:
Step 23/26 : RUN chmod 777 startService.sh
---> Running in 6dbb89c930c1
chmod: startService.sh: Operation not permitted
The command '/bin/sh -c chmod 777 startService.sh' returned a non-zero code: 1
I don't know why this is happening, or whether there is something that might mitigate it.
My "docker build" command looks like it went fine. I saw it execute all the steps that the normal build shows. The "docker run" step seemed to go fine, but it finished very quickly. When I looked at the "docker log" for the container, it just said entirely:
/bin/sh: ./startService.sh: Permission denied
Note that everything here is done the same way it is on the build server. There seems to be something funny with the fact that I'm running on an Ubuntu VM with the files on a VirtualBox shared folder.
You have to run chmod +x startService.sh before docker run or docker-compose up -d --build, i.e. on the host, since a vboxsf shared folder generally does not allow changing permissions from inside the guest.
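For example, on the host, from the build directory (the image tag is just a placeholder):
chmod +x startService.sh
docker build -t myservice .    # or: docker-compose up -d --build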
Here is an example Dockerfile for Django. Pay attention to the steps involving wait-for; you should do the same (a usage sketch follows the Dockerfile):
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-slim as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
python3-dev musl-dev libffi-dev \
&& pip install psycopg2
# upgrade pip
RUN pip install --upgrade pip
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# copy project
COPY . .
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-slim
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
RUN mkdir $APP_HOME/currencies
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev bash netcat rabbitmq-server
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
COPY wait-for /bin/wait-for
COPY /log /var/log
COPY /run /var/run
RUN pip install --no-cache /wheels/*
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chown -R app:app /var/log/
RUN chown -R app:app /var/run/
EXPOSE 3000
# change to the app user
USER app
# only for Django
CMD ["gunicorn", "Config.asgi:application", "--bind", "0.0.0.0:8000", "--workers", "3", "-k","uvicorn.workers.UvicornWorker","--log-file","-"]
I have this Dockerfile
ARG FUNCTION_DIR="/opt/"
FROM node:10.13-alpine@sha256:22c8219b21f86dfd7398ce1f62c48a022fecdcf0ad7bf3b0681131bd04a023a2 AS BUILD_IMAGE
ARG FUNCTION_DIR
RUN apk --update add cmake autoconf automake libtool binutils libexecinfo-dev python2 gcc make g++ zlib-dev
ENV NODE_ENV=production
ENV PYTHON=/usr/bin/python2
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json yarn.lock ./
RUN yarn --frozen-lockfile
RUN npm prune --production
RUN yarn cache clean
RUN npm cache clean --force
FROM node:10.13-alpine@sha256:22c8219b21f86dfd7398ce1f62c48a022fecdcf0ad7bf3b0681131bd04a023a2
ARG FUNCTION_DIR
ENV NODE_ENV=production
ENV NODE_OPTIONS=--max_old_space_size=4096
RUN apk update \
&& apk upgrade \
&& apk add mongodb-tools fontconfig dumb-init \
&& rm -rf /var/cache/apk/*
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY --from=BUILD_IMAGE ${FUNCTION_DIR}/node_modules ./node_modules
COPY . .
RUN if [ -f core/config/local.js ]; then rm core/config/local.js; fi
RUN cp core/config/local.js.aws.readonly core/config/local.js
USER node
EXPOSE 8080
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["node", "app.js", "--app=search", "--env=production"]
I use this Dockerfile to generate an image (called core-a) that runs our application in K8s. I've added some code to my application to handle the case where it is launched from a Lambda function, and I've created another Dockerfile like the one above, but with a custom ENTRYPOINT and CMD set to these values:
ENTRYPOINT [ "/usr/local/bin/npx", "aws-lambda-ric" ]
CMD [ "apps/search/index.handler" ]
Then I deployed this image, called core-b, to ECR, used core-b as the Docker image for a Lambda function, and everything worked as expected.
After that I thought I could use the ability to override ENTRYPOINT and CMD to run the same Docker image in both environments, so I pointed the Lambda function's image at core-a and supplied the ENTRYPOINT and CMD values I had used in the core-b Dockerfile, but doing so I get an error:
Couldn't find valid bootstrap(s): [\"/usr/local/bin/npx\"]
Does anyone have any suggestions?
Try removing the quotation marks (" ") when entering the override values in the Lambda console's web form.
The AWS docs unfortunately contain an incorrect note saying to use quotation marks around each string.
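Based on the values above, the override fields would then be filled in bare (a sketch; the console treats each field as a comma-separated list):
ENTRYPOINT override: /usr/local/bin/npx, aws-lambda-ric
CMD override: apps/search/index.handler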
I'm trying to execute a shell file that runs a Python script, but I don't know why I get the following error.
File directory structure (/home/kwstat/workplace/analysis/report_me):
home
└── kwstat
    └── workplace
        └── analysis
            └── report_me
                ├── report_me.sh
                └── python_file
                    ├── python_code.py
                    └── ...
$ docker exec -it test /bin/bash -c "source /home/kwstat/workplace/analysis/report_me/report_me.sh"
# Error
/home/kwstat/workplace/analysis/report_me/report_me.sh: line 30: source: /usr/local/bin/python: cannot execute binary file
I tried several things in the Dockerfile, but the same error occurred.
# 1.CMD ["/bin/bash","-l","-c"]
CMD ["/bin/bash","-l","-c"]
# 2. CMD bin/bash
CMD bin/bash
#########My Dockerfile#############
FROM continuumio/miniconda3
# System packages
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y subversion
WORKDIR /home/kwstat/workplace/analysis/report_me
COPY environments.yml /home/kwstat/workplace/analysis/report_me/environments.yml
RUN conda env create -f /home/kwstat/workplace/analysis/report_me/environments.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
RUN echo "conda activate my_env" >> ~/.profile
# Activate the environment, and make sure it's activated:
#RUN echo "Make sure flask is installed:"
COPY requirements.txt /home/kwstat/me_report_dockerfile/requirements.txt
RUN pip install -r /home/kwstat/me_report_dockerfile/requirements.txt
WORKDIR /home/kwstat/workplace/analysis/report_me/python_file
COPY python_file/ /home/kwstat/workplace/analysis/report_me/python_file
WORKDIR /home/kwstat/workplace/analysis/report_me/
COPY report_me.sh ./report_me.sh
RUN chmod +x report_me.sh
CMD ["/bin/bash"]
Please, any help would be appreciated ~
My problem came from the shell script. Inside the script I set the conda env path, and that solved everything.
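For anyone hitting the same error: a sketch of what that fix could look like at the top of report_me.sh (/opt/conda is where the miniconda3 image installs conda; the env name myenv is taken from the Dockerfile's SHELL line):
#!/bin/bash
# put the conda env's bin directory first on PATH instead of trying to
# `source` the python binary itself (a binary cannot be sourced)
export PATH=/opt/conda/envs/myenv/bin:$PATH
python python_file/python_code.py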
I have the following Dockerfile.
MAINTAINER Your Name "youremail@domain.tld"
RUN apt-get update -y && \
apt-get install -y python-pip python-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
There is one file that I want to run even before app.py runs. How can I achieve that? I don't want to put the code inside app.py.
One solution could be to use a docker-entrypoint.sh script. Basically that entrypoint would allow you to define a set of commands to run to initialize your program.
For example, I could create the following docker-entrypoint.sh:
#!/bin/bash
set -e
if [ "$1" = 'app' ]; then
sh /run-my-other-file.sh
exec python app.py
fi
exec "$#"
And I would use it like so in my Dockerfile:
FROM alpine
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["app"]
There are a lot of articles and examples online about Docker entrypoints. Give it a quick search; I am sure you will find plenty of interesting examples used by famous production-grade containers.
You could try creating a bash script called run.sh that runs your other file first and then app.py (your_other_file.py is a stand-in for the file you want to run first):
#!/bin/bash
python your_other_file.py
exec python app.py
Then change the ENTRYPOINT and CMD so the script is executed directly rather than being passed to python:
ENTRYPOINT [ "/bin/bash" ]
CMD [ "run.sh" ]
Also make sure the permissions are executable for run.sh (chmod +x run.sh).