I'm currently having an issue when I try to run pip as a non-root user in my Dockerfile. When it starts to build the wheels, it gives warnings like WARNING: The script foo is installed in '/home/myuser/.local/bin' which is not on PATH. for pep8, sqlformat, isort, gunicorn, django-admin, epylint, pylint, pyreverse, and symilar.
With the current code, the build fails with the error COPY failed: stat usr/src/app/wheels: file does not exist when it reaches COPY --from=builder /usr/src/app/wheels /wheels, although it was already failing before I took --wheel-dir /usr/src/app/wheels out of line 35.
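From what I've read, the PATH warning itself only means that pip install --user puts scripts in ~/.local/bin, which is not on PATH at the moment of installation; moving the ENV PATH line above the pip install should silence it (a sketch of just that reordering):
ENV PATH="/home/myuser/.local/bin:${PATH}"
RUN pip install --user -r requirements.txt
Here is the full Dockerfile: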
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.4-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# lint
RUN pip install --upgrade pip
RUN pip install flake8==3.9.2
COPY . .
RUN flake8 --ignore=E501,F401 .
# install dependencies
RUN adduser -D myuser
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser requirements.txt requirements.txt
RUN pip install --user -r requirements.txt
ENV PATH="/home/myuser/.local/bin:${PATH}"
COPY --chown=myuser:myuser . .
WORKDIR /usr/src/app
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.4-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
RUN mkdir $APP_HOME/mediafiles
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache /wheels/*
# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh .
RUN sed -i 's/\r$//g' $APP_HOME/entrypoint.prod.sh
RUN chmod +x $APP_HOME/entrypoint.prod.sh
# copy project
COPY . $APP_HOME
# copy media files
COPY ./media/ $APP_HOME/mediafiles
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
I've tried tinkering with the WORKDIR, ENV PATH, and RUN pip wheel.
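One possible fix (a sketch, assuming the wheels only need to exist somewhere the final stage can copy from): build the wheels while the stage is still running as root, before any USER switch, so /usr/src/app/wheels can actually be created:
# Sketch: build wheels as root first; switch to an unprivileged user afterwards if needed
FROM python:3.9.4-alpine as builder
WORKDIR /usr/src/app
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
With the wheels built as root, the COPY --from=builder /usr/src/app/wheels /wheels step has a directory to stat.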
My problem is that I am trying to use Cloud Build with a repository containing a Dockerfile that needs Docker BuildKit to build correctly, but Cloud Build does not allow its use. Here is the code:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]
I tried to modify the code using AI, as I'm not very good at Docker, but it didn't work.
I think the BuildKit --mount options are mainly there for performance; you should be able to remove them and still build successfully.
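One way to check this locally before pushing to Cloud Build (a sketch; backstage-test is just a placeholder tag): build with the classic builder, so any remaining BuildKit-only syntax fails fast.
DOCKER_BUILDKIT=0 docker build -t backstage-test .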
Try:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]
I am looking for a way to make Docker images as secure as possible. Maybe there is a way to prevent the execution of any process other than the one specified in CMD?
For example, I run the image below, but if an attacker were able to get into the container, they could create a Python file and do almost anything with the data in the container.
I also see that wget is available, so the attacker could easily download their own tools and start exfiltrating data. Any tips are welcome.
FROM python:alpine3.16 as builder
RUN pip install --upgrade pip
WORKDIR /opt/working-files
COPY requirements.txt .
RUN python -m venv ./venv
ENV PATH="/opt/working-files/venv/bin:$PATH"
RUN pip install -r requirements.txt
RUN rm -rf requirements.txt && rm -rf ~/.cache/pip
COPY app/ ./app
FROM python:alpine3.16
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PATH="/opt/working-files/venv/bin:$PATH"
RUN addgroup -S 10033 && adduser -S 10033 -G 10033
USER 10033:10033
COPY --from=builder /opt /opt
WORKDIR /opt/working-files/app
CMD ["gunicorn", "-b", "0.0.0.0:8000", "-w", "1", "app:app"]
I am Dockerising my first Flask app, following an online guide, but I'm stuck on the following line in Dockerfile.prod.
RUN addgroup -S app && adduser -S app -G app
I get the error Option s is ambiguous (shell, system)
I came across this SO post and tried the accepted answer (pretty much the same):
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
With the same outcome. I have tried specifying --shell, --system, and --group instead, and then get the error addgroup: Only one or two names allowed.
No matter what I try, I get these errors:
Option s is ambiguous (shell, system)
Option g is ambiguous (gecos, gid, group)
I am on Windows (using Docker, not Docker Windows). Not sure if that's the issue, but I cannot find a solution.
Dockerfile.prod
FROM python:3.8.1-slim-buster as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install system dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends gcc
# lint
RUN pip install --upgrade pip
RUN pip install flake8
COPY . /usr/src/app/
RUN flake8 --ignore=E501,F401,E722 .
# install python dependencies
COPY ./requirements.txt .
# COPY requirements .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements/prod.txt
# pull official base image
FROM python:3.8.1-slim-buster
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends netcat
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
# COPY --from=builder /usr/src/app/requirements/common.txt .
# COPY --from=builder /usr/src/app/requirements/prod.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
I understand you would like to create a user and a group, both called "app", with the user a member of that group, and both as a system account/group.
That's possible using adduser alone:
RUN adduser --system --group app
Maybe this helps:
https://github.com/mozilla-services/Dockerflow/issues/36
Depending on the underlying distribution of the container, these options can be ambiguous. The options should use the full name format for readability.
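Applied to the original two commands, the long-option form would be (a sketch; these are Debian's addgroup/adduser, which matches the slim-buster base image):
RUN addgroup --system app && adduser --system --ingroup app app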
My current Dockerfile looks like the one below. I'm trying to use h2o as a base for my ML model service. h2o requires a JRE, so I'm forced to install the required packages for my Flask script on top of it. The image was as heavy as 1.8 GB, so I attempted a multi-stage build (script below).
# Original Dockerfile
FROM h2oai/h2o-open-source-k8s
MAINTAINER rajesh.r6r@gmail.com
USER root
WORKDIR /app
ADD . /app
RUN set -xe \
&& apt-get update -y \
&& apt-get install python-pip -y \
&& rm -rf /var/lib/apt/lists/* # remove the cached files
RUN pip install --upgrade pip
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5005
EXPOSE 54321
ENV NAME World
CMD ["python", "app.py"]
I attempted a multi-stage build as follows, but this only results in a Python image, skipping the h2o part. What am I missing?
# Multi-stage Dockerfile
FROM h2oai/h2o-open-source-k8s AS baseimage
FROM python:3.7-slim
USER root
WORKDIR /app
ADD . /app
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5005
EXPOSE 54321
ENV NAME World
CMD ["python", "app.py"]
Using Docker Toolbox, I've been trying for the past few days to update my container to run on Heroku.
I can't seem to update the code in the container.
Here are some of the things I've tried:
In the Dockerfile, changed COPY . /app to ADD . /app
Removed the docker machine and created a new VirtualBox one
`docker-machine rm default`
`docker-machine create --driver virtualbox default`
Built/ran the docker image
`docker build --no-cache -t appname .`
`docker run -it -p 8888:8080 appname`
Also tried docker build --no-cache .
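One thing worth checking (a sketch, assuming the recreated machine is named default): with Docker Toolbox the docker CLI only talks to whichever VM its environment points at, so a rebuild can silently target a different machine than the one running the old container.
eval $(docker-machine env default)
docker build --no-cache -t appname .
docker run -it -p 8888:8080 appname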
Dockerfile
FROM python:3.6
# create and set working directory
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# set default environment variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
ENV PORT 8080
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
COPY . /app
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install ta-lib
EXPOSE 8080
CMD gunicorn appname.wsgi:application --bind 0.0.0.0:$PORT
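If the goal is updating the container on Heroku itself, a rebuild alone is not enough; the image has to be pushed and released again (a sketch, assuming the Heroku CLI is installed and <appname> stands in for the real app name):
heroku container:login
heroku container:push web --app <appname>
heroku container:release web --app <appname>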