I am trying to build a Docker image against the Docker daemon inside Minikube, so the image is available to the cluster without pushing to a registry. When I build against my regular local Docker daemon, it builds successfully. But when I do the following:
eval $(minikube docker-env)
docker image build . -f packages/backend/Dockerfile --tag backstage
it fails with the following error:
Step 6/10 : RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
---> Running in 9caeb307b8b1
tar: packages: Cannot mkdir: Permission denied
tar: packages/app/package.json: Cannot open: No such file or directory
tar: packages: Cannot mkdir: Permission denied
tar: packages/backend/package.json: Cannot open: No such file or directory
tar: Exiting with failure status due to previous errors
The command '/bin/sh -c tar xzf skeleton.tar.gz && rm skeleton.tar.gz' returned a non-zero code: 2
Here is the Dockerfile I am using - it is the boilerplate Dockerfile used to build Backstage:
# This dockerfile builds an image for the backend package.
# It should be executed with the root of the repo as docker context.
#
# Before building this image, be sure to have run the following commands in the repo root:
#
# yarn install
# yarn tsc
# yarn build
#
# Once the commands have been run, you can build the image using `yarn build-image`
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
# RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# --mount=type=cache,target=/var/lib/apt,sharing=locked \
# apt-get update && \
# apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
# yarn config set python /usr/bin/python3
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
# Copy repo skeleton first, to avoid unnecessary docker cache invalidation.
# The skeleton contains the package.json of each package in the monorepo,
# and along with yarn.lock and the root package.json, that's enough to run yarn install.
COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --production --network-timeout 300000
# Then copy the rest of the backend bundle, along with any other files we might want.
COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
How can I get this image to build successfully for use in Minikube? I've also tried minikube image load to put the image into Minikube, but it just hangs.
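The comments in the Dockerfile above (around WORKDIR /app) point at the likely cause: without BuildKit, the app dir is created as root rather than node, and the tar step then fails exactly as shown. A minimal sketch, assuming the Docker daemon inside Minikube is recent enough to support BuildKit, is to force BuildKit on the client while still targeting that daemon:
eval $(minikube docker-env)
# check the version of the daemon you are now pointed at (BuildKit needs Docker 18.09+)
docker version
# build with BuildKit enabled so WORKDIR /app is created as the current USER (node)
DOCKER_BUILDKIT=1 docker image build . -f packages/backend/Dockerfile --tag backstage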
Related
I'm trying to create a container that installs one of my apps.
In this application, I have to do a composer install at the root but also in a sub-folder.
In my dockerfile I do this:
# Switch to non-root 'app' user & install app dependencies
COPY composer.json composer.lock ./
RUN chown -R $NON_ROOT_USER:$NON_ROOT_GROUP $LARAVEL_PATH
USER $NON_ROOT_USER
# Install composer dependencies in the base directory
RUN composer install --prefer-dist --no-scripts --no-dev --no-autoloader
# Here I want to go to the subfolder
RUN ls -la
RUN cd ./web/app/themes/sage
RUN composer install --prefer-dist --no-scripts --no-dev --no-autoloader
RUN rm -rf /home/$NON_ROOT_USER/.composer
The problem is, I'm getting the following error:
can't cd to ./web/app/themes/sage: No such file or directory
However, when I look at the build output, the RUN ls -la step shows the expected file structure, including the existing "web" folder.
How can I do this?
You can use WORKDIR to change the working directory. A cd inside a RUN instruction does not persist, because each RUN runs in its own shell, whereas WORKDIR applies to every instruction that follows. So replace RUN cd ./web/app/themes/sage with WORKDIR ./web/app/themes/sage (a relative path is resolved against the current working directory).
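A minimal sketch of how that part of the Dockerfile could look (assuming the working directory at that point is $LARAVEL_PATH, as in the question, so the relative path points at the same place the cd was aiming for):
# Each RUN starts a fresh shell, so a `cd` in one RUN does not carry over to the next instruction.
# WORKDIR persists for everything that follows; a relative path is resolved against the current working directory.
WORKDIR ./web/app/themes/sage
RUN composer install --prefer-dist --no-scripts --no-dev --no-autoloader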
I'm struggling to build a Docker image for a Phoenix 1.6 app with LiveView, deploying with releases.
Running with mix phx.server everything works fine, but with the Dockerfile the assets are not loaded: images and css/js files do not load.
The assets folder is copied with all of its files, and mix assets.deploy compiles the assets into the priv folder.
Here is the Dockerfile. I'm using esbuild to compile the assets.
ARG MIX_ENV="prod"
FROM hexpm/elixir:1.11.3-erlang-23.3.4.7-alpine-3.14.0 as build
# install build dependencies
RUN apk add --no-cache build-base git python3 curl
# prepare build dir
WORKDIR /app
# install hex + rebar
RUN mix local.hex --force && \
mix local.rebar --force
# set build ENV
ARG MIX_ENV
ENV MIX_ENV="${MIX_ENV}"
# install mix dependencies
COPY mix.exs mix.lock ./
RUN mix deps.get --only $MIX_ENV
RUN mkdir config
# copy compile-time config files before we compile dependencies
# to ensure any relevant config change will trigger the dependencies
# to be re-compiled.
COPY config/config.exs config/$MIX_ENV.exs config/
RUN mix deps.compile
COPY priv priv
# note: if your project uses a tool like https://purgecss.com/,
# which customizes asset compilation based on what it finds in
# your Elixir templates, you will need to move the asset compilation
# step down so that `lib` is available.
COPY assets assets
RUN mix assets.deploy
# compile and build the release
COPY lib lib
RUN mix compile
# changes to config/runtime.exs don't require recompiling the code
COPY config/runtime.exs config/
COPY rel rel
RUN PORT=4001 mix release
# prepare release image
FROM alpine:3.12.1 AS app
RUN apk add --no-cache libstdc++ openssl ncurses-libs
ARG MIX_ENV
ENV USER="elixir"
WORKDIR "/home/${USER}/app"
# Creates an unprivileged user to be used exclusively to run the Phoenix app
RUN \
addgroup \
-g 1000 \
-S "${USER}" \
&& adduser \
-s /bin/sh \
-u 1000 \
-G "${USER}" \
-h "/home/${USER}" \
-D "${USER}" \
&& su "${USER}"
# Everything from this line onwards will run in the context of the unprivileged user.
USER "${USER}"
COPY --from=build --chown="${USER}":"${USER}" /app/_build/"${MIX_ENV}"/rel/acompanhante ./
ENTRYPOINT ["bin/acompanhante"]
# Usage:
# * build: sudo docker image build -t elixir/my_app .
# * shell: sudo docker container run --rm -it --entrypoint "" -p 127.0.0.1:4000:4000 elixir/my_app sh
# * run: sudo docker container run --rm -it -p 127.0.0.1:4000:4000 --name my_app elixir/my_app
# * exec: sudo docker container exec -it my_app sh
# * logs: sudo docker container logs --follow --tail 100 my_app
CMD ["start"]
I work on a project that has a large number of Java SpringBoot services (and other types) running in k8s clusters. Each service has a small start script that executes a more complex script that is provided in a configmap. This all works fine in builds and at runtime.
I need to make some changes to that complex script. I've already made the changes and tested the concept in an isolated script. I still need to do more testing of it. I am attempting to take some of the command lines that run in our Linux build system and run them on my VirtualBox Ubuntu VM that runs on my Windows 10 laptop. Although I am running this on the VM, most of the files were created and written on the host Windows 10 laptop that I get to using a VirtualBox Shared Folder.
When I look at the "ls -l" output of "startService.sh", I just get this:
-rwxrwx--- 1 root vboxsf 634 Aug 24 15:07 startService.sh*
Note that I am running docker with my own uid, and I have that uid in the "vboxsf" group.
It seems like when the file gets copied into the image, either the owner or the perms get changed in a way that makes it inaccessible from within the container.
I tried adding a "RUN chmod 777 startService.sh" in the Dockerfile, just before the ENTRYPOINT, but that fails at build time with this:
Step 23/26 : RUN chmod 777 startService.sh
---> Running in 6dbb89c930c1
chmod: startService.sh: Operation not permitted
The command '/bin/sh -c chmod 777 startService.sh' returned a non-zero code: 1
I don't know why this is happening, or whether there is something that might mitigate it.
My "docker build" command looks like it went fine; I saw it execute all the steps that the normal build shows. The "docker run" step also seemed to go fine, but it finished very quickly. When I looked at the "docker logs" output for the container, the entire output was:
/bin/sh: ./startService.sh: Permission denied
Note that everything here is done the same way it is on the build server. There seems to be something funny with the fact that I'm running this on an Ubuntu VM, with the files coming from a VirtualBox shared folder.
You have to run chmod +x startService.sh on the host before docker run or docker-compose up -d --build, so the executable bit is already set when the file is copied into the image.
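For example, on the host where the image is built (a sketch; use whatever build command your project already uses):
# make the script executable in the build context before (re)building,
# so the executable bit is baked into the image when the file is copied in
chmod +x startService.sh
docker-compose up -d --build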
Here is an example Dockerfile for Django. Look at the steps involving wait-for; you need to do the same.
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-slim as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
python3-dev musl-dev libffi-dev \
&& pip install psycopg2
# lint
RUN pip install --upgrade pip
COPY . .
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# copy project
COPY . .
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-slim
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
RUN mkdir $APP_HOME/currencies
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev bash netcat rabbitmq-server
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
COPY wait-for /bin/wait-for
COPY /log /var/log
COPY /run /var/run
RUN pip install --no-cache /wheels/*
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chown -R app:app /var/log/
RUN chown -R app:app /var/run/
EXPOSE 3000
# change to the app user
USER app
# only for Django
CMD ["gunicorn", "Config.asgi:application", "--bind", "0.0.0.0:8000", "--workers", "3", "-k","uvicorn.workers.UvicornWorker","--log-file","-"]
I was trying to run:
docker build -f hol-light/Dockerfile_check_proofs --ulimit stack=1000000000 --tag check_proofs hol-light/
but I get the error:
Sending build context to Docker daemon 48.9MB
Step 1/16 : FROM ocaml/opam:ubuntu-16.04_ocaml-4.03.0
pull access denied for ocaml/opam, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Why?
The Dockerfile is: https://github.com/brain-research/hol-light/blob/master/Dockerfile_check_proofs
FROM ocaml/opam:ubuntu-16.04_ocaml-4.03.0
WORKDIR /home/opam/
SHELL ["/bin/bash", "-c"]
ENV PATH="/home/opam/.opam/4.03.0/bin:${PATH}"
### Install num
RUN opam install num
### Install campl5
RUN git clone --depth 1 -b rel617 https://github.com/camlp5/camlp5
RUN cd camlp5 &&\
./configure &&\
make world.opt &&\
make install &&\
# meta/Makefile in camlp5 skips these files which we need, so copy them
# manually.
cp {main/pcaml,main/quotation,etc/pa_reloc,meta/q_MLast}.{cmi,cmx,o} `camlp5 -where`
### Install grpc
RUN sudo apt-get update &&\
sudo apt-get install -y build-essential autoconf libtool pkg-config clang libc++-dev
RUN git clone -b 'v1.17.1' --recurse-submodule --depth 1 https://github.com/grpc/grpc
RUN sudo make -C grpc install-headers_c install-static_c install-pkg-config_c \
    install-headers_cxx install-static_cxx install-pkg-config_cxx \
    install-plugins
RUN sudo make -C grpc/third_party/protobuf install
### Install farmhash
RUN git clone --depth 1 https://github.com/google/farmhash &&\
cd farmhash &&\
./configure CXXFLAGS="-DNAMESPACE_FOR_HASH_FUNCTIONS=farmhash"
RUN sudo make -C farmhash install
### Build binaries
COPY --chown=opam:0 . src/
RUN make -C src check_proofs
CMD ["./src/check_proofs"]
crossposted:
https://forums.docker.com/t/why-cant-i-get-ocaml-opam-ubuntu-16-04-ocaml-4-03-0-docker-image/84351
https://hub.docker.com/r/ocaml/opam/ hasn't been updated for 2 years and says:
At some point in the future, the tags in this repository will be deleted
This deletion is currently in progress (in fact, it has been deleting for more than a week now).
The ocaml/opam (opam 1) images generally aren't useful now because they don't work with the current opam-repository.
There are two alternatives you can use:
ocaml/opam2 contains opam 2 images. e.g. ocaml/opam2:ubuntu-16.04-ocaml-4.03
ocurrent/opam is also opam 2, but contains much smaller images (with only one version of the compiler per image). e.g. ocurrent/opam:ubuntu-16.04-ocaml-4.03
However, this repository is only temporary. It will replace ocaml/opam once Hub finishes deleting that...
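Concretely, that means changing the first FROM line of Dockerfile_check_proofs to one of the two alternatives above, for example (a sketch; later parts of the file, such as the hard-coded /home/opam/.opam/4.03.0/bin path, may also need adjusting for opam 2):
FROM ocaml/opam2:ubuntu-16.04-ocaml-4.03
# or the smaller single-compiler image:
# FROM ocurrent/opam:ubuntu-16.04-ocaml-4.03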
I'm working on a Dockerfile with a multi-stage build. The general idea is to build the binary for the backend, build the JavaScript bundle for the frontend, and then put these two things in a final container for the app.
Here's the Dockerfile:
# go binary
FROM golang:alpine as build-go
RUN apk --no-cache add git bzr mercurial
ENV D=/go/src/github.com/tamuhack-org/quack
RUN go get -d -v golang.org/x/net/html
RUN go get -d -v github.com/gorilla/handlers
RUN go get -d -v github.com/gorilla/mux
COPY ./main.go $D/main.go
COPY ./frontend/dist $D/frontend/dist
RUN rm -rf $D/frontend/dist/index.html
RUN rm -rf $D/frontend/dist/index.js
RUN cd $D && go build -o main && cp main /tmp/
# ui
FROM node:alpine AS build-node
RUN mkdir -p /src/ui
COPY ./frontend/package.json /src/ui/
RUN cd /src/ui && yarn install
COPY ./frontend /src/ui
# Replace the dev instance of index.html with the prod version.
RUN rm -rf /src/ui/dist/index.html
RUN mv /src/ui/dist/index-prod.html /src/ui/dist/index.html
RUN cd /src/ui && yarn build
# final
FROM alpine
RUN apk --no-cache add ca-certificates
WORKDIR /app/server/
COPY --from=build-go /tmp/main /app/server/
COPY --from=build-node /src/ui/dist /app/server/frontend/dist
EXPOSE 8080
CMD ["./main"]
What I've noticed is that when I update the frontend source code and build the Docker container, the new version of the container doesn't pick up the new bundle. Are there any obvious errors in the Dockerfile that might explain why I'm not seeing the file changes? If I run yarn build locally, the bundle is correct, but the Docker container seems to be caching an older version. Thoughts?
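As a first check (hedged diagnostics rather than a definitive fix; the quack tag is just a placeholder), rebuild without the cache and, with BuildKit, print exactly which steps were served from cache:
docker build --no-cache -t quack .
DOCKER_BUILDKIT=1 docker build --progress=plain -t quack .
If the --no-cache build produces the correct bundle, the stale output is coming from a cached layer rather than from the build steps themselves.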