I'm struggling to build a Docker image for a Phoenix 1.6 app with LiveView, deployed with releases.
Running it with mix phx.server everything works fine, but with the Dockerfile the assets are not loaded: images and CSS/JS files all fail to load.
The assets folder is copied with all of its files, and mix assets.deploy compiles the assets into the priv folder.
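The digested files do end up in the release; I verified inside the image with a quick check like this (the elixir/my_app tag and the acompanhante app name come from the Dockerfile below):
docker run --rm -it --entrypoint "" elixir/my_app sh -c 'ls lib/acompanhante-*/priv/static'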
Here is the Dockerfile; I'm using esbuild to compile the assets.
ARG MIX_ENV="prod"
FROM hexpm/elixir:1.11.3-erlang-23.3.4.7-alpine-3.14.0 as build
# install build dependencies
RUN apk add --no-cache build-base git python3 curl
# prepare build dir
WORKDIR /app
# install hex + rebar
RUN mix local.hex --force && \
    mix local.rebar --force
# set build ENV
ARG MIX_ENV
ENV MIX_ENV="${MIX_ENV}"
# install mix dependencies
COPY mix.exs mix.lock ./
RUN mix deps.get --only $MIX_ENV
RUN mkdir config
# copy compile-time config files before we compile dependencies
# to ensure any relevant config change will trigger the dependencies
# to be re-compiled.
COPY config/config.exs config/$MIX_ENV.exs config/
RUN mix deps.compile
COPY priv priv
# note: if your project uses a tool like https://purgecss.com/,
# which customizes asset compilation based on what it finds in
# your Elixir templates, you will need to move the asset compilation
# step down so that `lib` is available.
COPY assets assets
RUN mix assets.deploy
# compile and build the release
COPY lib lib
RUN mix compile
# changes to config/runtime.exs don't require recompiling the code
COPY config/runtime.exs config/
COPY rel rel
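# note: PORT here only affects the build environment; a standard runtime.exs reads PORT when the release boots (assumption: default generated config)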
RUN PORT=4001 mix release
# prepare release image
FROM alpine:3.12.1 AS app
RUN apk add --no-cache libstdc++ openssl ncurses-libs
ARG MIX_ENV
ENV USER="elixir"
WORKDIR "/home/${USER}/app"
# Creates an unprivileged user to be used exclusively to run the Phoenix app
RUN addgroup \
      -g 1000 \
      -S "${USER}" \
    && adduser \
      -s /bin/sh \
      -u 1000 \
      -G "${USER}" \
      -h "/home/${USER}" \
      -D "${USER}"
# Everything from this line onwards will run in the context of the unprivileged user.
USER "${USER}"
COPY --from=build --chown="${USER}":"${USER}" /app/_build/"${MIX_ENV}"/rel/acompanhante ./
ENTRYPOINT ["bin/acompanhante"]
# Usage:
# * build: sudo docker image build -t elixir/my_app .
# * shell: sudo docker container run --rm -it --entrypoint "" -p 127.0.0.1:4000:4000 elixir/my_app sh
# * run: sudo docker container run --rm -it -p 127.0.0.1:4000:4000 --name my_app elixir/my_app
# * exec: sudo docker container exec -it my_app sh
# * logs: sudo docker container logs --follow --tail 100 my_app
CMD ["start"]
Related
I am trying to build a Docker image using the Minikube Docker daemon as the target. When I do not have Minikube set as the target, it builds successfully. When I do the following:
eval $(minikube docker-env)
docker image build . -f packages/backend/Dockerfile --tag backstage
it fails with the following error:
Step 6/10 : RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
---> Running in 9caeb307b8b1
tar: packages: Cannot mkdir: Permission denied
tar: packages/app/package.json: Cannot open: No such file or directory
tar: packages: Cannot mkdir: Permission denied
tar: packages/backend/package.json: Cannot open: No such file or directory
tar: Exiting with failure status due to previous errors
The command '/bin/sh -c tar xzf skeleton.tar.gz && rm skeleton.tar.gz' returned a non-zero code: 2
Here is the Dockerfile I am using - it is the boilerplate Dockerfile used to build Backstage:
# This dockerfile builds an image for the backend package.
# It should be executed with the root of the repo as docker context.
#
# Before building this image, be sure to have run the following commands in the repo root:
#
# yarn install
# yarn tsc
# yarn build
#
# Once the commands have been run, you can build the image using `yarn build-image`
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
# RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# --mount=type=cache,target=/var/lib/apt,sharing=locked \
# apt-get update && \
# apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
# yarn config set python /usr/bin/python3
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
# Copy repo skeleton first, to avoid unnecessary docker cache invalidation.
# The skeleton contains the package.json of each package in the monorepo,
# and along with yarn.lock and the root package.json, that's enough to run yarn install.
COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn install --frozen-lockfile --production --network-timeout 300000
# Then copy the rest of the backend bundle, along with any other files we might want.
COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
How can I get this image to build successfully for use in Minikube? I've also tried minikube image load to put the image into Minikube, but it just hangs.
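For reference, the Dockerfile's own comment suggests the tar step fails exactly when BuildKit is off, so the build against the Minikube daemon would presumably need it enabled explicitly (untested on my side):
eval $(minikube docker-env)
DOCKER_BUILDKIT=1 docker image build . -f packages/backend/Dockerfile --tag backstage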
I have a Dockerfile with some commands I would like to use conditionally:
FROM + image name (I have an M1 MacBook, so I need to add --platform=linux/amd64 to it, but I want to deploy to an AWS EC2 Linux instance that doesn't need it)
In production I would like to run my project behind nginx, so I want the Dockerfile to end with RUN mkdir -p tmp/sockets. But for testing I have no need for nginx, so I would like my Dockerfile to end with this:
# Expose port
EXPOSE 3000
# Start rails
CMD ["rails", "server", "-b", "0.0.0.0"]
I thought of using a multi-stage Dockerfile to solve the FROM image problem, but the resulting Dockerfile is quite lengthy, since the two stages are basically identical except for the FROM line.
For the nginx part I wanted to use a shell script, but I am not sure how a script could express the exposed port and the final command that starts Rails.
These are the files:
run_dockerfile.sh
#!/bin/bash
if [ "${RUN_DOCKERFILE}" = "PROD" ]; then
  mkdir -p tmp/sockets
else
  ????
fi
My Dockerfile looks like this:
# Start from the official ruby image
# To run Dockerfile with arm64 architecture (M1 chip MacOS for example)
FROM --platform=linux/amd64 ruby:2.6.6 AS ARM64
# Set environment
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set RAILS_ENV to 'development' or set to null otherwise.
ENV RAILS_ENV=${BUILD_DEVELOPMENT:+development}
# if RAILS_ENV is null, set it to 'production' (or leave as is otherwise).
ENV RAILS_ENV=${RAILS_ENV:-production}
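# e.g. `docker build --build-arg BUILD_DEVELOPMENT=1 .` yields RAILS_ENV=development; a plain `docker build .` yields RAILS_ENV=production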
# Update and install JS & DB
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
# Create a directory for the application and use it
RUN mkdir /myapp
WORKDIR /myapp
# Gemfile and lock file need to be present, they'll be overwritten immediately
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
# Install gem dependencies
RUN gem install bundler:2.2.32
RUN bundle install
RUN curl https://deb.nodesource.com/setup_12.x | bash
ADD https://dl.yarnpkg.com/debian/pubkey.gpg /tmp/yarn-pubkey.gpg
RUN apt-key add /tmp/yarn-pubkey.gpg && rm /tmp/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y yarn && apt-get install -y npm
RUN yarn add bootstrap
COPY . /myapp
# So that webpacker compiles
RUN yarn config set ignore-engines true
RUN rm -rf bin/webpack*
RUN rails webpacker:install
RUN bundle exec rails webpacker:compile
RUN bundle exec rake assets:precompile
# This script runs every time the container is created, necessary for rails
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Run run_dockerfile.sh
COPY run_dockerfile.sh run_dockerfile.sh
RUN chmod u+x run_dockerfile.sh && ./run_dockerfile.sh
##################################################
# Start from the official ruby image
# To run Dockerfile without arm64 architecture
FROM ruby:2.6.6 AS AMD64
# Set environment
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set RAILS_ENV to 'development' or set to null otherwise.
ENV RAILS_ENV=${BUILD_DEVELOPMENT:+development}
# if RAILS_ENV is null, set it to 'production' (or leave as is otherwise).
ENV RAILS_ENV=${RAILS_ENV:-production}
# Update and install JS & DB
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
# Create a directory for the application and use it
RUN mkdir /myapp
WORKDIR /myapp
# Gemfile and lock file need to be present, they'll be overwritten immediately
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
# Install gem dependencies
RUN gem install bundler:2.2.32
RUN bundle install
RUN curl https://deb.nodesource.com/setup_12.x | bash
ADD https://dl.yarnpkg.com/debian/pubkey.gpg /tmp/yarn-pubkey.gpg
RUN apt-key add /tmp/yarn-pubkey.gpg && rm /tmp/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y yarn && apt-get install -y npm
RUN yarn add bootstrap
COPY . /myapp
# So that webpacker compiles
RUN yarn config set ignore-engines true
RUN rm -rf bin/webpack*
RUN rails webpacker:install
RUN bundle exec rails webpacker:compile
RUN bundle exec rake assets:precompile
# This script runs every time the container is created, necessary for rails
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Run run_dockerfile.sh
COPY run_dockerfile.sh run_dockerfile.sh
RUN chmod u+x run_dockerfile.sh && ./run_dockerfile.sh
Is there any way I could make the .sh approach work, or are there any recommendations on the proper way to do this? Thank you!
From the way you've described the problem, you don't really need very many special cases at all.
The one important detail is that it's very easy to override the image's CMD when you run a container. If you have two Compose files, for example, you can just set the service's command:
# docker-compose.yml
version: '3.8'
services:
  myapp:
    image: registry.example.com/myapp:${MYAPP_TAG:-latest}
    ports: ['3000:80']

# docker-compose.override.yml
# for developer use
version: '3.8'
services:
  myapp:
    build: .
    command: rails server -b 0.0.0.0 -p 80
The other variations you list shouldn't matter. You should get consistent results if you build your image FROM --platform=linux/amd64 on an x86-64 host, since that just names the native platform explicitly; a RUN mkdir for a directory you never use is harmless. The one real inconsistency seems to be the container port, but you can explicitly tell rails server which port to listen on so it matches. I'd use the same image in all environments.
FROM --platform=linux/amd64 ruby:2.6.6 # even on an Intel/AMD host system
...
RUN mkdir tmp/sockets # even if it's unused
CMD ["nginx", "-g", "daemon off;"] # can be overridden when the container runs
I wrote a Dockerfile with a Gradle installation inside it. Inside the image, gradle -v shows the Gradle version, but when I run gradle -v in the Execute Shell step of a Jenkins job, the build fails with gradle: not found.
This is the Gradle installation in the Dockerfile:
# Install gradle
RUN cd /usr/lib \
    && wget https://downloads.gradle.org/distributions/gradle-3.4.1-bin.zip -O gradle-3.4.1-bin.zip \
    && unzip "gradle-3.4.1-bin.zip" \
    && ln -s "/usr/lib/gradle-3.4.1/bin/gradle" /usr/bin/gradle \
    && rm "gradle-3.4.1-bin.zip"
# Env set up
ENV GRADLE_HOME=/usr/lib/gradle-3.4.1
#ENV PATH=$PATH:$GRADLE_HOME/bin:$PATH
ENV PATH=$PATH:$GRADLE_HOME/bin JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
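A build-time sanity check can also fail the image build early if the install is broken (my addition, not part of the original file):
# Fails the build if gradle is not resolvable on PATH
RUN command -v gradle && gradle -v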
Try this, it works for me:
# Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN ./gradlew build
ENTRYPOINT ["java","-jar","./build/libs/app-0.1.0.jar"]
I am trying to build a Jenkins Docker image from the official Jenkins git repo:
https://github.com/jenkinsci/docker
But when I try to run a container from the image using docker run -it -dP jenkins, it exits immediately, and when I check the docker logs, I get the following error:
: invalid option
I read that the error could occur because tini's PID is not 1. I looked at the documentation and saw that either of the following should solve the issue:
Passing the -s argument to Tini (tini -s -- ...)
Setting the environment variable TINI_SUBREAPER (e.g. export TINI_SUBREAPER=).
But neither solved anything.
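For the first suggestion, that meant changing the ENTRYPOINT like this (shown as I tried it; the Dockerfile below is back to its original form):
ENTRYPOINT ["/bin/tini", "-s", "--", "/usr/local/bin/jenkins.sh"]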
The following is an exact copy of the Dockerfile, built with docker build -t jenkins .:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT ${agent_port}
ENV TINI_SUBREAPER=
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
    && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
ENV TINI_VERSION 0.14.0
ENV TINI_SHA 6c41ec7d33e857d4779f14d9c74924cab0c7973485d2972419a3b7c7620ff5fd
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static-amd64 -o /bin/tini && chmod +x /bin/tini \
    && echo "$TINI_SHA /bin/tini" | sha256sum -c -
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.60.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=34fde424dde0e050738f5ad1e316d54f741c237bd380bd663a07f96147bb1390
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -k -o /usr/share/jenkins/jenkins.war \
    && echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha256sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
The problem was with the Docker version: mine was old. I'm not sure exactly which instruction was unsupported, but after upgrading, the new Docker built the Dockerfile fine.
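If you hit the same thing, comparing the client and daemon versions is a quick first check:
docker version --format 'client={{.Client.Version}} server={{.Server.Version}}'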
I am trying to set up a customised Jenkins 2 server from a Dockerfile.
I use the official image, and I want to be able to add the things I need, like custom jobs and an admin user.
This is my dockerfile so far:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
    && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
    && echo "$TINI_SHA /bin/tini" | sha1sum -c -
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.2}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=32b8bd1a86d6d4a91889bd38fb665db4090db081
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
    && echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE 8080
# will be used by attached slave agents:
EXPOSE 50000
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.txt /usr/share/jenkins/plugins.txt
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
# Add the command line tools
COPY jenkins-cli.jar "$JENKINS_HOME"
# Create jobs
ARG job_name_1="my_super_job"
#ARG job_name_2="my_ultra_job"
# create the jobs folder recursively
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/workspace/
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastFailedBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastStableBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastSuccessfulBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastUnstableBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastUnsuccessfulBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/legacyIds
#RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_2}
## add the custom configs to the container
COPY ${job_name_1}_config.xml "$JENKINS_HOME"/jobs/${job_name_1}/config.xml
USER root
#RUN chmod 600 "$JENKINS_HOME"/jobs/${job_name_1}/config.xml
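# NOTE: this cannot work at build time -- no Jenkins server is listening on localhost:8080 yet (see my remark below)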
RUN java -jar /var/jenkins_home/jenkins-cli.jar -s http://localhost:8080 create-job my_super_job < /var/jenkins_home/jobs/my_super_job/config.xml
#COPY ${job_name_2}_config.xml "$JENKINS_HOME"/jobs/${job_name_2}/config.xml
# --Install plugins--
# Notice: Deprecated method which however works with a 'plugins.txt' file
#USER root
#RUN chmod 600 /usr/share/jenkins/plugins.txt
#RUN chmod 600 /usr/local/bin/install-plugins.sh
#RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
# Notice: Recommended method with open case on Github [https://github.com/jenkinsci/docker/issues/348]
# Notice: Select whichever plugins you want
#RUN /usr/local/bin/install-plugins.sh \
#dashboard-view:2.9.10 \
#pipeline-stage-view:2.2 \
#parameterized-trigger:2.32 \
#bitbucket:1.1.5 \
#git:3.0.0 \
#github:1.22.4
# --Install plugins--
I have tried to create a job at build time by first launching a container, creating the job manually, saving the config.xml file, and then copying it into the image from the Dockerfile, replicating the file/folder structure Jenkins creates for a job.
But it is not working: the job does not appear in Jenkins.
I also tried to use jenkins-cli.jar, but as I understand it, there must be a live Jenkins server to connect to and execute against, which is not the case at build time.
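Given the /usr/share/jenkins/ref/ mechanism described in the Dockerfile's own comments, I wonder whether the job should instead be bundled under the reference directory, something like this sketch (paths assumed from those comments):
# Sketch: bundle the job under ref/ so it is copied into JENKINS_HOME
# on first start, instead of writing into the volume at build time
RUN mkdir -p /usr/share/jenkins/ref/jobs/my_super_job
COPY my_super_job_config.xml /usr/share/jenkins/ref/jobs/my_super_job/config.xml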
Finally, I suppose creating an admin user at build time must be even more complicated than creating a job...
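The ref/init.groovy.d hook the Dockerfile already creates might cover that too: a Groovy script dropped there runs on first boot, so the Dockerfile side would only be this (create-admin.groovy is a hypothetical script I would still have to write):
# Hypothetical: create the admin user via an init script at first boot,
# rather than at image build time
COPY create-admin.groovy /usr/share/jenkins/ref/init.groovy.d/create-admin.groovy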
So, does anyone have experience with this?