How to create and add user with password in alpine Dockerfile? - docker

The following Dockerfile works fine for Ubuntu:
FROM ubuntu:20.04
SHELL ["/bin/bash", "-c"]
ARG user=hakond
ARG home=/home/$user
RUN useradd --create-home -s /bin/bash $user \
&& echo $user:ubuntu | chpasswd \
&& adduser $user sudo
WORKDIR $home
USER $user
COPY --chown=$user entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
where entrypoint.sh is
#! /bin/bash
exec bash
How can I do the same in Alpine? I tried:
FROM alpine:3.12
SHELL ["/bin/sh", "-c"]
RUN apk add --no-cache bash
ARG user=hakond
ARG home=/home/$user
RUN addgroup -S docker
RUN adduser \
--disabled-password \
--gecos "" \
--home $home \
--ingroup docker \
$user
WORKDIR $home
USER $user
COPY chown=$user entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
But this fails to build:
$ docker build -t alpine-user .
Sending build context to Docker daemon 5.12kB
Step 1/12 : FROM alpine:3.12
---> a24bb4013296
Step 2/12 : SHELL ["/bin/sh", "-c"]
---> Using cache
---> ce9a303c96c8
Step 3/12 : RUN apk add --no-cache bash
---> Running in e451a2481846
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ncurses-terminfo-base (6.2_p20200523-r0)
(2/4) Installing ncurses-libs (6.2_p20200523-r0)
(3/4) Installing readline (8.0.4-r0)
(4/4) Installing bash (5.0.17-r0)
Executing bash-5.0.17-r0.post-install
Executing busybox-1.31.1-r16.trigger
OK: 8 MiB in 18 packages
Removing intermediate container e451a2481846
---> 7b5f7f87bdf6
Step 4/12 : ARG user=hakond
---> Running in 846b4b12856e
Removing intermediate container 846b4b12856e
---> a0453cb6706e
Step 5/12 : ARG home=/home/$user
---> Running in 06550ad3f550
Removing intermediate container 06550ad3f550
---> 994d71fb0281
Step 6/12 : RUN addgroup -S docker
---> Running in 70aaec6f40e0
Removing intermediate container 70aaec6f40e0
---> 5188ed7b234c
Step 7/12 : RUN adduser --disabled-password --gecos "" --home $home --ingroup docker $user
---> Running in ff36a7f7e99b
Removing intermediate container ff36a7f7e99b
---> 97f481916feb
Step 8/12 : WORKDIR $home
---> Running in 8d7f0411d6e3
Removing intermediate container 8d7f0411d6e3
---> 5de66f4b5d4e
Step 9/12 : USER $user
---> Running in ac4abac7c3a8
Removing intermediate container ac4abac7c3a8
---> dffd2185df1f
Step 10/12 : COPY chown=$user entrypoint.sh .
COPY failed: stat /var/snap/docker/common/var-lib-docker/tmp/docker-builder615220199/chown=hakond: no such file or directory

You created your new user successfully; you just wrote chown instead of --chown in your COPY command.
Your Dockerfile should look like:
FROM alpine:3.12
SHELL ["/bin/sh", "-c"]
RUN apk add --no-cache bash
ARG user=hakond
ARG home=/home/$user
RUN addgroup -S docker
RUN adduser \
--disabled-password \
--gecos "" \
--home $home \
--ingroup docker \
$user
WORKDIR $home
USER $user
COPY --chown=$user entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
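If you also want the user to have a password, as in the Ubuntu version, you can keep --disabled-password on adduser and then set one with chpasswd, which should be available via BusyBox on Alpine. A minimal sketch, assuming "alpine" as a placeholder password:
# keep the adduser --disabled-password step as above, then set a password;
# "alpine" is only a placeholder, substitute your own value
RUN echo "$user:alpine" | chpasswd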

Related

Dockerfile won't install cron

I am trying to install cron via my Dockerfile so that docker-compose can create a dedicated cron container by using a different entrypoint when it's built; that container will regularly create another container that runs a script and then remove it. I'm trying to follow the Separating Cron From Your Application Services section of this guide: https://www.cloudsavvyit.com/9033/how-to-use-cron-with-your-docker-containers/
I know that the order of operations is important, and I wonder if I have that misconfigured in my Dockerfile:
FROM swift:5.3-focal as build
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
RUN apt-get update && apt-get install -y cron
COPY example-crontab /etc/cron.d/example-crontab
RUN chmod 0644 /etc/cron.d/example-crontab &&\
crontab /etc/cron.d/example-crontab
COPY ./Package.* ./
RUN swift package resolve
COPY . .
RUN swift build --enable-test-discovery -c release
WORKDIR /staging
RUN cp "$(swift build --package-path /build -c release --show-bin-path)/Run" ./
RUN [ -d /build/Public ] && { mv /build/Public ./Public && chmod -R a-w ./Public; } || true
RUN [ -d /build/Resources ] && { mv /build/Resources ./Resources && chmod -R a-w ./Resources; } || true
# ================================
# Run image
# ================================
FROM swift:5.3-focal-slim
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true && \
apt-get -q update && apt-get -q dist-upgrade -y && rm -r /var/lib/apt/lists/*
RUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app vapor
WORKDIR /app
COPY --from=build --chown=vapor:vapor /staging /app
USER vapor:vapor
EXPOSE 8080
ENTRYPOINT ["./Run"]
CMD ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
This is relevant portion of my docker-compose file:
services:
  app:
    image: prizmserver:latest
    build:
      context: .
    environment:
      <<: *shared_environment
    volumes:
      - $PWD/.env:/app/.env
    links:
      - db:db
    ports:
      - '8080:8080'
    # user: '0' # uncomment to run as root for testing purposes even though Dockerfile defines 'vapor' user.
    command: ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
  cron:
    image: prizmserver:latest
    entrypoint: /bin/bash
    command: ["cron", "-f"]
This is my example-scheduled-task.sh:
#!/bin/bash
timestamp=`date +%Y/%m/%d-%H:%M:%S`
echo "System path is $PATH at $timestamp"
And this is my crontab file:
*/5 * * * * /usr/bin/sh /example-scheduled-task.sh
My script example-scheduled-task.sh and my crontab example-crontab live inside my application folder where this Dockerfile and docker-compose.yml live.
Why won't my cron container launch?
In a multi-stage build, only the last FROM is used to generate the final image.
For example, in the following Dockerfile, a.txt exists only in the first stage and cannot be seen in the final image.
Dockerfile:
FROM python:3.9-slim-buster
WORKDIR /tmp
RUN touch a.txt
RUN ls /tmp
FROM ubuntu:16.04
RUN ls /tmp
Execution:
# docker build -t abc:1 . --no-cache
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM python:3.9-slim-buster
---> c2f204720fdd
Step 2/6 : WORKDIR /tmp
---> Running in 1e6ed4ef521d
Removing intermediate container 1e6ed4ef521d
---> 25282e6f7ed6
Step 3/6 : RUN touch a.txt
---> Running in b639fcecff7e
Removing intermediate container b639fcecff7e
---> 04985d00ed4c
Step 4/6 : RUN ls /tmp
---> Running in bfc2429d6570
a.txt
tmp6_uo5lcocacert.pem
Removing intermediate container bfc2429d6570
---> 3356850a7653
Step 5/6 : FROM ubuntu:16.04
---> 065cf14a189c
Step 6/6 : RUN ls /tmp
---> Running in 19755da110b8
Removing intermediate container 19755da110b8
---> 890f13e709dd
Successfully built 890f13e709dd
Successfully tagged abc:1
Back to your example: you copy the crontab into the swift:5.3-focal build stage, but the final stage is based on swift:5.3-focal-slim, which won't have any crontab.
EDIT:
The compose service for cron also needs to be updated as follows:
  cron:
    image: prizmserver:latest
    entrypoint: cron
    command: ["-f"]
cron doesn't need /bin/bash to start; overriding the entrypoint with cron directly does the trick.
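If the cron container should actually contain cron and your crontab, those install and COPY steps have to move into the final stage. A sketch of what the run-image stage could look like under that assumption (the script path matches the crontab in the question):
FROM swift:5.3-focal-slim
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true && \
    apt-get -q update && apt-get -q dist-upgrade -y && \
    # install cron in the image that actually runs, not only in the builder
    apt-get -q install -y cron && \
    rm -rf /var/lib/apt/lists/*
COPY example-crontab /etc/cron.d/example-crontab
RUN chmod 0644 /etc/cron.d/example-crontab && crontab /etc/cron.d/example-crontab
COPY example-scheduled-task.sh /example-scheduled-task.sh
# remaining steps (useradd, COPY --from=build, USER, ENTRYPOINT, CMD) unchanged
One caveat: the cron daemon generally needs to run as root, so the cron service may also need the user: '0' override (or the final USER line adjusted) for cron -f to start.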

How to debug missing path in nvidia-docker build

I am creating an nvidia-docker image with the following included in the Dockerfile:
RUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && chmod +x /miniconda.sh && /miniconda.sh -b -p /miniconda && rm /miniconda.sh
ENV PATH=/miniconda/bin:$PATH
#this is stored in cache ---> fa383a2e1344
# check path
RUN /miniconda/bin/conda
I get the following error:
/bin/sh: 1: /miniconda/bin/conda: not found
The command '/bin/sh -c /miniconda/bin/conda' returned a non-zero code: 127
When I test the path using:
nvidia-docker run --rm fa383a2e1344 ls
then /miniconda does not exist, hence the error.
I then altered the Dockerfile to replace /miniconda with an env var path, i.e.:
ENV CONDA_DIR $HOME/miniconda
# Install Miniconda
RUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& chmod +x /miniconda.sh \
&& /miniconda.sh -b -p CONDA_DIR \
&& rm /miniconda.sh
ENV PATH=$CONDA_DIR:$PATH
# check path
RUN $CONDA_DIR/conda
And get the error:
/bin/sh: 1: /miniconda/conda: not found
The command '/bin/sh -c $CONDA_DIR/conda' returned a non-zero code: 127
I was able to get it working by installing into the current directory rather than directly under /:
WORKDIR /miniconda
RUN curl -so ./miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& chmod +x ./miniconda.sh \
&& ./miniconda.sh -b -p CONDA_DIR
Here is the build result for reference
docker build - < Dockerfile
Sending build context to Docker daemon 3.072kB
Step 1/5 : FROM node:12.16.0-alpine
---> 466593119d17
Step 2/5 : RUN apk update && apk add --no-cache curl
---> Using cache
---> 1d6830c38dfa
Step 3/5 : WORKDIR /miniconda
---> Using cache
---> 8ee9890a7109
Step 4/5 : WORKDIR /miniconda
---> Running in 63238c179aea
Removing intermediate container 63238c179aea
---> 52f571393bf6
Step 5/5 : RUN curl -so ./miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && chmod +x ./miniconda.sh && ./miniconda.sh -b -p CONDA_DIR
---> Running in b59e945ad7a9
Removing intermediate container b59e945ad7a9
---> 74ce06c9af66
Successfully built 74ce06c9af66
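One thing that stands out in the snippets above: -p CONDA_DIR passes the literal string CONDA_DIR, not the value of the variable, so Miniconda gets installed into a directory literally named CONDA_DIR. Also, since $HOME is not an ARG/ENV declared in the Dockerfile, ENV CONDA_DIR $HOME/miniconda most likely expands to just /miniconda, which matches the /miniconda/conda: not found error; and conda itself lives under bin/ inside the install prefix. A sketch with those points addressed (the /opt/miniconda prefix is illustrative, not from the original post):
# choose an explicit prefix instead of relying on $HOME at ENV time
ENV CONDA_DIR=/opt/miniconda
RUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && chmod +x /miniconda.sh \
    # note the $ so the shell expands the variable
    && /miniconda.sh -b -p "$CONDA_DIR" \
    && rm /miniconda.sh
# conda is in $CONDA_DIR/bin, not $CONDA_DIR itself
ENV PATH=$CONDA_DIR/bin:$PATH
RUN conda --version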

Cannot create deno docker image

I want to create a deno Docker image using this Dockerfile:
FROM alpine:latest
WORKDIR /
RUN apk update && \
apk upgrade
RUN apk add curl
RUN curl -fsSL https://deno.land/x/install/install.sh | sh
ENV DENO_INSTALL="/root/.deno"
ENV PATH="${DENO_INSTALL}/bin:${PATH}"
RUN deno --help
But when I run docker build -t deno . it fails at the last step with /bin/sh: deno: not found.
Full output:
Sending build context to Docker daemon 54.78kB
Step 1/8 : FROM alpine:latest
---> f70734b6a266
Step 2/8 : WORKDIR /
---> Using cache
---> b1bbfa810906
Step 3/8 : RUN apk update && apk upgrade
---> Using cache
---> a7761425faba
Step 4/8 : RUN apk add curl
---> Using cache
---> 9099d4f65cb1
Step 5/8 : RUN curl -fsSL https://deno.land/x/install/install.sh | sh
---> Using cache
---> b4ea95c69a73
Step 6/8 : ENV DENO_INSTALL="/root/.deno"
---> Using cache
---> bdc7e1e85e9c
Step 7/8 : ENV PATH="${DENO_INSTALL}/bin:${PATH}"
---> Using cache
---> d35db1caba71
Step 8/8 : RUN deno --help
---> Running in d1ca4e1d0dc6
/bin/sh: deno: not found
The command '/bin/sh -c deno --help' returned a non-zero code: 127
Alpine is missing glibc, which is needed for deno to run.
You can use frolvlad/alpine-glibc:alpine-3.11_glibc-2.31 instead and it will work fine.
FROM frolvlad/alpine-glibc:alpine-3.11_glibc-2.31
WORKDIR /
RUN apk update && \
apk upgrade
RUN apk add curl
RUN curl -fsSL https://deno.land/x/install/install.sh | sh
ENV DENO_INSTALL="/root/.deno"
ENV PATH="${DENO_INSTALL}/bin:${PATH}"
RUN deno --help
I recommend installing a specific deno version; for that, you should use:
curl -fsSL https://deno.land/x/install/install.sh | sh -s v1.0.0
FROM frolvlad/alpine-glibc:alpine-3.11_glibc-2.31
ENV DENO_VERSION=1.0.0
# ...
RUN curl -fsSL https://deno.land/x/install/install.sh | sh -s v${DENO_VERSION}
# ...
You can also check deno-docker
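As a quick sanity check after switching the base image, a build and version check might look like this (the tag deno-glibc is just an example):
docker build -t deno-glibc .
docker run --rm deno-glibc deno --version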

Dockerfile is not running the desired shell script

I am trying to configure and run a certain program using Docker. I am a beginner in Docker, so beware of newbie mistakes!
FROM ubuntu:16.04
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
RUN apt-get update && apt-get install --assume-yes wget sudo && \
wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && \
chmod 755 ./install_ros_kinetic.sh && \
bash ./install_ros_kinetic.sh
RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
RUN cd /home/$USERNAME/catkin_ws/src/
RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git
RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
USER $USERNAME
WORKDIR /home/$USERNAME
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
RUN /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"
Gave the following output:
~/m/rosdocker docker build --rm -f "Dockerfile" -t rosdocker:latest .
Sending build context to Docker daemon 5.632kB
Step 1/15 : FROM ubuntu:16.04
---> b0ef3016420a
Step 2/15 : ENV USERNAME ros
---> Using cache
---> 25bf14574e2b
Step 3/15 : RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
---> Using cache
---> 3a2787196745
Step 4/15 : RUN bash -c 'echo $USERNAME:ros | chpasswd'
---> Using cache
---> fa4bc1d220a8
Step 5/15 : ENV HOME /home/$USERNAME
---> Using cache
---> f987768fa3b1
Step 6/15 : RUN apt-get update && apt-get install --assume-yes wget sudo && wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && chmod 755 ./install_ros_kinetic.sh && bash ./install_ros_kinetic.sh
---> Using cache
---> 9c26b8318f2e
Step 7/15 : RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
---> Using cache
---> 4b4c0abace7f
Step 8/15 : RUN cd /home/$USERNAME/catkin_ws/src/
---> Using cache
---> fb87caedbef8
Step 9/15 : RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git
---> Using cache
---> d2d7f198e018
Step 10/15 : RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
---> Using cache
---> 42ddcbbc19e1
Step 11/15 : USER $USERNAME
---> Using cache
---> 4526fd7b5d75
Step 12/15 : WORKDIR /home/$USERNAME
---> Using cache
---> 0543c327b994
Step 13/15 : RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
---> Using cache
---> dff40263114a
Step 14/15 : RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
---> Using cache
---> fff611e9d9db
Step 15/15 : RUN /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"
---> Running in 7f26a34419a3
/bin/bash: catkin_make: command not found
The command '/bin/sh -c /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"' returned a non-zero code: 127
~/m/rosdocker
I need it to run catkin_make (which is on the path set up by .bashrc)
Exit code 127 from shell commands means "command not found". Is .bashrc executable? Normally it is not; you probably want to source it:
source /home/$USERNAME/.bashrc
As Dan Farrel pointed out in his comment, sourcing the file in a RUN command will only have effect within that shell.
To source .bashrc during the build
If you want it to take effect for later commands in the build, you need to run them all in the same RUN statement. In the example below, .bashrc is sourced in the same shell in which catkin_make is run.
RUN . /home/ros/.bashrc && \
cd /home/$USERNAME/catkin_ws && \
catkin_make
To source the .bashrc file when the container starts
What should happen when the container is run using docker run is specified by the ENTRYPOINT statement. If you just want a plain bash prompt, specify /bin/bash. The shell will run as the user specified in the USER statement.
So, in summary, if you add the following to the end of your Dockerfile:
USER ros
ENTRYPOINT /bin/bash
When someone runs the container using docker run -it <containerName>, they will land in a bash shell as the user ros. Bash will automatically source the /home/ros/.bashrc file, and all definitions inside will be available in the shell. (Your RUN statement that sources the .bashrc file can be removed.)
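For example, rebuilding and then starting the container would look roughly like this, reusing the build command from the question:
docker build --rm -f "Dockerfile" -t rosdocker:latest .
# lands in an interactive bash shell as the "ros" user, with .bashrc sourced
docker run -it rosdocker:latest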

File copied to Docker image disappears

I have this multi-stage Dockerfile. I build a program in the build image, tar up the contents, copy the archive into the main image, and untar it. Once the container starts, when I go into the container, I can no longer find the file. However, using "ls" commands during the build I'm able to see that it was copied over and extracted. I don't know if this has anything to do with the fact that I mount the root directory of the application as a volume. I did that to speed up the builds after making code changes.
docker-compose.yml
version: "3"
services:
web:
build: .
ports:
- "5000:5000"
- "5432:5432"
volumes:
- ".:/code"
environment:
- PORT=5000
# TODO: Should be set to 0 for production
- PYTHONUNBUFFERED=1
Dockerfile
# Build lab-D
FROM gcc:8.2.0 as builder
RUN apt-get update && apt-get install -y libxerces-c-dev
WORKDIR /lab-d/
RUN git clone https://github.com/lab-d/lab-d.git
WORKDIR /lab-d/lab-d/
RUN autoreconf -if
RUN ./configure --enable-silent-rules 'CFLAGS=-g -O0 -w' 'CXXFLAGS=-g -O0 -w' 'LDFLAGS=-g -O0 -w'
RUN make
RUN make install
WORKDIR /lab-d/
RUN ls
RUN tar -czf labd.tar.gz lab-d
# Main Image
FROM library/python:3.7-stretch
RUN apt-get update && apt-get install -y python3 python3-pip \
postgresql-client \
# lab-D requires this library
libxerces-c-dev \
# For VIM
apt-file
RUN apt-file update && apt-get install -y vim
RUN pip install --upgrade pip
COPY requirements.txt /
RUN pip3 install --trusted-host pypi.org -r /requirements.txt
RUN pwd
RUN ls .
COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
WORKDIR /code
RUN pwd
RUN ls .
RUN tar -xzf labd.tar.gz
RUN ls .
run pwd
RUN ls .
CMD ["bash", "start.sh"]
docker-compose build --no-cache
...
Step 19/29 : RUN pwd
---> Running in a856867bf69a
/
Removing intermediate container a856867bf69a
---> f1ee3dca8500
Step 20/29 : RUN ls .
---> Running in ee8da6874808
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
requirements.txt
root
run
sbin
srv
sys
tmp
usr
var
Removing intermediate container ee8da6874808
---> e8aec80955c9
Step 21/29 : COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
---> 72d14ab4e01f
Step 22/29 : WORKDIR /code
---> Running in 17873e785c17
Removing intermediate container 17873e785c17
---> 57e8361767ca
Step 23/29 : RUN pwd
---> Running in abafd210abcb
/code
Removing intermediate container abafd210abcb
---> c6f430e1b362
Step 24/29 : RUN ls .
---> Running in 40b9e85261c2
labd.tar.gz
Removing intermediate container 40b9e85261c2
---> f9ee8e04d065
Step 25/29 : RUN tar -xzf labd.tar.gz
---> Running in 6e60ce7e1886
Removing intermediate container 6e60ce7e1886
---> 654d3c791798
Step 26/29 : RUN ls .
---> Running in 0f445b35f399
lab-d
labd.tar.gz
Removing intermediate container 0f445b35f399
---> 7863a15534b1
Step 27/29 : run pwd
---> Running in 9658c6170bde
/code
Removing intermediate container 9658c6170bde
---> 8d8e472a1b95
Step 28/29 : RUN ls .
---> Running in 19da5b77f5b6
lab-d
labd.tar.gz
Removing intermediate container 19da5b77f5b6
---> 140645efadbc
Step 29/29 : CMD ["bash", "start.sh"]
---> Running in 02b006bdf868
Removing intermediate container 02b006bdf868
---> 28d819321035
Successfully built 28d819321035
Successfully tagged -server_web:latest
start.sh
#!/bin/bash
# Start the SQL Proxy (Local-only)
pwd
ls .
./cloud_sql_proxy -instances=api-project-123456789:us-central1:sq=tcp:5432 \
-credential_file=./config/google_service_account.json &
ls .
# Override with CircleCI for other environments
cp .env.development .env
ls .
python3 -u ./server/server.py
In your Dockerfile, you
COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
WORKDIR /code
RUN tar -xzf labd.tar.gz
But then your docker-compose.yml specifies
volumes:
  - ".:/code"
That causes the current directory on the host to be mounted over /code in the container, and every last bit of work your Dockerfile does is hidden.
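If you need both the bind mount for live code and the unpacked artifact, one option is to extract it somewhere the mount does not cover; alternatively, drop the ".:/code" volume entirely. A sketch of the first option (the /opt/lab-d path is illustrative, not from the original post):
COPY --from=builder /lab-d/labd.tar.gz /opt/lab-d/labd.tar.gz
WORKDIR /opt/lab-d
RUN tar -xzf labd.tar.gz && rm labd.tar.gz
# /code stays as the bind-mounted application directory
WORKDIR /code
CMD ["bash", "start.sh"]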
