Understanding the commands in a Dockerfile's RUN instructions

I have a project whose Dockerfile contains this:
FROM node:16-slim AS base
RUN apt-get update
RUN apt-get -y install build-essential libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev librsvg2-dev
RUN useradd -ms /bin/bash xyz-deployer
WORKDIR /srv/app
RUN chmod -R g+rwX /srv/app
RUN yarn global add ts-node#^10.1.0
Can someone please tell me what these lines in particular are doing?
RUN useradd -ms /bin/bash xyz-deployer
WORKDIR /srv/app
RUN chmod -R g+rwX /srv/app

RUN useradd -ms /bin/bash xyz-deployer
creates a new user called xyz-deployer with a home directory and /bin/bash as the shell.
WORKDIR /srv/app
changes the working directory to /srv/app.
RUN chmod -R g+rwX /srv/app
recursively adds group read and write to everything under /srv/app, and group execute (the capital X) only to directories and to files that already have an execute bit.
It doesn't make much sense in the context of the Dockerfile you've shown, but hopefully it helps.
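For a concrete picture of what g+rwX does, here is a small sketch you can run in any Linux shell (the file names are made up for the demo):

```shell
# set up a demo tree: a subdirectory, a plain file, and a script that
# is already executable for its owner
mkdir -p demo/sub
touch demo/file.txt demo/script.sh
chmod u+x demo/script.sh
# capital X adds group execute only to directories and to files that
# already have some execute bit; plain files just gain group read/write
chmod -R g+rwX demo
ls -lR demo
```

After this, demo/sub and demo/script.sh are group-executable, while demo/file.txt is only group-readable/writable.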

Related

Docker: how to use entrypoint + CMD together

I know the difference between ENTRYPOINT and CMD, but I can't solve my issue myself.
This is my Dockerfile for using Ansible without installing it:
FROM python:3.10.4-slim-buster
# Update and upgrade
RUN apt-get update -y && apt-get upgrade -y
# Install requirements
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends openssh-client sshpass
RUN pip install pip --upgrade
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["--version"]
And this is the entrypoint.sh
#!/usr/bin/env sh
set -e
cp -pr /ssh /root/.ssh
chown -R root:root /root/.ssh/config
ansible-playbook
So, I expect that when I launch my container with
docker run \
    --rm -it \
    -v $(TOPDIR)/playbook:/playbook:ro \
    -v ~/.ssh:/ssh:ro \
    sineverba/ansible:latest
(so, with no arguments or CMD override) I will get the Ansible version in the console. But nothing is returned or printed.
The same happens if I pass the real arguments, which I thought would override the CMD instruction:
docker run \
    --rm -it \
    -v $(TOPDIR)/playbook:/playbook:ro \
    -v ~/.ssh:/ssh:ro \
    --name $(CONTAINER_NAME) \
    $(IMAGE_NAME):$(VERSION) \
    -i /playbook/inventory.yml /playbook/playbook.yml
But if I remove the entrypoint script and rebuild with
FROM python:3.10.4-slim-buster
# Update and upgrade
RUN apt-get update -y && apt-get upgrade -y
# Install requirements
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends openssh-client sshpass
RUN pip install pip --upgrade
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["ansible-playbook"]
CMD ["--version"]
I get the version in the console and I can use Ansible (well, not quite, because I still need to change the owner of the SSH keys), but apart from the SSH trouble I get a working container.
So, how can I make the sh entrypoint script work the way the ansible-playbook entrypoint does?
Your script doesn't do anything with any parameters it might get. You need to forward the parameters to the ansible-playbook command in the script with "$@", like this:
#!/usr/bin/env sh
set -e
cp -pr /ssh /root/.ssh
chown -R root:root /root/.ssh/config
ansible-playbook "$@"
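In POSIX shell, "$@" expands to all of the script's arguments, each kept as its own word, which is exactly what Docker appends after the ENTRYPOINT from CMD (or from the docker run command line). A stand-alone sketch of that forwarding (the script name and echoed prefix are made up):

```shell
# write a tiny entrypoint-style wrapper that forwards its arguments
cat > demo-entrypoint.sh <<'EOF'
#!/bin/sh
set -e
echo "forwarded: $@"
EOF
chmod +x demo-entrypoint.sh
# Docker would invoke the ENTRYPOINT with CMD appended, e.g. --version
./demo-entrypoint.sh --version
# prints: forwarded: --version
```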

Docker image only runs on Kubernetes if runAsUser/runAsGroup are changed from 1000 to 0

I'd like to start with I'm new to docker/kubernetes, so I apologize if I get terminologies wrong.
We are trying to get our Docker image to run on Kubernetes. When deployed on Kubernetes, the generated YAML file has these lines:
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
As is, the website doesn't work, but if both values are changed from 1000 to 0, it works.
We suspect that it has to do with apache in our Dockerfile, but haven't been able to figure out how to fix it. Here is the Dockerfile (with ENV and COPY commands removed):
FROM cern/cc7-base
EXPOSE 8083
RUN yum update -y && yum install -y \
    ImageMagick \
    httpd \
    npm \
    php \
    python3-pip
RUN echo "alias python=python3" >> ~/.bashrc
RUN yum update -y && yum install -y \
    epel-release \
    root \
    python3-root
COPY requirements.txt /code/requirements.txt
RUN pip3 install -r /code/requirements.txt
RUN mkdir /db /run/secrets
RUN chown -R apache:apache /db /var/www /run/secrets
RUN ln -s /dev/stdout /etc/httpd/logs/access_log
RUN ln -s /dev/stderr /etc/httpd/logs/error_log
RUN chown apache:apache /etc/httpd/logs/error_log
RUN chown apache:apache /etc/httpd/logs/access_log
RUN chmod 666 /etc/httpd/logs/error_log
RUN chmod 666 /etc/httpd/logs/access_log
WORKDIR /webapp
COPY webapp/package.json /webapp/package.json
RUN npm install
COPY webapp /webapp
RUN npm run build
RUN cp -r /webapp/build /var/www/public
RUN cp -r /webapp/build /webapp/public
RUN mkdir /var/www/results /var/www/results/pdfs /var/www/results/pngs /var/www/results/jsons
RUN chmod 777 /var/www/results /var/www/results/pdfs /var/www/results/pngs /var/www/results/jsons
RUN chgrp -R 1000 /run && chmod -R g=u /run
RUN chgrp -R 1000 /etc/httpd/logs && chmod -R g=u /etc/httpd/logs
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]
Some of the things I tried are: How to run Apache as non-root user?, https://www.henryxieblogs.com/2020/01/dockerfile-example-of-linux-nonroot.html, https://takac.dev/docker-run-apache-as-non-root-user-based-on-the-official-image/
I am not sure if they are not addressing my problem, or if I am just executing the solutions wrong.
Changing the location of error_log, access_log in httpd.conf solved this for me. I also changed all the apache:apache to 1000:1000 in the Dockerfile.
I got the idea to change the location of the logs from here:
https://stackoverflow.com/a/525724/9062782
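For reference, the kind of httpd.conf change involved can be sketched like this (the stand-in config file and sed patterns are illustrative; in the image the real file is typically /etc/httpd/conf/httpd.conf):

```shell
# stand-in config to illustrate the edits
printf 'Listen 80\nErrorLog logs/error_log\n' > httpd.conf
# send the error log to stderr so the non-root process never has to
# create files under /etc/httpd/logs
sed -i 's#^ErrorLog .*#ErrorLog /dev/stderr#' httpd.conf
# listen on an unprivileged port; binding below 1024 requires root
sed -i 's#^Listen 80$#Listen 8083#' httpd.conf
cat httpd.conf
```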

How to give non-root user permission to make and give access to a folder in a Dockerfile

I am trying to create folders and set permissions on them as a non-root user, using the Red Hat ubi8/ubi-minimal image.
Here are two questions:
Make a folder: is there another way for a non-root user to create folders and grant permissions on them? Could this be done in the RUN command after it installs all the packages with microdnf?
Give access: RUN chmod -R 777 /app is not best practice; is it better to do RUN chown -R $USER:$USER /app instead?
Here is my Dockerfile, in which I repeat chown a few times to set permissions.
FROM registry.access.redhat.com/ubi8/ubi-minimal
ENV USER=appuser
RUN microdnf update -y \
    && rm -rf /var/cache/yum \
    && microdnf install gcc wget tar gzip make zlib-devel findutils bzip2-devel openssl-devel ncurses-devel \
       sqlite-devel libffi-devel xz-devel which shadow-utils \
    && microdnf clean all ; \
    useradd -m $USER
RUN chown -R $USER:$USER /opt
RUN mkdir -p /app
RUN chown $USER /app
USER $USER
WORKDIR /app
COPY . /app/
RUN chown -R $USER:$USER /app
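As a hedged note on the second question: giving ownership with chown is indeed generally preferred over chmod -R 777, and the repeated chown layers can often be avoided by letting COPY set ownership directly. A sketch of the tail of the Dockerfile using that flag (the literal appuser matches the ENV USER=appuser above; this is a suggestion, not from an answer in this thread):

```dockerfile
USER appuser
WORKDIR /app
# --chown sets ownership while copying, avoiding a separate chown
# layer that duplicates all the copied files in the image
COPY --chown=appuser:appuser . /app/
```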

How to run sudo commands in Docker?

I'm trying to build a Docker container containing SQLite3 and Flask. But SQLite isn't getting installed because sudo needs a password. How is this problem solved?
The error:
Step 6/19 : RUN sudo apt-get install -y sqlite3
---> Running in 9a9c8f8104a8
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
The command '/bin/sh -c sudo apt-get install -y sqlite3' returned a non-zero code: 1
The Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
RUN sudo apt-get install -y sqlite3
RUN mkdir /db
RUN /usr/bin/sqlite3 /db/test.db
CMD /bin/bash
RUN sudo apt-get install -y python
WORKDIR /usr/src/app
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
sudo is not necessary, as you can install everything before switching users.
You should also think in terms of consistent layers: each new version of your image should only rebuild the parts that changed.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Please find below an example of what you could use instead of the provided Dockerfile.
The idea is that you install dependencies first and then run the configuration commands.
Be aware that CMD can be replaced at runtime.
docker run myimage <CMD>
# Base image, based on python installed on debian
FROM python:3.9-slim-bullseye
# Arguments used to run the app
ARG user=docker
ARG group=docker
ARG uid=1000
ARG gid=1000
ARG app_home=/usr/src/app
ARG sql_database_directory=/db
ARG sql_database_name=test.db
# Environment variables, user defined
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
# Install sqlite
RUN apt-get update \
    && apt-get install -y sqlite3 \
    && apt-get clean
# Create app user
RUN mkdir -p ${app_home} \
    && chown ${uid}:${gid} ${app_home} \
    && groupadd -g ${gid} ${group} \
    && useradd -d "${app_home}" -u ${uid} -g ${gid} -s /bin/bash ${user}
# Create sql database directory
RUN mkdir -p ${sql_database_directory} \
    && chown ${uid}:${gid} ${sql_database_directory}
# Switch to user defined by arguments
USER ${user}
RUN /usr/bin/sqlite3 ${sql_database_directory}/${sql_database_name}
# Copy & Run application (by default)
WORKDIR ${app_home}
COPY . .
RUN pip install --no-cache-dir --no-warn-script-location -r requirements.txt
CMD ["python", "-m", "flask", "run"]

RUN addgroup / adduser gives "Option s is ambiguous (shell, system)" error

I am Dockerising my first Flask app, following an online guide, but am stuck on the following line in Dockerfile.prod:
RUN addgroup -S app && adduser -S app -G app
I get the error Option s is ambiguous (shell, system)
I came across this SO post and tried the accepted answer (pretty much the same):
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
With the same outcome. I have also tried spelling out --shell, --system, and --group, but then I get the error addgroup: Only one or two names allowed.
No matter what I try, I get these errors:
Option s is ambiguous (shell, system)
Option g is ambiguous (gecos, gid, group)
I am on Windows (using Docker, not Docker for Windows). Not sure if that's the issue, but I cannot find a solution.
Dockerfile.prod
FROM python:3.8.1-slim-buster as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc
# lint
RUN pip install --upgrade pip
RUN pip install flake8
COPY . /usr/src/app/
RUN flake8 --ignore=E501,F401,E722 .
# install python dependencies
COPY ./requirements.txt .
# COPY requirements .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements/prod.txt
# pull official base image
FROM python:3.8.1-slim-buster
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends netcat
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
# COPY --from=builder /usr/src/app/requirements/common.txt .
# COPY --from=builder /usr/src/app/requirements/prod.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
I understand that you would like to create a user and a group, both called app, with the user in that group, and both being a system account/group.
That's possible using adduser alone:
RUN adduser --system --group app
Maybe this helps:
https://github.com/mozilla-services/Dockerflow/issues/36
Depending on the underlying distribution of the container, these options can be ambiguous. The options should use the full name format for readability.
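To make the distinction concrete: BusyBox's adduser (used in Alpine images, which the guide likely assumed) takes short flags like -S and -G, while Debian's adduser is a Perl wrapper that expects long options. On the python:3.8.1-slim-buster base the fix is a one-liner:

```dockerfile
# Debian adduser: --system creates a system account, --group creates a
# matching system group "app" and puts the user in it
RUN adduser --system --group app
```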