Docker run vs Podman run

I have got the following Dockerfile:
FROM node:18.6.0-alpine3.15
RUN apk --no-cache add --virtual .builds-deps build-base python3 python3-dev py3-pip && \
echo "echo http://dl-cdn.alpinelinux.org/alpine/latest-stable/main" >> /etc/apk/repositories && \
echo echo "http://dl-cdn.alpinelinux.org/alpine/latest-stable/community" >> /etc/apk/repositories && \
apk update && apk upgrade && \
apk add linux-headers && \
pip3 install --upgrade pip
RUN pip3 install RPi.GPIO rpi_ws281x adafruit-circuitpython-neopixel
WORKDIR /usr/src/app
COPY . .
RUN npm install && mkdir -p backend/images
EXPOSE 3000
CMD [ "node", "server.js" ]
This container image runs on a Raspberry Pi, where I am controlling, for example, an LED strip.
The weird thing is that everything works with this docker run command:
docker run --rm --name cm_back_test --privileged -e CM_USER=superuser -e CM_PASSWORD=superuser -e CM_CLUSTER=cluster.xxxx.mongodb.net cm-back:1.0.0
I need the --privileged flag for the GPIO access.
But for some reason the exact same command with podman does not work.
Can someone explain to me why?
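If it helps as a starting point, a hedged guess: on most setups podman runs rootless by default, and a rootless container cannot reach host device nodes such as /dev/gpiomem or /dev/mem the way a rootful Docker daemon can, even with --privileged. A minimal sketch of two things to try (whether your libraries need /dev/gpiomem or /dev/mem is an assumption to check; RPi.GPIO typically uses the former and rpi_ws281x the latter):
# same command, but rootful podman
sudo podman run --rm --name cm_back_test --privileged \
  -e CM_USER=superuser -e CM_PASSWORD=superuser \
  -e CM_CLUSTER=cluster.xxxx.mongodb.net cm-back:1.0.0
# or expose just the GPIO device instead of full --privileged
sudo podman run --rm --name cm_back_test --device /dev/gpiomem \
  -e CM_USER=superuser -e CM_PASSWORD=superuser \
  -e CM_CLUSTER=cluster.xxxx.mongodb.net cm-back:1.0.0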

Related

docker suddenly stopped working, executor failed running [/bin/sh -c echo

Hello, this is my Dockerfile:
FROM golang:alpine3.15 as builder
RUN apk add ca-certificates git make gcc musl-dev libc6-compat curl chromium bash curl
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/main" > /etc/apk/repositories \
&& echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories \
&& echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
&& echo "http://dl-cdn.alpinelinux.org/alpine/v3.12/main" >> /etc/apk/repositories \
&& apk upgrade -U -a \
&& apk add \
libstdc++ \
chromium \
harfbuzz \
nss \
freetype \
ttf-freefont \
font-noto-emoji \
wqy-zenhei
RUN mkdir /build
ADD go.* /build/
WORKDIR /build
RUN go mod download -x
ADD main.go /build/
RUN CGO_ENABLED=0 GOOS=linux go build -a -o /api
FROM alpine:3.15
COPY --from=builder /api .
EXPOSE 8080
ENTRYPOINT [ "/api" ]
STOPSIGNAL SIGKILL
I have been using the same image on the same Ubuntu version for the past 4 months. But now when I run docker-compose up --build I always get this error:
failed to solve: executor failed running [/bin/sh -c echo "http://dl-cdn.alpinelinux.org/alpine/edge/main" > /etc/apk/repositories && echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && echo "http://dl-cdn.alpinelinux.org/alpine/v3.12/main" >> /etc/apk/repositories && apk upgrade -U -a && apk add libstdc++ chromium harfbuzz nss freetype ttf-freefont font-noto-emoji wqy-zenhei]: exit code: 1
What could be the issue? Thanks
Found these errors too:
#0 3.144 ERROR: ca-certificates-bundle-20220614-r2: trying to overwrite etc/ssl1.1/cert.pem owned by libcrypto1.1-1.1.1q-r0.
#0 3.144 ERROR: ca-certificates-bundle-20220614-r2: trying to overwrite etc/ssl1.1/certs owned by libcrypto1.1-1.1.1q-r0.
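Not a definitive diagnosis, but the two overwrite errors come from the apk upgrade -U -a step: it pulls current edge packages onto the alpine3.15 base, and the file ownership split between ca-certificates-bundle and libcrypto1.1 has changed in edge since the image was last built. A minimal sketch of one way around it, assuming the packages you need exist in the 3.15 repositories shipped with the base image (worth verifying for fonts like wqy-zenhei), is to stop rewriting /etc/apk/repositories:
FROM golang:alpine3.15 as builder
# install from the repositories that ship with the base image instead of edge/v3.12
RUN apk add --no-cache ca-certificates git make gcc musl-dev libc6-compat curl bash \
    libstdc++ chromium harfbuzz nss freetype ttf-freefont font-noto-emoji wqy-zenhei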

RUN apk --update add python3 py3-pip python3-dev not working in alpine docker image

Here I'm trying to build a Terraform image using Alpine with the following Dockerfile, but without success. However, the same used to work until a few months ago; I'm not sure what changed.
Dockerfile:
FROM alpine:latest
ARG Test_GID=1002
ARG Test_UID=1002
# Change to root user
USER root
RUN addgroup --gid ${Test_GID:-1002} test
RUN adduser -S -u ${Test_UID:-1002} -D -h "$(pwd)" -G test test
ENV USER=test
ENV TERRAFORM_VERSION=0.15.4
ENV TERRAFORM_SHA256SUM=ddf9fdfdfdsffdsffdd4e7c080da9a106befc1ff9e53b57364622720114e325c
ENV TERRAFORM_DOWNLOAD_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN apk --update add python3 py3-pip python3-dev
RUN apk update && \
apk add ansible \
gcc \
libffi \
libffi-dev \
musl-dev \
make \
openssl \
openssl-dev \
curl \
zip \
git \
jq
When I run the command docker image build -t terraform:0.15.5 . I get the error shown below.
I think there is a problem in the Dockerfile you copied here; jenkins should not be there. Anyway, I tried with the Dockerfile below and the build was successful, so I couldn't reproduce the problem. Are there any other lines in your Dockerfile?
FROM alpine:latest
ARG Test_GID=1002
ARG Test_UID=1002
# Change to root user
USER root
RUN addgroup --gid ${Test_GID:-1002} test
RUN adduser -S -u ${Test_UID:-1002} -D -h "$(pwd)" -G test test
ENV USER=test
ENV TERRAFORM_VERSION=0.15.4
ENV TERRAFORM_SHA256SUM=ddf9fdfdfdsffdsffdd4e7c080da9a106befc1ff9e53b57364622720114e325c
ENV TERRAFORM_DOWNLOAD_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN apk --update add python3 py3-pip python3-dev
RUN apk update && \
apk add ansible \
gcc \
libffi \
libffi-dev \
musl-dev \
make \
openssl \
openssl-dev \
curl \
zip \
git \
jq
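For what it's worth, one thing that commonly breaks builds like this "a few months later" is the moving alpine:latest tag: every new Alpine release changes the package set and Python version behind apk add python3 py3-pip python3-dev. A minimal sketch (3.16 is an assumption; pin whichever release you last built successfully against):
# pin the base image so the apk package set stays stable between builds
FROM alpine:3.16
RUN apk add --no-cache python3 py3-pip python3-dev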

run.sh not found during push to heroku for no reason

Once I was done with the Django application, I created a pull request and merged two branches, then pushed and released the container to Heroku, but for no apparent reason an error occurred:
2021-10-01T15:54:44.673598+00:00 heroku[web.1]: State changed from crashed to starting
2021-10-01T15:54:50.776869+00:00 heroku[web.1]: Starting process with command `run.sh`
2021-10-01T15:54:51.721903+00:00 heroku[web.1]: Process exited with status 127
2021-10-01T15:54:51.834182+00:00 heroku[web.1]: State changed from starting to crashed
2021-10-01T15:54:51.580990+00:00 app[web.1]: /bin/sh: run.sh: not found
In the latest commits there were no changes to the Dockerfile or to run.sh, and no paths were changed either.
run.sh is located in scripts/run.sh
run.sh:
#!/bin/sh
set -e
npm run prod
npm prune --production
python manage.py wait_for_db
python manage.py collectstatic --noinput
python manage.py makemigrations
python manage.py migrate
gunicorn placerem.wsgi:application --bind 0.0.0.0:$PORT
Dockerfile:
FROM python:3.9-alpine3.13
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./placerem /placerem
COPY ./scripts /scripts
WORKDIR /placerem
RUN apk add --update --no-cache nodejs npm && \
npm ci
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base jpeg-dev postgresql-dev musl-dev linux-headers \
zlib-dev libffi-dev openssl-dev python3-dev cargo && \
apk add --update --no-cache libjpeg && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps && \
adduser --disabled-password --no-create-home placerem && \
mkdir -p /vol/web/static && \
mkdir -p /vol/web/media && \
chown -R placerem:placerem /vol && \
chown -R placerem:placerem /py/lib/python3.9/site-packages/social_django/migrations && \
chown -R placerem:placerem /py/lib/python3.9/site-packages/easy_thumbnails/migrations && \
chown -R placerem:placerem package.json && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:$PATH"
USER placerem
CMD ["run.sh"]
I tried renaming the file and copying it to another folder, but nothing worked. The only thing that helped was rolling back, but that makes no sense. How can I fix it?
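Exit status 127 together with /bin/sh: run.sh: not found for a file that clearly exists usually means the interpreter named in the shebang cannot be found, and the classic cause is CRLF line endings sneaking into the script with a commit (the kernel then looks for /bin/sh followed by a carriage return). A quick local check before rebuilding, assuming the file lives at scripts/run.sh:
file scripts/run.sh                 # "with CRLF line terminators" would explain the error
sed -i 's/\r$//' scripts/run.sh     # strip the carriage returns if they are present
git config core.autocrlf input      # optional: keep Git from reintroducing them on checkout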

Create unix socket inside of Alpine container to connect Gunicorn with Nginx

I'm moving my Django application (same as described here) from my local machine to a Docker container.
I'm redirecting users that come to port 80 from Nginx to Gunicorn via unix:run/gunicorn.sock. It works on my local machine, but I'm not sure how to describe this setup in the Dockerfile.
Right now I'm doing it this way, but it doesn't work...
FROM python:3.7.4-alpine3.10
ADD mediadbin/requirements.txt /app/requirements.txt
RUN set -ex \
&& apk add --no-cache --virtual .build-deps postgresql-dev build-base python3-dev gcc jpeg-dev nginx zlib-dev\
&& python -m venv /env \
&& /env/bin/pip install --upgrade pip \
&& /env/bin/pip install --no-cache-dir -r /app/requirements.txt \
&& runDeps="$(scanelf --needed --nobanner --recursive /env \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u)" \
&& apk add --virtual rundeps $runDeps \
&& mkdir run/gunicorn.sock \
&& mkdir /etc/nginx/sites-enabled \
&& apk del .build-deps
COPY ./config/mediadbin /etc/nginx/sites-enabled/mediadbin
ADD mediadbin /app
WORKDIR /app
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
EXPOSE 5432
EXPOSE 8000
EXPOSE 80
CMD ["gunicorn", "--bind", "unix:run/gunicorn.sock", "--workers", "3", "mediadbin.wsgi:application", "--name", "mediadbin"]
Also it doesn't throw an error or anything like that, so I'm not sure what is wrong...
I'm able to run Gunicorn without Nginx via port 8000 in my container when I edit the script above a little bit.
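Two things stand out, offered as a sketch rather than a confirmed fix: mkdir run/gunicorn.sock creates a directory with that name, while Gunicorn expects to create the socket file itself inside an existing directory, and nothing in the image ever starts Nginx, so nothing listens on port 80. A minimal sketch under those assumptions (the paths and the single-container approach are mine, not from the question):
# create only the directory; Gunicorn creates run/gunicorn.sock by itself
RUN mkdir -p /app/run /etc/nginx/sites-enabled
# start nginx (it daemonizes by default) and keep Gunicorn in the foreground as the main process
CMD nginx && exec gunicorn --bind unix:/app/run/gunicorn.sock --workers 3 --name mediadbin mediadbin.wsgi:application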

Docker container opens directly into specified conda environment

I have a Docker container that has two conda environments in it. One is used the vast majority of the time, so I would like to automatically start in that environment (currently it starts in the base environment). Based on this website I tried adding this to the end of my Dockerfile:
# Activate env
SHELL ["conda", "run", "-n", "py3", "/bin/bash", "-c"]
ENTRYPOINT ["conda", "run", "-n", "py3", "python", "pass.py"]
where pass.py just contains print("hello world").
This caused the script to run, but the Docker container does not stay open after I run it, although it does when I remove these lines. How do I get the container to open and stay open in a specified environment?
My Dockerfile looks like this:
FROM nvidia/cuda:10.1-cudnn7-devel-centos7
WORKDIR /app/
COPY ./*.* ./
ENV CONDA_DIR "/opt/conda"
ENV PATH "$CONDA_DIR"/bin:$PATH
ONBUILD ENV PATH "$CONDA_DIR"/bin:$PATH
RUN \
yum -y install epel-release && \
yum -y update && \
yum install -y \
bzip2 \
curl \
which \
libXext \
libSM \
libXrender \
git \
cuda-nvcc-10-1 \
openssh-server \
postgresql-devel && \
yum clean all && rm -rf /var/cache/yum/*
RUN CONDA_VERSION="4.7.12" && \
curl -L \
https://repo.continuum.io/miniconda/Miniconda3-${CONDA_VERSION}-Linux-x86_64.sh -o miniconda.sh && \
mkdir -p "$CONDA_DIR" && \
bash miniconda.sh -f -b -p "$CONDA_DIR" && \
echo "export PATH=$CONDA_DIR/bin:\$PATH" > /etc/profile.d/conda.sh && \
rm miniconda.sh && \
conda config --add channels conda-forge && \
conda config --set auto_update_conda False && \
pip install --upgrade pip && \
rm -rf /root/.cache/pip/*
RUN conda env create -f py2_env.yaml
RUN conda env create -f py3_env.yaml
# Activate env
SHELL ["conda", "run", "-n", "py3", "/bin/bash", "-c"]
ENTRYPOINT ["conda", "run", "-n", "py3", "python", "pass.py"]
