Docker permission denied when building an image for a container

I tried to build an image from a Dockerfile.
As the base I used this Docker Hub image: https://hub.docker.com/r/openshift/origin-haproxy-router
My Dockerfile:
FROM openshift/origin-haproxy-router
RUN INSTALL_PKGS="haproxy18 rsyslog" && \
yum install -y $INSTALL_PKGS && \
yum clean all && \
rpm -V $INSTALL_PKGS && \
mkdir -p /var/lib/haproxy/router/{certs,cacerts,whitelists} && \
mkdir -p /var/lib/haproxy/{conf/.tmp,run,bin,log} && \
touch /var/lib/haproxy/conf/{{os_http_be,os_edge_reencrypt_be,os_tcp_be,os_sni_passthrough,os_route_http_redirect,cert_config,os_wildcard_domain}.map,haproxy.config} && \
setcap 'cap_net_bind_service=ep' /usr/sbin/haproxy && \
chown -R :0 /var/lib/haproxy && \
chmod -R g+w /var/lib/haproxy
COPY images/router/haproxy/* /var/lib/haproxy/
LABEL io.k8s.display-name="OpenShift HAProxy Router" \
io.k8s.description="This component offers ingress to an OpenShift cluster via Ingress and Route rules." \
io.openshift.tags="openshift,router,haproxy"
USER root
EXPOSE 80 443
WORKDIR /var/lib/haproxy/conf
ENV TEMPLATE_FILE=/var/lib/haproxy/conf/haproxy-config.template \
RELOAD_SCRIPT=/var/lib/haproxy/reload-haproxy
ENTRYPOINT ["/usr/bin/openshift-router"]
Then I ran this command inside the folder containing the Dockerfile:
sudo docker build -t os-router .
I got the following result:
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/Conflictname'
You need to be root to perform this command.
How can I solve this error?

Put USER root in your Dockerfile before the RUN instruction that calls yum. The base image runs as a non-root user (which is why yum cannot write to the RPM database), and your Dockerfile only switches to root after the package installation step.
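As a minimal sketch (keeping everything else from the Dockerfile above unchanged), the only change is moving USER root above the package installation:
FROM openshift/origin-haproxy-router
# switch to root before yum needs to write to /var/lib/rpm
USER root
RUN INSTALL_PKGS="haproxy18 rsyslog" && \
    yum install -y $INSTALL_PKGS && \
    yum clean all
# ... rest of the Dockerfile unchanged ...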

Related

Nginx Dockerfile on Amazon Linux

Please help. I am building an NGINX Plus ingress controller image and deploying it to EKS using the Dockerfile below.
Dockerfile:
FROM amazonlinux:2
LABEL maintainer="armand#f5.com"
ENV NGINX_VERSION 23
ENV NJS_VERSION 0.5.2
ENV PKG_RELEASE 1.amzn2.ngx
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${PATH}"
RUN mkdir -p /etc/ssl/nginx
ADD nginx-repo.crt /etc/ssl/nginx
ADD nginx-repo.key /etc/ssl/nginx
ADD qlik.crt /etc/ssl/nginx
RUN update-ca-trust extract
RUN yum -y update \
&& yum -y install sudo
RUN set -x \
&& chmod 644 /etc/ssl/nginx/* \
&& yum install -y --setopt=tsflags=nodocs wget ca-certificates bind-utils wget bind-utils vim-minimal shadow-utils \
&& groupadd --system --gid 101 nginx \
&& adduser -g nginx --system --no-create-home --home /nonexistent --shell /bin/false --uid 101 nginx \
&& usermod -s /sbin/nologin nginx \
&& usermod -L nginx \
&& wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-amazon2.repo \
&& yum --showduplicates list nginx-plus \
&& yum install -y --setopt=tsflags=nodocs nginx-plus-${NGINX_VERSION}-${PKG_RELEASE} \
&& rm /etc/nginx/conf.d/default.conf \
&& mkdir -p /var/cache/nginx \
&& mkdir -p /var/lib/nginx/state \
&& chown -R nginx:nginx /etc/nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& ulimit -c -m -s -t unlimited \
&& yum clean all \
&& rm -rf /var/cache/yum \
&& rm -rf /etc/yum.repos.d/* \
&& rm /etc/ssl/nginx/nginx-repo.crt /etc/ssl/nginx/nginx-repo.key
RUN echo "root:root" | chpasswd
EXPOSE 80 443 8080
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
I am starting the container using these Helm commands:
helm upgrade \
--install my-athlon-ingress-controller nginx-stable/nginx-ingress --version 0.11.3 --debug \
--set controller.image.pullPolicy=Always \
--set controller.image.tag=6.0.1 \
--set controller.image.repository=957123096554.dkr.ecr.eu-central-1.amazonaws.com/nginx-service \
--set controller.nginxplus=true \
--set controller.enableSnippets=true \
--set controller.enablePreviewPolicies=true \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-type'='nlb' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol'='tcp' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-proxy-protocol'='*' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports'='443'
echo Setting up SSL
export tlskey=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-key |jq --raw-output '.SecretString' )
echo $tlskey
export tlscrt=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-crt |jq --raw-output '.SecretString' )
echo $tlscrt
helm upgrade --install nginx-certificate-secrets ./helm-chart-nginx-certificates --set tlscrt=$tlscrt --set tlskey=$tlskey
OK, let me give more clarity. I have an nginx pod running on Debian 10, and when I try to curl a particular Keycloak endpoint I get an error like:
2022/06/13 12:17:46 [info] 35#35: *35461 invalid JWK set while sending to client, client: 141.113.3.32, server: gate-acc.athlon.com, request:
But when I curl the same endpoint from an application pod (Java), I get a 200 response.
Both the nginx pod and all my application pods are in the same namespace and the same EKS cluster.
The difference I see is that the application pods use Amazon Linux as the base image, while the nginx pod is based on Debian.
So I suspect the OS is the issue. That is why I am now trying to build an NGINX Plus image based on Amazon Linux, deploy it with Helm, and then curl the Keycloak endpoint again; that is when I get this PATH-not-found issue.
I assume Amazon Linux may already have some root certificate trusted by default, so it can curl my Keycloak, but Debian does not.
This is the reason I am doing this. Adding the certificate in the Dockerfile is an interim solution; if it works, I can add it as a secret and mount it as a file system.
Whether the nginx pod is built on Amazon Linux or Debian, it only has the nginx user. I am not able to log in as root, so I cannot install utilities like tcpdump, mtr, or dig to see what happens when I curl. Strangely, not even ps, sudo, or other basic commands work, because without root I am not able to install anything.
Error:
Error: failed to start container "my-athlon-ingress-controller-nginx-ingress": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-nginx-plus=true": executable file not found in $PATH: unknown
My goal is to deploy this image with the root certificate installed in the Amazon Linux image and to have root access inside the pod.
I am getting the above message; any help is much appreciated. I also added an ENV PATH line in my Dockerfile.
qlik.crt contains the root certificate.
Please help, thanks.
You do not need to build a custom nginx image just to load certificates. You can put them in a Kubernetes Secret and mount that Secret as a volume in the Deployment/DaemonSet configuration, as sketched below.
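A minimal sketch of that approach, with placeholder names (the Secret name, key, and namespace are assumptions, not from the question):
# create a Secret holding the extra root CA
kubectl create secret generic keycloak-root-ca \
  --from-file=qlik.crt=./qlik.crt \
  -n <ingress-namespace>
The Secret can then be mounted as a volume into the ingress controller pod (for example at /etc/ssl/nginx/qlik.crt) through the Deployment/DaemonSet spec.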

Error while setting up a remote ssh host on a docker container

I'm trying to set up a remote SSH host on a Docker container so that Jenkins jobs can run on it from another container. My Dockerfile looks like this:
FROM centos
RUN yum -y install openssh-server && \
yum install -y passwd
RUN useradd remote_user && \
echo remote_user:1234 | chpasswd && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized-keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
chmod 600 /home/remote_user/.ssh/authorized-keys
RUN /usr/sbin/sshd-keygen -A
EXPOSE 22
RUN rm -rf /run/nologin
CMD /usr/sbin/sshd -D
I run into the following error:
/bin/sh: /usr/sbin/sshd-keygen: No such file or directory
The command '/bin/sh -c /usr/sbin/sshd-keygen -A' returned a non-zero code: 127
Can you please help me correct this issue?
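A hedged workaround, assuming the image provides the standard ssh-keygen binary from the openssh package (sshd-keygen is not always present at /usr/sbin/sshd-keygen on newer CentOS releases), is to generate the host keys with ssh-keygen -A instead:
# generate any missing host keys (rsa, ecdsa, ed25519, ...) at their default paths
RUN ssh-keygen -A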

connection refused when using dockerfile to pull git repository

Local Kubernetes setup on macOS:
Docker Desktop >> Kubernetes >> Traefik >> Gitea
Gitea is installed in the cluster and exposed as a ClusterIP service, with ingress through Traefik, and is accessible at http://gitea.local. Everything is butter smooth up to here.
The pain:
Now I am creating a Dockerfile and using docker build to build an image. This Dockerfile tries to clone a repository from http://gitea.local. The problem is that I get connection refused every time.
RUN mkdir -p apps sites/assets/css \
&& cd apps \
&& git clone http://gitea.local/inviadmin/testing.git
Then I simply tried RUN curl http://gitea.local from inside the Dockerfile just to debug, and got the same:
curl: (7) Failed to connect to gitea.local port 80: Connection refused
If I curl google.com from the Dockerfile, it works. Any help is greatly appreciated.
Dockerfile:
# syntax = docker/dockerfile:1.0-experimental
FROM bitnami/python:3.7-prod
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION=12.18.3
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
RUN install_packages wget \
&& wget https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh \
&& chmod +x install.sh \
&& ./install.sh \
&& . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION} \
&& nvm use v${NODE_VERSION} && npm install -g yarn
RUN install_packages \
# when using ssh
git openssh-client openssh-server iputils-ping
#git
ARG GIT_BRANCH=master
#RUN ping host.docker.internal
RUN mkdir -p apps sites/assets/css \
&& cd apps \
&& git clone http://gitea.local/inviadmin/test.git --branch $GIT_BRANCH
FROM nginx:latest
COPY --from=0 /home/test/sample/sites /var/www/html/
COPY --from=0 /var/www/error_pages /var/www/
COPY build/nginx/nginx-default.conf.template /etc/nginx/conf.d/default.conf.template
COPY build/entry/docker-entrypoint.sh /
RUN apt-get update && apt-get install -y rsync && apt-get clean \
&& echo "#!/bin/bash" > /rsync \
&& chmod +x /rsync
VOLUME [ "/assets" ]
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
I tested your Dockerfile, and here's the outcome.
Since the only part you were having an issue with was the git pull, I kept only those lines.
Notice from the build output how adding an entry to /etc/hosts took effect for the subsequent commands (sketched below).
If the issue still persists, I suggest you start looking into the Gitea container's logs.
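A minimal sketch of supplying that /etc/hosts entry at build time (the IP is a placeholder for whatever address reaches your Gitea service from the Docker host):
# map gitea.local to a reachable address for the duration of the build
docker build --add-host gitea.local:192.168.65.2 -t test-clone .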

Error :- Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "

I am asking for a massive favor. I have been stuck on the issue below for the last couple of days; if someone can help, that would be great. I installed Docker and built a container using the following code (Docker - Apache Spark).
Dockerfile:
FROM debian:stretch
MAINTAINER Getty Images "https://github.com/gettyimages"
RUN apt-get update \
&& apt-get install -y locales \
&& dpkg-reconfigure -f noninteractive locales \
&& locale-gen C.UTF-8 \
&& /usr/sbin/update-locale LANG=C.UTF-8 \
&& echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen \
&& locale-gen \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Users with other locales should set this in their derivative image
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN apt-get update \
&& apt-get install -y curl unzip \
python3 python3-setuptools \
&& ln -s /usr/bin/python3 /usr/bin/python \
&& easy_install3 pip py4j \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# http://blog.stuart.axelbrooke.com/python-3-on-spark-return-of-the-pythonhashseed
ENV PYTHONHASHSEED 0
ENV PYTHONIOENCODING UTF-8
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
# JAVA
RUN apt-get update \
&& apt-get install -y openjdk-8-jre \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# HADOOP
ENV HADOOP_VERSION 3.0.0
ENV HADOOP_HOME /usr/hadoop-$HADOOP_VERSION
ENV HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
ENV PATH $PATH:$HADOOP_HOME/bin
RUN curl -sL --retry 3 \
"http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz" \
| gunzip \
| tar -x -C /usr/ \
&& rm -rf $HADOOP_HOME/share/doc \
&& chown -R root:root $HADOOP_HOME
# SPARK
ENV SPARK_VERSION 2.4.1
ENV SPARK_PACKAGE spark-${SPARK_VERSION}-bin-without-hadoop
ENV SPARK_HOME /usr/spark-${SPARK_VERSION}
ENV SPARK_DIST_CLASSPATH="$HADOOP_HOME/etc/hadoop/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/yarn/lib/*:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/tools/lib/*"
ENV PATH $PATH:${SPARK_HOME}/bin
RUN curl -sL --retry 3 \
"https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/${SPARK_PACKAGE}.tgz" \
| gunzip \
| tar x -C /usr/ \
&& mv /usr/$SPARK_PACKAGE $SPARK_HOME \
&& chown -R root:root $SPARK_HOME
WORKDIR $SPARK_HOME
CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master"]
Command:
ubuntu#ip-123.43.11.136:~$ sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py
I got the error below:
Error :- Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/ubuntu\": permission denied": unknown.
Can someone help me figure out what the issue could be? I have gone through several links, but no luck; I am still not able to resolve it.
ERRO[0001] error waiting for the container: context cancelled
Everything passed after the image name sparkimage is treated as arguments to the Docker ENTRYPOINT.
For example, with
ENTRYPOINT ["node"]
when you start
docker run -it my_image app.js
app.js becomes the argument to node, and Docker runs it as node app.js.
So you are passing an invalid command to the image in your docker run command: since there is no ENTRYPOINT in your Dockerfile, the extra arguments replace the CMD, and the container tries to execute
/home/ubuntu bin/spark-submit ./count.py
That is why it throws permission denied for /home/ubuntu, which is a directory rather than an executable.
You can try these two combinations.
ENTRYPOINT ["bin/spark-class", "org.apache.spark.deploy.master.Master"]
with the run command.
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py
OR
CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master","/home/ubuntu bin/spark-submit ./count.py"]
with the docker run command:
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage
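As a further hedged sketch (assuming count.py sits in the directory mounted at /home/ubuntu), you can also pass spark-submit itself as the command, so that Docker executes it from the image's WORKDIR instead of trying to execute the mount point:
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage \
  bin/spark-submit /home/ubuntu/count.py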
The issue has been resolved. I corrected the mount path, ran it again, and it worked fine without any issue.

catalina.sh on Alpine Docker image doesn't start

I have this Dockerfile, which builds an Alpine image with Tomcat and Java.
The thing is, I can't run the catalina.sh script when I run the container in detached mode. I get the following error:
$ docker run -d --name login2 -p 8080:8080 alpine:login-o365
8fbd3fda6c2f171045f2338abf4887b5a2374e1f42537717649a3fb6b61c4cb2
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/opt/tomcat/bin\": permission denied": unknown.
After this error, I accessed the container manually and checked that catalina.sh, located in /opt/tomcat/bin, has its permissions set correctly, as follows:
-rwxr-x--- 1 root root 21579 Jun 9 2016 catalina.sh
What can this error be? Is it something related to Alpine being a smaller image? This is the Dockerfile that I have:
FROM davidcaste/alpine-java-unlimited-jce:jre8
ENV TOMCAT_MAJOR=8 \
TOMCAT_VERSION=8.5.3 \
TOMCAT_HOME=/opt/tomcat \
CATALINA_HOME=/opt/tomcat/bin \
CATALINA_OUT=/dev/null
RUN apk upgrade --update && \
apk add --update curl bash wget && \
curl -jksSL -o /tmp/apache-tomcat.tar.gz http://archive.apache.org/dist/tomcat/tomcat-${TOMCAT_MAJOR}/v${TOMCAT_VERSION}/bin/apache-tomcat-${TOMCAT_VERSION}.tar.gz && \
gunzip /tmp/apache-tomcat.tar.gz && \
tar -C /opt -xf /tmp/apache-tomcat.tar && \
ln -s /opt/apache-tomcat-${TOMCAT_VERSION} ${TOMCAT_HOME} && \
rm -rf ${TOMCAT_HOME}/webapps/* && \
apk del curl && \
rm -rf /tmp/* /var/cache/apk/*
COPY logging.properties ${TOMCAT_HOME}/conf/logging.properties
COPY server.xml ${TOMCAT_HOME}/conf/server.xml
VOLUME ["/logs"]
EXPOSE 8080
CMD ["/opt/tomcat/bin", "-n", "-c", "catalina.sh"]
Perhaps using this might help:
CMD /opt/tomcat/bin/catalina.sh run
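Or, as a sketch in the exec form the rest of the Dockerfile already uses (same effect, just without the shell wrapper), point CMD at the script rather than the bin directory:
CMD ["/opt/tomcat/bin/catalina.sh", "run"]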
