Nginx Dockerfile Amazon Linux - docker

Please help.
I am building an NGINX Plus ingress controller image and deploying it to EKS using this Dockerfile.
Dockerfile:
FROM amazonlinux:2
LABEL maintainer="armand#f5.com"
ENV NGINX_VERSION 23
ENV NJS_VERSION 0.5.2
ENV PKG_RELEASE 1.amzn2.ngx
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${PATH}"
RUN mkdir -p /etc/ssl/nginx
ADD nginx-repo.crt /etc/ssl/nginx
ADD nginx-repo.key /etc/ssl/nginx
ADD qlik.crt /etc/ssl/nginx
RUN update-ca-trust extract
RUN yum -y update \
&& yum -y install sudo
RUN set -x \
&& chmod 644 /etc/ssl/nginx/* \
&& yum install -y --setopt=tsflags=nodocs wget ca-certificates bind-utils vim-minimal shadow-utils \
&& groupadd --system --gid 101 nginx \
&& adduser -g nginx --system --no-create-home --home /nonexistent --shell /bin/false --uid 101 nginx \
&& usermod -s /sbin/nologin nginx \
&& usermod -L nginx \
&& wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-amazon2.repo \
&& yum --showduplicates list nginx-plus \
&& yum install -y --setopt=tsflags=nodocs nginx-plus-${NGINX_VERSION}-${PKG_RELEASE} \
&& rm /etc/nginx/conf.d/default.conf \
&& mkdir -p /var/cache/nginx \
&& mkdir -p /var/lib/nginx/state \
&& chown -R nginx:nginx /etc/nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& ulimit -c -m -s -t unlimited \
&& yum clean all \
&& rm -rf /var/cache/yum \
&& rm -rf /etc/yum.repos.d/* \
&& rm /etc/ssl/nginx/nginx-repo.crt /etc/ssl/nginx/nginx-repo.key
RUN echo "root:root" | chpasswd
EXPOSE 80 443 8080
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
I am starting the container using these helm commands:
helm upgrade \
--install my-athlon-ingress-controller nginx-stable/nginx-ingress --version 0.11.3 --debug \
--set controller.image.pullPolicy=Always \
--set controller.image.tag=6.0.1 \
--set controller.image.repository=957123096554.dkr.ecr.eu-central-1.amazonaws.com/nginx-service \
--set controller.nginxplus=true \
--set controller.enableSnippets=true \
--set controller.enablePreviewPolicies=true \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-type'='nlb' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol'='tcp' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-proxy-protocol'='*' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports'='443'
echo Setting up SSL
export tlskey=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-key |jq --raw-output '.SecretString' )
echo $tlskey
export tlscrt=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-crt |jq --raw-output '.SecretString' )
echo $tlscrt
helm upgrade --install nginx-certificate-secrets ./helm-chart-nginx-certificates --set tlscrt=$tlscrt --set tlskey=$tlskey
OK, let me give more clarity. I have an nginx pod running on Debian 10, and when I try to curl a particular endpoint in Keycloak I get an error like
2022/06/13 12:17:46 [info] 35#35: *35461 invalid JWK set while sending to client, client: 141.113.3.32, server: gate-acc.athlon.com, request:
but when I curl the same endpoint from an application (Java) pod I get a 200 response.
Both the nginx pod and all my application pods are in the same namespace and the same EKS cluster.
The difference I see is that the application pods use Amazon Linux as their base image, while the nginx pod is based on Debian.
So I suspect the OS is the issue. I am now building an NGINX Plus image on Amazon Linux, deploying it with helm, and then trying to curl the Keycloak endpoint again; that is when I get this PATH not found issue.
I assume Amazon Linux may already trust the relevant root certificate out of the box, so it can curl my Keycloak while Debian cannot.
That is why I am doing this. Adding the certificate in the Dockerfile is an interim solution; if it works, I can instead store it as a secret and mount it as a file.
Both nginx pods (the Amazon Linux and the Debian builds) have only the nginx user. I cannot log in as root, so I cannot install utilities like tcpdump, mtr, or dig to see what happens when I curl. Strangely, not even ps, sudo, or other basic commands work since I do not have root, and I am not able to install anything.
Error:
Error: failed to start container "my-athlon-ingress-controller-nginx-ingress": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-nginx-plus=true": executable file not found in $PATH: unknown
My goal is to deploy this image with the root certificate installed in the Amazon Linux image and to have root access in the pod.
I am getting the above error; any help is much appreciated. I also added an ENV PATH line in my Dockerfile.
qlik.crt contains the root certificate.
Please help, thanks.

You do not need to build a custom nginx Docker image just to load certificates. You can store them in a Kubernetes secret and mount that secret as a volume in the deployment/daemonset configuration.
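For example (the secret name, file name, and namespace below are only placeholders), you could create the secret with kubectl:
kubectl create secret generic qlik-root-ca --from-file=qlik.crt=./qlik.crt -n nginx-ingress
and then reference it in the controller's pod spec, roughly:
volumes:
- name: qlik-root-ca
  secret:
    secretName: qlik-root-ca
volumeMounts:
- name: qlik-root-ca
  mountPath: /etc/ssl/nginx/qlik.crt
  subPath: qlik.crt
  readOnly: true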

Related

SSH into Azure web-app container running with non-root user

I am running an Elasticsearch and Kibana service within a container using Azure Web App for Containers. I wanted to check SSH connectivity to this container using Azure's Web SSH console feature. I followed the Microsoft documentation for SSH into custom containers, https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#enable-ssh, which shows the example running the container as the default root user.
My issue is that the Elasticsearch process does not run as root, so I had to make the sshd process run as the elasticsearch user. I was able to get sshd running and it accepts the SSH connection from my host; however, the credentials I set in the Dockerfile (elasticsearch:Docker!) throw an Access Denied error. Any idea where I am going wrong?
Dockerfile
FROM openjdk:jre-alpine
ARG ek_version=6.5.4
RUN apk add --quiet --no-progress --no-cache nodejs wget \
&& adduser -D elasticsearch \
&& apk add openssh \
&& echo "elasticsearch:Docker!" | chpasswd
# Copy the startup script and sshd_config into the elasticsearch user's home directory
COPY startup.sh /home/elasticsearch/
RUN chmod +x /home/elasticsearch/startup.sh && \
chown elasticsearch /home/elasticsearch/startup.sh
COPY sshd_config /home/elasticsearch/
USER elasticsearch
WORKDIR /home/elasticsearch
ENV ES_TMPDIR=/home/elasticsearch/elasticsearch.tmp ES_DATADIR=/home/elasticsearch/elasticsearch/data
RUN wget -q -O - https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ek_version}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ek_version} elasticsearch \
&& mkdir -p ${ES_TMPDIR} ${ES_DATADIR} \
&& wget -q -O - https://artifacts.elastic.co/downloads/kibana/kibana-oss-${ek_version}-linux-x86_64.tar.gz \
| tar -zx \
&& mv kibana-${ek_version}-linux-x86_64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm
EXPOSE 9200 5601 2222
ENTRYPOINT ["/home/elasticsearch/startup.sh"]
startup.sh script
#!/bin/sh
# Generating hostkey
ssh-keygen -f /home/elasticsearch/ssh_host_rsa_key -N '' -t rsa
# starting sshd process
echo "Starting SSHD"
/usr/sbin/sshd -f sshd_config
# Staring the ES stack
echo "Starting ES"
sh elasticsearch/bin/elasticsearch -E http.host=0.0.0.0 & kibana/bin/kibana --host 0.0.0.0
sshd_config file
Port 2222
HostKey /home/elasticsearch/ssh_host_rsa_key
PidFile /home/elasticsearch/sshd.pid
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
Error I am getting (screenshot not reproduced here).
Please check and verify that your Docker image supports SSH. It appears that you have done everything correctly, so one of the remaining troubleshooting steps at this point is to verify that your image supports SSH to begin with.
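One way to verify that outside of App Service is to run the image locally and try the same login against port 2222; a minimal sketch, assuming the image is tagged es-kibana (the tag and container name are placeholders):
docker run -d -p 2222:2222 --name es-ssh-test es-kibana
docker logs es-ssh-test                 # should show "Starting SSHD"
ssh -p 2222 elasticsearch@localhost     # then enter the Docker! password
If the login works locally but not through the Azure Web SSH console, the image's SSH setup is fine and the problem is on the App Service side.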

connection refused when using dockerfile to pull git repository

Local Kubernetes setup: macOS
Docker Desktop >> Kubernetes >> Traefik >> Gitea
Gitea is installed in the cluster and exposed as a ClusterIP service behind a Traefik ingress, reachable at http://gitea.local. Everything is butter smooth up to here.
The pain:
Now I am writing a Dockerfile and using docker build to build an image. This Dockerfile tries to clone a repository from http://gitea.local. The problem is that I get connection refused every time.
RUN mkdir -p apps sites/assets/css \
&& cd apps \
&& git clone http://gitea.local/inviadmin/testing.git
Then I simply tried RUN curl http://gitea.local in the Dockerfile just to debug and got the same:
curl: (7) Failed to connect to gitea.local port 80: Connection refused
If I curl google.com from the Dockerfile it works. Any help is strongly appreciated.
Dockerfile:
# syntax = docker/dockerfile:1.0-experimental
FROM bitnami/python:3.7-prod
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION=12.18.3
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
RUN install_packages wget \
&& wget https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh \
&& chmod +x install.sh \
&& ./install.sh \
&& . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION} \
&& nvm use v${NODE_VERSION} && npm install -g yarn
RUN install_packages \
# when using ssh
git openssh-client openssh-server iputils-ping
#git
ARG GIT_BRANCH=master
#RUN ping host.docker.internal
RUN mkdir -p apps sites/assets/css \
&& cd apps \
&& git clone http://gitea.local/inviadmin/test.git --branch $GIT_BRANCH
FROM nginx:latest
COPY --from=0 /home/test/sample/sites /var/www/html/
COPY --from=0 /var/www/error_pages /var/www/
COPY build/nginx/nginx-default.conf.template /etc/nginx/conf.d/default.conf.template
COPY build/entry/docker-entrypoint.sh /
RUN apt-get update && apt-get install -y rsync && apt-get clean \
&& echo "#!/bin/bash" > /rsync \
&& chmod +x /rsync
VOLUME [ "/assets" ]
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
I tested your Dockerfile and here is the outcome. Since the only part you were having an issue with was the git clone, I used just those lines.
Notice in the build output how adding an entry to /etc/hosts took effect for the subsequent commands (the build output screenshots are not reproduced here).
If the issue still persists, I suggest you start looking into the Gitea container's logs.
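For reference, the /etc/hosts entry can be injected at build time with --add-host; a sketch, where 192.168.65.2 is only a placeholder for an address at which gitea.local is actually reachable from the build containers:
docker build --add-host gitea.local:192.168.65.2 -t myapp .
The RUN curl / RUN git clone steps then resolve gitea.local through that entry instead of failing with connection refused.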

Docker permission denied when building an image for a container

I tried to build an image from a Dockerfile.
For this purpose I used this Docker Hub image: https://hub.docker.com/r/openshift/origin-haproxy-router
My Dockerfile:
FROM openshift/origin-haproxy-router
RUN INSTALL_PKGS="haproxy18 rsyslog" && \
yum install -y $INSTALL_PKGS && \
yum clean all && \
rpm -V $INSTALL_PKGS && \
mkdir -p /var/lib/haproxy/router/{certs,cacerts,whitelists} && \
mkdir -p /var/lib/haproxy/{conf/.tmp,run,bin,log} && \
touch /var/lib/haproxy/conf/{{os_http_be,os_edge_reencrypt_be,os_tcp_be,os_sni_passthrough,os_route_http_redirect,cert_config,os_wildcard_domain}.map,haproxy.config} && \
setcap 'cap_net_bind_service=ep' /usr/sbin/haproxy && \
chown -R :0 /var/lib/haproxy && \
chmod -R g+w /var/lib/haproxy
COPY images/router/haproxy/* /var/lib/haproxy/
LABEL io.k8s.display-name="OpenShift HAProxy Router" \
io.k8s.description="This component offers ingress to an OpenShift cluster via Ingress and Route rules." \
io.openshift.tags="openshift,router,haproxy"
USER root
EXPOSE 80 443
WORKDIR /var/lib/haproxy/conf
ENV TEMPLATE_FILE=/var/lib/haproxy/conf/haproxy-config.template \
RELOAD_SCRIPT=/var/lib/haproxy/reload-haproxy
ENTRYPOINT ["/usr/bin/openshift-router"]
Then I ran this command inside the folder with the Dockerfile:
sudo docker build -t os-router .
I got the following result:
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/Conflictname'
You need to be root to perform this command.
How can I solve this error?
Put USER root in your Dockerfile before the first RUN instruction; the error shows that yum is running as a non-root user, and the USER root line you already have only comes after the install.
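A minimal sketch of the change, based on the Dockerfile above (only the placement of USER root differs):
FROM openshift/origin-haproxy-router
USER root
RUN INSTALL_PKGS="haproxy18 rsyslog" && \
    yum install -y $INSTALL_PKGS && \
    yum clean all
# the remaining RUN, COPY, LABEL, EXPOSE, WORKDIR, ENV and ENTRYPOINT lines stay as in the original
With USER root placed before the RUN, yum executes as root during the build.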

Docker is supposed to be listening but it doesn't

I deployed my first Scala project on Docker but I have a problem: Docker says the server has started, yet it does not listen to any request even though I exposed the port to the host. When I try a GET request, the connection is refused. I also tried to telnet to the port and there seems to be no listener on port 9000, 3200, or 3000. Please find below what I have written in the Dockerfile.
FROM jelastic/sbt
# Env variables
ENV SCALA_VERSION 2.12.4
ENV SBT_VERSION 1.1.0
# Scala expects this file
RUN touch /usr/lib/jvm/java-8-openjdk-amd64/release
# Install Scala
## Piping curl directly in tar
RUN \
curl -fsL https://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz | tar xfz - -C /root/ && \
echo >> /root/.bashrc && \
echo "export PATH=~/scala-$SCALA_VERSION/bin:$PATH" >> /root/.bashrc
# Install sbt
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb && \
apt-get update && \
apt-get install sbt && \
sbt sbtVersion
WORKDIR /
ADD play /
RUN tree /
EXPOSE 9000
CMD sbt run
and my run command was
docker run -p 9000:9000 -t bee
where bee is my image name.
As you can see, the server starts properly.
Please find below the attached picture for more clarity, and here is the docker ps output (screenshots not reproduced here).
If you look at your screenshot, it clearly states that the Docker machine is located at 192.168.99.100, so that is the address you need to use.
Open http://192.168.99.100:9000 and it should work.
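If you are unsure of that address, docker-machine can print it; a quick check (the machine name default is an assumption):
docker-machine ip default          # prints e.g. 192.168.99.100
curl http://192.168.99.100:9000    # should now reach the application on port 9000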

Docker Container port issue: Not able to access tomcat url using host ip

I am new to Docker. I have set up a Docker container on an Amazon Linux box.
I have a Dockerfile which installs Tomcat, Java, and a WAR.
I can see everything installed in the container, in exactly the folders I specified in the Dockerfile, when I navigate through it.
When I run the container it says the Tomcat server has started, and I have also tailed the logs so I can see the service is running.
But when I open the host IP on port 8080, the URL can't be reached.
These are the commands I use to build and run the image; they work fine and I can see the container status as running.
docker build -t friendly1 .
docker run -p 8080:8080 friendly1
What am I missing here? Request some help on this.
FROM centos:latest
RUN yum -y update && \
yum -y install wget && \
yum -y install tar && \
yum -y install zip unzip
ENV JAVA_HOME /opt/java/jdk1.7.0_67/
ENV CATALINA_HOME /opt/tomcat/apache-tomcat-7.0.70
ENV SAVIYNT_HOME /opt/tomcat/apache-tomcat-7.0.70/webapps
ENV PATH $PATH:$JAVA_HOME/jre/jdk1.7.0_67/bin:$CATALINA_HOME/bin:$CATALINA_HOME/scripts:$CATALINA_HOME/apache-tomcat-7.0.70/bin
ENV JAVA_VERSION 7u67
ENV JAVA_BUILD 7u67
RUN mkdir /opt/java/
RUN wget https://<S3location>/jdk-7u67-linux-x64.gz && \
tar -xvf jdk-7u67-linux-x64.gz && \
#rm jdk*.gz && \
mv jdk* /opt/java/
# Install Tomcat
ENV TOMCAT_MAJOR 7
ENV TOMCAT_VERSION 7.0.70
RUN mkdir /opt/tomcat/
RUN wget https://<s3location>/apache-tomcat-7.0.70.tar.gz && \
tar -xvf apache-tomcat-${TOMCAT_VERSION}.tar.gz && \
#rm apache-tomcat*.tar.gz && \
mv apache-tomcat* /opt/tomcat/
RUN chmod +x ${CATALINA_HOME}/bin/*sh
WORKDIR /opt/tomcat/apache-tomcat-7.0.70/
CMD "startup.sh" && tail -f /opt/tomcat/apache-tomcat-7.0.70/logs/*
EXPOSE 8080
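One way to narrow this down is to confirm that Tomcat is actually listening inside the container before suspecting the port mapping; a sketch, where <container-id> stands for the ID shown by docker ps (the image installs wget, not curl):
docker ps                                                        # note the running container's ID
docker exec -it <container-id> wget -qO- http://localhost:8080
If that returns the Tomcat page, the service is up and the problem lies between the host and the container, for example the port mapping, a host firewall, or the instance's security group for port 8080.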
