I am running an Elasticsearch and Kibana service in a container on Azure's Web App for Containers service. I wanted to check SSH connectivity to this container using Azure's Web SSH console feature, so I followed the Microsoft documentation for SSH into custom containers (https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#enable-ssh), which shows an example of running the container as the default root user.
My issue is that the Elasticsearch process does not run as root, so I had to make the sshd process run as the elasticsearch user. I was able to get the sshd process running, and it accepts the SSH connection from my host, but the credentials I set in the Dockerfile (elasticsearch:Docker!) throw an Access Denied error. Any idea where I am going wrong here?
Dockerfile
FROM openjdk:jre-alpine
ARG ek_version=6.5.4
RUN apk add --quiet --no-progress --no-cache nodejs wget \
&& adduser -D elasticsearch \
&& apk add openssh \
&& echo "elasticsearch:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY startup.sh /home/elasticsearch/
RUN chmod +x /home/elasticsearch/startup.sh && \
chown elasticsearch /home/elasticsearch/startup.sh
COPY sshd_config /home/elasticsearch/
USER elasticsearch
WORKDIR /home/elasticsearch
ENV ES_TMPDIR=/home/elasticsearch/elasticsearch.tmp ES_DATADIR=/home/elasticsearch/elasticsearch/data
RUN wget -q -O - https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ek_version}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ek_version} elasticsearch \
&& mkdir -p ${ES_TMPDIR} ${ES_DATADIR} \
&& wget -q -O - https://artifacts.elastic.co/downloads/kibana/kibana-oss-${ek_version}-linux-x86_64.tar.gz \
| tar -zx \
&& mv kibana-${ek_version}-linux-x86_64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm
EXPOSE 9200 5601 2222
ENTRYPOINT ["/home/elasticsearch/startup.sh"]
startup.sh script
#!/bin/sh
# Generating hostkey
ssh-keygen -f /home/elasticsearch/ssh_host_rsa_key -N '' -t rsa
# starting sshd process
echo "Starting SSHD"
/usr/sbin/sshd -f sshd_config
# Starting the ES stack
echo "Starting ES"
sh elasticsearch/bin/elasticsearch -E http.host=0.0.0.0 & kibana/bin/kibana --host 0.0.0.0
sshd_config file
Port 2222
HostKey /home/elasticsearch/ssh_host_rsa_key
PidFile /home/elasticsearch/sshd.pid
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
Error I am getting:
Please check and verify that your Docker image supports SSH. It would appear that you have done everything correctly, so one of the final troubleshooting steps left at this point is to verify that your image supports SSH to begin with.
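One way to verify this independently of Azure is to run the image locally and try the same credentials against the published SSH port. A minimal sketch, assuming the image is tagged es-kibana:local (a hypothetical tag):
docker build -t es-kibana:local .
docker run -d -p 2222:2222 --name es-ssh-test es-kibana:local
# Try the credentials the Dockerfile sets; if this also fails locally,
# the problem is in the image, not in Azure's Web SSH feature
ssh -p 2222 elasticsearch@localhost
# Check the container output for hints about the failed login
docker logs es-ssh-test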
Please help: I am building an NGINX Plus ingress controller image and deploying it in EKS using the following Dockerfile.
Dockerfile:
FROM amazonlinux:2
LABEL maintainer="armand@f5.com"
ENV NGINX_VERSION 23
ENV NJS_VERSION 0.5.2
ENV PKG_RELEASE 1.amzn2.ngx
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${PATH}"
RUN mkdir -p /etc/ssl/nginx
ADD nginx-repo.crt /etc/ssl/nginx
ADD nginx-repo.key /etc/ssl/nginx
ADD qlik.crt /etc/ssl/nginx
RUN update-ca-trust extract
RUN yum -y update \
&& yum -y install sudo
RUN set -x \
&& chmod 644 /etc/ssl/nginx/* \
&& yum install -y --setopt=tsflags=nodocs wget ca-certificates bind-utils vim-minimal shadow-utils \
&& groupadd --system --gid 101 nginx \
&& adduser -g nginx --system --no-create-home --home /nonexistent --shell /bin/false --uid 101 nginx \
&& usermod -s /sbin/nologin nginx \
&& usermod -L nginx \
&& wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-amazon2.repo \
&& yum --showduplicates list nginx-plus \
&& yum install -y --setopt=tsflags=nodocs nginx-plus-${NGINX_VERSION}-${PKG_RELEASE} \
&& rm /etc/nginx/conf.d/default.conf \
&& mkdir -p /var/cache/nginx \
&& mkdir -p /var/lib/nginx/state \
&& chown -R nginx:nginx /etc/nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& ulimit -c -m -s -t unlimited \
&& yum clean all \
&& rm -rf /var/cache/yum \
&& rm -rf /etc/yum.repos.d/* \
&& rm /etc/ssl/nginx/nginx-repo.crt /etc/ssl/nginx/nginx-repo.key
RUN echo "root:root" | chpasswd
EXPOSE 80 443 8080
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
I am starting the container using helm commands
helm upgrade \
--install my-athlon-ingress-controller nginx-stable/nginx-ingress --version 0.11.3 --debug \
--set controller.image.pullPolicy=Always \
--set controller.image.tag=6.0.1 \
--set controller.image.repository=957123096554.dkr.ecr.eu-central-1.amazonaws.com/nginx-service \
--set controller.nginxplus=true \
--set controller.enableSnippets=true \
--set controller.enablePreviewPolicies=true \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-type'='nlb' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol'='tcp' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-proxy-protocol'='*' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports'='443'
echo Setting up SSL
export tlskey=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-key |jq --raw-output '.SecretString' )
echo $tlskey
export tlscrt=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-crt |jq --raw-output '.SecretString' )
echo $tlscrt
helm upgrade --install nginx-certificate-secrets ./helm-chart-nginx-certificates --set tlscrt=$tlscrt --set tlskey=$tlskey
OK, let me give more clarity. I have an nginx pod running on Debian 10, and when I try to curl a particular endpoint in Keycloak I get an error like:
2022/06/13 12:17:46 [info] 35#35: *35461 invalid JWK set while sending to client, client: 141.113.3.32, server: gate-acc.athlon.com, request:
But when I curl the same endpoint from an application (Java) pod, I get a 200 response.
Both the nginx pod and all my application pods are in the same namespace and the same EKS cluster.
The difference I see between the nginx pod and the application pods is that the application pods use Amazon Linux as the base image, while the nginx pod is based on Debian.
So I suspect the OS is the issue. That is why I am now trying to build an NGINX Plus image on Amazon Linux, deploy it using helm, and then curl the Keycloak endpoint; that is when I get this PATH not found issue.
I assume Amazon Linux may already trust some root certificate out of the box, so it can curl my Keycloak while Debian cannot.
This is the reason I am doing this; adding the certificate in the Dockerfile is an interim solution. If it works, I can add the certificate as a secret and mount it as a file system.
Whether built on Amazon Linux or Debian, the nginx pod only has the nginx user. I am not able to log in as root, so I cannot install utilities such as tcpdump, mtr, or dig to see what is happening when I curl. Strangely, not even ps, sudo, or other basic commands work, because I don't have root, and I am not able to install anything.
Error :
Error: failed to start container "my-athlon-ingress-controller-nginx-ingress": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-nginx-plus=true": executable file not found in $PATH: unknown
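(For reference: this exec error suggests the flags that the helm chart passes as container args, such as -nginx-plus=true, were treated as the container command itself. That can happen because the Dockerfile above defines only a CMD and no ENTRYPOINT, so the chart's args replace the command entirely. A quick way to check what the image actually executes, using the image reference from the helm values above:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' 957123096554.dkr.ecr.eu-central-1.amazonaws.com/nginx-service:6.0.1
With no entrypoint, nothing named -nginx-plus=true exists to execute, hence the $PATH error.)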
My goal is to deploy this image with the root certificate installed on the Amazon Linux image and to have root access in the pod.
I am getting the above error; any help is much appreciated. I have also added an ENV PATH in my Dockerfile.
qlik.crt contains the root certificate.
Please help, thanks
You do not need to build the nginx Docker image just to load certificates. You can use a secret, mounted as a volume in the deployment/daemonset configuration, to load them.
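A minimal sketch of that approach, assuming a secret named qlik-root-ca and the mount path from the Dockerfile (both names are assumptions, and the chart's volume value keys may differ between chart versions):
# Create a secret holding the certificate
kubectl create secret generic qlik-root-ca --from-file=qlik.crt=./qlik.crt
# Mount it into the controller pod via the chart's volume values
helm upgrade --install my-athlon-ingress-controller nginx-stable/nginx-ingress \
  --set 'controller.volumes[0].name=qlik-root-ca' \
  --set 'controller.volumes[0].secret.secretName=qlik-root-ca' \
  --set 'controller.volumeMounts[0].name=qlik-root-ca' \
  --set 'controller.volumeMounts[0].mountPath=/etc/ssl/nginx'
This way the certificate can be rotated without rebuilding the image.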
I want to set up a very minimalistic Alpine Linux Docker container with the following capabilities:
It runs an SSH server
It copies over an SSH public key of my choice, which I can then use to authenticate
I looked into various options, and in the end decided to write my own small Dockerfile.
However, I ran into some problems.
Dockerfile
FROM alpine:latest
RUN apk update
RUN apk upgrade
RUN apk add openssh-server
RUN mkdir -p /var/run/sshd
RUN mkdir -p /root/.ssh
ADD authorized_keys /root/.ssh/authorized_keys
ADD entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
EXPOSE 22
CMD ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
/usr/sbin/sshd -D
authorized_keys
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP8cIHIPgV9QoAaSsYNGHktiP/QnWfKPOeyzjujUXgQMQLBw3jJ1EBe04Lk3FXTMxwrKk3Dxq0VhJ+Od6UwzPDg=
Starting the container gives an error: sshd: no hostkeys available -- exiting.
How can I fix my Dockerfile to ensure the SSH server runs correctly and the authorized_keys file is in place? What am I missing?
In order to start, the SSH daemon does need host keys.
These are not the keys that you will use to connect to your container; they are just the keys that identify this specific host.
A host key is a cryptographic key used for authenticating computers in the SSH protocol.
Source: https://www.ssh.com/ssh/host-key
So you have to generate some keys for your host; you can then safely ignore them if you do not really intend to use them.
Generating those keys can be done via
ssh-keygen -A
So in your image, just adding a
RUN ssh-keygen -A
should do.
For the record, here is my own sshd Alpine image:
FROM alpine
RUN apk add --no-cache \
openssh \
&& ssh-keygen -A \
&& mkdir /root/.ssh \
&& chmod 0700 /root/.ssh \
&& echo "root:$(openssl rand 96 | openssl enc -A -base64)" | chpasswd \
&& ln -s /etc/ssh/ssh_host_ed25519_key.pub /root/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]
Extra notes:
I am reusing the SSH keys generated by ssh-keygen -A and exposing them in a volume; this is the reason why I run the command:
ln -s /etc/ssh/ssh_host_ed25519_key.pub /root/.ssh/authorized_keys
Because this is just an Ansible node cluster lab, I SSH into this machine as the root user, which is why I need the (quite insecure):
echo "root:$(openssl rand 96 | openssl enc -A -base64)" | chpasswd
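For completeness, a usage sketch of this image; the tag, container name, and port mapping are arbitrary, and docker cp is used here instead of the volume mentioned above just to keep the example self-contained:
docker build -t sshd-alpine .
docker run -d -p 2222:22 --name sshd sshd-alpine
# The host key doubles as the authorized key, so copy out the private half and log in with it
docker cp sshd:/etc/ssh/ssh_host_ed25519_key ./id_host
chmod 600 id_host
ssh -i id_host -p 2222 root@localhost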
I'm trying to create a Jenkins Docker agent that has Go.
The following is my Dockerfile.
After I build it, docker run myimage:0.0.1 go version returns the Go version. However, if I start it like the following and SSH in, it doesn't find Go at all:
docker run --privileged --dns 9.0.128.50 --dns 9.0.130.50 -d -P --name slave myimage:0.0.1
docker ps ## grab the port number
ssh -p PORT_NUMBER jenkins@localhost
What am I missing in order to make Go available under the Jenkins user?
FROM golang:1.11.5-alpine
RUN apk add --no-cache \
bash \
curl \
wget \
git \
openssh \
tar
COPY ssh/*key /etc/ssh/
COPY skel/ /home/jenkins
COPY id_rsa /home/jenkins/.ssh/id_rsa
COPY id_rsa.pub /home/jenkins/.ssh/id_rsa.pub
RUN addgroup docker \
&& adduser -s /bin/bash -h /home/jenkins -G docker -D jenkins \
&& echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers \
&& echo "jenkins:jenkinspass" | chpasswd \
&& chmod u+s /bin/ping \
&& chown -R jenkins:docker /home/jenkins \
&& mv /etc/profile.d/color_prompt /etc/profile.d/color_prompt.sh \
&& mv /bin/sh /bin/sh.bak \
&& ln -s /bin/bash /bin/sh
# Standard SSH port
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
If you run:
docker run myimage:0.0.1 which go
you will see that the go executable is at /usr/local/go/bin/go.
If you connect as the jenkins user via SSH and run /usr/local/go/bin/go version, it works as well.
Conclusion:
The Go installation was done as the root user.
The jenkins user was added after Go was installed and does not have /usr/local/go/bin in its $PATH environment variable.
Solution:
Add /usr/local/go/bin to $PATH for the jenkins user, or
use the go executable with its full path.
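A sketch of the first option: since the image gives jenkins /bin/bash as a login shell, and SSH logins source /etc/profile (which on Alpine pulls in /etc/profile.d/*.sh), one Dockerfile line should be enough; the file name go.sh is arbitrary:
# Make the Go toolchain visible to login shells, including SSH sessions
RUN echo 'export PATH=$PATH:/usr/local/go/bin' > /etc/profile.d/go.sh
After rebuilding, an SSH session as jenkins should resolve go version without the full path.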
I deployed my first Scala project on Docker, but I have a problem: Docker says the server has started, yet surprisingly it does not listen to any requests, even though I exposed the port to the host. When I try a GET request, the connection is refused. I also tried to telnet to the port, and it seems there is no listener on port 9000, 3200, or 3000. Please find below what I have written in the Dockerfile.
FROM jelastic/sbt
# Env variables
ENV SCALA_VERSION 2.12.4
ENV SBT_VERSION 1.1.0
# Scala expects this file
RUN touch /usr/lib/jvm/java-8-openjdk-amd64/release
# Install Scala
## Piping curl directly in tar
RUN \
curl -fsL https://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz | tar xfz - -C /root/ && \
echo >> /root/.bashrc && \
echo "export PATH=~/scala-$SCALA_VERSION/bin:$PATH" >> /root/.bashrc
# Install sbt
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb && \
apt-get update && \
apt-get install sbt && \
sbt sbtVersion
WORKDIR /
ADD play /
RUN tree /
EXPOSE 9000
CMD sbt run
and my run command was
docker run -p 9000:9000 -t bee
where bee is my image name.
As you can see, the server started properly. Please find the attached screenshots below for clarity: the server output and the docker ps output.
If you look at your screenshot, it clearly states that the Docker machine is located at 192.168.99.100, so that is the address you need to use.
Open http://192.168.99.100:9000 and it should work.
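If you are unsure of the VM's address, you can ask docker-machine for it (this assumes Docker Toolbox with a machine named default, as the screenshot suggests):
docker-machine ip default          # prints e.g. 192.168.99.100
curl http://$(docker-machine ip default):9000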
I am trying to build a hadoop Dockerfile.
In the build process, I added:
&& apt install -y openssh-client \
&& apt install -y openssh-server \
&& ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa \
&& cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys \
&& chmod 0600 ~/.ssh/authorized_keys \
&& sed -i '/\#AuthorizedKeysFile/ d' /etc/ssh/sshd_config \
&& echo "AuthorizedKeysFile ~/.ssh/authorized_keys" >> /etc/ssh/sshd_config \
&& /etc/init.d/ssh restart
I assumed that when I ran this container:
docker run -it --rm hadoop/tag bash
I would be able to:
ssh localhost
But I got an error:
ssh: connect to host localhost port 22: Connection refused
If I run this manually inside the container:
/etc/init.d/ssh restart
# or this
service ssh restart
Then I can connect. I think this means the sshd restart during the build didn't take effect in the running container.
I am using FROM java in the Dockerfile.
The build process only builds an image. Processes that are run at that time (using RUN) are no longer running after the build, and are not started again when a container is launched using the image.
What you need to do is get sshd to start at container runtime. The simplest way to do that is using an entrypoint script.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["whatever", "your", "command", "is"]
entrypoint.sh:
#!/bin/sh
# Start the ssh server
/etc/init.d/ssh restart
# Execute the CMD
exec "$@"
Rebuild the image using the above, and when you use it to start a container, it should start sshd before running your CMD.
You can also change the base image you start from to something like Phusion baseimage if you prefer. It makes it easy to start some services like syslogd, sshd, that you may wish the container to have running.
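A quick way to verify the fix, reusing the run command from the question (the image tag is the one used above):
docker build -t hadoop/tag .
docker run -it --rm hadoop/tag bash
# inside the container, sshd should now already be running:
ssh localhost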