Azure App Service failed to start with custom container (trying to configure SSH connection) - docker

I'm following this guide from Microsoft to connect to my App Service (running on a custom container) using SSH.
The base image I'm using is tiangolo/uwsgi-nginx, and here's my Dockerfile:
FROM node
WORKDIR /nodebuild
ADD frontend /nodebuild
ADD .env /nodebuild
RUN export $(grep -v '^#' .env | xargs) && npm install && npm audit fix && npm run build
FROM tiangolo/uwsgi-nginx:latest
ENV UWSGI_INI uwsgi.ini
WORKDIR /app
COPY requirements.txt /app
RUN python3 -m pip install -r requirements.txt
ADD . /app
COPY --from=0 /nodebuild/build /app/frontend/build
RUN export $(grep -v '^#' .env | xargs) && python3 manage.py makemigrations -noinput && python3 manage.py migrate --noinput && python3 manage.py collectstatic --noinput
RUN rm .env
# THE BELOW IS FOR SETTING UP SSH
# ----------------------------------
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \
&& apt-get install -y --no-install-recommends dialog \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000 2222
ENTRYPOINT ["init.sh"]
Notice the last line of the Dockerfile. It uses ENTRYPOINT to set the startup command.
The content of the init.sh file is below (it just starts the SSH service).
#!/bin/bash
set -e
echo "Starting SSH ..."
service ssh start
Now the strange thing is that if I remove the last line (ENTRYPOINT ["init.sh"]) then everything works fine. But if it's there, the app fails to start and the app logs say something like
Container abc_xy_0_57397aae didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.

Your ENTRYPOINT is the container's equivalent of the init process (PID 1) of a traditional Unix system: when that process terminates, the machine shuts down, and likewise when PID 1 in a container exits, the container stops. Your bash script starts sshd and then terminates, so the container exits before it ever answers the HTTP ping on port 80. You need to find out what the base image's original entrypoint/command was and call that at the end of init.sh to preserve the previous behaviour.
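A minimal sketch of what such an init.sh could look like, assuming the base image starts through /entrypoint.sh and /start.sh (those paths are an assumption, not confirmed here; verify the real ENTRYPOINT and CMD of your tag before relying on them):
#!/bin/bash
set -e

echo "Starting SSH ..."
service ssh start

# Hand control back to the base image's original startup so that nginx/uwsgi
# keep running as the long-lived foreground process instead of this script
# exiting. The two paths below are assumptions -- check them with:
#   docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' tiangolo/uwsgi-nginx:latest
exec /entrypoint.sh /start.sh
Because of exec, the base image's startup replaces the shell as PID 1, so the container keeps running and App Service's ping on port 80 can be answered.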

Related

connection refused when using dockerfile to pull git repository

Local setup for kubernetes: Mac OS
Docker for desktop >> kubernetes >> traefik >> Gitea
Gitea is installed in the cluster and exposed as a ClusterIP service with an ingress through Traefik, which is accessible at http://gitea.local. Everything is butter smooth till here.
The pain:
Now I am creating a Dockerfile and using docker build to build an image. The Dockerfile tries to clone a repository from http://gitea.local. The problem is that I get connection refused every time.
RUN mkdir -p apps sites/assets/css \
&& cd apps \
&& git clone http://gitea.local/inviadmin/testing.git
Then I simply tried RUN curl http://gitea.local from inside the Dockerfile just to debug and got the same:
curl: (7) Failed to connect to gitea.local port 80: Connection refused
If I curl google.com from the Dockerfile it works. Any help is strongly appreciated.
Dockerfile:
# syntax = docker/dockerfile:1.0-experimental
FROM bitnami/python:3.7-prod
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION=12.18.3
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
RUN install_packages wget \
&& wget https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh \
&& chmod +x install.sh \
&& ./install.sh \
&& . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION} \
&& nvm use v${NODE_VERSION} && npm install -g yarn
RUN install_packages \
# when using ssh
git openssh-client openssh-server iputils-ping
#git
ARG GIT_BRANCH=master
#RUN ping host.docker.internal
RUN mkdir -p apps sites/assets/css \
&& cd apps \
&& git clone http://gitea.local/inviadmin/test.git --branch $GIT_BRANCH
FROM nginx:latest
COPY --from=0 /home/test/sample/sites /var/www/html/
COPY --from=0 /var/www/error_pages /var/www/
COPY build/nginx/nginx-default.conf.template /etc/nginx/conf.d/default.conf.template
COPY build/entry/docker-entrypoint.sh /
RUN apt-get update && apt-get install -y rsync && apt-get clean \
&& echo "#!/bin/bash" > /rsync \
&& chmod +x /rsync
VOLUME [ "/assets" ]
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
I tested your Dockerfile, and since the only part you were having issues with was the git clone, I used just those lines.
Notice in the build output how adding an entry to /etc/hosts takes effect for the subsequent commands.
If the issue still persists, I suggest you start looking into the Gitea container's logs.
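For reference, a hedged sketch of that /etc/hosts approach: docker build accepts --add-host, which writes an extra hosts entry into every intermediate build container. The IP below is a placeholder; use the address at which the Traefik ingress serving gitea.local is reachable from the Docker host.
# Placeholder IP -- replace 192.168.1.50 with your ingress / node IP.
docker build --add-host gitea.local:192.168.1.50 -t myapp .
With that entry in place, the RUN curl http://gitea.local and RUN git clone http://gitea.local/... steps reach the ingress instead of failing with connection refused.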

Run Python scripts on command line running Docker images

I built a Docker image using a Dockerfile with Python and some libraries inside (none of my project code is inside). In my local working directory there are some scripts to be run in the container. So here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns,
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me#me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
RUN pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
Try making the file executable before running it, as John mentioned, by doing it in the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
RUN chmod +x /usr/local/share/main.py  # <-**--- just add this also (COPY puts the file at /usr/local/share/main.py, not .../src/main.py)
CMD ["/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
You can run a Python script in Docker by adding this to your Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["python", "/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]

Docker MultiStage Build Exposing web app, and running bash commands

I have this Dockerfile. I am attempting to create an image that has Ubuntu and a website contained within. The Ubuntu commands open Firefox, which should navigate to the URL that serves the website. The CMD portion works as expected, but http://localhost:8080 does not work as intended. Here is the Dockerfile, and I will walk you through each section.
Dockerfile
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "SecondApp.dll"]
#ubuntu 16.04
FROM ubuntu:16.04
#set the working directory. All commands will be ran here
WORKDIR /root/
#Update packages, and install sudo
RUN apt-get update && apt-get install -y sudo
#not sure what all this does, but it gets the gui and firefox working
RUN export uid=1000 gid=1000 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
#install firefox, wget and add chrome to sources
RUN apt-get update && apt-get install -y firefox
RUN apt-get install wget -y
RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
RUN apt-get update
#install severel dependent packages for firefox and chrome
RUN apt-get install google-chrome-stable dbus-x11 packagekit-gtk3-module libcanberra-gtk-module libcanberra-gtk3-module -y
#xdotool is used for mimicking key presses
RUN apt-get install xdotool -y
#not sure what this does either
RUN chown -R developer:developer /home/developer
RUN mkdir /var/run/dbus/
#? user
USER developer
ENV HOME /home/developer
#not sure what dbus-daemon does, but this does open up firefox, navigate to localhost:8080, and full screens the app. To keep the container running bin/bash is ran after, it remains in the background successfully.
CMD sudo dbus-daemon --system --fork && /usr/bin/firefox -url http://localhost:8080 & xdotool search --sync --onlyvisible --class "Firefox" windowactivate key F11 & /bin/bash
SecondApp
SecondApp is just the default app that Visual Studio 2017 creates for an ASP.NET Core web application; no MVC, nothing added. When debugging, navigating to http://localhost:56545 brings up the home page.
So, from the above build of the Dockerfile, using this docker run command
docker run -ti --rm -p 8080:80 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix repo/repo
or this one
docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix repo/repo
will run the container, open Firefox at the URL localhost:8080, and fullscreen Firefox. But localhost:8080 cannot be found.
Is there something wrong with the ports/networking that I am missing?
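For context, a multi-stage build only ships the final stage: everything from the two microsoft/aspnetcore stages is discarded once FROM ubuntu:16.04 begins (apart from anything explicitly COPY --from'd, which here is nothing), so no web server is listening inside the finished image, and localhost inside that container refers to the container itself rather than the host. One possible arrangement, sketched under the assumption that the ASP.NET Core stages and the Ubuntu/Firefox stage are built as two separate images with the hypothetical tags secondapp-web and secondapp-browser, is to run them as two containers on a shared network and point Firefox at the web container by name:
# Hypothetical image tags; build each Dockerfile separately first.
docker network create appnet
docker run -d --name webapp --network appnet secondapp-web
docker run -ti --rm --network appnet \
  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
  secondapp-browser
Inside the browser container, Firefox would then open http://webapp:80 instead of http://localhost:8080.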

Docker port forwarding cannot see the output on browser

I am a newbie to Docker. I'm using Ubuntu 14.04 as my OS and I've installed Docker Community Edition by following the instructions from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#set-up-the-repository
I have created a Dockerfile for my project and I run it using a docker-compose file.
My Dockerfile is as follows.
# ImageName
FROM node:8.8.1
# Create app required directories
ENV appDir /usr/src/app
RUN mkdir -p /usr/src/app /usr/src/app/datas /usr/log/supervisor
# Change working directory
WORKDIR ${appDir}
# Install dependencies
RUN apt-get update && \
apt-get -y install vim\
supervisor \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
# Install app dependencies
COPY graphql/package.json /usr/src/app
RUN npm install
RUN npm install -g webpack
# Copy app source code
COPY graphql/ /usr/src/app
COPY datas/ /usr/src/app/datas
# Set Environment Variables
RUN echo export DATA_DIR=/usr/src/app/datas/ >> ~/.data_variables && \
echo "source ~/.data_variables" >> ~/.bash_login && \
echo "source ~/.data_variables" >> ~/.bashrc
COPY supervisord.conf /etc/supercvisor/conf.d/supervisord.conf
# Expose API port to the outside
EXPOSE 5000
# Launch application
CMD ["/usr/bin/supervisord", "-c", "/etc/supercvisor/conf.d/supervisord.conf"]
My docker-compose file
version: '3'
services:
  web:
    build: .
    image: graphql_img
    container_name: graphql_img_master
    ports:
      - "5000:5000"
My supervisord.conf file
[supervisord]
nodaemon=true
[program:babelWatch]
command=npm run babelWatch
[program:monitor]
command=npm run monitor
As you can see, I've exposed port 5000, but when I try to check the output in the browser at localhost:5000/graphql it shows an error:
This site can’t be reached
I even tried to check the IP address of the Docker container using the "docker inspect" command and used that container IP address with the port, but I still get the error. Can somebody please help me out with this? Any help would be much appreciated.
Additionally, it would also be really helpful to know how to make the "npm run monitor" program run in the foreground under supervisor.
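One thing worth checking (an assumption, since the GraphQL server code is not shown): publishing 5000:5000 only works if the process inside the container listens on 0.0.0.0:5000; a server bound to 127.0.0.1:5000 is unreachable from outside the container even with the port published. A quick sketch of how to verify from the host:
# graphql_img_master is the container_name from the compose file.
# netstat may need to be swapped for `ss -lnt`, depending on what the image provides.
docker exec graphql_img_master netstat -lnt
# 0.0.0.0:5000 (or :::5000) means the publish can work; 127.0.0.1:5000 means the
# server must be reconfigured to listen on 0.0.0.0.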

Unable to start container from jenkins

In Jenkins I installed the Docker build step plugin.
In Jenkins, I created a job and in it executed a Docker command with "build image" selected. The image is created using the Dockerfile. The Dockerfile is:
FROM ubuntu:latest
#OS Update
RUN apt-get update
RUN apt-get -y install git git-core unzip python-pip make wget build-essential python-dev libpcre3 libpcre3-dev libssl-dev vim nano net-tools iputils-ping supervisor curl supervisor
WORKDIR /home/wipro
#Mongo Setup
RUN curl -O http://downloads.mongodb.org/linux/mongodb-linux-x86_64-3.0.2.tgz && tar -xzvf mongodb-linux-x86_64-3.0.2.tgz && cd mongodb-linux-x86_64-3.0.2/bin && cp * /usr/bin/
#RUN mongod --dbpath /home/azureuser/CI_service/data/ --logpath /home/azureuser/CI_service/log.txt --logappend --noprealloc --smallfiles --port 27017 --fork
#Node Setup
#RUN curl -O https://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz && tar -xzvf node-v0.12.7.tar.gz && cd node-v0.12.7
#RUN cd /opt/node-v0.12.7 && ./configure && make && make install
#RUN cp /usr/local/bin/node /usr/bin/ && cp /usr/local/bin/npm /usr/bin/
RUN wget https://nodejs.org/dist/v0.12.7/node-v0.12.7-linux-x64.tar.gz
RUN cd /usr/local && sudo tar --strip-components 1 -xzf /home/wipro/node-v0.12.7-linux-x64.tar.gz
RUN npm install forever -g
#CI SERVICE
ADD prod /home//
ADD servicestart.sh /home/
RUN chmod +x /home/servicestart.sh
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["sh", "/home/servicestart.sh"]
EXPOSE 80
EXPOSE 27017
Then I tried to create the container, and the container was created.
When I tried to start the container, the container was not running.
When I checked with the command docker ps -a, it shows the status as Created only.
It's not in a Running or Exited state.
The output of docker ps -a is:
docker ps -a
CONTAINER ID   IMAGE          COMMAND            CREATED        STATUS    PORTS   NAMES
8ac762c4dc84   d85c2d90be53   "sh /home/servi"   15 hours ago   Created           hungry_liskov
7d8864940515   d85c2d90be53   "sh /home/servi"   16 hours ago   Created           ciservice
How do I start the container using Jenkins?
It depends on your container's main command (ENTRYPOINT + CMD).
A Created state (for a non-data-volume container) means the main command failed to execute.
Try a docker logs <container_id> to see if there is any error message recorded.
CMD ["sh", "/home/servicestart.sh"] should be:
CMD ["/home/servicestart.sh"]
(The default ENTRYPOINT for Ubuntu should be ["sh", "-c"], so no need to repeat an "sh")
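If the container keeps showing Created, a general debugging step (not specific to the Jenkins plugin) is to ask the daemon what it recorded and then try starting the container by hand, using the container ID from the docker ps -a output above:
# Show the recorded state and any error from the last start attempt.
docker inspect --format '{{.State.Status}}: {{.State.Error}}' 8ac762c4dc84
# Start it by hand with its output attached to the terminal.
docker start -a 8ac762c4dc84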
