Docker port forwarding cannot see the output on browser - docker

I am a newbie to Docker. I'm using Ubuntu 14.04 as my OS and I've installed Docker Community Edition by following the instructions from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#set-up-the-repository
I have created a Dockerfile for my project and I run it using a docker-compose file.
My Dockerfile is as follows.
# ImageName
FROM node:8.8.1
# Create app required directories
ENV appDir /usr/src/app
RUN mkdir -p /usr/src/app /usr/src/app/datas /usr/log/supervisor
# Change working directory
WORKDIR ${appDir}
# Install dependencies
RUN apt-get update && \
    apt-get -y install vim \
    supervisor \
    python3 \
    python3-pip \
    python3-setuptools \
    groff \
    less \
    && pip3 install --upgrade pip \
    && apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
# Install app dependencies
COPY graphql/package.json /usr/src/app
RUN npm install
RUN npm install -g webpack
# Copy app source code
COPY graphql/ /usr/src/app
COPY datas/ /usr/src/app/datas
# Set Environment Variables
RUN echo export DATA_DIR=/usr/src/app/datas/ >> ~/.data_variables && \
echo "source ~/.data_variables" >> ~/.bash_login && \
echo "source ~/.data_variables" >> ~/.bashrc
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Expose API port to the outside
EXPOSE 5000
# Launch application
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
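One note on the environment-variable step above: variables exported from ~/.bashrc or ~/.bash_login only apply to interactive shells, not to processes launched by supervisord. A sketch of the usual Dockerfile alternative (the variable name DATA_DIR is taken from the listing above):

```dockerfile
# ENV makes the variable visible to every process in the container,
# including programs started by supervisord, unlike exports that are
# only sourced by interactive shells
ENV DATA_DIR=/usr/src/app/datas/
```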
My docker-compose file
version: '3'
services:
  web:
    build: .
    image: graphql_img
    container_name: graphql_img_master
    ports:
      - "5000:5000"
My supervisord.conf file
[supervisord]
nodaemon=true
[program:babelWatch]
command=npm run babelWatch
[program:monitor]
command=npm run monitor
As you can see, I've exposed port 5000, but when I try to check the output in the browser at localhost:5000/graphql it shows an error:
This site can’t be reached
I even tried to check the IP address of the docker container using the "docker inspect" command, and I used that container IP address with the port, but I'm still getting the error. Can somebody please help me out with this? Any help would be much appreciated.
Additionally, it would also be really helpful to know how to make the "monitor" program run in the foreground using supervisor.
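For what it's worth, programs managed by supervisord with nodaemon=true already run in the foreground of supervisord itself; what's usually wanted in a container is routing the child's output to the container's stdout so it shows up in docker logs. A sketch of that, building on the supervisord.conf above (the log-redirection settings are standard supervisor options, not something from the original config):

```ini
[supervisord]
nodaemon=true

[program:monitor]
command=npm run monitor
; forward this program's output to the container's stdout/stderr
; so `docker logs` shows it
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
```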

Related

Module /opt/rejson.so failed to load

I've been creating new instances for Redis and clustering them. I would like my instances to use RedisJSON, so I'm using redislabs/rejson. The only problem is that when composing up I get this issue for all Redis nodes.
Here is the Dockerfile for all Redis nodes:
FROM redislabs/rejson:latest AS redis
ARG REDIS_PORT
WORKDIR /redis-workdir
# Installing OS level dependencies
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y gettext-base
RUN mkdir -p "/opt"
# Downloading redis default config
RUN wget https://raw.githubusercontent.com/wayofthepie/docker-rejson/master/redis.conf
RUN mv redis.conf redis.default.conf
COPY . .
ENV REDIS_PORT $REDIS_PORT
RUN envsubst < redis.conf > updated_redis.conf
RUN mv updated_redis.conf redis.conf
CMD redis-server ./redis.conf
It fails while running the last line of this config: https://raw.githubusercontent.com/wayofthepie/docker-rejson/master/redis.conf
Thanks for helping out.

Getting "Additional property ssh is not allowed" error when specifying ssh-agent in docker-compose

I'm trying to build a Python docker image which pip installs from a private repository using ssh. The details of which are in a requirements.txt file.
I've spent a long time reading guides from StackOverflow as well as the official Docker documentation on the subject ...
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds
https://docs.docker.com/compose/compose-file/build/#ssh
... and have come up with a Dockerfile which builds and runs fine when using:
$ docker build --ssh default -t build_tester .
However, when I try to do the same in a docker-compose.yml file, I get the following error:
$ docker-compose up
services.build-tester.build Additional property ssh is not allowed
This is the same even when enabling buildkit:
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose up
services.build-tester.build Additional property ssh is not allowed
Project structure
- docker-compose.yml
- build_files
    - Dockerfile
    - requirements.txt
    - app
        - app.py
Dockerfile
# syntax=docker/dockerfile:1.2
FROM python:bullseye as builder
RUN mkdir -p /build/
WORKDIR /build/
RUN apt-get update; \
    apt-get install -y git; \
    rm -rf /var/lib/apt/lists/*
RUN mkdir -p -m 0600 ~/.ssh; \
    ssh-keyscan -H github.com >> ~/.ssh/known_hosts
RUN python3 -m venv env; \
    env/bin/pip install --upgrade pip
COPY requirements.txt .
RUN --mount=type=ssh \
    env/bin/pip install -r requirements.txt; \
    rm requirements.txt
FROM python:slim as runner
RUN mkdir -p /app/
WORKDIR /app/
COPY --from=builder /build/ .
COPY app/ .
CMD ["env/bin/python", "app.py"]
docker-compose.yml
services:
  build-tester:
    container_name: build-tester
    image: build-tester
    build:
      context: build_files
      dockerfile: Dockerfile
      ssh:
        - default
If I remove ...
ssh:
  - default
... the docker-compose up command builds the image OK, but obviously the app doesn't run, as app.py doesn't have the required packages installed from pip.
I'd really like to be able to get this working in this way if possible so any advice would be much appreciated.
OK - so this ended up being a very simple fix... I just needed to ensure docker-compose was updated to version 2.6 on my Mac.
For some reason brew wasn't updating my Docker cask properly, so I was still running a package from early January 2022. It seems --ssh compatibility was added sometime between then and now.

Docker runs on Windows and only on one of two Linux systems

I have a docker image that I have built that runs on my windows laptop as expected. When I copy and load it on to one of my two Linux systems I get this error when I run docker logs:
Error: 'docker/semantic_search_django/gunicorn.conf' doesn't exist
When I inspect the running container on Windows I can see that "missing" file! Furthermore, if I copy and load the same docker image to my second Linux system, it runs as expected.
This issue just happened today. I've been having success on all 3 systems for the past couple of months until today. Any suggestions would be greatly appreciated. Both Linux systems are running Ubuntu 18.04.5 LTS.
I've tried renaming the images, I've stopped and started the docker daemon, and I've even restarted both Linux boxes.
Here are the commands I have used:
docker pull my.artifactory.com/ciee_ssrdjango
docker-compose up -d
My docker-compose.yml
version: "3.8"
services:
  web:
    image: m.artifactory.com/ciee_ssrdjango
    env_file:
      - proxy.env
      - django.env
    container_name: ciee_ssrdjango
    volumes:
      - query-results-volume:/code
    expose:
      - "${SSRDJANGO_PORT}"
    extra_hosts:
      dbhost: ${POSTGRES_DOCKER_IP}
    depends_on:
      - db
    networks:
      - ssr_network
networks:
  ssr_network:
    external: true
volumes:
  postgresql-volume:
    external: true
  query-results-volume:
    external: true
My Dockerfile:
FROM ubuntu:18.04
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
COPY ./requirements.txt /requirements.txt
#prevents being asked to set TZ
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update -y && \
    apt -y upgrade && \
    apt install -y python3-pip && \
    apt install -y build-essential libssl-dev libffi-dev libpq-dev python3-dev && \
    apt install -y software-properties-common python3.8
RUN python3 -m pip install --upgrade pip setuptools wheel
ENV TZ=US/Eastern
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y && apt install gcc libxml2-dev libxslt-dev postgresql postgresql-contrib postgresql-plpython-10 --no-install-recommends unixodbc-dev unixodbc libpq-dev -y
RUN mkdir /code # && mkdir /code/ciee
RUN pip install nltk
RUN export PATH=~/.local/bin:$PATH
RUN pip install -r /requirements.txt
COPY . /code/
WORKDIR /code
RUN useradd -m user && chmod 777 /home/user && mkdir /code/query_results && chmod 777 /code/query_results
USER user
CMD ["gunicorn", "semantic_search_django.wsgi:application", "--config", "docker/semantic_search_django/gunicorn.conf", "--keep-alive", "600"]
Here's the thing, I've been using these files and commands successfully for many weeks.
I can make one assumption: you are mounting query-results-volume into the /code directory in the container, and your conf file is located inside it. The volume persists between containers – that's the nature of volumes. So, somehow, the file in question (or even the folder) has been removed from the volume on the problem machine, and now the container cannot find it.
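If that is what happened, one way to avoid the image's files in /code being masked by the volume is to mount the volume only over the directory that actually needs to persist. A sketch, assuming (from the Dockerfile above) that only /code/query_results holds persistent data:

```yaml
services:
  web:
    image: m.artifactory.com/ciee_ssrdjango
    volumes:
      # mount only the results directory; everything else in /code,
      # including docker/semantic_search_django/gunicorn.conf,
      # then comes from the image itself
      - query-results-volume:/code/query_results
```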

Azure App Service failed to start with custom container (trying to configure SSH connection)

I'm following this guide from Microsoft to connect to my App Service (running on a custom container) using SSH.
The base image I'm using is tiangolo/uwsgi-nginx
And here's my docker file
FROM node
WORKDIR /nodebuild
ADD frontend /nodebuild
ADD .env /nodebuild
RUN export $(grep -v '^#' .env | xargs) && npm install && npm audit fix && npm run build
FROM tiangolo/uwsgi-nginx:latest
ENV UWSGI_INI uwsgi.ini
WORKDIR /app
COPY requirements.txt /app
RUN python3 -m pip install -r requirements.txt
ADD . /app
COPY --from=0 /nodebuild/build /app/frontend/build
RUN export $(grep -v '^#' .env | xargs) && python3 manage.py makemigrations --noinput && python3 manage.py migrate --noinput && python3 manage.py collectstatic --noinput
RUN rm .env
# THE BELOW IS FOR SETTING UP SSH
# ----------------------------------
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \
    && apt-get install -y --no-install-recommends dialog \
    && apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000 2222
ENTRYPOINT ["init.sh"]
Notice the last line of the Dockerfile. It uses ENTRYPOINT to set the startup command.
Content of the init.sh file is as below (just to start the SSH service).
#!/bin/bash
set -e
echo "Starting SSH ..."
service ssh start
Now the strange thing is that if I remove the last line (ENTRYPOINT ["init.sh"]), everything works fine. But if it's there, the app fails to start and the app logs say something like:
Container abc_xy_0_57397aae didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.
Your entrypoint is equivalent to the init process (PID 1) of a traditional Unix system. If that process terminates, your computer shuts down or reboots. Your bash script starts sshd and then terminates. You need to find out what the base image's entrypoint was and call that to preserve the previous behaviour.
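A sketch of what the answer above describes: start sshd, then hand PID 1 over to the base image's original startup command. The path /start.sh here is an assumption about tiangolo/uwsgi-nginx; confirm the real Entrypoint/Cmd with `docker inspect tiangolo/uwsgi-nginx:latest` before relying on it.

```bash
#!/bin/bash
set -e

echo "Starting SSH ..."
service ssh start

# exec replaces this script as PID 1 with the base image's original
# startup script, so the web server keeps running in the foreground
# (/start.sh is an assumption -- check the base image's Cmd/Entrypoint)
exec /start.sh
```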

Gcloud meanjs build failing via docker install

I'm trying to deploy the following container on Google Cloud App Engine using gcloud app deploy; it's the meanjs.org vanilla image. It uses a Dockerfile. I'm new to Docker and I'm trying to learn it on the fly, so if anyone can help that'd be great, thanks.
It looks as if the install of Node via the Dockerfile fails. I've checked Node's documentation on GitHub, and nothing has changed syntactically from what is in the existing Dockerfile. I will attempt to recreate this on my local workstation this morning and will update this query shortly.
The errors are as follows (screenshots): first docker error, second error, build fail error.
The docker file..
# Build:
# docker build -t meanjs/mean .
#
# Run:
# docker run -it meanjs/mean
#
# Compose:
# docker-compose up -d
FROM ubuntu:latest
MAINTAINER MEAN.JS
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
# Set development environment as default
ENV NODE_ENV development
# Install Utilities
RUN apt-get update -q \
    && apt-get install -yqq \
    curl \
    git \
    ssh \
    gcc \
    make \
    build-essential \
    libkrb5-dev \
    sudo \
    apt-utils \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -yq nodejs \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install MEAN.JS Prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
RUN mkdir -p /opt/mean.js/public/lib
WORKDIR /opt/mean.js
# Copies the local package.json file to the container
# and utilities docker container cache to not needing to rebuild
# and install node_modules/ everytime we build the docker, but only
# when the local package.json file changes.
# Install npm packages
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
# Install bower packages
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start
Okay, so after much wrestling unsuccessfully trying to install Docker on Windows, I went back to the Dockerfile to try to identify the core issue. Fortunately, I found a solution, as follows:
NodeJS is failing to install on Ubuntu.
In the dockerfile at the root of the app
Ubuntu version is configured as:
FROM ubuntu:latest
simply change it to:
FROM ubuntu:14.04
I'm not sure if this is the best version to use for the build, but it seems to be running successfully. Please feel free to amend or recommend an alternative solution. I'm new to Docker, so please be kind.