How to access localhost from Docker on a mobile device?

I want to view the apps from mobile devices. I have tried using my laptop's IP address, but I still can't access the apps from my mobile devices. I can run the apps locally on my laptop but not on other devices, even though they are on the same Wi-Fi network. I have tried adding ENV HOST 0.0.0.0, which helps accept connections from outside the container, but the apps still cannot be reached from my mobile devices. Is there any solution for this?
Dockerfile:
FROM node:16.14.0-alpine3.15 as builder
ENV HOST 0.0.0.0
WORKDIR /app
COPY . .
RUN yarn install \
      --prefer-offline \
      --frozen-lockfile \
      --non-interactive \
      --production=false
RUN yarn generate
RUN rm -rf node_modules && \
    NODE_ENV=production yarn install \
      --prefer-offline \
      --pure-lockfile \
      --non-interactive \
      --production=true
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=builder /app/.output/public /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yml:
version: '3'
services:
  backend:
    image: python:3.6
    volumes:
      - ./backend:/backend
    ports:
      - 8000:8000
    command: bash -c "cd ./backend && pip install -r requirements.txt && python manage.py migrate && python manage.p>
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: 50m
    restart: unless-stopped
    ports:
      - 3001:80
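(With this compose file, the frontend is published on host port 3001, so a phone on the same Wi-Fi needs the laptop's LAN IP plus that port. A minimal sketch of the checks; 192.168.1.10 is a hypothetical address standing in for the laptop's real one.)
# Find the laptop's LAN address (command varies by OS):
ip addr show | grep "inet "
# Confirm the port is actually published on all host interfaces:
docker-compose ps          # should show 0.0.0.0:3001->80/tcp for frontend
# Test from the laptop using the LAN IP instead of localhost:
curl http://192.168.1.10:3001/
# If that works on the laptop but not from the phone, a host firewall
# blocking inbound connections on port 3001 is the usual remaining suspect.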

Related

How to get docker-compose to populate old volumes?

I'm running a Docker container where I copy the contents of the current folder into /app in the container. I then mount a volume at the container's /app/media folder.
However, when the volume already exists from a previous docker-compose build, I don't find the new files placed in my ./media folder, which are supposed to be copied to /app/media in the container.
So I'm wondering: how does Docker populate the volume? Isn't it supposed to check the container folder for new files and put them in the volume?
I had this issue before, and the cause was the /media folder being listed in the .dockerignore file, but now it is happening again with other files in the /media folder.
Following What is the right way to add data to an existing named volume in Docker?, I tried:
docker run -v mediafiles:/data --name helper busybox true
cd ./media && docker cp . helper:/data
docker rm helper
And it is now working
Thank you
Here is my docker-compose.yml
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    ports:
      - 80:80
    depends_on:
      - backend
      - frontend
    volumes:
      - staticfiles:/app/static
      - mediafiles:/app/media
    networks:
      spa_network:
  frontend:
    build:
      context: .
      dockerfile: ./compose/production/frontend/Dockerfile
    restart: always
    stdin_open: true
    command: yarn start
    ports:
      - 3000:3000
    depends_on:
      - backend
    networks:
      spa_network:
        ipv4_address: 172.20.128.3
  backend:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles:/app/static
      - mediafiles:/app/media
      - sqlite_db:/app/db
    ports:
      - 8000:8000
    env_file:
      - ./env/prod-sample
    networks:
      spa_network:
        ipv4_address: 172.20.128.2
networks:
  spa_network:
    ipam:
      config:
        - subnet: 172.20.0.0/16
volumes:
  sqlite_db:
  staticfiles:
  mediafiles:
Here is my Dockerfile for the backend (where I don't find the /app/media files):
FROM python:3.8-slim-buster
ENV PYTHONUNBUFFERED 1
RUN apt-get update \
    # dependencies for building Python packages
    && apt-get install -y build-essential netcat \
    # psycopg2 dependencies
    && apt-get install -y libpq-dev \
    # Translations dependencies
    && apt-get install -y gettext \
    # cleaning up unused files
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
    && rm -rf /var/lib/apt/lists/*
RUN addgroup --system django \
    && adduser --system --ingroup django django
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
#COPY ./compose/production/django/entrypoint /entrypoint
#RUN sed -i 's/\r$//g' /entrypoint
#RUN chmod +x /entrypoint
#RUN chown django /entrypoint
COPY ./compose/production/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
RUN chown django /start
WORKDIR /app
# avoid 'permission denied' error
# copy project code
COPY . .
RUN chown -R django:django /app
#USER django
#ENTRYPOINT ["/entrypoint"]
If the volume is new, Docker copies any files at the mount path in the image into the new volume.
If the volume isn't new, nothing is copied, and you get the existing (old) contents of the volume.
More info here: https://docs.docker.com/storage/volumes/#populate-a-volume-using-a-container
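(The copy-on-first-use behavior described above is easy to demonstrate; a sketch, where backend-image is a placeholder for an image whose /app/media has files baked in at build time.)
# First use of a fresh named volume: Docker seeds it from the image.
docker volume create mediafiles
docker run --rm -v mediafiles:/app/media backend-image ls /app/media
# -> shows the files baked into the image
# Rebuild the image with new files in ./media, then run again:
docker run --rm -v mediafiles:/app/media backend-image ls /app/media
# -> still the OLD contents: the volume already exists, so nothing is copied.
# Hence the helper-container trick from the question to refresh it by hand:
docker run -v mediafiles:/data --name helper busybox true
docker cp ./media/. helper:/data
docker rm helper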

Docker Compose with custom Django app, nginx, postgres, certbot, and Let's Encrypt

I am building my first custom Docker setup with Docker Compose, and I feel I am very close to finishing it, but I am having an issue with what seems to be the entrypoint.
FYI, I am trying to deploy a Django app with Postgres, nginx, certbot, and Let's Encrypt.
This is what I am seeing:
certbot_1 | /data/entrypoint.sh: exec: line 14: certbot: not found
nginx_1 | /data/entrypoint.sh: exec: line 14: run: not found
EDIT: I was able to make them run with the edited code, but nginx exits with code 0, and I don't know why it won't run.
I have tried changing the path with no luck. I am not sure what I am doing wrong.
Any advice you can provide would be great!
docker-compose.yml:
version: '3.8'
services:
  web:
    build: .
    command: gunicorn FleetOptimal.wsgi:application --bind 0.0.0.0:8000
    environment:
      - TZ=America/Toronto
    volumes:
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/FleetOptimal/:/manage/
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/:/data/
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/FleetOptimal/staticfiles/:/static_volume/
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/FleetOptimal/images/:/media_volume/
    expose:
      - 8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - /home/littlejiver/docker/postgres/postgres_data:/postgres_data/
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Canada/Toronto
      - POSTGRES_USER=someusername
      - POSTGRES_PASSWORD=#somepassword
      - POSTGRES_DB=somedb
  nginx-proxy:
    tty: true
    image: nginx:latest
    container_name: nginx-proxy
    build: .
    command: nginx -g "daemon off"
    restart: always
    environment:
      - NGINX_DOCKER_GEN_CONTAINER=nginx-proxy-letsencrypt
    ports:
      - 443:443
      - 80:80
    volumes:
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/:/data/
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/FleetOptimal/staticfiles/:/static_volume/
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/FleetOptimal/images/:/media_volume/
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./.env.dev
    environment:
      - NGINX_DOCKER_GEN_CONTAINER=nginx-proxy-letsencrypt
    volumes:
      - /home/littlejiver/src/FleetOptimal/FleetOptimal/:/data/
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
  acme:
Dockerfile for nginx:
FROM nginx:latest
COPY vhost.d/default /etc/nginx/vhost.d/default
COPY custom.conf /etc/nginx/conf.d/custom.conf
Dockerfile for web:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.6-alpine as builder
# set work directory
WORKDIR /manage
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# lint
RUN apk add zlib-dev jpeg-dev gcc musl-dev
RUN pip install --upgrade pip
# RUN pip install flake8==3.9.2
COPY . .
# RUN flake8 --ignore=E501,F401 /manage
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.6-alpine
# create directory for the app user
# create the app user
RUN addgroup -S littlejiver && adduser -S littlejiver -G littlejiver
# create the appropriate directories
ENV HOME=/manage
ENV APP_HOME=/manage
WORKDIR $APP_HOME
# install dependencies
RUN apk add zlib-dev jpeg-dev gcc musl-dev
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /manage/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint.prod.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' $APP_HOME/entrypoint.sh
RUN chmod +x $APP_HOME/entrypoint.sh
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R littlejiver:littlejiver $APP_HOME
# change to the app user
USER littlejiver
WORKDIR /manage
# run entrypoint.prod.sh
ENTRYPOINT ["sh", "/data/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z "$SQL_HOST" "$SQL_PORT"; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
exec "$@"
Thanks
-littlejiver
Roughly based off this guide.

I'm trying to deploy my dockerized API to Portainer via Git, but only some images in my docker-compose.yml get their ports published. Why?

I have a Django API that is completely dockerized, and it works locally as well as in my Heroku deployment for production. However, when I try to connect the Git repo to Portainer, it pulls it successfully but doesn't publish ports for all the images. It only gives the port for the pgadmin image, not for the database, the redis image, the nginx, or the Django web service itself. These are things I need to get the whole thing working. I'm not sure what's wrong or what to do about it.
This is my docker-compose.yml file:
version: "3.9"
services:
nginx:
build: ./nginx
ports:
- 8001:80
volumes:
- static-data:/vol/static
depends_on:
- web
restart: "on-failure"
redis:
image: redis:latest
ports:
- 6379:6379
volumes:
- ./config/redis.conf:/redis.conf
command: ["redis-server", "/redis.conf"]
restart: "on-failure"
db:
image: postgres:13
volumes:
- ./data/db:/var/lib/postgresql/data
env_file:
- database.env
restart: always
web:
build: .
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8101"
container_name: vidhya_io_api
volumes:
- .:/shuddhi
ports:
- 8101:8101
depends_on:
- db
- redis
restart: "on-failure"
volumes:
database-data: # named volumes can be managed easier using docker-compose
static-data:
This is the Dockerfile:
FROM python:3.8.3
LABEL maintainer="https://github.com/ryarasi"
# ENV MICRO_SERVICE=/app
# RUN addgroup -S $APP_USER && adduser -S $APP_USER -G $APP_USER
# set work directory
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
# create root directory for our project in the container
RUN mkdir /shuddhi
# COPY ./scripts /scripts
WORKDIR /shuddhi
# Copy the current directory contents into the container at /shuddhi
ADD . /shuddhi/
# Install any needed packages specified in requirements.txt
# This is to create the collectstatic folder for whitenoise
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r /requirements.txt && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media
# ENV PATH="/scripts:$PATH"
# CMD ["run.sh"]
CMD python manage.py wait_for_db && python manage.py collectstatic --noinput && python manage.py migrate && gunicorn shuddhi.wsgi:application --bind 0.0.0.0:8101
After I set up the stack and run it, the published ports are missing for all the images other than the Redis image.
I have no idea why this is happening.
What should I do to get everything published and working?
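(Independent of Portainer's UI, the published ports can be checked from the CLI; a minimal sketch using the names from the compose file above.)
# List each service with its published ports:
docker-compose ps
# Or inspect one container; vidhya_io_api is the container_name set above:
docker port vidhya_io_api       # expect 8101/tcp -> 0.0.0.0:8101
curl http://localhost:8101/     # Django dev server directly
curl http://localhost:8001/     # the same app through nginx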

Access docker app outside container using docker-compose

I'm running docker-compose to run my application, which listens for REST API calls.
For some reason it is not accessible from outside.
I don't understand what I am doing wrong.
Here is the configuration:
version: '3.4'
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - 5672:5672
      - 15672:15672
  my-server:
    image: my-server
    build:
      context: .
      dockerfile: ./apiserver/Dockerfile
    ports:
      - 5000:5000
    restart: on-failure
    depends_on:
      - rabbitmq
and my Dockerfile is:
FROM ubuntu:16.04
RUN apt-get update -y && \
    apt-get install -y python-pip python-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]

How to run php-fpm in docker-compose.yml?

I tried to build a container using docker-compose, so I wrote the Dockerfile and docker-compose.yml as follows:
Dockerfile:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y expect
RUN apt-get -y install software-properties-common
RUN apt-add-repository ppa:ondrej/php
RUN apt-get -y install php7.1 php7.1-fpm
RUN apt-get install -y php7.1-mysql
RUN apt-get -y install nginx
RUN apt-get -y install vim
COPY default /etc/nginx/sites-available/default
COPY www.conf /etc/php/7.1/fpm/pool.d/www.conf
COPY test /var/www/html/test
CMD service php7.1-fpm start && nginx -g "daemon off;"
docker-compose.yml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3011:80"
When I run the following commands, php7.1-fpm starts successfully:
docker-compose build
docker-compose up --force-recreate -d
But I want to move the CMD from the Dockerfile to docker-compose, so I changed the file as follows:
docker-compose.yml
command: service php7.1-fpm start && nginx -g "daemon off;"
But this time php7.1-fpm is not running.
How can I fix this so that php7.1-fpm runs when started from docker-compose.yml?
You cannot rely on service php7.1-fpm start like that, because a container is just a process, not a real virtual machine: when the main process stops, everything else in the container goes down with it.
Docker suggests dividing them into different containers: php-fpm and nginx, a single image per container, one process each.
Solution:
docker/php-fpm/Dockerfile
FROM php:7.2-fpm
RUN docker-php-ext-install pdo pdo_mysql mbstring
docker-compose.yml:
version: '2.1'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8001:80
    volumes:
      - ./:/app
      # nginx configs
      - ./docker/nginx/conf/nginx.conf:/etc/nginx/nginx.conf
  php-fpm:
    build: ./docker/php-fpm
    volumes:
      - ./:/app
  php-composer:
    restart: 'no'
    image: composer
    volumes:
      - ./:/app
    command: install
  nodejs:
    restart: 'no'
    image: node:8.9
    volumes:
      - ./:/app
    command: /bin/bash -c "cd /app && npm install && npm run prod"
networks:
  default:
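(As to why the original command: line failed: docker-compose does not pass a string command through a shell, so the && was handed to service as a literal argument instead of chaining the two programs. If the single-container setup had to be kept, the command would need an explicit shell; a sketch.)
# docker-compose.yml -- wrap the chain in sh -c so "&&" is interpreted,
# matching what the Dockerfile's shell-form CMD did implicitly:
#   command: sh -c 'service php7.1-fpm start && nginx -g "daemon off;"'
# The multi-container split shown above remains the more idiomatic fix.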
