Fig volumes don't mount properly - docker

I tried to change the project layout in the Fig+Django tutorial to something like this:
.
├── docker
│   └── django
│       ├── Dockerfile
│       └── requirements.txt
├── fig.yml
└── project
    ├── figexample
    │   ├── __init__.py
    │   ├── __init__.pyc
    │   ├── settings.py
    │   ├── settings.pyc
    │   ├── urls.py
    │   ├── urls.pyc
    │   ├── wsgi.py
    │   └── wsgi.pyc
    └── manage.py
And my fig.yml looks like:
db:
  image: postgres
web:
  build: ./docker/django
  volumes:
    - "project/:/code"
  ports:
    - "8000:8000"
  links:
    - db
  command: "ls -a ."
But for some reason, instead of the project directory, it mounts the current directory.
The output of fig logs in this case is:
#$ fig logs
Attaching to figdjango_web_1, figdjango_db_1
db_1 | LOG: database system was shut down at 2014-11-05 15:15:41 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
web_1 | .
web_1 | ..
web_1 | .fig.yml.swp
web_1 | docker
web_1 | fig.yml
web_1 | project
figdjango_web_1 exited with code 0
And my Dockerfile:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
What am I doing wrong? How can I mount /project to /code?
sample on github

The sample you linked to on github is a bit different from what you describe in your question.
In the github sample, replace
command: python /project/manage.py runserver 0.0.0.0:8000
with
command: python /code/manage.py runserver 0.0.0.0:8000
and it works.
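For reference, a minimal fig.yml along these lines might look like the following (a sketch assuming the layout from the question, with the volume path written relative to fig.yml):
db:
  image: postgres
web:
  build: ./docker/django
  volumes:
    - ./project:/code
  ports:
    - "8000:8000"
  links:
    - db
  command: python /code/manage.py runserver 0.0.0.0:8000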

Related

Use directory in docker-compose.yml's parent folder as volume

I have the following directory structure:
.
├── README.md
├── alice
├── docker
│   ├── compose-prod.yml
│   ├── compose-stage.yml
│   ├── compose.yml
│   └── dockerfiles
├── gauntlet
├── nexus
│   ├── Procfile
│   ├── README.md
│   ├── VERSION.txt
│   ├── alembic
│   ├── alembic.ini
│   ├── app
│   ├── poetry.lock
│   ├── pyproject.toml
│   └── scripts
nexus.Dockerfile
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ../../nexus .
RUN chmod +x scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["./run.sh"]
The relevant service in compose.yml looks like this:
services:
  nexus:
    platform: linux/arm64
    build:
      context: ../
      dockerfile: ./docker/dockerfiles/nexus.Dockerfile
    container_name: nexus
    restart: on-failure
    ports:
      - "8000:8000"
    volumes:
      - ../nexus:/usr/src/pdn/nexus:ro
    environment:
      - DATABASE_HOSTNAME=${DATABASE_HOSTNAME?}
    env_file:
      - .env
When I run compose up, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./scripts/run.sh": permission denied: unknown
The service starts ok without the volume definition. I think it might be because of the location of nexus in relation to the Dockerfile or compose file, but the context is set to the parent.
I tried defining the volume as follows:
volumes:
  - ./nexus:/usr/src/pdn/nexus:ro
But I get a similar error; in this case run.sh is not found, and a directory named nexus gets created in the docker directory:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./run.sh": stat ./run.sh: no such file or directory: unknown
Not sure what I'm missing.
I have two comments; I'm not sure whether they solve your issue.
First, although you are allowed to reference parent directories in your compose.yml, that is not the case in your Dockerfile: you can't COPY from outside the build context specified in your compose.yml (.., which resolves to your app root). So you should change these lines:
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
COPY ../../nexus .
to
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
COPY ./nexus .
Second, the volume replaces whatever is in /usr/src/pdn/nexus with the contents of ../nexus, which makes everything you COPY into /usr/src/pdn/nexus at build time moot. That may not be an issue if the contents are the same, but any permissions you set on those files at build time may be gone. So if the contents are otherwise the same, the only issue you may have is with your start script: put it in a separate directory outside /usr/src/pdn/nexus so that it won't be overridden, and don't forget to reference it correctly in CMD.
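Putting both suggestions together, a rough sketch of the adjusted nexus.Dockerfile (assuming the start script can live under /usr/src/pdn/scripts, which is outside the mounted path):
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
# paths are relative to the build context (the repository root set in compose.yml)
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ./nexus .
# keep the start script outside the directory that the read-only volume shadows
COPY ./nexus/scripts /usr/src/pdn/scripts
RUN chmod +x /usr/src/pdn/scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["/usr/src/pdn/scripts/run.sh"]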

COPY failed while using docker

I'm building an Express app, but when I use the command sudo docker build - < Dockerfile I get the error COPY failed: file not found in build context or excluded by .dockerignore: stat package.json: file does not exist.
This is what my project structure looks like:
.
├── build
│   ├── server.js
│   └── server.js.map
├── Dockerfile
├── esbuild.js
├── package.json
├── package-lock.json
├── Readme.md
└── src
    ├── index.ts
    ├── navigate.ts
    ├── pages.ts
    ├── responses
    │   ├── Errors.ts
    │   └── index.ts
    └── server.ts
And this is my Dockerfile content:
FROM node:14.0.0
WORKDIR /usr/src/app
RUN ls -all
COPY [ "package.json", \
"./"]
COPY src/ ./src
RUN npm install
RUN node esbuild.js
RUN npx nodemon build/server.js
EXPOSE 3001
CMD ["npm", "run", "serve", ]
When I run the command, I'm located in the root of the project.
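One likely explanation, offered as a sketch rather than a confirmed answer: docker build - < Dockerfile sends only the Dockerfile itself to the daemon, so the build context contains no package.json or src/ for COPY to find. Building from the project root with the directory as the context avoids that (the myexpressapp tag below is just a placeholder):
sudo docker build -t myexpressapp .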

Docker-compose - module import issue

I am working on a Flask app and my docker-compose.yml is not able to find my config folder. I have __init__.py files at every levels and I tried to add/remove the config path from my docker-compose.yml file but still the same issue. This is my project structure:
.
├── ...
├── web_messaging
│   ├── app.py
│   ├── __init__.py
│   ├── ...
│   └── config
│       ├── settings.py
│       ├── gunicorn.py
│       └── __init__.py
│
├── __init__.py
├── .env
├── requirements.txt
├── docker-compose.yml
└── Dockerfile
When I am running docker-compose up --build, I am getting this error:
website_1 | - 'config' not found.
website_1 |
website_1 | Original exception:
website_1 |
website_1 | ModuleNotFoundError: No module named 'config'
website_1 | [2021-04-04 15:23:26 +0000] [8] [INFO] Worker exiting (pid: 8)
website_1 | [2021-04-04 15:23:26 +0000] [1] [INFO] Shutting down: Master
website_1 | [2021-04-04 15:23:26 +0000] [1] [INFO] Reason: Worker failed to boot.
docker-compose.yml
version: '2'
services:
  website:
    build: .
    command: >
      gunicorn -c "python:web_messaging.config.gunicorn" --reload "web_messaging.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/web_messaging'
    ports:
      - '8000:8000'
app.py
def create_app(settings_override=None):
    """
    Create a Flask application using the app factory pattern.

    :param settings_override: Override settings
    :return: Flask app
    """
    app = Flask(__name__, instance_relative_config=True)

    app.config.from_object('config.settings')
    app.config.from_pyfile('settings.py', silent=True)

    if settings_override:
        app.config.update(settings_override)

    error_templates(app)
    app.register_blueprint(user)
    app.register_blueprint(texting)
    extensions(app)
    configure_context_processors(app)

    return app
...
Dockerfile
FROM python:3.7.5-slim-buster
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /web_messaging
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -c "python:web_messaging.config.gunicorn" "web_messaging.app:create_app()"
It seems I solved it: the config folder must be outside of the web_messaging folder.
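An alternative that may be worth noting, as a sketch rather than a confirmed fix: keep config inside web_messaging and reference it by its full dotted path in the app factory, so the module resolves regardless of the working directory:
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('web_messaging.config.settings')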

Problems running docker hosted app inherited from another developer

I'm a newbie to Docker and am running into problems running a Docker-hosted app with multiple containers that I inherited from another developer. This Docker setup works fine on a cloud server; however, when I try to install and run it locally, I get errors.
When I run the "docker ps" command on the cloud server, here's what I'm getting:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c1fbb8968c89 app_eserver "nginx -g 'daemon of…" 3 weeks ago Up 34 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp server
85be26cfd761 auction "/usr/local/bin/uwsg…" 2 years ago Up 34 hours 8092/tcp auction-backend
6d2c1ad52ef0 redis:4-alpine "docker-entrypoint.s…" 2 years ago Up 3 weeks 6379/tcp redis
94417f94d374 postgres:10.0-alpine "docker-entrypoint.s…" 2 years ago Up 3 weeks 5432/tcp db
As I understand it, the above means there are 4 containers running the app. Within the directory structure, there is a docker-compose.yml file and 2 Dockerfiles (one in the nginx folder and the other in the folder with the code). There are no Dockerfiles for redis and postgres.
When I try to build using the docker-compose.yml file then I get the following error:
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]y
Pulling backend (auction:)...
ERROR: pull access denied for auction, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When I try to build the app Dockerfile, it throws a build error.
When I try to build the nginx Dockerfile, it builds, but the container quits immediately after starting.
I have read a whole bunch on the topic, but I'm unable to figure out how to run this locally on my machine. Any pointers would be really appreciated.
Here's my docker-compose.yml file:
eserver:
  container_name: server
  build: nginx
  restart: always
  ports:
    - "80:80"
    - "443:443"
  links:
    - backend
  volumes_from:
    - backend
backend:
  container_name: auction-backend
  image: auction
  hostname: auction-backend
  restart: always
  env_file: .env
  external_links:
    - db
    - redis
  volumes:
    - appmedia:/app/auction/media
Here's the nginx Dockerfile:
FROM nginx:1.15-alpine
RUN rm /etc/nginx/conf.d/*
COPY auction.conf /etc/nginx/conf.d/
Here's the app Dockerfile:
FROM python:2.7-alpine
RUN apk update && apk add \
postgresql-dev \
python3-dev \
gcc \
jpeg-dev \
zlib-dev \
musl-dev \
linux-headers && \
mkdir app
WORKDIR /app/
# Ensure that Python outputs everything that's printed inside
# the application rather than buffering it.
ENV PYTHONUNBUFFERED 1
ADD . /app
RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi
EXPOSE 8092
VOLUME /app/auction/assets
ENTRYPOINT ["/usr/local/bin/uwsgi", "--ini", "/app/uwsgi.ini"]
Here's my directory structure:
├── app
│   ├── docker-compose.yml
│   └── nginx
│       ├── Dockerfile
│       └── auction.conf
├── auction-master
│   ├── Dockerfile
│   ├── LICENSE.txt
│   ├── README.rst
│   ├── auction
│   │   ├── accounts
│   │   ├── auction
│   │   ├── common
│   │   ├── django_messages
│   │   ├── employer
│   │   ├── log_activity
│   │   ├── manage.py
│   │   ├── notifications
│   │   ├── provider
│   │   ├── static
│   │   ├── templates
│   │   ├── admin
│   │   └── unidecode
│   ├── requirements
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── test.txt
│   ├── requirements.txt
│   └── uwsgi.ini
According to your error message, it's possible that your compose file is trying to pull a private image.
The reason everything works normally on the cloud server but not locally may be that Docker on the cloud server is logged in to an account that has access to that image.
If you docker login locally with the same account that is logged in on the cloud server, everything should be up and running again.
I was able to run it locally simply by downloading the containers themselves.
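For reference, a sketch of that approach (image names taken from the docker ps output above; adjust to match your setup): save the images on the cloud server, copy the tar files over, and load them locally:
docker save -o auction.tar auction          # on the cloud server
docker save -o app_eserver.tar app_eserver  # on the cloud server
docker load -i auction.tar                  # on the local machine
docker load -i app_eserver.tar              # on the local machine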

MLflow 1.2.0 define MLproject file

Trying to run mlflow run with an MLproject file whose code lives in a different location than the MLproject file itself.
I have the following directory structure:
/root/mflow_test
.
├── conda
│   ├── conda.yaml
│   └── MLproject
├── docker
│   ├── Dockerfile
│   └── MLproject
├── README.md
├── requirements.txt
└── trainer
    ├── __init__.py
    ├── task.py
    └── utils.py
When I run, from /root/:
mlflow run mlflow_test/docker
I get:
/root/miniconda3/bin/python: Error while finding module specification for 'trainer.task' (ImportError: No module named 'trainer')
This is because my MLproject file can't find the Python code.
When I move the MLproject file to mflow_test, it works fine.
This is my MLproject entry point:
name: mlflow_sample
docker_env:
  image: mlflow-docker-sample
entry_points:
  main:
    parameters:
      job_dir:
        type: string
        default: '/tmp/'
    command: |
      python -m trainer.task --job-dir {job_dir}
How can I run mlflow run, pass it the MLproject file, and have it look for the code in a different folder?
I tried:
"cd .. && python -m trainer.task --job-dir {job_dir}"
and I get:
/entrypoint.sh: line 5: exec: cd: not found
Dockerfile
# docker build -t mlflow-gcp-example -f Dockerfile .
FROM gcr.io/deeplearning-platform-release/tf-cpu
RUN git clone https://github.com/GoogleCloudPlatform/ml-on-gcp.git
WORKDIR ml-on-gcp/tutorials/tensorflow/mlflow_gcp
RUN pip install -r requirements.txt
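One possible direction, offered as a sketch rather than a confirmed fix: make the trainer package importable inside the image by putting the directory that contains it on PYTHONPATH in the Dockerfile, so python -m trainer.task resolves no matter which working directory MLflow uses. The absolute clone path below is an assumption about where the git clone lands:
# docker build -t mlflow-docker-sample -f Dockerfile .
FROM gcr.io/deeplearning-platform-release/tf-cpu
RUN git clone https://github.com/GoogleCloudPlatform/ml-on-gcp.git /ml-on-gcp
WORKDIR /ml-on-gcp/tutorials/tensorflow/mlflow_gcp
RUN pip install -r requirements.txt
# trainer/ lives in this directory; expose it to Python regardless of the working directory
ENV PYTHONPATH=/ml-on-gcp/tutorials/tensorflow/mlflow_gcp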
