I have the following docker-compose.yml:
version: '3'
services:
  website:
    build:
      context: website
    env_file: website/config/production.env
The website service corresponds to this website/Dockerfile:
FROM python:2.7
RUN apt-get update
COPY src /usr/website
WORKDIR /usr/website
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN python manage.py migrate
I also have website/config/production.env, with several settings, such as the following:
DJANGO_SECRET_KEY=
DJANGO_DATABASE_NAME=
DJANGO_DATABASE_USER=
DJANGO_DATABASE_PASSWORD=
DJANGO_DATABASE_HOST=
DJANGO_DATABASE_PORT=3306
If I run docker-compose config, the variables show properly under the environment key, but when I run docker-compose build I get
django.core.exceptions.ImproperlyConfigured: DJANGO_SECRET_KEY is not in your environment
That's because I have this on my settings.py file:
def require_environ(key):
    if key in os.environ:
        return os.environ.get(key)
    raise ImproperlyConfigured('%s is not in your environment' % (key,))
So, the code is working as it should, but the variable is not defined. Why not?
As @David Maze said in the comments above, you can't run migrations from a Dockerfile. The env vars are not available during the build stage, so the migrations must be run afterwards, when the container starts.
To solve this, I created a script start.sh, as follows:
#!/bin/bash
set -e
# Django: migrate
#
# Django will see that the tables for the initial migrations already exist
# and mark them as applied without running them. (Django won’t check that the
# table schema match your models, just that the right table names exist).
python manage.py migrate --fake-initial
# Django: collectstatic
#
# This will upload the files to s3 because of django-storages-redux
# and the setting:
# STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
python manage.py collectstatic --noinput
#####
# Start uWSGI
#####
/usr/local/bin/uwsgi --emperor /etc/uwsgi/django-uwsgi.ini
The script is run as the last line of the Dockerfile:
CMD [ "/usr/start.sh" ]
I am learning Docker and have a little confusion that I would greatly appreciate some advice on.
I am creating a new Rails application, following the guidelines of Docker's tutorial Quickstart: Compose & Rails. Their steps are as follows:
Create Dockerfile & Entrypoint
Create Gemfile with source and rails gem listed.
Create an empty Gemfile.lock
Create a docker-compose.yml and name services.
Generate a new rails app using docker-compose run.
The rails new command.
docker-compose run --no-deps web rails new . --force -d mysql
This is my simple Dockerfile:
FROM ruby:3.1.1-alpine3.15
RUN apk update -q
RUN apk add bash yarn git build-base nodejs mariadb-dev tzdata
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN gem install bundler
RUN gem update bundler
RUN bundle install
COPY entrypoint.sh /usr/bin
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
This is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/app
ports:
- "3000:3000"
As you can see I only have one service listed in my docker-compose.yml, because I do not want any other services or containers.
Everything works fine when I use the command docker-compose run web rails new ...; it generates a new Rails app and persists the files to my local source folder, so I can open and edit the files to develop the application. Perfect.
But when I try to generate a new Rails app without docker-compose.yml, using only the Dockerfile, the generated files are not persisted in my local source folder, only in the container. I can only assume this is because I am forced to build the image before running it in a container to generate the new Rails app.
When working without docker-compose.yml, I have to build the image first:
docker build -t image_name .
Then I can run the container and generate the Rails app:
docker run image_name rails new . --force -d mysql
Obviously, this doesn't persist the new files in my local source folder.
How can I generate a new Rails app with just my Dockerfile without docker compose and still have the newly generated files persist in my local source folder?
I must be missing something, but I've done a lot of research and can't find an answer.
In docker-compose.yml you have a 'volumes' key as seen below.
volumes:
  - .:/app
But when using the CLI directly, you aren't passing these volumes.
You can pass them as in the command below:
docker run -d --name=rails --volume /your/workdir:/app image_name
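Applied to the Rails generator specifically, the one-off run could look like this sketch (it assumes you run it from your project folder, so $PWD is where the generated files should land):
docker run --rm --volume "$PWD":/app image_name rails new . --force -d mysql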
If you want to learn more about volumes, you can find out here.
I've created a simple Django application, and I want to set up a cron job. I'm using the django-cron package.
I tried two approaches. The first was without docker-compose: I used this approach, but then I realised it wasn't working because the Alpine shell is BusyBox, which doesn't have the necessary commands.
Then, for the second way, I commented out a few commands in the Dockerfile and followed the approach shown in this repository.
I've tried literally everything over 3 days, but every approach has some problem that cannot be fixed.
Keep the following things in mind:
The Alpine image DOES NOT have the apt-get, service, or cron commands.
I don't want to use an Ubuntu base OS image, as it is very big. (BUT IF YOU PROVIDE A PERFECT WORKING SOLUTION, I'M WILLING TO DO ANYTHING)
Dockerfile file
# syntax=docker/dockerfile:1
FROM python:3.10.2-alpine
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Creating DB Tables
RUN python manage.py makemigrations
RUN python manage.py migrate
# Configuring CRONJOB
COPY bashscript /bin/bashscript
# COPY cronjob /var/spool/cron/crontabs/root
# RUN chmod +x /bin/bashscript
# RUN crond -l 2 -b # THIS ISN'T WORKING FOR IDK WHAT REASON
RUN python manage.py collectstatic --noinput
EXPOSE 8000
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
docker-compose.yml file
version: "3.9"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
mycron:
build: .
volumes:
- .:/code
entrypoint: sh /usr/src/app/crontab.sh
crontab.sh file
#!/usr/bin/env bash
# Ensure the log file exists
touch /var/log/crontab.log
# Ensure permission on the command
chmod a+x /bin/bashscript
# Added a cronjob in a new crontab
echo "*/1 * * * * python manage.py runcrons >> /var/log/crontab.log 2>&1" > /etc/crontab
# Registering the new crontab
crontab /etc/crontab
# Starting the cron
/usr/sbin/service cron start # CAN'T USE THIS BECAUSE service is not a command
# Displaying logs
# Useful when executing docker-compose logs mycron
tail -f /var/log/crontab.log
bashscript file
#!/bin/sh
python manage.py runcrons # THIS IS THE COMMAND I WANT TO EXECUTE EVERY nth MINUTES
cronjob file
# do daily/weekly/monthly maintenance
# min hour day month weekday command
*/1 * * * * /bin/bashscript
You want to run a script in a container, but the cron job doesn't need to be configured in the container itself.
You can create a script in the container to do whatever you want. Then, on the host server, schedule a cron job that executes a docker exec command to run the script in the container. Solved.
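For example, the host's crontab could look like this sketch (myproject_web_1 is a placeholder for whatever docker ps shows for your web container, and the log path is arbitrary):
*/1 * * * * docker exec myproject_web_1 python manage.py runcrons >> /var/log/runcrons.log 2>&1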
I am writing this request today because I would like to create my first Docker container. I watched a lot of tutorials, and there I came across a problem that I cannot solve; I must have missed a piece of information.
My program is quite basic: I would like to create a volume so as not to lose the information retrieved each time the container is launched.
Here is my docker-compose
version: '3.3'
services:
  homework-logger:
    build: .
    ports:
      - '54321:1235'
    volumes:
      - ./app:/app
    image: 'cinabre/homework-logger:latest'
    networks:
      - homeworks
networks:
  homeworks:
    name: homeworks-logger
and here is my Dockerfile
FROM debian:9
WORKDIR /app
RUN apt-get update -yq && apt-get install wget curl gnupg git apt-utils -yq && apt-get clean -y
RUN apt-get install python3 python3-pip -y
RUN git clone http://192.168.5.137:3300/Cinabre/Homework-Logger /app
VOLUME /app
RUN ls /app
RUN python3 -m pip install bottle beaker bottle-cork requests
CMD ["python3", "main.py"]
I did an "LS" in the container to see if the / app folder was empty: it is not
Any ideas?
thanks in advance !
Volumes are there to hold your application data, not its code. You don't usually need the Dockerfile VOLUME directive and you should generally avoid it unless you understand exactly what it does.
In terms of workflow, it's commonplace to include the Dockerfile and similar Docker-related files in the source repository yourself. Don't run git clone in the Dockerfile. (Credential management is hard; building a non-default branch can be tricky; layer caching means Docker won't re-pull the branch if it's changed.)
For a straightforward application, you should be able to use a near-boilerplate Dockerfile:
# unless you have a strong need to hand-install Python, use the standard image
FROM python:3.9
WORKDIR /app
# Install packages first. Unless requirements.txt changes, Docker
# layer caching won't repeat this step. Do not list out individual
# packages in the Dockerfile; list them in Python-standard setup.py
# or Pipfile.
COPY requirements.txt .
# ...in the "system" Python space, not a virtual environment.
RUN pip3 install -r requirements.txt
# Copy the rest of the application in.
COPY . .
# Set the default command to run the container, and other metadata.
EXPOSE 1235
CMD ["python3", "main.py"]
In your application code you need to know where to store the data. You might put this in an environment variable:
import os

DATA_DIR = os.environ.get('DATA_DIR', '.')

with open(f"{DATA_DIR}/output.txt", "w") as f:
    ...
Then in your docker-compose.yml file, you can specify an alternate data directory and mount that into your container. Do not mount a volume over the /app directory containing your application's source code.
version: '3.8'
services:
  homework-logger:
    build: .
    image: 'cinabre/homework-logger:latest' # names the built image
    ports:
      - '54321:1235'
    environment:
      - DATA_DIR=/data # (consider putting this in the Dockerfile)
    volumes:
      - homework-data:/data # (could bind-mount `./data:/data` instead)
    # Use the automatic `networks: [default]`
volumes:
  homework-data:
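With that in place, a quick way to check that the data actually persists is a round trip like this sketch (docker-compose down keeps named volumes unless you pass -v):
docker-compose up --build -d
# ... let the app write something under /data ...
docker-compose down
docker-compose up -d   # the contents of the homework-data volume are still there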
I'm trying to use Docker and Docker Compose to create a containerized app. I have a PubNub account, which allows me to use different API keys for different environments (dev, test, prod). To help me build images for this, I am trying to use build args set with an env_file.
It's not working.
WARNING: The PUB_KEY variable is not set. Defaulting to a blank string.
WARNING: The SUB_KEY variable is not set. Defaulting to a blank string.
Questions:
What mistake am I making in setting the build args?
How do I fix it?
Is this a good way to set ENV variables for the containers scan and flask?
Here is the docker-compose.yml content:
version: '3.6'
services:
  scan:
    env_file:
      - sample.env
    build:
      context: .
      dockerfile: Dockerfile
      args:
        pub_key: $PUB_KEY
        sub_key: $SUB_KEY
      target: scan
    image: bt-beacon/scan:v1
  flask:
    env_file:
      - sample.env
    build:
      context: .
      dockerfile: Dockerfile
      args:
        pub_key: $PUB_KEY
        sub_key: $SUB_KEY
      target: flask
    image: bt-beacon/flask:v1
    ports:
      - "5000:5000"
And the Dockerfile:
# --- BASE NODE ---
FROM python:3.6-jessie as base
ARG pub_key
ARG sub_key
RUN test -n "$pub_key"
RUN test -n "$sub_key"
# --- SCAN NODE ---
FROM base as scan
ENV PUB_KEY=$pub_key
ENV SUB_KEY=$sub_key
COPY app/requirements.scan.txt /
RUN apt-get update
RUN apt-get -y install bluetooth bluez bluez-hcidump python-bluez python-numpy python3-dev libbluetooth-dev libcap2-bin
RUN pip install -r /requirements.scan.txt
RUN setcap 'cap_net_raw,cap_net_admin+eip' $(readlink -f $(which python))
COPY app/src /app
WORKDIR /app
CMD ["./scan.py", "$pub_key", "$sub_key"]
# -- FLASK APP ---
FROM base as flask
ENV SUB_KEY=$sub_key
COPY app/requirements.flask.txt /
COPY app/src /app
RUN pip install -r /requirements.flask.txt
WORKDIR /app
EXPOSE 5000
CMD ["flask", "run"]
Finally, sample.env:
# PubNub app keys here
PUB_KEY=xyz1
SUB_KEY=xyz2
env_file can only set environment variables inside a service container. Variables from env_file cannot be injected into docker-compose.yml itself.
You have the following options (both sketched below):
inject these variables into the shell, from which you run docker-compose up
create .env file containing these variables (syntax identical to your sample.env)
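A sketch of both options, assuming sample.env contains only simple KEY=value lines as shown above:
# Option 1: inject the variables into the shell before running docker-compose
export $(grep -v '^#' sample.env | xargs)
docker-compose build
# Option 2: let docker-compose read them from a .env file in the project root
cp sample.env .env
docker-compose build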
Personally, I would separate the image building process from the container launching process (take image building responsibility away from docker-compose and move it into an external script; then the build process can be configured easily).
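That script could be as small as this sketch (build.sh is a hypothetical name; the targets, build args, and image tags are taken from the Dockerfile and docker-compose.yml above):
#!/bin/sh
set -e
# export everything defined in sample.env into the environment
set -a
. ./sample.env
set +a
docker build --target scan --build-arg pub_key="$PUB_KEY" --build-arg sub_key="$SUB_KEY" -t bt-beacon/scan:v1 .
docker build --target flask --build-arg pub_key="$PUB_KEY" --build-arg sub_key="$SUB_KEY" -t bt-beacon/flask:v1 .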
I tried to make a simple application with Yesod and PostgreSQL using Docker Compose but RUN yesod init -n myApp -d postgresql didn't seem to work as expected.
I defined Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM shuny/ghc-7.8.4:latest
MAINTAINER shuny
# Create default config
RUN cabal update
# Add stackage remote repo
RUN sed -i 's/^remote-repo: [a-zA-Z0-9_\/:.]*$/remote-repo: stackage:http:\/\/www.stackage.org\/lts/g' /root/.cabal/config
# Update packages
RUN cabal update
# Generate locale otherwise happy (because of tf-random) will fail
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN echo $LANG
# Install build tools for yesod
RUN cabal install alex happy yesod-bin
# Install library for yesod-postgres
RUN apt-get update && apt-get install -y libpq-dev
RUN mkdir /code
WORKDIR /code
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
ADD . /code
WORKDIR /code
# ADD settings.yml /code/myApp/config/
docker-compose.yml:
database:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  tty: true
  command: yesod devel
  volumes:
    - .:/code/
  ports:
    - "3000:3000"
  links:
    - database
and docker-compose build completed as below:
Step 0 : FROM shuny/ghc-7.8.4:latest
...
Step 17 : WORKDIR /code
---> Running in bf99d0aca48c
---> 37c3c94338d7
Removing intermediate container bf99d0aca48c
Successfully built 37c3c94338d7
but when I check like this:
$ docker-compose run web /bin/bash
root@0fe5fb1a3b20:/code# ls
root@0fe5fb1a3b20:/code#
it showed nothing, while this command seems to work as expected:
$ docker run -ti 37c3c94338d7
root@31e94428de37:/code# ls
docker-compose.yml  Dockerfile  myApp  settings.yml
root@31e94428de37:/code# ls myApp/
app                   config         Handler    Model.hs     Settings.hs  test
Application.hs        dist           Import     myApp.cabal  static
cabal.sandbox.config  Foundation.hs  Import.hs  Settings     templates
How can I fix it?
I really appreciate any feedback, thank you.
You are doing strange things with volumes and the ADD instruction.
First you build your application inside the image:
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
Then you add the content of the folder that contains the Dockerfile into the /code folder of the image. I guess this step is useless.
ADD . /code
Then, if you run a container without a volume option, everything works fine:
docker run -ti 37c3c94338d7
But in your docker-compose.yml file, you specified a volume option that overrides the /code folder in the container with the folder that contains the docker-compose.yml file on the host machine. Therefore, you no longer have the content generated during the build of your image.
There are two possibilities:
Don't use the volume instruction in the docker-compose.yml file
Put the content of the /code/myApp/ folder of the image inside the ./myApp folder of the host.
It depends on why you want to use the volume option in docker-compose.yml.
I don't really know what your goal is. But if what you are trying to do is access the files built inside the container from the host machine, maybe this will do what you are looking for:
Remove the build steps from your Dockerfile
Run a shell inside a "web" container: docker-compose run web bash
Launch the build commands (see the sketch after this list)
So you will have built your application while the volume was mounted and will see the files on the host machine.
Exit the shell
Launch Docker Compose normally
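Put together, that workflow could look like this sketch (container is a placeholder hostname; the build commands are the ones from the Dockerfile above):
$ docker-compose run web bash
root@container:/code# yesod init -n myApp -d postgresql
root@container:/code# cd myApp
root@container:/code/myApp# cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
root@container:/code/myApp# exit
$ docker-compose up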
If you just want to be able to backup the content of the /code/myApp/ folder, maybe you should omit the path on the host machine from the volume section of docker-compose.yml.
volumes:
  - /code/
And follow this section of the documentation
I hope it helps.