How can I install a private repo inside a Python-based Docker image? I tried many alternatives but all were unsuccessful. It seems I can't set SSH credentials inside a Python-based image.
My Dockerfile:
FROM python:3.8
ENV PATH="/scripts:${PATH}"
# Django files
COPY ./requirements.txt /requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
the requirements file has:
git+ssh://git@github.com/my_repo_name.git@dev
And the build is triggered from a Docker Compose file:
....
django_service:
  build:
    context: ..
    dockerfile: Dockerfile
  volumes:
    - static_data:/vol/web
  environment:
    - SECRET_KEY=${SECRET_KEY}
  depends_on:
....
Perhaps you could use HTTPS instead of SSH:
git clone -b dev https://${GH_TOKEN}@github.com/username/my_repo_name.git
To make the token available inside the Dockerfile, use: ARG GH_TOKEN
To keep the token out of the Dockerfile itself, build your Docker image passing the arg like this: --build-arg GH_TOKEN=MY_TOKEN
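Putting those pieces together, a minimal sketch of such a Dockerfile (the url.insteadOf rewrite is one possible way to make the existing git+ssh requirement resolve over HTTPS; the repository name is a placeholder, and note that build args remain visible in the image history, so the token should be short-lived):
FROM python:3.8
# Supplied at build time: docker build --build-arg GH_TOKEN=MY_TOKEN .
ARG GH_TOKEN
COPY ./requirements.txt /requirements.txt
RUN pip install --upgrade pip
# Rewrite SSH GitHub URLs to token-authenticated HTTPS for this build only
RUN git config --global url."https://${GH_TOKEN}@github.com/".insteadOf "ssh://git@github.com/" \
 && pip install -r /requirements.txt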
I am writing today because I would like to create my first Docker container. I watched a lot of tutorials, but I came across a problem that I cannot solve; I must have missed a piece of information.
My program is quite basic: I would like to create a volume so that the information retrieved is not lost each time the container is launched.
Here is my docker-compose
version: '3.3'
services:
  homework-logger:
    build: .
    ports:
      - '54321:1235'
    volumes:
      - ./app:/app
    image: 'cinabre/homework-logger:latest'
    networks:
      - homeworks
networks:
  homeworks:
    name: homeworks-logger
and here is my Dockerfile:
FROM debian:9
WORKDIR /app
RUN apt-get update -yq && apt-get install wget curl gnupg git apt-utils -yq && apt-get clean -y
RUN apt-get install python3 python3-pip -y
RUN git clone http://192.168.5.137:3300/Cinabre/Homework-Logger /app
VOLUME /app
RUN ls /app
RUN python3 -m pip install bottle beaker bottle-cork requests
CMD ["python3", "main.py"]
I did an "LS" in the container to see if the / app folder was empty: it is not
Any ideas?
thanks in advance !
Volumes are there to hold your application data, not its code. You don't usually need the Dockerfile VOLUME directive and you should generally avoid it unless you understand exactly what it does.
In terms of workflow, it's commonplace to include the Dockerfile and similar Docker-related files in the source repository yourself. Don't run git clone in the Dockerfile. (Credential management is hard; building a non-default branch can be tricky; layer caching means Docker won't re-pull the branch if it's changed.)
For a straightforward application, you should be able to use a near-boilerplate Dockerfile:
# Use a prebuilt Python image unless you have a strong need to hand-install it
FROM python:3.9
WORKDIR /app
# Install packages first. Unless requirements.txt changes, Docker
# layer caching won't repeat this step. Do not list out individual
# packages in the Dockerfile; list them in Python-standard setup.py
# or Pipfile.
COPY requirements.txt .
# ...in the "system" Python space, not a virtual environment.
RUN pip3 install -r requirements.txt
# Copy the rest of the application in.
COPY . .
# Set the default command to run the container, and other metadata.
EXPOSE 1235
CMD ["python3", "main.py"]
In your application code you need to know where to store the data. You might put this in an environment variable:
import os

# Default to the current directory when DATA_DIR is unset
DATA_DIR = os.environ.get('DATA_DIR', '.')
with open(f"{DATA_DIR}/output.txt", "w") as f:
    ...
Then in your docker-compose.yml file, you can specify an alternate data directory and mount that into your container. Do not mount a volume over the /app directory containing your application's source code.
version: '3.8'
services:
  homework-logger:
    build: .
    image: 'cinabre/homework-logger:latest' # names the built image
    ports:
      - '54321:1235'
    environment:
      - DATA_DIR=/data # (consider putting this in the Dockerfile)
    volumes:
      - homework-data:/data # (could bind-mount `./data:/data` instead)
    # Use the automatic `networks: [default]`
volumes:
  homework-data:
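To check that the named volume really preserves the data across container re-creation, a possible smoke test (assuming the application writes under /data as configured above):
docker-compose up -d --build   # build and start; writes go to the homework-data volume
docker-compose down            # removes the container but keeps the named volume
docker volume ls               # homework-data is still listed
docker-compose up -d           # a fresh container sees the same /data contents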
I am new to Docker, currently following a book to learn Django.
Is it necessary to be in a virtual environment when running the below command?
I have gone through Docker basics videos which say it saves each app as an image. But where are these images saved?
Does this line refer to the current PC's root directory or the Docker image's: 'WORKDIR /usr/src/app'?
ADD is placed before RUN in the Dockerfile.
$ sudo docker-compose build
But I got this error:
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
Dockerfile
FROM python:3
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
mysql-client default-libmysqlclient-dev
WORKDIR /usr/src/app
ADD config/requirements.txt ./
RUN pip3 install --upgrade pip; \
pip3 install -r requirements.txt
RUN django-admin startproject myproject .;\
mv ./myproject ./origproject
docker-compose.yml
version: '2'
services:
  db:
    image: 'mysql:5.7'
  app:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - './project:/usr/src/app/myproject'
      - './media:/usr/src/app/media'
      - './static:/usr/src/app/static'
      - './templates:/usr/src/app/templates'
      - './apps/external:/usr/src/app/external'
      - './apps/myapp1:/usr/src/app/myapp1'
      - './apps/myapp2:/usr/src/app/myapp2'
    ports:
      - '8000:8000'
    links:
      - db
requirements.txt
Pillow~=5.2.0
mysqlclient~=1.3.0
Django~=2.1.0
Is it necessary to be in a virtual environment when running the below command?
No, the Docker build environment is isolated from the host. Any virtualenv on the host has no effect on the build context or the resulting image.
I have gone through Docker basics videos which say it saves each app as an image. But where are these images saved?
The images are stored under /var/lib/docker, but that directory isn't meant to be browsed manually. You can send images to a registry with docker push <image:tag> or export them with docker save <image:tag> -o <image>.tar
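For example, a save/load round trip (the image name is illustrative) could look like:
docker save myapp:latest -o myapp.tar   # export the image to a tarball
docker load -i myapp.tar                # re-import it, e.g. on another machine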
Does this line refer to the current PC's root directory or the Docker image's: 'WORKDIR /usr/src/app'?
That line changes the current working directory in the image.
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
This error means that you do not have config/requirements.txt in the directory where the build is run. Adjust the path in the Dockerfile accordingly.
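In other words, relative to the directory where docker-compose build runs, the context should look roughly like this:
.
├── Dockerfile
├── docker-compose.yml
└── config/
    └── requirements.txt   # what ADD config/requirements.txt ./ expects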
$ docker-compose up -d
This will download the necessary Docker images and create a container for the web service.
How can I specify multi-stage build with in a docker-compose.yml?
For each variant (e.g. dev, prod, ...) I have a multi-stage build with two Dockerfiles:
dev: Dockerfile.base + Dockerfile.dev
or prod: Dockerfile.base + Dockerfile.prod
File Dockerfile.base (common for all variants):
FROM python:3.6
RUN apt-get update && apt-get upgrade -y
RUN pip install pipenv pip
COPY Pipfile ./
# some more common configuration...
File Dockerfile.dev:
FROM flaskapp:base
RUN pipenv install --system --skip-lock --dev
ENV FLASK_ENV development
ENV FLASK_DEBUG 1
File Dockerfile.prod:
FROM flaskapp:base
RUN pipenv install --system --skip-lock
ENV FLASK_ENV production
Without docker-compose, I can build as:
# Building dev
docker build --tag flaskapp:base -f Dockerfile.base .
docker build --tag flaskapp:dev -f Dockerfile.dev .
# or building prod
docker build --tag flaskapp:base -f Dockerfile.base .
docker build --tag flaskapp:prod -f Dockerfile.prod .
According to the compose-file doc, I can specify a Dockerfile to build.
# docker-compose.yml
version: '3'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
But how can I specify 2 Dockerfiles in docker-compose.yml (for multi-stage build)?
As mentioned in the comments, a multi-stage build involves a single Dockerfile to perform multiple stages. What you have is a common base image.
You could convert these to a non-traditional multi-stage build with syntax like the following (I say non-traditional because you do not perform any copying between the stages and instead use just the FROM line to pick from a prior stage):
FROM python:3.6 as base
RUN apt-get update && apt-get upgrade -y
RUN pip install pipenv pip
COPY Pipfile ./
# some more common configuration...
FROM base as dev
RUN pipenv install --system --skip-lock --dev
ENV FLASK_ENV development
ENV FLASK_DEBUG 1
FROM base as prod
RUN pipenv install --system --skip-lock
ENV FLASK_ENV production
Then you can build one stage or another using the --target syntax to build, or a compose file like:
# docker-compose.yml
version: '3.4'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile
      target: prod
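Building from the command line instead, the --target flag selects a stage (the tags here are examples):
docker build --target dev -t flaskapp:dev .
docker build --target prod -t flaskapp:prod .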
The biggest downside is that the current build engine will go through every stage until it reaches the target. Build caching can mean that's only a sub-second process. BuildKit, which is coming out of experimental in 18.09 and will need upstream support from docker-compose, will be more intelligent about running only the commands needed to build your desired target.
All that said, I believe this is trying to fit a square peg in a round hole. The docker-compose developer is encouraging users to move away from doing the build within the compose file itself since it's not supported in swarm mode. Instead, the recommended solution is to perform builds with a CI/CD build server, and push those images to a registry. Then you can run the same compose file with docker-compose or docker stack deploy or even some k8s equivalents, without needing to redesign your workflow.
You can also concatenate docker-compose files: give each one a build section pointing at your existing Dockerfiles and run docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
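A sketch of what that pair of files might contain (file and service names are examples; the second file only overrides the dockerfile key, and the later file wins when Compose merges them):
# docker-compose.yml
version: '3'
services:
  webapp:
    build:
      context: .
      dockerfile: Dockerfile.dev

# docker-compose.prod.yml
version: '3'
services:
  webapp:
    build:
      dockerfile: Dockerfile.prod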
I want to create a generalized docker-compose YAML that can take any parent image/Dockerfile, build it if it doesn't exist, and then run the following commands to run it in Flask.
version: '3.3'
services:
  web:
    build: .
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    command: >
      RUN pip3 install flask
      && COPY ./app /app
      && WORKDIR /app
      && RUN python run.py
But I keep getting an error
starting container process caused "exec: \"RUN\": executable file not found in $PATH": unknown
I am not sure why. Any help would be much appreciated.
I would say create a generalised Dockerfile instead of putting this in Compose.
When you use command, it runs the command in your container, and RUN, COPY, etc. are not Linux commands; they are Dockerfile instructions.
You can run pip install && python run.py in command, but copying from the host to the container is not possible using command.
Use instructions like these in the Dockerfile (with a base image added so the snippet stands on its own):
FROM python:3
RUN pip install flask
COPY ./app /app
WORKDIR /app
CMD python run.py
And in docker-compose.yml you can override the CMD defined in the Dockerfile with something like this:
command: python some.py <parameters>
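In context, a minimal service definition that overrides the image's CMD might look like this (the script name and arguments are placeholders):
version: '3.3'
services:
  web:
    build: .
    ports:
      - "80:80"
    # replaces the CMD from the Dockerfile at container start
    command: python some.py --your-args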
I have this code in my Dockerfile.
FROM python:3
# Create user named "airport".
RUN adduser --disabled-password --gecos "" airport
# Login as the newly created "airport" user.
RUN su - airport
# Change working directory.
WORKDIR /home/airport/mount_point/
# Install Python packages at system-wide level.
RUN pip install -r requirements.txt
# Make sure to migrate all static files to the root of the project.
RUN python manage.py collectstatic --noinput
# This utility.sh script is used to reset the project environment. This includes
# removing unnecessary .pyc files and __pycache__ folders. This is optional and not
# necessary; I just prefer to have my environment clean before building.
RUN utility_scripts/utility.sh
When I call docker-compose build it returns /bin/sh: 1: requirements.txt: not found, even though I have declared the necessary volume in my docker-compose.yml. I am sure that requirements.txt is in ./
web:
  build:
    context: ./
    dockerfile: Dockerfile
  command: /home/airport/mount_point/start_web.sh
  container_name: django_airport
  expose:
    - "8080"
  volumes:
    - ./:/home/airport/mount_point/
    - ./timezone:/etc/timezone
How can I solve this problem?
Before running RUN pip install -r requirements.txt, you need to add the requirements.txt file to the image. Volumes are mounted only when the container runs; they are not available during docker-compose build, so the bind mount in your docker-compose.yml does not help the RUN steps.
...
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
For a sample of how to dockerize a Django application, check https://docs.docker.com/compose/django/. You need to add both the requirements.txt and the code to the image.