While setting up a Django environment, I was working on containerizing it. In doing so, I can't get the entrypoint to work on Docker for Windows or Linux.
Successfully built e9cb8e009d91
Successfully tagged avengervision_web:latest
avengervision_db_1 is up-to-date
Starting avengervision_web_1 ... done
CONTAINER ID   IMAGE               COMMAND                  CREATED      STATUS                      PORTS   NAMES
1da83169ba41   avengervision_web   "sh /usr/src/app/ent…"   44 minutes   Exited (2) 20 seconds ago           avengervision_web_1
docker logs 1da83169ba41
sh: can't open '/usr/src/app/entrypoint.sh': No such file or directory
Have simplified entrypoint.sh to just get it to execute.
Have tried:
ENTRYPOINT ["sh","/usr/src/app/entrypoint.sh"]
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Made sure the line endings in git and VS Code are set to LF, and ran the script through dos2unix.
Ran the same Docker Compose setup on Windows and Linux and got the same exception on both.
Added a sed step to the Dockerfile as an extra precaution to strip all carriage returns, and made sure to chmod +x the script.
Commented out the ENTRYPOINT and ran docker run -tdi; I was able to docker attach and execute the script from within the container without any issue.
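A quick way to see the mismatch is to compare what the image contains with what the running container actually sees (hypothetical commands, assuming the image tag from the build log above):

# Contents of the image itself, with no volumes applied
docker run --rm --entrypoint ls avengervision_web -l /usr/src/app
# Contents once docker-compose applies the bind mounts
docker-compose run --rm --entrypoint "ls -l /usr/src/app" web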
*****docker-compose.yml*****
version: '3.7'

services:
  web:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    #command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./main/:/usr/src/app/
    ports:
      - 8000:8000
    environment:
      - DEBUG=1
      - SECRET_KEY=foo
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=hello_django_dev
      - SQL_USER=hello_django
      - SQL_PASSWORD=hello_django
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev

volumes:
  postgres_data:
*****Dockerfile*****
# pull official base image
FROM python:3.7-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./docker/Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev
# copy entrypoint.sh
COPY ./docker/entrypoint.sh /usr/src/app/entrypoint.sh
#RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY main /usr/src/app/main
COPY manage.py /usr/src/app
#RUN /usr/src/app/entrypoint.sh
RUN sed -i 's/\r$//' /usr/src/app/entrypoint.sh && \
    chmod +x /usr/src/app/entrypoint.sh
# run entrypoint.sh
ENTRYPOINT ["sh","/usr/src/app/entrypoint.sh"]
*****entrypoint.sh*****
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

echo "Testing"

#python /usr/src/app/manage.py flush
#python /usr/src/app/manage.py migrate
#python /usr/src/app/manage.py collectstatic --no-input --clear

exec "$@"
The end goal is for the container to be up and running with the Django application.
Leveraging the layout listed here - https://github.com/testdrivenio/django-on-docker - worked. The difference in my setup was that I had created a new docker directory at the root and had Docker Compose use that. Everything seemed to copy into the container as it was supposed to, but the ENTRYPOINT would not run; the likely reason is that the bind mount ./main/:/usr/src/app/ in docker-compose.yml shadows everything COPY'd into /usr/src/app at build time, including entrypoint.sh. Without changing any code other than updating the references to the new file locations, everything worked. Below were the changes made:
web:
  build:
    context: .
    dockerfile: ./docker/Dockerfile

to

web:
  build: ./app
and then changing the directory structure from
Project Layout:
├───.vscode
├───docker
│ └───Dockerfile
│ └───entrypoint.sh
│ └───Pipfile
│ └───nginx
└───main
├───migrations
├───static
│ └───images
├───templates
├───Artwork
├───django-env
│ ├───Include
│ ├───Lib
│ └───Scripts
└───docker-compose.yml
└───manage.py
to
Project Layout:
├───.vscode
├───app
│ └───main
│ ├───migrations
│ ├───static
│ │ └───images
│ ├───templates
│ └───Dockerfile
│ └───entrypoint.sh
│ └───manage.py
│ └───Pipfile
├───Artwork
├───django-env
│ ├───Include
│ ├───Lib
│ └───Scripts
└───nginx
└───docker-compose.yml
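For what it's worth, the original docker/ layout could likely have been kept as well; the underlying fix is to stop the bind mount from shadowing the files copied into /usr/src/app. A sketch of that alternative (not the layout ultimately used above):

web:
  build:
    context: .
    dockerfile: ./docker/Dockerfile
  volumes:
    - ./main/:/usr/src/app/main/   # mounted one level deeper, so entrypoint.sh stays visible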
Hello, I am trying to build an image which can compile and run a C++ program securely.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
ENTRYPOINT [ "worker" ]
version: "3.9"
services:
gpp:
build: .
environment:
- token=test_token
- code=#include <iostream>\r\n\r\nusing namespace std;\r\n\r\nint main() {\r\n int a = 10;\r\n int b = 20;\r\n cout << a << \" \" << b << endl;\r\n int temp = a;\r\n a = b;\r\n b = temp;\r\n cout << a << \" \" << b << endl;\r\n return 0;\r\n}
network_mode: bridge
privileged: false
read_only: true
tmpfs: /tmp
security_opt:
- "no-new-privileges"
cap_drop:
- "all"
Here worker is a Golang binary which reads code from an environment variable, stores it in the /tmp folder as main.cpp, and then tries to compile and run it using g++ /tmp/main.cpp && ./tmp/a.out (via Golang's exec).
I am getting this error: scratch_4-gpp-1 | Error : fork/exec /tmp/a.out: permission denied, from which I understand that executing anything from the /tmp directory is restricted.
Since I am using a read-only root file system, I can only work in the /tmp directory. Please guide me on how I can achieve the above task while keeping my container secure.
Docker's default options for a tmpfs include noexec. docker run --tmpfs allows an extended set of mount options, but neither Compose tmpfs: nor the extended syntax of volumes: allows changing anything other than the size option.
One straightforward option here is to use an anonymous volume. Syntactically this looks like a normal volumes: line, except it only has a container path. The read_only: option will make the container's root filesystem be read-only, but volumes are exempted from this.
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
This will be a "normal" Docker volume, so it will be disk-backed and you'll be able to see it in docker volume ls.
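If plain docker run is an option instead of Compose, the noexec default can also be overridden directly on the tmpfs mount (a sketch; the image name is a placeholder, and the environment variables from the compose file would still be passed with -e):

docker run --rm --read-only --tmpfs /tmp:rw,exec,size=64m gpp-image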
Complete summary of the solution:
@davidmaze suggested adding an anonymous volume:
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
As I replied, I was still getting the error Cannot create temporary file in ./: Read-only file system when I tried to compile my program. When I debugged the container in read_only: false mode to watch file system changes, I found that the compiler was trying to save a.out in the /bin folder, which is supposed to be read-only.
So I added this additional line before the ENTRYPOINT, and my issue was solved.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
WORKDIR /build   # <---- this line
ENTRYPOINT [ "worker" ]
I'm new to Docker and I want to set up docker-compose for my Django app. In the backend of my app I also have Golang packages, which I run from Django with the subprocess library.
But when I want to install a package using go install github.com/x/y@latest and then copy its binary to the project directory, it gives me the error: package github.com/x/y@latest: cannot use path@version syntax in GOPATH mode
I searched a lot on the internet but didn't find a solution to my problem. Could you please tell me where I'm wrong?
here is my Dockerfile:
FROM golang:1.18.1-bullseye as go-build
# Install go package
RUN go install github.com/hakluke/hakrawler@latest \
    && cp $GOPATH/bin/hakrawler /usr/local/bin/
# Install main image for backend
FROM python:3.8.11-bullseye
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install Dist packages
RUN apt-get update \
    && apt-get -y install --no-install-recommends software-properties-common libpq5 python3-dev musl-dev git netcat-traditional golang \
    && rm -rf /var/lib/apt/lists/
# Set work directory
WORKDIR /usr/src/redteam_toolkit/
# Install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project, and then the go package
COPY . .
COPY --from=go-build /usr/local/bin/hakrawler /usr/src/redteam_toolkit/toolkit/scripts/webapp/
docker-compose.yml:
version: '3.3'

services:
  webapp:
    build: .
    command: python manage.py runserver 0.0.0.0:4334
    container_name: toolkit_webapp
    volumes:
      - .:/usr/src/redteam_toolkit/
    ports:
      - 4334:4334
    env_file:
      - ./.env
    depends_on:
      - db
  db:
    image: postgres:13.4-bullseye
    container_name: database
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=redteam_toolkit_db

volumes:
  postgres_data:
The get.py file inside the /usr/src/redteam_toolkit/toolkit/scripts/webapp/ directory, which just runs the go package and lists the files in that dir:
import os
import subprocess

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print(f"Current path is: {BASE_DIR}")

def go(target_url):
    run_go_package = subprocess.getoutput(
        f"echo {target_url} | {BASE_DIR}/webapp/hakrawler -t 15 -u"
    )
    list_files = subprocess.getoutput(f"ls {BASE_DIR}/webapp/")
    print(run_go_package)
    print(list_files)

go("https://example.org")
and then I just run:
$ docker-compose up -d --build
$ docker-compose exec webapp python toolkit/scripts/webapp/get.py
The output is:
Current path is: /usr/src/redteam_toolkit/toolkit/scripts
/bin/sh: 1: /usr/src/redteam_toolkit/toolkit/scripts/webap/hakrawler: not found
__init__.py
__pycache__
scr.py
gather.py
This looks like a really good candidate for a multi-stage build:
FROM golang:1.18.0 as go-build
# Install packages
RUN go install github.com/x/y@latest \
    && cp $GOPATH/bin/package /usr/local/bin/

FROM python:3.8.11-bullseye as release
...
COPY --from=go-build /usr/local/bin/package /usr/src/toolkit/toolkit/scripts/webapp/
...

Your compose file also needs to be updated: it is masking the entire /usr/src/redteam_toolkit folder with the volume mount. Delete that volume mount to see the content of the image.
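A sketch of the service with the masking mount removed (everything else in the compose file unchanged):

services:
  webapp:
    build: .
    command: python manage.py runserver 0.0.0.0:4334
    ports:
      - 4334:4334
    env_file:
      - ./.env
    depends_on:
      - db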
GOPATH mode does not work with Go modules. In your Dockerfile, add:
RUN unset GOPATH
or use RUN go get <package_repository>
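An alternative that forces module mode explicitly: setting GO111MODULE makes the path@version syntax work regardless of GOPATH (a sketch, reusing the hakrawler package from the question; module mode is already the default in Go 1.16+):

FROM golang:1.18.1-bullseye as go-build
# Force module mode so "go install pkg@version" works even with GOPATH set
ENV GO111MODULE=on
RUN go install github.com/hakluke/hakrawler@latest \
    && cp $GOPATH/bin/hakrawler /usr/local/bin/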
This question already has answers here: How to include files outside of Docker's build context? (19 answers)
This is the project structure:
Project
/deployment
/Dockerfile
/docker-compose.yml
/services
/ui
/widget
Here is the Dockerfile:
FROM node:14
WORKDIR /app
USER root
# create new user (only root can do this) and assign ownership to the newly created user
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
    && groupadd -g 1001 appusr \
    && useradd -r -u 1001 -g appusr appusr \
    && mkdir /home/appusr/ \
    && chown -R appusr:appusr /home/appusr/ \
    && chown -R appusr:appusr /app
# switch to the newly created user so that appusr owns all files and has access
USER appusr:appusr
COPY ../services/ui/widget/ /app/
COPY ../.env /app/
# installing deps
RUN npm install
and docker-compose
version: "3.4"
x-env: &env
HOST: 127.0.0.1
services:
widget:
build:
dockerfile: Dockerfile
context: .
ports:
- 3002:3002
command:
npm start
environment:
<<: *env
restart: always
and running docker-compose up from project/deployment shows
Step 6/8 : COPY ../services/ui/widget/ /app/
ERROR: Service 'widget' failed to build : COPY failed: forbidden path outside the build context: ../services/ui/widget/ ()
Am I setting the wrong context?
You cannot COPY or ADD files from outside the build context.
You should either move these two directories to where the Dockerfile is and then change your Dockerfile to:
COPY ./services/ui/widget/ /app/
COPY ./.env /app/
Or use volumes in docker-compose, and remove the two COPY lines.
So, your docker-compose should look like this:
x-env: &env
  HOST: 127.0.0.1
services:
  widget:
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 3002:3002
    command:
      npm start
    environment:
      <<: *env
    restart: always
    volumes:
      - /absolute/path/to/services/ui/widget/:/app/
      - /absolute/path/to/.env:/app/.env
And this should be your Dockerfile if you use volumes in docker-compose:
FROM node:14
WORKDIR /app
USER root
# create new user (only root can do this) and assign ownership to the newly created user
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
    && groupadd -g 1001 appusr \
    && useradd -r -u 1001 -g appusr appusr \
    && mkdir /home/appusr/ \
    && chown -R appusr:appusr /home/appusr/ \
    && chown -R appusr:appusr /app
# switch to the newly created user so that appusr owns all files and has access
USER appusr:appusr
# installing deps
RUN npm install
Your problem is that you are referencing a file which is outside the build context. By default, the context is the location from which you execute the build command.
From the Docker documentation, COPY section:
The path must be inside the context of the build; you cannot COPY ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
However, you can use the -f parameter to specify the Dockerfile independently of the folder you are running your build from. So you could use the following line, executing it from the project root:
docker build -f ./deployment/Dockerfile .
You will need to modify your copy lines as well to point at the right location.
COPY ./services/ui/widget/ /app/
COPY ./.env /app/
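The docker-compose equivalent of the same idea is to widen the build context and name the Dockerfile relative to it (a sketch, assuming compose is still run from the deployment folder):

services:
  widget:
    build:
      context: ..
      dockerfile: deployment/Dockerfile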
I am experimenting with automating parts of a project that I'd ideally deploy and forget about. The project is comprised of an XML parser and a small Flask website. At the moment the folder structure looks like this:
.
├── init.sql
└── parser
├── cloudbuild.yaml
├── cloud_func
│ └── main.py
├── Dockerfile
├── feed_parse.py
├── get_so.py
├── requirements.txt
└── utils.py
Now, I can correctly set up the trigger to look at /parser/cloudbuild.yaml, but building the image with the following command raises an error:
build . --build-arg "CLIENT_CERT=$CSQL_CERT CLIENT_KEY=$CSQL_KEY SERVER_CA=$CSQL_CA SERVER_PW=$CSQL_PW SERVER_HOST=$CSQL_IP" -t gcr.io/and-reporting/appengine/so-parser:latest
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
It looks to me like GCP has trouble locating my Dockerfile, which is in the same folder cloudbuild.yaml is in.
What am I missing?
For the sake of completeness, the Dockerfile looks like this:
FROM python:3.7-alpine
RUN apk update \
    && apk add gcc python3-dev musl-dev libffi-dev \
    && apk del libressl-dev \
    && apk add openssl-dev
COPY requirements.txt /
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
ADD . /parser
WORKDIR /parser/
RUN mkdir -p certs
# Set env variables from secrets
ARG CLIENT_CERT
ENV CSQL_CERT=${CLIENT_CERT}
ARG CLIENT_KEY
ENV CSQL_KEY=${CLIENT_KEY}
ARG SERVER_CA
ENV CSQL_CA=${SERVER_CA}
ARG SERVER_PW
ENV CSQL_PW=${SERVER_PW}
ARG SERVER_HOST
ENV CSQL_IP=${SERVER_HOST}
# Get ssl certs in files
RUN echo $CLIENT_CERT > ./certs/ssl_cert.pem \
    && echo $CLIENT_KEY > ./certs/ssl_key.pem \
    && echo $SERVER_CA > ./certs/ssl_ca.pem
CMD python get_so.py
edit: and the cloudbuild.yaml I'm using for the build
steps:
  # Building image
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'build',
      '-f',
      'Dockerfile',
      '--build-arg',
      'CLIENT_CERT=$$CSQL_CERT CLIENT_KEY=$$CSQL_KEY SERVER_CA=$$CSQL_CA SERVER_PW=$$CSQL_PW SERVER_HOST=$$CSQL_IP',
      '-t',
      'gcr.io/$PROJECT_ID/appengine/so-parser:latest',
      '.'
    ]
    secretEnv: ['CSQL_CERT', 'CSQL_KEY', 'CSQL_CA', 'CSQL_PW', 'CSQL_IP']
  # Push Images
  # - name: 'gcr.io/cloud-builders/docker'
  #   args: ['push', 'gcr.io/$PROJECT_ID/appengine/so-parser:latest']

secrets:
  - kmsKeyName: projects/myproject/locations/global/keyRings/so-jobs/cryptoKeys/board
    secretEnv:
      CSQL_CERT: [base64 string]
      CSQL_KEY: [base64 string]
      CSQL_CA: [base64 string]
      CSQL_PW: [base64 string]
      CSQL_IP: [base64 string]
Because the build context in your cloudbuild.yaml is the dot (the directory the build runs from), Docker is not able to find the Dockerfile, which is in the parser directory. Point the context at that directory instead:
steps:
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/mynodejs:$SHORT_SHA", "./parser"]
If you want to specify the Dockerfile name explicitly, pass -f together with the context:
steps:
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/mynodejs:$SHORT_SHA", "-f", "./parser/your-dockerfile", "./parser"]
I have this code in my Dockerfile.
FROM python:3
# Create user named "airport".
RUN adduser --disabled-password --gecos "" airport
# Login as the newly created "airport" user.
RUN su - airport
# Change working directory.
WORKDIR /home/airport/mount_point/
# Install Python packages at system-wide level.
RUN pip install -r requirements.txt
# Make sure to migrate all static files to the root of the project.
RUN python manage.py collectstatic --noinput
# This utility.sh script is used to reset the project environment. This includes
# removing unnecessary .pyc files and __pycache__ folders. This is optional and not
# necessary; I just prefer to have my environment clean before building.
RUN utility_scripts/utility.sh
When I call docker-compose build, it returns /bin/sh: 1: requirements.txt: not found, even though I have mounted the necessary volume in my docker-compose.yml. I am sure that requirements.txt is in ./
web:
  build:
    context: ./
    dockerfile: Dockerfile
  command: /home/airport/mount_point/start_web.sh
  container_name: django_airport
  expose:
    - "8080"
  volumes:
    - ./:/home/airport/mount_point/
    - ./timezone:/etc/timezone
How can I solve this problem?
Before running RUN pip install -r requirements.txt, you need to add the requirements.txt file to the image. Volumes are only mounted when the container runs; they do not exist at build time, so the build cannot see the file unless you COPY or ADD it.
...
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
For a sample of how to dockerize a Django application, check https://docs.docker.com/compose/django/. You need to add both the requirements.txt and the code to the image.
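A minimal sketch of that build-time copy pattern, adapted to the question's Dockerfile (the utility script step is left out here for brevity):

FROM python:3
RUN adduser --disabled-password --gecos "" airport
WORKDIR /home/airport/mount_point/
# Copy the dependency list first so this layer is cached until requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# Then copy the rest of the project before running collectstatic
COPY . .
RUN python manage.py collectstatic --noinput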