I'm trying to dockerise my pelican site project. I've created a docker-compose.yml file and a Dockerfile.
However, every time I try to build my project (docker-compose up) I get the following errors for both pip install and npm install:
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
...
Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
The directory structure of the project is as follows:
- **Dockerfile**
- **docker-compose.yml**
- content/
- pelican-plugins/
- src/
  - Themes/
  - Pelican config files
  - requirements.txt
  - gulpfile.js
  - package.json
All the pelican makefiles etc. are in the src directory.
I'm trying to load the content, src, and pelican-plugins directories as volumes so I can modify them on my local machine for the docker container to use.
Here is my Dockerfile:
FROM python:3
WORKDIR /src
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
# Install Node.js 8 and npm 5
RUN apt-get update
RUN apt-get -qq update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install -y nodejs
# Set the locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN npm install
RUN python -m pip install --upgrade pip
RUN pip install -r requirements.txt
ENV SRV_DIR=/src
RUN chmod +x $SRV_DIR
RUN make clean
VOLUME /src/output
RUN make devserver
RUN gulp
And here is my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    ports:
      - "80:80"
    volumes:
      - ./content:/content
      - ./src:/src
      - ./pelican-plugins:/pelican-plugins
volumes:
  logvolume01: {}
It definitely looks like I have set up the volume directories properly in my docker-compose.yml...
Thanks in advance!
Your Dockerfile doesn't COPY (or ADD) any files at all, so the /src directory is empty.
You can verify this yourself. When you run docker build it will print out output like:
Step 13/22 : ENV LC_ALL en_US.UTF-8
---> Running in 3ab80c3741f8
Removing intermediate container 3ab80c3741f8
---> d240226b6600
Step 14/22 : RUN npm install
---> Running in 1d31955d5b28
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
The last line of each step that shows just a hex number is actually a valid image ID: the final result of running that step. You can then:
% docker run --rm -it d240226b6600 sh
# pwd
/src
# ls
To fix this you need a line in the Dockerfile like
COPY . .
You probably also need to change into the src subdirectory to run npm install and the like, given the directory layout you've shown. That can look like:
WORKDIR /src
COPY . .
# Either put "cd" into the command itself
# (Each RUN command starts a fresh container at the current WORKDIR)
RUN cd src && npm install
# Or change WORKDIRs
WORKDIR /src/src
RUN pip install -r requirements.txt
WORKDIR /src
Remember that everything in the Dockerfile happens before any setting in docker-compose.yml outside the build: block is even considered. Environment variables, volume mounts, and networking options for a container have no effect on the image build sequence.
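Build arguments are the main exception: they are the compose-side settings that do reach the image build. A quick sketch (MY_BUILD_FLAG is a hypothetical name):

# docker-compose.yml
services:
  web:
    build:
      context: .
      args:
        MY_BUILD_FLAG: "1"

# Dockerfile
ARG MY_BUILD_FLAG
# visible during the build, unlike "environment:" or volume mounts
RUN echo "building with MY_BUILD_FLAG=$MY_BUILD_FLAG"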
In terms of Dockerfile style, your VOLUME declaration will have some tricky unexpected side effects and probably is unnecessary; I'd remove it. Your Dockerfile is also missing the CMD that the container should run. You should also combine RUN apt-get update && apt-get install into single commands; the way Docker layer caching works and the way the Debian repositories work, it's very easy to wind up with a cached package index that names files from a week ago that don't exist any more.
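Putting that together, a sketch of a cleaner Dockerfile (assuming the layout from the question; the make and gulp steps are carried over from the original and may need adjusting):

FROM python:3

# One apt-get update && install per RUN, so a stale package index is
# never reused from the layer cache
RUN apt-get update \
 && apt-get install -y build-essential curl \
 && curl -sL https://deb.nodesource.com/setup_8.x | bash - \
 && apt-get install -y nodejs \
 && rm -rf /var/lib/apt/lists/*

# Set the locale
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8

WORKDIR /src
COPY . .

# package.json and requirements.txt live in the src subdirectory
WORKDIR /src/src
RUN npm install \
 && pip install -r requirements.txt \
 && npx gulp

# The long-running server belongs in CMD, not in a RUN step
# (adjust to whatever serves your site in the foreground)
CMD ["make", "devserver"]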
While the setup you're describing is fairly popular, it also essentially hides everything the Dockerfile does with your local source tree. The npm install you're describing here, for example, will be a no-op because the volume mount will hide /src/src/node_modules. I generally find it easier to just run python, npm, etc. locally while I'm developing, rather than write and debug this 50-line YAML file and run sudo docker-compose up.
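If you do want to keep the bind mounts for live editing, one common workaround (not part of the answer above, and it comes with its own staleness trade-offs) is to mask node_modules with an anonymous volume, so the copy installed during the image build survives the mount:

services:
  web:
    build: .
    volumes:
      - ./src:/src
      # anonymous volume: hides the host directory at this path and keeps
      # the node_modules baked into the image
      - /src/src/node_modules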
Related
docker can't find file develop.sh even though it's in the root directory
my Dockerfile:
FROM node:16.13.0
WORKDIR /app/medusa
COPY package.json .
COPY develop.sh .
COPY yarn.* .
RUN apt-get update
RUN apt-get install -y python
RUN npm install -g npm@latest
RUN npm install -g @medusajs/medusa-cli@latest
RUN npm install
COPY . .
ENTRYPOINT ["./develop.sh"]
Edit: I am trying to run an open source project called medusa (you can find the code here); I haven't changed anything except the Node version in the Dockerfile.
As per @Charles Duffy's suggestion: changing the entrypoint to ENTRYPOINT ["/bin/sh", "./develop.sh"] solved the issue.
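Running the script through /bin/sh sidesteps problems in the script itself, such as a missing execute bit or a shebang line broken by Windows CRLF line endings (a frequent cause of "file not found" errors like this). If the cause is just the execute bit, an alternative sketch that keeps the original ENTRYPOINT form:

COPY develop.sh .
# make sure the execute permission is set inside the image
RUN chmod +x ./develop.sh
ENTRYPOINT ["./develop.sh"]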
I'm trying to build a Python Docker image which pip-installs from a private repository using ssh; the details are in a requirements.txt file.
I've spent a long time reading guides from StackOverflow as well as the official Docker documentation on the subject ...
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds
https://docs.docker.com/compose/compose-file/build/#ssh
... and have come up with a Dockerfile which builds and runs fine when using:
$ docker build --ssh default -t build_tester .
However, when I try to do the same in a docker-compose.yml file, I get the following error:
$ docker-compose up
services.build-tester.build Additional property ssh is not allowed
This is the same even when enabling buildkit:
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose up
services.build-tester.build Additional property ssh is not allowed
Project structure
- docker-compose.yml
- build_files
  - Dockerfile
  - requirements.txt
  - app
    - app.py
Dockerfile
# syntax=docker/dockerfile:1.2
FROM python:bullseye as builder
RUN mkdir -p /build/
WORKDIR /build/
RUN apt-get update; \
apt-get install -y git; \
rm -rf /var/lib/apt/lists/*
RUN mkdir -p -m 0600 ~/.ssh; \
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
RUN python3 -m venv env; \
env/bin/pip install --upgrade pip
COPY requirements.txt .
RUN --mount=type=ssh \
env/bin/pip install -r requirements.txt; \
rm requirements.txt
FROM python:slim as runner
RUN mkdir -p /app/
WORKDIR /app/
COPY --from=builder /build/ .
COPY app/ .
CMD ["env/bin/python", "app.py"]
docker-compose.yml
services:
  build-tester:
    container_name: build-tester
    image: build-tester
    build:
      context: build_files
      dockerfile: Dockerfile
      ssh:
        - default
If I remove ...
ssh:
  - default
... then docker-compose up builds the image OK, but the container obviously doesn't run, as app.py doesn't have the required packages installed from pip.
I'd really like to be able to get this working in this way if possible so any advice would be much appreciated.
OK - so this ended up being a very simple fix... I just needed to ensure docker-compose was updated to version 2.6 on my Mac.
For some reason brew wasn't updating my Docker cask properly, so I was still running a package from early January 2022. It seems --ssh compatibility was added sometime between then and now.
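You can check which Compose you're actually running (the exact output format may differ per install):

$ docker-compose version
Docker Compose version v2.6.0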
my docker-compose.yml
version: "3.9"
services:
admin_site:
build:
context: ./
dockerfile: Dockerfile.local
volumes:
- .:/usr/src/app
ports:
- "8010:8010"
restart: always
I want to mount the current folder to /usr/src/app.
Dockerfile.local
FROM python:3.9.5
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y netcat
WORKDIR /usr/src/app
RUN pip install pipenv
RUN pipenv install --dev --system
When I try docker compose -f docker-compose.yml build, this error occurs:
ERROR:: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
I guess it means there is no Pipfile. However, there is a Pipfile in my current directory.
So I guess volumes: isn't applied before the Dockerfile runs?
How can I solve this problem?
Mounts always apply to the container, not to the image build.
So I would use:
# Before building the image, generate the requirements file:
#   $ pipenv lock --dev --requirements > requirements.txt
COPY requirements*.txt ./
RUN python -m pip install --no-cache-dir -r requirements.txt
This does not use your Pipfile directly; you have to generate the requirements file manually. There are some benefits to doing it like that:
Packages in the Docker image can't be changed by accident.
You can pin dependency versions separately, which is very helpful if a dependency package has a bug but you still need to update the main package.
Pipenv does not run lock on image build (which takes ages every time).
Furthermore, copying the whole folder forces docker build to invalidate that layer on any file change.
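One caveat on the lock command above: newer Pipenv releases removed the --requirements flag from pipenv lock, so depending on your Pipenv version the export step is one of:

$ pipenv lock --dev --requirements > requirements.txt   # older Pipenv
$ pipenv requirements --dev > requirements.txt          # newer Pipenv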
I want to create a Dockerfile that installs a Debian package (.deb) on Ubuntu 18.04. So far I've written this:
FROM ubuntu:18.04 AS ubuntu
RUN apt-get update
WORKDIR /Downloads/invisily
RUN apt-get install ./invisily.deb
All phases run fine except the last one. It shows this error:
E: Unsupported file ./invisily.deb given on commandline
The command '/bin/sh -c apt-get install ./invisily.deb' returned a non-zero code: 100
I'm new to Docker and cloud, so any help would be appreciated. Thanks!
Edit:
I solved it by putting the dockerfile and the debian file in the same directory and using COPY . ./
This is what my dockerfile looks like now:
FROM ubuntu:18.04 AS ubuntu
RUN apt-get update
WORKDIR /invisily
COPY . ./
USER root
RUN chmod +x a.deb && \
apt-get install a.deb
A few things:
WORKDIR is the working directory inside of your container.
You will need to copy the file invisily.deb from your local machine into the container when building your Docker image.
You can run multiple shell commands in a single RUN instruction by joining them across multiple lines.
Try something like this
FROM ubuntu:18.04 AS ubuntu
WORKDIR /opt/invisily
#Drop the invisily.deb in to the same directory as your Dockerfile
#This will copy it from local to your container, inside of /opt/invisily dir
COPY invisily.deb .
RUN apt-get update && \
    chmod +x invisily.deb && \
    apt-get install -y ./invisily.deb
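Note that the ./ prefix on the .deb path matters: apt-get only treats an argument containing a slash as a local file; a bare invisily.deb would be looked up as a package name in the configured repositories. (The chmod is kept from the original answer, though a .deb doesn't actually need the execute bit to be installed.)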
In your WORKDIR there isn't any invisily.deb file, so if you have it locally you can copy it into the container like this:
FROM ubuntu ...
WORKDIR /Downloads/invisily
RUN apt-get update
COPY ./path/to/invisily.deb ./
RUN chmod +x ./invisily.deb
RUN apt-get install -y ./invisily.deb
I am running my monolith application in a Docker container and Kubernetes on GKE.
The application contains Python and Node dependencies, plus webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build and deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes most of the time generating the bundle. To build the Docker image I am already using a high-spec worker.
To reduce the time I tried using the Kaniko builder.
Issue:
Docker layer caching works perfectly for the Python code, but when a JS or CSS file changes we need to generate a new bundle, and instead of generating a new bundle the build reuses the cached layer.
Is there any way to separate out building a new bundle versus using the cache, e.g. by passing some value to the Dockerfile?
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
I would suggest creating separate build pipelines for your base Docker images, since you know the npm and pip requirements don't change so frequently.
This will improve the speed considerably by cutting out repeated access to the npm and pip registries.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
You store those builder images in your registry and reuse them whenever something in the project changes.
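For example (registry.example.com is a placeholder for your own registry host):

$ docker tag python-builder:YOUR_TAG registry.example.com/python-builder:YOUR_TAG
$ docker push registry.example.com/python-builder:YOUR_TAG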
Something like this:
First Docker Builder // python-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker Builder // js-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your Application Multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing to do here; everything is already baked into the builder image
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
A question: why do you install curl and run the pip install -r requirements.txt command again in the final Docker image?
Triggering apt-get update and install every time without cleaning the apt cache (the /var/cache/apt folder) produces a bigger image.
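For instance, the final stage's apt step could clean up in the same layer (a sketch with the same packages as the original):

RUN apt-get update -yq \
 && apt-get install -yq curl \
 && rm -rf /var/lib/apt/lists/* /var/cache/apt/*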
As suggestion, use the docker build command with the option --no-cache to avoid caching result:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as I listed above.
Build the Docker images 1 and 2 only if you change your python-pip and node-npm requirements, otherwise keep them fixed for your project.
If any dependency requirement changes, then update the docker image involved and then the multistage one to point to the latest built image.
You should always rebuild only the source code of your project (CSS, JS, Python). This way you also get reproducible builds.
To optimize your environment and make it easier to copy files across the multi-stage builders, try using a virtualenv for the Python build.
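A minimal sketch of that idea (the /venv path is illustrative, not from the original Dockerfile):

FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
# install into a self-contained virtualenv instead of relying on /root/.cache
RUN python -m venv /venv \
 && /venv/bin/pip install -r requirements.txt

FROM python:3.5-slim
# copy only the populated virtualenv, not the pip cache
COPY --from=python-build /venv /venv
ENV PATH="/venv/bin:$PATH"
WORKDIR /app
COPY . .
EXPOSE 9595
CMD ["python", "run.py"]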