I want to clone a git repo and install a fresh neos.io (or something similar) in the Dockerfile via RUN. Later on I want to mount the directory, including the files, into my local filesystem.
According to this GitHub issue, that's not possible, since mounting the volume will remove the data.
How do I achieve the desired behaviour? Using CMD or ENTRYPOINT would, for example, clone the git repo on every start, which isn't necessary.
Dockerfile
FROM debian:stable
RUN apt-get update \
&& apt-get install -y git
WORKDIR /home/app
RUN git clone https://github.com/libgit2/libgit2
CMD ["sleep", "infinity"]
docker-compose.yml
version: "2"
services:
app:
build: .
# Uncomment this will remove data on docker-compose up
# volumes:
# - ./app:/home/app
You can clone the repo to an arbitrary directory, say /home/app-clone for example, and then in your ENTRYPOINT or CMD copy the files from that directory into your volume directory, with something like:
.
.
RUN git clone https://github.com/libgit2/libgit2 /home/app-clone
CMD cp -r /home/app-clone /home/app && sleep infinity
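If copying on every start is a concern, a small entrypoint script can seed the mounted directory only when it is empty. A minimal sketch, assuming the same /home/app-clone layout (the script itself is not from the original answer):
#!/bin/sh
# entrypoint.sh (hypothetical): seed the volume once, then run the container command
if [ -z "$(ls -A /home/app 2>/dev/null)" ]; then
    cp -r /home/app-clone/. /home/app/
fi
exec "$@"
In the Dockerfile you would then set ENTRYPOINT ["/entrypoint.sh"] and keep CMD ["sleep", "infinity"].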
Dockerfile
FROM debian:stable
RUN apt-get update \
&& apt-get install -y git
WORKDIR /home/app
RUN git clone https://github.com/libgit2/libgit2.git
CMD ["cp", "-r", "libgit2", "/tmp"]
docker-compose.yml
version: "2"
services:
app:
build: .
volumes:
- ./app:/tmp
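As a side note (this alternative is not in the original answers): if a bind mount to ./app is not strictly required, a named volume behaves differently. Docker populates a newly created named volume with whatever the image already has at the mount point, so the cloned repo would survive mounting. A sketch:
version: "2"
services:
  app:
    build: .
    volumes:
      - app-data:/home/app   # named volume: seeded from the image on first creation
volumes:
  app-data: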
Related
my docker-compose.yml
version: "3.9"
services:
admin_site:
build:
context: ./
dockerfile: Dockerfile.local
volumes:
- .:/usr/src/app
ports:
- "8010:8010"
restart: always
I want to mount the current folder to /usr/src/app.
Dockerfile.local
FROM python:3.9.5
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y netcat
WORKDIR /usr/src/app
RUN pip install pipenv
RUN pipenv install --dev --system
When I try to run docker compose -f docker-compose.yml build, this error occurs:
ERROR:: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
I guess it means there is no Pipfile. However, there is a Pipfile in my current directory.
So I guess volumes: doesn't take effect before the Dockerfile runs?
How can I solve this problem?
Mounts always apply to the container, not to the image build.
So I would use:
# DO
# $ pipenv lock --dev --requirements > requirements.txt
# before build!
COPY requirements*.txt ./
RUN python -m pip --no-cache-dir install -r requirements.txt
This does not use your Pipfile directly; you have to generate the requirements file manually. There are some benefits to doing it like that:
- Packages in the Docker image can't be changed by accident.
- You can pin versions for dependencies separately, which is very helpful if a dependency has a bug but you need to update the main package.
- Pipenv does not run lock during the image build (which takes ages every time).
Furthermore, copying the whole folder would force docker build to invalidate this layer on any file change.
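Note that recent Pipenv releases removed the --requirements flag from pipenv lock in favour of a dedicated subcommand, so the pre-build step would look roughly like this (a sketch, assuming Pipenv >= 2022.4):
# on the host, before building the image
pipenv requirements --dev > requirements.txt
docker compose -f docker-compose.yml build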
I'm using a Dockerfile to run a Gatsby project and install its node_modules. The Dockerfile has the below structure:
FROM node:alpine
EXPOSE 8000
RUN apk add --update --no-cache build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl \
&& curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& pip install --upgrade six awscli awsebcli
WORKDIR /app
COPY ./package.json .
RUN npm install
COPY . .
RUN yarn install && yarn cache clean
CMD ["yarn", "develop", "-H", "0.0.0.0" ]
And here is the docker-compose.yml:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - app/node_modules
      - .:/app
After running the docker-compose build command I'm getting the below error:
How can I solve this problem and install node_modules with Docker?
Usually these problems are related to internet censorship.
Run docker container prune and docker image prune, then use a proxy.
These commands remove cached and unused containers and images, so be careful if you have an important image or container.
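If a proxy really is needed, Docker has predefined proxy build arguments you can pass at build time. A sketch (the proxy address and image tag are placeholders, not from the original answer):
docker build \
  --build-arg HTTP_PROXY=http://your-proxy.example:3128 \
  --build-arg HTTPS_PROXY=http://your-proxy.example:3128 \
  -t gatsby-web .
With docker-compose you can export the same variables in your shell environment before running docker-compose build.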
I have a Dockerfile which has the command RUN python3 manage.py dumpdata --natural-foreign --exclude=auth.permission --exclude=contenttypes --indent=4 > data.json; this creates a JSON file.
When I build the Dockerfile, it creates an image with a specific name. When I run it using the command below and open a bash shell, I am able to see the data.json file that was created:
docker run -it --rm vijeth11/fassionplaza bash
(screenshot: files in the Docker container created via the above command)
But when I use the same image and run docker compose run web bash, I am not able to see the data.json file, while the other files are present in the container.
(screenshot: files in the Docker container created via Docker Compose)
Is there anything wrong in my Docker commands?
Command used to build:
docker build --no-cache -t vijeth11/fassionplaza .
Docker-compose.yml
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_DB=fashionplaza
ports:
- "5432:5432"
web:
image: vijeth11/fassionplaza
command: >
sh -c "ls -l && python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py loaddata data.json && gunicorn --bind :8000 --workers 3 FashionPlaza.wsgi"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY ./Backend /code/Backend
COPY ./frontEnd /code/frontEnd
WORKDIR /code/Backend
RUN pip3 install -r requirements.txt
WORKDIR /code/Backend/FashionPlaza
RUN python3 manage.py dumpdata --natural-foreign \
--exclude=auth.permission --exclude=contenttypes \
--indent=4 > data.json
RUN chmod 755 data.json
WORKDIR /code/frontEnd/FashionPlaza
RUN apt-get update -y
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt install nodejs -y
RUN npm i
RUN npm run prod
ARG buildtime_variable=PROD
ENV server_type=$buildtime_variable
WORKDIR /code/Backend/FashionPlaza
Thank you in advance.
You map your current directory to /code when you run the container, with these lines in your docker-compose file:
volumes:
  - .:/code
That hides all existing files in /code and replaces them with the contents of the mapped directory.
Since your data.json file is located in /code/Backend/FashionPlaza in the image, it becomes hidden and inaccessible.
The best thing to do is to map your volume to an empty directory in the image, so you don't inadvertently hide anything.
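For example, a compose stanza along these lines (the /code/host-data mount point is an assumption for illustration; any directory that is empty in the image works):
web:
  image: vijeth11/fassionplaza
  volumes:
    - .:/code/host-data   # empty in the image, so nothing baked in gets hidden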
I am a novice with Docker and I am trying to create a Docker image and use the Docker container, so I did the following.
My Dockerfile is:
FROM ubuntu:16.04
# # Front stack
# RUN apt-get install -y npm && \
# npm install -g @angular/cli
FROM python:3.6
RUN apt-get update
RUN apt-get install -y libpython-dev curl build-essential unzip python-dev libaio-dev libaio1 vim \
    rpm2cpio cpio python-pip dos2unix
RUN mkdir /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
RUN pip install --upgrade pip
COPY . /code/
WORKDIR /code
ENV PYTHONPATH=/code/py_lib
CMD ["bash", "-c", "tail -f /dev/null"]
My docker-compose file is:
version: '3.5'
services:
  testsample:
    image: toto/test-sample
    restart: unless-stopped
    env_file:
      - .env
    command: bash -c "pip3 install -r requirements.txt && tail -f /dev/null"
    # command: bash -c "tail -f /dev/null"
    volumes:
      - .:/code
I executed these commands:
docker build . -f Dockerfile
docker images
docker-compose up
This gave me an error:
Pulling testsample (toto/test-sample:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]y
Pulling testsample (toto/test-sample:)...
ERROR: pull access denied for toto/test-sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I tried docker login and I am able to connect.
So what would lead to this problem?
You have to provide a tag name when you build a Docker image from a Dockerfile, like the following:
docker build -t toto/test-sample -f Dockerfile .
-t here is for the tag name
-f here tells docker build the name of the Dockerfile (in this case it is optional, as Dockerfile is the default name)
If you put the Dockerfile in the same directory as your docker-compose.yml file, you can do the following:
version: '3.5'
services:
  testsample:
    image: toto/test-sample
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - .:/code
Then, do:
docker-compose up --build -d
Otherwise, if you are simply having problems building the image, you just need to run:
docker build -t toto/test-sample .
The build command should be:
docker build -t toto/test-sample .
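To confirm that Compose will pick up the local image instead of trying to pull it, a quick check could look like this (sketch):
docker build -t toto/test-sample .
docker images toto/test-sample   # the tag should now be listed locally
docker-compose up -d             # compose finds the local image instead of pulling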
I'm trying to dockerise my Pelican site project. I've created a docker-compose.yml file and a Dockerfile.
However, every time I try to build my project (docker-compose up) I get the following errors for both pip install and npm install:
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
...
Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
The directory structure of the project is as follows:
- **Dockerfile**
- **docker-compose.yml**
- content/
- pelican-plugins/
- src/
  - Themes/
  - Pelican config files
  - requirements.txt
  - gulpfile.js
  - package.js
All the pelican makefiles etc. are in the src directory.
I'm trying to load the content, src, and pelican-plugins directories as volumes so I can modify them on my local machine for the docker container to use.
Here is my Dockerfile:
FROM python:3
WORKDIR /src
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
# Install Node.js 8 and npm 5
RUN apt-get update
RUN apt-get -qq update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install -y nodejs
# Set the locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN npm install
RUN python -m pip install --upgrade pip
RUN pip install -r requirements.txt
ENV SRV_DIR=/src
RUN chmod +x $SRV_DIR
RUN make clean
VOLUME /src/output
RUN make devserver
RUN gulp
And here is my docker-compose.yml file:
version: '3'
services:
web:
build: .
ports:
- "80:80"
volumes:
- ./content:/content
- ./src:/src
- ./pelican-plugins:/pelican-plugins
volumes:
logvolume01: {}
It definitely looks like I have set up my volume directories properly in my Docker files...
Thanks in advance!
Your Dockerfile doesn't COPY (or ADD) any files at all, so the /src directory is empty.
You can verify this yourself. When you run docker build it will print out output like:
Step 13/22 : ENV LC_ALL en_US.UTF-8
---> Running in 3ab80c3741f8
Removing intermediate container 3ab80c3741f8
---> d240226b6600
Step 14/22 : RUN npm install
---> Running in 1d31955d5b28
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
The last line in each step with just a hex number is actually a valid image ID that's the final result of running each step, and you can then:
% docker run --rm -it d240226b6600 sh
# pwd
/src
# ls
To fix this you need a line in the Dockerfile like
COPY . .
You probably also need to change into the src subdirectory to run npm install and the like, given the directory layout you've shown. This can look like:
WORKDIR /src
COPY . .
# Either put "cd" into the command itself
# (Each RUN command starts a fresh container at the current WORKDIR)
RUN cd src && npm install
# Or change WORKDIRs
WORKDIR /src/src
RUN pip install -r requirements.txt
WORKDIR /src
Remember that everything in the Dockerfile happens before any setting in docker-compose.yml outside the build: block is even considered. Environment variables, volume mounts, and networking options for a container have no effect on the image build sequence.
In terms of Dockerfile style, your VOLUME declaration will have some tricky unexpected side effects and probably is unnecessary; I'd remove it. Your Dockerfile is also missing the CMD that the container should run. You should also combine RUN apt-get update && apt-get install into single commands; the way Docker layer caching works and the way the Debian repositories work, it's very easy to wind up with a cached package index that names files from a week ago that don't exist any more.
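A combined form might look like this (a sketch; the package list is only illustrative):
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential curl \
 && rm -rf /var/lib/apt/lists/*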
While the setup you're describing is fairly popular, it also essentially hides everything the Dockerfile does with your local source tree. The npm install you're describing here, for example, will be a no-op because the volume mount will hide /src/src/node_modules. I generally find it easier to just run python, npm, etc. locally while I'm developing, rather than write and debug this 50-line YAML file and run sudo docker-compose up.