Changes in my requirements.txt are not being reflected when I run:
docker-compose -f docker-compose-dev.yml up -d
docker-compose-dev.yml
version: '3.6'

services:

  web:
    build:
      context: ./services/web
      dockerfile: Dockerfile-dev
    volumes:
      - './services/web:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db

  web-db:
    build:
      context: ./services/web/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

  nginx:
    build:
      context: ./services/nginx
      dockerfile: Dockerfile-dev
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
      - client

  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 3007:3000
    environment:
      - NODE_ENV=development
      - REACT_APP_WEB_SERVICE_URL=${REACT_APP_WEB_SERVICE_URL}
    depends_on:
      - web
Dockerfile-dev
# base image
FROM python:3.6-alpine

# install dependencies
RUN apk update && \
    apk add --virtual build-deps gcc python-dev musl-dev && \
    apk add libffi-dev && \
    apk add postgresql-dev && \
    apk add netcat-openbsd && \
    apk add bind-tools && \
    apk add --update --no-cache g++ libxslt-dev && \
    apk add jpeg-dev zlib-dev

ENV PACKAGES="\
    dumb-init \
    musl \
    libc6-compat \
    linux-headers \
    build-base \
    bash \
    git \
    ca-certificates \
    freetype \
    libgfortran \
    libgcc \
    libstdc++ \
    openblas \
    tcl \
    tk \
    libssl1.0 \
    "

ENV PYTHON_PACKAGES="\
    numpy \
    matplotlib \
    scipy \
    scikit-learn \
    nltk \
    "

RUN apk add --no-cache --virtual build-dependencies python3 \
    && apk add --virtual build-runtime \
        build-base python3-dev openblas-dev freetype-dev pkgconfig gfortran \
    && ln -s /usr/include/locale.h /usr/include/xlocale.h \
    && python3 -m ensurepip \
    && rm -r /usr/lib/python*/ensurepip \
    && pip3 install --upgrade pip setuptools \
    && ln -sf /usr/bin/python3 /usr/bin/python \
    && ln -sf pip3 /usr/bin/pip \
    && rm -r /root/.cache \
    && pip install --no-cache-dir $PYTHON_PACKAGES \
    && pip3 install 'pandas<0.21.0' \
    && apk del build-runtime \
    && apk add --no-cache --virtual build-dependencies $PACKAGES \
    && rm -rf /var/cache/apk/*

# set working directory
WORKDIR /usr/src/app

# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt # <--- refer to EDIT
RUN pip install -r requirements.txt

# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh

# add app
COPY . /usr/src/app

# run server
CMD ["/usr/src/app/entrypoint.sh"]
What am I missing?
EDIT
Like the accepted answer in "Docker how to run pip requirements.txt only if there was a change?", I'm already copying the requirements.txt file in a separate build step before adding the entire application into the image, but it does not seem to work.
I think the problem is likely that docker-compose up alone will not rebuild your images when you make changes. To get docker-compose to pick up your changes to requirements.txt, you need to pass the --build flag to docker-compose.
That is, instead run:
docker-compose -f docker-compose-dev.yml up --build -d
This forces docker-compose to rebuild the image. Note, however, that it rebuilds all images in the docker-compose file, which may or may not be what you want.
If you only want to rebuild the image for a single service, you can first run docker-compose -f docker-compose-dev.yml build web and then run your original docker-compose command, as shown below. More info on the build command is in the docker-compose documentation.
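For example, with the dev compose file from the question:
docker-compose -f docker-compose-dev.yml build web
docker-compose -f docker-compose-dev.yml up -d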
Try installing the requirements from the copied file.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Here is an example from their Dockerfile best-practices page:
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
This is what you have:
RUN pip install -r requirements.txt
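Adapted to the Dockerfile above, that could look like this (a sketch; /tmp is just the convention from the docs, any path outside the app directory works):
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install --requirement /tmp/requirements.txt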
Then, after you have changed your Dockerfile, you have to stop your container, remove your image, build a new one, and run a container from it.
Stop the container and remove the image:
docker-compose down --rmi all
--rmi all removes all images used by the services. You might want to use --rmi local instead, which only removes images that don't have a custom tag.
And to start it again (if you don't use the default parameters, adjust these commands with your arguments):
docker-compose up
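With the question's dev compose file, the full cycle would be:
docker-compose -f docker-compose-dev.yml down --rmi all
docker-compose -f docker-compose-dev.yml up -d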
Update
In case you have a running container and you do not want to stop it and rebuild the image (for example, if you just want to install a package, run some commands, or even start a new application), you can connect to the container from your local machine and run commands inside it:
docker exec -it [CONTAINER_ID] bash
To get [CONTAINER_ID], run
docker ps
Note: docker-compose ps will give you container names; docker exec also accepts the container name in place of the ID.
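For example, combining the two into one line (a sketch; docker ps -f "name=..." does substring matching, so pick a string that matches only your target container, and note that Alpine-based images ship sh rather than bash):
docker exec -it $(docker ps -qf "name=web") sh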
Related
I have a yii1 application. And I have a Dockerfile. And I had a docker-compose file.
But for the moment I only have the one application, because I have a remote database, so the database is not in a container.
So I have this Dockerfile:
FROM php:7.3-apache

#COPY BaltimoreCyberTrustRoot.crt.pem /usr/local/share/ca-certificates/AzureDB.crt

# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf

# Enable rewrite mode
RUN a2enmod rewrite

# Install necessary packages
RUN apt-get update && \
    apt-get install \
    libzip-dev \
    wget \
    git \
    unzip \
    -y --no-install-recommends

# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql

# RUN pecl install -o -f xdebug-3.1.3 \
#     && rm -rf /tmp/pear

# Copy composer installable
COPY ./install-composer.sh ./

# Copy php.ini
COPY ./php.ini /usr/local/etc/php/

#COPY BaltimoreCyberTrustRoot.crt.pem /var/www/html/

EXPOSE 80

# Cleanup packages and install composer
RUN apt-get purge -y g++ \
    && apt-get autoremove -y \
    && rm -r /var/lib/apt/lists/* \
    && rm -rf /tmp/* \
    && sh ./install-composer.sh \
    && rm ./install-composer.sh

# Change the current working directory
WORKDIR /var/www/html

# Change the owner of the container document root
RUN chown -R www-data:www-data /var/www

# Start Apache in foreground
CMD ["apache2-foreground"]
And I had this docker-compose file:
version: '3'
services:
  web:
    build: ./docker
    container_name: dockeryiidisc
    ports:
      - 80:80
      - 443:443
    volumes:
      - C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/
      - C:\xampp\htdocs\webScraper:/var/www/html/
And that worked.
But now I only want to use the Dockerfile. So I tried this:
docker build -t docker_webcrawler .
and this command:
docker run -d -p 80:80 --name cntr-apache docker_webcrawler
But if I then go to http://localhost:80, I only see an empty directory listing:
Index of /
[ICO] Name Last modified Size Description
So what do I have to change so that I only need the Dockerfile?
Thank you
It looks like you're missing the volume mappings that you have in your docker-compose file. Try this:
docker run -d -p 80:80 --name cntr-apache -v C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/ -v C:\xampp\htdocs\webScraper:/var/www/html/ docker_webcrawler
I have a profile page in my app which I am migrating to Docker. I want to have a default profile picture for users who don't upload any picture, and for that I need to store that picture in my container.
I have my default profile pic stored at data/web/media/default.jpg and want it copied to vol/web/media/default.jpg in my Docker container.
I tried COPY but got this error:
failed to solve: rpc error: code = Unknown desc = failed to compute cache key: "/data" not found: not found
My Dockerfile:
FROM python:3.9-alpine3.13
LABEL maintainer="mRk"

ENV PYTHONUNBUFFERED 1

COPY ./requirements.txt /requirements.txt
COPY ./app /app
COPY ./scripts /scripts
COPY ./data /vol

WORKDIR /app
EXPOSE 8000

RUN python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    apk add --update --no-cache postgresql-client && \
    apk add --update --no-cache --virtual .tmp-deps \
        build-base postgresql-dev musl-dev linux-headers && \
    apk add --virtual build-deps gcc python3-dev musl-dev && \
    apk add jpeg-dev zlib-dev libjpeg && \
    pip install Pillow && \
    apk del build-deps && \
    /py/bin/pip install -r /requirements.txt && \
    apk del .tmp-deps && \
    adduser --disabled-password --no-create-home app && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    cp ./data/web/media/default.jpg /vol/web/media/default.jpg && \
    chown -R app:app /vol && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts

ENV PATH="/scripts:/py/bin:$PATH"

USER app

CMD ["run.sh"]
My docker-compose file:
version: "3.9"

services:
  app:
    build:
      context: .
    restart: always
    volumes:
      - static-data:/vol/web
    environment:
      - DB_HOST=db
      - DB_NAME=${DB_NAME}
      - DB_USER=${DB_USER}
      - DB_PASS=${DB_PASS}
      - SECRET_KEY=${SECRET_KEY}
      - ALLOWED_HOSTS=${ALLOWED_HOSTS}
      - EMAIL_USER=${EMAIL_USER}
      - EMAIL_PASS=${EMAIL_PASS}
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    restart: always
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
  proxy:
    build:
      context: ./proxy
    restart: always
    depends_on:
      - app
    ports:
      - 80:8000
    volumes:
      - static-data:/vol/static

volumes:
  postgres-data:
  static-data:
It looks like you are trying to copy something from the data dir:
cp ./data/web/media/default.jpg /vol/web/media/default.jpg && \
But this directory does not exist at that path inside the image. You already copied everything from the data dir on your host machine into the container's vol dir with this line:
COPY ./data /vol
That means all your files and dirs from data are copied into the vol dir in the container. So I guess you do not need the cp line at all, as the file is already in your container, in the vol directory.
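A minimal sketch of the tail of the RUN step with the cp line dropped (the earlier parts of the RUN command stay as they are; the mkdir -p calls are harmless even though COPY already created the media directory):
RUN ... && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    chown -R app:app /vol && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts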
This line also isn't going to work as written, since you set your WORKDIR to /app earlier:
cp ./data/web/media/default.jpg /vol/web/media/default.jpg && \
You have to change your working directory or use an absolute path. Another solution would be to move this instruction below the RUN command, since you don't need to be in /app to run it:
WORKDIR /app
You can use Docker volumes, which help you share files between the host and the container:
https://docs.docker.com/storage/volumes/
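For example, a minimal sketch with a hypothetical volume name and image name:
docker volume create app-media
docker run -d -v app-media:/vol/web/media my-app-image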
I am a Docker newbie and I can't really figure out how the changes made to my working directory will be continuously copied to the Docker container. Is there a command that copies all my changes to the Docker container all the time?
Edit: I added the Dockerfile and docker-compose file.
My Dockerfile:
FROM scratch
ADD centos-7-x86_64-docker.tar.xz /

LABEL \
    org.label-schema.schema-version="1.0" \
    org.label-schema.name="CentOS Base Image" \
    org.label-schema.vendor="CentOS" \
    org.label-schema.license="GPLv2" \
    org.label-schema.build-date="20201113" \
    org.opencontainers.image.title="CentOS Base Image" \
    org.opencontainers.image.vendor="CentOS" \
    org.opencontainers.image.licenses="GPL-2.0-only" \
    org.opencontainers.image.created="2020-11-13 00:00:00+00:00"

RUN yum clean all && yum update -y && yum -y upgrade
RUN yum groupinstall "Development Tools" -y
RUN yum install -y wget gettext-devel curl-devel openssl-devel perl-devel perl-CPAN zlib-devel && wget https://github.com/git/git/archive/v2.26.2.tar.gz \
    && tar -xvzf v2.26.2.tar.gz && cd git-2.26.2 && make configure && ./configure --prefix=/usr/local && make install

# RUN mkdir -p /root/.ssh && \
#     chmod 0700 /root/.ssh && \
#     ssh-keyscan github.com > /root/.ssh/known_hosts
# RUN ssh-keygen -q -t rsa -N '' -f /id_rsa
# RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
#     echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
#     chmod 600 /root/.ssh/id_rsa && \
#     chmod 600 /root/.ssh/id_rsa.pub

RUN ls
RUN cd / && git clone https://github.com/odoo/odoo.git \
    && cd odoo \
    && git fetch \
    && git checkout 9.0
RUN yum install python-devel libxml2-devel libxslt-dev openldap-devel libtiff-devel libjpeg-devel libzip-devel freetype-devel lcms2-devel \
    libwebp-devel tcl-devel tk-devel python-pip nodejs
RUN pip install setuptools==1.4.1 beautifulsoup4==4.9.3 pillow openpyxl==2.6.4 luhn gmp-devel paramiko==1.7.7.2 python2-secrets cffi pysftp==0.2.8
RUN pip install -r requirements.txt
RUN npm install -g less
CMD ["/bin/bash","git"]
My docker-compose file:
version: '3.3'
services:
  app: &app
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    container_name: app
    tty: true
  db:
    image: postgres:9.2.18
    environment:
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    volumes:
      - ./docker/db/pg-data:/var/lib/postgresql/data
  odoo:
    <<: *app
    command: python odoo.py -w odoo -r odoo
    ports:
      - '8069:8069'
    depends_on:
      - db
If I understand correctly, you want to mount a path from the host into a container, which can be done using volumes. Something like this keeps the folders in sync, which can be useful for development:
docker run -v /path/to/local/folder:/path/in/container busybox
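In the docker-compose file above, the equivalent would be a bind mount on the app service (a sketch; the container path is a placeholder to adjust):
services:
  app:
    volumes:
      - .:/path/in/container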
I have this docker-compose config.
The "app" is a PHP application. As you can see, 3 env vars are passed to the container.
However, after docker-compose up, PHP doesn't see these. They are not returned by getenv() and they cannot be found $_ENV either.
What's wrong here?
version: '3.4'

services:
  db:
    image: postgres:11.0
    restart: always
    environment:
      POSTGRES_PASSWORD: testuser
      POSTGRES_USER: test
      POSTGRES_DB: db
    volumes:
      - /data/db
  redis:
    image: redis:latest
    restart: always
    volumes:
      - /data/redis
  app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=postgresql://db:5432/db
      - REDIS_URL=tcp://redis:6379?database=1
      - NODE_ENV=development
    ports:
      - '80:80'
    volumes:
      - '${BASEDIR}:/var/www/some'
Here is my Dockerfile:
FROM ubuntu:18.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y \
    software-properties-common \
    apt-utils \
    tzdata \
    locales

RUN add-apt-repository ppa:ondrej/php
RUN apt-get update
RUN locale-gen en_US.UTF-8
RUN echo "Europe/Budapest" > /etc/timezone && dpkg-reconfigure -f noninteractive tzdata

RUN apt-get -y update && apt-get -y install \
    libglu1-mesa \
    less \
    vim \
    nginx \
    php7.4 \
    php7.4-fpm \
    php7.4-cli \
    php7.4-common \
    php7.4-curl \
    php-deepcopy \
    php7.4-gd \
    php7.4-mbstring \
    php7.4-pgsql \
    php7.4-soap \
    php7.4-xdebug \
    php7.4-zip \
    php7.4-xml \
    phpunit \
    npm && npm i -g npm

ENV NGINX_RUN_USER www-data
ENV NGINX_RUN_GROUP www-data
ENV NGINX_LOG_DIR /var/log/nginx
ENV NGINX_LOCK_DIR /var/lock/nginx
ENV NGINX_PID_FILE /run/nginx.pid

RUN mkdir www
RUN mkdir -p /var/www/dbv.local/html
RUN chmod -R 755 /var/www/dbv.local

COPY ./php.ini /etc/php/7.4/fpm/php.ini
COPY ./dbv.local /etc/nginx/sites-available/dbv.local
COPY ./lib/aspose_php.so /usr/lib/php/20190902
COPY ./lib/libaspose_cpp_clang3_libstdcpp.so /usr/lib/libaspose_cpp_clang3_libstdcpp.so
COPY ./lib/libAspose.Slides_clang3_libstdcpp.so /usr/lib/libAspose.Slides_clang3_libstdcpp.so
COPY ./lib/libphpcpp.so.2.2 /usr/lib/libphpcpp.so.2.2

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN ln -s /etc/nginx/sites-available/dbv.local /etc/nginx/sites-enabled/

ADD ./xdebug.ini /etc/php/7.4/mods-available/xdebug.ini
ADD ./aspose_php.ini /etc/php/7.4/mods-available/aspose_php.ini
ADD ./start.sh /root/start.sh

RUN ln -s /etc/php/7.4/mods-available/xdebug.ini /etc/php/7.4/mods-available/20-xdebug.ini
RUN ln -s /etc/php/7.4/mods-available/aspose_php.ini /etc/php/7.4/fpm/conf.d/aspose_php.ini
RUN ln -s /etc/php/7.4/mods-available/aspose_php.ini /etc/php/7.4/cli/conf.d/aspose_php.ini

RUN rm -rf /var/lib/apt/lists/*
RUN apt-get clean

CMD ["/root/start.sh"]
EXPOSE 80 9000 5432
Edit:
start.sh is just a one-liner:
service php7.4-fpm start && nginx
Ubuntu 18 is necessary. I could use an official Nginx image, though. Maybe that's the issue?
I'm in the process of learning Docker Swarm. I have a three-node NUC setup running Docker Swarm at the moment on Ubuntu 16.04. I am looking to build a 2-node ClickHouse cluster using the official image from:
https://hub.docker.com/r/yandex/clickhouse-server/dockerfile
I can run this easily as an image on one node, but I am trying to deploy the Docker image to two of the nodes so I can build the cluster from there, using this documentation:
https://docs.docker.com/engine/swarm/stack-deploy/
But I am getting the following error when I run docker-compose up -d:
ERROR: Service 'builder' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder795701575/docker_related_config.xml: no such file or directory
directory map:
my_app
----docker-compose.yml
----docker
-------client
-------server
-------builder
----------Dockerfile
Dockerfile:
https://hub.docker.com/r/yandex/clickhouse-server/dockerfile
FROM ubuntu:18.04

ARG repository="deb http://repo.yandex.ru/clickhouse/deb/stable/ main/"
ARG version=19.1.13
ARG gosu_ver=1.10

RUN apt-get update \
    && apt-get install --yes --no-install-recommends \
        apt-transport-https \
        dirmngr \
        gnupg \
    && mkdir -p /etc/apt/sources.list.d \
    && apt-key adv --keyserver keyserver.ubuntu.com --recv E0C56BD4 \
    && echo $repository > /etc/apt/sources.list.d/clickhouse.list \
    && apt-get update \
    && env DEBIAN_FRONTEND=noninteractive \
        apt-get install --allow-unauthenticated --yes --no-install-recommends \
        clickhouse-common-static=$version \
        clickhouse-client=$version \
        clickhouse-server=$version \
        libgcc-7-dev \
        locales \
        tzdata \
        wget \
    && rm -rf \
        /var/lib/apt/lists/* \
        /var/cache/debconf \
        /tmp/* \
    && apt-get clean

ADD https://github.com/tianon/gosu/releases/download/1.10/gosu-amd64 /bin/gosu

RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8

RUN mkdir /docker-entrypoint-initdb.d

COPY docker_related_config.xml /etc/clickhouse-server/config.d/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x \
    /entrypoint.sh \
    /bin/gosu

EXPOSE 9000 8123 9009
VOLUME /var/lib/clickhouse

ENV CLICKHOUSE_CONFIG /etc/clickhouse-server/config.xml

ENTRYPOINT ["/entrypoint.sh"]
docker-compose.yml
https://github.com/yandex/ClickHouse/blob/master/docker-compose.yml
version: "2"
services:
builder:
image: yandex/clickhouse-builder
build: docker/builder
client:
image: yandex/clickhouse-client
build: docker/client
command: ['--host', 'server']
server:
image: yandex/clickhouse-server
build: docker/server
ports:
- 8123:8123
Am I approaching this incorrectly? Help is appreciated.
Update:
I attempted the solution from the comments, but it did not work:
ERROR: Service 'builder' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder511288209/docker_related_config.xml: no such file or directory
Look at the GitHub repository for this project and try to build it from there: https://github.com/yandex/ClickHouse/tree/master/docker/server
Don't just copy the Dockerfile; clone the project and build it from there, as sketched below.
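For example (a sketch; the docker/server path follows the repository layout linked above):
git clone https://github.com/yandex/ClickHouse.git
cd ClickHouse
docker build -t yandex/clickhouse-server docker/server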