Docker-compose and PHP composer not found although the build says the opposite - docker

I'm quite new to Docker. I have a Laravel project I built using Laragon, and I would like to turn it into a Docker Compose project. My problem is that, even though the terminal shows no errors when composer installs, I can't see anything it installed inside the container.
What I have:
I have a docker-compose.yml with PHP and NGINX services (a rough illustration of such a file is sketched just below this list)
I have a Dockerfile for the laravel app
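I haven't included my compose file here; for reference, a typical docker-compose.yml for this kind of PHP + NGINX setup looks roughly like the following (service names and mounted paths are illustrative assumptions, not my exact file):
version: "3.7"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: digitalocean.com/php
    working_dir: /var/www
    volumes:
      - ./:/var/www        # project directory mounted over /var/www
  webserver:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/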
Here is the Dockerfile:
FROM php:7.3.19-fpm-stretch
ENV ACCEPT_EULA=Y
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
gnupg \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
libzip-dev \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl
# Microsoft SQL Server Prerequisites
RUN apt-get update \
&& curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/9/prod.list \
> /etc/apt/sources.list.d/mssql-release.list \
&& apt-get install -y --no-install-recommends \
locales \
apt-transport-https \
&& echo "en_US.UTF-8 UTF-8" > /etc/locale.gen \
&& locale-gen \
&& apt-get update \
&& apt-get -y --no-install-recommends install \
unixodbc-dev \
msodbcsql17
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
RUN pecl install sqlsrv pdo_sqlsrv xdebug \
&& docker-php-ext-enable sqlsrv pdo_sqlsrv xdebug
# Install composer
RUN curl -sS https://getcomposer.org/installer | \
php -- --install-dir=/usr/bin/ --filename=composer
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Run composer install
RUN composer install --no-scripts --no-autoloader
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
RUN composer dump-autoload --optimize
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 9000
EXPOSE 9000
# Start php-fpm server
CMD ["php-fpm"]
What I get when I run docker-compose build:
[...]
Step 11/21 : RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
---> Running in 9ee162ed2895
All settings correct for using Composer
Downloading...
Composer (version 1.10.8) successfully installed to: /usr/bin/composer
Use it: php /usr/bin/composer
Removing intermediate container 9ee162ed2895
---> dc1df86b9a7b
Step 12/21 : COPY composer.lock composer.json /var/www/
---> 7bcf92a0a647
Step 13/21 : RUN composer install --no-scripts --no-autoloader
---> Running in f1695442d1db
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 113 installs, 0 updates, 0 removals
- Installing doctrine/inflector (2.0.3): Downloading (100%)
[...] (all composer packages seem to be installed correctly)
Step 17/21 : RUN composer dump-autoload --optimize
---> Running in 18b8c992ace9
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
Discovered Package: facade/ignition
Discovered Package: fideloper/proxy
Discovered Package: fruitcake/laravel-cors
Discovered Package: laravel/socialite
Discovered Package: laravel/tinker
Discovered Package: nesbot/carbon
Discovered Package: nunomaduro/collision
Discovered Package: socialiteproviders/manager
Package manifest generated successfully.
Generated optimized autoload files containing 4925 classes
Removing intermediate container 18b8c992ace9
---> d5075486971d
Step 18/21 : COPY --chown=www:www . /var/www
---> 15fcdde49457
Step 19/21 : USER www
---> Running in be9ab541543c
Removing intermediate container be9ab541543c
---> e17d1ae7a052
Step 20/21 : EXPOSE 9000
---> Running in 3e09f6279930
Removing intermediate container 3e09f6279930
---> 7a3649b9df62
Step 21/21 : CMD ["php-fpm"]
---> Running in 0db51df5de3c
Removing intermediate container 0db51df5de3c
---> fbb8e77bb8d1
Successfully built fbb8e77bb8d1
Successfully tagged digitalocean.com/php:latest
The docker-compose build looks like it's working.
When I run docker-compose up -d and go to http://localhost, I get a PHP error:
Fatal error: require(): Failed opening required '/var/www/public/../vendor/autoload.php' (include_path='.:/usr/local/lib/php') in /var/www/public/index.php on line 24
What I did so far:
Delete all the containers/images, rebuild
Edit the Dockerfile according to some tutorials I found on the internet
Checked whether there is a vendor directory in /var/www. Result: no.
Tried to execute composer install directly inside the container (via docker exec -it <container-name> /bin/sh). Result: /bin/sh: composer: not found. (The commands are sketched below.)
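For reference, this is roughly what those checks looked like inside the running container (the container name is just a placeholder for whatever docker-compose named it):
docker exec -it <container-name> /bin/sh
ls /var/www          # the vendor/ directory is not there
composer install     # /bin/sh: composer: not found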
I can't see where I'm wrong, I don't understand where exactly composer installed its packages, and I don't know how to run my project...
Thanks in advance.

Related

Installing a python project using Poetry in a Docker container

I am using Poetry to install a Python project in a Docker container. Below you can find my Dockerfile, which used to work fine until recently, when I switched to a new version of Poetry (1.2.1) and the new recommended Poetry installer:
# pull official base image
FROM ubuntu:20.04
ENV PATH = "${PATH}:/home/poetry/bin"
ENV APP_HOME=/home/app/web
RUN apt-get -y update && \
apt upgrade -y && \
apt-get install -y \
python3-pip \
curl \
netcat \
gunicorn && \
rm -fr /var/lib/apt/lists
# alias python2 to python3
RUN ln -s /usr/bin/python3 /usr/bin/python
# Install Poetry
RUN mkdir -p /home/poetry && \
curl -sSL https://install.python-poetry.org | POETRY_HOME=/home/poetry python -
# Cleanup
RUN apt-get remove -y curl && \
apt-get clean
RUN pip install --upgrade pip && \
pip install cryptography && \
pip install psycopg2-binary
# create directory for the app user
# create the app user
# create the appropriate directories
RUN adduser --system --group app && \
mkdir -p $APP_HOME/static-incdtim && \
mkdir -p $APP_HOME/mediafiles
# copy project
COPY . $APP_HOME
WORKDIR $APP_HOME
# Install Python packages
RUN poetry config virtualenvs.create false
RUN poetry install --only main
# copy entrypoint-prod.sh
COPY ./entrypoint.incdtim.prod.sh $APP_HOME/entrypoint.sh
RUN chmod a+x $APP_HOME/entrypoint.sh
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
The poetry install works fine: I attached to a running container, ran it myself, and found that it works without problems. However, when I open a Python console and try to import a module (django) which is installed by the Poetry project, the module is not found. Please note that I am installing my project in the system environment (poetry config virtualenvs.create false). I verified that there is only one version of Python installed in the Docker container. The specific error I get when trying to import a Python module installed by Poetry is: ModuleNotFoundError: No module named xxxx
Although this is not an answer, it is too long to fit within the comment section. It is rather a piece of advice:
declare your ENV variables at the top of the Dockerfile to make it easier to read.
merge the multiple RUN commands together to avoid creating useless intermediate layers. In the particular case of apt-get install, this also prevents you from installing packages from a list that dates back to the first "apt-get update": since the command line has not changed, Docker will not re-execute the cached command and thus will not refresh the package list.
avoid copying all the files in "." when you have previously copied some specific files to specific places.
Here, your Dockerfile could rather look like:
# pull official base image
FROM ubuntu:20.04
ENV PATH="${PATH}:/home/poetry/bin"
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN apt-get -y update && \
apt upgrade -y && \
apt-get install -y \
python3-pip \
curl \
netcat \
gunicorn && \
rm -fr /var/lib/apt/lists
# alias python2 to python3
RUN ln -s /usr/bin/python3 /usr/bin/python
# Install Poetry
RUN mkdir -p /home/poetry && \
curl -sSL https://install.python-poetry.org | POETRY_HOME=/home/poetry python -
# Cleanup
RUN apt-get remove -y \
curl && \
apt-get clean
RUN pip install --upgrade pip && \
pip install cryptography && \
pip install psycopg2-binary
# create directory for the app user
# create the app user
# create the appropriate directories
RUN mkdir -p /home/app && \
adduser --system --group app && \
mkdir -p $APP_HOME/static-incdtim && \
mkdir -p $APP_HOME/mediafiles
WORKDIR $APP_HOME
# copy project
COPY . $APP_HOME
# Install Python packages
RUN poetry config virtualenvs.create false && \
poetry install --only main
# copy entrypoint-prod.sh
RUN cp $APP_HOME/entrypoint.incdtim.prod.sh $APP_HOME/entrypoint.sh && \
chmod a+x $APP_HOME/entrypoint.sh && \
chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
UPDATE:
Let's get back to your question. Having your program running okay when you "run it yourself" does not mean all the dependencies are met. Indeed, this can mean that your module has not been imported yet (and thus has not triggered the ModuleNotFoundError exception yet).
In order to validate this theory, you can do either of the following (a quick sketch is given right after this list):
create a simple application which imports the failing module and then quits. If the import succeeds then there is something weird indeed.
list the installed modules with poetry show --latest. If the package is listed, then there is something weird indeed.
If none of the above indicates the module is installed, that just means the module is not installed and you should update your Dockerfile to install it.
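For example, the checks could look like this (django is just the module from the question; adapt the names to your project):
# run inside the container (e.g. via docker exec)
python -c "import django; print(django.__version__)"   # fails immediately if the module is missing
poetry show --latest | grep -i django                   # lists the package if Poetry installed it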
NOTE: I do not know much about Poetry, but you may want to have a list of external dependencies to be met for your application. In the case of pip3, that list is expressed in a file named requirements.txt and can be installed with pip3 install -r requirements.txt.
It turns out this is a known bug in Poetry: https://github.com/python-poetry/poetry/issues/6459

Docker image build failure - wget

I have a Dockerfile in which I use wget to download something into the image. But the build is failing with 'wget command not found'. When I googled, I found suggestions to install wget like below:
RUN apt update && apt upgrade
RUN apt install wget
Dockerfile:
FROM openjdk:17
LABEL maintainer="app"
ARG uname
ARG pwd
RUN useradd -ms /bin/bash -u 1000 user1
COPY . /app
WORKDIR /app
RUN ./gradlew build -PmavenUsername=$uname -PmavenPassword=$pwd
ARG YOURKIT_VERSION=2021.11
ARG POLARIS_YK_DIR=YourKit-JavaProfiler-2019.8
RUN wget https://www.yourkit.com/download/docker/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip --no-check-certificate -P /tmp/ && \
unzip /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip -d /usr/local && \
mv /usr/local/YourKit-JavaProfiler-${YOURKIT_VERSION} /usr/local/$POLARIS_YK_DIR && \
rm /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip
EXPOSE 10001
EXPOSE 8080
EXPOSE 5005
USER 1000
ENTRYPOINT ["sh", "/docker_entrypoint.sh"]
On doing this I am getting the error apt-get not found. Can someone suggest a solution?
The openjdk image you use is based on Oracle Linux, which uses microdnf rather than apt as its package manager.
To install wget (and unzip which you also need), you can add this to your Dockerfile:
RUN microdnf update \
&& microdnf install --nodocs wget unzip \
&& microdnf clean all \
&& rm -rf /var/cache/yum
The commands clean up the package cache after installing, to keep the image size as small as possible.
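Placed in the Dockerfile from the question, the install step just needs to come before the RUN wget line, roughly like this (abridged to show the relevant ordering; everything else stays as in the question):
FROM openjdk:17
# install the tools before any RUN step that needs them
RUN microdnf update \
 && microdnf install --nodocs wget unzip \
 && microdnf clean all \
 && rm -rf /var/cache/yum
# ... COPY, gradlew build and the wget/unzip step follow unchanged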

Docker build command fails on COPY

Hi, I have a Dockerfile which is failing on the COPY command. It was building fine initially, but then it suddenly started failing during the build process. The Dockerfile basically sets up the dev environment and authenticates with GCP.
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils \
bc \
build-essential \
cmake \
curl \
zlib1g-dev \
libssl-dev \
libsqlite3-dev \
python3-pip \
python3-setuptools \
unzip \
g++ \
git \
python-tk
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& rm -rf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make install \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
pyopenssl \
ndg-httpsclient \
pyasn1
## Download and Install Google Cloud SDK
RUN mkdir -p /usr/local/gcloud \
&& curl https://sdk.cloud.google.com > install.sh \
&& bash install.sh --disable-prompts --install-dir=${DIRECTORY}
# Adding the package path to directory
ENV PATH $PATH:${DIRECTORY}/google-cloud-sdk/bin
# working directory
WORKDIR /usr/src/app
COPY requirements.txt ./ \
testproject-264512-9de8b1b35153.json ./
It fails at this step:
Step 13/21 : COPY requirements.txt ./ testproject-264512-9de8b1b35153.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder942576416/testproject-264512-9de8b1b35153.json: no such file or directory
Any leads in this would be helpful.
How are you running the docker build command?
In the Docker best practices I've read that the build fails if you try to build your image from stdin using -:
Attempting to build a Dockerfile that uses COPY or ADD will fail if this syntax is used. The following example illustrates this:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
docker build -t myimage:latest -<<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
# observe that the build fails
...
Step 2/3 : COPY somefile.txt .
COPY failed: stat /var/lib/docker/tmp/docker-builder249218248/somefile.txt: no such file or directory
I've reproduced the issue... Here is my Dockerfile:
FROM alpine:3.7
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# working directory
WORKDIR /usr/src/app
COPY kk.txt ./ \
kk.2.txt ./
If I create the image by running docker build -t testimage:1 [DOCKERFILE_FOLDER], docker creates the image and it works correctly.
However, if I try the same command from stdin as:
docker build -t test:2 - <<EOF
FROM alpine:3.7
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF
I get the following error:
Step 1/6 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/6 : ENV PYTHON_VERSION="3.6.5"
---> Using cache
---> 734d2a106144
Step 3/6 : ENV BUCKET_NAME='detection-sandbox'
---> Using cache
---> 18fba29fffdc
Step 4/6 : ENV DIRECTORY='/usr/local/gcloud'
---> Using cache
---> d926a3b4bc85
Step 5/6 : WORKDIR /usr/src/app
---> Using cache
---> 57a1868f5f27
Step 6/6 : COPY kk.txt ./ kk.2.txt ./
COPY failed: stat /var/lib/docker/tmp/docker-builder518467298/kk.txt: no such file or directory
It seems that docker builds images from /var/lib/docker/tmp/ if you build the image from stdin, so there is no build context and ADD or COPY commands don't work.
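If you really do want to pipe the Dockerfile through stdin, the documented variant is to pass it via -f- while still giving a directory as the build context, along these lines (tag name is arbitrary, files are the same as above):
# the Dockerfile still comes from stdin, but "." is sent as the build context
docker build -t test:3 -f- . <<EOF
FROM alpine:3.7
WORKDIR /usr/src/app
COPY kk.txt kk.2.txt ./
EOF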
An incorrect path in the source is a common error.
Use
COPY ./directory/testproject-264512-9de8b1b35153.json /dir/
instead of
COPY testproject-264512-9de8b1b35153.json /dir/
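A quick way to rule this out is to check that the file really is inside the build context before building (paths mirror the question; adjust them to your layout):
# run from the directory you pass to docker build as the context
ls testproject-264512-9de8b1b35153.json
docker build -t myimage:latest .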

How to activate GRPC PHP extension via Dockerfile?

I have this Dockerfile. When I try to build it, the composer update line returns an error that extensions are not installed.
That is because grpc is not activated in php.ini.
My question is, how can I activate it via the terminal?
FROM php:7.2-apache
WORKDIR /var/www/html
COPY . ./
RUN apt-get update && apt-get install -y -q nodejs npm curl unzip git rake ruby-ronn zlib1g-dev libpng-dev && apt-get clean
RUN apt-get install php7.2=dev php-pear phpunit
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN pecl install grpc
RUN docker-php-ext-install mbstring
RUN docker-php-ext-install zip
RUN docker-php-ext-install gd
RUN npm install
RUN npm install -g gulp-cli
RUN composer update
RUN gulp --env=production
EXPOSE 80 443
You forgot to activate the grpc extension. You can use the code under the comment #install protoc below to activate it, and you will get a message telling you whether it has been activated or not from RUN php -r "echo extension_loaded('grpc') ? 'yes' : 'no';".
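In isolation, the relevant lines look like this (a minimal sketch; the full Dockerfile below shows them in context):
# build the extension with pecl, then activate it
RUN pecl install grpc \
 && docker-php-ext-enable grpc
# prints "yes" at build time if the extension is now loaded
RUN php -r "echo extension_loaded('grpc') ? 'yes' : 'no';"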
I think this way is better and more concise:
FROM php:7-apache
RUN apt-get update && apt-get install -y -q nodejs npm unzip git rake ruby-ronn zlib1g-dev && apt-get clean
# install composer
RUN cd /usr/local/bin && curl -sS https://getcomposer.org/installer | php
RUN cd /usr/local/bin && mv composer.phar composer
RUN pecl install grpc
#install protoc
RUN mkdir -p /tmp/protoc && \
curl -L https://github.com/google/protobuf/releases/download/v3.2.0/protoc-3.2.0-linux-x86_64.zip > /tmp/protoc/protoc.zip && \
cd /tmp/protoc && \
unzip protoc.zip && \
cp /tmp/protoc/bin/protoc /usr/local/bin && \
cd /tmp && \
rm -r /tmp/protoc && \
docker-php-ext-enable grpc
RUN php -r "echo extension_loaded('grpc') ? 'yes' : 'no';"
WORKDIR /app
COPY . /app
RUN composer install
RUN npm install
RUN npm install -g gulp-cli
RUN gulp --env=production
EXPOSE 8181
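Once the image is built, a quick way to double-check that the extension is active (the image tag here is just an example):
docker build -t my-grpc-app .
docker run --rm my-grpc-app php -m | grep grpc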

Setting up our Rasa/NLU container, error?

I have this file Dockerfile.nlu
FROM chatbot/spacy:latest
WORKDIR /app
COPY nlu ./agent_nlu
RUN python -m rasa_nlu.train --config agent_nlu/config.yml --data agent_nlu/data/ --path agent_nlu/agent --fixed_model_name default
and I get the error below:
]$ sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .
Sending build context to Docker daemon 9.216kB
Step 1/4 : FROM chatbot/spacy:latest
---> 496dc6a38abb
Step 2/4 : WORKDIR /app
---> Using cache
---> 7f02012c8452
Step 3/4 : COPY nlu ./agent_nlu
COPY failed: stat /var/lib/docker/tmp/docker-builder363868051/nlu: no such file or directory
It doesn't look like Docker can find the nlu directory. Are you sure it exists? Are you sure that you are executing the command from the correct directory?
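A quick way to check is to list the directory from wherever you run the build, since the "." at the end of the command is the build context (the paths below simply mirror the command in the question):
# run from the project root; nlu/ must exist inside the build context "."
ls -d nlu
sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .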
But you also aren't installing Rasa at all, or any of its requirements. Is there a reason you aren't using the pre-built Rasa images? They are available here, with docs here.
Here is a fully functional Dockerfile pulled from their repo.
FROM python:3.6-slim
ENV RASA_NLU_DOCKER="YES" \
RASA_NLU_HOME=/app \
RASA_NLU_PYTHON_PACKAGES=/usr/local/lib/python3.6/dist-packages
# Run updates, install basics and cleanup
# - build-essential: Compile specific dependencies
# - git-core: Checkout git repos
RUN apt-get update -qq \
&& apt-get install -y --no-install-recommends build-essential git-core openssl libssl-dev libffi6 libffi-dev curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR ${RASA_NLU_HOME}
COPY . ${RASA_NLU_HOME}
# use bash always
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN pip install -r alt_requirements/requirements_spacy_sklearn.txt
RUN pip install -e .
RUN pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.0.0/en_core_web_md-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link en_core_web_md en \
&& pip install https://github.com/explosion/spacy-models/releases/download/de_core_news_sm-2.0.0/de_core_news_sm-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link de_core_news_sm de
COPY sample_configs/config_spacy.yml ${RASA_NLU_HOME}/config.yml
VOLUME ["/app/projects", "/app/logs", "/app/data"]
EXPOSE 5000
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start", "-c", "config.yml", "--path", "/app/projects"]
