Running composer install in a Docker container

I have a docker-compose.yml script which looks like this:
version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
The Dockerfile located in ./docker/php looks like this:
FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN composer update -d /var/www/website
Even so, this always fails with the error:
[RuntimeException]
Invalid working directory specified, /var/www/website does not exist.
When I remove the RUN composer update line and enter the container, the directory does exist and contains my project code.
Please tell me if I am doing anything wrong, or whether I'm running composer update in the wrong place.

RUN ... lines are executed while the image is being built; volumes are only attached to the container at run time. You have at least two options here:
- Use the COPY instruction to, well, copy your app code into the image, so that every command after it has access to the code. (Do not push the image to any public Docker repo, as it will contain your source, which you probably don't want to leak.)
- Install the composer dependencies with a command run in your container (CMD or ENTRYPOINT in the Dockerfile, or the command option in docker-compose); see the sketch below.
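For the second option, here is a minimal sketch against the compose file from the question (untested; the trailing php-fpm call assumes the default server process of the php:fpm base image):

version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
    # install dependencies at container start, after the bind mount exists,
    # then hand off to the image's normal php-fpm process
    command: sh -c "composer install -d /var/www/website && php-fpm"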

You are mounting your local directory over your build directory, so anything you built into /var/www/website at image build time will be hidden by your local volume when the container runs.
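If you prefer the COPY route instead, a sketch of the question's Dockerfile might look like the following (this assumes you change the build context to the project root, e.g. build: { context: ., dockerfile: docker/php/Dockerfile }, and drop the volumes: entry so the bind mount doesn't shadow the baked-in code):

FROM php:fpm
# install composer exactly as in the question
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
 && php composer-setup.php \
 && php -r "unlink('composer-setup.php');" \
 && mv composer.phar /usr/local/bin/composer
WORKDIR /var/www/website
# bake the source into the image so composer can see it at build time
COPY . .
RUN composer install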

Related

Docker : Build a base image and make Dockerfile point to it

I intend to build and run a dockerized container using an image that is built locally rather than pulled from Docker Hub. My use case is the following:
- I cloned an open source repo's source code (https://github.com/jitsi/jitsi-meet). They have their own dockerized version too, which builds from the source and is deployed on Docker Hub (https://github.com/jitsi/docker-jitsi-meet).
- I renamed the filenames, and the contents inside the files, of jitsi-meet for my own ease of use.
- I packaged the final changes as a 7z archive.
- I now need to build the image locally using the code from the 7z package (or the source code folder itself) without uploading the image to the public Docker Hub.
- I RUN a specific set of commands inside the Dockerfile.
My Dockerfile:
ARG MYPROJECT_REPO=myproject
ARG BASE_TAG=stable
FROM ${MYPROJECT_REPO}/base:${BASE_TAG}
LABEL org.opencontainers.image.title="Myproject"
LABEL org.opencontainers.image.url="https://myproject.org/myproject-meet/"
LABEL org.opencontainers.image.source="https://github.com/myproject/docker-myproject-meet"
LABEL org.opencontainers.image.documentation="https://myproject.github.io/handbook/"
ADD https://raw.githubusercontent.com/acmesh-official/acme.sh/2.8.8/acme.sh /opt
COPY rootfs/ /
RUN apt-dpkg-wrap apt-get update && \
    apt-dpkg-wrap apt-get install -y cron nginx-extras myproject-meet-web socat curl jq && \
    mv /usr/share/myproject-meet/interface_config.js /defaults && \
    rm -f /etc/nginx/conf.d/default.conf && \
    apt-cleanup
EXPOSE 80 443
VOLUME ["/config", "/usr/share/myproject-meet/transcripts"]
My docker-compose.yml (only the relevant parts):
services:
  # Frontend
  myproject_webserver:
    container_name: myproject-webserver
    build:
      dockerfile: ./Dockerfile
      context: ./
    #image: jitsi/web:${JITSI_IMAGE_VERSION:-unstable}
    restart: ${RESTART_POLICY:-unless-stopped}
    ports:
      - '${HTTP_PORT}:80'
      - '${HTTPS_PORT}:443'
    volumes:
      - ${CONFIG}/web:/config:Z
      - ${CONFIG}/web/crontabs:/var/spool/cron/crontabs:Z
      - ${CONFIG}/transcripts:/usr/share/myproject-meet/transcripts:Z
    environment:
      - AMPLITUDE_ID
      - ANALYTICS_SCRIPT_URLS
As you can see, I have commented out the public Docker Hub image of jitsi and used a build context instead. I need to build a local image and have the compose service use it.
My core problem stems from the renaming of files/folders and of their contents.
Kindly correct my understanding of the following:
If I had used the original code, I could have made the minute changes that are necessary directly to the code, without renaming, and used a COPY command in the Dockerfile so that my file would be used instead of the original one, keeping everything else intact and also keeping the image line in docker-compose.yml as is.
So if the original repo has folder A/filenamea.js running inside a container: can the Docker COPY command be used with my renamed folder A1/filenamea1.js to replace and run instead of the original A/filenamea.js inside the container?
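For what it's worth, the Dockerfile COPY instruction can rename a file as it copies, so a single line along these lines (the destination path is purely illustrative) would overwrite the original file inside the image:

# copy the renamed local file over the original path baked into the image
COPY A1/filenamea1.js /usr/share/myproject-meet/A/filenamea.js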

docker-compose named volume with one file: ERROR: Cannot create container for service, source is not directory

I am trying to make the binary /bin/wkhtmltopdf from the wkhtmltopdf container available in the web container, using a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
Does that mean I can't share a single file between containers through a named volume, only whole directories? How can I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble where a binary needs a shared library that's not present in the target container. The apt-get install mechanism handles this for you. There can also be trouble if the two containers have different shared-library ecosystems (especially Alpine-based containers), or if you use host binaries from a different operating system.
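If you do end up copying a binary between images, ldd is a quick way to see which shared libraries it expects (standard tooling, shown purely for illustration):

ldd /bin/wkhtmltopdf
# any "not found" lines name libraries the target image is missing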
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount that over the /bin/wkhtmltopdf file in the second container causes the error you see. There's a dependency issue for which container starts up first and gets to create the volume. A container only runs a single command, and if you have both entrypoint: and command: then the command gets passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
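Putting that together, a minimal sketch of that volume-based approach might look like this (untested; the /export mount point is an arbitrary choice, and the race condition noted above still applies):

services:
  web:
    image: php:7.4-apache
    depends_on:
      - wkhtmltopdf
    volumes:
      # a named volume is a directory, so mount it over a directory
      - wkhtmltopdfvol:/usr/local/bin
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    entrypoint: [] # clear the image's entrypoint so command runs as-is
    # copy the binary into the shared volume, then exit
    command: cp -a /bin/wkhtmltopdf /export
    volumes:
      - wkhtmltopdfvol:/export
volumes:
  wkhtmltopdfvol: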

Docker app supposed to output JSON to a directory on a volume, but no data shows when running

I am working on a Docker app whose purpose is to output some JSON into a volume. I am using a Dockerfile, a docker-compose file and a Makefile; I'll show the contents of each file below. The goal/desired outcome is that when I run make up, the container runs and outputs the JSON.
The directory looks like this:
docker-compose.yaml
Dockerfile
Makefile
main/ # a directory
Here are the contents of the main directory:
example.R
I'm not sure of the best order to show these files. Throughout my setup I refer to a variable $PROJECTS_DIR, which is set globally on the host:
echo $PROJECTS_DIR
/home/doug/Projects
Here are my files:
docker-compose.yaml:
version: "3.5"
services:
nextzen_ga_extract_marketing:
build:
context: .
environment:
start_date: "2020-11-18"
start_date: "2020-11-19"
volumes:
- ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline:/home/rstudio/Projects/nextzen_google_analytics_extract_pipeline
Dockerfile:
FROM rocker/tidyverse:latest
ADD main main
WORKDIR "/main"
RUN apt-get update && apt-get install -y \
    less \
    vim
ENTRYPOINT ["Rscript", "example.R"]
Makefile:
.PHONY: build
build:
	docker-compose build
.PHONY: up
up:
	docker-compose pull
	docker-compose up -d
.PHONY: restart
restart:
	docker-compose restart
.PHONY: down
down:
	docker-compose down
Here are the contents of main/example.R, the Docker app itself:
library(jsonlite)
unlink("../output_data", recursive = TRUE) # delete any existing data from previous runs
dir.create('../output_data')
write(toJSON(mtcars), '../output_data/ga_tables.json')
If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/main and run sudo Rscript example.R, the script runs and outputs the JSON to ../output_data/ga_tables.json as expected.
I am struggling to get the same thing to happen when running the container. If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/ and run make up in the terminal, which runs:
docker-compose pull
docker-compose up -d
I then see:
make up
docker-compose pull
docker-compose up -d
Creating network "nextzengoogleanalyticsextractpipeline_default" with the default driver
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 ...
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 .
It 'looks' like everything ran as expected with no errors, except that no output appears in the output_data directory.
I guess I'm misunderstanding or misusing ENTRYPOINT in the Dockerfile with ENTRYPOINT ["Rscript", "example.R"]. My goal is for this script to run when the container runs.
How can I 'run' (if that's the correct terminology) my app so that it outputs the JSON into /output_data/ga_tables.json?
I'm not sure what other info to provide; any help much appreciated, I'm still getting to grips with Docker.
If you run your application from /main and its output is supposed to go into ../output_data (so, effectively, /output_data), you need to bind-mount this directory to have the output available on the host. I would therefore update your docker-compose.yaml to read something like this:
volumes:
  - /path/to/output_data/on/host:/output_data
Bear in mind, however, that your script will not be able to remove /output_data itself when it is bind-mounted this way, so you might want to change that step to remove the directory's contents rather than the directory.
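Concretely, for the compose file in the question that could look like the sketch below (the extra host path is an assumption; /output_data matches the script's ../output_data because WORKDIR is /main):

volumes:
  - ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline:/home/rstudio/Projects/nextzen_google_analytics_extract_pipeline
  # hypothetical extra mount: surfaces the container's /output_data on the host
  - ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/output_data:/output_data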
In my case, I got this working when I used full paths as opposed to relative paths.

How to run any commands in docker volumes?

After a couple of days of testing and working with Docker (I am trying to migrate from Vagrant to Docker in general), I've run into a big problem that I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
server:
build: .
volumes:
- ./:/var/www/dev
links:
- database_dev
- database_testing
- database_dev_2
- mail
- redis
ports:
- "80:8080"
tty: true
#the rest are only images of database redis and mailhog with ports
Dockerfile, example_1:
FROM ubuntu:latest
LABEL Yamen Nassif
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile, example_2:
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 vhost files that just set the document root to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now, in example_1, composer fails because /var/www/dev (the composer.json file/directory) is not found, and in example_2, Apache says the root directory is not found.
I guess this is because it's a volume, and the volume won't be there until the container is fully up: if I build the image without those commands and then log in to the container, I can execute the same commands from the command line without any error.
How do I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
In both Dockerfiles, remember that each RUN command runs in a new container created from the image built so far, and only its filesystem changes are kept. That means commands like RUN cd ... have no effect on later steps, and you can't start a service in the background in one RUN command and have it still be running later; it will be stopped before the Dockerfile moves on to the next line.
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker and you shouldn't try to use them. Standard practice is to start the server process as a foreground process when the container launches via a default CMD directive. The flip side of this is that, since the server won't start until docker run time, your volume will be available at that point. I might RUN mkdir in the Dockerfile just to be sure it exists.
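For the Apache case, that convention would mean ending the second Dockerfile with something like this (standard Debian apache2 invocation, shown as a sketch):

# replaces `RUN service apache2 restart`: Apache becomes the container's
# foreground process, launched when the container starts
CMD ["apache2ctl", "-D", "FOREGROUND"]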
The problem seems to be the execution order. Whatever /var/www/dev contains at image build time is hidden when you start a container from that image, because your local mount is placed over it. If you need no access from your host, you can simply skip the extra volume. If you want to use it in other containers too, you could work with symlinks.

Why are files I generate with a bash script within a docker container also saving locally?

I have an aws/appium test project I want to run in Docker. I have a bash script that runs in the container; it downloads a file from S3 and creates a zip of my project.
The Dockerfile:
FROM maven:3.3.9
RUN apt-get update && \
    apt-get -y install python && \
    apt-get -y install python-pip && \
    pip install awscli
RUN export PATH=$PATH:/usr/local/bin
There's a docker-compose file; its command runs a bash script:
version: '2'
volumes:
  maven_cache: ~
services:
  application: &application
    build: .
    tmpfs:
      - /tmp:rw,nodev,noexec,nosuid
    volumes:
      - ./:/app
      - maven_cache:/root/.m2/repository
    working_dir: /app
    command: ./aws-upload.sh
This is the beginning of the ./aws-upload.sh bash script. It prepares the files I need for uploading later:
#!/usr/bin/env bash
mvn clean package -DskipTests=true
aws s3 cp s3://<bucket-name>/app.apk $(pwd)
cp target/zip-with-dependencies.zip $(pwd)
I only want the above files to exist within the container, but they also appear locally. Is something in my docker-compose file not configured correctly?
Thanks
In your compose file you define the volume ./:/app, which maps the host folder containing the compose file to the container's /app folder. If you execute your bash script in /app, the files it creates will also be available on the host.
If you want to avoid this, either remove the volume mapping (in case you don't need it) or execute the script in another folder that is not mapped to your host, as in the sketch below.
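A sketch of that second suggestion (the /build path is hypothetical): stage the project into an unmapped directory inside the container and run the script from there, so its outputs never land on the bind mount:

services:
  application:
    # ... build, tmpfs, volumes as before ...
    working_dir: /app
    command: sh -c "cp -r /app /build && cd /build && ./aws-upload.sh"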
This is normal. When you declared the following inside the compose file:
volumes:
  - ./:/app
This means: mount the current host directory onto /app inside the container, which effectively keeps the current directory and the /app folder inside the container in sync.
Thus, if the aws-upload.sh script creates files in /app, they will also show up next to the compose file.
