Building a Dockerfile from inside Docker Compose

So I'm trying to follow these instructions:
https://github.com/open-forest/sendy
I'm using Portainer and trying to run a Sendy container (newsletter software). Instead of running a MySQL image alongside it, I'm using my external managed database.
On my server I keep project data at /var/docker/project-name. I use this structure for bind mounts when I need to bring data into containers from the start.
So for this project, in the project-name folder I have sendy-6.0.2.zip and this Dockerfile (provided via the instructions at the link above):
#
# Docker with Sendy Email Campaign Marketing
#
# Build:
# $ docker build -t sendy:latest --target sendy -f ./Dockerfile .
#
# Build w/ XDEBUG installed
# $ docker build -t sendy:debug-latest --target debug -f ./Dockerfile .
#
# Run:
# $ docker run --rm -d --env-file sendy.env sendy:latest
FROM php:7.4.8-apache as sendy
ARG SENDY_VER=6.0.2
ARG ARTIFACT_DIR=6.0.2
ENV SENDY_VERSION ${SENDY_VER}
RUN apt -qq update && apt -qq upgrade -y \
    # Install unzip cron
    && apt -qq install -y unzip cron \
    # Install php extension gettext
    # Install php extension mysqli
    && docker-php-ext-install calendar gettext mysqli \
    # Remove unused packages
    && apt autoremove -y
# Copy artifacts
COPY ./artifacts/${ARTIFACT_DIR}/ /tmp
# Install Sendy
RUN unzip /tmp/sendy-${SENDY_VER}.zip -d /tmp \
    && cp -r /tmp/includes/* /tmp/sendy/includes \
    && mkdir -p /tmp/sendy/uploads/csvs \
    && chmod -R 777 /tmp/sendy/uploads \
    && rm -rf /var/www/html \
    && mv /tmp/sendy /var/www/html \
    && chown -R www-data:www-data /var/www \
    && mv /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini \
    && rm -rf /tmp/* \
    && echo "\nServerName \${SENDY_FQDN}" > /etc/apache2/conf-available/serverName.conf \
    # Ensure X-Powered-By is always removed regardless of php.ini or other settings.
    && printf "\n\n# Ensure X-Powered-By is always removed regardless of php.ini or other settings.\n\
Header always unset \"X-Powered-By\"\n\
Header unset \"X-Powered-By\"\n" >> /var/www/html/.htaccess \
    && printf "[PHP]\nerror_reporting = E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n" > /usr/local/etc/php/conf.d/error_reporting.ini
# Apache config
RUN a2enconf serverName
# Apache modules
RUN a2enmod rewrite headers
# Copy the cron file to the cron.d directory
COPY cron /etc/cron.d/cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cron \
    # Apply cron job
    && crontab /etc/cron.d/cron \
    # Create the log file to be able to run tail
    && touch /var/log/cron.log
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["apache2-foreground"]
#######################
# XDEBUG Installation
#######################
FROM sendy as debug
# Install xdebug extension
RUN pecl channel-update pecl.php.net \
    && pecl install xdebug \
    && docker-php-ext-enable xdebug \
    && rm -rf /tmp/pear
Here is my Docker Compose file:
version: '3.7'

services:
  project-sendy:
    container_name: project-sendy
    image: sendy:6.0.2
    build:
      dockerfile: var/docker/project-sendy/Dockerfile
    restart: unless-stopped
    networks:
      - proxy
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.project-secure.entrypoints=websecure"
      - "traefik.http.routers.project-secure.rule=Host(`project.com`)"
    environment:
      SENDY_PROTOCOL: https
      SENDY_FQDN: project.com
      MYSQL_HOST: db-host-name-here
      MYSQL_DATABASE: db-name-here
      MYSQL_USER: db-user-name-here
      MYSQL_PASSWORD: db-password-here
      SENDY_DB_PORT: db-port-here

networks:
  proxy:
    external: true
When I try to deploy I get:
failed to deploy a stack: project-sendy Pulling project-sendy
Error could not find /data/compose/126/var/docker/project-sendy:
stat /data/compose/126/var/docker/project-sendy: no such file or directory

So here's what I've done.
I have the cron and artifacts folders in the same directory as the Dockerfile.
In the Dockerfile, look for this line:
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
Right below it, add this line:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
Otherwise you will get this error:
Starting Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/usr/local/bin/docker-entrypoint.sh": permission denied: unknown
Then build it with:
docker build -t sendy:6.0.2 .
Then your image will show up in Portainer.
You can then remove the build section from your Docker Compose file and hit deploy. It now works for me.
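For what it's worth, the original error happens because the dockerfile: path is resolved relative to the build context, and for a Portainer stack that context defaults to the directory where Portainer stores the stack file (/data/compose/126 here), not /var/docker/project-sendy. With the image pre-built as above, the trimmed Compose file is just the original minus the build section (a sketch; all values unchanged):
version: '3.7'

services:
  project-sendy:
    container_name: project-sendy
    image: sendy:6.0.2
    restart: unless-stopped
    networks:
      - proxy
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.project-secure.entrypoints=websecure"
      - "traefik.http.routers.project-secure.rule=Host(`project.com`)"
    environment:
      SENDY_PROTOCOL: https
      SENDY_FQDN: project.com
      MYSQL_HOST: db-host-name-here
      MYSQL_DATABASE: db-name-here
      MYSQL_USER: db-user-name-here
      MYSQL_PASSWORD: db-password-here
      SENDY_DB_PORT: db-port-here

networks:
  proxy:
    external: true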

Related

Downloaded file inside dockerfile missing after build

I'm trying to dockerize my PHP application; this is my very first attempt.
Dockerfile:
FROM ubuntu:18.04
WORKDIR /php55
ARG GIT_TOKEN
ARG DEBIAN_FRONTEND=noninteractive
# Install apache2
RUN set -x; \
    perl -pe 's/(\S+\.)?archive\.ubuntu\.com/mirror.sg.gs/g' /etc/apt/sources.list > temp-sc && mv temp-sc /etc/apt/sources.list \
    && sed -i 's#security.ubuntu.com#mirror.sg.gs#g' /etc/apt/sources.list \
    && apt-get update && apt-get install --yes apache2 curl wget nano \
    && a2enmod rewrite headers
# Configure apache2
RUN set -x; \
    sed -i.backup 's#/var/www/html#/var/www#g' "/etc/apache2/sites-available/000-default.conf" \
    && echo "ServerName localhost" > "/etc/apache2/conf-available/fqdn.conf" && a2enconf fqdn \
    && cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.conf.backup
# Copy configuration
COPY ujian.conf /etc/apache2/sites-available/000-default.conf
RUN set -x; \
    curl -H "Authorization: token ${GIT_TOKEN}" -O https://git.mydomain.com/api/v1/repos/liso/ujian/archive/main.tar.gz \
    && mkdir -p /var/www/ujian \
    && tar -xvzf main.tar.gz -C /var/www/ujian --strip-components=1 \
    && rm main.tar.gz
# Install PHP
COPY install-php5 .
RUN chmod +x install-php5 && ./install-php5
EXPOSE 80 7825
CMD ["apachectl", "-D", "FOREGROUND"]
docker-compose.yml
version: '3'

services:
  ujian:
    image: liso/ujian-dockerize
    container_name: docker-ujian
    build:
      context: .
      args:
        GIT_TOKEN: ${GIT_TOKEN} # from .env file
      dockerfile: ./Dockerfile
    ports:
      - 127.0.0.1:8080:80
    volumes:
      - ./www:/var/www
    extra_hosts:
      - "host.docker.internal:host-gateway"
.env contains my API token for my Git instance.
The problem is that after building, I can't find the downloaded files in /var/www in the container; it's empty.
root@6835554968db:/var/www# ls -al
total 12
drwxr-xr-x 2 root root 4096 Jan 30 11:07 .
drwxr-xr-x 1 root root 4096 Jan 30 11:11 ..
I have rebuilt several times but /var/www is still empty. I have never touched Docker before, so I'm really lost. Can you help me debug this problem?
Yeah, it seems the volume was overwriting my previously downloaded files; that's why they kept going missing after I launched the container. Ultimately I had to create a docker-entrypoint.sh script that runs after the container has been provisioned. Then all is well.
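Here's a minimal sketch of what such an entrypoint can look like, assuming (hypothetically) that the Dockerfile stages the downloaded code in /opt/app-dist at build time and seeds the mounted volume at startup:
#!/bin/sh
set -e

# /opt/app-dist is a hypothetical staging path populated at build time;
# the volume mounted at /var/www starts out empty, so seed it on first start.
if [ -z "$(ls -A /var/www 2>/dev/null)" ]; then
  cp -a /opt/app-dist/. /var/www/
fi

# Hand off to the image's CMD (here: apachectl -D FOREGROUND)
exec "$@"
The Dockerfile would then declare ENTRYPOINT ["/docker-entrypoint.sh"] ahead of the existing CMD.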

how to make a dockerfile with only one container?

I have a Yii 1 application, and I have a Dockerfile. And I had a docker-compose file.
But for the moment I only have one application, because I have a remote database, so the database is not in a container.
So I have this Dockerfile:
FROM php:7.3-apache
#COPY BaltimoreCyberTrustRoot.crt.pem /usr/local/share/ca-certificates/AzureDB.crt
# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# Enable rewrite mode
RUN a2enmod rewrite
# Install necessary packages
RUN apt-get update && \
    apt-get install \
    libzip-dev \
    wget \
    git \
    unzip \
    -y --no-install-recommends
# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql
# RUN pecl install -o -f xdebug-3.1.3 \
# && rm -rf /tmp/pear
# Copy composer installable
COPY ./install-composer.sh ./
# Copy php.ini
COPY ./php.ini /usr/local/etc/php/
#COPY BaltimoreCyberTrustRoot.crt.pem /var/www/html/
EXPOSE 80
# Cleanup packages and install composer
RUN apt-get purge -y g++ \
    && apt-get autoremove -y \
    && rm -r /var/lib/apt/lists/* \
    && rm -rf /tmp/* \
    && sh ./install-composer.sh \
    && rm ./install-composer.sh
# Change the current working directory
WORKDIR /var/www/html
# Change the owner of the container document root
RUN chown -R www-data:www-data /var/www
# Start Apache in foreground
CMD ["apache2-foreground"]
And I had this docker-compose file:
version: '3'

services:
  web:
    build: ./docker
    container_name: dockeryiidisc
    ports:
      - 80:80
      - 443:443
    volumes:
      - C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/
      - C:\xampp\htdocs\webScraper:/var/www/html/
and that worked.
But now I want to use only the Dockerfile.
So I tried this:
docker build -t docker_webcrawler .
and this command:
docker run -d -p 80:80 --name cntr-apache docker_webcrawler
But if I then go to http://localhost:80, I only see an empty directory listing:
Index of /
[ICO] Name  Last modified  Size  Description
So what do I have to change so that I can use only the Dockerfile?
Thank you
It looks like you're missing the volume mappings that you have in your docker-compose file. Try this:
docker run -d -p 80:80 --name cntr-apache -v C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/ -v C:\xampp\htdocs\webScraper:/var/www/html/ docker_webcrawler
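If the goal is to rely on the Dockerfile alone, with no -v mounts, an alternative sketch is to bake the code into the image at build time (assuming the build runs from C:\xampp\htdocs\webScraper with docker build -f docker/Dockerfile -t docker_webcrawler .):
# Added near the end of the existing Dockerfile:
# bake the vhost config and the application into the image
COPY docker/ /etc/apache2/sites-enabled/
COPY . /var/www/html/
After that, docker run -d -p 80:80 --name cntr-apache docker_webcrawler serves the baked-in code without any host mounts.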

docker-compose up failed - reading directory failed

I have a Docker image which I want to bring up to run tests automatically; the scripts are located at /opt/robotframework/tests.
An error occurred: Docker cannot read the directory:
$ docker-compose up
Creating network "docker-robot-framework_default" with the default driver
Creating robot-runner ... done
Attaching to robot-runner
robot-runner | [ ERROR ] Reading directory '/opt/robotframework/tests' failed: PermissionError: [Errno 13] Permission denied: '/opt/robotframework/tests'
robot-runner |
robot-runner | Try --help for usage information.
robot-runner exited with code 252
docker-compose.yml
version: '3'

services:
  robot-runner:
    build:
      context: .
      dockerfile: /Dockerfile
    container_name: robot-runner
    image: ppodgorsek/robot-framework:latest
    volumes:
      - ./test:/opt/robotframework/tests
      - ./test-audios:/opt/robotframework/test-audios
      - ./output-local:/opt/robotframework/reports
    environment:
      PYTHONWARNINGS: "ignore:Unverified HTTPS request"
Dockerfile:
FROM fedora:36
MAINTAINER Paul Podgorsek <ppodgorsek@users.noreply.github.com>
LABEL description Robot Framework in Docker.
# Set the reports directory environment variable
ENV ROBOT_REPORTS_DIR /opt/robotframework/reports
# Set the tests directory environment variable
ENV ROBOT_TESTS_DIR /opt/robotframework/tests
# ENV ROBOT_TEST_AUDIOS_DIR /opt/robotframework/test-audios
# Set the working directory environment variable
ENV ROBOT_WORK_DIR /opt/robotframework/temp
# Setup X Window Virtual Framebuffer
ENV SCREEN_COLOUR_DEPTH 24
ENV SCREEN_HEIGHT 1080
ENV SCREEN_WIDTH 1920
# Setup the timezone to use, defaults to UTC
ENV TZ UTC
# Set number of threads for parallel execution
# By default, no parallelisation
ENV ROBOT_THREADS 1
# Define the default user who'll run the tests
ENV ROBOT_UID 1000
ENV ROBOT_GID 1000
# Dependency versions
ENV ALPINE_GLIBC 2.35-r0
ENV AWS_CLI_VERSION 1.22.87
ENV AXE_SELENIUM_LIBRARY_VERSION 2.1.6
ENV BROWSER_LIBRARY_VERSION 12.2.0
ENV CHROMIUM_VERSION 99.0
ENV DATABASE_LIBRARY_VERSION 1.2.4
ENV DATADRIVER_VERSION 1.6.0
ENV DATETIMETZ_VERSION 1.0.6
ENV FAKER_VERSION 5.0.0
ENV FIREFOX_VERSION 98.0
ENV FTP_LIBRARY_VERSION 1.9
ENV GECKO_DRIVER_VERSION v0.30.0
ENV IMAP_LIBRARY_VERSION 0.4.2
ENV PABOT_VERSION 2.5.2
ENV REQUESTS_VERSION 0.9.2
ENV ROBOT_FRAMEWORK_VERSION 5.0
ENV SELENIUM_LIBRARY_VERSION 6.0.0
ENV SSH_LIBRARY_VERSION 3.8.0
ENV XVFB_VERSION 1.20
# By default, no reports are uploaded to AWS S3
ENV AWS_UPLOAD_TO_S3 false
# Prepare binaries to be executed
COPY bin/chromedriver.sh /opt/robotframework/bin/chromedriver
COPY bin/chromium-browser.sh /opt/robotframework/bin/chromium-browser
COPY bin/run-tests-in-virtual-screen.sh /opt/robotframework/bin/
# COPY bin/mml_4_apr_2018_b_session3_2.wav /opt/robotframework/test-audios
# COPY bin/mml_4_apr_2018_b_session3_2.stm /opt/robotframework/test-audios
# Install system dependencies
RUN dnf upgrade -y --refresh \
    && dnf install -y \
    chromedriver-${CHROMIUM_VERSION}* \
    chromium-${CHROMIUM_VERSION}* \
    firefox-${FIREFOX_VERSION}* \
    npm \
    nodejs \
    python3-pip \
    tzdata \
    xorg-x11-server-Xvfb-${XVFB_VERSION}* \
    && dnf clean all
# FIXME: below is a workaround, as the path is ignored
RUN mv /usr/lib64/chromium-browser/chromium-browser /usr/lib64/chromium-browser/chromium-browser-original \
    && ln -sfv /opt/robotframework/bin/chromium-browser /usr/lib64/chromium-browser/chromium-browser
# Install Robot Framework and associated libraries
RUN pip3 install \
    --no-cache-dir \
    robotframework==$ROBOT_FRAMEWORK_VERSION \
    robotframework-browser==$BROWSER_LIBRARY_VERSION \
    robotframework-databaselibrary==$DATABASE_LIBRARY_VERSION \
    robotframework-datadriver==$DATADRIVER_VERSION \
    robotframework-datadriver[XLS] \
    robotframework-datetime-tz==$DATETIMETZ_VERSION \
    robotframework-faker==$FAKER_VERSION \
    robotframework-ftplibrary==$FTP_LIBRARY_VERSION \
    robotframework-imaplibrary2==$IMAP_LIBRARY_VERSION \
    robotframework-pabot==$PABOT_VERSION \
    robotframework-requests==$REQUESTS_VERSION \
    robotframework-seleniumlibrary==$SELENIUM_LIBRARY_VERSION \
    robotframework-sshlibrary==$SSH_LIBRARY_VERSION \
    axe-selenium-python==$AXE_SELENIUM_LIBRARY_VERSION \
    PyYAML \
    # Install awscli to be able to upload test reports to AWS S3
    awscli==$AWS_CLI_VERSION
# Gecko drivers
RUN dnf install -y \
    wget \
    # Download Gecko drivers directly from the GitHub repository
    && wget -q "https://github.com/mozilla/geckodriver/releases/download/$GECKO_DRIVER_VERSION/geckodriver-$GECKO_DRIVER_VERSION-linux64.tar.gz" \
    && tar xzf geckodriver-$GECKO_DRIVER_VERSION-linux64.tar.gz \
    && mkdir -p /opt/robotframework/drivers/ \
    && mv geckodriver /opt/robotframework/drivers/geckodriver \
    && rm geckodriver-$GECKO_DRIVER_VERSION-linux64.tar.gz \
    && dnf remove -y \
    wget \
    && dnf clean all
# Install the Node dependencies for the Browser library
# FIXME: Playwright currently doesn't support relying on system browsers, which is why the `--skip-browsers` parameter cannot be used here.
RUN rfbrowser init \
    && ln -sf /usr/lib64/libstdc++.so.6 /usr/local/lib/python3.10/site-packages/Browser/wrapper/node_modules/playwright-core/.local-browsers/firefox-1316/firefox/libstdc++.so.6
# Create the default report and work folders with the default user to avoid runtime issues
# These folders are writeable by anyone, to ensure the user can be changed on the command line.
RUN mkdir -p ${ROBOT_REPORTS_DIR} \
    && mkdir -p ${ROBOT_WORK_DIR} \
    && chown ${ROBOT_UID}:${ROBOT_GID} ${ROBOT_REPORTS_DIR} \
    && chown ${ROBOT_UID}:${ROBOT_GID} ${ROBOT_WORK_DIR} \
    && chmod ugo+w ${ROBOT_REPORTS_DIR} ${ROBOT_WORK_DIR}
# Allow any user to write logs
RUN chmod ugo+w /var/log \
    && chown ${ROBOT_UID}:${ROBOT_GID} /var/log
# Update system path
ENV PATH=/opt/robotframework/bin:/opt/robotframework/drivers:$PATH
# Set up a volume for the generated reports
VOLUME ${ROBOT_REPORTS_DIR}
USER ${ROBOT_UID}:${ROBOT_GID}
# A dedicated work folder to allow for the creation of temporary files
WORKDIR ${ROBOT_WORK_DIR}
# Execute all robot tests
CMD ["run-tests-in-virtual-screen.sh"]
Local directories: (screenshot of the directory listing not included)
Basically, the USER specified in the Dockerfile (USER ${ROBOT_UID}:${ROBOT_GID}) is the user inside the container, and it has no access rights to the folder on your host. While you could use root in the container to "solve" the problem, your container may then effectively have root on the host; you should never use root in a Docker container.
To avoid the problem, give the user (in your case 1000:1000) appropriate rights on the host folder (./test) with setfacl. If the user is not present on the host, just add one with the same UID/GID:
sudo addgroup robot --gid 1000
sudo adduser robot --ingroup robot --uid 1000
setfacl -R -m u:robot:rwx test
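To sanity-check the result (getfacl ships in the same acl package as setfacl):
# List the ACL entries now attached to ./test
getfacl test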
Alternatively, the problem can be worked around by adding user: root in docker-compose.yml; that user is granted full access rights to the path (but note the warning above about running as root):
version: '3'

services:
  robot-runner:
    build:
      context: .
      dockerfile: /Dockerfile
    container_name: robot-runner
    # image: ppodgorsek/robot-framework:latest
    image: robot-runner:latest
    user: root
    volumes:
      - ./BrowserTests:/opt/robotframework/tests
      - ./output-local:/opt/robotframework/reports
    environment:
      PYTHONWARNINGS: "ignore:Unverified HTTPS request"
    extra_hosts:
      - "speech.sts:172.17.0.1"
      - "speech.srs:172.17.0.1"
    networks:
      - sts_sts_network

networks:
  sts_sts_network:
    external: true

Create database from one docker container in another

I have a program that builds servers automatically whenever we want stakeholders to test a new feature.
Currently I have the following setup:
Container 1 - all (contains nodejs, php and other dependencies)
Container 2 - db (contains the mysql database)
I'm aware that container 1 should be split up, but that would add unnecessary complexity at this stage of development.
Whenever a new feature is completed and ready to be deployed to a stage server we run: yarn run create:server --branchName=new-feature. This will create all of the configuration necessary to bring up our newly created server.
My problem is that whenever I run the command above I need to create a database in db container from all container:
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS `xxxx`"
The script main.ts runs in the context of the all container, so all needs to be able to communicate with db.
import { execSync } from 'child_process'; // needed for the shell calls below

// isLocalEnviroment() is defined elsewhere in the project
export const createDatabase = (subdomain: string) => {
  const username = process.env.DB_USERNAME;
  const password = process.env.DB_PASSWORD;

  console.log(`[INFO] Creating database with name \`${subdomain}\``);

  // triple backslash is necessary to avoid `command substitution` in some shells
  if (isLocalEnviroment()) {
    execSync(`docker run -it stage-manager-db mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`);
  } else {
    execSync(`mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`);
  }

  console.log(`[INFO] Database \`${subdomain}\` created successfully`);
};
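A hypothetical call site in main.ts, with the subdomain taken from the --branchName argument:
// e.g. after `yarn run create:server --branchName=new-feature`
createDatabase('new-feature');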
In the local environment we would like to use Docker, while in production everything will sit on the same machine (db, frontendapp and api).
When trying to run the following command from all: docker run -it stage-manager-db mysql -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS master", I get:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I have tried restarting the service with:
service docker restart
which gives
[ ok ] Starting Docker: docker.
but trying to communicate with db from all keeps producing the same error. Upon trying service docker stop I get:
[....] Stopping Docker: dockerstart-stop-daemon: warning: failed to kill 825: No such process
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.
failed!
I have since tried the following links to fix this issue:
https://github.com/docker/for-linux/issues/52#issuecomment-333563492
https://askubuntu.com/questions/1146634/how-to-remove-docker-from-windows-subsystem
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
Cant uninstall Docker from Ubuntu on WSL
How can I communicate from the all container to the db container?
Dockerfile
FROM php:7.4-fpm
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev \
    libfontconfig1 \
    libxrender1 \
    libpng-dev \
    make \
    nginx \
    apt-transport-https \
    gnupg2 \
    wget \
    procps \
    docker.io
# Install nodejs
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt -y install nodejs
# Install extensions
RUN docker-php-ext-install pdo_mysql exif zip pcntl gd
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install -j$(nproc) gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y git
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install Yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn
# Install dependencies for this project
RUN yarn global add ts-node typescript
RUN useradd -m forge
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=forge:forge . /var/forge
# Copy ssh keys
COPY ./config/ssh /home/forge/.ssh/
# Give right permissions to `ssh` keys
RUN chmod 600 /home/forge/.ssh/config
RUN chmod 600 /home/forge/.ssh/back_end_deploy_key
RUN chmod 600 /home/forge/.ssh/frontend_deploy_key
RUN chmod 644 /home/forge/.ssh/back_end_deploy_key.pub
RUN chmod 644 /home/forge/.ssh/frontend_deploy_key.pub
RUN chown forge:forge /home/forge/.ssh/*
# Up Docker
RUN service docker start
RUN usermod -aG docker forge
# Create folder for stage servers
RUN mkdir -p /var/www/stage-servers
# Give correct permissions to `stage-servers` folder
RUN chown forge:www-data /var/www/stage-servers
RUN chmod g+s /var/www/stage-servers
RUN chmod o-rwx /var/www/stage-servers
# Change current user to forge
USER forge
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml
version: '3.7'

services:
  all:
    working_dir: /var/www/stage-manager
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./:/var/www/stage-manager"
      - "./config/ssh:/root/.ssh"
    networks:
      - main

  # MySQL service
  db:
    image: mysql:5.7.22
    container_name: stage-manager-db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: whatever
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
    networks:
      - main

volumes:
  project:
    driver: local
    driver_opts:
      type: none
      device: $PWD/
      o: bind
  dbdata:
    driver: local

networks:
  main:
I'm fairly new to Docker, so if there's an approach I'm getting wrong, please let me know. I have a feeling this could be done much better, so feel free to suggest improvements.
Update
** DO NOT DO THIS **
Instead of deleting this answer I will leave it here so others can see that this is not a secure/valid solution to this problem.
From David Maze's comment:
Remember that anyone who can access the Docker socket has unrestricted root-level access over the whole host system. I would not add the Docker socket in casually here.
I was able to make it work by sharing the socket between my host OS and the all container.
docker-compose.yml
all:
  working_dir: /var/www/stage-manager
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8080:80"
  volumes:
    - "./:/var/www/stage-manager"
    - "./config/ssh:/root/.ssh"
    - "/var/run/docker.sock:/var/run/docker.sock" # <- the important part
  networks:
    - main
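Given that warning, a safer sketch (my suggestion, not part of the original answer) avoids the Docker socket entirely: both services sit on the main network, so Compose's built-in DNS lets all reach db by service name, and a mysql client installed in the all image can talk to it directly:
# Run from inside the `all` container; the hostname `db` resolves
# to the MySQL container via the shared `main` network.
mysql -h db -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS \`new-feature\`"
With that, the isLocalEnviroment() branch of createDatabase could simply pass -h db instead of shelling out to docker run.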

Docker not pulling updated php version

I am trying to update the PHP version in my Docker setup.
This is what my Dockerfile looks like:
FROM php:7.2-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd zip
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -d /home/ubuntu ubuntu
RUN mkdir -p /home/ubuntu/.composer && \
    chown -R ubuntu:ubuntu /home/ubuntu
# Set working directory
WORKDIR /var/www
USER ubuntu
I have changed the PHP version to 7.3, and I tried deleting all Docker containers and recreating them with docker rm -vf $(docker ps -a -q). Then I built my containers using docker-compose build --no-cache --pull.
My docker-compose.yaml file looks like this:
version: "3.7"
services:
app:
build:
context: ./
dockerfile: ./docker/Dockerfile
image: myapp
container_name: myapp-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- myapp
But the PHP version is still reported as 7.2.
Any advice?
To remove all containers/images/networks/... run:
docker system prune -a
Then try to build the image.
If that doesn't work, can you share the logs showing where the wrong version gets pulled?
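As a follow-up sketch (standard docker-compose flags, not from the original answer): after pruning, a full rebuild-and-recreate cycle makes sure newer base images are pulled and the old container is replaced:
# Pull newer base images and rebuild without the layer cache
docker-compose build --no-cache --pull
# Recreate the running container from the freshly built image
docker-compose up -d --force-recreate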
