Empty volume folder after docker-compose up - docker

I have a Dockerfile which loads some packages via Composer into a vendor folder in the container. Now I want to link the vendor folder in the container with my host environment. If I start the service with docker-compose up, the vendor folder is empty. What can I do to keep the data from the container?
Here is my dockerfile:
FROM php:7.3.3-apache-stretch
RUN apt-get update && \
    apt-get install -y --no-install-recommends nano \
        git \
        openssh-server
RUN curl -s https://getcomposer.org/installer | php && \
    echo "{}" > composer.json && \
    php composer.phar require slim/slim "^3.0" && \
    chown -R www-data. .
VOLUME /var/www/html/vendor
And here my docker-compose.yml:
version: '3.2'
services:
  slim:
    build:
      context: ./slim
    ports:
      - "1337:1337"
    networks:
      - backend
    volumes:
      - ./slim/vendor:/var/www/html/vendor
networks:
  backend:
Thanks for the help.

What you see is expected behaviour.
If you want the vendor folder populated and available on the host as well, then you have to run the installation AFTER the volume mapping happens, not the other way round.
This command:
curl -s https://getcomposer.org/installer | php && \
    echo "{}" > composer.json && \
    php composer.phar require slim/slim "^3.0" && \
    chown -R www-data. .
should become your ENTRYPOINT or CMD so that it is run when the container starts (not when it is built).
I would suggest putting those commands in an install script and running that. It would look cleaner and be easier to understand.
Hope this helps but if you need more information just let me know.
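To illustrate, here is a minimal sketch of that approach (the script name install.sh is an assumption, not from the question):

```dockerfile
FROM php:7.3.3-apache-stretch
RUN apt-get update && \
    apt-get install -y --no-install-recommends nano git openssh-server
# install.sh (hypothetical) would contain the composer steps, e.g.:
#   curl -s https://getcomposer.org/installer | php
#   echo "{}" > composer.json
#   php composer.phar require slim/slim "^3.0"
#   chown -R www-data. .
#   exec apache2-foreground
COPY install.sh /usr/local/bin/install.sh
RUN chmod +x /usr/local/bin/install.sh
WORKDIR /var/www/html
# runs on `docker-compose up`, after ./slim/vendor has been mounted
ENTRYPOINT ["install.sh"]
```

With this, Composer writes into the already-mounted vendor directory, so the files should also appear in ./slim/vendor on the host.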

Related

Create database from one docker container in another

I have a program that builds servers automatically whenever we want stakeholders to test a new feature.
Currently I have the following setup:
Container 1 - all (contains nodejs, php and other dependencies)
Container 2 - db (contains the mysql database)
I'm aware that container 1 should be split, but this would add unnecessary complexity at this stage of development.
Whenever a new feature is completed and ready to be deployed to a stage server we run: yarn run create:server --branchName=new-feature. This will create all of the configuration necessary to bring up our newly created server.
My problem is that whenever I run the command above, I need to create a database in the db container from the all container:
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS `xxxx`"
The script main.ts runs in the context of the all container, so it is necessary for all to communicate with db.
import { execSync } from 'child_process';

export const createDatabase = (subdomain: string) => {
  const username = process.env.DB_USERNAME;
  const password = process.env.DB_PASSWORD;
  console.log(`[INFO] Creating database with name \`${subdomain}\``);
  // the triple backslash is necessary to avoid command substitution in some shells
  if (isLocalEnviroment()) {
    execSync(`docker run -it stage-manager-db mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  } else {
    execSync(`mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  }
  console.log(`[INFO] Database \`${subdomain}\` created successfully`);
}
On local environment we would like to use docker, while in production everything will sit in the same machine (db, frontendapp and api).
When trying to run the following command docker run -it stage-manager-db mysql -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS master" from all I get
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I have tried restarting the service with:
service docker restart
which gives
[ ok ] Starting Docker: docker.
but trying to communicate with db from all keeps giving the same error. When I try service docker stop I get:
[....] Stopping Docker: dockerstart-stop-daemon: warning: failed to kill 825: No such process
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.
failed!
I have since tried several links to fix this issue:
https://github.com/docker/for-linux/issues/52#issuecomment-333563492
https://askubuntu.com/questions/1146634/how-to-remove-docker-from-windows-subsystem
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
Cant uninstall Docker from Ubuntu on WSL
How can I communicate from all container to db container?
Dockerfile
FROM php:7.4-fpm
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev \
    libfontconfig1 \
    libxrender1 \
    libpng-dev \
    make \
    nginx \
    apt-transport-https \
    gnupg2 \
    wget \
    procps \
    docker.io
# Install nodejs
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt -y install nodejs
# Install extensions
RUN docker-php-ext-install pdo_mysql exif zip pcntl gd
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install -j$(nproc) gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y git
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install Yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn
# Install dependencies for this project
RUN yarn global add ts-node typescript
RUN useradd -m forge
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=forge:forge . /var/forge
# Copy ssh keys
COPY ./config/ssh /home/forge/.ssh/
# Give right permissions to `ssh` keys
RUN chmod 600 /home/forge/.ssh/config
RUN chmod 600 /home/forge/.ssh/back_end_deploy_key
RUN chmod 600 /home/forge/.ssh/frontend_deploy_key
RUN chmod 644 /home/forge/.ssh/back_end_deploy_key.pub
RUN chmod 644 /home/forge/.ssh/frontend_deploy_key.pub
RUN chown forge:forge /home/forge/.ssh/*
# Up Docker
RUN service docker start
RUN usermod -aG docker forge
# Create folder for stage servers
RUN mkdir -p /var/www/stage-servers
# Give correct permissions to `stage-servers` folder
RUN chown forge:www-data /var/www/stage-servers
RUN chmod g+s /var/www/stage-servers
RUN chmod o-rwx /var/www/stage-servers
# Change current user to forge
USER forge
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml
version: '3.7'
services:
  all:
    working_dir: /var/www/stage-manager
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./:/var/www/stage-manager"
      - "./config/ssh:/root/.ssh"
    networks:
      - main
  # MySQL Service
  db:
    image: mysql:5.7.22
    container_name: stage-manager-db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: whatever
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
    networks:
      - main
volumes:
  project:
    driver: local
    driver_opts:
      type: none
      device: $PWD/
      o: bind
  dbdata:
    driver: local
networks:
  main:
I'm fairly new to Docker, so if there is anything I might be doing wrong, please let me know. I have a feeling this could be done much better, so feel free to suggest improvements.
Update
** DO NOT DO THIS **
Instead of deleting this answer, I will leave it here so others can see that this is not a secure/valid solution to this problem.
By David Maze's comment:
Remember that anyone who can access the Docker socket has unrestricted root-level access over the whole host system. I would not add the Docker socket in casually here.
I was able to make it work by sharing the socket between my host OS and the all container.
docker-compose.yml
all:
  working_dir: /var/www/stage-manager
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8080:80"
  volumes:
    - "./:/var/www/stage-manager"
    - "./config/ssh:/root/.ssh"
    - "/var/run/docker.sock:/var/run/docker.sock"   # <- the important part
  networks:
    - main
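As a less privileged alternative (a sketch, not part of the answer above): since all and db sit on the same main network, the script can skip docker entirely and point the mysql client at the db service by hostname. The backtick escaping from the question still applies:

```shell
# hypothetical commands, run from inside the `all` container; the hostname
# `db` resolves because both services share the compose network `main`
subdomain="new-feature"
# escaping the backticks keeps the shell from treating them as command substitution
sql="CREATE DATABASE IF NOT EXISTS \`${subdomain}\`"
echo "$sql"   # prints: CREATE DATABASE IF NOT EXISTS `new-feature`
# mysql -h db -u "$DB_USERNAME" -p"$DB_PASSWORD" -e "$sql"
```

This avoids mounting the Docker socket, which, as David Maze notes, grants root-level access to the whole host.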

Files generated through docker-compose run web rails g are not created

I have a project which uses docker-compose to provide an environment to developers. The application builds fine with the docker-compose build command and runs on 0.0.0.0:3000 with docker-compose up. When I try to run the command docker-compose run web rails g uploader or docker-compose run web rails g migration, the console shows that the files were created successfully, but when I check the project there are no files.
This is my Dockerfile:
# Base image
FROM ruby:2.7.0
# Set environment variables in docker
ENV INSTALL_PATH=/app \
    RAILS_ENV=production \
    RACK_ENV=$RAILS_ENV \
    RAILS_LOG_TO_STDOUT=true \
    RAILS_SERVE_STATIC_FILES=true \
    SECRET_KEY_BASE=ad187ccccdf25beb51568211a26b0bff237385d79df37e08151acda85266f9a469f37926450ba18d9362ec5e83d1b612c09368bc59dc895cb5ce2798a3ab456b
RUN env
# Ensure gems are cached and only get updated when they change. This will
# drastically increase build times when your gems do not change.
ADD Gemfile* $INSTALL_PATH/
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -qq -y build-essential nodejs libpq-dev cron htop vim sqlite3 yarn imagemagick netcat --fix-missing --no-install-recommends \
    && cd $INSTALL_PATH; bundle install --jobs 20 --retry 5
WORKDIR $INSTALL_PATH
ADD . .
RUN mv config/database.docker.yml config/database.yml \
    # Fix windows line ending from windows runners
    && find ./ -type f -exec sed -i 's/\r$//' {} + \
    && chmod +x docker/* \
    && yarn install --check-files \
    && RAILS_ENV=$RAILS_ENV bundle exec rails assets:precompile \
    && chown -R nobody:nogroup $INSTALL_PATH
USER nobody
# Expose a volume so that nginx will be able to read in assets in production.
VOLUME ["$INSTALL_PATH/public"]
EXPOSE 3000
CMD ["docker/startup.sh"]
And this is my docker-compose.yml:
version: '2'
volumes:
  database_data:
    driver: local
  web_rails_public: {}
services:
  web:
    restart: always
    image: eu.gcr.io/academic-ivy-225422/joystree_web
    container_name: joystree_web_app_container
    build: .
    volumes:
      - web_rails_public:/app/public
    env_file:
      - '.env.web'
    ports:
      - "3000:3000"
    links:
      - "database:database"
    depends_on:
      - database
  database:
    restart: always
    container_name: joystree_postgres_container
    image: postgres:11
    env_file:
      - '.env.db'
    ports:
      - "5432:5432"
    volumes:
      - database_data:/var/lib/postgresql/data
I had this same problem and solved it by following these steps:
1 - In the Dockerfile, add the following lines so that new files can be created:
RUN mkdir /home/web \
    && chown $(id -un):$(id -gn) /home/web
WORKDIR /home/web
2 - In docker-compose.yml, the web volume should be .:/home/web (or the same path you used in mkdir):
web:
  volumes:
    - .:/home/web
I hope that solves your problem too.

Docker not pulling updated php version

I am trying to update the PHP version in Docker.
This is what my Dockerfile looks like:
FROM php:7.2-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd zip
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -d /home/ubuntu ubuntu
RUN mkdir -p /home/ubuntu/.composer && \
    chown -R ubuntu:ubuntu /home/ubuntu
# Set working directory
WORKDIR /var/www
USER ubuntu
I have changed the PHP version to 7.3, and I tried deleting all docker containers and recreating them with docker rm -vf $(docker ps -a -q). Then I built my docker containers using docker-compose build --no-cache --pull.
docker-compose.yaml file looks like this:
version: "3.7"
services:
  app:
    build:
      context: ./
      dockerfile: ./docker/Dockerfile
    image: myapp
    container_name: myapp-app
    restart: unless-stopped
    working_dir: /var/www/
    volumes:
      - ./:/var/www
    networks:
      - myapp
But the PHP version is still reported as 7.2.
Any advice?
To remove all containers/images/networks/.. run:
docker system prune -a
Then try to build the image.
If that doesn't work: can you share the logs showing where the wrong version gets pulled?

docker-compose and Dockerfile do not install drupal

New to docker, and I wanted to install Drupal 7 with docker to mirror our production server environment. (We are getting ready to upgrade to Drupal 8 - not relevant to this question here.) When I run docker-compose, the docker container and an app folder are created, but there is nothing inside app/. I then placed a composer.json in the root to run composer and install Drupal 7. That works, but I thought the point of docker-compose was that it would install everything, including Drupal 7.
What am I doing wrong?
Follow up question:
Since I am trying to mirror the drupal site on the production server environment, I need to install drupal version 7.69, but this version is not listed on Docker Hub as a package. So, I can't install that specific version?
Docker 19.03.13
MacOS 10.14.6
LAMP
MySQL databases not in volume, but served from Mac development environment
Directory structure:
root
|--apache-drupal.conf
|--docker-compose.yml
|--Dockerfile
|--composer.json
Dockerfile
FROM drupal:7.73-apache
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y \
    automake \
    bsdmainutils \
    build-essential \
    ssh \
    unzip \
    curl \
    libopenmpi-dev \
    openmpi-bin \
    git \
    default-mysql-client \
    vim \
    wget \
    zlib1g-dev
# Install Composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
    php composer-setup.php && \
    mv composer.phar /usr/local/bin/composer && \
    php -r "unlink('composer-setup.php');" && \
    ln -s /root/.composer/vendor/bin/drush /usr/local/bin/drush
RUN cp /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini && \
    sed -i -e "s/^ *memory_limit.*/memory_limit = -1/g" /usr/local/etc/php/php.ini && \
    sed -i -e "s/^ *upload_max_filesize.*/upload_max_filesize = 30M/g" /usr/local/etc/php/php.ini
# Install Drush
RUN composer global require drush/drush:8.2 && \
    composer global update
#RUN wget -O drush.phar https://github.com/drush-ops/drush-launcher/releases/download/0.4.2/drush.phar && \
# chmod +x drush.phar && \
# mv drush.phar /usr/local/bin/drush
# Clean repository
RUN apt-get clean && rm -rf /var/www/html/* && rm -rf /var/lib/apt/lists/*
COPY apache-drupal.conf /etc/apache2/sites-enabled/000-default.conf
WORKDIR /app
docker-compose.yml
version: '2'
services:
  drupal:
    image: userID/website_d7:1.0
    container_name: website_d7
    build: .
    ports:
      - "8033:80"
    extra_hosts:
      - "test.docker:127.0.0.1"
    environment:
      MYSQL_USER: user
      MYSQL_PASS: pass
      MYSQL_DATABASE: website_d7
    volumes:
      - ./app:/app:cached
    restart: always
Running docker containers with:
docker-compose build
docker-compose up
I personally never installed Drupal from docker / docker-compose; I always used composer to do it, which in my opinion is better because you can manage the Drupal version you want and the modules you need in the composer.json. I only use docker / docker-compose to build the environment and the containers (database / frontend / backend / cache manager).
Under volumes:, you are mounting your local ./app folder INTO the container, overwriting /app inside the container. So if there is nothing in it to begin with, it won't be filled by your docker container being created.
On the Drupal Docker Hub page (under the volumes heading) they talk about adding 4 volume mounts for specific folders, like:
volumes:
  - /path/on/host/modules:/var/www/html/modules
  - /path/on/host/profiles:/var/www/html/profiles
  - /path/on/host/sites:/var/www/html/sites
  - /path/on/host/themes:/var/www/html/themes
if you put all those into your ./app folder locally, you might end up with something like:
volumes:
  - ./app/modules:/var/www/html/modules
  - ./app/profiles:/var/www/html/profiles
  - ./app/sites:/var/www/html/sites
  - ./app/themes:/var/www/html/themes
Generally, I like to use docker for containerizing the environment/runtime/third-party systems (like Drupal, or more often in my case WordPress), and then I set up volumes similar to the above for the specific folder(s) that are unique to the project (themes, plugins, etc.). In my case I usually do WordPress development, so I just have a single mount for ./wp-content:/var/www/html/wp-content.
RE: your follow-up question - if you look at the "Tags" tab on that same Docker Hub page (or search for 7.69 there), you'll see it is actually listed, so it should be available.
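For example (assuming the 7.69 tag follows the same naming as the other Apache variants on that page), pinning the base image in the Dockerfile would look like:

```dockerfile
# pin the exact production version instead of 7.73
FROM drupal:7.69-apache
```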

Reference to image built by docker compose is tied to project/directory name

I'm trying to set up a drupal environment with docker compose and it is working somewhat.
However, I've split my image up in a base drupal image and a custom layer on top with my configuration, modules and so on. The base image is pulled from the official repo and enhanced with a couple of tools I need (for example composer and a php extension).
My dockerfile for the custom layer looks like this (first few lines):
FROM reponame_drupal
COPY sites /var/www/html/sites/
RUN chown -R www-data:www-data /var/www/html/sites/default/files
(I'm aware that I should probably change permissions with an entrypoint script)
This works since everything is placed in a directory called 'repo-name'; however, this seems incredibly fragile. If I change the name of my project for some reason, my Dockerfile breaks.
I would very much like to just write FROM drupal or a custom name that I control, instead of one based on the directory name.
Can I change the name of the network to something I can control in code (in docker-compose.yml)? What is the best practice here?
The dockerfile for the drupal base looks like this:
FROM drupal:7.56-apache
# Install packages
RUN rm /bin/sh && ln -s /bin/bash /bin/sh && \
    apt-get update && apt-get install --no-install-recommends -y \
    curl \
    wget \
    vim \
    git \
    unzip \
    libmcrypt-dev
# Install PHP extensions
RUN docker-php-ext-install \
    mcrypt
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php && \
    mv composer.phar /usr/local/bin/composer && \
    ln -s /root/.composer/vendor/bin/drush /usr/local/bin/drush
# Install Drush
RUN composer global require drush/drush:8 && \
    composer global update
# Clean repository
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
The docker-compose.yml looks something like this:
version: '3.3'
services:
  mysql:
    image: mysql/mysql-server:5.7
    [additional settings]
  drupal:
    build: ./docker/drupal
  customlayer:
    build: ./docker/customlayer
    ports:
      - "8090:80"
    depends_on:
      - mysql
    restart: always
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    [additional settings]
You can specify image option for services you want to build in docker-compose.yml:
drupal:
  build: ./docker/drupal
  image: reponame_drupal # or whatever you like
Although having this in docker-compose.yml just to build base image for the real service sounds wrong.
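Putting the two together, a sketch (the image name and tag here are placeholders, not from the question): give the base service an image name you control, and reference that same name in the custom layer's Dockerfile:

```yaml
# docker-compose.yml
services:
  drupal:
    build: ./docker/drupal
    image: mycompany/drupal-base:7.56   # any name/tag you control
  customlayer:
    build: ./docker/customlayer
```

and in docker/customlayer/Dockerfile use FROM mycompany/drupal-base:7.56, so the reference no longer depends on the project directory name.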
