How to define a docker cli service in docker-compose

I have a docker-compose file that runs a few services.
services:
  cli:
    build:
      context: .
      dockerfile: docker/cli/Dockerfile
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
  drupal:
    container_name: drupal
    build:
      context: .
      dockerfile: docker/DockerFile.drupal
      args:
        DOC_ROOT: /var/www/html/drupal8site
    ports:
      - 80:80
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
    restart: always
    environment:
      APACHE_DOCUMENT_ROOT: /var/www/html/drupal8site/web
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
I would like to add another service: a container in which I can run CLI commands (composer, drush for Drupal, php, etc.).
The following Dockerfile is how I initially defined the cli service, but the container stops right after it is run. How do I define it so that it is part of my docker-compose setup, shares my mounted volume, and lets me connect to it interactively and run CLI commands on it?
FROM php:7.2-cli
# various programs
RUN apt-get update \
    && apt-get install vim --assume-yes \
    && apt-get install git --assume-yes \
    && apt-get install mysql-client --assume-yes
CMD ["bash"]
Thanks,
Yaron

If you want to run automated scripts against Docker images, that is really a job for a CI pipeline; you can use CloudFoundry or OpenStack for that.
But there are several other questions in this post:
1.) How can I share my mounted volume?
You can pass a volume to a container with the -v option, e.g.:
docker run -it -d -v $(pwd)/localFolder:/exposedFolderFromDocker mydockerhub/myawesomeimage
2.) Can I interactively connect to it and run CLI commands on it?
docker exec -it docker_cli_1 bash
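That only works while the container is running, though. For the compose file in the question, the usual way to keep the cli service alive long enough to exec into it is to allocate a TTY and keep stdin open, so its bash CMD blocks instead of exiting immediately. A minimal sketch (service and volume names taken from the question):

  cli:
    build:
      context: .
      dockerfile: docker/cli/Dockerfile
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
    stdin_open: true   # like docker run -i
    tty: true          # like docker run -t; keeps bash waiting for input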
I recommend implementing image-specific features in each image's own Dockerfile, for example copying and running a prepared shell script:
# your Dockerfile
FROM php:7.2-cli

# various programs
RUN apt-get update \
    && apt-get install vim --assume-yes \
    && apt-get install git --assume-yes \
    && apt-get install mysql-client --assume-yes

# individual changes
COPY your_script.sh /
RUN chown root:root /your_script.sh && \
    chmod 0755 /your_script.sh

# a folder to expose
VOLUME /exposedFolderFromDocker

# note: only the last CMD in a Dockerfile takes effect
CMD ["/your_script.sh"]
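If you prefer one-off commands over a long-running container, docker-compose run starts a fresh container from the service definition (including its volumes) and, with --rm, removes it when the command exits, e.g.:

docker-compose run --rm cli bash
docker-compose run --rm cli composer install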

Related

Docker Compose won't build image but tries to pull from Dockerhub

I have a docker-compose.yml file that defines multiple services. Several of those services should share the same image: they all refer to it in their definitions, and the first one also has the build reference to the Dockerfile location, which should allow this image to be built. But since an update of Docker Desktop this no longer seems to work.
The docker-compose.yml file looks something like this:
version: '3.9'
services:
  frontend:
    build: .
    image: webapp
    volumes:
      - .:/app
    working_dir: /app/src/frontend
    command: yarn run serve
    ports:
      - 8080:8080
  backend:
    image: webapp
    command: ./run_web.sh
    env_file:
      - .env
    ports:
      - 8081:80
    depends_on:
      - db
  db:
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: user
      POSTGRES_DB: db
      POSTGRES_PASSWORD: XXXXXXXXX
This worked perfectly fine until a few days ago, but now I get an error when trying to build the project.
WARNING: Some service image(s) must be built from source by running:
docker compose build frontend
Error response from daemon: pull access denied for webapp, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
What am I missing?
The Dockerfile is not very spectacular and it builds if I simply call it manually.
FROM python:3.7-stretch as base
RUN apt-get update && apt-get install -y \
    curl \
    unzip \
    apt-transport-https \
    && rm -rf /var/lib/apt/lists/*
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
    && curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get update && apt-get install -y \
    yarn \
    && rm -rf /var/lib/apt/lists/*
RUN pip install pip==21.1.1 poetry==1.1.6
WORKDIR /app/src/backend

FROM base as prod
COPY src/backend/poetry.lock src/backend/poetry.toml src/backend/pyproject.toml /app/src/backend/
RUN poetry install --no-dev
COPY src/frontend/package.json src/frontend/yarn.lock /app/src/frontend/
RUN cd /app/src/frontend && yarn install --prod
COPY src/ /app/src/
OK, I see that you're using the image name webapp in both the frontend service and the backend service.
The frontend service has the build instruction and will tag the built image as webapp. The backend service has no build instruction and therefore tries to pull the image webapp from the registry.
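The warning you quoted already points at one fix: build the image explicitly before starting the stack.

docker compose build frontend
docker compose up

Alternatively, a sketch reusing the names from the question: give backend its own build section, so compose can always produce the webapp image locally instead of pulling it.

  backend:
    build: .        # same context as frontend, so webapp is built rather than pulled
    image: webapp
    command: ./run_web.sh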

How to deploy dockerized laravel app with elastic beanstalk?

I'm new to Docker and am trying to deploy a dockerized Laravel app using Elastic Beanstalk. Current Docker files:
docker-compose.yml:
version: '3'
services:
  # PHP Service
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: admin
    container_name: admin-app
    restart: unless-stopped
    working_dir: /usr/share/nginx/app/
    volumes:
      - ./:/usr/share/nginx/app/
    networks:
      - app-network
  nginx:
    image: nginx:stable-alpine
    container_name: admin-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/usr/share/nginx/app/
      - ./nginx/conf.d/:/etc/nginx/conf.d
    networks:
      - app-network

# Docker Networks
networks:
  app-network:
    driver: bridge
and the Dockerfile:
FROM php:7.4-fpm

ARG uid=1000
ARG user=sammy

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip \
    libcurl4-openssl-dev pkg-config libssl-dev

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
RUN pecl install mongodb && docker-php-ext-enable mongodb && \
    pecl install xdebug && docker-php-ext-enable xdebug
RUN pecl config-set php_ini /etc/php.ini

# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Add user for laravel application
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
    chown -R $user:$user /home/$user

# Copy existing application directory contents
COPY . /usr/share/nginx/app
WORKDIR /usr/share/nginx/app
RUN chown -R $user:$user .
USER $user
RUN chown -R $user:$user storage bootstrap/cache
RUN chmod -R 775 storage bootstrap/cache
RUN composer install
RUN php artisan cache:clear
RUN php artisan view:clear
RUN php artisan config:clear

# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
It works fine on my local machine when I run docker compose up -d, but only if I have already run composer install; otherwise it throws the following error.
That is acceptable for development, where I only need to run composer install once, but for production I don't think it is right to manually run composer install every time a new version is deployed. Doesn't the RUN composer install command in the Dockerfile install the required dependencies? I can see the progress bar of dependencies being installed, but no vendor folder is generated when I ssh into the container. Again, it works fine if I ssh into the instance and install the dependencies manually.
I have deployed a Node.js app successfully using Elastic Beanstalk, and there the dependencies were installed properly by the RUN npm install command in the Dockerfile. I don't see any difference in the process. Do I have to include the vendor folder in the zip file as well? Please suggest the correct way to deploy.
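A likely cause, given the compose file above: the bind mount ./:/usr/share/nginx/app/ overlays the image's application directory at runtime, so the vendor folder that RUN composer install produced at build time is hidden behind the (vendor-less) host directory. A sketch of a common workaround, keeping the paths from the question: mask vendor with an anonymous volume so the image's copy stays visible. For production on Elastic Beanstalk, dropping the bind mount entirely, so the code and vendor baked into the image are used, is simpler still.

  app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: admin
    volumes:
      - ./:/usr/share/nginx/app/
      - /usr/share/nginx/app/vendor   # anonymous volume keeps the image's vendor directory visible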

Create database from one docker container in another

I have a program that builds servers automatically whenever we want stakeholders to test a new feature.
Currently I have the following setup:
Container 1 - all (contains nodejs, php and other dependencies)
Container 2 - db (contains the mysql database)
I'm aware that container 1 should be split, but that would add unnecessary complexity at this stage of development.
Whenever a new feature is completed and ready to be deployed to a stage server we run: yarn run create:server --branchName=new-feature. This will create all of the configuration necessary to bring up our newly created server.
My problem is that whenever I run the command above, I need to create a database in the db container from the all container:
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS `xxxx`"
The script main.ts is running in the context of all container, so it is necessary for all to communicate with db.
export const createDatabase = (subdomain: string) => {
  const username = process.env.DB_USERNAME;
  const password = process.env.DB_PASSWORD;
  console.log(`[INFO] Creating database with name \`${subdomain}\``);
  // the triple backslash is necessary to avoid `command substitution` in some shells
  if (isLocalEnviroment()) {
    execSync(`docker run -it stage-manager-db mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  } else {
    execSync(`mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  }
  console.log(`[INFO] Database \`${subdomain}\` created successfully`);
}
On local environment we would like to use docker, while in production everything will sit in the same machine (db, frontendapp and api).
When trying to run the command docker run -it stage-manager-db mysql -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS master" from the all container, I get:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I have tried restarting the service with:
service docker restart
which gives
[ ok ] Starting Docker: docker.
but trying to communicate with db from all keeps producing the same error. When I try service docker stop I get:
[....] Stopping Docker: dockerstart-stop-daemon: warning: failed to kill 825: No such process
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.
failed!
So far I have tried the following links to fix this issue:
https://github.com/docker/for-linux/issues/52#issuecomment-333563492
https://askubuntu.com/questions/1146634/how-to-remove-docker-from-windows-subsystem
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
Cant uninstall Docker from Ubuntu on WSL
How can I communicate from all container to db container?
Dockerfile
FROM php:7.4-fpm

# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev \
    libfontconfig1 \
    libxrender1 \
    libpng-dev \
    make \
    nginx \
    apt-transport-https \
    gnupg2 \
    wget \
    procps \
    docker.io

# Install nodejs
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt -y install nodejs

# Install extensions
RUN docker-php-ext-install pdo_mysql exif zip pcntl gd
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install -j$(nproc) gd

# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y git

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install Yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn

# Install dependencies for this project
RUN yarn global add ts-node typescript
RUN useradd -m forge

# Copy existing application directory contents
COPY . /var/www

# Copy existing application directory permissions
COPY --chown=forge:forge . /var/forge

# Copy ssh keys
COPY ./config/ssh /home/forge/.ssh/

# Give right permissions to `ssh` keys
RUN chmod 600 /home/forge/.ssh/config
RUN chmod 600 /home/forge/.ssh/back_end_deploy_key
RUN chmod 600 /home/forge/.ssh/frontend_deploy_key
RUN chmod 644 /home/forge/.ssh/back_end_deploy_key.pub
RUN chmod 644 /home/forge/.ssh/frontend_deploy_key.pub
RUN chown forge:forge /home/forge/.ssh/*

# Up Docker
RUN service docker start
RUN usermod -aG docker forge

# Create folder for stage servers
RUN mkdir -p /var/www/stage-servers

# Give correct permissions to `stage-servers` folder
RUN chown forge:www-data /var/www/stage-servers
RUN chmod g+s /var/www/stage-servers
RUN chmod o-rwx /var/www/stage-servers

# Change current user to forge
USER forge

# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml
version: '3.7'
services:
  all:
    working_dir: /var/www/stage-manager
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./:/var/www/stage-manager"
      - "./config/ssh:/root/.ssh"
    networks:
      - main

  # MySQL Service
  db:
    image: mysql:5.7.22
    container_name: stage-manager-db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: whatever
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
    networks:
      - main

volumes:
  project:
    driver: local
    driver_opts:
      type: none
      device: $PWD/
      o: bind
  dbdata:
    driver: local

networks:
  main:
I'm fairly new to Docker, so if I'm doing anything wrong, please let me know. I have a feeling this could be done much better, so feel free to suggest improvements.
Update
** DO NOT DO THIS **
Instead of deleting this answer I will leave it here so others can see that this is not a secure/valid solution to this problem
From David Maze's comment:
Remember that anyone who can access the Docker socket has unrestricted root-level access over the whole host system. I would not add the Docker socket in casually here.
I was able to make it work by sharing the socket between my host OS and the all container.
docker-compose.yml
all:
  working_dir: /var/www/stage-manager
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8080:80"
  volumes:
    - "./:/var/www/stage-manager"
    - "./config/ssh:/root/.ssh"
    - "/var/run/docker.sock:/var/run/docker.sock"  # <- the important part
  networks:
    - main
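Since all and db share the main network in the compose file above, a simpler and safer route than mounting the Docker socket is to talk to MySQL over that network, using the service name db as the hostname. A sketch with the credentials from the compose file, assuming the mysql client is installed in the all image:

mysql -h db -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS \`master\`"

In createDatabase, the local-environment branch could then pass -h db to mysql instead of shelling out to docker run.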

Docker-compose container exits with status 0

I am trying to create a Docker environment for TYPO3. So far I am creating an application container with the Apache webserver, but the container always stops with status 0. Are there any tips?
My docker-compose:
version: '3'
services:
  app:
    build: ./docker/app/
    volumes:
      - ./app:/app
    ports:
      - 8080:80
      - 8443:443
    environment:
      - SERVER_NAME=local.typo3.com
  db:
    image: mysql
    build: ./docker/db/
    restart: always
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD="root"
      - MYSQL_ROOT_USER="root"
The Dockerfile looks like this:
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update; \
    apt-get -y upgrade; \
    echo "Europe/Berlin" > /etc/timezone; \
    ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime;

# Install apache
RUN apt-get update && \
    apt-get -y install apache2;
RUN apt-get update;

# Install php
RUN apt-get -y install php7.2 \
    php7.2-soap \
    php7.2-mysql \
    php7.2-xml \
    php7.2-cli \
    php7.2-json \
    php7.2-ldap \
    php-curl;
RUN apt-get install -y --no-install-recommends mysql-client;

COPY resources/000-default.conf /etc/apache2/sites-available/
RUN a2ensite 00-default;
CMD ["service apache2 restart"]
Any help will be appreciated.
As David Maze points out, this simply doesn't work in most containerized environments:
CMD ["service apache2 restart"]
Depending on the environment, this will do one of two things:
It could try to contact systemd, which isn't running because you haven't started it. You don't want to start it! Running a process manager inside a container is generally an anti-pattern.
It would start Apache in the background. When this happens, from Docker's perspective the main process has exited, so the container exits. This is probably what is happening to you.
Rather than using the service command, you should start Apache in the foreground by hand; with Ubuntu's apache2 package that is:
apache2ctl -D FOREGROUND
(On images that ship the httpd binary directly, the equivalent is httpd -DFOREGROUND.)
Or, as suggested, building your application on top of an existing image that already has the necessary features.
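Applied to the Dockerfile above, that means replacing its last line (a sketch, assuming the stock Ubuntu apache2 package):

# run Apache in the foreground so it stays the container's main process
CMD ["apache2ctl", "-D", "FOREGROUND"]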

New code changes exist in live container but are not reflected in the browser

I am using Docker with the open source BI tool Apache Superset. I have added a new file, specifically a .geojson file in the CountryMap directory. Now, when I try to build using docker-compose up --build or make changes in the frontend, Docker is not fully updated, and I get a file not found error when trying to run a query. When I look inside the container via docker exec -it container_id bash, the new file is there.
Dockerfile:
FROM python:3.6-jessie

RUN useradd --user-group --create-home --no-log-init --shell /bin/bash superset

# Configure environment
ENV LANG=C.UTF-8 \
    LC_ALL=C.UTF-8

RUN apt-get update -y

# Install dependencies to fix `curl https support error` and `delaying package configuration warning`
RUN apt-get install -y apt-transport-https apt-utils

# Install superset dependencies
# https://superset.incubator.apache.org/installation.html#os-dependencies
RUN apt-get install -y build-essential libssl-dev \
    libffi-dev python3-dev libsasl2-dev libldap2-dev libxi-dev

# Install extra useful tools for development
RUN apt-get install -y vim less postgresql-client redis-tools

# Install nodejs for custom build
# https://superset.incubator.apache.org/installation.html#making-your-own-build
# https://nodejs.org/en/download/package-manager/
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    && apt-get install -y nodejs

WORKDIR /home/superset

COPY requirements.txt .
COPY requirements-dev.txt .
COPY contrib/docker/requirements-extra.txt .

RUN pip install --upgrade setuptools pip \
    && pip install -r requirements.txt -r requirements-dev.txt -r requirements-extra.txt \
    && rm -rf /root/.cache/pip

RUN pip install gevent

COPY --chown=superset:superset superset superset

ENV PATH=/home/superset/superset/bin:$PATH \
    PYTHONPATH=/home/superset/superset/:$PYTHONPATH

USER superset

RUN cd superset/assets \
    && npm ci \
    && npm run build \
    && rm -rf node_modules

COPY contrib/docker/docker-init.sh .
COPY contrib/docker/docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

HEALTHCHECK CMD ["curl", "-f", "http://localhost:8088/health"]

EXPOSE 8088
docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:3.2
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis:/data
  postgres:
    image: postgres:10
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_USER: superset
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
  superset:
    build:
      context: ../../
      dockerfile: contrib/docker/Dockerfile
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_USER: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      REDIS_HOST: redis
      REDIS_PORT: 6379
      # If using production, comment development volume below
      #SUPERSET_ENV: production
      SUPERSET_ENV: development
      # PYTHONUNBUFFERED: 1
    user: root:root
    ports:
      - 8088:8088
    depends_on:
      - postgres
      - redis
    volumes:
      # this is needed to communicate with the postgres and redis services
      - ./superset_config.py:/home/superset/superset/superset_config.py
      # this is needed for development, remove with SUPERSET_ENV=production
      - ../../superset:/home/superset/superset
volumes:
  postgres:
    external: false
  redis:
    external: false
Why is there a "file not found" error?
Try using absolute paths in volumes:
volumes:
  - /home/me/my_project/superset_config.py:/home/superset/superset/superset_config.py
  - /home/me/my_project/superset:/home/superset/superset
It is because docker-compose is using its cache: if the Dockerfile and the docker-compose.yml have not changed, it does not recreate the container image. To avoid this you should use the following flag:
--force-recreate
    Recreate containers even if their configuration and image haven't changed.
For development purposes I like to use the following switch as well:
-V, --renew-anon-volumes
    Recreate anonymous volumes instead of retrieving data from the previous containers.
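Put together, a development rebuild could look like this (a sketch; flags as spelled in docker-compose v1):

docker-compose up --build --force-recreate --renew-anon-volumes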
