I'm new to Docker and am trying to deploy a dockerized Laravel app using Elastic Beanstalk. Here are my current Docker files.
docker-compose.yml:
version: '3'
services:
  # PHP Service
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: admin
    container_name: admin-app
    restart: unless-stopped
    working_dir: /usr/share/nginx/app/
    volumes:
      - ./:/usr/share/nginx/app/
    networks:
      - app-network
  nginx:
    image: nginx:stable-alpine
    container_name: admin-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/usr/share/nginx/app/
      - ./nginx/conf.d/:/etc/nginx/conf.d
    networks:
      - app-network

# Docker Networks
networks:
  app-network:
    driver: bridge
and the Dockerfile:
FROM php:7.4-fpm
ARG uid=1000
ARG user=sammy
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
libcurl4-openssl-dev pkg-config libssl-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
RUN pecl install mongodb && docker-php-ext-enable mongodb && \
pecl install xdebug && docker-php-ext-enable xdebug
RUN pecl config-set php_ini /etc/php.ini
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Add user for laravel application
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Copy existing application directory contents
COPY . /usr/share/nginx/app
WORKDIR /usr/share/nginx/app
RUN chown -R $user:$user .
USER $user
RUN chown -R $user:$user storage bootstrap/cache
RUN chmod -R 775 storage bootstrap/cache
RUN composer install
RUN php artisan cache:clear
RUN php artisan view:clear
RUN php artisan config:clear
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
It works fine on my local machine when I run docker compose up -d, but only if I have already run composer install; otherwise it throws the following error.
That is fine for development, where I only have to run composer install once, but for production I don't think it is right to run composer install manually every time a new version is deployed. Doesn't the RUN composer install command in the Dockerfile install the required dependencies? I can see the progress bar of the dependencies being installed, but no vendor folder is generated when I SSH into the container. Again, it works fine if I SSH into the instance and install the dependencies manually.
I have also successfully deployed a Node.js app using Elastic Beanstalk; there the dependencies were installed properly by the RUN npm install command in the Dockerfile, and I don't see any difference in the process. Do I have to include the vendor folder in the zip file as well? Please suggest the correct way to deploy.
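For reference, this is roughly how I have been comparing what the built image contains with what a container started through compose actually sees (just a sketch; the image and service names are taken from the compose file above):
docker compose build app
# look inside the bare image, with no compose volumes attached
docker run --rm admin ls /usr/share/nginx/app/
# look inside a container started through compose, i.e. with the ./ bind mount applied
docker compose run --rm app ls /usr/share/nginx/app/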
Related
In my new Symfony application, I am trying to run docker-compose build and I get an error (shown below).
In my root bin folder I do have the file from the error message. I am starting to wonder whether this is a path problem. Can someone please help? Maybe something is wrong with the volume definition in the docker-compose file I posted below.
RUN /var/www/html/bin/app_build.sh:
#19 0.164 /bin/sh: 1: /var/www/html/bin/app_build.sh: not found
version: "3.9"
services:
app-www:
container_name: app-www
hostname: app-www
restart: unless-stopped
entrypoint: apache2-foreground
build:
context: .
args:
ENVIRONMENT: local
volumes:
- ./www:/var/www/html
- ./.docker/.ssh:/root/.ssh
- ./www/node_modules:/var/www/html/node_modules:rw,cached
- ./www/vendor:/var/www/html/vendor:rw,cached
ports:
- "8080:80"
- "8081:443"
depends_on:
- redis
redis:
image: redis:6.2-alpine
restart: always
ports:
- '6363:6379'
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
volumes:
cache:
driver: local
and the Dockerfile:
FROM php:7.4-apache
ENV TZ="Europe/Zurich"
ARG COMPOSER_TOKEN
ENV COMPOSER_TOKEN=${COMPOSER_TOKEN}
# Debian Packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get --yes --no-install-recommends install libxml2-dev libgmp-dev zip npm zlib1g-dev libpng-dev libonig-dev git unzip tzdata \
&& npm install --global yarn
# PHP Extensions
RUN docker-php-ext-install soap bcmath gmp pdo pdo_mysql intl opcache gd json mbstring gmp \
&& docker-php-ext-enable soap bcmath gmp pdo pdo_mysql intl opcache gd json mbstring gmp \
&& pecl install xdebug \
&& pecl install redis \
&& docker-php-ext-enable xdebug redis
# Install composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
RUN mkdir /root/.composer && echo "${COMPOSER_TOKEN}" > /root/.composer/auth.json
# Configure PHP
COPY ./config/docker/php/php.ini /usr/local/etc/php/php.ini
# Configure Apache
RUN a2enmod headers
RUN a2enmod rewrite
RUN a2enmod ssl
RUN rm -rf /etc/apache2/sites-enabled/* /etc/apache2/sites-available/
COPY ./config/docker/apache2/breitling.conf /etc/apache2/sites-enabled
COPY ./config/docker/apache2/ssl/ /etc/apache2/ssl/
# Deploy & Build app
COPY . /var/www/html/
RUN /var/www/html/bin/app_build.sh
# Fix permissions
RUN chmod -R 777 /var/www/html/var/
EXPOSE 80 443
ENTRYPOINT /var/www/html/bin/entrypoint.sh
I am not sure what is wrong, as this is the main config for my project. I am running it on macOS.
After looking into these lines:
COPY . /var/www/html/
RUN /var/www/html/bin/app_build.sh
I would expect that inside the path /var/www/html/ there is again a www directory.
My guess is that you need only the contents of the www directory copied into the Docker image. Then your COPY command should look like this:
COPY ./www/ /var/www/html/
Give it a shot :-)
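If it helps, a quick way to check this from the project root before rebuilding (a rough sketch, assuming the build context is the project root as in the compose file above):
# the path the failing RUN step resolves to when COPY . /var/www/html/ is used
ls -la ./bin/app_build.sh
# the path it would resolve to if the app actually lives under ./www
ls -la ./www/bin/app_build.sh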
I have a docker-compose file which uses a Dockerfile to build the image. In this image (Dockerfile) I created the folder /workspace, which I'd like to bind mount for persistence in my local filesystem.
After docker-compose up, the folder is empty if I bind mount it, but if I do not mount this folder everything works fine (and the folder exists with all the files I added).
This is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
command: uwsgi --ini /workspace/confs/uwsgi.ini --logger file:/workspace/logs/uswgi.log --processes 1 --workers 1 --plugins-dir=/usr/lib/uwsgi/plugins/ --plugin=python
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
depends_on:
- db
- redis
- memcached
volumes:
- ./workspace:/workspace
networks:
- asyncmail
- traefik
# db, redis and memcached are omitted here
# additional labels for traefik are also omitted
This is my Dockerfile:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
SHELL ["/bin/bash", "-c"]
RUN mkdir /workspace
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y redis-server python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev uwsgi-plugin-python
ADD myapp /workspace/
WORKDIR /workspace/src/
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r /workspace/src/requirements.txt \
&& ./manage.py collectstatic --noinput"
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
# CMD ["uwsgi", "--ini", "/workspace/confs/uwsgi.ini", "--logger", "file:/workspace/logs/uswgi.log"]
I know there are some things that could be optimized, but when I do a docker-compose up -d the folder ./workspace is created with only one folder inside, called src. Inside the container, /workspace only has this empty folder too.
If I remove the volumes line in docker-compose, then inside the container the folder /workspace has all the source code of my app.
What am I doing wrong that I can't bind mount the workspace folder?
PS: I know the image I'm using (ubuntu trusty) is old, but my old app only runs with this version.
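For what it's worth, this is how I have been comparing the two sides (a rough sketch; web is the service name from the compose file above):
# on the host
ls -la ./workspace
# inside the running container
docker-compose exec web ls -la /workspace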
Am I correct in assuming that the files you want to appear inside workspace are actually in a folder called "myapp" on your host machine?
(It seems so from this line:)
ADD myapp /workspace/
I think you meant to map that into your Docker container, so under volumes:
volumes:
  - ./myapp:/workspace
Volume maps work one way: the folder inside the container is replaced by the contents of the mapped folder on the host, not the other way around.
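A rough way to see this direction in action with the image built from your Dockerfile (the tag name here is just a placeholder):
docker build -t workspace-demo .
# files baked into the image by ADD myapp /workspace/
docker run --rm workspace-demo ls /workspace
# with an (empty) host folder mounted on top, the image's /workspace content is hidden,
# and nothing gets copied back out to the host
mkdir -p ./empty
docker run --rm -v "$PWD/empty:/workspace" workspace-demo ls -la /workspace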
I ended up bind mounting the source code directory into the container to fix this problem. @NiRR's answer helped a lot.
The final Dockerfile was changed so that it no longer includes the source code in the image:
FROM ubuntu:trusty
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ARG DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-c"]
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python3-pip git-core postgresql-client
RUN apt-get install -y libpq-dev python3-dev libffi-dev libtiff5-dev zlib1g-dev libjpeg8-dev libyaml-dev libpython3-dev openssh-client uwsgi-plugin-python3 libpcre3 libpcre3-dev
WORKDIR /workspace/src
COPY myapp/src/requirements.txt .
RUN /bin/bash -c "pip3 install cffi \
&& pip3 install -r requirements.txt"
# To set timezone
RUN ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
And I changed the docker-compose to the following final version:
version: "3.9"
services:
web:
build: .
command: ./start.sh
environment:
- DB_HOST=db
- DB_NAME=***
- DB_USER=***
- DB_PASS=***
volumes:
- ./myapp:/workspace
Now on container start, all the source code from myapp is available inside the container (via the bind mount).
Everything is under Git control.
If the code changes, we can do a push/pull and docker-compose up -d to restart the container; the new version will already be there.
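The start.sh referenced in the compose file is not shown here; a hypothetical version, assuming it only has to collect static files from the mounted source and then launch uwsgi with the same options as before, would look roughly like this:
#!/bin/bash
# hypothetical start.sh sketch: collect static files from the bind-mounted source, then start uwsgi
set -e
cd /workspace/src
./manage.py collectstatic --noinput
exec uwsgi --ini /workspace/confs/uwsgi.ini \
    --logger file:/workspace/logs/uswgi.log \
    --processes 1 --workers 1 \
    --plugins-dir=/usr/lib/uwsgi/plugins/ --plugin=python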
I have a program that builds servers automatically whenever we want stakeholders to test a new feature.
Currently I have the following setup:
Container 1 - all (contains nodejs, php and other dependencies)
Container 2 - db (contains the mysql database)
I'm aware that container 1 should be split, but that would add unnecessary complexity at this stage of development.
Whenever a new feature is completed and ready to be deployed to a stage server we run: yarn run create:server --branchName=new-feature. This will create all of the configuration necessary to bring up our newly created server.
My problem is that whenever I run the command above, I need to create a database in the db container from the all container:
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS `xxxx`"
The script main.ts runs in the context of the all container, so all needs to be able to communicate with db.
export const createDatabase = (subdomain: string) => {
  const username = process.env.DB_USERNAME;
  const password = process.env.DB_PASSWORD;

  console.log(`[INFO] Creating database with name \`${subdomain}\``);

  // triple back slash is necessary to avoid `command substitution` in some shells
  if (isLocalEnviroment()) {
    execSync(`docker run -it stage-manager-db mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  } else {
    execSync(`mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  }

  console.log(`[INFO] Database \`${subdomain}\` created successfully`);
}
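Spelled out as plain shell commands, the two branches above boil down to the following (a sketch; feature-branch stands in for the subdomain argument):
# what the local branch executes (via the docker CLI inside the all container)
docker run -it stage-manager-db mysql -u "$DB_USERNAME" -p"$DB_PASSWORD" \
    -e "CREATE DATABASE IF NOT EXISTS \`feature-branch\`"
# what the production branch executes (mysql installed on the same machine)
mysql -u "$DB_USERNAME" -p"$DB_PASSWORD" \
    -e "CREATE DATABASE IF NOT EXISTS \`feature-branch\`"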
In the local environment we would like to use Docker, while in production everything will sit on the same machine (db, frontendapp and api).
When I try to run the command docker run -it stage-manager-db mysql -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS master" from all, I get:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
I have tried restarting the service with:
service docker restart
which gives
[ ok ] Starting Docker: docker.
but trying to communicate with db from all keeps producing the same error. When I try service docker stop I get:
[....] Stopping Docker: dockerstart-stop-daemon: warning: failed to kill 825: No such process
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.
failed!
So far I have tried the following links to fix this issue:
https://github.com/docker/for-linux/issues/52#issuecomment-333563492
https://askubuntu.com/questions/1146634/how-to-remove-docker-from-windows-subsystem
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
Cant uninstall Docker from Ubuntu on WSL
How can I communicate from the all container to the db container?
Dockerfile
FROM php:7.4-fpm
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
libzip-dev \
libfontconfig1 \
libxrender1 \
libpng-dev \
make \
nginx \
apt-transport-https \
gnupg2 \
wget \
procps \
docker.io
# Install nodejs
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt -y install nodejs
# Install extensions
RUN docker-php-ext-install pdo_mysql exif zip pcntl gd
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install -j$(nproc) gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install Yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn
# Install dependencies for this project
RUN yarn global add ts-node typescript
RUN useradd -m forge
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=forge:forge . /var/forge
# Copy ssh keys
COPY ./config/ssh /home/forge/.ssh/
# Give right permissions to `ssh` keys
RUN chmod 600 /home/forge/.ssh/config
RUN chmod 600 /home/forge/.ssh/back_end_deploy_key
RUN chmod 600 /home/forge/.ssh/frontend_deploy_key
RUN chmod 644 /home/forge/.ssh/back_end_deploy_key.pub
RUN chmod 644 /home/forge/.ssh/frontend_deploy_key.pub
RUN chown forge:forge /home/forge/.ssh/*
# Up Docker
RUN service docker start
RUN usermod -aG docker forge
# Create folder for stage servers
RUN mkdir -p /var/www/stage-servers
# Give correct permissions to `stage-servers` folder
RUN chown forge:www-data /var/www/stage-servers
RUN chmod g+s /var/www/stage-servers
RUN chmod o-rwx /var/www/stage-servers
# Change current user to forge
USER forge
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml
version: '3.7'
services:
  all:
    working_dir: /var/www/stage-manager
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./:/var/www/stage-manager"
      - "./config/ssh:/root/.ssh"
    networks:
      - main

  # MySQL Service
  db:
    image: mysql:5.7.22
    container_name: stage-manager-db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: whatever
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
    networks:
      - main

volumes:
  project:
    driver: local
    driver_opts:
      type: none
      device: $PWD/
      o: bind
  dbdata:
    driver: local

networks:
  main:
I'm fairly new to Docker, so if there is any approach I might be doing wrong, please let me know. I have a feeling this could be done much better, so feel free to suggest improvements.
Update
** DO NOT DO THIS **
Instead of deleting this answer, I will leave it here so others can see that this is not a secure/valid solution to this problem.
Per David Maze's comment:
Remember that anyone who can access the Docker socket has unrestricted root-level access over the whole host system. I would not add the Docker socket in casually here.
I was able to make it work by sharing the socket between my host OS and the all container.
docker-compose.yml
all:
  working_dir: /var/www/stage-manager
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8080:80"
  volumes:
    - "./:/var/www/stage-manager"
    - "./config/ssh:/root/.ssh"
    - "/var/run/docker.sock:/var/run/docker.sock"   # <- important part
  networks:
    - main
I am trying to update the PHP version in my Docker setup.
This is what my Dockerfile looks like:
FROM php:7.2-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd zip
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -d /home/ubuntu ubuntu
RUN mkdir -p /home/ubuntu/.composer && \
chown -R ubuntu:ubuntu /home/ubuntu
# Set working directory
WORKDIR /var/www
USER ubuntu
I changed the PHP version to 7.3, deleted all Docker containers with docker rm -vf $(docker ps -a -q), and then rebuilt my containers using docker-compose build --no-cache --pull.
The docker-compose.yaml file looks like this:
version: "3.7"
services:
app:
build:
context: ./
dockerfile: ./docker/Dockerfile
image: myapp
container_name: myapp-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- myapp
But the PHP version is still reported as 7.2.
Any advice?
To remove all containers/images/networks/etc., run:
docker system prune -a
Then try to build the image.
If that doesn't work, can you share the logs showing where the wrong version gets pulled?
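If it still shows 7.2 after that, a quick sanity check of what the rebuilt image actually contains (a sketch; the service name is taken from the compose file above):
docker-compose build --no-cache --pull app
docker-compose run --rm app php -v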
I have a setup using a Docker container with Apache2, PHP 7, and Xdebug installed. The host system runs Ubuntu 16.04. I have installed the latest Eclipse Neon on the host computer and have tried many different configurations to get Xdebug working. I have configured the container to expose ports 80 and 9000 and have configured Xdebug for remote start and port 9000 usage. When I try to configure Eclipse debugging, it tells me that port 9000 is in use and will not connect. I've searched the web for any info that would help, but came up with nothing.
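For context, this is roughly how I check what is already holding port 9000 on the host before Eclipse tries to listen on it (a sketch; output omitted):
# anything listening on 9000 on the host?
sudo lsof -iTCP:9000 -sTCP:LISTEN
# any container publishing 9000?
docker ps --format '{{.Names}}\t{{.Ports}}' | grep 9000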
Here is the web server configuration code for the docker container:
FROM php:7.0.19-apache
COPY config/php.ini /usr/local/etc/php/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
git \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd\
&& docker-php-ext-install -j$(nproc) mysqli\
&& pecl install xdebug-2.5.0 \
&& docker-php-ext-enable xdebug
# Installation of Composer
RUN cd /usr/src && curl -sS http://getcomposer.org/installer | php
RUN cd /usr/src && mv composer.phar /usr/bin/composer
# Installation of tools with composer
RUN composer global require --no-plugins --no-scripts phpdocumentor/phpdocumentor
RUN composer global require --no-plugins --no-scripts squizlabs/php_codesniffer
RUN composer global require --no-plugins --no-scripts phpunit/phpunit
RUN composer global require --no-plugins --no-scripts phpunit/dbunit
RUN composer global require --no-plugins --no-scripts phploc/phploc
RUN composer global require --no-plugins --no-scripts phpmd/phpmd
RUN composer global require --no-plugins --no-scripts simpletest/simpletest
ENV PATH /root/.composer/vendor/bin:$PATH
Here are the php.ini settings for Xdebug:
zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20151012/xdebug.so
xdebug.profiler_enable_trigger = 1
xdebug.trace_enable_trigger = 1
xdebug.remote_enable=1
xdebug.remote_host=172.18.0.1
xdebug.remote_port=9000
xdebug.remote_handler="dbgp"
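To make sure the container's PHP actually picks up these settings, I check them like this (a rough sketch; webserver is the service name from the compose file below):
docker-compose exec webserver php -i | grep -E 'xdebug\.(remote_enable|remote_host|remote_port)'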
And here is the docker-compose code that exposes the ports and links in a database container:
version: '2'
services:
  webserver:
    build: ./docker/webserver
    ports:
      - "80:80"
      - "443:443"
      - "9000:9000"
    volumes:
      - /home/www:/var/www
    links:
      - db
  db:
    image: mysql:5.5
    ports:
      - "3306:3306"
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
  myadmin:
    image: phpmyadmin/phpmyadmin:4.6
    ports:
      - "8080:80"
    links:
      - db
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
I'd appreciate any help in getting the debugger working so that I can use it for setting breakpoints. I have installed "The easiest Xdebug" extension for Firefox and intend to use that as the control for debugging once I get everything working.
Thanks in advance.