Writable folder permissions in docker - docker

I have a Docker setup with some websites for localhost. I use Smarty as my template engine, which requires a writable templates_c folder. Any idea how I can make this folder writable?
The error is as following:
PHP Fatal error: Smarty error: unable to write to $compile_dir
'/var/www/html/sitename.local/httpdocs/templates_c'.
Be sure $compile_dir is writable by the web server user. in
/var/www/html/sitename.local/httpdocs/libs/Smarty.class.php on
line 1093
I know this could be set manually in Linux, but I'm looking for an automatic, global solution, since I have many websites that have this issue.
Also worth mentioning: I'm using a pretty clean docker-compose.yml:
php56:
  build: .
  dockerfile: /etc/docker/dockerfile_php_56
  volumes:
    - ./sites:/var/www/html
    - ./etc/php:/usr/local/etc/php
    - ./etc/apache2/apache2.conf:/etc/apache2/conf-enabled/apache2.conf
    - ./etc/apache2/hosts.conf:/etc/apache2/sites-enabled/hosts.conf
  ports:
    - "80:80"
    - "8080:8080"
  links:
    - mysql
mysql:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=MY_PASSWORD
    - MYSQL_DATABASE=YOUR_DATABASE_NAME
  volumes:
    - ./etc/mysql:/docker-entrypoint-initdb.d
With a small dockerfile for basics:
FROM php:5.6-apache
RUN /usr/local/bin/docker-php-ext-install mysqli mysql
RUN docker-php-ext-configure mysql --with-libdir=lib/x86_64-linux-gnu/ \
&& docker-php-ext-install mysql
RUN a2enmod rewrite
https://github.com/wesleyd85/docker-php7-httpd-apache2-mysql (but then with php 5.6)

I solved the same problem with the solution here: Running docker on Ubuntu: mounted host volume is not writable from container. You just need to add:
RUN chmod a+rwx -R project-dir/smarty.cache.dir
to the Dockerfile.
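One caveat worth knowing: with a bind mount like ./sites:/var/www/html, permissions set at build time are hidden by the mounted host directory at run time. So an alternative is to fix the permissions on the host side, or in the running container; a sketch using the service and paths from the question above (adjust per site):

```shell
# option 1: on the host (the bind mount source), from the compose project dir
chmod -R a+rwx sites/sitename.local/httpdocs/templates_c

# option 2: inside the running container
docker-compose exec php56 chmod -R a+rwx /var/www/html/sitename.local/httpdocs/templates_c
```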

Related

Flask in docker working very slowly and not synch files

I have two problems with a Flask app in Docker. The application works slowly and freezes after finishing the last request (for example: the first route works fine, but after clicking another link/page the app freezes; if I go to the homepage via the URL and load the page again, it works). Outside Docker the app is very fast.
The second problem is that Docker does not sync files into the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
  web:
    build: .
    command: tail -f /dev/null
    volumes:
      - ${PWD}/app/:/usr/src/app/
    networks:
      - flask-network
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    networks:
      - flask-network
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
networks:
  flask-network:
    driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective and the code is in the image anyways), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
Second problem is docker not synch files in container after change files.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to just start the database
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
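The workflow described above, written out as commands (the environment variable names are the standard libpq ones; the virtual-environment path is illustrative):

```shell
docker-compose up -d flaskdb          # start only the database container
python3 -m venv .venv                 # ordinary local virtual environment
. .venv/bin/activate
pip install -r requirements.txt
export PGHOST=localhost PGPORT=5432   # point the app at the containerized db
python base_app.py                    # run Flask directly on the host
```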
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
  # v-------- left side: host path (matches COPY source directory)
  - .:/base
  # ^^^^-- right side: container path (matches WORKDIR/destination directory)

Grails App running in Docker Container not using Local Packages

I'm currently trying to run our app, which is on Grails 2.3.11, through docker-compose along with its database. The database is up and running without issue, and the app container sets up Grails and starts the compilation process, but it downloads all the packages again every time I stop and restart the container. This is an issue because we have to download so many packages (and there are a bunch of errors we have to work around because of Grails 2). I've tried to mount my local Grails folders into the container so it runs off of those, but that doesn't seem to be having any success. Is there something obvious I'm doing wrong, or some way I can easily check where the issue might be?
I'm also attempting to map all the local database information into the MySQL container, with issues. I haven't looked into that much yet, but if you see an obvious problem there, that would be helpful.
docker-compose.yml:
version: '2'
services:
  grails:
    image: ibbrussell/grails:2.3.11
    command: run-app
    volumes:
      - ~/.m2:/home/developer/.m2
      - ~/.gradle:/home/developer/.gradle
      - ~/.grails:/home/developer/.grails
      - ./:/app
    ports:
      - "8080:8080" # Grails default port
      - "5005:5005" # Grails debug port
    links:
      - db
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 4G
  db:
    image: mysql:5.6
    container_name: grails_mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: grails
    volumes:
      - "/usr/local/mysql/data:/var/lib/mysql"
Dockerfile:
FROM java:8
# Set customizable env vars defaults.
ENV GRAILS_VERSION 2.3.11
# Install Grails
WORKDIR /usr/lib/jvm
RUN wget https://github.com/grails/grails-core/releases/download/v$GRAILS_VERSION/grails-$GRAILS_VERSION.zip && \
unzip grails-$GRAILS_VERSION.zip && \
rm -rf grails-$GRAILS_VERSION.zip && \
ln -s grails-$GRAILS_VERSION grails
# Setup Grails path.
ENV GRAILS_HOME /usr/lib/jvm/grails
ENV PATH $GRAILS_HOME/bin:$PATH
ENV GRAILS_OPTS="-XX:MaxPermSize=4g -Xms4g -Xmx4g"
# Create App Directory
RUN mkdir /app
# Set Workdir
WORKDIR /app
# Set Default Behavior
ENTRYPOINT ["grails"]
It turned out the mapping I was using was not correct. I was going off a file mapping from one article; after trying another mapping, it worked. I made the switch below:
original:
volumes:
  - ~/.m2:/home/developer/.m2
  - ~/.gradle:/home/developer/.gradle
  - ~/.grails:/home/developer/.grails
  - ./:/app
new:
volumes:
  - ~/.m2:/root/.m2
  - ~/.gradle:/root/.gradle
  - ~/.grails:/root/.grails
  - ./:/app
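If you're unsure which home directory the caches should map to, a quick way to check is to ask the image itself (service name from the compose file above; the image's ENTRYPOINT is grails, so it has to be overridden to get a shell):

```shell
# prints the user the container runs as and its home directory,
# which is where ~/.m2, ~/.gradle and ~/.grails resolve inside the container
docker-compose run --rm --entrypoint sh grails -c 'id; echo $HOME'
```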

How to remove Docker volumes for production and COPY instead?

I have a simple Laravel application with Nginx, PHP and MySQL, each in its own container. It works great in my development environment, but for production I need to remove the bind mounts and copy the contents into the image itself instead. But how do I do this?
Do I need a separate docker-compose-prod.yml file? How can I remove volumes for production? How can I copy my source code and configuration into the image when deploying for production?
Here is my docker-compose.yml file
version: '3'
networks:
  laranet:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginxcontainer
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
    networks:
      - laranet
  mysql:
    image: mysql:5.7.22
    container_name: mysqlcontainer
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - laranet
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    container_name: phpcontainer
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laranet
and here is my php/Dockerfile
FROM php:7.2-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql
RUN chown -R www-data:www-data /var/www
RUN chmod 755 /var/www
1) copy data only for prod
You can use multistage builds to copy the contents only when you build with the target "prod".
FROM php:7.2-fpm-alpine as base
RUN docker-php-ext-install pdo pdo_mysql
RUN chown -R www-data:www-data /var/www
RUN chmod 755 /var/www
FROM base as dev
VOLUME /var/www/html
FROM base as prod
COPY data /var/www/html
VOLUME /var/www/html
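With those stages in place you can also build either image directly, outside Compose (the tag names here are illustrative; context and Dockerfile path match the compose file above):

```shell
# production image: source baked in via COPY
docker build --target prod -t myapp-php:prod -f php/Dockerfile .
# development image: no COPY, code comes from the bind mount at run time
docker build --target dev -t myapp-php:dev -f php/Dockerfile .
```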
Your docker-compose.yml gets a new line for prod:
php:
  build:
    context: .
    dockerfile: php/Dockerfile
    target: prod
  container_name: phpcontainer
  ports:
    - "9000:9000"
  networks:
    - laranet
2) No bind mounts in prod?
Would anonymous volumes for dev be a valid solution? E.g., through the definition of VOLUME /var/www/html you specify that the contents of the /var/www/html path should be put into a volume on container start. If no volume is specified in docker-compose.yml, Docker will create one for you. Sweet, right?
Sidenote
I do not recommend splitting your behavior between dev and prod.
I recommend using volumes throughout your stages. The only difference in prod could be that you copy the contents into the image before you define the VOLUME, since defining a VOLUME makes the folder unchangeable in the following layers.
david-maze pointed out (see comment)
Putting a VOLUME in your Dockerfile mostly only has confusing side effects, and I'd recommend doing it only if you're absolutely clear on what it means. It's definitely not needed for the OP's setup (and in fact has the likely side effect of leaking anonymous volumes on the production system)
Sources
multi-stage build in docker compose?
https://docs.docker.com/engine/reference/builder/#volume

Running docker-compose up, stuck on a "infinite" "creating...[container/image]" php and mysql images

I'm new to Docker, so I don't know if it's a programming mistake or something. One thing I found strange is that on a Mac it worked fine, but on Windows it doesn't.
docker-compose.yml
version: '2.1'
services:
  db:
    build: ./backend
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - /var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=demo
      - MYSQL_USER=user
      - MYSQL_PASSWORD=123
  php:
    build: ./frontend
    ports:
      - "80:80"
    volumes:
      - ./frontend:/var/www/html
    links:
      - db
Docker file inside ./frontend
FROM php:7.2-apache
# Enable mysqli to connect to database
RUN docker-php-ext-install mysqli
# Document root
WORKDIR /var/www/html
COPY . /var/www/html/
Dockerfile inside ./backend
FROM mysql:5.7
COPY ./demo.sql /docker-entrypoint-initdb.d
Console:
$ docker-compose up
Creating phpsampleapp_db_1 ... done
Creating phpsampleapp_db_1 ...
Creating phpsampleapp_php_1 ...
It stays like that forever; I tried a bunch of things.
I'm using Docker version 17.12.0-ce, with Linux container mode enabled.
I think I don't need the "version" and "services", but anyway.
Thanks.
In my case, the fix was simply to restart Docker Desktop. After that, everything went smoothly.
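If a restart alone doesn't help, the service logs usually show what the containers are stuck on; for the compose file above:

```shell
docker-compose up -d        # start detached instead of blocking the console
docker-compose ps           # check which containers actually reached "Up"
docker-compose logs db      # the mysql init (demo.sql import) can take a while
docker-compose logs php
```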

Install PHP composer in existing docker image

I'm running docker-letsencrypt through a docker-compose.yml file. It comes with PHP. I'm trying to run PHP Composer with it. I can install Composer while in the container through bash, but that won't stick when I recreate the container. How do I keep a permanent install of Composer in an existing image that doesn't come with Composer by default?
My docker-compose.yml looks like this:
version: "3"
services:
  letsencrypt:
    image: linuxserver/letsencrypt
    container_name: letsencrypt
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
      - "443:443"
    environment:
      - PUID=1000
      - PGID=1000
      - EMAIL=<mailadress>
      - URL=<tld>
      - SUBDOMAINS=<subdomains>
      - VALIDATION=http
      - TZ=Europe/Paris
    volumes:
      - /home/ubuntu/letsencrypt:/config
I did find the one-line installer for composer:
php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
I could add this to command: in my docker-compose.yml, but that would reinstall Composer even on container restarts, right?
You're right about the command option: it would indeed be run every time you launch your container.
One workaround is to create your own Dockerfile, as follows:
FROM linuxserver/letsencrypt
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
(RUN directives are only run during the build step).
You should then modify your docker-compose.yml :
...
    build: ./dir
    # dir/ is the folder where your Dockerfile resides;
    # use the dockerfile directive if you use a non-default naming convention
    # or if your Dockerfile isn't at the root of your project
    container_name: letsencrypt
...
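After that change, rebuild and verify that Composer survives container recreation (service name from the compose file above):

```shell
docker-compose up -d --build               # rebuild the image with Composer baked in
docker-compose exec letsencrypt composer --version
```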
