Grails App running in Docker Container not using Local Packages

I'm currently trying to run our Grails 2.3.11 app through docker-compose along with its database. The database container comes up without issue, and the app container sets up Grails and starts compiling, but it re-downloads every package each time I stop and restart the container. This is a problem because we have so many packages to download (and a bunch of errors we have to work around, because Grails 2). I've tried mounting my local Grails folders into the container so it can run off of those, but without success. Is there something obvious I'm doing wrong, or an easy way to check where the issue might be?
I'm also attempting to map the local database data into the mysql container, but that's having issues too. I haven't looked into it much yet, so if you see an obvious problem there, that would be helpful.
docker-compose.yml:
version: '2'
services:
  grails:
    image: ibbrussell/grails:2.3.11
    command: run-app
    volumes:
      - ~/.m2:/home/developer/.m2
      - ~/.gradle:/home/developer/.gradle
      - ~/.grails:/home/developer/.grails
      - ./:/app
    ports:
      - "8080:8080" # Grails default port
      - "5005:5005" # Grails debug port
    links:
      - db
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 4G
  db:
    image: mysql:5.6
    container_name: grails_mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: grails
    volumes:
      - "/usr/local/mysql/data:/var/lib/mysql"
Dockerfile:
FROM java:8
# Set customizable env vars defaults.
ENV GRAILS_VERSION 2.3.11
# Install Grails
WORKDIR /usr/lib/jvm
RUN wget https://github.com/grails/grails-core/releases/download/v$GRAILS_VERSION/grails-$GRAILS_VERSION.zip && \
unzip grails-$GRAILS_VERSION.zip && \
rm -rf grails-$GRAILS_VERSION.zip && \
ln -s grails-$GRAILS_VERSION grails
# Setup Grails path.
ENV GRAILS_HOME /usr/lib/jvm/grails
ENV PATH $GRAILS_HOME/bin:$PATH
ENV GRAILS_OPTS="-XX:MaxPermSize=4g -Xms4g -Xmx4g"
# Create App Directory
RUN mkdir /app
# Set Workdir
WORKDIR /app
# Set Default Behavior
ENTRYPOINT ["grails"]

So the volume mapping I was using turned out not to be correct. I had taken the mapping from one article; a mapping from another source ended up working. The Dockerfile has no USER directive, so Grails runs as root inside the container, and its cache directories live under /root rather than /home/developer. I made the switch below:
original:
volumes:
  - ~/.m2:/home/developer/.m2
  - ~/.gradle:/home/developer/.gradle
  - ~/.grails:/home/developer/.grails
  - ./:/app
new:
volumes:
  - ~/.m2:/root/.m2
  - ~/.gradle:/root/.gradle
  - ~/.grails:/root/.grails
  - ./:/app
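A related sketch (untested, and assuming you don't need the caches shared with the host at all): named Docker volumes persist the caches between container restarts without depending on which home directory the container user has:

```yaml
version: '2'
services:
  grails:
    image: ibbrussell/grails:2.3.11
    command: run-app
    volumes:
      # Named volumes survive docker-compose down/up, so packages
      # are only downloaded on the first run.
      - grails-cache:/root/.grails
      - m2-cache:/root/.m2
      - ./:/app
volumes:
  grails-cache:
  m2-cache:
```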

Related

Why is my first test in Postman/Newman hanging in Travis-CI?

Tl;dr I'm using Docker to run my Postman/Newman tests and my API tests hang when run in Travis-CI but not when run locally. Why am I encountering tests that run infinitely?
Howdy guys! I've recently started to learn Docker, Travis-CI and Newman for a full stack application. I started with developing the API and I'm taking a TDD approach. As such, I'm testing my API first. I set up my .travis.yml file to download a specific version of Docker-Compose and then use Docker-Compose to run my tests in a container I name api-test. The container uses an image, dannydainton/htmlextra, which is built from the official postman/newman:alpine image like so:
language: node_js
node_js:
  - "14.3.0"
env:
  global:
    - DOCKER_COMPOSE_VERSION: 1.26.2
    - PGHOST: db
    - PGDATABASE: battle_academia
    - secure: "xDZHJ9ZVe3WPXr6WetERMjFnTlMowyEoeckzLcRvWyEIe2qbnWrJRo7cIRxA0FsyJ7ao4QLVv4XhOIeqJupwW3nfnljo35WGcuRBLh76CW6JSuTIbpV1dndOpATW+aY3r6GSwpojnN4/yUVS53pvIeIn03PzQWmnbiJ0xfjStrJzYNpSVIHLn0arujDUMyze8+4ptS1qfekOy2KRifG5+viFarUbWUXaUiJfZCn14S4Wy5N/T+ycltNjX/qPAVZYV3fxY1ZyNX7wzJA+oV71MyApp5PgNW2SBlePkeZTnkbI7FW100MUnE4bvy00Jr/aCoWZYTySz86KT+8HSGzy6d+THO8zjOXKJV5Vn93+XWmxtp/yjBsg+dtFlZUWkN99EBkEjjwJc1Oy5zrOQNjsptNGpl1kid5+bAT4XcP4xn7X5pc7QB8ZE3igbfKTM11LABYN1adcIwgGIjUz1eQnFuibtkVM4oqE92JShUF/6gbwGJsWjQGBNBCOBBueYNB86sk0TiAfS08z2VW9L3pcljA2IwdXclw3f1ON6YelBTJmc88EmxI4TS0hRC5KgMCkegW1ndcTZwqIQGFm+NFbe1hKMmqTfgOg5M8OQZBtUkF60Lox09ECg59IrYj+BIa9J303+bo+IMgZ1JVYlL7FA2qc0bE8J/9A1C2wCRjDLLE="
    - secure: "F/Ru7QZvA+zWjQ7K7vhA3M2ZrYKXVIlkIF1H7v2dPv/lsc18eWGpOQep4uAjX4IMyLY/6n7uYRLnSlbvOWulVUW8U52zWiQkYFF9OwosuTdIlVTAQGp3B0CAA+RCxMtDQay6fN9H6e2bL3KwjT//VUHd1E6BPu+O1/RyX+0+0KvTmExmMSuioSpDPcI20Mym2vRCgNPb1gfajr5QfWKPJlrPjfyNhDxWMhM94nwTuLYIVZwZPTZ0Ro5D6hhXFVZOFIlHr5VDbbFa+Xo0TIdP/ZudxZ7p3Mn7ncA8seLx2Q5/zH6tJ4DSUpEm67l5IqUrvd9qp0CNCjlTcl3kOJK4qIB1WtLm6oW2rBqDyvthhuprPpqEcs7C9z2604VLybdOmJ0+Y/7uIo6po388avGN4ZwZbWQ1xiiW+Ja8kkHZYEKo4m0AbKdX9pn8otcNO+1xlDtUU7CZey2QA8WrFlfHWqRapIgNfT5tTSTAul3yWAFCRw09PHYELuO7oQCqFZi7zu3HKWknbkzjf+Cz3TfIFTX/3saiqyquhieOPbnGC5xgTmTrA2ShfNxQ6nkDJPU0/qmaCNJt9CwpNS2ArqcK3xYijiNi+SHaKwEsYh0VqiUqSCWn05eYKNAe3MUQDsyKFEkykJW60yEkN7JsvO1WpI53VKmOnZlRHLzJyc5WkZw="
    - PGPORT: 5432
services:
  - docker
before_install:
  - npm rebuild
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
jobs:
  include:
    - stage: api tests
      script:
        - docker --version
        - docker-compose --version
        - >
          docker-compose run api-test
          newman run battle-academia_placement-exam_api-test.postman-collection.json
          -e battle-academia_placement-exam_docker.postman-environment.json
          -r htmlextra,cli
And my docker-compose.yml file has 4 containers:
- client is the React front end,
- api is the Node.js/Express back end,
- db is the database that the API pulls data from in the test environment,
- api-test is the container with Newman/Postman and some reporters, which I believe is built from Node.js.
I hardcode in the environment variables when running locally, but the file is as follows:
version: '3.8'
services:
  client:
    build: ./client
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    build: ./server
    environment:
      - PGHOST=${PGHOST}
      - PGDATABASE=${PGDATABASE}
      - PGUSER=${PGUSER}
      - PGPASSWORD=${PGPASSWORD}
      - PGPORT=${PGPORT}
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:12.3-alpine
    restart: always
    environment:
      - POSTGRES_DB=${PGDATABASE}
      - POSTGRES_USER=${PGUSER}
      - POSTGRES_PASSWORD=${PGPASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - ./server/db/scripts:/docker-entrypoint-initdb.d
  api-test:
    image: dannydainton/htmlextra
    entrypoint: [""]
    command: newman run -v
    volumes:
      - ./server/api/postman-collections:/etc/newman
    depends_on:
      - api
Now that the setup is out of the way, my issue is that this config works locally when I cut out .travis.yml and run the commands myself; however, putting Travis-CI in the mix stirs up an issue where my first test just... runs forever.
I appreciate any advice or insight towards this issue that anyone provides. Thanks in advance!
The issue did not come from where I had expected. After debugging, I thought that the issue originally came from permission errors since I discovered that the /docker-entrypoint-initdb.d directory got ignored during container startup. After looking at the Postgres Dockerfile, I learned that the files are given permission for a user called postgres. The actual issue stemmed from me foolishly adding the database initialization scripts to my .gitignore.
Edit
Also the Newman tests were hanging because they were trying to access database tables that did not exist.
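The gitignore mistake is easy to catch. A quick sketch (the script path is taken from the compose file above, and git check-ignore prints the matching .gitignore rule and exits 0 when a file IS ignored):

```shell
# Ask git whether the database init scripts are being ignored,
# and if so, by which .gitignore rule.
git check-ignore -v server/db/scripts/*.sql \
  && echo "WARNING: init scripts are gitignored" \
  || echo "init scripts are tracked"
```
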

How can I persist go 1.11 modules in a Docker container?

I am migrating a Go 1.10 app to Go 1.11. This also includes migrating from dep to mod for managing dependencies.
As the application depends on a database, I am using docker-compose to set up the local development environment. With Go 1.10 I simply mounted the local repository (including the vendor folder) into the correct location in the container's GOPATH:
web:
  image: golang:1.10
  working_dir: /go/src/github.com/me/my-project
  volumes:
    - .:/go/src/github.com/me/my-project
  environment:
    - GOPATH=/go
    - PORT=9999
  command: go run cmd/my-project/main.go
Since Go 1.11 ditches GOPATH (when using modules that is) I thought I could just do the following:
web:
  image: golang:1.11rc2
  working_dir: /app
  volumes:
    - .:/app
  environment:
    - PORT=9999
  command: go run cmd/my-project/main.go
This works, but every time I docker-compose up (or run any other command that calls through to the go tool), it resolves and re-downloads the entire dependency tree from scratch. Outside the container (i.e. on my local OS) this happens only once.
How can I improve the setup so that the Docker container persists the modules being downloaded by the go tool?
This is not mentioned in the wiki article on modules, but from reading the updated docs on the go tool I found out that even when using Go modules, the go tool still uses GOPATH to store the available sources, namely under $GOPATH/pkg/mod.
This means that for my local dev setup, I can 1. define the GOPATH in the container and 2. mount the local $GOPATH/pkg/mod into the container's GOPATH.
web:
  image: golang:1.11rc2
  working_dir: /app
  volumes:
    - .:/app
    - $GOPATH/pkg/mod:/go/pkg/mod
  environment:
    - GOPATH=/go
    - PORT=9999
  command: go run cmd/my-project/main.go
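One caveat worth noting: the go tool in the golang image runs as root, so it can write root-owned files into the host's $GOPATH/pkg/mod through the bind mount. If every dependency has already been downloaded on the host, mounting the cache read-only avoids that (a sketch; drop the :ro flag whenever the container needs to fetch a new module):

```yaml
volumes:
  - .:/app
  # :ro stops the container from writing root-owned files into the host cache
  - $GOPATH/pkg/mod:/go/pkg/mod:ro
```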
You can use a named volume instead of your local GOPATH. The docker-compose.yml then looks like this:
version: '3'
services:
  web:
    image: golang:1.11
    working_dir: /app
    volumes:
      - .:/app
      - cache:/go
    environment:
      - PORT=9999
    command: go run cmd/my-project/main.go
volumes:
  cache:
The cache volume persists everything written to the container's GOPATH across restarts.

Running docker-compose up, stuck on an "infinite" "creating... [container/image]" with php and mysql images

I'm new to Docker, so I don't know if it's a programming mistake or something else. One thing I found strange is that on a Mac it worked fine, but on Windows it doesn't.
docker-compose.yml
version: '2.1'
services:
  db:
    build: ./backend
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - /var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=demo
      - MYSQL_USER=user
      - MYSQL_PASSWORD=123
  php:
    build: ./frontend
    ports:
      - "80:80"
    volumes:
      - ./frontend:/var/www/html
    links:
      - db
Dockerfile inside ./frontend:
FROM php:7.2-apache
# Enable mysqli to connect to database
RUN docker-php-ext-install mysqli
# Document root
WORKDIR /var/www/html
COPY . /var/www/html/
Dockerfile inside ./backend:
FROM mysql:5.7
COPY ./demo.sql /docker-entrypoint-initdb.d
Console:
$ docker-compose up
Creating phpsampleapp_db_1 ... done
Creating phpsampleapp_db_1 ...
Creating phpsampleapp_php_1 ...
It stays like that forever; I tried a bunch of things.
I'm using Docker version 17.12.0-ce, with Linux container mode enabled.
I think I don't need the "version" and "services" keys, but anyway.
Thanks.
In my case, the fix was simply to restart Docker Desktop. After that, everything went smoothly.

Install PHP composer in existing docker image

I'm running docker-letsencrypt through a docker-compose.yml file. It comes with PHP. I'm trying to run PHP Composer with it. I can install Composer from a bash session inside the container, but that doesn't stick when I recreate the container. How do I keep a permanent install of Composer in an existing image that doesn't include Composer by default?
My docker-compose.yml looks like this:
version: "3"
services:
  letsencrypt:
    image: linuxserver/letsencrypt
    container_name: letsencrypt
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
      - "443:443"
    environment:
      - PUID=1000
      - PGID=1000
      - EMAIL=<mailadress>
      - URL=<tld>
      - SUBDOMAINS=<subdomains>
      - VALIDATION=http
      - TZ=Europe/Paris
    volumes:
      - /home/ubuntu/letsencrypt:/config
I did find the one-line installer for composer:
php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
I could add this to the command option in my docker-compose.yml, but that would reinstall Composer on every container restart, right?
You're right about the command option: it is run every time you launch the container.
One workaround is to create your own Dockerfile, as follows:
FROM linuxserver/letsencrypt
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
(RUN directives are only run during the build step).
You should then modify your docker-compose.yml :
...
    build: ./dir
    # dir/ is the folder where your Dockerfile resides.
    # Use the dockerfile directive if you use a non-default naming convention,
    # or if your Dockerfile isn't at the root of your project.
    container_name: letsencrypt
...
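An alternative sketch, assuming the official composer image on Docker Hub is acceptable: copy the prebuilt binary with a multi-stage COPY instead of running the installer (requires Docker 17.05+; the :latest pin is an assumption, a fixed version tag would be safer):

```dockerfile
FROM linuxserver/letsencrypt
# Copy the composer binary straight out of the official composer image.
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
```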

Writable folder permissions in docker

I have a Docker setup with some websites for localhost. I use Smarty as my template engine, and it requires a writable templates_c folder. Any idea how I can make this folder writable?
The error is as following:
PHP Fatal error: Smarty error: unable to write to $compile_dir
'/var/www/html/sitename.local/httpdocs/templates_c'.
Be sure $compile_dir is writable by the web server user. in
/var/www/html/sitename.local/httpdocs/libs/Smarty.class.php on
line 1093
I know this could be set manually on Linux, but I'm looking for an automatic, global solution, since I have many websites with this issue.
Also worth mentioning: I'm using a pretty clean docker-compose.yml:
php56:
  build: .
  dockerfile: /etc/docker/dockerfile_php_56
  volumes:
    - ./sites:/var/www/html
    - ./etc/php:/usr/local/etc/php
    - ./etc/apache2/apache2.conf:/etc/apache2/conf-enabled/apache2.conf
    - ./etc/apache2/hosts.conf:/etc/apache2/sites-enabled/hosts.conf
  ports:
    - "80:80"
    - "8080:8080"
  links:
    - mysql
mysql:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=MY_PASSWORD
    - MYSQL_DATABASE=YOUR_DATABASE_NAME
  volumes:
    - ./etc/mysql:/docker-entrypoint-initdb.d
With a small Dockerfile for the basics:
FROM php:5.6-apache
RUN /usr/local/bin/docker-php-ext-install mysqli mysql
RUN docker-php-ext-configure mysql --with-libdir=lib/x86_64-linux-gnu/ \
&& docker-php-ext-install mysql
RUN a2enmod rewrite
https://github.com/wesleyd85/docker-php7-httpd-apache2-mysql (but then with php 5.6)
I solved the same problem with the solution here: Running docker on Ubuntu: mounted host volume is not writable from container
You just need to add:
RUN chmod a+rwx -R project-dir/smarty.cache.dir
to the Dockerfile.
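One caveat: since the sites are bind-mounted from ./sites, a build-time chmod only affects paths baked into the image, not the mounted folders. For a global fix across many sites, the same chmod can run at container start instead. A sketch (the /var/www/html path comes from the compose file above; wiring this into a wrapper entrypoint is an assumption):

```shell
# Make every templates_c directory under the docroot writable.
# DOCROOT default matches the volume mapping in the compose file.
DOCROOT=${DOCROOT:-/var/www/html}
# "|| true": don't abort container startup if the path is absent.
find "$DOCROOT" -type d -name templates_c -exec chmod a+rwx '{}' + 2>/dev/null || true
```
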
