Restarting docker-compose after running rake assets:precompile resets the changes

I'm currently trying to use Zammad Open Source, a helpdesk ticketing system, with docker-compose. I had used it in a non-Docker setup before, where I edited the HTML views and added some logos and extra features that are required by my team. However, we need to move to a Docker-based instance soon.
I succeeded in installing it normally, and the default compose file does mount a named volume when bringing the containers up. After that I apply the changes the same way I did on my existing setup. The changes require me to run
rake assets:precompile
and restart only the rails container. After restarting it, it works and the changes are reflected.
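In Docker terms that amounts to something like this (a sketch; the service name matches the compose file below, and the exact invocation inside the container may differ):
docker-compose exec zammad-railsserver rake assets:precompile
docker-compose restart zammad-railsserver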
However, once I run
docker-compose restart
all the containers restart (as expected), but the rails server seems to discard every single change I made, and everything looks as if I had just brought up a fresh container.
What I've tried:
- Applying the changes, restarting the rails container, and committing the container to a custom image that I then pulled from. Didn't work.
- Editing the Dockerfile and entrypoint scripts to apply the changes and also run the precompile during installation. Didn't work.
docker-compose.yml
version: '3'

services:
  zammad-backup:
    command: ["zammad-backup"]
    depends_on:
      - zammad-railsserver
      - zammad-postgresql
    entrypoint: /usr/local/bin/backup.sh
    environment:
      - BACKUP_SLEEP=86400
      - HOLD_DAYS=10
      - POSTGRESQL_USER=${POSTGRES_USER}
      - POSTGRESQL_PASSWORD=${POSTGRES_PASS}
    image: ${IMAGE_REPO}:zammad-postgresql${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-backup:/var/tmp/zammad
      - zammad-data:/opt/zammad
  zammad-elasticsearch:
    environment:
      - discovery.type=single-node
    image: ${IMAGE_REPO}:zammad-elasticsearch${VERSION}
    restart: ${RESTART}
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
  zammad-init:
    command: ["zammad-init"]
    depends_on:
      - zammad-postgresql
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - POSTGRESQL_USER=${POSTGRES_USER}
      - POSTGRESQL_PASS=${POSTGRES_PASS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: on-failure
    volumes:
      - zammad-data:/opt/zammad
  zammad-memcached:
    command: memcached -m 256M
    image: memcached:1.6.10-alpine
    restart: ${RESTART}
  zammad-nginx:
    command: ["zammad-nginx"]
    expose:
      - "8080"
    depends_on:
      - zammad-railsserver
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad
  zammad-postgresql:
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASS}
    image: ${IMAGE_REPO}:zammad-postgresql${VERSION}
    restart: ${RESTART}
    volumes:
      - postgresql-data:/var/lib/postgresql/data
  zammad-railsserver:
    command: ["zammad-railsserver"]
    depends_on:
      - zammad-memcached
      - zammad-postgresql
      - zammad-redis
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad
  zammad-redis:
    image: redis:6.2.5-alpine
    restart: ${RESTART}
  zammad-scheduler:
    command: ["zammad-scheduler"]
    depends_on:
      - zammad-memcached
      - zammad-railsserver
      - zammad-redis
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad
  zammad-websocket:
    command: ["zammad-websocket"]
    depends_on:
      - zammad-memcached
      - zammad-railsserver
      - zammad-redis
    environment:
      - MEMCACHE_SERVERS=${MEMCACHE_SERVERS}
      - REDIS_URL=${REDIS_URL}
    image: ${IMAGE_REPO}:zammad${VERSION}
    restart: ${RESTART}
    volumes:
      - zammad-data:/opt/zammad

volumes:
  elasticsearch-data:
    driver: local
  postgresql-data:
    driver: local
  zammad-backup:
    driver: local
  zammad-data:
    driver: local

I found a solution after a few headaches here and there.
What I did: I dove into the files and found out that the image build pulls a release archive from GitHub, so I downloaded that archive. After extracting the tar.gz file, I applied all the changes that I needed, repacked the tar.gz, and edited the Dockerfile to point to the new archive.
After that, I needed to force docker-compose to rebuild the image. With that, the changes are persistent even after restarts.
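A rough sketch of that flow, with hypothetical file names:
# repack the edited source tree
tar -czf zammad-modified.tar.gz -C zammad-src .
# point the Dockerfile at the local archive instead of the GitHub download,
# e.g. ADD zammad-modified.tar.gz /opt/zammad
# then force compose to rebuild instead of reusing the cached image
docker-compose build --no-cache
docker-compose up -d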

Related

Can I deploy a containerized ElasticSearch on Heroku with a web app?

I have a toy MVP application that I'd like to deploy on Heroku. There's an ElasticSearch dependency expressed in a docker-compose file. The smallest ES add-on for Heroku is $67/month, which is more than I want to spend on an MVP. I'm trying to figure out how to deploy it alongside the web app in a containerized fashion. All the guides I saw for multiple processes use a Dockerfile, not a docker-compose file. Can I express this in a heroku.yml configuration?
Here is my docker-compose.yml:
version: '3.6'
services:
  web:
    image: denoland/deno:latest
    container_name: my_app
    build: .
    ports:
      - 3001:3001
    environment:
      - DENO_ENV=local
      - ES_HOST=elasticsearch
      - DENO_PORT=3001
      - ELASTIC_URL=http://elasticsearch:9200
    volumes:
      - .:/usr/src/app
    command: deno run --allow-net --allow-read --allow-env src/main.ts
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - es-net
  elasticsearch:
    container_name: es-container
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.2
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - es-net
volumes:
  esdata:
networks:
  es-net:
    driver: bridge
Not unless you want to pay for Private Spaces, and even then I don't think it would work properly: Heroku's Docker support does not include volume mounts, and internal routing is only available for apps in Private Spaces.
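For reference, heroku.yml only describes how to build and run process types from Dockerfiles; there is no equivalent of compose services, volumes, or networks. A minimal sketch (the run command is an assumption based on the compose file above):
build:
  docker:
    web: Dockerfile
run:
  web: deno run --allow-net --allow-read --allow-env src/main.ts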

NextCloud with OnlyOffice not opening previously saved documents

OnlyOffice is not opening previously saved documents after doing docker-compose down. I needed to increase the memory of the NextCloud instance (Docker container), so I stopped all the containers, modified the docker-compose file, and set everything up again.
There are no issues with new documents so far, but when editing previously saved ones OnlyOffice opens a blank document, even though the file sizes are intact (no errors in the console) and NextCloud still shows their sizes in KB.
version: "2.3"
services:
nextcloud:
container_name: nextcloud
image: nextcloud:latest
hostname: MYDOMAIN
stdin_open: true
tty: true
restart: always
expose:
- "80"
networks:
- cloud_network
volumes:
- /mnt/apps/nextcloud/data:/var/www/html
environment:
- MYSQL_HOST=mariadb
- PHP_MEMORY_LIMIT=-1
env_file:
- db.env
mem_limit: 8g
depends_on:
- mariadb
mariadb:
container_name: mariadb
image: mariadb
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --skip-innodb-read-only-compressed
restart: always
networks:
- cloud_network
volumes:
- mariadb_volume:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=SOMEPASSWORD
env_file:
- db.env
onlyoffice:
container_name: onlyoffice
image: onlyoffice/documentserver:latest
stdin_open: true
tty: true
restart: always
networks:
- cloud_network
expose:
- "80"
volumes:
#- /mnt/apps/onlyoffice/data:/var/www/onlyoffice/Data
- office_data_volume:/var/www/onlyoffice/Data
#- onlyoffice_log_volume:/var/log/onlyoffice
- office_db_volume:/var/lib/postgresql
caddy:
container_name: caddy
image: abiosoft/caddy:no-stats
stdin_open: true
tty: true
restart: always
ports:
- 80:80
- 443:443
networks:
- cloud_network
environment:
- CADDYPATH=/certs
- ACME_AGREE=true
# CHANGE THESE OR THE CONTAINER WILL FAIL TO RUN
- CADDY_LETSENCRYPT_EMAIL=MYEMAIL
- CADDY_EXTERNAL_DOMAIN=MYDOMAIN
volumes:
- /mnt/apps/caddy/certs:/certs:rw
- /mnt/apps/caddy/Caddyfile:/etc/Caddyfile:ro
networks:
cloud_network:
driver: "bridge"
volumes:
office_data_volume:
office_db_volume:
mariadb_volume:
Please also note that you must ALWAYS disconnect your users before stopping or restarting the container. See https://github.com/ONLYOFFICE/Docker-DocumentServer#document-server-usage-issues
sudo docker exec onlyoffice documentserver-prepare4shutdown.sh
It seems that every time the containers are brought up in a NextCloud + OnlyOffice setup, new tokens are generated to authorize access to the documents through headers.
I solved it by adding a third Docker volume to preserve the document server's configuration. Fortunately I had a backup of my files; I removed the containers, added them again, and everything is working now.
- office_config_volume:/etc/onlyoffice/documentserver

onlyoffice:
  container_name: onlyoffice
  image: onlyoffice/documentserver:latest
  stdin_open: true
  tty: true
  restart: unless-stopped
  networks:
    - cloud_network
  expose:
    - "80"
  volumes:
    - office_data_volume:/var/www/onlyoffice/Data
    - office_db_volume:/var/lib/postgresql
    - office_config_volume:/etc/onlyoffice/documentserver
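For this to work, office_config_volume presumably also has to be declared under the top-level volumes: key like the other two, and the container has to be recreated rather than merely restarted, along the lines of:
docker-compose up -d --force-recreate onlyoffice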

GatsbyJS - How can I solve the "net::ERR_CONNECTION_REFUSED" of GatsbyJS?

Note: Thanks @Ferran Buireu for the suggestion. I'm quite sure I'll get downvoted, because I'm very new to Docker and to moving from the networking world into systems and programming.
After deploying GatsbyJS, I found the socket.io error "net::ERR_CONNECTION_REFUSED".
Even though every page works properly when I browse the site, I think it is not running correctly.
How can I solve this error?
I implemented and deployed these services on Ubuntu 20.04.2 with Docker 20.10.6; please see the docker-compose.yml below.
version: "3"
services:
frontendapp01:
working_dir: /frontendapp01
build:
context: ./frontendapp01
dockerfile: Dockerfile
depends_on:
- backendsrv01
- mongoserver
volumes:
- ./sentric01:/srv/front
ports:
- "8001:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv01:1337
networks:
- vpsnetwork
frontendapp02:
working_dir: /frontendapp02
build:
context: ./frontendapp02
dockerfile: Dockerfile
depends_on:
- backendsrv02
- mongoserver
volumes:
- ./sentric02:/srv/front
ports:
- "8002:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv02:1338
networks:
- vpsnetwork
frontendapp03:
working_dir: /frontendapp03
build:
context: ./frontendapp03
dockerfile: Dockerfile
depends_on:
- backendsrv02
- mongoserver
volumes:
- ./sentric03:/srv/front
ports:
- "8003:8000"
environment:
GATSBY_WEBPACK_PUBLICPATH: /
STRAPI_URL: backendsrv02:1338
networks:
- vpsnetwork
backendsrv01:
image: strapi/strapi
container_name: backendsrv01
restart: unless-stopped
environment:
DATABASE_CLIENT: mongo
DATABASE_NAME: essential
DATABASE_HOST: mongoserver
DATABASE_PORT: 27017
networks:
- vpsnetwork
volumes:
- ./app01:/srv/app
ports:
- "1337:1337"
backendsrv02:
image: strapi/strapi
container_name: backendsrv02
restart: unless-stopped
environment:
DATABASE_CLIENT: mongo
DATABASE_NAME: solven
DATABASE_HOST: mongoserver
DATABASE_PORT: 27017
networks:
- vpsnetwork
volumes:
- ./app02:/srv/app
ports:
- "1338:1337"
mongoserver:
image: mongo
container_name: mongoserver
restart: unless-stopped
networks:
- vpsnetwork
volumes:
- vpsappdata:/data/db
ports:
- "27017:27017"
networks:
vpsnetwork:
driver: bridge
volumes:
vpsappdata:
The socket connection only appears during the development stage (gatsby develop); it's intended to refresh and update the browser on each save via hot-reloading, without losing component state. This feature is known as fast-refresh.
As I said, and for obvious reasons, this only applies to gatsby develop. Under gatsby build there is no connection socket. If your Docker development environment shares ports 8000 and 8001 (according to your docker-compose.yml setup), the socket can break once the site is built, because the scope of the project has changed.
In short, you don't have to worry: your project seems to build properly, and the error is only logged because of the port shared between environments.
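If you want to silence it entirely, one option (a sketch, assuming the Gatsby CLI is available in the frontend images) is to serve the production build instead of the dev server, so no hot-reload socket is opened at all:
command: sh -c "gatsby build && gatsby serve --host 0.0.0.0 --port 8000"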
Further readings:
https://www.gatsbyjs.com/docs/conceptual/overview-of-the-gatsby-build-process/
https://www.gatsbyjs.com/docs/reference/local-development/fast-refresh/

env-file and MariaDB in docker-compose

I'm trying to set up nextcloud on a Raspberry Pi 3B+ with MariaDB, roughly following this example:
https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/mariadb/apache/docker-compose.yml
My compose file looks like this:
version: '3'
services:
  db:
    image: mariadb
    env_file:
      - pi.env
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - ${BASE_PATH}/db:/var/lib/mysql
  nextcloud:
    image: nextcloud:apache
    env_file:
      - pi.env
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ${BASE_PATH}/www:/var/www
    depends_on:
      - db
    environment:
      - MYSQL_HOST=db
Then there is the pi.env file:
MYSQL_PASSWORD=secure-password
MYSQL_ROOT_PASSWORD=even-more-secure.password
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
BASE_PATH=/tmp
After running docker-compose up from the directory where the yaml and the env file sit, the two containers start up fine. Alas, the database connection cannot be established, because the db container only accepts a blank password (popping up a shell in the container and running mysql -u nextcloud without handing in a password gives me database access). Still, the $MYSQL_ROOT_PASSWORD environment variable can be correctly echoed from the container.
If I start a mariadb image alone with docker run -e MYSQL_ROOT_PASSWORD=secure-password, everything behaves as expected.
Can someone point me to my mistake?
I finally cured my setup some time ago. Sadly, I can no longer reconstruct what did the trick (and my git commit messages were not as clear to my future self as I hoped they would be :D).
But it appears to me that declaring the environment variables for the database password exclusively in the pi.env file, instead of in the docker-compose.yaml, did the trick.
My docker-compose.yaml:
services:
  db:
    image: jsurf/rpi-mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci
    restart: always
    volumes:
      - db:/var/lib/mysql
    env_file:
      - pi.env
  nextcloud:
    image: nextcloud:apache
    restart: always
    container_name: nextcloud
    volumes:
      - www:/var/www/html
    environment:
      - VIRTUAL_HOST=${VIRTUAL_HOST}
      - LETSENCRYPT_HOST=${VIRTUAL_HOST}
      - LETSENCRYPT_EMAIL=${LETSENCRYPT_EMAIL}
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=${VIRTUAL_HOST}
      - NEXTCLOUD_TRUSTED_DOMAINS=proxy
    env_file:
      - pi.env
    depends_on:
      - db
    networks:
      - proxy-tier
      - default
pi.env:
MYSQL_PASSWORD=secure-password
MYSQL_ROOT_PASSWORD=even-more-secure.password
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
But thank you nonetheless, @Zanndorin!
I know this is a super late answer, but I just stumbled upon this while Googling something completely unrelated.
If I recall correctly, you have to tell docker-compose to actually send the ENV variables to the container by declaring them under environment:
environment:
  - MYSQL_HOST=db
  - MYSQL_PASSWORD
  - MYSQL_USER
I have never declared the .env file in the docker-compose, so maybe that already fixes the issue. I use it this way (I also have a .env file, from which I sometimes override some values).
Example from my development MariaDB container:
environment:
  - MYSQL_DATABASE=mydb
  - MYSQL_USER=${DB_USER}
  - MYSQL_PASSWORD=${DB_PASSWORD}
  - MYSQL_ROOT_PASSWORD
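One way to double-check what compose actually hands to the container (after merging the shell environment and any .env file) is docker-compose config, which prints the fully resolved file:
docker-compose config | grep -A 8 environment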

ElasticSearch container won't start up in Docker

I'm attempting to run this script on Win10 to configure everything. All containers except the elastic container initialize correctly; elastic times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log outputs)
I'm running this script, in which I didn't touch anything except the Windows-side ports (you can see the comments):
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
  alfresco:
    image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
    depends_on:
      - postgresql
      - activemq
      - elastic
    networks:
      - internal
    ports:
      - 8080:8080
    volumes:
      - alf_logs:/usr/local/tomcat/logs
      - alf_data:/opt/alf_data
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  solr:
    image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
    depends_on:
      - alfresco
    networks:
      - internal
    volumes:
      - solr_logs:/usr/local/tomcat/logs/
      - solr_content_store:/opt/solr/ContentStore
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  activemq:
    image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
    ports:
      # I changed these Windows-side ports
      - 61615:61616
      - 61617:61614
      - 8162:8161
      # ORIGINAL
      #- 61616:61616
      #- 61614:61614
      #- 8161:8161
    volumes:
      - activemq-data-volume:/data/activemq
      - activemq-log-volume:/var/log/activemq
      - activemq-conf-volume:/opt/activemq/conf
    environment:
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=admin
    networks:
      - internal
  elastic:
    image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
    environment:
      CLEAN: 'false'
    ports:
      - 9200:9200
    volumes:
      - elastic-data-volume:/usr/share/elasticsearch/data
    networks:
      - internal
  postgresql:
    image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
    volumes:
      - pgsql_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    networks:
      - internal
volumes:
  alf_logs:
  alf_data:
  solr_logs:
  solr_content_store:
  pgsql_data:
  activemq-data-volume:
  activemq-log-volume:
  activemq-conf-volume:
  elastic-data-volume:
  nginx-external-volume:
networks:
  internal:
Any help would be greatly appreciated!
Do you have the logs from the elasticsearch container to share? Without them it's hard to tell why it's exiting.
One thing that has tripped me up repeatedly, though, is the vm.max_map_count setting: the default in Docker is too low for Elasticsearch to function, so it's a good first thing to check.
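A quick sketch of that check (on a Linux host; with Docker Desktop on Windows the setting lives inside Docker's VM rather than in Windows itself):
docker-compose logs elastic            # grab the container logs first
sysctl vm.max_map_count                # Elasticsearch needs at least 262144
sudo sysctl -w vm.max_map_count=262144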
