NextCloud with OnlyOffice not opening previously saved documents - docker

OnlyOffice is not opening previously saved documents after a docker-compose down. I needed to increase the memory of the NextCloud instance (Docker container), so I stopped all the containers, modified the docker-compose file and set everything up again.
There are no issues with new documents so far, but when editing previously saved ones OnlyOffice opens a blank document, even though the file sizes are intact (no errors in the console) and NextCloud still shows their sizes in KB.
version: "2.3"
services:
nextcloud:
container_name: nextcloud
image: nextcloud:latest
hostname: MYDOMAIN
stdin_open: true
tty: true
restart: always
expose:
- "80"
networks:
- cloud_network
volumes:
- /mnt/apps/nextcloud/data:/var/www/html
environment:
- MYSQL_HOST=mariadb
- PHP_MEMORY_LIMIT=-1
env_file:
- db.env
mem_limit: 8g
depends_on:
- mariadb
mariadb:
container_name: mariadb
image: mariadb
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --skip-innodb-read-only-compressed
restart: always
networks:
- cloud_network
volumes:
- mariadb_volume:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=SOMEPASSWORD
env_file:
- db.env
onlyoffice:
container_name: onlyoffice
image: onlyoffice/documentserver:latest
stdin_open: true
tty: true
restart: always
networks:
- cloud_network
expose:
- "80"
volumes:
#- /mnt/apps/onlyoffice/data:/var/www/onlyoffice/Data
- office_data_volume:/var/www/onlyoffice/Data
#- onlyoffice_log_volume:/var/log/onlyoffice
- office_db_volume:/var/lib/postgresql
caddy:
container_name: caddy
image: abiosoft/caddy:no-stats
stdin_open: true
tty: true
restart: always
ports:
- 80:80
- 443:443
networks:
- cloud_network
environment:
- CADDYPATH=/certs
- ACME_AGREE=true
# CHANGE THESE OR THE CONTAINER WILL FAIL TO RUN
- CADDY_LETSENCRYPT_EMAIL=MYEMAIL
- CADDY_EXTERNAL_DOMAIN=MYDOMAIN
volumes:
- /mnt/apps/caddy/certs:/certs:rw
- /mnt/apps/caddy/Caddyfile:/etc/Caddyfile:ro
networks:
cloud_network:
driver: "bridge"
volumes:
office_data_volume:
office_db_volume:
mariadb_volume:

Please also note that you must ALWAYS disconnect your users before stopping/restarting your container. See https://github.com/ONLYOFFICE/Docker-DocumentServer#document-server-usage-issues
sudo docker exec onlyoffice documentserver-prepare4shutdown.sh
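A minimal sketch of a safe restart sequence, assuming the container name onlyoffice from the compose file above (per the link above, prepare4shutdown saves cached documents back to storage before the container stops):

sudo docker exec onlyoffice documentserver-prepare4shutdown.sh
sudo docker-compose down
sudo docker-compose up -d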

It seems that every time the containers are recreated, a NextCloud + OnlyOffice setup generates new tokens to authorize access to the documents through headers.
I solved it by adding a third Docker volume to preserve the Document Server configuration files. Fortunately I had a backup of my files; I removed the containers, added them again, and everything is working now.
- office_config_volume:/etc/onlyoffice/documentserver
onlyoffice:
container_name: onlyoffice
image: onlyoffice/documentserver:latest
stdin_open: true
tty: true
restart: unless-stopped
networks:
- cloud_network
expose:
- "80"
volumes:
- office_data_volume:/var/www/onlyoffice/Data
- office_db_volume:/var/lib/postgresql
- office_config_volume:/etc/onlyoffice/documentserver
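To sanity-check that the configuration now survives a re-create, you can inspect the Document Server's settings file (a sketch; local.json under /etc/onlyoffice/documentserver is, as far as I know, where the documentserver image keeps its runtime configuration, including the secret used to sign those authorization headers):

sudo docker exec onlyoffice cat /etc/onlyoffice/documentserver/local.json | grep -A 2 secret

If the same value comes back after a docker-compose down and up -d, previously saved documents should keep opening.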

Related

Persist nifi data and volume

I want my NiFi data and configuration to persist: even if I delete the container and run docker-compose up again, I would like to keep what I have built so far in NiFi. I tried mounting volumes as follows in the volumes section of my docker-compose file; nevertheless it doesn't work and my NiFi processors are not saved. How can I do it correctly? Below is my docker-compose.yaml file.
version: "3.7"
services:
nifi:
image: koroslak/nifi:latest
container_name: nifi
restart: always
environment:
- NIFI_HOME=/opt/nifi/nifi-current
- NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
- NIFI_PID_DIR=/opt/nifi/nifi-current/run
- NIFI_BASE_DIR=/opt/nifi
- NIFI_WEB_HTTP_PORT=8080
ports:
- 9000:8080
depends_on:
- openldap
volumes:
- ./volume/nifi-current/state:/opt/nifi/nifi-current/state
- ./volume/database/database_repository:/opt/nifi/nifi-current/repositories/database_repository
- ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/repositories/flowfile_repository
- ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/repositories/content_repository
- ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/repositories/provenance_repository
- ./volume/log:/opt/nifi/nifi-current/logs
#- ./volume/conf:/opt/nifi/nifi-current/conf
postgres:
image: koroslak/postgres:latest
container_name: postgres
restart: always
environment:
- POSTGRES_PASSWORD=secret123
ports:
- 6000:5432
volumes:
- postgres:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4:4.18
restart: always
environment:
- PGADMIN_DEFAULT_EMAIL=admin
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- 8090:80
metabase:
container_name: metabase
image: metabase/metabase:v0.34.2
restart: always
environment:
MB_DB_TYPE: postgres
MB_DB_DBNAME: metabase
MB_DB_PORT: 5432
MB_DB_USER: metabase_admin
MB_DB_PASS: secret123
MB_DB_HOST: postgres
ports:
- 3000:3000
depends_on:
- postgres
openldap:
image: osixia/openldap:1.3.0
container_name: openldap
restart: always
ports:
- 38999:389
# Mocked source systems
jira-api:
image: danielgtaylor/apisprout:latest
container_name: jira-api
restart: always
ports:
- 8000:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/jira-api.json
pipedrive-api:
image: danielgtaylor/apisprout:latest
container_name: pipedrive-api
restart: always
ports:
- 8100:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/pipedrive-api.yaml
restcountries-api:
image: danielgtaylor/apisprout:latest
container_name: restcountries-api
restart: always
ports:
- 8200:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/restcountries-api.json
volumes:
postgres:
nifi:
openldap:
metabase:
pgadmin:
Using NiFi Registry you can have all the changes you make in NiFi committed to git, i.e. if you change some processor configuration, it will be reflected in your git repo.
As for the flow files, you may need to fix your volume mappings. A sketch of adding a Registry service is below.
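A minimal sketch, assuming the official apache/nifi-registry image on its default port 18080; the container path is assumed from the image's default layout, and the git flow persistence provider still has to be configured in the Registry's providers.xml:

  nifi-registry:
    image: apache/nifi-registry:latest
    container_name: nifi-registry
    restart: always
    ports:
      - 18080:18080
    volumes:
      # persist buckets and versioned flows across container re-creates
      - ./volume/nifi-registry/flow_storage:/opt/nifi-registry/nifi-registry-current/flow_storage

After starting it, add a Registry client in the NiFi UI pointing at http://nifi-registry:18080 and start versioning your process groups; committed changes then land in the git repo configured in providers.xml.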

Why is my app not installed? File not found

This is my docker-compose.yml. I am trying to deploy the app and mysql, and I added a network.
version: '3'
services:
#PHP Service
app:
image: lscr.io/linuxserver/bookstack
container_name: bookstack
restart: unless-stopped
environment:
- DB_HOST=mysql
- DB_USER=quantox
- DB_PASS=****
- DB_DATABASE=bookstackapp
working_dir: /var/www
volumes:
- ./:/var/www
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
ports:
- 6875:80
networks:
- app-network
db:
image: mysql:5.7.22
container_name: mysql
restart: unless-stopped
ports:
- 33060:3306
environment:
- MYSQL_ROOT_PASSWORD=***
- TZ=Europe/Budapest
- MYSQL_DATABASE=bookstackapp
- MYSQL_USER=bookstack
- MYSQL_PASSWORD=****
volumes:
- dbdata:/var/lib/mysql/
- ./mysql/my.cnf:/etc/mysql/my.cnf
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
After I run docker-compose up -d, I get:
Name Command State Ports
-----------------------------------------------------------------------------------------------
bookstack /init Up 443/tcp, 0.0.0.0:6875->80/tcp,:::6875->80/tcp
mysql docker-entrypoint.sh mysqld Up 0.0.0.0:33060->3306/tcp,:::33060->3306/tcp
But in the browser, localhost:6875 shows
File not found.
Why? Both my app and mysql are on the same network. What should I check now?
When using volumes (-v flags) permission issues can arise between the host OS and the container. linuxserver.io images let you avoid this by specifying the user PUID and group PGID the container should run as.
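A minimal sketch of the app service with those variables set, assuming host UID/GID 1000 (PUID, PGID, APP_URL and the /config volume come from the linuxserver.io BookStack documentation; note also that the ./:/var/www bind mount and working_dir from the original file are dropped here, since mounting the project directory over /var/www shadows the files the image ships, which is a likely cause of "File not found"):

  app:
    image: lscr.io/linuxserver/bookstack
    container_name: bookstack
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - APP_URL=http://localhost:6875   # hypothetical; set your real URL
      - DB_HOST=mysql
      - DB_USER=quantox
      - DB_PASS=****
      - DB_DATABASE=bookstackapp
    volumes:
      - ./bookstack_config:/config      # linuxserver images keep app state under /config
    ports:
      - 6875:80
    networks:
      - app-network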

How to use docker volume with docker image from dockerhub

I deploy my project using docker-compose. At the moment, my project is built locally from local project folders. Docker runs a Laravel service, a VueJS service, an NGINX service, a MySQL service and a Redis service.
I set up the docker-compose file from a DigitalOcean article. The question is as follows: in the article it was necessary to create volumes that linked my application's local files to the container's files, but what should I do if I want to build my project using DockerHub? After all, I will no longer have the files stored locally.
I tried removing these volumes from the backend and webserver services; the build finishes without errors, but then I cannot reach the server via localhost.
Do I just need to push the image right away with the volume? Here is my compose.yml:
version: '3'
services:
#VueJS Service
frontend:
build: ./frontend
container_name: frontend
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: frontend
working_dir: /var/www/frontend
ports:
- "3000:80"
volumes:
- ./frontend/:/var/www/frontend
- ./frontend/node_modules:/var/www/frontend/node_modules
networks:
- backend-network
#PHP Service
backend:
build: ./backend
container_name: backend
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: backend
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./backend/:/var/www
- ./backend/php/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- backend-network
#Nginx Service
webserver:
image: nginx:alpine
container_name: webserver
restart: unless-stopped
tty: true
ports:
- "443:443"
- "8080:8080"
volumes:
- ./backend/:/var/www
- ./backend/nginx/conf.d/:/etc/nginx/conf.d/
networks:
- backend-network
#MySQL Service
db:
image: mysql:5.7.22
container_name: db
restart: unless-stopped
tty: true
ports:
- "33062:3306"
environment:
MYSQL_DATABASE: qcortex
MYSQL_ROOT_PASSWORD: kc3wcfjk5
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- dbdata:/var/lib/mysql
- ./backend/mysql/my.cnf:/etc/mysql/my.cnf
networks:
- backend-network
#Redis
redis:
image: caster977/redis
restart: unless-stopped
container_name: redis
networks:
- backend-network
#Docker Networks
networks:
backend-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
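In general, when an image is meant to be pulled from Docker Hub, the application code is baked into the image at build time and the source-code bind mounts disappear from the compose file; only data volumes such as dbdata stay. A minimal sketch under that assumption (the image name your-dockerhub-user/backend is hypothetical, and the Dockerfile is illustrative, not your actual one):

# backend/Dockerfile
FROM php:8.1-fpm
WORKDIR /var/www
# bake the application code into the image instead of bind-mounting it
COPY . /var/www
COPY php/local.ini /usr/local/etc/php/conf.d/local.ini

Then in compose.yml the service pulls the pushed image instead of building locally:

  backend:
    image: your-dockerhub-user/backend:latest
    container_name: backend
    restart: unless-stopped
    networks:
      - backend-network

The webserver still needs its nginx config; either bake that into a custom nginx image with COPY as well, or keep that one small bind mount.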

Docker Compose - network_mode: service: - 2 containers can't talk to each other

Over the last couple of weeks I have been getting to grips with Docker and decided to teach myself. I'm getting a fairly good grasp of it now and have just ventured out into using docker-compose.
I have a docker-compose yml file that I found online and have been modifying to my needs.
Basically this docker-compose file runs Sonarr, Radarr, Transmission, OpenVPN and NGINX.
Sonarr and Radarr use service:vpn as their network so that their traffic passes through the VPN.
NGINX is used as a reverse proxy to reach Sonarr and Radarr at http://radarr:7878 & http://sonarr:8989 via "sonarr.myredacteddomain/com" and "radarr.myredacteddomain/com", which is working great.
NGINX uses the links: entries to reach sonarr and radarr, as you can see from my yml below. The problem I'm now having is that I need Radarr and Sonarr to be able to talk to Transmission so that they both know where my download client is located. If in Sonarr I enter transmission:9091 or transmission.myredacteddomain/com, Sonarr/Radarr can't see it.
Looking at these containers, Radarr and Sonarr don't seem to have an IP address, so how is it that NGINX can see them via radarr:7878 etc. but Sonarr and Radarr can't see Transmission in the same way? Can anyone help me with my yml file here?
version: '3.0'
networks:
default:
ipam:
driver: default
services:
jackett:
image: linuxserver/jackett
depends_on:
- vpn
restart: always
network_mode: "service:vpn"
environment:
PGID: 1000
PUID: 1000
TZ: Europe/London
volumes:
- /mnt/user/config/jackett:/config
- /mnt/user/media/downloads/jackett:/downloads
transmission:
image: linuxserver/transmission:48
depends_on:
- vpn
environment:
TZ: 'Europe/London'
PGID: 1000
PUID: 1000
network_mode: "service:vpn"
tmpfs:
- /tmp
restart: unless-stopped
stdin_open: true
tty: true
volumes:
- /mnt/user/config/transmission:/config
- /mnt/user/media/downloads:/downloads
radarr:
image: linuxserver/radarr
depends_on:
- vpn
restart: always
network_mode: "service:vpn"
environment:
PGID: 1000
PUID: 1000
TZ: Europe/London
volumes:
- /mnt/user/config/radarr:/config
- /mnt/user/media/downloads/complete:/downloads
- /mnt/user/media/movies:/movies
sonarr:
image: linuxserver/sonarr
depends_on:
- vpn
restart: always
network_mode: "service:vpn"
environment:
PGID: 1000
PUID: 1000
TZ: Europe/London
volumes:
- /mnt/user/config/sonarr:/config
- /mnt/user/media/downloads/complete:/downloads
- /mnt/user/media/tvshows:/tv
vpn:
image: dperson/openvpn-client
sysctls:
- net.ipv6.conf.all.disable_ipv6=0
cap_add:
- net_admin
dns:
- 8.8.4.4
- 8.8.8.8
environment:
TZ: 'Europe/London'
read_only: true
tmpfs:
- /tmp
restart: unless-stopped
security_opt:
- label:disable
stdin_open: true
tty: true
volumes:
- /dev/net:/dev/net:z
- /mnt/user/config/vpn:/vpn
web:
image: nginx
depends_on:
- transmission
- sonarr
- jackett
- radarr
environment:
TZ: 'Europe/London'
IPV6: 0
links:
- vpn:transmission
- vpn:jackett
- vpn:radarr
- vpn:sonarr
ports:
- "80:80"
- "443:443"
read_only: true
volumes:
- /mnt/user/config/nginx:/etc/nginx/conf.d:ro
- /mnt/user/config/nginx/ssl:/etc/nginx/ssl:ro
tmpfs:
- /run
- /tmp
- /var/cache/nginx
restart: unless-stopped
stdin_open: true
tty: true
Managed to work it out!
It turns out that containers using network_mode: "service:vpn" share the IP address (the whole network stack) of the vpn container, so they reach each other on localhost rather than by container name.
Solved my own problem!
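Concretely, assuming the compose file above, in Sonarr's and Radarr's download client settings Transmission is reached through the shared stack:

Host: localhost
Port: 9091

Anything running with network_mode: "service:vpn" talks to its siblings as if they were processes on the same machine, which is also why those containers show no address of their own on the compose network.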

docker container network access when using vpn

Expected Result:
Containers can access each other through their hostnames or the host computer's IP.
Actual Result:
When using network_mode I can't make any changes such as a static IP or links to other containers.
Description:
I've got a couple of containers all using --net (network_mode) pointed at an openvpn container. As single instances they work, and with an nginx proxy I can access each one from any computer.
However, the containers can't find each other except by local IP (172.19.0.x). I could use that, but what happens on host reboot? Will the IP addresses change?
docker-compose.yml
version: '3.4'
services:
vpn:
image: dperson/openvpn-client
container_name: vpn
cap_add:
- net_admin
networks:
- default
tmpfs:
- /tmp
restart: unless-stopped
security_opt:
- label:disable
stdin_open: true
tty: true
volumes:
- ../openvpn:/vpn
- /dev/net:/dev/net:z
environment:
- DNS='8.8.4.4 8.8.8.8'
- FIREWALL="1"
- TZ='Europe/Stockholm'
command: -f ""
networks:
- default
proxy:
image: nginx
container_name: proxy
environment:
TZ: 'Europe/Stockholm'
ports:
- "6003:8989" # sonarr
- "6004:7878" # radarr
- "6001:8112" # deluge
- "6002:9117" # jackett
depends_on:
- sonarr
- radarr
- deluge
- jackett
links:
- vpn:sonarr
- vpn:radarr
- vpn:deluge
- vpn:jackett
networks:
- default
volumes:
- ../nginx/default.conf:/etc/nginx/conf.d/default.conf
restart: always
command: "nginx -g 'daemon off;'"
sonarr:
image: linuxserver/sonarr
container_name: sonarr
volumes:
- ../sonarr:/config
- /etc/localtime:/etc/localtime:ro
- /media/megadrive/Media/Series:/tv
- /media/megadrive/Media/tmp/completed:/downloads
env_file: ../uidgid.env
network_mode: "service:vpn"
environment:
- TZ='Europe/Stockholm'
cap_add:
- net_admin
depends_on:
- vpn
restart: always
radarr:
image: linuxserver/radarr
container_name: radarr
volumes:
- ../radarr:/config
- /media/megadrive/Media/Movies:/movies
- /media/megadrive/Media/tmp/completed:/downloads
- /etc/localtime:/etc/localtime:ro
env_file: ../uidgid.env
network_mode: "service:vpn"
environment:
- TZ='Europe/Stockholm'
cap_add:
- net_admin
depends_on:
- vpn
restart: always
deluge:
image: linuxserver/deluge
container_name: deluge
depends_on:
- vpn
network_mode: "service:vpn"
volumes:
- ../deluge:/config
- /media/megadrive/Media/tmp/:/downloads
- /etc/localtime:/etc/localtime:ro
restart: always
env_file: ../uidgid.env
environment:
- TZ='Europe/Stockholm'
jackett:
container_name: jackett
image: linuxserver/jackett
restart: unless-stopped
network_mode: "service:vpn"
env_file: ../uidgid.env
environment:
- TZ='Europe/Stockholm'
volumes:
- ../jackett:/config
- /media/megadrive/Media/tmp/blackhole:/downloads
networks:
default:
It seems that letting the vpn service use host networking instead of bridge (the default) solves a couple of things:
Everything works on the host computer's IP; as long as every service is on its own port this is okay.
Services still seem to follow the OpenVPN rules.
No more need for nginx to proxy to the web GUIs.
vpn:
image: dperson/openvpn-client
container_name: vpn
cap_add:
- net_admin
tmpfs:
- /tmp
restart: unless-stopped
security_opt:
- label:disable
stdin_open: true
tty: true
volumes:
- ../openvpn:/vpn
- /dev/net:/dev/net:z
environment:
- DNS='8.8.4.4 8.8.8.8'
- FIREWALL="1"
- TZ='Europe/Stockholm'
command: -f ""
network_mode: "host"
The other option is that the services in the vpn use localhost to access each other. Since they share the network stack of the vpn container, they are accessed as if they were on the same host. This one had me stumped for a while this week.
One comment: you've got net_admin on all your containers; you only need it on the vpn.
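A sketch of the trimmed sonarr service under that comment (everything else stays as above; the same applies to radarr, deluge and jackett):

  sonarr:
    image: linuxserver/sonarr
    container_name: sonarr
    network_mode: "service:vpn"   # shares the vpn container's network stack
    # no cap_add here: only the vpn container needs net_admin to create the tunnel
    depends_on:
      - vpn
    restart: always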
