Trying to install Nextcloud on RPi4.
I'm getting the error below when trying to install Nextcloud on a Raspberry Pi 4 running Buster:
Initializing nextcloud 23.0.4.1 ...
touch: setting times of '/var/www/html/nextcloud-init-sync.lock': Operation not permitted
Initializing nextcloud 23.0.4.1 ...
Another process is initializing Nextcloud. Waiting 10 seconds...
My docker-compose.yml looks like this:
version: '2'

services:
  db:
    image: yobasystems/alpine-mariadb:latest
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - /nextcloud:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=YOURROOTPASSWORD
      - MYSQL_PASSWORD=YOURPASSWORD
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    ports:
      - 8181:80
    links:
      - db
    volumes:
      - /nextcloud:/var/www/html
    restart: always
Please help!
Remove /var/www/html/nextcloud-init-sync.lock to unlock the installation process
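For example, with the host bind mount used in the compose file above, the lock file can be removed directly on the host (path assumed from that file; adjust if your mount differs):

sudo rm /nextcloud/nextcloud-init-sync.lock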
I had the same issue and was able to fix it by mounting /var/www/html to a separate nextcloud volume. At the same level as services, add this:
volumes:
  nextcloud:
In your app service's volumes, set the volume like this:
- nextcloud:/var/www/html
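Putting the two fragments together, the relevant parts of the compose file would look something like this (a sketch based on the file from the question; the db service stays as it was):

services:
  app:
    image: nextcloud
    ports:
      - 8181:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always

volumes:
  nextcloud: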
Recently I had the same problem with nextcloud 25.0.3.2 on Raspberry Pi 4 and did some research.
This causes the problem:
Unfortunately Raspbian uses some very old packages. There is nothing we can fix in our image. 😕
Source: https://github.com/nextcloud/docker/issues/1589#issuecomment-923371168
There is a workaround: giving extended privileges to the nextcloud container.
I did another investigation and deleted the lock file many times. After some time I found out that if I run the nextcloud container as privileged, the error touch: setting times of '/var/www/html/nextcloud-init-sync.lock': Operation not permitted does not happen again and I could upgrade to 23.0.4.
Source: https://github.com/nextcloud/docker/issues/1742#issuecomment-1133837814
But beware:
The --privileged flag gives all capabilities to the container. When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
Source: https://docs.docker.com/engine/reference/run/
tl;dr: Give extended privileges to the nextcloud container:
...
app:
  image: nextcloud
  privileged: true
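Note that the flag only takes effect once the container is recreated, e.g. with:

docker-compose up -d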
Related
I've been using a docker-compose.yml file to set up a basic/simple instance of NiFi. Last week my NiFi instance was working perfectly fine, and I haven't changed anything in my NiFi docker-compose file.
I updated both the browser and Docker Desktop on Monday, and the problem has been there ever since. However, my coworker has also tried running the Docker Compose file and has had the same issue.
When I run docker compose up on the docker-compose.yml file, there are no issues in the container logs, and it seems the container is running perfectly fine. When I try to access https://localhost:8443/nifi, Firefox returns the following message:
An error occurred during a connection to 127.0.0.1:8443. PR_END_OF_FILE_ERROR
I've tried different browsers; both Chrome and Edge return the following message:
This site can’t be reached localhost unexpectedly closed the connection.
I've also tried restarting my computer, Docker Desktop, and even the containers, but nothing solved this issue. Here are my docker-compose.yml file contents:
version: '3'

services:
  nifi:
    cap_add:
      - NET_ADMIN # low port bindings
    image: apache/nifi
    container_name: nifi
    ports:
      - "8080:8080/tcp" # HTTP interface
      - "8443:8443/tcp" # HTTPS interface
      - "514:514/tcp"   # Syslog
      - "514:514/udp"   # Syslog
      - "2055:2055/udp" # NetFlow
    environment:
      - SINGLE_USER_CREDENTIALS_USERNAME=admin
      - SINGLE_USER_CREDENTIALS_PASSWORD=password1234
    volumes:
      - ../../nifi/drivers:/opt/nifi/nifi-current/drivers
      - ../../nifi/certs:/opt/certs
      - ./output:/opt/nifi/nifi-current/ls-target
      - nifi-conf:/opt/nifi/nifi-current/conf
    restart: unless-stopped

  nifi-registry:
    image: apache/nifi-registry
    container_name: nifi-registry
    ports:
      - "18080:18080/tcp" # HTTP interface
    restart: unless-stopped
Not sure what my next steps should be. I have followed the instructions on this site (https://kinsta.com/knowledgebase/pr-end-of-file-error/) but no luck. I feel as if it must be something with Docker Desktop or the container that's causing a cert issue in the browser, since both my coworker and I are having this problem.
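A couple of generic checks that may help narrow this down (illustrative only; the container name comes from the compose file above):

# confirm NiFi finished starting and bound to port 8443
docker logs nifi | tail -n 50

# bypass the browser and look at the TLS handshake directly
curl -vk https://localhost:8443/nifi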
When running Corda in Docker with an external PostgreSQL DB configuration, I get an "insufficient privileges to access" error.
Note:
Corda: 4.6
PostgreSQL: 9.6
Docker engine: 20.10.6
docker-compose: version 1.29.1, build c34c88b2
docker-compose.yml file:
version: '3.3'

services:
  partyadb:
    hostname: partyadb
    container_name: partyadb
    image: "postgres:9.6"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: partyadb
    ports:
      - 5432

  partya:
    hostname: partya
    # image: corda/corda-zulu-java1.8-4.7:RELEASE
    image: corda/corda-zulu-java1.8-4.6:latest
    container_name: partya
    ports:
      - 10006
      - 2223
    command: /bin/bash -c "java -jar /opt/corda/bin/corda.jar run-migration-scripts -f /etc/corda/node.conf --core-schemas --app-schemas && /opt/corda/bin/run-corda"
    volumes:
      - ./partya/node.conf:/etc/corda/node.conf:ro
      - ./partya/certificates:/opt/corda/certificates:ro
      - ./partya/persistence.mv.db:/opt/corda/persistence/persistence.mv.db:rw
      - ./partya/persistence.trace.db:/opt/corda/persistence/persistence.trace.db:rw
      # - ./partya/logs:/opt/corda/logs:rw
      - ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
      - ./shared/cordapps:/opt/corda/cordapps:rw
      - ./shared/drivers:/opt/corda/drivers:ro
      - ./shared/network-parameters:/opt/corda/network-parameters:rw
    environment:
      - ACCEPT_LICENSE=${ACCEPT_LICENSE}
    depends_on:
      - partyadb
Error:
[ERROR] 12:41:24+0000 [main] internal.NodeStartupLogging. - Exception during node startup. Corda started with insufficient privileges to access /opt/corda/additional-node-infos/nodeInfo-5B........................................47D
The corda/corda-zulu-java1.8-4.6:latest image runs under the user corda, not root. This user has user id 1000, and also is in a group called corda, also with gid 1000:
corda@5bb6f196a682:~$ id -u corda
1000
corda@5bb6f196a682:~$ groups corda
corda : corda
corda@5bb6f196a682:~$ id -G corda
1000
The problem here seems to be that the file you are mounting into the Docker container (./shared/additional-node-infos/nodeInfo-5B) does not have permissions set up in such a way as to allow this user to access it. I'm assuming the user needs read and write access. A very simple fix would be to give others read and write access to this file:
$ chmod o+rw ./shared/additional-node-infos/nodeInfo-5B
There are plenty of other ways to manage this kind of permissions issue in docker, but remember that the permissions are based on uid/gid which usually do not map nicely from your host machine into the docker container.
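For instance, instead of opening the file up to everyone, the mounted path could be re-owned on the host to match the container's uid/gid (1000:1000, as shown above). A sketch, assuming you can run it with sudo:

sudo chown -R 1000:1000 ./shared/additional-node-infos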
So the error itself describes that it's a permission problem.
I don't know if you crafted this compose file yourself; you may want to take a look at generating the node containers with the Dockerform task (https://docs.corda.net/docs/corda-os/4.8/generating-a-node.html#use-cordform-and-dockerform-to-create-a-set-of-local-nodes-automatically).
This permission problem could be because you're only setting read/write within the container:
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
or it could be that you need to change the permissions on the shared folder. Try changing the permissions of shared to 777 and see if that works, then restrict your way back down to permissions you're comfortable with.
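Something like the following (a quick diagnostic step only; 777 is world-writable and shouldn't be left in place):

chmod -R 777 ./shared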
I just configured the image to run as root. This works but may not be safe. Simply add
services:
  cordaNode:
    user: root
to the service configuration.
Ref: How to configure docker-compose.yml to up a container as root
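Applied to the compose file from the question, that would look something like this (sketch; the rest of the service is unchanged):

services:
  partya:
    user: root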
So for some reason, I'd like to use a docker:dind inside a docker-compose.yml.
I know that the "easy" way is to mount the socket directly inside the container (like this: /var/run/docker.sock:/var/run/docker.sock), but I want to avoid that for security reasons.
Here is my experimental docker-compose.yml:
version: '3.8'

services:
  dind:
    image: docker:19.03.7-dind
    container_name: dind
    restart: unless-stopped
    privileged: true
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - dind-certs-ca:/certs/ca
      - dind-certs-client:/certs/client
    networks:
      - net
    expose:
      - 2375
      - 5000

volumes:
  dind-certs-ca:
  dind-certs-client:

networks:
  net:
    driver: bridge
Nothing complex here. Then I check whether the service is set up correctly:
docker logs dind
No problem here; it is up and running.
However, once I try to use it with, for instance:
docker run --rm -it --network net --link dind:docker docker version
I get the following error:
Cannot connect to the Docker daemon at tcp://docker:2375. Is there a daemon running?
Do you have any idea why the daemon is not responding?
---------------------------------------------------------- EDIT ----------------------------------------------------------
Following hariK's comment (thanks, by the way) I added port 2376 to the exposed ones. I think I'm near solving my issue. Here is the error that I get:
error during connect: Get http://docker:2375/v1.40/version dial tcp: lookup on docker on [ip]: no such host
So I looked at this error and found that it seems to be a recurrent one with dind versions (there are a lot of issues about it on GitLab, like this one). There is also a post on Stack Overflow about a similar issue for GitLab here.
For the workaround I tried :
Putting this value DOCKER_TLS_CERTDIR: "" hoping to turn off TLS ... but it failed.
Downgrading the version to docker:18.05-dind. It actually worked, but I don't think it's a good move to make.
If someone has an idea to keep TLS on and make it work, that would be great :) (I'll still be looking on my own, but if you can give a nudge with interesting links it would be cool ^^)
To use Docker with TLS disabled (i.e. TCP port 2375 by default), unset the DOCKER_TLS_CERTDIR variable in your dind service definition in Docker Compose, like:
dind:
  image: docker:dind
  container_name: dind
  privileged: true
  expose:
    - 2375
  environment:
    - DOCKER_TLS_CERTDIR=
(NB: do not initialize it to any value like '' or "")
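With TLS off, a client on the same network can then reach the daemon over plain TCP on port 2375; for example, reusing the net network from the question:

docker run --rm --network net -e DOCKER_HOST=tcp://dind:2375 docker version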
So I found a solution: I added a registry with TLS options to the basic docker-compose file.
I first had to generate the certs and then mount them correctly.
If any of you run into a similar issue, I made a GitHub repo with the docker-compose file and the command lines for the certs.
Some time later, I was looking for the same thing.
Here is an example with specific versions for the images, which should still work a few years from now:
version: '3'

services:
  docker:
    image: docker:20.10.17-dind-alpine3.16
    privileged: yes
    volumes:
      - certs:/certs

  docker-client:
    image: docker:20.10.17-cli
    command: sh -c 'while [ 1 ]; do sleep 1; done'
    environment:
      DOCKER_HOST: tcp://docker:2376
      DOCKER_TLS_VERIFY: 1
      DOCKER_CERT_PATH: /certs/client
    volumes:
      - certs:/certs

volumes:
  certs:
The TLS certificates are generated by the "docker" service on startup and shared using a volume.
Use the client as follows:
docker-compose exec docker-client sh
# now within the docker-client container
docker run hello-world
I have a problem mounting a WD MyCloud EX2 NAS as an NFS share for a Nextcloud and MariaDB container combination, using Docker Compose. When I run docker-compose up -d, here's the error I get:
Creating nextcloud_app_1 ... error
ERROR: for nextcloud_app_1 Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: for app Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: Encountered errors while bringing up the project.
Here's docker-compose.yml (all sensitive info replaced with <brackets>):
version: '2'

volumes:
  nextcloud:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.73,rw
      device: ":/mnt/HD/HD_a/nextcloud"
  db:

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<****>
      - MYSQL_PASSWORD=<****>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - NEXTCLOUD_ADMIN_USER=<****>
      - NEXTCLOUD_ADMIN_PASSWORD=<****>

  app:
    image: nextcloud
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always
I SSHd into the NAS box to check /etc/exports and sure enough, it was using all_squash, so I changed that.
Here's the /etc/exports file on the NAS box:
"/nfs/nextcloud" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
"/nfs/Public" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
Then, I refreshed the service with exportfs -a
Nothing changed - docker-compose throws the same error. And I'm deleting all containers and images and redownloading the image every time I attempt the build.
I've read similar questions and done everything I can think of. I also know this is a container issue because I can access the NFS share quite happily from the command line, thanks to my settings in /etc/fstab.
What else should I be doing here?
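One generic way to verify what the NAS is actually exporting, assuming the NFS client utilities are installed on the Docker host (the address is the addr from the compose file):

showmount -e 192.168.1.73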
In our case, we are mounting the NFS volume locally on the Docker host, then mounting the folder inside the containers.
We are running Oracle Linux 7, with SELinux enabled.
We fixed it by adding the following parameter inside /etc/fstab in the fs_mntops block (see https://man7.org/linux/man-pages/man5/fstab.5.html):
defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"
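A complete fstab entry would then look something like this (the mount point and paths are illustrative, based on the share from the question):

192.168.1.73:/nfs/nextcloud  /mnt/nextcloud  nfs  defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"  0  0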
Try checking the Nextcloud folder from the command line with ls -l /var/www/html to see which users and groups can access it.
I fixed it by removing the anonuid=501,anongid=1000 entries in the NAS box's /etc/exports file. I had also managed to enter the wrong IP - the NAS box wasn't granting access to the Ubuntu computer that was trying to connect to it.
I am running a Java app inside a Docker container which is supposed to connect to MySQL in another container. I've tried multiple options suggested in the forums, but nothing really works. Here is my Docker Compose file:
version: "3"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: app1
environment:
- DB_HOST=Imrans-MacBook-Pro.local
- DB_PORT=3306
ports:
- 8080:8080
networks:
- backend
depends_on:
- mysql
mysql:
image: mysql:5.7.20
hostname: mysql
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=app1
ports:
- 3306:3306
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
networks:
- backend
networks:
backend:
driver: bridge
Where DB_HOST=Imrans-MacBook-Pro.local is my laptop's name. This did not work. Some suggest that the container name can be used, so I tried DB_HOST=mysql; that never worked either.
The only thing that works, from time to time, is passing the laptop's IP address, which is not what I want to do. So, what is a good way to create communication between those containers?
MySQL is running in a container, so there are two things that you should consider here:
1. If MySQL is running in a container then you will need to link the app container to the mysql container. This will allow them to talk to each other using Docker's inter-container communication. The containers talk to each other using hostnames to resolve their respective internal IP addresses. Later in my answer I will show you how to get the two containers to communicate with each other using a compose file.
2. The mysql container should make use of a Docker volume to store the database. This will allow you to store the database and related files on the file system of the host (the server or machine where the containers are running). The Docker volume will then be mounted as a directory in the container, so the container can read and write to a directory on the machine where the containers are running. This means that even if the containers are all deleted or removed, the database data will still persist (a minimal volume sketch follows the link below). Here is a nice beginner-friendly article on Docker volumes and using them with MySQL:
https://severalnines.com/blog/mysql-docker-containers-understanding-basics
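For example, persisting MySQL's data directory with a named volume might look like this (a sketch; the volume name mysql-data is illustrative):

services:
  mysql:
    image: mysql:5.7.20
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data: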
Container communication using only docker without compose:
You have containers "app" and "mysql". You want to be able to access "app" on localhost, and you want "app" to be able to connect to mysql. How are you going to do this?
1. You need to expose a port for container "app" so we can access it on localhost. The docker containers have their own internal network and it is closed to you unless you expose some ports with docker.
2. You need to link the "mysql" container to "app" without exposing "mysql"'s ports to the rest of the world.
This config should work for what you want to achieve:
version: "2"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: app1:latest
links:
- mysql
environment:
- DB_HOST=mysql
# This is the hostname that app will reach the mysql container on.
# If you do with app container:
# docker exec -it <app container id> bash
# # apt-get update -y && apt-get install iputils-ping -y
#
# Then you should be able to ping mysql container with:
#
# # ping -c 2 mysql
- DB_PORT=3306
ports:
- 8080:8080
# You will access "app" on localhost:8080 in your browser. If this is running on your own machine.
mysql: #hostname actually gets set here so no need to set it later
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=app1
# Remember to use a volume if you would like this container's data to persist or if you would like
# to restore a database backup.
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
Now you can just start it up with:
$ docker-compose up
If you ran this before then just make sure to run this first before running docker-compose up:
$ docker-compose down
Let me know if that helps.
I have, in the past, gotten this to work without explicitly setting the host networking part in Docker Compose. Because containers in a Docker Compose file are placed into a shared Docker network with each other, you really shouldn't have to do anything to get this to work: by default you should be able to attach into the container for your Spring app, ping mysql, and have it work out.
DB host should be localhost or 127.0.0.1