I am running Docker on my Raspberry Pi. I have added a CIFS volume that is on my Synology NAS.
I am able to deploy my container instance of Nextcloud with /var/www/html linked to the CIFS volume.
A note on the setup of my volume: I created the volume through the Portainer instance that I am running.
When I try to access Nextcloud on my network, the page loads but an error message is displayed. The message states that it "Can't write into config directory!"
How can I fix this error?
What I am trying to do is get Nextcloud running on my Pi but use my NAS for the storage of the data.
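For reference, the CLI equivalent of the volume I created in Portainer should look roughly like this (the share host, path, and credentials are placeholders; I believe the uid=33/gid=33 options map the mount to www-data in the Nextcloud image, which matters for write permissions):

# create a CIFS-backed volume with Docker's built-in local driver
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//synology.local/nextcloud \
  --opt o=addr=synology.local,username=myuser,password=mypass,vers=3.0,uid=33,gid=33 \
  nextcloud_html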
I'm fairly new to the Docker container world and I'm trying to move my Nextcloud server to a container.
I can deploy it successfully in a test environment, but I'm trying to map an external HDD that will eventually contain all of the data (profiles/pics/data/etc.) as it does on my current server.
My current setup is an Ubuntu Server 20.04.1 and Nextcloud 18 with an external HDD mounted for storage.
So far I haven't been able to map the external drive.
Can anyone provide any insights?
Regards!
To help you specifically, more information is required, such as which Docker image you are using and how you are deploying your container. Also, this might be a question for https://serverfault.com/
The general concepts of "mounting" parts of a filesystem into a container are described at Docker Volumes and Bind Mounts.
Suppose your hard drive is mounted at /mnt/usb on the host; you could then access it within a Docker container at /opt/usb when the container is started like this:
docker run -i -t -v /mnt/usb:/opt/usb ubuntu /bin/bash
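Applied to your use case, a minimal sketch might look like this (assuming the official nextcloud image; the host port and paths are placeholders):

# map the external HDD over Nextcloud's data directory
docker run -d -p 8080:80 \
  -v /mnt/usb/nextcloud-data:/var/www/html/data \
  --name nextcloud \
  nextcloud

After that, open http://localhost:8080 to run the installer; uploaded files should then land on the external drive.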
I have two EC2 servers and I wanted to create a volume from AWS EBS that would be available to both servers, so I used the REX-Ray plugin for this.
The steps I took:
Install the plugin:
docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=* EBS_SECRETKEY=*
Create the volume:
docker volume create -d rexray/ebs --name mongo_vol -o=volumeType=io1 -o=size=100 -o=iops=100
When I ran docker volume ls on the first EC2 server, it showed this:
DRIVER              VOLUME NAME
rexray/ebs:latest   External MongoDB Data
rexray/ebs:latest   MySQL
rexray/ebs:latest   Private MongoDB
rexray/ebs:latest   mongo_vol
But when I ran docker volume ls on my second server, it showed this:
DRIVER              VOLUME NAME
local               mongo_vol
The volume name shows up on both servers, but on the second one the driver is local instead of rexray/ebs, even though I have not changed any driver settings.
I could not find anything related to this on the internet when doing my research.
Can anyone give me an idea of how to solve this?
I had an issue like this. REX-Ray makes EBS accessible to both servers, but I think you have installed REX-Ray on only one server.
Install REX-Ray on your other server as well.
That alone won't fix your issue, though. Next:
Remove the local-driver volume on your other server.
Before removing the volume, make a backup or snapshot of it, just in case.
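A minimal sketch of those steps on the second server (credential values elided, as in the question):

# install the REX-Ray plugin on the second server
docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=* EBS_SECRETKEY=*
# remove the stale local-driver volume (after taking a snapshot)
docker volume rm mongo_vol
# the rexray/ebs volumes should now be listed
docker volume ls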
EBS volumes can only be attached to one EC2 instance at a time. If you need storage that is accessible to both servers simultaneously, you can use EFS and the REX-Ray EFS driver.
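A rough sketch of that approach (access keys elided as above; the security group ID is a placeholder, and the exact option names should be checked against the REX-Ray docs):

docker plugin install rexray/efs EFS_ACCESSKEY=* EFS_SECRETKEY=* EFS_SECURITYGROUPS=sg-0123456789
docker volume create -d rexray/efs --name shared_vol

Since EFS is NFS-based, a volume created this way can be mounted by containers on both instances at the same time.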
I created an ASP.NET Web Forms project in Visual Studio with Docker support (Windows). When I run the project using Visual Studio, the page comes up fine.
Visual Studio creates a Docker image, which I can see using the command
docker images
The image is named webapplication3.
Now I run another instance of the image (webapplication3) with the command
docker run webapplication3:dev
I can see the container running with
docker ps
But when I access this new running container using the IP address http://172.17.183.118/PageA.aspx, the page does not come up. (I took the IP 172.17.183.118 from the docker inspect command, so it is correct.)
Can someone tell me why I am not able to view the page? Why does it say "Resource cannot be found"?
When you run a Docker container with default settings, the container gets an internal IP address, and its exposed ports are mapped to ports on the local machine; traffic leaves through the Docker bridge, which is associated with the local machine's network interface.
When you access the container from the local machine itself, you just need to use localhost with the mapped port. In your case, that means the address http://localhost:62774/PageA.aspx. If you want to access the container from the Internet, you should use your local machine's public IP address with that port, i.e. http://your-local-machine-public-ip:62774/PageA.aspx.
You can get more details from the Docker networking documentation. Also, I suggest you run the container with an explicit port mapping of your choosing, like docker run -d -p hostPort:containerPort --name containerName yourImage.
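For instance (host port 8080 and the container name webapp3 are arbitrary choices; port 80 assumes the site listens on the default HTTP port inside the container):

docker run -d -p 8080:80 --name webapp3 webapplication3:dev

After that, the page should be reachable at http://localhost:8080/PageA.aspx.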
I am running Docker Swarm with 3 manager and 3 worker nodes.
On this swarm, I have an Elasticsearch container which reads data from multiple log files and then writes the data into a directory. Later it reads data from this directory and shows me the logs in a UI.
Now the problem is that I am running only 1 instance of this Elasticsearch container, and if for some reason it goes down, Docker Swarm starts it on another machine. Since I have 6 machines, I have created the particular directory on all of them, but whenever I start the Docker stack, the ES container reads/writes the directory on whichever machine it happens to be running on.
Is there a way that we can:
Force Docker Swarm to run a container on a particular machine,
or
Map a volume to a shared/network drive?
Both are possible.
Force docker swarm to run a container on a particular machine
Add the --constraint flag when executing docker service create (see the Docker documentation on service placement constraints).
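A minimal sketch (the hostname and image tag are placeholders):

# pin the service to a specific node by hostname
docker service create \
  --name elasticsearch \
  --constraint 'node.hostname == worker-node-1' \
  elasticsearch:7.17.9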
Map volume to shared/network drive
Use a Docker volume with a driver that supports writing files to an external storage system such as NFS or Amazon S3 (see the Docker documentation on volume drivers).
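For example, an NFS-backed volume can be created with the built-in local driver (the server address and export path are placeholders):

docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=nfs-server.local,rw \
  --opt device=:/export/elasticsearch \
  es_data

In a swarm stack file you would declare the same driver options under the volumes: section so that each node can create the volume on demand.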
I just installed Docker Toolbox on my Windows 10 PC.
The LAMP server is working fine, but I just wanted to know how I can access the www folder which was created by the Linode LAMP container.
It is accessible via the terminal, but how can I access it in a file browser so that I can create HTML files and run them?
I want to know how to access the /var/www folder that they reference in their tutorials on installing LAMP.
I tried creating a file in the Docker terminal using touch, but I could not access it.
Using docker volume ls you can find which volumes are being used by the container. You can then find the location of a volume using docker inspect <volume_name>.
OR
You can inspect the container using docker inspect <container_name>. This will list the details of the container, and there you will find the paths which are being used as volumes or mounts.
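As a quick sketch (the volume and container names here are hypothetical):

# show the host-side path behind a volume
docker volume inspect lamp_www --format '{{ .Mountpoint }}'
# or list all mounts of a running container
docker inspect lamp --format '{{ json .Mounts }}'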
Usually on Windows, the files of internal Docker volumes are stored in C:\Users\Public\Documents\Hyper-V\Virtual hard disks.