Nextcloud Container and external hdd - docker

I'm fairly new to the Docker container world and I'm trying to move my Nextcloud server into a container.
I can deploy it successfully in a test environment, but I'm trying to map an external HDD that will eventually contain all of the data (profiles/pics/data/etc.) as it does on my current server.
My current setup is an Ubuntu Server 20.04.1 and Nextcloud 18 with an external HDD mounted for storage.
So far I haven't been able to map the external drive.
Can anyone provide any insights?
Regards!

To help you specifically, more information is required, such as which Docker image you are using and how you are deploying your container. Also, this might be a question for https://serverfault.com/
The general concepts of "mounting" parts of a filesystem into a container are described at Docker Volumes and Bind Mounts.
Suppose your hard drive is mounted at /mnt/usb on the host; you could then access it within a Docker container at /opt/usb when the container is started like this:
docker run -i -t -v /mnt/usb:/opt/usb ubuntu /bin/bash
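For the Nextcloud case specifically, here is a minimal sketch using the same idea with the official nextcloud image. It assumes the HDD is mounted at /mnt/usb on the host; the subdirectory name and published port are placeholders you would adjust:

```shell
# Hypothetical example: bind-mount a directory on the external HDD as
# Nextcloud's data directory (/var/www/html/data in the official image).
docker run -d \
  --name nextcloud \
  -p 8080:80 \
  -v /mnt/usb/nextcloud-data:/var/www/html/data \
  nextcloud
```

Make sure the mounted directory is writable by the web server user inside the container, otherwise Nextcloud will refuse to use it as its data directory.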

Related

Update Docker Images via dockerized Jenkins Job

I run some docker containers on my Synology NAS. Now I also run Jenkins via Docker on the NAS and want to create a job that does the following steps:
Stop all Docker Containers
Delete all unnecessary stuff (-> docker system prune)
Rebuild all Docker images
Run the new Docker image
But I don't know how to access the host system from the dockerized Jenkins. SSH to the host doesn't seem to be a good idea.
Do you have any tips?
The whole point of your Docker images is to run in an isolated sandbox, so it's by design that your image doesn't have access to the native system. SSH is one approach, but risky, as you point out.
A better approach is to set the DOCKER_HOST environment variable to point to the IP of the NAS (which might need to be the virtual network NAS address). You will probably need to experiment a bit with getting the correct address and making sure the hosted docker command has permissions to drive the host's Docker service.
This post in the Synology Forums may get you on the right track.
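As a sketch of the DOCKER_HOST approach: inside the Jenkins container, point the docker CLI at the host's daemon instead of a local one. The IP address and port below are placeholders (the daemon must be configured to listen on TCP, which on an unprotected network is a security risk worth weighing):

```shell
# Hypothetical: drive the NAS's Docker daemon from inside the Jenkins container.
export DOCKER_HOST=tcp://192.168.1.10:2375

docker ps                      # lists containers running on the NAS itself
docker stop $(docker ps -q)    # step 1: stop all running containers
docker system prune -f         # step 2: delete all unnecessary stuff
```

An alternative that avoids exposing a TCP port is bind-mounting /var/run/docker.sock into the Jenkins container, with the caveat that this effectively grants the container root-level control of the host.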

How to persist my appsettings.json to host machine from Docker container and mount that single file

I have an ASP.NET Core application that has been containerized and is running properly. The value of the ConnectionStrings node in the appsettings.json file is saved based on the value entered from the web interface when the application is set up, and this file shouldn't be overwritten when the application is updated after the initial setup, since, as you know, containers are ephemeral. I have a SetUpController for doing this and everything works well; the reason for doing it this way is that the application runs on premises for most of our clients and they have different setups.
My question is how to ensure the appsettings.json isn't overwritten when the container is updated. I want to be able to copy the file to a host volume after the application is successfully set up, and mount the appsettings.json from the host volume the next time the application runs. I am using a Linux container on Windows with Docker Toolbox, because the system where the application runs can't install Docker for Windows.
How do I mount an appsettings.json stored on the host machine any time the application runs?
I have checked online for ideas but none of them seems to work.
So basically, since I am using Docker Toolbox for Windows, the only host folder available to it by default is /c/Users (you can check this by opening VirtualBox -> Settings -> Shared Folders), but I was trying to mount /c/app, which the VM doesn't have access to. So I changed my run command to this:
docker run -d --name=containername -p 80:80 -v /c/Users/appsettings.json:/app/appsettings.json imagename
Also, if you are using Docker Toolbox for Windows, ensure that the file you are mounting exists in that location before running the command. If you are still running into problems, I would also recommend restarting the virtual machine using
docker-machine restart default
I would use persistent storage for this; the Docker documentation at https://docs.docker.com/storage/ will explain much better than I can how to use it for your purpose.
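As a sketch of that approach, a named volume survives container updates, so settings written into it during setup are still there when a new container is started. The volume, directory, and image names here are placeholders:

```shell
# Hypothetical: keep the writable config in a named volume so it outlives
# the container. Named volumes hold directories, not single files, so the
# app would need to read its settings from a config directory.
docker volume create appsettings-data

docker run -d --name myapp -p 80:80 \
  -v appsettings-data:/app/config \
  imagename
```

Recreating the container from an updated image with the same -v flag reattaches the same volume, so the setup data is preserved.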
Hope it helps
Thanks #pressharp, this worked for me on Linux:
docker run -d \
  --name=name \
  -p 80:80 \
  -v /opt/dockertest2/appsettings.json:/app/appsettings.json \
  imagename

Docker container lamp/linode

Guys, I just installed Docker Toolbox on my Windows 10 PC.
The LAMP server is working fine, but I just wanted to know how I can access the www folder that was created by the Linode LAMP container.
It is accessible via the terminal, but how can I access it in a file browser so that I can create HTML files and run them?
I want to know how to access the /var/www folder that they mention in their tutorials on installing LAMP.
I tried creating a file in the Docker terminal using touch but could not access it.
Using docker volume ls you can find which volumes are being used by the container, and then find the location of a volume using docker inspect <volume_name>.
OR
You can inspect the container using docker inspect <container_name>. This will list the details of the container, where you will find the paths that are being used as volumes or mounts.
Usually on Windows, the files of internal Docker volumes are stored in C:\Users\Public\Documents\Hyper-V\Virtual hard disks.
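For example, the two steps above can be combined with an inspect format string to print just the mount information (the container name is a placeholder):

```shell
# List all volumes known to Docker
docker volume ls

# Show only the Mounts section of a container's configuration,
# i.e. which host paths or volumes back which container paths
docker inspect -f '{{ json .Mounts }}' my-lamp-container
```

The Source field in that output is the path on the Docker host (i.e. inside the Toolbox VM, not directly on the Windows filesystem).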

Docker - Install CI Server on a remote host

I have a remote host that is already running Ubuntu. I now want to create a Dockerfile that would help me run a Continuous Integration server like TeamCity on this remote host.
I understand that I create a Dockerfile from a base image like Ubuntu, but I do not need another Ubuntu filesystem on an Ubuntu host. How can I handle this situation?
If you need all the userspace files of Ubuntu, then this is simply how Docker operates: in order to promise that you can lift your container off an Ubuntu machine and run it on a different Linux, Docker keeps its own copy of everything above the kernel. This will be shared amongst every container based on Ubuntu, but it's still a couple of hundred megabytes of disk space.
If you don't need so much from Ubuntu, then you can start with a much smaller image such as busybox.
You could also create a fairly empty container image and map parts of your Ubuntu disk into it using the -v option. But then you won't have everything you need inside the container.
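A minimal sketch of that last approach, starting from a small base image and bind-mounting a host directory instead of shipping a full Ubuntu userspace (the host path is illustrative):

```shell
# Hypothetical: tiny busybox image plus a bind-mounted host directory.
# Only what is mounted in is visible; the rest of the host stays hidden.
docker run -i -t \
  -v /opt/teamcity-data:/data \
  busybox /bin/sh
```

In practice, for something like TeamCity, it is usually simpler to run the vendor's prebuilt image and accept the disk cost of its base layers, since those layers are shared between containers anyway.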

How to persist Docker data in HOST

Given a Docker database container, in this case a neo4j container, how can I persist the data and make sure that the next time I start a neo4j Docker image it points to my HOST database and not a new database?
I am using Docker on Windows, so boot2docker is being used. And I say database, but I am also wondering how to serve a directory containing a web application I am working on, so I don't have to commit all the changes to the image... I just want to edit a folder in my Windows environment and debug it using a Docker web server stack.
The easiest way would be to have a shared folder between your Windows host and the boot2docker VM (this post can help).
Then you just have to share that folder to your container using the -v option.
docker run -d -v /path/to/shared/folder/in/VM:/path/to/folder/in/container myimage /cmd
More info on how to share data between a container and the host.
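For the neo4j case specifically, a sketch assuming the official neo4j image (whose data directory is /data) and a shared folder visible inside the boot2docker VM; the host path is a placeholder:

```shell
# Hypothetical: persist the graph database outside the container so that a
# newly started container picks up the same data instead of an empty one.
docker run -d \
  --name neo4j \
  -p 7474:7474 \
  -v /c/Users/me/neo4j-data:/data \
  neo4j
```

Stopping this container and starting a new one with the same -v flag reuses the same database files on the host.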
