nexus3 oss deployment on docker - docker-volume

I would like to deploy Nexus3 OSS in a Docker container (standalone instance) on-prem. According to the documentation here: https://help.sonatype.com/repomanager3/product-information/system-requirements#SystemRequirements-FileSystems, it looks like only local storage or NFS4 storage is supported, but it doesn't mention anything about running in a Docker container. I am checking here to see whether other folks are in the same situation and what type of storage they use. I am thinking of using either an NFS4 volume created as a Docker volume, or an iSCSI volume mounted directly on the Docker host with /var/lib/docker pointed at it.
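For the NFS4-as-Docker-volume option, something like this is what I have in mind (the NFS server address and export path below are just placeholders):

# Create a Docker volume backed by the NFS4 export
docker volume create --driver local \
  --opt type=nfs --opt o=addr=nfs.example.com,rw,nfsvers=4 \
  --opt device=:/exports/nexus-data \
  nexus-data

# Run Nexus3 OSS against that volume
docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data sonatype/nexus3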
Any suggestions are really appreciated.
Thanks

Related

Are Bind Mounts and Host Volumes the same thing in Docker?

I have seen the terms "bind mount" and "host volume" used in various articles, but none of them mention whether they are the same thing. Judging by their function, they look pretty much identical. Can anyone say whether they are the same thing or not? If not, what is the difference?
Ref:
Docker Docs - Use bind mounts
https://blog.logrocket.com/docker-volumes-vs-bind-mounts/
They are different concepts.
As mentioned in bind mounts:
Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its absolute path on the host machine. By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.
And as mentioned in volumes:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts:
Volumes are easier to back up or migrate than bind mounts.
You can manage volumes using Docker CLI commands or the Docker API.
Volumes work on both Linux and Windows containers.
Volumes can be more safely shared among multiple containers.
Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
New volumes can have their content pre-populated by a container.
Volumes on Docker Desktop have much higher performance than bind mounts from Mac and Windows hosts.
A "bind mount" is when you let your container see and use a normal directory in a normal filesystem on your host. Changes made by programs running in the container will be visible in your host's filesystem.
A "volume" is a single file on your host that acts like a whole filesystem visible to the container. You can't normally see what's inside it from the host.
I was able to figure it out.
There are 3 types of storage in Docker.
1. Bind mounts (also known as host volumes).
2. Anonymous volumes.
3. Named volumes.
So bind mount = host volume. They are the same thing. "Host volume" seems to be an older, informal term, though, as it no longer appears in the Docker docs, but it can be seen in various articles published a year or two ago.
Examples for where it is referred to as "host volume":
https://docs.drone.io/pipeline/docker/syntax/volumes/host/
https://spin.atomicobject.com/2019/07/11/docker-volumes-explained/
The docs page Manage data in Docker is quite helpful:
Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.

Access SMB share through docker image or through docker host's connection to SMB share?

I have a service or three that needs access to the same SMB share. The service(s) is running inside a Docker container. I think my choices are:
Have the Docker container(s) where the service(s) is running mount the SMB share itself
Have the host of the Docker container(s) mount the SMB share and then share it with the Docker container(s) where the service(s) is running
Which is better from a best practices perspective (which should probably include security as a dimension)?
Am I missing an option?
Thanks!
In standard Docker, you should almost always mount the filesystem on the host and then use a bind mount to attach the mounted filesystem to a container.
Containers can't usually call mount(2), a typical image won't contain smbclient(1) or mount.cifs(8), and safely passing credentials into a container is tricky. It will be much easier to do the mount on the host, and you can use standard images against the mounted filesystem without having to customize them to add site-specific tools.
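As a minimal sketch of that approach on a Linux host (the server name, share, mount point, credentials file, and image are all placeholders):

# On the host: mount the SMB share once, keeping credentials out of the container
sudo mkdir -p /mnt/projects
sudo mount -t cifs //fileserver/projects /mnt/projects \
  -o credentials=/etc/smb-credentials,uid=1000,gid=1000

# Then bind-mount the already-mounted directory into any container that needs it
docker run -d --name app -v /mnt/projects:/data:ro my-image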
One way is to mount the SMB shares on the host system as normal, for example on Linux using mount and fstab. Afterwards you can use Docker volumes to add the SMB shares on your host system to your containers as volumes.
The advantages of using Docker volumes are explained in the Docker documentation.
More information about Docker volumes is in the Docker documentation:
https://docs.docker.com/storage/volumes/
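As a sketch of that variant, assuming the share is already mounted on the host at /mnt/smb_share (the path and names are placeholders), a named volume can be bound to that directory:

# Create a named volume backed by the host directory where the share is mounted
docker volume create --driver local \
  --opt type=none --opt o=bind --opt device=/mnt/smb_share \
  smb_data

# Containers on this host can now reuse it by name
docker run -d --name app -v smb_data:/data my-image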

Rex-Ray volume won't share with other EC2 servers

I have two EC2 servers and I wanted to create a volume from AWS EBS that would be available to both servers. So I used the REX-Ray plugin for this.
steps I did:
install
docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=* EBS_SECRETKEY=*
create volume
docker volume create -d rexray/ebs --name mongo_vol -o=volumeType=io1 -o=size=100 -o=iops=100
When I ran docker volume ls on the first EC2 server, it showed this:
DRIVER               VOLUME NAME
rexray/ebs:latest    External MongoDB Data
rexray/ebs:latest    MySQL
rexray/ebs:latest    Private MongoDB
rexray/ebs:latest    mongo_vol
But when I ran docker volume ls on my second server, it showed this:
DRIVER               VOLUME NAME
local                mongo_vol
I have not changed the driver, but the volume name shows up on both servers (with the local driver on the second one).
I could not find anything related to this on the internet when doing my research.
Can anyone give me an idea of how to solve this?
I had an issue like this. REX-Ray makes the EBS volume accessible to both servers; I think you have installed REX-Ray on only one server.
Install REX-Ray on your other server as well.
That alone won't fix your issue. Next, remove the local driver volume on your other server.
Before removing the volume, make a backup or snapshot of it, just in case.
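A sketch of those steps on the second server (access keys elided, as in the question):

# Install the same REX-Ray plugin on the second EC2 instance
docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=* EBS_SECRETKEY=*

# Remove the local-driver volume that is shadowing the REX-Ray one
# (back it up or snapshot it first if it holds data)
docker volume rm mongo_vol

# The volume should now be listed with the rexray/ebs driver
docker volume ls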
EBS volumes can only be attached to one EC2 instance at a time. If you need storage that is accessible to both servers simultaneously, you can use EFS and the REX-Ray EFS driver.

Docker Volume Bound to Container from Shared Location

I am trying to make a centralized location for PostgreSQL data and use it from multiple containers on the same Docker network. For this I have to use a shared location as the volume, for example something like:
docker run -v 0.0.0.0\data:/var/lib/postgresql/data
How can I specify the shared location as the volume's host path and link it to the folder bound inside the container?
Environment details:
Ubuntu 17.10
Docker 17
Any help or guidance to achieve this would be appreciated.
In Swarm you want to use a volume driver. That way (1) you don't have to maintain manual NFS mounts on each node's host OS, and (2) you can ensure that the volume a specific service needs is connected to whichever host the service is scheduled to run on.
The Docker Store has a list of volume plugin drivers for various storage solutions.
If you're using cloud storage or simple NFS, REX-Ray is likely the plugin you want.
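As a rough sketch of what that looks like for a service, assuming a REX-Ray plugin (for example rexray/ebs) is installed on every Swarm node; the volume and service names are placeholders:

# --mount lets the service name the volume driver that should back its volume,
# so the data is attached to whichever node the task is scheduled on
docker service create --name pg \
  --mount type=volume,source=pgdata,destination=/var/lib/postgresql/data,volume-driver=rexray/ebs \
  postgres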

When does a running Docker container run out of disk space?

I've read through so much documentation, and I'm still not sure how this really works. It's a bit of a Docker vs. VM question.
If I start a VM with a 2GB hard drive and fill its disk with files, I know it runs out after 2GB of files.
Does Docker work the same way? I would assume so. But from what I've read about "UnionFS" it seems like it does not run out of space.
So then why do Docker "volumes" exist? Is that automagically expanding Docker disk space transient in some way? Will the files I've saved inside of my Docker container disappear after a reboot? How about after restarting the container?
Docker's disk usage (1.12+) depends on the Docker storage driver and possibly the physical file system in use.
TL;DR Storage will be shared between all containers and local volumes unless you are using the devicemapper storage driver or have set a limit via docker run --storage-opt size=X when running on the zfs or btrfs drivers. Docker 1.13+ also supports a quota size with overlay2 on an xfs backed file system.
Containers
For all storage drivers except devicemapper, container and local volume storage is limited by the underlying file system hosting /var/lib/docker and its subdirectories. A container can fill the shared file system, at which point other containers can't write any more.
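A quick way to see this shared pool on a host (Docker 1.13+; paths assume a default Linux install):

# The free space on the filesystem backing /var/lib/docker is the real limit
df -h /var/lib/docker

# Break down how much of that images, containers and local volumes consume
docker system df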
When using the devicemapper driver, a default volume size of 100G is "thin allocated" for each container. The default can be overridden with the daemon option --storage-opt dm.basesize, or set on a per-container basis with docker run --storage-opt size=2G.
The same per container quota support is available for the zfs and btrfs drivers as both file systems provide simple built in support for creating volumes with a size or quota.
The overlay2 storage driver on xfs supports per-container quotas as of Docker 1.13. This will probably be extended to ext4 when new 4.5+ kernels become standard/common and ext4 and xfs quotas share a common API.
Volumes
Docker volumes are separate from a container and can be viewed as a persistent storage area for an ephemeral container.
Volumes are stored separately from Docker storage, and have their own plugins for different backends. local is the default backend, which writes data to /var/lib/docker/volumes, so it is held outside of the container's storage and any quota system.
Other volume plugins could be used if you wanted to set per volume limits on a local file system that supports it.
Containers will keep their own file state over a container restart and reboot, until you docker rm the container. Files in a volume will survive a container removal and can be mounted on creation of the new container.
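A small sketch of that difference (the image and volume names are arbitrary):

# Data written into a named volume outlives the container that wrote it
docker volume create appdata
docker run --rm -v appdata:/data alpine sh -c 'echo hello > /data/file'

# A brand-new container sees the same file, even though the first one is gone
docker run --rm -v appdata:/data alpine cat /data/file   # prints "hello"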
